\section{Introduction} In this paper, we first construct nonholonomic systems of $n$ homogeneous balls $\mathbf B_1,\dots,\mathbf B_n$ with centers $O_1,\dots,O_n$ and with the same radius $r$ that are rolling without slipping around a fixed sphere $\mathbf S_0$ with center $O$ and radius $R$. We assume that a dynamically nonsymmetric sphere $\mathbf S$ of radius $R+2r$ whose center coincides with the center $O$ of the fixed sphere $\mathbf S_0$ rolls without slipping over the moving balls $\mathbf B_1,\dots,\mathbf B_n$. The rolling of the balls $\mathbf B_i$ and of the sphere $\mathbf S$ is considered by inertia, in the absence of external forces. We refer to this system as a \emph{spherical ball bearing} (see Figure 1). As the second task, we consider the limit when the radius $R$ tends to infinity. In that way, we obtain a corresponding planar problem consisting of $n$ homogeneous balls $\mathbf B_1,\dots,\mathbf B_n$ with centers $O_1,\dots,O_n$ and the same radius $r$ that are rolling without slipping over a fixed plane $\Sigma_0$, and a moving plane $\Sigma$ that moves without slipping over the homogeneous balls. We refer to this system as a \emph{planar ball bearing} (see Figure 2). Although the rolling ball problems are very well studied (see \cite{BM2002, BMB2013, BMT2014, BMbook}), the spherical and planar bearing problems seem not to have been considered before. There are two nonholonomic systems which are close to the spherical ball bearings. One is the so-called spherical support system, introduced by Fedorov in \cite{F1}. It describes the rolling without slipping of a dynamically nonsymmetric sphere $\mathbf S$ over $n$ homogeneous balls $\mathbf B_1, \dots,\mathbf B_n$ of possibly different radii, but with fixed centers. The second one is the rolling of a homogeneous ball $\mathbf B$ over a dynamically asymmetric sphere $\mathbf S$, introduced by Borisov, Kilin, and Mamaev in \cite{BKM}. They considered in \cite{BKM} both situations: when the center of $\mathbf S$ is fixed, and when it is not. In Section \ref{sec2} we define the spherical ball bearing system: the configuration space $Q$, the nonholonomic distribution $\mathcal D\subset TQ$, and the Lagrangian that coincides with the kinetic energy of the system. The kinetic energy and the distribution are invariant with respect to an appropriate action of the Lie group $SO(3)^{n+1}$, and the system can be reduced to $\mathcal M=\mathcal D/SO(3)^{n+1}$. In Section \ref{sec3} we derive the equations of motion of the reduced spherical ball bearing system in terms of the reaction forces and list some first integrals in Propositions \ref{prva} and \ref{go}. Proposition \ref{prva} implies that the centers $O_1,\dots,O_n$ are at rest relative to each other. Thus, there are no collisions of the balls $\mathbf B_1,\dots,\mathbf B_n$.
\ \begin{pspicture}(12,7) \pscircle[linecolor=black,fillstyle=solid, fillcolor=gray!20, linestyle=dashed](3.5,3){0.88} \psellipticarc[linestyle=dashed](3.5,3)(0.88,0.2){-60}{180} \psellipticarc(3.5,3)(0.88,0.2){180}{300} \psdot[dotsize=2pt](3.5,3)\uput[0](2.7,3.2){$O_1$} \psellipticarc[linecolor=black](3.5,3)(0.895,0.895){102}{304} \psdot[dotsize=2pt](4,3.27)\uput[0](4,3.27){$A_1$} \psdot[dotsize=2pt](3,2.73)\uput[0](3,2.73){$B_1$} \pscircle[linecolor=black, fillstyle=solid, fillcolor=gray!20](7,3){1} \psellipticarc[linestyle=dashed](7,3)(1,0.2){0}{180} \psellipticarc(7,3)(1,0.2){180}{360} \psdot[dotsize=2pt](7,3)\uput[0](7,3.2){$O_2$} \psdot[dotsize=2pt](7.8,2.5)\uput[0](7.8,2.5){$B_2$} \psdot[dotsize=2pt](6.2,3.5)\uput[0](6.2,3.5){$A_2$} \pscircle[linecolor=black, fillstyle=solid, fillcolor=gray!20, linestyle=dashed](5.5,5){0.8} \psellipticarc[linestyle=dashed](5.5,5)(.8,0.2){0}{180} \psellipticarc[linestyle=dashed](5.5,5)(0.8,0.2){180}{360} \psdot[dotsize=2pt](5.5,5)\uput[0](5.5,5.2){$O_3$} \psdot[dotsize=2pt](5.38,4.4)\uput[0](5.38,4.6){$A_3$} \psdot[dotsize=2pt](5.62,5.6)\uput[0](5.52,5.8){$B_3$} \pscircle[linecolor=black, linestyle=dashed](5.3,4){2} \pscircle[linecolor=black](5.3,4){3} \psellipticarc[linecolor=black](5.3,4)(2.015,2.015){0}{300} \psdot[dotsize=2pt](5.3,4)\uput[0](5.3,4){$O$} \psellipticarc[linestyle=dashed](5.3,4)(2,0.2){-10}{180} \psellipticarc[](5.3,4)(2,0.2){180}{350} \uput[0](3.3,4.5){$\mathbf S_0$} \uput[0](2.6,5.5){$\mathbf S$} \uput[0](2,0.5){{\sc Figure 1}. Spherical ball bearing for $n=3$} \end{pspicture} In Section \ref{sec4} we perform the second reduction by fixing the values of the $n$ first integrals from Proposition \ref{go}. We obtain the closed system of equations of motion of the reduced spherical ball bearing system on the space $\mathcal N=\mathbb{ R}^3\times (S^2)^n$ in Theorem \ref{Glavna}. The complete set of non-reduced equations of motion is given in Corollary \ref{Originalne}. Finally, we prove that the spherical ball bearing problem has an invariant measure in Theorem \ref{mera}, in Section \ref{sec5}. The question of integrability in the spherical ball bearing problem will be studied in a separate paper. In Section \ref{sec6} we consider the planar ball bearing problem. To simplify notation, we consider the case $n=3$ and refer to the system as the \emph{three balls planar bearing problem}. However all the statements and considerations from Section \ref{sec6} hold for arbitrary $n$ in a straightforward manner. For general $n$, the configuration space and the nonholonomic distribution $\mathcal D$ are of the same dimensions as in the spherical ball bearing problem but the description of the system is slightly different. For $n=3$, we derive the equations of motion on $\mathcal D/SO(3)\times SO(3)\times SO(3)$, see Theorem \ref{Glavna2}. We perform a second reduction to a space $\mathcal Q\subset \mathbb{ R}^6$ defined by an algebraic inequality. We prove that the planar three balls bearing problem on $\mathcal Q$ has an invariant measure and four independent first integrals. Therefore, it is integrable according to the Euler-Jacobi theorem, see Theorem \ref{mera2}. \section{Rolling of a dynamically nonsymmetric sphere over $n$ moving homogeneous balls and a fixed sphere}\label{sec2} We consider the following spherical ball bearing problem: $n$ homogeneous balls $\mathbf B_1,\dots,\mathbf B_n$ with centers $O_1,...,O_n$ and the same radius $r$ roll without slipping around a fixed sphere $\mathbf S_0$ with center $O$ and radius $R$. 
A dynamically nonsymmetric sphere $\mathbf S$ of radius $R+2r$ whose center coincides with the center $O$ of the fixed sphere $\mathbf S_0$ rolls without slipping over the moving balls $\mathbf B_1,\dots,\mathbf B_n$. For $n \ge 4$ there are initial positions of the balls $\mathbf B_1,\dots,\mathbf B_n$ that force the centre of the moving sphere $\mathbf S$ to coincide with the centre $O$ of the fixed sphere $\mathbf S_0$. Let us point out that the configuration of the balls remains congruent to the initial one during the time evolution. In order to include all possible initial positions for arbitrary $n$, the condition that $O$ coincides with the centre of the sphere $\mathbf S$ is assumed to be a holonomic constraint. Let \[ O\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3, \qquad O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3, \qquad O_i\vec{\mathbf e}^i_1,\vec{\mathbf e}^i_2,\vec{\mathbf e}^i_3, \qquad i=1,\dots,n \] be positively oriented reference frames rigidly attached to the spheres $\mathbf S_0$, $\mathbf S$, and the balls $\mathbf B_i$, $i=1,\dots,n$, respectively. By $\mathbf g,\mathbf g_i\in SO(3)$ we denote the matrices that map the moving frames $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$ and $O_i\vec{\mathbf e}^i_1,\vec{\mathbf e}^i_2,\vec{\mathbf e}^i_3$ to the fixed frame $O\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3$: \[ \mathbf g_{jk}=\langle \vec{\mathbf e}^0_j,\vec{\mathbf e}_k \rangle, \qquad \mathbf g_{i,jk}=\langle \vec{\mathbf e}^0_j,\vec{\mathbf e}^i_k \rangle, \qquad j,k=1,2,3, \qquad i=1,\dots,n. \] We apply the standard isomorphism between the Lie algebras $(so(3),[\cdot,\cdot])$ and $(\mathbb{ R}^3,\times)$ given by \begin{equation}\label{izomorfizam} a_{ij}=-\varepsilon_{ijk}a_k, \qquad i,j,k=1,2,3. \end{equation} The skew-symmetric matrices \[ \omega=\dot{\mathbf g}{\mathbf g}^{-1}, \qquad \omega_i=\dot{\mathbf g}_i \mathbf g_i^{-1} \] correspond to the angular velocities $\vec{\omega}$, $\vec{\omega}_i$ of the sphere $\mathbf S$ and the $i$-th ball $\mathbf B_i$ in the fixed reference frame $O\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3$ attached to the sphere $\mathbf S_0$. The matrices \[ \Omega=\mathbf g^{-1}\dot{\mathbf g}=\mathbf g^{-1}\omega\mathbf g, \qquad W_i=\mathbf g^{-1}_i\dot{\mathbf g}_i=\mathbf g^{-1}_i\omega_i\mathbf g_i \] correspond to the angular velocities $\vec{\Omega}$, $\vec{W}_i$ of $\mathbf S$ and $\mathbf B_i$ in the frames $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$ and $O_i\vec{\mathbf e}^i_1,\vec{\mathbf e}^i_2,\vec{\mathbf e}^i_3$ attached to the sphere $\mathbf S$ and the balls $\mathbf B_i$, respectively. We have \[ \vec{\omega}=\mathbf g\vec{\Omega}, \qquad \vec{\omega}_i=\mathbf g_i \vec{W}_i. \] Let $I$ be the inertia operator of the outer sphere $\mathbf S$. We choose the moving frame $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$ such that $O\vec{\mathbf e}_1$, $O\vec{\mathbf e}_2$, $O\vec{\mathbf e}_3$ are the principal axes of inertia: $I=\mathrm{diag}(A,B,C)$. Let $\mathrm{diag}(I_i,I_i,I_i)$ and $m_i$ be the inertia operator and the mass of the $i$-th ball $\mathbf B_i$.
Then the configuration space and the kinetic energy of the problem are given by: \begin{align*} Q=&SO(3)^{n+1}\times (S^2)^{n}\{\mathbf g,\mathbf g_1,\dots,\mathbf g_n,\vec\gamma_1,\dots,\vec\gamma_n\},\\ T=&\frac12 \langle I\vec\Omega,\vec\Omega\rangle+\frac12 \sum_{i=1}^n I_i \langle \vec{W}_i,\vec{W}_i\rangle+\frac12 \sum_{i=1}^n m_i \langle \vec{v}_{O_i},\vec{v}_{O_i}\rangle\\ =&\frac12 \langle I\vec\Omega,\vec\Omega\rangle+\frac12 \sum_{i=1}^n I_i \langle \vec{\omega}_i,\vec{\omega}_i\rangle+\frac12 \sum_{i=1}^n m_i \langle \vec{v}_{O_i},\vec{v}_{O_i}\rangle. \end{align*} Here $\vec{\gamma}_i$ is the unit vector \[ \vec{\gamma}_i=\frac{\overrightarrow{OO_i}}{|\overrightarrow{OO_i}|} \] determining the position $O_i$ of the centre of the $i$-th ball $\mathbf B_i$ and $\vec{v}_{O_i}=(R+r)\dot{\vec\gamma}_i$ is its velocity, $i=1,\dots,n$. The kinetic energy plays the role of the Lagrangian. Let us denote the contact points of the balls $\mathbf B_1,\dots,\mathbf B_n$ with the spheres $\mathbf S_0$ and $\mathbf S$ by $A_1,\dots, A_n$ and $B_1,\dots,B_n$, respectively. The condition that the rolling of the balls $\mathbf B_1,\dots,\mathbf B_n$ and of the sphere $\mathbf S$ is without slipping leads to the nonholonomic constraints: \[ \vec{v}_{O_i}+\vec{\omega}_i\times\overrightarrow{O_iA_i}=0, \qquad \vec{v}_{O_i}+\vec{\omega}_i\times\overrightarrow{O_iB_i}=\vec{\omega}\times\overrightarrow{OB_i}, \qquad i=1,...,n, \] that is, \begin{equation}\label{VEZE} \vec{v}_{O_i}=r\vec{\omega}_i\times\vec{\gamma}_i, \qquad \vec{v}_{O_i}=(R+2r)\vec{\omega}\times\vec{\gamma}_i-r\vec{\omega}_i\times\vec{\gamma}_i,\qquad i=1,...,n. \end{equation} The dimension of the configuration space $Q$ is $5n+3$. There are $4n$ independent constraints in \eqref{VEZE}, defining a nonintegrable distribution $\mathcal D\subset TQ$. Therefore, the dimension of the vector subspaces of admissible velocities $\mathcal D_q\subset T_q Q$ is $n+3$, $q\in Q$. The phase space of the system has the dimension $6n+6$, which is the dimension of the bundle $\mathcal D$ as a submanifold of $TQ$. The equations of motion of the spherical ball bearing problem are given by the Lagrange-d'Alembert equations \cite{AKN, Bloch} \begin{equation}\label{L1} \delta T=\big(\frac{\partial T}{\partial q}-\frac{d}{dt}\frac{\partial T}{\partial \dot q},\delta q\big)=0, \quad \text{for all virtual displacements} \quad \delta q\in\mathcal D_q. \end{equation} Instead of using the Lagrange-d'Alembert equations, below we will derive the equations of motion directly from the fundamental laws of classical mechanics, using vector notation. \begin{remark} We will show in Proposition \ref{prva} that if the initial conditions are chosen such that the distances between $O_i$ and $O_j$ are all greater than $2r$, $1 \le i< j \le n$, then the balls will not collide in the course of the motion. This is the reason why we do not assume the additional one-sided constraints \begin{equation}\label{oblast} \vert \vec\gamma_i-\vec\gamma_j \vert \ge \frac{2r}{r+R}, \qquad 1\le i<j\le n.
\end{equation} \end{remark} The kinetic energy and the constraints are invariant with respect to the $SO(3)^{n+1}$--action defined by \begin{equation}\label{transf} (\mathbf g,\mathbf g_1,\dots,\mathbf g_n,\vec\gamma_1,\dots,\vec\gamma_n)\longmapsto (\mathbf a\mathbf g,\mathbf a\mathbf g_1{\mathbf a}_1^{-1},\dots, \mathbf a\mathbf g_n{\mathbf a}_n^{-1},\mathbf a\vec\gamma_1,\dots,\mathbf a\vec\gamma_n), \end{equation} $\mathbf a,\mathbf a_1,\dots,\mathbf a_n\in SO(3)$, representing a freedom in the choice of the reference frames \[ O\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3, \qquad O_i\vec{\mathbf e}^i_1,\vec{\mathbf e}^i_2,\vec{\mathbf e}^i_3, \qquad i=1,\dots,n. \] Indeed, for the extension of the transformation \eqref{transf} to the tangent bundle $TQ$ we have \begin{align*} \Omega=\mathbf g^{-1}\dot{\mathbf g} &\longmapsto \big(\mathbf a\mathbf g\big)^{-1}\mathbf a\dot{\mathbf g}=\Omega, \\ \omega=\dot{\mathbf g}{\mathbf g}^{-1} &\longmapsto \mathbf a\dot{\mathbf g}\big(\mathbf a{\mathbf g}\big)^{-1}=\mathbf a\omega\mathbf a^{-1},\\ \omega_i=\dot{\mathbf g}_i \mathbf g_i^{-1}&\longmapsto \mathbf a \dot{\mathbf g}_i \mathbf a_i^{-1} \big(\mathbf a\mathbf g_i\mathbf a_i^{-1}\big)^{-1}=\mathbf a\omega_i\mathbf a^{-1},\\ \dot{\vec\gamma}&\longmapsto \mathbf a \dot{\vec\gamma}, \end{align*} and, therefore, \begin{equation}\label{transf1} \vec{\Omega}\longmapsto \vec{\Omega}, \qquad \vec{\omega}\longmapsto \mathbf a\vec{\omega}, \qquad \vec{\omega}_i\longmapsto \mathbf a\vec{\omega}_i, \qquad \vec{v}_{O_i}\longmapsto \mathbf a\vec{v}_{O_i}. \end{equation} It is clear that the kinetic energy and the constraints are invariant with respect to the transformation \eqref{transf1}. Also, note that \eqref{transf} does not change the vectors $\vec{\omega}_i$, $\vec{\gamma}_i$, $\vec{v}_{O_i}$ written in the moving frame $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$: \begin{align*} \vec{\Omega}_i=\mathbf g^{-1}\vec{\omega_i} &\longmapsto \big(\mathbf a\mathbf g\big)^{-1}\mathbf a\vec{\omega_i}=\vec{\Omega}_i, \\ \vec{\Gamma}_i=\mathbf g^{-1} \vec{\gamma}_i &\longmapsto \big(\mathbf a\mathbf g\big)^{-1}\mathbf a \vec{\gamma}_i=\vec{\Gamma}_i \quad (\text{implying }\dot{\vec\Gamma}_i\longmapsto \dot{\vec\Gamma}_i), \\ \vec{V}_{O_i}=\mathbf g^{-1}\vec{v}_{O_i}=(R+r)\mathbf g^{-1}\dot{\vec\gamma}_i &\longmapsto (R+r)\big(\mathbf a\mathbf g\big)^{-1}\mathbf a\dot{\vec\gamma}_i=\mathbf g^{-1}\vec{v}_{O_i}=\vec{V}_{O_i}. \end{align*} Thus, for the coordinates in the space $(TQ)/SO(3)^{n+1}$ we can take the angular velocities and the unit position vectors in the reference frame attached to the sphere $\mathbf S$: \[ (TQ)/SO(3)^{n+1}\cong{\mathbb R}^{3(n+1)}\times (TS^2)^n \{\vec\Omega,\vec{\Omega}_1,\dots,\vec{\Omega}_n,\dot{\vec{\Gamma}}_1,\dots,\dot{\vec{\Gamma}}_n,\vec{\Gamma}_1,\dots,\vec{\Gamma}_n\}. \] In the moving reference frame $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$, the constraints become: \begin{align} \label{veze1} \vec{V}_{O_i}&=(R+2r)\vec{\Omega}\times\vec{\Gamma}_i-r\vec{\Omega}_i\times\vec{\Gamma}_i,\\ \label{veze1*}\vec{V}_{O_i}&=r\vec{\Omega}_i\times\vec{\Gamma}_i, \qquad\qquad\qquad i=1,\dots,n, \end{align} defining the \emph{reduced phase space} $\mathcal M=\mathcal D/SO(3)^{n+1}\subset (TQ)/SO(3)^{n+1}$ of dimension $3n+3$. Since both the kinetic energy and the constraints are invariant with respect to the $SO(3)^{n+1}$--action \eqref{transf}, the equations of motion \eqref{L1} are also $SO(3)^{n+1}$--invariant. 
Thus, they induce a well-defined system on the reduced phase space $\mathcal M$. \section{The kinematic and momentum equations in the moving frame}\label{sec3} The time derivative of $\vec{\Gamma}_i$ can be directly extracted from the constraints as follows. \begin{lem} The kinematic part of the equations of motion of the spherical ball bearing system is: \begin{equation}\label{game1} \dot{\vec{\Gamma}}_i=\frac{R}{2R+2r}\vec{\Gamma}_i\times\vec{\Omega}, \qquad i=1,\dots,n. \end{equation} \end{lem} \begin{proof} For the moment, let us consider the fixed reference frame $O\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3$. One has \[ \dot{\overrightarrow{OO}}_i+\vec{\omega}_i\times\overrightarrow{O_iA_i}=0. \] Therefore, $(R+r)\dot{\vec{\gamma}}_i-r\vec{\omega}_i\times\vec{\gamma}_i=0$, or equivalently \begin{equation}\label{eq:gamma_i} \dot{\vec{\gamma}}_i=\frac{r}{R+r}\vec{\omega}_i\times\vec{\gamma}_i. \end{equation} The equation \eqref{eq:gamma_i} in the moving reference frame $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$ has the form \[ \dot{\vec{\Gamma}}_i+\vec{\Omega}\times\vec{\Gamma}_i=\frac{r}{R+r}\vec{\Omega}_i\times\vec{\Gamma}_i. \] Thus, we get \begin{equation}\label{game} \dot{\vec{\Gamma}}_i=\big(\frac{r}{R+r}\vec{\Omega}_i-\vec{\Omega}\big)\times\vec{\Gamma}_i. \end{equation} From the constraints \eqref{veze1}, \eqref{veze1*}, we obtain \begin{equation}\label{veze2} \vec{\Omega}_i\times\vec{\Gamma}_i=\frac{R+2r}{2r}\vec{\Omega}\times\vec{\Gamma}_i, \qquad i=1,\dots,n. \end{equation} Finally, using \eqref{veze2}, the equations \eqref{game} can be written in the more convenient form \eqref{game1}. \end{proof} As a consequence, we have: \begin{prop}\label{prva} The following functions are first integrals of motion: \[ \langle\vec{\Gamma}_i,\vec{\Gamma}_j\rangle=\gamma_{ij}=const, \qquad i,j=1,\dots,n. \] \end{prop} \begin{proof} By a direct differentiation, we get \begin{align*} \frac{d}{dt}\langle\vec{\Gamma}_i,\vec{\Gamma}_j\rangle &= \langle\dot{\vec{\Gamma}}_i,\vec{\Gamma}_j\rangle+ \langle\vec{\Gamma}_i,\dot{\vec{\Gamma}}_j\rangle\\ &=\frac{R}{2R+2r}\big(\langle \vec{\Gamma}_i\times\vec{\Omega},\vec{\Gamma}_j\rangle+\langle\vec{\Gamma}_j\times\vec{\Omega},\vec{\Gamma}_i\rangle\big)=0. \end{align*} \end{proof} In other words, the centers $O_i$ of the homogeneous balls $\mathbf B_i$ are at rest relative to each other. In particular, since $\langle \vec\gamma_i,\vec\gamma_j\rangle=\langle \vec\Gamma_i,\vec\Gamma_j\rangle$, the interior of the region \eqref{oblast} is invariant under the flow of the system. Next, let $\vec{\mathbf F}_{B_i}$ and $\vec{\mathbf F}_{A_i}$ be the reaction forces that act on the ball $\mathbf B_i$ at the points $B_i$ and $A_i$, respectively. The reaction force at the point $B_i$ on the sphere $\mathbf S$ is then $-\vec{\mathbf F}_{B_i}$.
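Before turning to the dynamical part, we note that the kinematic equations \eqref{game1} and Proposition \ref{prva} can be illustrated by a direct numerical experiment. The following short Python sketch is included for illustration only; the radii, the time step, and the sample angular velocity $\vec\Omega(t)$ are hypothetical choices and are not part of the model. It integrates \eqref{game1} for an arbitrary $\vec\Omega(t)$ and checks that the Gram matrix of the vectors $\vec\Gamma_i$ is preserved up to the integration error.
\begin{verbatim}
import numpy as np

# Hypothetical values, for illustration only.
R, r = 2.0, 0.5
eps = R / (2 * R + 2 * r)            # the coefficient R/(2R+2r) in (game1)

def Omega(t):
    # an arbitrary time-dependent angular velocity of the sphere S
    return np.array([np.sin(t), np.cos(2 * t), 0.3])

def rhs(t, G):
    # kinematic equations: dGamma_i/dt = eps * Gamma_i x Omega (rows of G)
    return eps * np.cross(G, Omega(t))

def rk4_step(t, G, h):
    k1 = rhs(t, G); k2 = rhs(t + h / 2, G + h / 2 * k1)
    k3 = rhs(t + h / 2, G + h / 2 * k2); k4 = rhs(t + h, G + h * k3)
    return G + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# three sample unit vectors Gamma_1, Gamma_2, Gamma_3 (rows)
G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.0, 0.8]])
gram0 = G @ G.T
t, h = 0.0, 1e-3
for _ in range(5000):
    G = rk4_step(t, G, h); t += h
print(np.max(np.abs(G @ G.T - gram0)))   # ~1e-13: <Gamma_i,Gamma_j> preserved
\end{verbatim}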
By using the laws of change of angular momentum and momentum of a rigid body in the moving reference frame for the balls $\mathbf B_i$ and the sphere $\mathbf S$ (e.g., see \cite{AKN}), we get: \begin{lem} The dynamical part of the equations of motion of the spherical ball bearing system is: \begin{align} \label{jednacine1} I_i\dot{\vec{\Omega}}_i &=I_i\vec{\Omega}_i\times \vec{\Omega} + r\vec{\Gamma}_i\times(\vec{\mathbf F}_{B_i}-\vec{\mathbf F}_{A_i}),\\ \label{jednacine2} m_i\dot{\vec{V}}_{O_i} &=m_i\vec{V}_{O_i} \times \vec{\Omega}+\vec{\mathbf F}_{B_i}+\vec{\mathbf F}_{A_i}, \qquad\qquad\qquad i=1,...,n,\\ \label{jednacine3} I\dot{\vec{\Omega}}&=I\vec{\Omega}\times \vec{\Omega} -\sum_{i=1}^{n}(R+2r)\vec{\Gamma}_i\times\vec{\mathbf F}_{B_i}. \end{align} \end{lem} For $n=1$ and in the absence of the interior fixed sphere $\mathbf S_0$, i.e., $\vec{\mathbf F}_{A_i}=0$, see \cite{BKM}. We still need to calculate the torques of the reaction forces. Prior to that, we formulate and prove the following important statement. \begin{prop}\label{go} The projections of the angular velocities $\vec{\Omega}_i$ onto the directions $\vec{\Gamma}_i$ are first integrals of motion: \[ \langle\vec{\Omega}_i,\vec{\Gamma}_i\rangle=c_i=const, \qquad i=1,...,n. \] \end{prop} \begin{proof} From \eqref{jednacine1} and \eqref{game} we get \begin{align*} \frac{d}{dt}\langle\vec{\Omega}_i,\vec{\Gamma}_i\rangle =&\langle\dot{\vec{\Omega}}_i,\vec{\Gamma}_i\rangle+\langle\vec{\Omega}_i,\dot{\vec{\Gamma}}_i\rangle\\ = &\langle\vec{\Omega}_i\times\vec{\Omega}, \vec{\Gamma}_i\rangle+\langle \frac{r}{I_i}\vec{\Gamma}_i\times(\vec{\mathbf F}_{B_i}-\vec{\mathbf F}_{A_i}),\vec\Gamma_i \rangle\\ &+\langle\vec{\Omega}_i,\frac{r}{R+r}\vec{\Omega}_i\times \vec{\Gamma}_i\rangle -\langle\vec{\Omega}_i,\vec{\Omega}\times \vec{\Gamma}_i\rangle=0. \end{align*} \end{proof} \section{The reduced system}\label{sec4} From the constraints written in the moving frame \eqref{veze1}, \eqref{veze1*}, \eqref{veze2}, we get \[ \langle\vec{\Omega}\times\vec{\Gamma}_i, \vec{\Omega}_i\rangle=0. \] This means that the vectors $\vec{\Gamma}_i,\ \vec{\Omega},\ \vec{\Omega}_i$ are coplanar. Moreover, we obtain: \begin{equation}\label{omegai} \vec{\Omega}_i=\langle\vec{\Gamma}_i,\vec{\Omega}_i\rangle\vec{\Gamma}_i+\frac{R+2r}{2r}\vec{\Omega}-\frac{R+2r}{2r}\langle\vec{\Gamma}_i,\vec{\Omega}\rangle\vec{\Gamma}_i. \end{equation} Further, from Proposition \ref{go}, we get that the reduced phase space $\mathcal M=\mathcal D/SO(3)^{n+1}$ is foliated into $2n+3$--dimensional invariant varieties \[ \mathcal M_c: \qquad \langle\vec{\Omega}_i,\vec{\Gamma}_i\rangle=c_i=const, \qquad i=1,...,n. \] On the invariant variety $\mathcal M_c$, the vector-functions $\vec{\Omega}_i$ can be uniquely expressed as functions of $\vec{\Omega}$, $\vec{\Gamma}_i$ using the equation \eqref{omegai}: \begin{equation}\label{omegai1} \vec{\Omega}_i=c_i\vec{\Gamma}_i+\frac{R+2r}{2r}\vec{\Omega}-\frac{R+2r}{2r}\langle\vec{\Gamma}_i,\vec{\Omega}\rangle\vec{\Gamma}_i. \end{equation} Hence, $\vec\Omega$ determines all the velocities of the system on $\mathcal M_c$, and $\mathcal M_c$ is diffeomorphic to the \emph{second reduced phase space} \[ \mathcal N=\mathbb{ R}^3\times \big(S^2\big)^{n}\{\vec\Omega,\vec\Gamma_1,\dots,\vec\Gamma_n\}. \] This can be seen as follows.
Consider the natural projection \begin{align*} &\pi\colon (TQ)/SO(3)^{n+1}\cong{\mathbb R}^{3(n+1)}\times (TS^2)^n \to \mathcal N,\\ &\pi(\vec\Omega,\vec{\Omega}_1,\dots,\vec{\Omega}_n,\dot{\vec{\Gamma}}_1,\dots,\dot{\vec{\Gamma}}_n,\vec{\Gamma}_1,\dots,\vec{\Gamma}_n)=(\vec\Omega,\vec{\Gamma}_1,\dots,\vec{\Gamma}_n), \end{align*} and let $\pi_c$ be the restriction of $\pi$ to $\mathcal M_c \subset \mathcal M \subset (TQ)/SO(3)^{n+1}$. Then the projection \[ \pi_c\colon \mathcal M_c\longrightarrow \mathcal N \] is a bijection. Thus, instead of the derivation of the torques of all reaction forces in \eqref{jednacine1} and \eqref{jednacine3}, it is sufficient to find the torque in the equation \eqref{jednacine3} on a given invariant variety $\mathcal M_c$. To simplify the equations \eqref{game1} and \eqref{omegai1}, we introduce the parameters \begin{equation}\label{parametri} \varepsilon=\frac{R}{2R+2r} \qquad \text{and} \qquad \delta=\frac{R+2r}{2r}. \end{equation} We define the \emph{modified operator of inertia} $\mathbf I$ as \begin{equation}\label{modI} \begin{aligned} \mathbf I=I+\delta^2\sum_{i=1}^n(I_i+m_ir^2)\mathrm{pr}_i, \end{aligned} \end{equation} where $\mathrm{pr}_i\colon \mathbb{ R}^3 \to \vec{\Gamma}_i^\perp$ is the orthogonal projection to the plane orthogonal to $\vec{\Gamma}_i$. We set \begin{align} \label{m1}\vec{M}=&\, \mathbf I\vec\Omega= I\vec\Omega+\delta^2\sum_{i=1}^n(I_i+m_ir^2)\vec{\Omega}-\delta^2\sum_{i=1}^n(I_i+m_ir^2)\langle\vec{\Gamma}_i,\vec{\Omega}\rangle\vec{\Gamma}_i,\\ \label{n1}\vec{N}=&\, \delta\sum_{i=1}^n I_ic_i\vec{\Gamma}_i. \end{align} \begin{thm}\label{Glavna} The reduction of the spherical ball bearing problem to $\mathcal M_c\cong \mathcal N$ is described by the equations \begin{align} \label{red1} &\frac{d}{dt}{\vec{M}}=\vec{M}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega},\\ \label{red2} &\frac{d}{dt}\vec\Gamma_i=\varepsilon\vec\Gamma_i\times\vec\Omega, \qquad\qquad i=1,\dots,n. \end{align} \end{thm} Note that the kinetic energy of the system takes the form \[ T=\frac12 \langle \vec M,\vec\Omega\rangle+\frac12\sum_{i=1}^n I_i c_i^2. \] Also, since \[ \frac{d}{dt}\vec N=\varepsilon \vec N\times \vec\Omega, \] the equation \eqref{red1} is equivalent to \begin{equation} \frac{d}{dt}(\vec M+\vec N)=(\vec M+\vec N)\times\vec\Omega. \label{red1*} \end{equation} \begin{proof}[Proof of Theorem \ref{Glavna}] From the equations \eqref{jednacine1} and \eqref{jednacine2} we have \[ \vec{\Gamma}_i\times\vec{\mathbf F}_{B_i}=\frac1{2r}(I_i\dot{\vec{\Omega}}_i+\vec{\Omega}\times(I_i\vec{\Omega}_i))+\frac{m_i}{2}\vec{\Gamma}_i\times\dot{\vec{V}}_{O_i}+ \frac{m_i}{2}\vec{\Gamma}_i\times(\vec{\Omega}\times\vec{V}_{O_i}). \] By plugging the last expression into the third equation of motion \eqref{jednacine3}, it becomes \begin{equation} \label{jednacina3} \begin{aligned} I\dot{\vec{\Omega}}+\vec{\Omega}\times I\vec{\Omega}=-&\sum_{i=1}^{n}\Big[\frac{R+2r}{2r}(I_i\dot{\vec{\Omega}}_i +\vec{\Omega}\times(I_i\vec{\Omega}_i))+\\ &\frac{m_i(R+2r)}{2}\vec{\Gamma}_i\times\dot{\vec{V}}_{O_i}+\frac{m_i(R+2r)}{2}\vec{\Gamma}_i\times(\vec{\Omega}\times\vec{V}_{O_i})\Big]. \end{aligned} \end{equation} From \eqref{veze1*}, \eqref{game1}, and \eqref{game}, we get $\dot{\vec{\Gamma}}_i\times\vec{V}_{O_i}=0$, and, therefore, \[ \frac{d}{dt}\big(\vec{\Gamma}_i\times\vec{V}_{O_i}\big)=\vec{\Gamma}_i\times\dot{\vec{V}}_{O_i}. \] Also, we have \[ \vec{\Gamma}_i\times(\vec{\Omega}\times\vec{V}_{O_i})=\vec{\Omega}\times(\vec{\Gamma}_i\times\vec{V}_{O_i}).
\] Bearing in mind the last two expressions, the equation \eqref{jednacina3} becomes \begin{equation} \label{jednacina3a} \begin{aligned} \frac{d}{dt}&\Big(I\vec{\Omega}+\sum_{i=1}^{n}\big(\frac{R+2r}{2r}I_i\vec{\Omega}_i+\frac{m_i(R+2r)}{2}\vec{\Gamma}_i\times{\vec{V}}_{O_i}\big)\Big)=\\ &-\vec{\Omega}\times \Big(I\vec{\Omega}+\sum_{i=1}^{n}\big(\frac{R+2r}{2r}I_i\vec{\Omega}_i+\frac{m_i(R+2r)}{2}\vec{\Gamma}_i\times{\vec{V}}_{O_i}\big)\Big). \end{aligned} \end{equation} Finally, using \eqref{omegai1}, the constraints \eqref{veze2}, and the definitions \eqref{parametri}, \eqref{m1}, \eqref{n1} of the parameters $\varepsilon$ and $\delta$ and of the vectors $\vec M$ and $\vec N$, the equation \eqref{jednacina3a} takes the form \eqref{red1*}. \end{proof} \begin{remark} If we formally set $\varepsilon=1$ in the system \eqref{red1}, \eqref{red2} we obtain the equations of the spherical support system introduced by Fedorov in \cite{F1}. The system describes the rolling without slipping of a dynamically nonsymmetric sphere $\mathbf S$ over $n$ homogeneous balls $\mathbf B_1, \dots,\mathbf B_n$ of possibly different radii, but with fixed centers. It is an example of a class of non-Hamiltonian L+R systems on Lie groups with an invariant measure (see \cite{FeRCD, FJ1, Jo1}). On the other hand, if we set $\vec N=0$, we obtain an example of an $\varepsilon$--modified L+R system studied in \cite{Jo2015}. \end{remark} Since $\langle\vec{\omega}_i,\vec{\gamma}_i\rangle=\langle\vec{\Omega}_i,\vec{\Gamma}_i\rangle$, we have that $\mathcal D$ is foliated into invariant varieties \[ \mathcal D_c\colon \qquad \langle\vec{\omega}_i,\vec{\gamma}_i\rangle=c_i, \qquad i=1,\dots,n, \qquad \dim\mathcal D_c=5n+6 \] and $\mathcal M_c=\mathcal D_c/SO(3)^{n+1}$. As a result we obtain the following diagram \begin{equation*} \label{principal} \xymatrix@R28pt@C28pt{ \mathcal D_c \,\,\ar@{->}[d]^{/SO(3)^{n+1}} \ar@{^{(}->}[r]& \,\, \mathcal D\ar@{^{}->}[d]^{/SO(3)^{n+1}} \,\, \ar@{^{(}->}[r] & TQ=(TSO(3))^{n+1}\times (TS^2)^n \ar@{^{}->}[d]^{/SO(3)^{n+1}}\\ \mathcal M_c \,\,\ar@{->}[rrd]^{\pi_c}_{\cong} \ar@{^{(}->}[r]& \,\,\mathcal M\,\, \ar@{^{(}->}[r] & (TQ)/SO(3)^{n+1}\cong{\mathbb R}^{3(n+1)}\times (TS^2)^n \ar@{^{}->}[d]^{\pi} \\ & & \mathcal N=\mathbb{ R}^3\times \big(S^2\big)^{n} } \end{equation*} which implies the diffeomorphism \[ \mathcal D_c\cong \mathbb{ R}^3\times SO(3)^{n+1}\times\big(S^2\big)^{n}\{\vec\Omega,\mathbf g,\mathbf g_1,\dots,\mathbf g_n,\vec\Gamma_1,\dots,\vec\Gamma_n\}. \] \begin{cor} \label{Originalne} The complete equations of motion of the sphere $\mathbf S$ and the balls $\mathbf B_1,\dots,\mathbf B_n$ of the spherical ball bearing problem on the invariant manifold $\mathcal D_c$ are given by \begin{align*} \dot{\vec{M}}&=\vec{M}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega},\\ \dot{\mathbf g}&=\mathbf g \Omega,\\ \dot{\mathbf g}_i&=\mathbf g\Omega_i(\vec\Omega,\vec\Gamma_i,c_i)\mathbf g^{-1}\mathbf g_i,\\ \dot{\vec\Gamma}_i&=\varepsilon\vec\Gamma_i\times\vec\Omega, \qquad\qquad i=1,\dots,n, \end{align*} where $\vec M$ and $\vec N$ are given by \eqref{m1} and \eqref{n1}. Here $\Omega$ and $\Omega_i(\vec\Omega,\vec\Gamma_i,c_i)$ are the skew-symmetric matrices related to $\vec\Omega$ and $\vec{\Omega}_i$ via the identification \eqref{izomorfizam}, and $\vec{\Omega}_i=\vec\Omega_i(\vec\Omega,\vec\Gamma_i,c_i)$ is given by the equation \eqref{omegai1}.
\end{cor} \section{The associated system on $\mathbb{ R}^3\times Sym(3)$ and an invariant measure}\label{sec5} Let \[ \Gamma=-\delta^2\sum_{i=1}^n(I_i+m_ir^2)\mathrm{pr}_i \] be the symmetric operator by means of which the definition \eqref{modI} of the modified inertia operator $\mathbf I$ can be rewritten as: \[ \mathbf I=I-\Gamma, \qquad \Gamma=\delta^2\sum_{i=1}^n(I_i+m_ir^2)\big(\vec{\Gamma}_i\otimes\vec{\Gamma}_i-\mathbf E\big), \qquad \mathbf E=\mathrm{diag}(1,1,1). \] Along the flow of the system, $\Gamma$ satisfies the equation \begin{equation}\label{GAMA} \frac{d}{dt}\Gamma=\varepsilon[\Gamma,\Omega], \end{equation} where $\Omega$ is the skew-symmetric matrix that corresponds to the angular velocity $\vec\Omega$ via the isomorphism \eqref{izomorfizam}. Let us consider the special case $c_1=\dots=c_n=0$, i.e., the invariant manifold $\mathcal M_0$. This means that there is no twisting of the balls, i.e., the vectors $\vec{\Omega}_i$ and $\vec{\Gamma}_i$ are orthogonal to each other. However, note that these conditions are not nonholonomic constraints, but first integrals of motion. As a result we obtain the associated system \begin{equation}\label{eqc0} \begin{aligned} \dot{\vec{M}}&=\vec{M}\times\vec{\Omega}, \qquad \vec M=\mathbf I\vec\Omega=I\vec\Omega-\Gamma\vec\Omega,\\ \dot{\Gamma}&=\varepsilon[\Gamma,\Omega] \end{aligned} \end{equation} on the space $\mathbb{ R}^3\times Sym(3)$, where $Sym(3)$ is the space of $3\times 3$ symmetric matrices. The system belongs to the class of $\varepsilon$--modified L+R systems studied in \cite{Jo2015}. Let $d\Omega$ and $d\Gamma$ be the standard measures on $\mathbb{ R}^3\{\vec\Omega\}$ and $Sym(3)\{\Gamma\}$. The system \eqref{eqc0} possesses the invariant measure $\mu(\Gamma) d\Omega \wedge d\Gamma$ with the density $\mu(\Gamma)=\sqrt{\det(\mathbf I)}$ (see Theorem 4, \cite{Jo2015}). Therefore, $\mu=\sqrt{\det(\mathbf I)}$ is a natural candidate for the density of an invariant measure of the system \eqref{red1}, \eqref{red2} when the constants $c_i$ are different from zero. Indeed, we have \begin{thm}\label{mera} For arbitrary values of the parameters $c_i$, the reduced system \eqref{red1}, \eqref{red2} has the invariant measure \begin{equation}\label{MERA} \mu(\vec\Gamma_1,\dots,\vec\Gamma_n)d\Omega\wedge \sigma_1\wedge \dots \wedge \sigma_n, \qquad \mu=\sqrt{\det(\mathbf I)}=\sqrt{\det(I-\Gamma)}, \end{equation} where $d\Omega$ and $\sigma_i$ are the standard measures on $\mathbb{ R}^3\{\vec\Omega\}$ and $S^2\{\vec\Gamma_i\}$, $i=1,\dots,n$. \end{thm} The proof of the theorem we are going to present is a variant of the corresponding proof for $\varepsilon$--modified L+R systems. It is given below for the completeness of the exposition. In what follows we use the following lemma. \begin{lem}\label{lemica} Let $A$ be a symmetric matrix and let $\vec\Omega\in\mathbb{ R}^3$ and $\Omega\in so(3)$ be related by \eqref{izomorfizam}. Then: \begin{itemize} \item[(i)] the symmetric part of the matrix $\partial\big(A\vec{\Omega}\times\vec{\Omega}\big)/{\partial\vec{\Omega}}$ is equal to $\frac{1}{2}[A,\Omega]$; \item[(ii)] $A\vec{\Omega}\times\vec{\Omega}=[A,\Omega]\vec{\Omega}$.
\end{itemize} \end{lem} \begin{proof}[Proof of Theorem \ref{mera}] We can consider the system \begin{align} \label{pros1} &\frac{d}{dt}\big({\mathbf I\vec\Omega}\big)=\mathbf I\vec{\Omega}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega},\qquad \mathbf I=I-\Gamma,\\ \label{pros2} &\frac{d}{dt}\vec\Gamma_i=\varepsilon\vec\Gamma_i\times\vec\Omega, \qquad\qquad\qquad\qquad\qquad i=1,\dots,n, \end{align} as extended in the Euclidean space $\mathbb{ R}^{3n+3}\{\vec\Omega,\vec\Gamma_1,\dots,\vec\Gamma_n\}$ as well. The extended system also has first integrals $\langle\vec\Gamma_i,\vec\Gamma_j\rangle=\gamma_{ij}$. In particular, by taking $\gamma_{ii}=1$, $i=1,\dots,n$, we get that the reduced system is the restriction of the extended system \eqref{pros1}, \eqref{pros2} from $\mathbb{ R}^{3n+3}$ to the invariant variety $\mathcal N$. Therefore, it is sufficient to prove that the extended system preserves the measure \[ \nu=\mu(\vec\Gamma_1,\dots,\vec\Gamma_n)d\Omega\wedge d\Gamma_1\wedge \dots \wedge d\Gamma_n, \qquad \mu=\sqrt{\det(\mathbf I)}, \] where $d\Gamma_i$ is the standard measure in $\mathbb{ R}^3\{\vec\Gamma_i\}$. This is a standard construction: if a system has an invariant measure, then the restriction of the system to an invariant manifold also has an invariant measure induced by the measure from the ambient space. Let \[ X=(\dot{\vec{\Omega}},\dot{\vec\Gamma}_1,\dots,\dot{\vec\Gamma}_n) \] be the vector field on $\mathbb{ R}^{3n+3}$ defined by the equations \eqref{pros1}, \eqref{pros2} and assume that the Lie derivative $\mathcal L_X$ of $\nu$ vanishes. It is well known that for $\vec\Gamma_i\ne 0$, the volume form in $\mathbb{ R}^3\{\vec\Gamma_i\}$ can be written as $d\Gamma_i=\alpha_i\wedge \sigma_i$, where $\sigma_i$ is the standard measure on the unit sphere and \[ \alpha_i=\vert\vec\Gamma_i\vert^2 d(\vert \vec\Gamma_i\vert)=\frac13 d\langle\vec\Gamma_i,\vec\Gamma_i\rangle^{\frac32}, \qquad i=1,\dots,n. \] Since $\mathcal L_X(\alpha_i)=0$, we have \[ \mathcal L_X(\nu)=const \cdot \alpha_1 \wedge \dots \wedge \alpha_n \wedge \mathcal L_X\big(\mu\, d\Omega \wedge \sigma_1\wedge \dots \wedge \sigma_n\big)=0, \] $\vec\Gamma_i\ne 0$, $i=1,\dots,n$. Therefore, the reduced system preserves the measure \eqref{MERA}. The equation $\mathcal L_X(\nu)=0$ in $\mathbb{ R}^{3n+3}$ can be written in the equivalent form \begin{equation}\label{uslov} \dot{\mu} +\mu\mathrm{ div}(X)=\dot{\mu} +\mu\mathrm{tr}\frac{\partial{\dot{\vec\Omega}}}{\partial\vec\Omega} + \mu\sum_{i=1}^n\mathrm{tr}\frac{\partial{\dot{\vec\Gamma}_i}}{\partial\vec\Gamma_i}=0. \end{equation} We have the following equalities \begin{align*} \dot{\mathbf I}\vec{\Omega}&+{\mathbf I}\dot{\vec{\Omega}}=I\vec{\Omega}\times\vec{\Omega}-\Gamma\vec{\Omega}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega},\\ {\mathbf I}\dot{\vec{\Omega}}&=I\vec{\Omega}\times\vec{\Omega}-\Gamma\vec{\Omega}\times\vec{\Omega}+\varepsilon[\Gamma,\Omega]\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega}\\ &=I\vec{\Omega}\times\vec{\Omega}-(1-\varepsilon)\Gamma\vec{\Omega}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega}, \qquad \text{(item (ii) of Lemma \ref{lemica})}. 
\end{align*} Thus, \begin{align*} &\dot{\vec{\Omega}}={\mathbf I}^{-1}\big(I\vec{\Omega}\times\vec{\Omega}-(1-\varepsilon)\Gamma\vec{\Omega}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega}\big),\\ &\mathrm{tr}\frac{\partial{\dot{\vec\Omega}}}{\partial\vec\Omega}=\mathrm{tr}\Big({\mathbf I}^{-1}\frac{\partial}{\partial\vec{\Omega}}\big(I\vec{\Omega}\times\vec{\Omega}-(1-\varepsilon)\Gamma\vec{\Omega}\times\vec{\Omega}+(1-\varepsilon)\vec N\times\vec{\Omega}\big)\Big). \end{align*} The matrix $\partial(\vec N\times\vec\Omega)/\partial\vec\Omega$ is skew-symmetric. Since ${\mathbf I}^{-1}$ is symmetric, only the symmetric part of the expression in parentheses in the last equation matters. Whence, using item (i) of Lemma \ref{lemica}, we get \begin{equation}\label{div1} \begin{aligned} \mathrm{tr}\frac{\partial{\dot{\vec\Omega}}}{\partial\vec\Omega}&=\mathrm{tr}\Big({\mathbf I}^{-1}\big(\frac{1}{2}[I,\Omega]-\frac{1-\varepsilon}{2}[\Gamma,\Omega]\big)\Big)\\ &=\mathrm{tr}\Big({\mathbf I}^{-1}\frac{1}{2}[{\mathbf I},\Omega]+\frac{\varepsilon}{2}{\mathbf I}^{-1}[\Gamma,\Omega]\Big)=\frac{\varepsilon}{2}\mathrm{tr}({\mathbf I}^{-1}[\Gamma,\Omega]). \end{aligned} \end{equation} On the other hand \begin{equation}\label{div2} \begin{aligned} \dot{\mu}=&\frac{1}{2\sqrt{\det ({\mathbf I})}}\frac{d}{dt}\det({\mathbf I})= \frac{1}{2\sqrt{\det ({\mathbf I})}}\det({\mathbf I})\mathrm{tr}\big({\mathbf I}^{-1}\frac{d}{dt}\big(I-\Gamma\big)\big)\\ =& -\frac{1}{2}\mu\mathrm{tr}({\mathbf I}^{-1}\varepsilon[\Gamma,\Omega]). \end{aligned} \end{equation} Here we used the well-known formula $\frac{d}{dt}\det({\mathbf I})=\det({\mathbf I})\mathrm{tr}({\mathbf I}^{-1}\dot{\mathbf I})$. Since the matrices ${\partial{\dot{\vec\Gamma}_i}}/{\partial\vec\Gamma_i}$ are skew-symmetric and have zero traces, the equations \eqref{div1} and \eqref{div2} imply the required condition \eqref{uslov}. \end{proof} Note that the existence of an invariant measure has been well studied for many classical nonholonomic problems \cite{BM2002, BMB2013}. After Kozlov's theorem on the obstruction to the existence of an invariant measure for the variant of the classical Suslov problem on Lie algebras \cite{Kozlov} (e.g., see \cite{BT2020, FJ1}), general existence statements for nonholonomic systems with symmetries were obtained in \cite{ZenkovBloch} and \cite{FGM2015}. A closely related problem is the integrability of nonholonomic systems \cite{AKN}. Here we have the following statement. \begin{prop} The system \eqref{red1}, \eqref{red2} always has the following first integrals \[ F_1=\frac12 \langle \vec M,\vec\Omega\rangle, \quad F_2=\langle \vec M+\vec N, \vec M+\vec N\rangle, \quad F_{ij}=\langle \vec\Gamma_i, \vec\Gamma_j\rangle, \quad 1\le i < j\le n. \] \end{prop} Thus, in the special case $n=1$, we have the 5-dimensional phase space $\mathcal N=\mathbb{ R}^3\times S^2\{\vec\Omega,\vec\Gamma_1\}$, and the system has two first integrals and an invariant measure. For integrability, one needs to find a third independent first integral. We will study the integrability of the spherical ball bearing problem in a separate paper. Also, it would be interesting to study the appropriate nonholonomic systems in arbitrary dimension $\mathbb{ R}^m$, $m>3$ (e.g., see \cite{FK1995, FJ1, Jo1, Naranjo2019b, Jov2019}), or the systems where the homogeneous balls $\mathbf B_i$ are replaced by the systems of the form (ball + gyroscope), which satisfy the Zhukovskii conditions (see \cite{Zhuk1893, DGJ}).
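To conclude this section, the first integrals listed in the proposition above can also be verified numerically. The following Python sketch is for illustration only: all parameter values (radii, inertia operators, masses, the constants $c_i$, and the initial data for $n=2$) are hypothetical and do not correspond to any particular bearing. It integrates the reduced system \eqref{red1}, \eqref{red2} and monitors $F_1$, $F_2$, and $F_{12}$.
\begin{verbatim}
import numpy as np

# Hypothetical data, for illustration only (n = 2 balls).
n = 2
R, r = 2.0, 0.5
eps = R / (2 * R + 2 * r)
delta = (R + 2 * r) / (2 * r)
I_sphere = np.diag([1.0, 2.0, 3.0])          # I = diag(A, B, C)
I_ball = np.array([0.4, 0.7])                # I_i
m_ball = np.array([1.0, 1.5])                # m_i
c = np.array([0.2, -0.3])                    # c_i = <Omega_i, Gamma_i>

def inertia(G):
    # modified inertia operator: I + delta^2 sum (I_i + m_i r^2) pr_i
    Ib = I_sphere.copy()
    for Ii, mi, g in zip(I_ball, m_ball, G):
        Ib = Ib + delta ** 2 * (Ii + mi * r ** 2) * (np.eye(3) - np.outer(g, g))
    return Ib

def N_vec(G):
    return delta * sum(Ii * ci * g for Ii, ci, g in zip(I_ball, c, G))

def rhs(s):
    M, G = s[:3], s[3:].reshape(n, 3)
    Om = np.linalg.solve(inertia(G), M)      # Omega = I_bold^{-1} M
    dM = np.cross(M, Om) + (1 - eps) * np.cross(N_vec(G), Om)
    return np.concatenate([dM, (eps * np.cross(G, Om)).ravel()])

def rk4(s, h):
    k1 = rhs(s); k2 = rhs(s + h / 2 * k1)
    k3 = rhs(s + h / 2 * k2); k4 = rhs(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrals(s):
    M, G = s[:3], s[3:].reshape(n, 3)
    Om = np.linalg.solve(inertia(G), M)
    MN = M + N_vec(G)
    return np.array([0.5 * M @ Om, MN @ MN, G[0] @ G[1]])   # F_1, F_2, F_12

G0 = np.array([[1.0, 0.0, 0.0], [0.0, 0.6, 0.8]])
Om0 = np.array([0.3, -0.2, 0.5])
s = np.concatenate([inertia(G0) @ Om0, G0.ravel()])
f0 = integrals(s)
for _ in range(4000):
    s = rk4(s, 1e-3)
print(np.abs(integrals(s) - f0))             # all ~1e-12
\end{verbatim}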
\section{Planar system -- the three balls bearing problem}\label{sec6} \subsection{Definition of the planar three balls bearing problem} Consider the limit when the radii of the spheres $\mathbf S_0$ and $\mathbf S$ both tend to infinity. For simplicity, we consider the case $n=3$. As a result, we obtain rolling without slipping of three homogeneous balls $\mathbf B_1,\mathbf B_2,\mathbf B_3$ of radius $r$ and masses $m_1, m_2, m_3$ over the fixed plane $\Sigma_0$, together with the moving plane $\Sigma$ of mass $m$ that is placed over the balls, such that there is no slipping between the balls and the moving plane. We will refer to the system as the \emph{planar three balls bearing problem}. Note that all considerations of this Section can be easily adapted to the case of the planar ball bearing with $n$ rolling homogeneous balls. Let $O_0$ be the fixed point of the plane $\Sigma_0$, $O, O_1, O_2, O_3$ be the centers of mass of the plane $\Sigma$ and the balls $\mathbf B_1,\mathbf B_2,\mathbf B_3$, respectively. Let also \[ O_0\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3, \qquad O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3, \qquad O_i\vec{\mathbf e}^i_1,\vec{\mathbf e}^i_2,\vec{\mathbf e}^i_3, \] be positively oriented reference frames rigidly attached to the fixed plane $\Sigma_0$, the moving plane $\Sigma$, and the ball $\mathbf B_i$ ($i=1,2,3$), respectively. Here $\vec{\mathbf e}_3=\vec{\mathbf e}^0_3$ is the unit vector orthogonal to $\Sigma$ and $\Sigma_0$. In the fixed reference frame, the positions of the points $O$, $O_1$, $O_2$, and $O_3$ are respectively given by \[ O(x,y,2r), \qquad O_1(x_1,y_1,r), \qquad O_2(x_2,y_2,r), \qquad O_3(x_3,y_3,r). \] We denote by $\mathbf g\in SO(2)\subset SO(3)$ the rotation matrix that maps $O\vec{\mathbf e}_1,\vec{\mathbf e}_2,\vec{\mathbf e}_3$ to $O_0\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3$, and by $\mathbf g_i\in SO(3)$ the matrix that maps the moving frame $O_i\vec{\mathbf e}^i_1,\vec{\mathbf e}^i_2,\vec{\mathbf e}^i_3$ to the fixed frame $O_0\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3$, $i=1,2,3$. As above, the skew-symmetric matrices \[ \omega=\dot{\mathbf g}{\mathbf g}^{-1}, \qquad \omega_i=\dot{\mathbf g}_i \mathbf g_i^{-1}, \] after the identification \eqref{izomorfizam}, correspond to the angular velocities $\vec{\omega}$ and $\vec\omega_i$ of the plane $\Sigma$ and the ball $\mathbf B_i$ relative to the fixed coordinate system. Note that \[ \mathbf g= \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \omega= \begin{pmatrix} 0 & - \dot\varphi & 0 \\ \dot\varphi & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \text{and} \quad \vec\omega=(0,0,\dot\varphi). \] Then the configuration space of the planar three balls bearing problem is \[ Q=SO(3)\times SO(3)\times SO(3) \times \mathbb{ R}^2\times SO(2)\times (\mathbb{ R}^2)^3\{\mathbf g_1,\mathbf g_2,\mathbf g_3,x,y,\varphi,x_1,y_1,x_2,y_2,x_3,y_3\}, \] while the kinetic energy is \[ T=\frac12 Iv_\varphi^2+\frac12 m\langle \vec{v}_O,\vec{v}_O\rangle+\frac12\sum_{i=1}^3 I_i \langle \vec{\omega}_i,\vec{\omega}_i\rangle+\frac12 \sum_{i=1}^3 m_i \langle \vec{v}_{O_i},\vec{v}_{O_i}\rangle, \] where $\mathrm{diag}(I_i,I_i,I_i)$ is the inertia operator of the ball $\mathbf B_i$, $i=1,2,3$, $v_\varphi=\dot\varphi$, and $I$ is the moment of inertia of the plane $\Sigma$ about the $O\vec{\mathbf e}_3$-axis through the mass centre $O$.
\begin{pspicture}(12,7) \pscircle[linecolor=black,fillstyle=solid, fillcolor=gray!20](3.5,3){1} \psellipticarc[linestyle=dashed](3.5,3)(1,0.2){0}{180} \psellipticarc(3.5,3)(1,0.2){180}{360} \psdot[dotsize=2pt](3.5,3)\uput[0](3.5,3.2){$O_1$} \psdot[dotsize=1.5pt](3.5,4)\uput[0](3.5,4.2){$A$} \psdot[dotsize=1.5pt](3.5,2)\uput[0](3.5,1.8){$A_1$} \pscircle[linecolor=black, fillstyle=solid, fillcolor=gray!20](7,3){1} \psellipticarc[linestyle=dashed](7,3)(1,0.2){0}{180} \psellipticarc(7,3)(1,0.2){180}{360} \psdot[dotsize=2pt](7,3)\uput[0](7,3.2){$O_2$} \psdot[dotsize=1.5pt](7,4)\uput[0](7,4.2){$B$} \psdot[dotsize=1.5pt](7,2)\uput[0](7,1.8){$B_1$} \pscircle[linecolor=black, fillstyle=solid, fillcolor=gray!20](5.5,5){0.8} \psellipticarc[linestyle=dashed](5.5,5)(.8,0.2){0}{180} \psellipticarc(5.5,5)(0.8,0.2){180}{360} \psdot[dotsize=2pt](5.5,5)\uput[0](5.5,5.2){$O_3$} \psdot[dotsize=1.5pt](5.5,5.8)\uput[0](5.5,6){$C$} \psdot[dotsize=1.5pt](5.5,4.2)\uput[0](5.5,4){$C_1$} \psline[linecolor=black,linewidth=0.01cm](1,1)(10,1) \psline[linecolor=black, linewidth=0.01cm](1,1)(1.60,3.5) \psline[linecolor=black, linewidth=0.01cm, linestyle=dashed](1.60,3.5)(2,5.2) \psline[linecolor=black, linewidth=0.01cm, linestyle=dashed](2,5.2)(4.7,5.2) \psline[linecolor=black, linewidth=0.01cm, linestyle=dashed](6.3,5.2)(9,5.2) \psline[linecolor=black, linewidth=0.01cm, linestyle=dashed](9,5.2)(9.39,3.5) \psline[linecolor=black, linewidth=0.01cm,](9.39,3.5)(10,1) \psline[linecolor=black,linewidth=0.02cm](1,3.5)(10,3.5) \psline[linecolor=black, linewidth=0.02cm](1,3.5)(2,6) \psline[linecolor=black, linewidth=0.02cm](2,6)(9,6) \psline[linecolor=black, linewidth=0.02cm](9,6)(10,3.5) \uput[0](1.1,3.7){$\Sigma$} \uput[0](2,0.5){{\sc Figure 2}. Planar three balls bearing problem} \end{pspicture} Let $A_1$ and $A$ be the points of contact of $\mathbf B_1$ with fixed plane $\Sigma_0$ and with plane $\Sigma$, $B_1$ and $B$ be the contact points of $\mathbf B_2$, and $C_1$ and $C$ be the contact points of $\mathbf B_3$ with those planes. We have the following nonholonomic constraints written in the fixed reference frame $O_0\vec{\mathbf e}^0_1,\vec{\mathbf e}^0_2,\vec{\mathbf e}^0_3$: \begin{equation}\label{noncon} \begin{aligned} \vec{v}_{O_1}&-r\vec{\omega}_1\times\vec{\gamma}=0,\ \vec{v}_{O_2}-r\vec{\omega}_2\times\vec{\gamma}=0,\ \vec{v}_{O_3}-r\vec{\omega}_3\times\vec{\gamma}=0,\\ \vec{v}_{O_1}&+r\vec{\omega}_1\times\vec{\gamma}=\vec{v}_O+\vec{\omega}\times\overrightarrow{OA},\\ \vec{v}_{O_2}&+r\vec{\omega}_2\times\vec{\gamma}=\vec{v}_O+\vec{\omega}\times\overrightarrow{OB},\\ \vec{v}_{O_3}&+r\vec{\omega}_3\times\vec{\gamma}=\vec{v}_O+\vec{\omega}\times\overrightarrow{OC}, \end{aligned} \end{equation} where $\vec\gamma=(0,0,1)$ is the unit vector orthogonal to the planes $\Sigma_0$ and $\Sigma$, i.e., \[ \vec\gamma=\vec{\mathbf e}_3^0=\vec{\mathbf e}_3. \] The first three vector constraints are obtained from the condition that the velocities of the contact points $A_1, B_1, C_1$ with the fixed plane $\Sigma_0$ are zero. The remaining ones follow from the condition that there is no sliding between the balls and the plane $\Sigma$. This means that the velocities of points $A$, $B$ and $C$ are the same as velocities of the corresponding points at the plane $\Sigma$. The configuration space $Q$ is 18-dimensional and there are twelve independent nonholonomic constraints among \eqref{noncon}. Hence the vector subspaces of the admissible velocities $\mathcal D_q\subset T_q Q,$ $q\in Q$, are six--dimensional. 
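The count of independent constraints can be double-checked by a short computation. The sketch below is illustrative only; the ordering of the velocity coordinates and the sample positions of the centers are our own (hypothetical) choices. It assembles the nontrivial scalar constraints from \eqref{noncon} as linear forms in the $18$ velocity components and verifies that their rank equals $12$, so that $\dim\mathcal D_q=18-12=6$.
\begin{verbatim}
import numpy as np

# Velocity coordinates, ordered as
# (xdot_i, ydot_i, w_i1, w_i2, w_i3) for i = 1, 2, 3, followed by (xdot, ydot, phidot).
# Sample positions of O and of the ball centers (hypothetical values).
rng = np.random.default_rng(0)
r = 0.5
x, y = rng.standard_normal(2)
centers = rng.standard_normal((3, 2))

rows = []
for i, (xi, yi) in enumerate(centers):
    base = 5 * i
    a_x = np.zeros(18); a_y = np.zeros(18); b_x = np.zeros(18); b_y = np.zeros(18)
    # rolling over the fixed plane: v_{O_i} - r w_i x gamma = 0
    a_x[base] = 1.0; a_x[base + 3] = -r        # xdot_i - r w_i2 = 0
    a_y[base + 1] = 1.0; a_y[base + 2] = r     # ydot_i + r w_i1 = 0
    # no slipping on the moving plane: v_{O_i} + r w_i x gamma = v_O + w x OX_i
    b_x[base] = 1.0; b_x[base + 3] = r; b_x[15] = -1.0; b_x[17] = yi - y
    b_y[base + 1] = 1.0; b_y[base + 2] = -r; b_y[16] = -1.0; b_y[17] = -(xi - x)
    rows += [a_x, a_y, b_x, b_y]

A = np.array(rows)
print(A.shape, np.linalg.matrix_rank(A))       # (12, 18) 12  =>  dim D_q = 6
\end{verbatim}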
Note that the dimensions of the configuration space $Q$ and the constraint manifold $\mathcal D$ in the spherical ball bearing problem for $n=3$ and the planar three balls bearing problem coincide. Now, the additional one-sided constraints read: \begin{equation}\label{jednostrane} \begin{aligned} &\vert\overrightarrow{O_1O_2}\vert=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2} \ge 2r, \\ &\vert\overrightarrow{O_2O_3}\vert=\sqrt{(x_3-x_2)^2+(y_3-y_2)^2} \ge 2r, \\ &\vert\overrightarrow{O_3O_1}\vert=\sqrt{(x_1-x_3)^2+(y_1-y_3)^2}\ge 2r. \end{aligned} \end{equation} \subsection{The equations of motion} As in the case of the spherical ball bearing problem, we will derive the equations of motion in the planar case in vector form. The corresponding reaction forces will be expressed in terms of Lagrange multipliers $\lambda_1,\dots,\lambda_{12}$ that correspond to twelve independent nonholonomic constraints. The equations of motion of the planar three balls bearing system relative to the fixed coordinate system are: \begin{equation}\label{jednacine} \begin{aligned} m_1 \dot{\vec{v}}_{O_1}&=(\lambda_1,\lambda_2,0)+(\lambda_7,\lambda_8,0),\\ m_2\dot{\vec{v}}_{O_2}&=(\lambda_3,\lambda_4,0)+(\lambda_9,\lambda_{10},0),\\ m_3\dot{\vec{v}}_{O_3}&=(\lambda_5,\lambda_6,0)+(\lambda_{11},\lambda_{12},0),\\ m\dot{\vec{v}}_O&=-(\lambda_7,\lambda_8,0)-(\lambda_9,\lambda_{10},0)-(\lambda_{11},\lambda_{12},0), \end{aligned} \end{equation} and \begin{equation}\label{jednacine*} \begin{aligned} {I}_1\dot{\vec{\omega}}_1&=-r\vec{\gamma}\times((\lambda_1,\lambda_2,0)-(\lambda_7,\lambda_8,0)),\\ {I}_2\dot{\vec{\omega}}_2&=-r\vec{\gamma}\times((\lambda_3,\lambda_4,0)-(\lambda_9,\lambda_{10},0)),\\ {I}_3\dot{\vec{\omega}}_3&=-r\vec{\gamma}\times((\lambda_5,\lambda_6,0)-(\lambda_{11},\lambda_{12},0)), \\ I\dot{\vec{\omega}}&=-\overrightarrow{OA}\times(\lambda_7,\lambda_8,0)-\overrightarrow{OB}\times(\lambda_9,\lambda_{10},0)- \overrightarrow{OC}\times(\lambda_{11},\lambda_{12},0). \end{aligned} \end{equation} By differentiating the first three constraints from \eqref{noncon}, and using the first three equations of motion in \eqref{jednacine} and \eqref{jednacine*}, we get: \begin{equation}\label{mnozioci1} \begin{aligned} (\lambda_1,\lambda_2, 0)&=\frac{m_1r^2-I_1}{m_1r^2+I_1}(\lambda_7,\lambda_8,0),\\ (\lambda_3,\lambda_4, 0)&=\frac{m_2r^2-I_2}{m_2r^2+I_2}(\lambda_9,\lambda_{10},0),\\ (\lambda_5,\lambda_6, 0)&=\frac{m_3r^2-I_3}{m_3r^2+I_3}(\lambda_{11},\lambda_{12},0).
\end{aligned} \end{equation} Therefore, the equations \eqref{jednacine}, \eqref{jednacine*} can be written as \begin{equation}\label{jed1} \begin{aligned} \dot{\vec{v}}_{O_1}&=\frac{2r^2}{m_1r^2+I_1}\vec{\mathbf F}_1,\\ \dot{\vec{v}}_{O_2}&=\frac{2r^2}{m_2r^2+I_2}\vec{\mathbf F}_2,\\ \dot{\vec{v}}_{O_3}&=\frac{2r^2}{m_3r^2+I_3}\vec{\mathbf F}_3,\\ m\dot{\vec{v}}_O&=-\vec{\mathbf F}_1-\vec{\mathbf F}_2-\vec{\mathbf F}_3,\\ \end{aligned} \end{equation} and \begin{equation}\label{jed2} \begin{aligned} \dot{\vec{\omega}}_1&=\frac{2r}{m_1r^2+I_1}\vec{\gamma}\times\vec{\mathbf F}_1,\\ \dot{\vec{\omega}}_2&=\frac{2r}{m_2r^2+I_2}\vec{\gamma}\times\vec{\mathbf F}_2,\\ \dot{\vec{\omega}}_3&=\frac{2r}{m_3r^2+I_3}\vec{\gamma}\times\vec{\mathbf F}_3,\\ I\dot{\vec{\omega}}&=-\overrightarrow{OA}\times\vec{\mathbf F}_1-\overrightarrow{OB}\times\vec{\mathbf F}_2- \overrightarrow{OC}\times\vec{\mathbf F}_3, \end{aligned} \end{equation} where \[ \vec{\mathbf F}_1=(\lambda_7,\lambda_8,0), \qquad \vec{\mathbf F}_2=(\lambda_9,\lambda_{10},0), \qquad \vec{\mathbf F}_3=(\lambda_{11},\lambda_{12},0). \] By differentiating the remaining constraints, we get the following linear system of six equations in the Lagrange multipliers $\lambda_7,\dots,\lambda_{12}$: \begin{equation}\label{mnozioci2} \begin{aligned} \frac{4Ir^2}{m_1r^2+I_1}\vec{\mathbf F}_1=&\big(\overrightarrow{OA}\wedge \vec{\mathbf F}_1+\overrightarrow{OB}\wedge \vec{\mathbf F}_2+\overrightarrow{OC}\wedge \vec{\mathbf F}_3\big)\overrightarrow{OA}\\ &+\vec{\omega}\times(\vec{v}_{O_1}-\vec{v}_{O})-\frac{I}{m}\big( \vec{\mathbf F}_1+\vec{\mathbf F}_2+\vec{\mathbf F}_3\big),\\ \frac{4Ir^2}{m_2r^2+I_2}\vec{\mathbf F}_2= &\big(\overrightarrow{OA}\wedge \vec{\mathbf F}_1+\overrightarrow{OB}\wedge \vec{\mathbf F}_2+\overrightarrow{OC}\wedge \vec{\mathbf F}_3\big)\overrightarrow{OB}\\ &+\vec{\omega}\times(\vec{v}_{O_2}-\vec{v}_{O})-\frac{I}{m}\big( \vec{\mathbf F}_1+\vec{\mathbf F}_2+\vec{\mathbf F}_3\big),\\ \frac{4Ir^2}{m_3r^2+I_3}\vec{\mathbf F}_3= &\big(\overrightarrow{OA}\wedge \vec{\mathbf F}_1+\overrightarrow{OB}\wedge \vec{\mathbf F}_2+\overrightarrow{OC}\wedge \vec{\mathbf F}_3\big)\overrightarrow{OC}\\ &+\vec{\omega}\times(\vec{v}_{O_3}-\vec{v}_{O})-\frac{I}{m}\big( \vec{\mathbf F}_1+\vec{\mathbf F}_2+\vec{\mathbf F}_3\big). \end{aligned} \end{equation} One can easily see that the system \eqref{mnozioci2} determines the Lagrange multipliers $\lambda_7,\dots,\lambda_{12}$ uniquely, and, at the same time, uniquely determines $\vec{\mathbf F}_1, \vec{\mathbf F}_2, \vec{\mathbf F}_3$. We have the following analogue of Propositions \ref{prva} and \ref{go} \begin{prop} The moving triangles $\triangle O_1O_2O_3(t)$ and $\triangle ABC(t)$ are congruent to the triangle formed by the centers of the balls at the initial condition. \end{prop} \begin{proof} From the constraints \eqref{noncon} we get \begin{equation*} \begin{aligned} &2\frac{d}{dt}\overrightarrow{AB}=\vec{\omega}\times\overrightarrow{AB},\\ &2\frac{d}{dt}\overrightarrow{BC}=\vec{\omega}\times\overrightarrow{BC},\\ &2\frac{d}{dt}\overrightarrow{CA}=\vec{\omega}\times\overrightarrow{CA}. \end{aligned} \end{equation*} Therefore, \[ \frac{d}{dt}\langle \overrightarrow{AB},\overrightarrow{AB}\rangle=\langle \overrightarrow{AB},\vec{\omega}\times\overrightarrow{AB}\rangle=0. \] Similarly, we have $\langle \overrightarrow{BC},\overrightarrow{BC}\rangle=const$, $\langle \overrightarrow{CA},\overrightarrow{CA}\rangle=const$. 
\end{proof} Thus, as in the spherical case, if the initial condition is within the interior of the region \eqref{jednostrane}, the system remains within this interior during the motion. Also, from \eqref{jed2} and $\dot{\vec\gamma}=0$ we get: \begin{prop} The projections of the angular velocities $\vec{\omega}_i$ onto $\vec{\gamma}$ are conserved along the motion: \[ \omega_{i3}=\langle \vec{\omega}_i,\vec\gamma\rangle=c_i, \qquad i=1,2,3. \] \end{prop} \subsection{Reduction} Set $v_\varphi=\dot\varphi$, $v_x=\dot x$, $v_y=\dot y$. By using the constraints \eqref{noncon} we can obtain a closed system of equations of motion on the space \[ \mathcal P= T\mathbb{ R}^2\times TSO(2)\times (\mathbb{ R}^2)^3\{v_x,v_y,v_\varphi, x,y, \varphi, x_1,y_1,x_2,y_2,x_3,y_3\}, \qquad \dim\mathcal P=12. \] Note that we have a diffeomorphism \[ \mathcal D/SO(3)\times SO(3)\times SO(3) \cong \mathcal P\times\mathbb{ R}^3\{\omega_{13},\omega_{23},\omega_{33}\}, \] where the $SO(3)\times SO(3)\times SO(3)$--action on $\mathcal D\subset TQ$, as in the spherical case, is given by the right trivialisation of the tangent bundle of the Lie group $SO(3)\times SO(3)\times SO(3)$. It is interesting that, contrary to the spherical case, the equations on $\mathcal P$ do not depend on the integrals $c_i$. By using the constraints \eqref{noncon}, the kinetic energy for $c_1=c_2=c_3=0$ on $\mathcal P$ takes the form: \begin{align*} T= \frac12 Iv_\varphi^2+ \frac12 m\big(v_x^2+v_y^2\big)&+\frac12 (\frac{I_1+r^2m_1}{4r^2}) \langle \vec{v}_O+\vec{\omega}\times\overrightarrow{OA},\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\rangle\\ &+\frac12 (\frac{I_2+r^2m_2}{4r^2}) \langle \vec{v}_O+\vec{\omega}\times\overrightarrow{OB},\vec{v}_O+\vec{\omega}\times\overrightarrow{OB}\rangle\\ &+\frac12 (\frac{I_3+r^2m_3}{4r^2}) \langle \vec{v}_O+\vec{\omega}\times\overrightarrow{OC},\vec{v}_O+\vec{\omega}\times\overrightarrow{OC}\rangle. \end{align*} Let us denote \begin{equation}\label{oznake} \begin{aligned} \vec N=&\delta_1\overrightarrow{OA}+\delta_2\overrightarrow{OB}+\delta_3\overrightarrow{OC}, \\ M=&\delta_1\langle\overrightarrow{OA},\overrightarrow{OA}\rangle+ \delta_2\langle\overrightarrow{OB},\overrightarrow{OB}\rangle+\delta_3\langle\overrightarrow{OC},\overrightarrow{OC}\rangle,\\ \delta_1=&\frac{m_1r^2+I_1}{4r^2}, \quad \delta_2=\frac{m_2r^2+I_2}{4r^2}, \quad \delta_3=\frac{m_3r^2+I_3}{4r^2}, \quad \delta=\delta_1+\delta_2+\delta_3. \end{aligned} \end{equation} Note that $\vec N(t)$ determines the trajectory of the mass centre $S(t)$ of the moving triangle $\triangle ABC(t)$ with masses $\delta_1, \delta_2, \delta_3$ placed at the vertices $A, B, C$: $\overrightarrow{OS}=\frac{1}{\delta}\vec N$. Also, by definition, $\vec N$ and $M$ satisfy the inequality $$ \delta M \ge \langle \vec N,\vec N\rangle=N_1^2+N_2^2. $$ The equality would imply that the points $A, B, C$ coincide. Therefore, in the region of admissible motions \eqref{jednostrane} we have \[ \delta M > N_1^2+N_2^2. \] With the above notation, the formula for the kinetic energy simplifies to \[ T=\frac12 (I+M) v_\varphi^2+ \frac12 (m+\delta)(v_x^2+v_y^2)+v_\varphi(N_1 v_y-N_2 v_x). \] As in the spherical ball bearing problem, we will derive the equations of motion in the planar case without calculating the explicit formulae for $\vec{\mathbf F}_1, \vec{\mathbf F}_2, \vec{\mathbf F}_3$.
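The simplification of the kinetic energy can be confirmed by a direct numerical comparison of the two expressions for $T$ above. The following sketch is for illustration only; all numerical values (radii, masses, moments of inertia, contact vectors, and velocities) are hypothetical. It evaluates the sum over the balls and the reduced formula with $\vec N$, $M$, $\delta$ from \eqref{oznake} and checks that they agree.
\begin{verbatim}
import numpy as np

# Hypothetical data, for illustration only.
r = 0.5
I_plane, m_plane = 2.0, 3.0
I_b = np.array([0.4, 0.7, 0.9]); m_b = np.array([1.0, 1.5, 2.0])
deltas = (m_b * r ** 2 + I_b) / (4 * r ** 2)       # delta_1, delta_2, delta_3
delta = deltas.sum()

OX = np.array([[1.0, 0.2, 0.0], [-0.5, 0.9, 0.0], [0.3, -1.1, 0.0]])  # OA, OB, OC
v_O = np.array([0.7, -0.4, 0.0]); v_phi = 1.3
omega = np.array([0.0, 0.0, v_phi])

# kinetic energy as the sum over the balls (with the constraints inserted)
T_sum = 0.5 * I_plane * v_phi ** 2 + 0.5 * m_plane * (v_O @ v_O)
for d, a in zip(deltas, OX):
    u = v_O + np.cross(omega, a)
    T_sum += 0.5 * d * (u @ u)

# the same energy via N, M, delta
N = (deltas[:, None] * OX).sum(axis=0)
M = (deltas * (OX * OX).sum(axis=1)).sum()
T_red = (0.5 * (I_plane + M) * v_phi ** 2
         + 0.5 * (m_plane + delta) * (v_O @ v_O)
         + v_phi * (N[0] * v_O[1] - N[1] * v_O[0]))

print(abs(T_sum - T_red))                           # ~1e-16
\end{verbatim}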
\begin{thm}\label{Glavna2} The equations of motion of the planar three balls bearing problem on $\mathcal P$ are given by \begin{equation}\label{kinematicke} \begin{aligned} &\dot \varphi=v_\varphi, \qquad \dot x=v_x, \qquad \dot y=v_y, \\ &2(\dot x_1,\dot y_1)=(v_x,v_y)+(-v_\varphi(y_1-y),v_\varphi(x_1-x)),\\ &2(\dot x_2,\dot y_2)=(v_x,v_y)+(-v_\varphi(y_2-y),v_\varphi(x_2-x)),\\ &2(\dot x_3,\dot y_3)=(v_x,v_y)+(-v_\varphi(y_3-y),v_\varphi(x_3-x)), \end{aligned} \end{equation} and \begin{equation}\label{trikugle} \begin{aligned} (m+\delta)\dot v_x=&\frac12 N_1 v_\varphi^2-\frac\delta2 v_\varphi v_y+N_2\dot{v}_\varphi,\\ (m+\delta)\dot v_y=&\frac12 N_2 v_\varphi^2+\frac\delta2 v_\varphi v_x-N_1\dot{v}_\varphi,\\ \big(I+M\big)\dot{v}_\varphi=&\frac12 v_\varphi(N_1 v_x+N_2 v_y)+N_2 \dot v_x-N_1\dot v_y, \end{aligned} \end{equation} where $\vec N$, $M$, and ${\delta}$ are given by \eqref{oznake}. \end{thm} An explicit form of the equations \eqref{trikugle} is given below in \eqref{REDtrikugle}. \begin{proof} The kinematic equations \eqref{kinematicke} follow directly from the constraints \eqref{noncon}. From the last equations in \eqref{jed1} and \eqref{jed2}, we have: \begin{equation} (m\dot v_x,m\dot v_y, I\dot v_\varphi)=-\vec{\mathbf F}_1-\vec{\mathbf F}_2-\vec{\mathbf F}_3 -\overrightarrow{OA}\times\vec{\mathbf F}_1-\overrightarrow{OB}\times\vec{\mathbf F}_2- \overrightarrow{OC}\times\vec{\mathbf F}_3, \end{equation} where, by \eqref{mnozioci2} and \eqref{kinematicke}, $\vec{\mathbf F}_1, \vec{\mathbf F}_2, \vec{\mathbf F}_3$ are expressed in terms of the variables on $\mathcal P$. Then, from \eqref{jed1} and \eqref{noncon}, we get \begin{equation*} \begin{aligned} &\vec{\mathbf F}_1=2\delta_1 \dot{\vec v}_{O_1}=\delta_1\frac{d}{dt}\big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\big),\\ &\vec{\mathbf F}_2=2\delta_2 \dot{\vec v}_{O_2}=\delta_2\frac{d}{dt}\big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OB}\big), \\ &\vec{\mathbf F}_3=2\delta_3 \dot{\vec v}_{O_3}=\delta_3\frac{d}{dt}\big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OC}\big). \end{aligned} \end{equation*} Thus, the last equation in \eqref{jed1} can be rewritten as \begin{equation}\label{dotVo} \frac{d}{dt}\Big((m+\delta)\vec{v}_O+ \vec{\omega}\times \vec N \Big)=0, \qquad \delta=\delta_1+\delta_2+\delta_3. \end{equation} On the other hand, we have \begin{align*} \overrightarrow{OA}\times\vec{\mathbf F}_1=&\delta_1\overrightarrow{OA}\times\frac{d}{dt}\big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\big)\\ =&\frac{d}{dt}\big(\delta_1\overrightarrow{OA}\times \big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\big)\big)-\delta_1\big({\vec v}_{O_1}-{\vec v}_{O}\big) \times \big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\big)\\ =&\frac{d}{dt}\big(\delta_1\overrightarrow{OA}\times \big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\big)\big) -\frac{\delta_1}2\big(\vec{\omega}\times\overrightarrow{OA}-{\vec v}_{O}\big) \times \big(\vec{v}_O+\vec{\omega}\times\overrightarrow{OA}\big)\\ =&\frac{d}{dt}\big(\delta_1\overrightarrow{OA}\times \vec{v}_O + \delta_1\overrightarrow{OA}\times\big(\vec{\omega}\times\overrightarrow{OA}\big)\big) +{\vec v}_{O}\times \big(\vec{\omega}\times \delta_1\overrightarrow{OA}\big)\\ =&\frac{d}{dt}\big(\delta_1\overrightarrow{OA}\times \vec{v}_O + \delta_1\vec{\omega} \langle\overrightarrow{OA},\overrightarrow{OA}\rangle\big) +\vec{\omega}\langle {\vec v}_{O} , \delta_1 \overrightarrow{OA}\rangle.
\end{align*} Similar equations hold for $\overrightarrow{OB}\times\vec{\mathbf F}_2$ and $\overrightarrow{OC}\times\vec{\mathbf F}_3$. Therefore, the last equation in \eqref{jed2} takes the form \begin{equation}\label{dotW} \frac{d}{dt}\Big(I{\vec{\omega}}+\vec N\times \vec{v}_O+M\vec{\omega}\Big)+\vec{\omega}\langle {\vec v}_{O} , \vec N \rangle=0. \end{equation} The time derivatives of $M$ and $\vec N$ along the motion are given by \begin{equation*} \begin{aligned} \dot M=&2\delta_1\langle\overrightarrow{OA},\vec{v}_{O_1}-\vec{v}_O\rangle+ 2\delta_2\langle\overrightarrow{OB},\vec{v}_{O_2}-\vec{v}_O\rangle+2\delta_3\langle\overrightarrow{OC},\vec{v}_{O_3}-\vec{v}_O\rangle\\ =&\delta_1\langle\overrightarrow{OA},\vec{\omega}\times\overrightarrow{OA}-{\vec v}_{O}\rangle+ \delta_2\langle\overrightarrow{OB},\vec{\omega}\times\overrightarrow{OB}-{\vec v}_{O}\rangle+\delta_3\langle\overrightarrow{OC},\vec{\omega}\times\overrightarrow{OC}-{\vec v}_{O}\rangle\\ =& - \langle {\vec v}_{O} , \vec N \rangle,\\ \dot{\vec N}=& \frac{\delta_1}2\big(\vec{\omega}\times\overrightarrow{OA}-{\vec v}_{O}\big)+ \frac{\delta_2}2\big(\vec{\omega}\times\overrightarrow{OB}-{\vec v}_{O}\big)+ \frac{\delta_3}2\big(\vec{\omega}\times\overrightarrow{OC}-{\vec v}_{O}\big)\\ =&\frac12\big(\vec\omega\times \vec N-\delta\vec{v}_O\big). \end{aligned} \end{equation*} Finally, the equations \eqref{dotVo} and \eqref{dotW} can be written as: \begin{equation*} \begin{aligned} (m+\delta)\dot{\vec{v}}_O+ \big(I+M\big)\dot{\vec{\omega}}=&\frac{d}{dt}\big(\vec N\times (\vec{\omega}-\vec{v}_O)\big)\\ =&\frac12\langle \vec\omega,\vec\omega\rangle \vec N+\frac12 \langle {\vec v}_{O} , \vec N \rangle\vec\omega+\frac{\delta}2 \vec\omega \times\vec{v}_O+ \vec N \times \big(\dot{\vec\omega}-\dot{\vec{v}}_O\big), \end{aligned} \end{equation*} which proves \eqref{trikugle}. \end{proof} \subsection{Invariant measure} It is clear that we can pass from $\mathcal P$ to the space \begin{equation}\label{eq:Q} \mathcal Q=\{(v_x,v_y,v_\varphi,N_1,N_2,M)\in \mathbb{ R}^6\,\vert\, \delta M > N_1^2+N_2^2\}, \end{equation} with the induced system described by the equations \eqref{trikugle} and \[ \dot{\vec N}=\frac12\big(\vec\omega\times \vec N-\delta\vec{v}_O\big), \qquad \dot M=- \langle {\vec v}_{O} , \vec N \rangle. \] If we introduce \begin{align*} &\mathbf v=(v_x,v_y,v_\varphi), \qquad \mathbf n=(N_1,N_2,M),\\ &\mathbf m=\frac12(N_1 v_\varphi^2-\delta v_\varphi v_y,N_2 v_\varphi^2+\delta v_\varphi v_x, v_\varphi(N_1 v_x+N_2 v_y)),\\ &\mathbb I= \begin{pmatrix} m+\delta & 0 & -N_2 \\ 0 & m+\delta & N_1 \\ -N_2 & N_1 & I+M \end{pmatrix},\qquad \mathbb J=-\frac 12 \begin{pmatrix} \delta & 0 & N_2\\ 0 & \delta & -N_1\\ 2N_1 & 2N_2 & 0 \end{pmatrix}, \end{align*} then the reduced equations of motion on $\mathcal Q$ \eqref{eq:Q} become \begin{equation}\label{REDtrikugle} \dot{\mathbf v}=\mathbb I^{-1}\mathbf m, \qquad \dot{\mathbf n}=\mathbb J \mathbf v. \end{equation} \begin{remark} Since \[ \det (\mathbb I)=(m+\delta)\big((m+\delta)I + m M+ \delta M-(N_1^2+N_2^2) \big)> 0\vert_\mathcal Q, \] the matrix $\mathbb I$ is invertible on $\mathcal Q$ and we have \[ \mathbb I^{-1}=\frac{1}{\det(\mathbb I)} \begin{pmatrix} (m+\delta)(I+M)-N_1^2 & -N_1 N_2 & (m+\delta)N_2 \\ -N_1N_2 & (m+\delta)(I+M)-N_2^2 &-(m+\delta)N_1 \\ (m+\delta)N_2 & -(m+\delta)N_1 & (m+\delta)^2 \end{pmatrix}.
\] \end{remark} \begin{thm}\label{mera2} The equations \eqref{REDtrikugle} have the following first integrals \begin{equation}\label{integraliRED} \begin{aligned} & f_1=(m+\delta) v_x- v_\varphi N_2, \\ & f_2=(m+\delta) v_y + v_\varphi N_1, \\ & f_3=\delta M-(N_1^2+N_2^2),\\ & f_4=T=\frac12 (I+M) v_\varphi^2+ \frac12 (m+\delta)(v_x^2+v_y^2)+v_\varphi(N_1 v_y-N_2 v_x), \end{aligned} \end{equation} and they possess an invariant measure \[ \sqrt{\det(\mathbb I)}\, dv_x \wedge dv_y \wedge dv_\varphi \wedge dN_1 \wedge dN_2 \wedge dM. \] The system \eqref{REDtrikugle} can be solved by quadratures. \end{thm} \begin{proof} It is clear that the functions \eqref{integraliRED} are the first integrals of the system \eqref{REDtrikugle}. Note that $f_3>0$ on $\mathcal Q$. Next, at the invariant level set \[ \mathcal Q_d\colon \qquad f_1=d_1,\qquad f_2=d_2, \qquad f_3=d_3, \] we have \begin{align*} &v_x=\frac{v_\varphi N_2+d_1}{m+\delta}, \qquad v_y=\frac{-v_\varphi N_1+d_2}{m+\delta},\qquad M=\frac{1}{\delta}(N_1^2+N_2^2)+\frac{d_3}{\delta},\\ &\det (\mathbb I)=(m+\delta)\big((m+\delta)I + \frac{m}{\delta}(N_1^2+N_2^2)+ \frac{m d_3}{\delta}+d_3 \big). \end{align*} We obtain a closed system in the space $\mathbb{ R}^3\{v_\varphi,N_1,N_2\}$ given by \begin{equation}\label{trikugle*} \begin{aligned} \dot{v}_\varphi=&\frac{m v_\varphi(N_1 {d_1}+N_2 {d_2})}{2\det(\mathbb I)},\\ \dot N_1 =&-\frac{m+2\delta}{2(m+\delta)}N_2 v_{\varphi}-\frac{\delta d_1}{2(m+\delta)},\\ \dot N_2 =&\frac{m+2\delta}{2(m+\delta)}N_1 v_{\varphi}-\frac{\delta d_2}{2(m+\delta)}. \end{aligned} \end{equation} Using similar arguments as in the proof of Theorem \ref{mera}, it is sufficient to prove that the function $\mu=\sqrt{\det(\mathbb I)}$, restricted to $\mathcal Q_d$, is the density of an invariant measure of the reduced system \eqref{trikugle*}. Let $X=(\dot v_\varphi,\dot N_1,\dot N_2)$. Then \[ \mathrm{ div}(X)=\frac{m(N_1d_1+N_2d_2)}{2\det(\mathbb I)}. \] On the other hand \[ \frac{d}{dt}\det(\mathbb I)=-m(N_1d_1+N_2d_2). \] Therefore, the function $\mu$ satisfies the equation \[ \dot \mu+\mu\mathrm{ div}(X)=0, \] and the system \eqref{trikugle*} preserves the measure $\mu\, dv_\varphi \wedge dN_1 \wedge dN_2$. Integrability in quadratures follows according to the Euler-Jacobi theorem \cite{AKN}. \end{proof} \begin{remark} By setting $d_1=d_2=0$, we get that \[ v_\varphi=const \qquad \text{and} \qquad N_1^2+N_2^2=const. \] Thus, the equations \eqref{trikugle*} can be solved in terms of trigonometric functions. \end{remark} \subsection*{Acknowledgements} We are very grateful to the referees for valuable remarks that helped us to improve the exposition. The research which led to this paper was initiated during the GDIS conference in Summer 2018, when all three authors visited Moscow Institute of Physics and Technology, kindly invited and hosted by Professor Alexey V. Borisov and his team. This research has been supported by the Project no. 7744592 MEGIC "Integrability and Extremal Problems in Mechanics, Geometry and Combinatorics" of the Science Fund of Serbia, Mathematical Institute of the Serbian Academy of Sciences and Arts and the Ministry for Education, Science, and Technological Development of Serbia, and the Simons Foundation grant no. 854861.
\section{Introduction} Generically, many-body quantum systems have two robust distinct fates after a quantum quench: thermalization and localization (see Ref.~\onlinecite{Nandkishore2015} and the references therein). In the former case, the system effectively serves as a heat bath for small enough subsystems, resulting in equilibration to a steady state and the distribution of excess energy (which is deposited into the system after the quench) in an almost thermal manner. This behavior stems from the so-called eigenstate thermalization hypothesis~\cite{Deutsch1991,Srednicki1994,Tasaki1998,Rigol2008}. In the latter case, as shown in the seminal work of Anderson~\cite{Anderson1958}, the flow of energy is restricted, generically due to the presence of disorder, and systems cannot act as reservoirs for themselves. The phenomenology of these nonthermalizing systems includes two categories: (i) many-body localization, where interactions play an important role~\cite{Basko2006, Oganesyan2007, Pal2010} and (ii) the simpler case of single-particle Anderson localization, where interactions are either absent or unimportant. Signatures of quantum quench dynamics for both paradigms of thermalization and localization are of great interest~\cite{Polkovnikov2011, Greiner2002, Kinoshita2006, Calabrese2006,Kollath2007,Cramer2008,Yukalov2011,Eisert2014,Canovi2011,Bardarson2012,Chandran2013,Zangara2013,Vosk2013,Sorg2014,Tang2015}. Historically, the bulk of the studies in the localization literature has focused on the single-particle case. Despite the absence of interactions, the physics of Anderson localization is rich and profound. In recent years, there has been a surge of interest in many-body localization, where interactions give rise to even richer phenomena. Substantial progress in understanding the nature of many-body localization has been made by studying the novel question of the \textit{interplay of disorder and quench dynamics}~\cite{Canovi2011,Bardarson2012,Zangara2013,Vosk2013,Tang2015}. Surprisingly, however, despite the large body of work on single-particle localization, this particular aspect, namely, the effects of disorder alone on the quench dynamics, has remained relatively unexplored in the literature. Only recently, a few studies have begun to address this problem. In one spatial dimension, where all single-particle states are localized even for infinitesimally small disorder, it was shown that localization can prevent the emergence of a steady state~\cite{Ziraldo2012,Ziraldo2013}. There have also been related studies on the effects of quantum dynamics in systems with quasiperiodic potentials in one-dimensional models, where all states are either extended or localized depending on the strength of the quasiperiodic potential~\cite{Gramsch2012,Ribeiro2013,He2013}. (Although quasiperiodic potentials do not represent an ensemble of disorder realizations, they are expected to capture some aspects of the pertinent physics. Other aspects such as sample-to-sample fluctuations rely on having a true disorder ensemble.) \begin{figure} \includegraphics[width =0.95\columnwidth]{fig1} \caption{(Color online) Top: Schematic of the quench protocol. The system is initially in the ground state of a clean tight-binding model of spinless fermions at constant chemical potential $\mu=0$. A disorder potential with uniform distribution and strength $W$ (represented by the color of the lattice sites) is suddenly turned on at $t=0$. 
Bottom: The nonmonotonic behavior of the post-quench density fluctuations with the strength of disorder.} \label{fig:1} \end{figure} Here we study quench dynamics in the canonical three-dimensional fermionic model exhibiting the Anderson localization transition. Our thrust lies in demonstrating that Anderson localization physics and associated features have a direct effect on the post-quench dynamics, particularly on fluctuation properties of observables. The post-quench behavior of observables, including their fluctuations, is intimately related to the nature and the statistics of the disordered wave functions (see Ref.~\onlinecite{Mirlin2000} for a review). In the absence of disorder, these wave functions are plane waves. At infinite disorder, all disorder realizations give rise to the same set of on-site localized eigenfunctions. It is for intermediate strength of disorder that the salient features of the localization transition and crossover between these two limits appear. Different realizations of disorder give rise to a rich ensemble of quantum eigenstates. Some states are extended and others are localized. A change between these two different behaviors involves a phase transition and an associated diverging localization length. The two phases are separated by a mobility edge having associated multifractal wave functions. Even within the localization regime at higher (but finite) disorder, there is a wide distribution of localization lengths. To probe the consequences of these features on dynamics, we focus on quantum quenches where the system starts in the ground state of a clean three-dimensional tight-binding Hamiltonian of spinless fermions with translationally invariant nearest-neighbor hopping at fixed particle number. As shown in Fig.~\ref{fig:1}, a disordered chemical potential is then suddenly turned on. The post-quench time evolution is governed by the nature of the single-particle eigenfunctions of the final disordered Hamiltonian (how localized they are, what is the distribution of the localization lengths, etc.) and in particular their overlaps with the plane-wave eigenfunctions of the initial clean system. The more localized the final wave functions are, the more uniform these overlaps become. The final momentum distribution (averaged over time and disorder), which is the most easily accessible observable in cold-atom experiments through time-of-flight measurements (see Ref.~\onlinecite{Bloch2008} and the references therein), captures this feature. It monotonically crosses over from a typical Fermi-Dirac step function for quenches to small disorder to a uniform distribution for quenches to large disorder. Our main results concern the richer problem of the fluctuations of real-space density in our system. With regard to feasibility of measurement, once again, in cold atomic systems, new \textit{in situ} imaging techniques provide direct experimental access to the real-space density, giving a complementary picture to the time-of-flight measurements~\cite{Esteve2006,Gemelke2009,Muller2010,Bakr2010,Sherson2010}. Fluctuation phenomena are of particular interest in disordered systems as they stem from multiple sources. As argued in Ref.~\onlinecite{Ziraldo2012}, the presence of disorder in the final Hamiltonian can prevent the relaxation of the system to a steady state in the type of systems considered here, resulting in \textit{persistent temporal fluctuations of various observables even in the thermodynamic limit}. 
Here we perform a quantitative analysis of the temporal fluctuations of local density and show that, quite surprisingly, they have a \textit{nonmonotonic} dependence on the strength of disorder. At very high strength of disorder, these fluctuations begin to decrease due to the competing effects of disorder. This leads to observable equilibration in the limit of infinite disorder. Having an ensemble of final Hamiltonians (and consequently an ensemble of quenches) is one of the distinctive attributes of disordered systems. Most studies of quantum quenches, in which a parameter in the Hamiltonian changes for a system initially in the ground state, are described by a unique time-dependent wave function. In the quantum quench we consider here, the parameter undergoing the quench is a property of a distribution. Conceptually, this quench can be regarded as an ensemble of quantum quenches~\cite{Rahmani2013}: for each realization of disorder, the chemical potentials $\mu^{\bf x}_W$ are suddenly turned on and the system undergoes unitary evolution. Observables of interest are then averaged over the realizations of disorder. In addition to understanding the time evolution for individual realizations of disorder, the variations of the dynamics from one sample to another are therefore important in the full description of the quench. We thus consider a second type of fluctuations, namely, the fluctuations of the time averages of the local density from sample to sample. We find that these \textit{sample-to-sample fluctuations} also exhibit a nonmonotonic dependence on the strength of disorder. Hence, as our key result and as shown in Fig.~\ref{fig:1}, both temporal and sample-to-sample density fluctuations on a given site have a \textit{nonmonotonic} dependence on the strength of disorder, peaking at intermediate values of disorder. Temporal fluctuations capture the absence of equilibration and persist for stronger disorder than the fluctuations between samples; the former peak at stronger disorder than the latter. A distinguishing feature of these fluctuations, which appears in the dynamics of disordered systems, is that they survive in the thermodynamic limit. In fact, the moments that describe these fluctuations saturate upon increasing the system size. This is in contrast with the \textit{mesoscopic} fluctuations that appear in disordered systems at equilibrium and vanish in the thermodynamic limit due to self-averaging. In what follows, we pinpoint the subtle nature of these fluctuations and argue that they strongly depend on transitions between extended and localized eigenstates as well as the presence of a large spread in localization lengths. The outline of this paper is as follows. In Sec.~\ref{sec:2}, we present the model and discuss some of its important features. In Sec.~\ref{sec:3}, we focus on the behavior of wave-function overlaps and the momentum distribution. In Sec.~\ref{sec:4}, we present the key results of this paper on the temporal and sample-to-sample fluctuations of density. We close the paper in Sec.~\ref{sec:5} with a brief summary and conclusions. \section{MODEL AND THE QUANTUM QUENCH} \label{sec:2} In this work, we study the prototypical Anderson model of localization in three spatial dimensions, which exhibits a localization-induced metal-insulator transition. 
The Hamiltonian describing the system is given by \begin{equation}\label{eq:hamil} H_W=-\Gamma\sum_{\langle {\bf x}{\bf y}\rangle}\left(c^\dagger_{\bf x} c_{\bf y}+c^\dagger_{\bf y} c_{\bf x}\right)+\sum_{\bf x} \mu^{\bf x}_W c^\dagger_{\bf x} c_{\bf x}, \end{equation} where $c_{\bf x}$ is the fermionic annihilation operator on site ${\bf x}$ of a three-dimensional (3D) cubic lattice and $\langle {\bf x}{\bf y}\rangle$ indicates nearest neighbor sites ${\bf x}$ and ${\bf y}$. The quantity $\mu^{\bf x}_W$ represents a random chemical potential drawn from a uniform distribution $\left[-{W\over 2},+{W\over 2}\right]$, with $W$ representing the strength of disorder (hereafter we set the hopping amplitude to unity, $\Gamma=1$). We assume the system is an $L\times L\times L$ cubic lattice (with $M=L^3$ sites) with periodic boundary conditions. As the total number of particles $N=\sum_{\bf x} c^\dagger_{\bf x} c_{\bf x}$ is conserved, we study the dynamics in sectors having constant density $N/M$. In the clean case ($W=0$), the Hamiltonian has translation invariance and momentum is a good quantum number. We can then write $H_0=\sum_{\bf k} \epsilon_{\bf k}c^\dagger_{\bf k}c_{\bf k}$, with dispersion relation $\epsilon_{\bf k}=-2\left(\cos k_x +\cos k_y +\cos k_z\right)$, where ${\bf k}=(k_x, k_y, k_z)$ is the momentum. The single-particle wave functions of the clean system are plane waves [as depicted in Fig.~\ref{fig:2}(a) by a one-dimensional schematic]: \begin{equation}\label{eq:pwave} \psi^n_0({ \bf x})\equiv\langle { \bf x}|\psi^n_0\rangle={1\over \sqrt{M}}e^{i{\bf k}_n. { \bf x}}, \end{equation} where $n$ is an integer that labels the momenta $ { \bf k}$ in the order of ascending energy $\epsilon_ { \bf k}$. Degenerate levels are arranged in an arbitrary manner. However, we always choose the number of fermions in such a way that all degenerate levels at a given energy are either empty or occupied so the arbitrary choice of the labeling is immaterial for physical properties. There is no gap in the single-particle spectrum for $W=0$ and the system is a Fermi-liquid metal at any density. When we add disorder to the system, the wave functions either (i) remain extended but acquire a characteristic mean free path as shown in Fig.~\ref{fig:2}(b) (roughly speaking, for weak disorder the plane waves with momentum $\bf k$ are perturbed, predominantly mixing with other plane waves of similar energy $\epsilon_{\bf k}$) or (ii) become localized as shown in Fig.~\ref{fig:2}(c), where the wave function effectively has support in a region of characteristic length $\xi$ known as the localization length with an exponentially decaying envelope from a localization center (naturally this requires the mixing of many plane waves). At a critical energy $E_c$, which demarcates the boundary between localized and extended states (known as the mobility edge), the localization length on the localized side diverges as $|E-E_c|^{-\nu}$. In the left-hand column of Fig.~\ref{fig:3}, we show a few examples of the (disorder-averaged) density of states for the Hamiltonian of Eq. (\ref{eq:hamil}). The extended (localized) states are shown in light pink (dark green). For any $W>0$ (even for arbitrarily small disorder), localized states appear at the lowest and highest ends of the spectrum for energies $|E|>E_c$ with mobility edges at $\pm E_c$. As we increase $W$, more states become localized. Finally, at $W\approx 16.5$, $E_c\to 0$, the two mobility edges at $\pm E_c$ meet and the full spectrum becomes localized. 
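For concreteness, the single-particle spectra and eigenfunctions entering the analysis below can be generated by exact diagonalization of the $M\times M$ matrix corresponding to Eq.~\eqref{eq:hamil}. The following NumPy sketch is illustrative only (the lattice size, disorder strength, and random seed are placeholders rather than the values used for the figures); the columns of the returned eigenvector matrix play the role of the eigenfunctions $\psi^p_W({\bf x})$, ordered by ascending energy:
\begin{verbatim}
# Sketch: exact diagonalization of the 3D Anderson Hamiltonian H_W on an
# L x L x L cubic lattice with periodic boundary conditions (hopping Gamma = 1).
import numpy as np

def anderson_hamiltonian(L, W, rng):
    M = L**3
    H = np.zeros((M, M))
    idx = lambda x, y, z: (x % L) + L * (y % L) + L * L * (z % L)
    for x in range(L):
        for y in range(L):
            for z in range(L):
                i = idx(x, y, z)
                for j in (idx(x + 1, y, z), idx(x, y + 1, z), idx(x, y, z + 1)):
                    H[i, j] = H[j, i] = -1.0        # nearest-neighbor hopping
    H[np.diag_indices(M)] = rng.uniform(-W / 2, W / 2, size=M)  # random potential
    return H

rng = np.random.default_rng(seed=0)                 # one realization of disorder
H_W = anderson_hamiltonian(L=8, W=4.0, rng=rng)
eps_W, U_W = np.linalg.eigh(H_W)                    # energies and eigenfunctions
\end{verbatim}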
In a fermionic system, the many-body ground state is constructed by filling the lowest energy eigenstates up to an energy $E_F$ known as the Fermi level. When considering the equilibrium ground-state properties, the physics is largely dominated by the nature of the eigenfunction at the Fermi level. If it is localized, the conductance vanishes and the system is an insulator; if it is extended, we have a metal. Therefore, the transport properties of the system depend on where the Fermi energy $E_F$ lies in the spectrum with respect to the mobility edges. In the quench problem studied here, where at time $t=0$ a disordered chemical potential is suddenly turned on, the system is initially in the many-body ground state of the Hamiltonian~\eqref{eq:hamil} for $W=0$. As $N$ single-particle wave functions are filled in this fermionic system and each has overlaps with \textit{all} eigenfunctions of the disordered Hamiltonian (shown in Figs.~\ref{fig:2}(b) and \ref{fig:2}(c)), direct detection of the transition in the quench dynamics is challenging. However, the nature and the statistical properties of these wave functions and the distribution of overlaps with the plane waves lead to important and rather counterintuitive crossovers in the behavior of observables after the quench. In particular, as aforementioned, fluctuations of density are suppressed for both extremely localized and extremely extended states but are sensitive to the transient region, where either both extended and localized states are present or there is a large distribution of localization lengths. In what follows, we study this behavior in depth and show that it results in the nonmonotonic behavior depicted in Fig.~\ref{fig:1}. \begin{figure} \includegraphics[width =0.7\columnwidth]{fig2} \caption{Real-space schematic of (a) a plane-wave state, (b) an extended disordered wave function, and (c) a localized wave function.\label{fig:2}} \end{figure} \begin{figure} \includegraphics[width =0.95\columnwidth]{fig3} \caption{(Color online) Left: the density of states and the mobility edges separating the dark green and light pink regions for three different strengths of disorder. Right: The magnitudes of the overlaps between these disordered states and plane-wave eigenfunctions of the clean system. Black indicates zero overlap (or no states as the plot contains a factor of the joint density of states) and larger overlaps are shown in lighter color. The expansion of the light region upon increasing $W$ indicates that the overlaps are approaching the uniform distribution of Eq. (11). } \label{fig:3} \end{figure} \section{WAVE FUNCTION OVERLAPS, OBSERVABLES, AND MOMENTUM DISTRIBUTION} \label{sec:3} Here we present an analysis of wavefunction overlaps between pre- and post-quench eigenstates, a generic formulation for evaluating observables after the quench, and the behavior of the post-quench momentum distribution. The overlaps of the initial and final wave functions play a central role in quantum quench dynamics; here, an analysis of single-particle wavefunctions provides most of the required information. The initial state is a Slater determinant of $N$ plane-wave single-particle wave functions with the lowest energy $\epsilon_{\bf k}$. The final Hamiltonian for each realization of disorder similarly has $L^3$ eigenfunctions $|\psi^p_W\rangle$, $p=1,\dots, L^3$. At the single-particle level, for any given initial state $|\psi^n_0\rangle$, the post-quench time-dependent state is governed by the overlap between this state and the eigenstates of the final Hamiltonian. 
Specifically, the time-dependence of the wave function can be written as \begin{equation} \label{eq:psi0} \psi^n_0({\bf x},t)=\sum_p\langle \psi^p_W|\psi^n_0\rangle e^{-i\varepsilon_W^p t}\psi^p_W({\bf x}), \end{equation} in terms of the overlaps $\langle \psi^p_W|\psi^n_0\rangle$ (throughout the manuscript we set $\hbar=1$). Clearly if $W=0$, this overlap is $\delta_{pn}$. For weak disorder, the nature of the wave functions $\psi^p_W({\bf x})$ is determined by the scattering of plane waves off the disorder potential. To leading order, such scattering mixes plane waves with momenta that are close in energy. Therefore, although the overlaps spread from the delta function above, they remain negligible for states that are far away in energy. As the disorder strength is increased, the disordered wave functions become localized starting at the edges of the spectrum. These localized wave functions are superpositions of a large number of plane waves, which results in an almost uniform distribution of overlaps. On the right-hand side of Fig.~\ref{fig:3}, we show several numerically computed plots of the average overlaps with plane-wave eigenstates of the clean system. The horizontal (vertical) axis shows the energy $E'$ ($E$) of an eigenstate of the clean (disordered) Hamiltonian. Black indicates zero overlap, while lighter colors denote larger overlaps. As expected, for small disorder, only states close to the diagonal have a large overlap, whereas for large $W$ we approach the situation where each initial plane-wave eigenstate has large overlaps with all the eigenstates of the final Hamiltonian. Note that the plotted overlaps are weighted by a factor consisting of the joint density of states. We now formulate the post-quench dynamics description for generic observables in terms of associated operators, wavefunction overlaps and appropriate averages. The initial and final Hamiltonians can be written as $H_0=\Psi^\dagger {\mathscr H}_0 \Psi$ and $H_W=\Psi^\dagger {\mathscr H}_W \Psi$, respectively, where $\Psi^\dagger\equiv(c^\dagger_1\dots c^\dagger_M)$, and ${\mathscr H}_{W}$ is an $M\times M$ Hermitian matrix in the position basis. For one realization of disorder, the quantum expectation value of a quadratic operator \begin{equation} O=\Psi^\dagger {\mathscr O}\Psi, \end{equation} where ${\mathscr O}$ is an $M\times M$ matrix, at time $t$ can then be computed by writing the Heisenberg operator \begin{equation}\label{eq:Heis} O(t)=e^{iH_W t}Oe^{-iH_Wt} =\Psi^\dagger \left(e^{i{\mathscr H}_W t}{\mathscr O}e^{-i{\mathscr H}_Wt} \right)\Psi. \end{equation} The above expression is characteristic of quadratic Hamiltonians and can be derived in many ways, e.g., by using the Baker-Campbell-Hausdorff formula in conjunction with the general relationship $[H_W,O]=\Psi^\dagger [{\mathscr H}_W,{\mathscr O}]\Psi$. Both clean and disordered single-particle Hamiltonians can be diagonalized by a unitary transformation as \begin{equation}\label{eq:diagW} {\mathscr H}_W=U_WD_WU_W^\dagger, \end{equation} where $D_W={\rm diag}(\varepsilon_W^1\dots \varepsilon_W^M)$, with $\varepsilon_W^n$ an eigenvalue of ${\mathscr H}_W$, and the columns of the matrix $U_W$ are the corresponding single-particle eigenfunctions $\psi^n_W({\bf x})$ of the $n$th single-particle level in the position basis ($\varepsilon_W^n\leqslant \varepsilon_W^{n+1}$). 
In terms of quasiparticle operators \begin{equation} \label{eq:gammaW} \Gamma^\dagger_W\equiv (\gamma_W^{\dagger 1} \dots \gamma_W^{\dagger M})= \Psi^\dagger U_W, \end{equation} the Hamiltonians can then be written as $H_W=\Gamma_W^\dagger D_W \Gamma_W=\sum_p \varepsilon_W^p \gamma_W^{\dagger p}\gamma_W^p $. Using Eqs.~\eqref{eq:diagW} and \eqref{eq:gammaW}, we can then write Eq.~\eqref{eq:Heis} as \begin{equation}\label{eq:Heis2} O(t)=\Gamma_0^\dagger \left(U_0^\dagger U_W e^{iD_Wt}U^\dagger_W{\mathscr O} U_W e^{-iD_W t}U^\dagger_W U_0\right)\Gamma_0, \end{equation} where the subscript $0$ indicates $W=0$ in Eq.~\eqref{eq:gammaW}, i.e., $\Gamma^\dagger_0\equiv (\gamma_0^{\dagger 1} \dots \gamma_0^{\dagger M})= \Psi^\dagger U_0$, where the matrix $U_0$ contains the plane-wave eigenfunctions of the clean Hamiltonian [see Eq.~\eqref{eq:psi0}]. As we are working in the Heisenberg picture, we need to take the expectation value of Eq.~\eqref{eq:Heis2} with the initial many-body state, which is a Fermi sea of $N$ quasiparticles $\Gamma_0$ occupying the lowest energy states: \begin{equation} \label{eq:initial} |\Psi(0)\rangle=\prod_{n\leqslant N}\gamma^{\dagger n}_0|0\rangle, \end{equation} where $|0\rangle$ is the vacuum. It is easy to observe that only the first $N$ diagonal elements of the $M\times M$ matrix appearing between $\Gamma_0^\dagger$ and $\Gamma_0$ contribute and the quantum expectation value is given by \begin{equation} \langle O(t)\rangle=\sum_{n\leqslant N}\left(U_0^\dagger U_W e^{iD_Wt}U^\dagger_W{\mathscr O} U_W e^{-iD_W t}U^\dagger_W U_0\right)_{nn}. \end{equation} The above expression for the quantum expectation value of a general quadratic operator after the quench can be used as a building block (using Wick's theorem) for computing the expectation values of higher order operators. In this paper, however, we only discuss quadratic operators. Now for the operator $O=c^\dagger_{\bf x}c_{\bf y}$, which will play a role in subsequent discussions, the above equation leads to \begin{equation}\label{eq:green} \langle c^\dagger_{\bf x}(t)c_{\bf y}(t)\rangle=\sum_{pq; n\leqslant N}\psi^{*p}_W({\bf x})\psi^q_W({\bf y})\langle\psi^{n}_0|\psi^p_W\rangle\langle\psi^{q}_W|\psi_0^n\rangle e^{i(\varepsilon_W^p-\varepsilon_W^q)t}, \end{equation} where the single-particle overlaps are given by $\langle\psi^m|\psi^n\rangle\equiv\sum_{\bf x}\psi^{*m}({\bf x})\psi^n({\bf x})$. After some transient time, the expectation value above settles relatively close to its time average, with some persistent temporal fluctuations around it. We refer to this state as a \textit{quasi}-steady state because the temporal fluctuations survive even in the thermodynamic limit. In the next section, focusing on the local density $n_{\bf x}= c^\dagger_{\bf x}c_{\bf x}$, we discuss these fluctuations in more detail. In evaluating the behavior of any observable $O$, we have three possible averages to take into account. As discussed above, we have the quantum expectation value (denoted by $\langle O \rangle$) and we assume that this is always taken as the first step. We then have the time average (denoted by an overline), which we take in the long-time limit. For a general time-dependent object $f(t)$, the time average is defined as $\overline{f}\equiv\lim_{T\rightarrow\infty}{1\over T}\int_0^T dt\, f(t)$. Finally, we have the disorder average taken over many samples and we denote this as $\mathbb{E}(\dots)$, where the dots could be any operator or scalar property of the system (which may or may not depend on time). 
We summarize these conventions in the table below: {\renewcommand{\arraystretch}{1.4} \begin{center} \begin{tabular}{c|c|c} Quantum Average & Time Average & Disorder Average\\ \hline $\langle\dots \rangle$ & $\overline{\dots}$ & $\mathbb{E}(\dots)$\\ \end{tabular} \end{center}} For the time-averaged behavior of the observable $O=c^\dagger_{\bf x}c_{\bf y}$, as we do not expect any spectral degeneracies for a disordered system (with as many random chemical potentials as the number of energy levels), the only contributions in Eq.~\eqref{eq:green} come from the diagonal terms $p=q$. In other words, \begin{equation}\label{eq:diag_ave} \overline{e^{i(\varepsilon_W^p-\varepsilon_W^q)t}}=\delta_{pq}. \end{equation} Hence, we can write \begin{equation}\label{eq:ave} \overline{\langle c^\dagger_{\bf x}c_{\bf y}\rangle}=\sum_{p; n\leqslant N} \psi^{*p}_W({\bf x})\psi^p_W({\bf y})|\langle\psi^{n}_0|\psi^p_W\rangle|^2. \end{equation} Turning to physically motivated situations, the easiest quantity to observe in time-of-flight experiments is the momentum distribution. Here, to present a direct and simple measure for capturing our analysis of wave-function overlaps, we discuss the time and disorder average of the quantum expectation value of the momentum distribution. Consider the Fourier transform of the fermion annihilation operator \begin{equation} c_{\bf k}={1\over \sqrt{M}}\sum_{\bf x} e^{-i{\bf k}.{\bf x}} c_{\bf x}. \end{equation} Using the above expression, the occupation of mode $c_{\bf k}$ is then given by the operator \begin{equation}\label{eq:mom} n_{\bf k}\equiv c^\dagger_{\bf k}c_{\bf k}={1\over M}\sum_{{\bf x}{\bf y}}e^{i{\bf k}.({\bf x}-{\bf y})} c^\dagger_{\bf x}c_{\bf y}. \end{equation} The above occupation number of Fourier modes $c_{\bf k}$ can be readily measured in time-of-flight experiments. Using Eqs.~\eqref{eq:mom} and \eqref{eq:ave}, we have numerically computed the time and quantum averaged $\overline{\langle n_{\bf k}\rangle}$ for each realization of disorder. We have randomly generated enough realizations so that the disorder average $\mathbb{E}(\overline{\langle n_{\bf k}\rangle})$ of this quantity converges in the number of realizations. The results are shown in Fig.~\ref{fig:4}. As expected, for small $W$ the occupation number remains close to the initial Fermi-Dirac distribution. As the overlaps between the eigenstates of the clean and the disordered system become more uniform, the momentum distribution approaches a constant value (equal to the density $N/M$) that is independent of $\bf k$. This can be seen explicitly in the limit of $W\to\infty$, where we can neglect all the hopping terms. When the ratio of hopping to disorder strength $\Gamma/W\to 0$, the probability of $|\Gamma|\ll|\mu_\infty^{\bf x}|$ goes to one (recall that the hopping amplitude $\Gamma$ was set to unity in Eq.~\eqref{eq:hamil}). In this strong disorder limit, the wave functions are then localized on individual lattice sites $\psi^p_\infty({\bf x})=\delta_{{\bf x},{\bf x}_p}$. The index $p$ labels the eigenfunctions. As each eigenfunction is localized on one lattice site, there is a one-to-one correspondence between the lattice sites and eigenfunctions so we label the sites with the same index $p$, i.e., $\psi^p_\infty({\bf x})$ is localized on site ${\bf x}_p$. In this limit, we have $\langle\psi_0^n|\psi^p_\infty\rangle={1\over \sqrt{M}}e^{-i{\bf k}_n. {\bf x}_p}$, which gives \begin{equation} |\langle\psi_0^n|\psi^p_\infty\rangle|^2=1/M. 
\end{equation} Inserting the above expression into Eq.~\eqref{eq:ave} then gives \begin{equation}\label{eq:cfg} \overline{\langle c^\dagger_{\bf x}c_{\bf y}\rangle}|_{W\to\infty}={N\over M}\sum_{p} \psi^{*p}_W({\bf x})\psi^p_W({\bf y})={N\over M}\delta_{\bf xy}, \end{equation} where we have used the unitarity of the matrix $U_W$. Using Eq.~\eqref{eq:mom}, we then find $\overline{\langle n_{\bf k}\rangle}|_{W\to\infty}=N/M$. \begin{figure} \includegraphics[width =0.95\columnwidth]{fig4} \caption{(Color online) The long-time-limit disorder-averaged momentum distribution for various densities and strengths of disorder $W$ (in the post-quench Hamiltonian) along a line $k_x=k_y=k_z$ in the Brillouin zone. Upon increasing $W$, the momentum distribution crosses over from a Fermi-Dirac step function to a uniform distribution.} \label{fig:4} \end{figure} While the features of the average momentum distribution reflect localization physics to some degree, we now show that a full-fledged analysis involving temporal and sample-to-sample fluctuations brings out more subtle, rich and surprising effects of the Anderson localization transition. \section{TEMPORAL AND SAMPLE-TO-SAMPLE DENSITY FLUCTUATIONS} \label{sec:4} \subsection{Formalism and connection with final eigenstates}\label{sec:4a} In this section, we consider the fluctuations of local density $n_{\bf x}=c^\dagger_{\bf x} c_{\bf x}$ on a site ${\bf x}$. Local observables such as $n_{\bf x}$ generically exhibit strong quantum fluctuations characterized by $\langle n_{\bf x}^2\rangle-\langle n_{\bf x}\rangle^2=\langle n_{\bf x}\rangle\left(1-\langle n_{\bf x}\rangle\right)$, where we have made use of the relationship $n_{\bf x}^2=n_{\bf x}$. As the quantum fluctuations are simply related to quantum expectation values $\langle n_{\bf x}\rangle$, here we only focus on \textit{temporal} and \textit{sample-to-sample} fluctuations of these quantum averages as discussed below. The treatment for temporal fluctuations is similar to that of previous works, which considered fluctuations for the one-dimensional case~\cite{Ziraldo2012,Ziraldo2013}. However, we present new results on the dependence of these fluctuations on disorder strength in three dimensions. We also present novel results on sample-to-sample fluctuations. Our underlying assumption is that for a given sample (i.e., realization of disorder), the quench experiment can be carried out over and over, and, thus, the local density at time $t$ after the quantum quench can be measured many times, yielding time- and sample-dependent quantum expectation values $\langle n_{\bf x}(t)\rangle$ (see Fig.~\ref{fig:1}). Moreover, throughout this paper, we focus on the quasi-steady states reached after the transient dephasing time scales. Generically, local observables $O$ are expected to \textit{equilibrate} at long times, i.e., when $t\rightarrow \infty$, $\langle O(t)\rangle-\overline{\langle O\rangle}\rightarrow 0$. It has been recently suggested, however, that disordered systems may not equilibrate in the above sense due to their nonsmooth spectral properties~\cite{Ziraldo2012,Ziraldo2013}. As the standard notion of thermalization (either to the Gibbs or the generalized Gibbs ensemble) relies on equilibration (the decay of temporal fluctuations), these systems do not thermalize. 
In the absence of equilibration, we thus have the following hierarchy of fluctuations: (i) quantum fluctuations in a given sample at a fixed time (not discussed further in the present paper), (ii) temporal fluctuations of the quantum expectation values around their time average for a typical sample, and (iii) sample-to-sample fluctuations of the above-mentioned time averages. In analogy with the various types of moments used to characterize noise-driven systems~\cite{Dalessio2013}, we characterize the fluctuations (ii) and (iii) respectively by the following moments: \begin{eqnarray}\label{eq:var1} {\rm Var}_t[O]&\equiv& \mathbb{E}\left[\overline{\langle O\rangle^2}-\left(\overline{\langle O\rangle}\right)^2\right],\\\label{eq:var2} {\rm Var}_s[O]&\equiv& \mathbb{E}\left[\left(\overline{\langle O\rangle}\right)^2\right]-\left( \mathbb{E}\left[\overline{\langle O\rangle}\right]\right)^2, \end{eqnarray} where various averages are denoted in the table in the previous section. The quantity ${\rm Var}_t[O]$ encodes how much the time-dependent $\langle O(t)\rangle$ fluctuates around its time average $\overline{\langle O\rangle}$ for an average sample, while ${\rm Var}_s[O]$ characterizes the fluctuations of the time average $\overline{\langle O\rangle}$ from sample to sample. To visualize the two types of fluctuations above, we consider the behavior of $\langle n_{\bf x}(t)\rangle$ for different samples as shown in Fig.~\ref{fig:5} (for a system of $L=10$ at half filling after a sudden quench from $W=0$ to $W=4$ as an example). The blue circles represent $\langle n_{\bf x}(t)\rangle$ (for a particular site ${\bf x}$) as a function of time. Different data sets correspond to various samples. As seen in the figure, for each sample, $\langle n_{\bf x}(t)\rangle$ keeps fluctuating around its time average $\overline{\langle n_{\bf x}\rangle}$ and does not relax even in the limit of $t\rightarrow \infty$. We mention that we have observed that this behavior persists over time scales that are several orders of magnitude larger than what is shown in the figure. Moreover, the time averages $\overline{\langle n_{\bf x} \rangle}$ (shown in red lines) strongly fluctuate from sample to sample. In the discussion above, we arbitrarily chose a fixed site $\bf x$. With periodic boundary conditions, all sites are equivalent upon disorder averaging and the choice of the site $\bf x$ is unimportant. We note in passing that the sample-to-sample fluctuations are very similar to position-to-position fluctuations in a given sample in the thermodynamic limit. \begin{figure} \includegraphics[width =0.95\columnwidth]{fig5} \caption{(Color online) The blue circles represent $\langle n_{\bf x}(t)\rangle$, the density on site ${\bf x}$ at time $t$ after the quench in a system with $M=10^3$, $N=500$, and $W=4$ for various realizations of disorder. The temporal fluctuations around the time averages $\overline{\langle n_{\bf x} \rangle}$ (red solid lines) persist in the limit of $t\rightarrow \infty$ and are characterized by Eq.~\eqref{eq:var1}. The sample-to-sample fluctuations of $\overline{\langle n_{\bf x} \rangle}$ around their average (dashed black line) are characterized by Eq.~\eqref{eq:var2}.} \label{fig:5} \end{figure} Before quantifying the fluctuations~\eqref{eq:var1} and \eqref{eq:var2}, we present a qualitative discussion of the fluctuations. 
From Eq.~\eqref{eq:green} for ${\bf x}={\bf y}$, we have \begin{equation}\label{eq:ave2} \langle n_{\bf x}(t)\rangle=\sum_{pq; n\leqslant N}\psi^{*p}_W({\bf x})\psi^q_W({\bf x})\langle\psi^{n}_0|\psi^p_W\rangle\langle\psi^{q}_W|\psi_0^n\rangle e^{i(\varepsilon_W^p-\varepsilon_W^q)t}. \end{equation} From Eq.~\eqref{eq:diag_ave}, it is clear that temporal fluctuations are due to a contribution of states $p$ and $q$ with different energies in Eq.~\eqref{eq:ave2}. If $W=0$, there is no quench and everything is stationary. Mathematically, a nonzero product $\langle\psi^{n}_0|\psi^p_W\rangle\langle\psi^{q}_W|\psi_0^n\rangle$ requires $p=q=n$, making the temporal fluctuations vanish. As we increase $W$, we can see from Fig.~\ref{fig:3} that the spreading of the overlaps $\langle\psi^{n}_0|\psi^p_W\rangle$ allows for a larger contribution from states with $\epsilon_p\neq \epsilon_q$. However, the product $\psi^{*p}_W({\bf x})\psi^q_W({\bf x})$ has a competing effect that sets in at very large $W$. As we showed earlier, $\psi^{*p}_{\infty}({\bf x})=\delta_{{\bf x},{\bf x}_p}$ so when the localization length approaches the lattice spacing, the product $\psi^{*p}_W({\bf x})\psi^q_W({\bf x})$ vanishes for $p\neq q $, which obliterates the temporal fluctuations of Eq.~\eqref{eq:ave2}. Analogous arguments can be made about the sample-to-sample fluctuations. As mentioned before, we assume that there are no accidental degeneracies in the spectrum of the disordered Hamiltonian and that therefore Eqs.~\eqref{eq:diag_ave} and \eqref{eq:ave} hold. Now, Eq.~\eqref{eq:ave} immediately leads to \begin{equation}\label{eq:mean} \overline{\langle n_{\bf x}\rangle}=\sum_{p; n\leqslant N}|\psi^{p}_W({\bf x})|^2|\langle\psi^{n}_0|\psi^p_W\rangle|^2. \end{equation} Note that after averaging over disorder, the system must exhibit translation invariance and $\mathbb{E}[\overline{\langle n_{\bf x}\rangle}]={1\over M} \mathbb{E}[\sum_{\bf x}\overline{\langle n_{\bf x}\rangle}]=N/M$ for all ${\bf x}$. We can see this explicitly from Eq.~\eqref{eq:mean} by using the normalization $\sum_{\bf x}|\psi^{p}_W({\bf x})|^2=1$ and the resolution of identity $\sum_p |\psi_W^p\rangle\langle \psi_W^p|=\openone$. Clearly, for weak disorder, there is little difference between different samples. Increasing $W$ from 0 makes the wave functions for various realizations of disorder different and can increase the variations of expression~\eqref{eq:mean} from sample to sample. However, once again, very large disorder suppresses the fluctuations. For $W\to\infty$, this can be seen from Eq.~\eqref{eq:cfg}, where the time average of the density is $N/M$, independent of the realization of disorder. We argued that the temporal fluctuations vanish when all localization lengths approach the lattice spacing. For sample-to-sample fluctuations, on the other hand, we will later argue that even for relatively large localization lengths, if there is not much variation in $\xi$, the fluctuations become suppressed. Therefore, after the initial increase as a function of $W$, the sample-to-sample fluctuations begin to decrease at weaker disorder strength in comparison with the temporal fluctuations. We now proceed to calculate the moments~\eqref{eq:var1} and \eqref{eq:var2} (for $O=n_{\bf x}$). 
To compute the moment~\eqref{eq:var1}, we need $\overline{\langle n_{\bf x}\rangle^2}$, which can be found by using the time average~\cite{Ziraldo2012} \begin{equation}\label{eq:4ave} \overline{e^{i(\varepsilon_W^p-\varepsilon_W^q+\varepsilon_W^{p'}-\varepsilon_W^{q'})t}}=\delta_{pq}\delta_{p'q'}+\delta_{pq'}\delta_{p'q}-\delta_{pq}\delta_{qp'}\delta_{p'q'}. \end{equation} In obtaining the expression above, we have assumed that we are working with a system of finite size $L$ in the limit of long times $t\rightarrow \infty$ and then we have taken the limit of large system size. The time average in the equation above is nonvanishing only if the oscillatory term is time-independent, i.e., $\varepsilon_W^p-\varepsilon_W^q+\varepsilon_W^{p'}-\varepsilon_W^{q'}=0$. For a discrete spectrum (corresponding to a finite system), without any accidental degeneracies in energies and \textit{energy gaps}, this can be achieved when $p=q$ and $p'=q'$ or $p=q'$ and $p'=q$, giving rise to the first two terms on the right-hand side of Eq.~\eqref{eq:4ave}. The last term in the equation is added to correct for the overcounting when $p=q=p'=q'$. If the thermodynamic limit $L\rightarrow \infty$ is taken before the limit of $t\rightarrow \infty$ (in the definition of the time average), we do not have a discrete spectrum. For a continuous spectrum, the time average will be nonzero on a three-dimensional plane of the four-dimensional $pqp'q'$ space characterized by $\varepsilon_W^p-\varepsilon_W^q+\varepsilon_W^{p'}-\varepsilon_W^{q'}=0$, while in the discrete case, the time average is nonzero on a two-dimensional subset of this four-dimensional space. In practice, the order of limits we consider implies that the time scales are much longer than the inverse level spacing of the system. We can then write the following expression for one realization of disorder: \begin{equation}\label{eq:temporal} \begin{split} &\overline{\langle n_{\bf x} \rangle^2}-\left(\overline{\langle n_{\bf x}\rangle}\right)^2=\\ &\sum_{p\neq q;n,n'\leqslant N} |\psi^{p}_W({\bf x})|^2|\psi^q_W({\bf x})|^2\langle \psi^{n}_0|\psi^p_W\rangle\langle\psi^{q}_W|\psi_0^n\rangle \langle \psi^{n'}_0|\psi^q_W\rangle\langle \psi^{p}_W|\psi_0^{n'}\rangle. \end{split} \end{equation} Interestingly, Eqs.~\eqref{eq:mean} and \eqref{eq:temporal}, which characterize the asymptotic sample-to-sample and temporal density fluctuations through the moments~\eqref{eq:var1} and \eqref{eq:var2} (for $O=n_{\bf x}$), are independent of the eigenvalues $\varepsilon_W^p$ and can be obtained from the statistics of eigenfunctions $\psi_W^p({\bf x})$ alone. Such statistics has been the subject of intensive studies, e.g., using supersymmetric nonlinear sigma models~\cite{Efetov1997,Mirlin2000}. 
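For a single realization of disorder, both Eq.~\eqref{eq:mean} and Eq.~\eqref{eq:temporal} reduce to elementary linear algebra on the eigenvector matrices. A schematic NumPy implementation (a sketch rather than the code used for the figures; it assumes matrices \texttt{U0} and \texttt{UW} whose columns are the clean and disordered single-particle eigenfunctions in ascending energy order, a particle number \texttt{N}, and a site index \texttt{x}) reads:
\begin{verbatim}
# Sketch: time average of <n_x> and its temporal variance for one realization.
# U0 (UW): columns are clean (disordered) eigenfunctions; N: particle number.
import numpy as np

def density_time_statistics(U0, UW, N, x):
    O = UW.conj().T @ U0[:, :N]         # O[p, n] = <psi_W^p | psi_0^n>, n occupied
    w = np.abs(UW[x, :])**2             # |psi_W^p(x)|^2 for all p
    occ = np.sum(np.abs(O)**2, axis=1)  # sum_{n <= N} |<psi_0^n | psi_W^p>|^2
    n_bar = np.dot(w, occ)              # time-averaged density at site x

    # S[p, q] = sum_{n <= N} <psi_0^n|psi_W^p><psi_W^q|psi_0^n>; only |S|^2 enters
    S = (O @ O.conj().T).T
    C = np.abs(S)**2
    var_t = w @ C @ w - np.sum(w**2 * np.diag(C))   # keep only the p != q terms
    return n_bar, var_t
\end{verbatim}
Averaging \texttt{var\_t} over realizations gives the moment \eqref{eq:var1}, while the spread of \texttt{n\_bar} across realizations gives \eqref{eq:var2}.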
Using the translation invariance of the system (upon disorder averaging), we can write both variances ${\rm Var}_{t,s}[n_{\bf x}]$ in a form that is explicitly independent of $\bf x$: ${\rm Var}_{t,s}[n_{\bf x}]={1\over M}\sum_{\bf x} {\rm Var}_{t,s}[n_{\bf x}]$, which leads to \begin{eqnarray} {\rm Var}_{t}[n_{\bf x}]=\sum_{\scriptsize \begin{array}{c} p\neq q \\ n,n'\leqslant N \end{array}}\mathbb{E}\left[{C^{pq}_W\over M^2}\langle\psi_0^n|\psi^p_W\rangle\langle\psi^{q}_W|\psi_0^n\rangle \langle \psi^{n'}_0|\psi^q_W\rangle\langle \psi^{p}_W|\psi_0^{n'}\rangle\right],\label{eq:moment1}\\ {\rm Var}_{s}[n_{\bf x}]=\sum_{\scriptsize \begin{array}{c} p, q \\ n,n'\leqslant N \end{array}}\mathbb{E}\left[{1\over M^2}\left(C^{pq}_W-1\right)|\langle\psi_0^n|\psi^p_W\rangle|^2 |\langle \psi^{n'}_0|\psi^q_W\rangle|^2\right],~~~~\label{eq:moment2} \end{eqnarray} where the two-eigenfunction correlator $C^{pq}_W$ is defined as \begin{equation}\label{eq:cpq} C^{pq}_W\equiv M \sum_{\bf x} |\psi^{p}_W({\bf x})|^2|\psi^{q}_W({\bf x})|^2. \end{equation} The statistics of $C^{pq}_W$ has been studied in the context of the statistical properties of disordered eigenfunctions. For $p=q$, it is related to the inverse participation ratio, whose scaling with system size is an important diagnostic for distinguishing localized and extended states. It is easy to observe that for an extended state $p$, $C^{pp}_W$ does not scale with the system size, while for a localized state it scales with $M$. The behavior of $C^{pq}_W$ for two \textit{different} eigenstates has also been studied. If at least one of the eigenstates is extended, $C^{pq}_W\approx 1$. If both of the eigenstates are localized, $C^{pq}_W$ vanishes most of the time, except when the localized wave functions overlap. It was shown in Ref.~\onlinecite{Cuevas2007} that on average, we have $\mathbb{E}[C^{pq}_W]\approx 1$ in this case as well. We can further show that the fluctuations are symmetric under the transformation $N\to M-N$. We first consider ${\rm Var}_{t}[n_{\bf x}]$ with $M-N$ particles: \begin{equation} \begin{split} &{\rm Var}_{t}[n_{\bf x}]|_{M-N}=\sum_{\scriptsize \begin{array}{c} p\neq q \\ M-N<n,n' \end{array}} \\ &\mathbb{E}\left[{C^{pq}_W\over M^2} \langle\psi^{q}_W|\left(\openone-|\psi_0^n\rangle\langle\psi_0^n|\right)|\psi^p_W\rangle \langle \psi^{p}_W|\left(\openone-|\psi_0^{n'}\rangle\langle\psi_0^{n'}|\right)|\psi^q_W\rangle\right], \end{split} \end{equation} where we have used the resolution of identity. Now since $\langle \psi^{p}_W|\openone|\psi^q_W\rangle=\delta_{pq}$ and the sum is over $p\neq q$, we find that ${\rm Var}_{t}[n_{\bf x}]|_{M-N}$ is given by the same expression as ${\rm Var}_{t}[n_{\bf x}]|_{N}$ except the sums over $n$ and $n'$ are over the $N$ highest-energy wave functions as opposed to the $N$ lowest-energy ones. Noting that the sums over $p$ and $q $ are over all levels and the overlaps $\langle \psi^{n}_0|\psi^p_W\rangle $ on average have a symmetric structure under $E\to -E$ (see Fig.~\ref{fig:3}), we conclude that the fluctuations must be symmetric around half-filling. We can give a similar argument for the sample-to-sample fluctuations again by using the resolution of identity to relate the sum over $M-N$ low-lying states to a sum over the $N$ highest-energy states. 
Here we need to show that \begin{equation} \sum_{\scriptsize \begin{array}{c} p, q \\ M-N<n,n' \end{array}} \mathbb{E}\left[{1\over M^2}{\left(C^{pq}_W-1\right)} \langle\psi^{p}_W|\openone|\psi^p_W\rangle \langle \psi^{q}_W|\openone|\psi^q_W\rangle\right]=0, \end{equation} which follows from $\langle \psi^{p}_W|\openone|\psi^p_W\rangle=\langle \psi^{q}_W|\openone|\psi^q_W\rangle=1$ and $\sum_{p}\left(C^{pq}_W-1\right)=0$ [see Eq.~\eqref{eq:cpq}]. \subsection{Numerical results and analysis} Having obtained tractable forms for the temporal and sample-to-sample fluctuations in terms of the eigenstates of the final Hamiltonian, in this section, we discuss the behavior of these moments using numerical simulations and corroborating analysis. We first compute the moments~\eqref{eq:moment1} and \eqref{eq:moment2} associated with these two fluctuations by direct numerical computation. The behavior of these moments as a function of the density $N/M$ is shown in Fig.~\ref{fig:6} for a system size of $L=8$ for quenches terminating in a series of different disorder strengths $W$. We have obtained good convergence in the disorder-averaged moments by averaging over 1000 samples. For a given disorder strength, we see that both fluctuations increase as a function of density, naturally doing so as more sites become filled. They reach a peak around half filling, and then decrease again as the density increases towards unity, thus allowing fewer and fewer empty sites for fluctuations. The trend holds for all quench disorder strengths. The unique feature that emerges from an interplay between quench dynamics, wave-function overlaps and localization physics, as discussed in previous sections and in what follows, is that both fluctuations show nonmonotonic behavior as a function of disorder strength. In Fig.~\ref{fig:7}, we plot the moments at a fixed density (near half filling for which the maximum occurs) as a function of $W$, which shows a peak at finite $W$. We clearly see the nonmonotonic behavior. In addition to the nonmonotonic behavior itself, an important observation is that the peak for temporal fluctuations appears at a much higher $W$. Figure~\ref{fig:7} summarizes the main findings of this work. We provided arguments in Sec.~\ref{sec:4a} for the nonmonotonic behavior of both temporal and sample-to-sample fluctuations. To reiterate the salient points, first, by construction, both moments ${\rm Var}_{t,s}[n_{\bf x}]\geqslant0$. We then consider the two extreme cases of extended ($W=0$) and localized ($W\to\infty$) states. As argued in the previous section, \begin{eqnarray} {\rm Var}_{t,s}[n_{\bf x}]\big|_{W=0}&=&{\rm Var}_{t,s}[n_{\bf x}]\big|_{W\to\infty}=0. \end{eqnarray} The above extreme-value calculations immediately imply a nonmonotonic behavior for both moments (unless they identically vanish for all $W$). To elucidate, considerations of the previous section show that for the sample-to-sample fluctuation, Eq.~\eqref{eq:cfg} immediately implies that all time-averaged densities are equal to $N/M$ independent of the sample and therefore ${\rm Var}_{s}[n_{\bf x}]\big|_{W\to\infty}=0$ (it is obvious that for $W=0$ all samples are the same and there cannot be any sample-to-sample fluctuations). As a check, we find that~\eqref{eq:moment2} agrees with the above: For $W=0$, we have plane waves~\eqref{eq:pwave} with $|\psi^{p}_0({\bf x})|^2=1/M$ and therefore $C_0^{pq}=1$ so all the terms in sum~\eqref{eq:moment2} vanish individually. 
For $W\to\infty$, on the other hand, we have $\psi^p_\infty({\bf x})=\delta_{\bf{x}, {\bf x}_p}$ (state $p$ with wave function $\psi^p_\infty({\bf x})$ is localized on site ${\bf x}_p$) and it follows from Eq.~\eqref{eq:cpq} that \begin{equation} C_\infty^{pq}\equiv\lim_{W\to\infty}C_W^{pq}=M\delta_{pq}. \end{equation} This leads to \begin{equation} \lim_{W\to\infty}{\rm Var}_{s}[n_{\bf x}]={N^2\over M^4}\sum_{pq}\left(M\delta_{pq}-1\right)=0. \end{equation} Similar considerations can be applied to the temporal fluctuations. In particular, for $W=0$, the overlaps in~\eqref{eq:moment1} give $\delta_{np} \delta_{qn} \delta_{n'p} \delta_{qn'}$, which vanishes for $p\neq q$ (notice that the sum is over $p\neq q$). For $W\to \infty$, again $C_\infty^{pq}=M\delta_{pq}$, which makes the sum over $p \neq q$ vanish in Eq.~\eqref{eq:moment1}. In analyzing the nonmonotonicity, the simple argument above does not explain why the peak for the temporal fluctuations appears at a much larger $W$. Thus, a more in-depth analysis is needed. As mentioned previously, the decreasing sample-to-sample fluctuation relies on the localization lengths becoming uniform (rather than small as in the case of temporal fluctuations) and therefore sets in at smaller $W$. For large enough disorder, we can assume roughly that the overlaps appearing in the two expressions \eqref{eq:moment1} and \eqref{eq:moment2} are uniform and can be factored out of the sum. This is of course a rough approximation but gets better for larger and larger $W$. We can now observe the key difference between the two types of moments. If we assume that the states are localized with a localization length $\xi$, $C_W^{pq}$ goes as $M/\xi^3$ with probability $\xi^3/M$ and vanishes otherwise. This indicates that $\mathbb{E}\left(C_W^{pq}-1\right)$, which appears in Eq.~\eqref{eq:moment2}, is not sensitive to the value of $\xi$ and vanishes to leading order. On the other hand, $\mathbb{E}\left(C_W^{pq}\right)$ itself, which appears in Eq.~\eqref{eq:moment1}, does not vanish. The temporal fluctuations become significantly suppressed only when $C_W^{pq}$ approaches the $W\to\infty$ result due to the exclusion of $p=q$ terms in the sum. Finally, a comment is in order regarding the finite-size effects in the above results. It was shown in Ref.~\onlinecite{Ziraldo2012} that the temporal fluctuations eventually saturate to finite values as $L\to\infty$. In a three-dimensional system, it is not easy to reach the saturation regime for the temporal fluctuation, but we have checked by computing ${\rm Var}_s[n_{\bf x}]$ for $L=10$ that the sample-to-sample fluctuations indeed have already reached saturation for $L=8$ and the ${\rm Var}_s[n_{\bf x}]$ data in Fig.~\ref{fig:7} are close to the thermodynamic limit values. While the ${\rm Var}_t[n_{\bf x}]$ data are still changing with $L$, the position of the peak as a function of $W$ is stabilized in the system sizes we can reach. The rise and fall of the two different types of density fluctuations as a function of disorder strength as well as the separation of energy scales for the peak in the temporal and sample-to-sample fluctuations are the key results of this paper. These behaviors are in contrast to quantum quenches in clean systems. They emerge as a subtle interplay between quench dynamics, wave-function overlaps and localization physics, and capture the manner in which features of the Anderson localization transition are encoded in the nature of the eigenfunctions and their statistical ensemble. 
The reduction of temporal fluctuations for strong disorder leads to the counterintuitive observation that increasing disorder could help with observable equilibration. \begin{figure} \includegraphics[width =0.95\columnwidth]{fig6} \caption{(Color online) Moments~\eqref{eq:var1} and~\eqref{eq:var2} of the local density $n_{\bf x}$ as a function of global average density $N/M$ for various $W$. The increasing (decreasing) maximum fluctuations as a function of $W$ are indicated by a dashed (solid) line. The apparent imperfect symmetry of the sample-to-sample fluctuations around half filling is an artifact of using a finite number of realizations in averaging over disorder.} \label{fig:6} \end{figure} \begin{figure} \includegraphics[width =0.95\columnwidth]{fig7} \caption{(Color online) The dependence of the sample-to-sample and temporal fluctuations on $W$ for a fixed density $N/M=0.434$. Both fluctuations exhibit a nonmonotonic dependence on $W$, first ascending and then descending. However, the temporal fluctuations peak at a much higher $W$ as they rely on reaching localization lengths of the order of the lattice spacing.} \label{fig:7} \end{figure} \section{Conclusions}\label{sec:5} In summary, in a tight-binding model exhibiting Anderson localization, we analyzed a quantum quench consisting of suddenly switching on a disordered potential. While the Anderson transition does not lead to a sharp transition in the resultant post-quench dynamics due to the contribution of many single-particle fermionic levels, the salient features of the transition, namely, the nature and the statistics of the disordered eigenfunctions, give rise to important crossover behaviors. The crossover in the behavior of the overlaps between the final (disordered) and initial (clean) eigenfunctions plays a central role. We demonstrated that as a consequence of the uniform overlaps between localized and extended states, upon increasing the strength of disorder, the momentum distribution at long times after the quench crosses over from the Fermi-Dirac distribution to a constant distribution. We then turned to the fluctuations of the local density, which constitute the main results of this paper. The persistent temporal fluctuations are a signature of the absence of equilibration in these systems. Interestingly, they survive even in the limit of infinite time and large system sizes, in contrast to various mesoscopic fluctuations that are well known in the equilibrium behavior of disordered systems. In addition to temporal fluctuations, there are other sources of fluctuations and, in particular, the variations of time averages from sample to sample. Neither of these two types of fluctuations increases monotonically with the strength of disorder. Both decrease beyond some value of the disorder strength that depends on the density. While both fluctuations vanish in the limit of infinite disorder, the temporal fluctuations begin to decrease at a much larger disorder. This is because the reduction in temporal fluctuations relies on reaching regimes where all states are localized with a localization length of the order of the lattice spacing, whereas for the sample-to-sample fluctuations, many extended states and localized states with relatively uniform localization lengths do not contribute to the fluctuations at high enough disorder (where the overlaps between clean and disordered eigenfunctions are roughly uniform).
Our results provide a systematic study of a relatively unexplored problem: the interplay between a system with an Anderson transition and its quench dynamics. They reveal intimate relations between the statistics of the disordered eigenfunctions and the post-quench behavior of observables, in particular their fluctuations (which are unique to disordered systems). The problem of unitary evolution in disordered systems is particularly interesting in light of the recent experimental progress on realizing disordered landscapes and Anderson localization in cold atomic gases~\cite{Sanchez2010,Shapiro2012,Billy2008,Roati2008,Kondov2011,McGehee2013}. The predictions made here are potentially testable in, and highly relevant to, such cold-atomic settings. For example, the momentum distribution is directly probed by time-of-flight experiments. Our results on the sample-to-sample and temporal fluctuations of the local density can also be observed through \textit{in situ} imaging of the real-space density~\cite{Esteve2006,Gemelke2009,Muller2010,Bakr2010,Sherson2010}. These systems would thus provide an ideal playground for investigating the Anderson-localization physics based on quench dynamics explored in this work. \acknowledgements We are grateful to Brian DeMarco, Pavel Ostrovsky, and Marcus Rigol for helpful discussions. This work was supported by the U.S. Department of Energy under the LANL/LDRD program (A.R.), NSERC (A.R.), Max Planck-UBC Centre for Quantum Materials (A.R.), and the National Science Foundation under the grant DMR 0644022-CAR (SV). We thank the Kavli Institute for Theoretical Physics for their hospitality and support in initiating this work.
1,116,691,501,117
arxiv
\section{Introduction} In numerical weather prediction (NWP) a data assimilation procedure is used to combine observations of the atmosphere with a model description of the system in order to obtain initial conditions for forecasts. The contribution of each component is weighted by its respective error statistics. In recent years, interest in the understanding and use of correlated observation error statistics has grown (e.g. \citet{janjic17}). This increased interest has been motivated by results showing that neglecting correlated observation errors hinders forecasts \citep{rainwater15,stewart08}, and that even including poorly approximated correlation structures is better than using uncorrelated error statistics in the presence of correlated errors \citep{stewart13,healy05}.\linebreak Previously, uncorrelated observation error statistics were used for all observations, even when it was known that non-zero error correlations were present. Determining error statistics is a non-trivial problem, as they cannot be observed directly and must be estimated in a statistical sense. It was also thought that it would not be possible to use correlated observation error covariance (OEC) matrices operationally due to the increased computational cost associated with inverting a dense matrix rather than a diagonal matrix \citep{stewart13}. A new method to check error consistency, developed by \citet{desroziers05}, was first applied to explicitly diagnose error correlations using the Met Office system \citep{stewart08b}. Since then, the diagnostic introduced in \citet{desroziers05} (henceforth referred to as DBCP) has been used widely at operational centres \citep{weston11,weston14,stewart14,QJ:QJ3097,bormann11,bormann16,campbell16,Gauthier18,Wang2018}, although uncorrelated OEC matrices are still used operationally for most instruments. Although much of the initial use of the diagnostic to estimate observation errors focussed on interchannel correlations, this has been extended to spatial correlations \citep{waller14,waller16b,waller16c,cordoba16,Michel18}. Theoretical work has also demonstrated how well the diagnostic is expected to perform depending on the accuracy of the initial choice of background and OEC matrices, both for the single-step form \citep{waller16} and for the iterative form of the diagnostic \citep{menard16,bathmann18}. The use of the diagnostic in data assimilation schemes using localization has also been considered \citep{waller17}.\linebreak The output of the diagnostic cannot be used directly in the assimilation procedure. Diagnosed matrices are asymmetric, and some are not positive definite \citep{stewart14,weston14} and are therefore not valid covariance matrices. Typically, the matrices are symmetrised, and negative and zero eigenvalues are set to be small and positive \citep{weston11}. Additionally, diagnosed OEC matrices are often ill-conditioned. This means that small perturbations to the observations will result in large changes to the analysis, and that iterative methods are likely to converge slowly. Indeed, the direct use of diagnosed matrices has led to problems with non-convergence of the minimisation of the data assimilation procedure \citep{weston11,weston14}. \citet{weston11} suggested that these problems were partly due to small minimum eigenvalues of the diagnosed OEC matrix, $\textbf{R}$.
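The DBCP diagnostic is not restated here, but it is commonly written as an average of products of analysis and background departures in observation space, $\textbf{R}_{est}\approx E\left[(\textbf{y}-H[\textbf{x}_a])(\textbf{y}-H[\textbf{x}_b])^T\right]$ \citep{desroziers05}. The following Python sketch illustrates this estimator on a small synthetic linear problem with a known true $\textbf{R}$; all matrices, dimensions and the sample count are assumptions chosen for illustration and do not correspond to any operational implementation.
\begin{verbatim}
import numpy as np

# Toy linear illustration of the DBCP estimator R_est ~ E[ d_a (d_b)^T ],
# where d_b = y - H x_b and d_a = y - H x_a.  Synthetic B, R and H only.
rng = np.random.default_rng(1)
n, p, nsamples = 20, 10, 20000
H = rng.standard_normal((p, n)) / np.sqrt(n)         # linear observation operator
B = np.eye(n)                                        # background error covariance
dist = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
R_true = 0.5 ** dist                                 # correlated observation errors

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R_true)    # optimal gain matrix
L = np.linalg.cholesky(R_true)

acc = np.zeros((p, p))
for _ in range(nsamples):
    eb = rng.standard_normal(n)                      # background error, B = I
    eo = L @ rng.standard_normal(p)                  # observation error ~ N(0, R_true)
    d_b = eo - H @ eb                                # y - H x_b
    d_a = d_b - H @ (K @ d_b)                        # y - H x_a, with x_a = x_b + K d_b
    acc += np.outer(d_a, d_b)
R_est = acc / nsamples

print("max |R_est - R_true| :", np.abs(R_est - R_true).max())
print("max |R_est - R_est^T|:", np.abs(R_est - R_est.T).max())  # sampling asymmetry
\end{verbatim}
Even in this idealised setting the sampled estimate is asymmetric, which is one reason the symmetrisation and eigenvalue adjustments described above are required before operational use.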
\linebreak One way to study the effect of changes to the assimilation system on the convergence of the objective function minimisation is by using the condition number of the Hessian of the variational objective function as a proxy for convergence. This was done in \citet{haben11c} for the case of a linear observation operator. In \citet{tabeart17a} the minimum eigenvalue of the OEC matrix, $\textbf{R}$, appears in bounds on the condition number of the Hessian of the variational assimilation problem, indicating that this term will also be important for convergence of the objective function minimisation. \linebreak Increased understanding of how the eigenvalues of $\textbf{R}$ affect the convergence of the data assimilation problem motivated investigation into `reconditioning' methods \citep{weston11,weston14,campbell16,tabeart17a}. These methods increase eigenvalues of the matrix $\textbf{R}$ to improve the conditioning of the OEC matrix, while maintaining much of the existing correlation structure of the diagnosed matrix. Two methods are commonly used by NWP centres: `ridge regression', which increases all eigenvalues of $\textbf{R}$ by the same amount, and the `minimum eigenvalue' method, which changes only the smallest eigenvalues. These methods were investigated theoretically in \citet{tabeart17c}, where it was found that both methods increase standard deviations, and that the ridge regression method strictly reduces all off-diagonal correlations. Both methods were compared in an operational system in \citet{campbell16}, where the sensitivity of forecasts to the choice of method was found to be small, but the ridge regression method outperformed the minimum eigenvalue method in terms of convergence. A method similar to the minimum eigenvalue method is used at the European Centre for Medium Range Weather Forecasts (ECMWF) \citep{bormann16}, but will not be discussed further in this paper. \linebreak The aim of this paper is to investigate the use of the ridge regression method within the Met Office system. At the Met Office, in addition to the 4D-Variational data assimilation routine (4D-Var) that is used to produce the initial conditions for weather forecasts, a 1D-Variational data assimilation routine (1D-Var) is used for quality control and pre-processing purposes \citep{Eyre89}. The 1D-Var routine assimilates observations individually, and is used to remove observations that are likely to cause problems with convergence in the 4D-Var routine, as well as to estimate model variables that are not included in the 4D-Var state vector \citep{Pavelin14,Pavelin08}. After the work of \citet{weston11,weston14}, correlated OEC matrices were introduced in the 4D-Var routine for IASI (Infrared Atmospheric Sounding Interferometer) and other hyperspectral IR sounders. However, this was not the case for the 1D-Var routine, where a diagonal OEC matrix continues to be used. Previous work found that diagnosed observation error correlations were small for most channels for the 1D-Var routine \citep{weston11,stewart14} and the proportional increase in computational cost was estimated to be large compared with using correlated OEC matrices in 4D-Var \citep{weston14}. \linebreak In this paper we study how the use of reconditioning methods affects the 1D-Var routine when applied to interchannel OEC matrices for IASI.
We examine whether the ridge regression method of reconditioning allows us to include correlated observation error information more efficiently than the diagnosed OEC matrix. This method of reconditioning is used at the Met Office to recondition OEC matrices that are used in the 4D-Var routine. We compare a selection of reconditioned OEC matrices with the current diagonal operational error covariance matrix, and an inflated diagonal OEC matrix. This is the first time that multiple levels of reconditioning have been compared systematically in an operational system. We study the impact of reconditioning in terms of the computational efficiency as well as the effect on important meteorological variables.\linebreak In Section \ref{sec:Background} the data assimilation problem is defined and the ridge regression method of reconditioning is introduced. In Section \ref{sec:ExperimentalOverview} we provide an overview of the experimental design. In Sections \ref{sec:Results1DVar} and \ref{sec:Results4DVar} we discuss the impact of changing the OEC matrix on the 1D-Var procedure, and alterations to the quality control and pre-processing for the 4D-Var routine respectively. We find that convergence is improved for any of the choices of reconditioning compared to the current operational choice of OEC matrix. Additionally, increasing the amount of reconditioning results in faster convergence - which corresponds to theoretical results for the linear variational data assimilation problem in \citet{tabeart17a}. However, the quality control procedure is altered by changing the OEC matrix, with a larger number of observations being accepted for reconditioned correlated OEC matrices compared to the current diagonal choice of OEC matrix. We also find that for most variables, the difference between retrieved values for different choices of OEC matrix are small compared to retrieved standard deviations. However, there are a significant minority of observations for which differences are very large. Finally, in Section \ref{sec:Conclusion} we summarise our results and conclusions. \section{Variational data assimilation and reconditioning}\label{sec:Background} \subsection{Data assimilation}\label{sec:BackgroundVar} In data assimilation, a weighted combination of observations, $\textbf{y} \in \mathbb{R}^p$, with a background, or `prior', field, $\textbf{x}_b \in \mathbb{R}^n$, is used to obtain the analysis, or posterior, $\textbf{x}_a\in\mathbb{R}^n$. The weights are the respective error statistics of the two components. The matrix $\textbf{R} \in \mathbb{R}^{p\times p}$ is the observation error covariance (OEC) matrix and $\textbf{B} \in \mathbb{R}^{n\times n}$ is the background error covariance matrix. In order to compare observations with the background field, the, possibly non-linear, observation operator $H:\mathbb{R}^n\rightarrow\mathbb{R}^p$ is used to map from state space to observation space. The weighted combination is written in the form of an objective function in terms of $\textbf{x}\in\mathbb{R}^{n}$, the model state vector. In the case of 3D-Var the objective function is given by: \begin{equation} \label{eq:costfn} \begin{split} J(\textbf{x})=&\frac{1}{2}(\textbf{x}-\textbf{x}_b)^T\textbf{B}^{-1}(\textbf{x}-\textbf{x}_b)\\&+\frac{1}{2}(\textbf{y}-H[\textbf{x}])^T\textbf{R}^{-1}(\textbf{y}-H[\textbf{x}]). \end{split} \end{equation} The value of $\textbf{x}$ that minimises \eqref{eq:costfn} is given by $\textbf{x}_a$. 
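To make the structure of \eqref{eq:costfn} concrete, the following Python sketch evaluates the objective function for a small synthetic problem; the dimensions, covariance matrices and the toy non-linear observation operator are hypothetical choices for illustration only, not the operational configuration.
\begin{verbatim}
import numpy as np

# Sketch of evaluating the objective function (1) for a toy problem; the
# matrices, state sizes and observation operator are illustrative only.
n, p = 4, 3
B = np.eye(n)                          # background error covariance
R = 0.1 * np.eye(p)                    # observation error covariance
x_b = np.zeros(n)                      # background state
y = np.array([1.0, 0.5, -0.2])         # observations

def H_op(x):                           # toy non-linear observation operator
    return np.array([x[0] + x[1], np.sin(x[2]), x[3] ** 2])

def J(x):
    dx = x - x_b
    dy = y - H_op(x)
    return 0.5 * dx @ np.linalg.solve(B, dx) + 0.5 * dy @ np.linalg.solve(R, dy)

print(J(x_b))   # cost of the background state; x_a is the minimiser of J,
                # e.g. approximated with scipy.optimize.minimize(J, x_b)
\end{verbatim}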
\linebreak The first-order Hessian, or matrix of second derivatives, of the objective function \eqref{eq:costfn} is given by \begin{equation}\label{eq:Hessian} \nabla^2 J\equiv\textbf{S} = \textbf{B}^{-1}+ \textbf{H}^T\textbf{R}^{-1}\textbf{H}, \end{equation} where $\textbf{H}\in \mathbb{R}^{p \times n}$ is the Jacobian of the observation operator, $H[\textbf{x}]$, {\color{black} linearised about the current best estimate of the optimal solution of \eqref{eq:costfn}.} \linebreak We now define the condition number of a matrix. Let $\lambda_{max}(\textbf{S}) =\lambda_1(\textbf{S}) \ge \dots \ge \lambda_n(\textbf{S})=\lambda_{min}(\textbf{S})$ be the eigenvalues of $\textbf{S}$. We note that this ordering convention will be used for the remainder of the paper. Although covariance matrices are symmetric positive semi-definite by definition, in practice $\textbf{B}$ and $\textbf{R}$ are required to be strictly positive definite in order that they can be inverted in \eqref{eq:costfn}. This means that $\textbf{S}$ is symmetric positive definite, and its condition number is given by \begin{equation}\label{eq:condS} \kappa(\textbf{S}) = \frac{\lambda_1(\textbf{S})}{\lambda_n(\textbf{S})}. \end{equation} We note that the minimum possible value of the condition number of any matrix is one. The condition number of the Hessian is of interest because it can be used to study the sensitivity of the solution to small changes in the background or observation data \citep[Sec 2.7]{golub96}. As \eqref{eq:costfn} is non-linear, it is solved using a sequence of Gauss-Newton iterations with an inner linearised problem solved using the conjugate gradient method \citep{haben11b}. The rate of convergence of the minimisation of the linearised problem by the conjugate gradient method can also be bounded by $\kappa(\textbf{S})$ \citep{golub96}, although this bound is quite pessimistic. In particular, clustering of eigenvalues can result in much faster convergence than is predicted by $\kappa(\textbf{S})$ \citep{Nocedal06}. \subsection{Reconditioning: motivation and definition}\label{sec:Recond} In \citet{weston11}, observations from IASI were used at the Met Office for an initial study investigating the feasibility of using correlated observation error matrices in their 4D-Var system. A first guess of the OEC matrix was obtained using the DBCP diagnostic. \linebreak One problem that was encountered in \citet{weston11} and \citet{weston14} was the ill-conditioning of the matrix resulting from the DBCP diagnostic. The use of an ill-conditioned OEC matrix can result in slower convergence of a variational scheme \citep{weston14,tabeart17a}. Similar problems were encountered at ECMWF, where a degradation in the forecast was seen when the raw output of the DBCP diagnostic was tested \citep{lupu15}. \citet{weston11} suggested that the convergence problems were caused by very small minimum eigenvalues of the diagnosed observation error covariance matrix. \linebreak \citet{tabeart17a} developed bounds for the condition number of the Hessian in terms of its constituent matrices {\color{black} in the case of a linear observation operator}. This provides an indication of the role of each matrix in the conditioning of $\textbf{S}$, and therefore the convergence of the associated minimisation problem.
The bound which separates the role of each matrix is given by \begin{equation}\label{eq:newboundgen} \begin{split} \max &\Bigg\{\frac{1+\frac{\lambda_{max}(\textbf{B})}{\lambda_{min}(\textbf{R})}\lambda_{max}(\textbf{H}\bfH^T)}{\kappa(\textbf{B})}, \frac{1+\frac{\lambda_{max}(\textbf{B})}{\lambda_{max}(\textbf{R})}\lambda_{max}(\textbf{H}\bfH^T)}{\kappa(\textbf{B})}, \\ & \frac{\kappa(\textbf{B})}{1+\frac{\lambda_{max}(\textbf{B})}{\lambda_{min}(\textbf{R})}\lambda_{max}(\textbf{H}\bfH^T)}\Bigg\} \\ & \le \kappa(\textbf{S}) \le \Big(1+\frac{\lambda_{min}(\textbf{B})}{\lambda_{min}(\textbf{R})}\lambda_{max}(\textbf{H}\bfH^T)\Big)\kappa(\textbf{B}). \end{split} \end{equation} These bounds show that the minimum eigenvalue, $\lambda_{min}(\textbf{R})$, of the OEC matrix is a key term in the upper bound for $\kappa(\textbf{S})$, meaning that increasing the minimum eigenvalue of $\textbf{R}$ is a reasonable heuristic for reducing the condition number of $\textbf{S}$ and improving the conditioning of the problem \eqref{eq:costfn}. \linebreak In the case that the error covariance matrices can be written as the product of a scalar variance with a correlation matrix, e.g. $\textbf{R} = \sigma_o^2\textbf{D}$ and $\textbf{B} = \sigma_b^2\textbf{C}$, and observations are restricted to model variables, we can simplify the bound \eqref{eq:newboundgen} to \begin{equation} \label{eq:mybound1} \begin{split} \max & \Bigg\{\frac{1+\frac{\sigma_b^2}{\sigma_o^2}\frac{\lambda_{max}(\textbf{C})}{\lambda_{min}(\textbf{D})}}{\kappa(\textbf{C})}, \frac{\kappa(\textbf{C})}{1+\frac{\sigma_b^2}{\sigma_o^2}\frac{\lambda_{max}(\textbf{C})}{\lambda_{min}(\mathbf{D})}}\Bigg\}\\& \le \kappa(\textbf{S}) \le \Bigg(1+\frac{\sigma_b^2}{\sigma_o^2}\frac{\lambda_{min}(\textbf{C})}{\lambda_{min}(\mathbf{D})}\Bigg)\kappa(\textbf{C}). \end{split} \end{equation} The qualitative conclusions of \cite{tabeart17a} can be summarised as follows. \begin{itemize} \item The minimum eigenvalue of $\textbf{R}$ was shown to be important for determining both the conditioning of the Hessian and the speed of convergence of a minimisation procedure. This can be seen in \eqref{eq:newboundgen} and \eqref{eq:mybound1}. \item The ratio of the background and observation variances was also shown to be important for the conditioning of the Hessian. This can be seen in \eqref{eq:mybound1} explicitly for the case of direct observations, where variances are homogeneous for both the background and observation error covariance matrices. However, we expect the conclusion to hold more broadly: for example, if all standard deviation values corresponding to one OEC matrix were larger than those corresponding to another, then the bounds would be smaller for the first choice of OEC matrix. \item Although \eqref{eq:newboundgen} and \eqref{eq:mybound1} separate the contribution of each term, {\color{black}numerical experiments revealed that the level of interaction between observation error and background error statistics depends on the choice of observation network. Examples of observation operators which yield identical bounds for \eqref{eq:newboundgen} but different dependence of $\kappa(\textbf{S})$ on $\textbf{B}$ and $\textbf{R}$ were found experimentally in \citet{tabeart17a}.} \end{itemize} These conclusions motivate the use of reconditioning methods.
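The upper bound in \eqref{eq:newboundgen} can be spot-checked numerically. The Python sketch below does so for randomly generated matrices; these are synthetic stand-ins rather than operational error covariances, and only the upper bound is evaluated.
\begin{verbatim}
import numpy as np

# Numerical spot-check of the upper bound in the inequality above, using
# random synthetic B, R and H (not operational matrices).
rng = np.random.default_rng(2)

def random_spd(k, cond):
    # symmetric positive definite matrix with a prescribed condition number
    Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
    return Q @ np.diag(np.geomspace(1.0, cond, k)) @ Q.T

n, p = 30, 20
B = random_spd(n, 100.0)
R = random_spd(p, 2000.0)
H = rng.standard_normal((p, n))

S = np.linalg.inv(B) + H.T @ np.linalg.solve(R, H)     # Hessian, Eq. (2)
eig = np.linalg.eigvalsh                               # ascending eigenvalues
upper = (1.0 + eig(B)[0] / eig(R)[0] * eig(H @ H.T)[-1]) * np.linalg.cond(B)
print(f"kappa(S) = {np.linalg.cond(S):.3e}  <=  upper bound = {upper:.3e}")
\end{verbatim}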
In order to make operational implementation of correlated observation error matrices feasible, it is necessary to reduce the impact of the very small eigenvalues of the matrix $\textbf{R}$ by increasing its smallest eigenvalues and hence reducing its condition number. To achieve this, different methods of inflation, or reconditioning, are used to improve conditioning of correlation matrices for a variety of applications. The ridge regression method is used to recondition OEC matrices at the Met Office \citep{weston11,weston14}, and hence will be the reconditioning method that is considered in the remainder of this paper. The ridge regression method adds a scalar multiple of the identity to $\textbf{R}$ to obtain the reconditioned matrix $\textbf{R}_{RR}$. This scalar, $\delta$, is chosen such that $\kappa(\textbf{R}_{RR})=\kappa_{max}$, a user-specified condition number. The method for calculating $\delta$ for a given choice of $\kappa_{max}$ was formally defined in \citet{tabeart17c} as follows: \begin{mydef} \textbf{Ridge regression reconditioning constant, $\delta$ \citep{tabeart17c}}\\ Define $\delta = ({\lambda_{max}(\textbf{R}) - \lambda_{min}(\textbf{R})\kappa_{max}})/({\kappa_{max}-1}) $. \\ Set $\bfR_{RR} = \textbf{R}+\delta\textbf{I}$. \end{mydef} We note that this choice of $\delta$ yields $\kappa(\bfR_{RR})=\kappa_{max}$. Mathematical theory describing the effect of this reconditioning method on the correlations and variances of any covariance matrix was developed in \citet{tabeart17c}, which showed that the ridge regression method increases error variances for all observations, and decreases all off-diagonal correlations. {\color{black}In this paper we investigate whether the qualitative conclusions from \citet{tabeart17a} hold in the case of a non-linear observation operator, and we study the impact of reconditioning methods in an operational system.} \section{Experimental Overview}\label{sec:ExperimentalOverview} \subsection{Met Office System} \label{sec:MOsystem} The experiments carried out in this manuscript will use observations from the IASI instrument on the EUMETSAT MetOp constellation. IASI is an infrared Fourier transform spectrometer, and measures infrared radiation emissions from the atmosphere and surface of the Earth \citep{chalon01}. {\color{black} We note that the observation operator for this instrument, a radiative transfer model, is highly non-linear so the conclusions from \citet{tabeart17a} will not necessarily apply to this problem.} The infrared spectrum is split into channels corresponding to different wavelengths; this means that an observation at a single location will provide information for up to 8461 channels. An early use of the DBCP diagnostic focused on observations from IASI implemented in the Met Office system \citep{stewart08b}. Much of the subsequent research on correlated observation error uses IASI observations \citep{weston11,weston14,stewart14,bormann16}. In particular, IASI has channels that are sensitive to water vapour, which have been found to have errors with large correlations \citep{stewart14,weston14,bormann16}. \linebreak One attraction of IASI, and other hyperspectral instruments, is the large number of available channels, which provides high vertical resolution. However, using all of these channels is not feasible in current operational NWP systems for reasons including computational expense and the desire not to use too many observations of a similar type.
Additionally, when IASI was first used, there was a reluctance to include correlated observation errors, so an effort was made to choose channels that are spectrally different and hence less likely to have correlated errors \citep{stewart14}. This means that of the 8461 available channels, only a few hundred are used at most NWP centres \citep{stewart14}. At the {\color{black}time of the experiments, the Met Office stored a subset of $314$ channels with a maximum of $137$ being used in the 4D-Var system. A list of these channels is given by \citet[Appendix A]{stewart09}.} As there is a large degree of redundancy between channels \citep{Collard10}, directly assimilating a larger number of channels is likely to make the conditioning of the OEC matrix worse. This has motivated alternative approaches such as principal component compression \citep{Collard10} and the use of transformed retrievals \citep{prates16}, which will not be considered in this work.\linebreak A larger number of channels is used for the 1D-Var assimilation than for the Met Office 4D-Var assimilation; standard deviation values for these additional channels are filled in from the current operational (diagonal) OEC matrix. We chose to focus on the channels used in the 4D-Var system in order to be consistent between both assimilation systems. We also note that not all channels are used for each assimilation; for example, some channels are not used in the presence of cloud. In this case, rows and columns corresponding to channels that are affected by cloud are deleted from the OEC matrix. As the submatrix chosen from the full OEC matrix could change at each observation time, there may be a difference in the condition number of the OEC matrix used in practice and the OEC matrices presented in this work. However, the Cauchy interlacing theorem \citep[Lemma 8.4.4]{Bernstein} states that the condition number will not be increased by deleting rows and columns of a symmetric positive definite matrix. This means that the values given here are upper bounds for $\kappa(\textbf{R})$ even if the quality control procedure excludes some channels. \linebreak We test the impact of using correlated OEC matrices in the Met Office 1D-Var system and consider the effect of using the ridge regression method of reconditioning with different choices of target condition number. At the Met Office, 1D-Var is run prior to every 4D-Var assimilation procedure, meaning that retaining current computational efficiency and speed of convergence is desirable. We note that a single IASI observation consists of brightness temperature values for each of the channels that are used in the assimilation. A 1D-Var procedure takes observations separately at each location to retrieve variables such as temperature and humidity over a 1D column of the atmosphere. This procedure is much cheaper and more parallelisable than a 4D-Var algorithm.\linebreak The 1D-Var routine performs two main functions: \begin{enumerate} \item Quality control (QC): Observations that require more than 10 iterations for the 1D-Var minimisation to reach convergence are not passed to the 4D-Var routine. This is because it is assumed that observations for which the retrieval procedure takes too long to converge for the 1D-Var minimisation will also result in slow convergence for a 4D-Var minimisation.
{\color{black} The convergence criterion is based on the value of the cost function and normalised gradient \citep{Pavelin08}.} Changing the OEC matrix will alter the speed of convergence of 1D-Var, and hence affect which observations are accepted. \item Estimation of values for certain variables that are not included in the 4D-Var state vector: values for skin temperature, cloud fraction, cloud top pressure and emissivity over land are fixed by the 1D-Var procedure. Altering the OEC matrix will change retrieved values for these variables. \end{enumerate} Changing the OEC matrix is therefore likely to have two main effects on results of the 1D-Var procedure: changing the observations that are accepted by the quality control, and changing the values of those variables not included in the 4D-Var state vector. Skin temperature (ST), cloud fraction (CF) and cloud top pressure (CTP) are retrieved as scalar values at each observation location. In contrast, surface emissivity is retrieved as a spectrum, which is represented as a set of leading principal components \citep{Pavelin14}. As we expect the interactions between the choice of $\textbf{R}$ and the retrieved values to be complex, in this work we only consider the effect of changing $\textbf{R}$ on the three scalar variables: skin temperature, cloud top pressure and cloud fraction. \subsection{Experimental Design} \label{sec:experimentaldesign} We now describe the experimental framework and key areas of interest that will be investigated in Sections \ref{sec:Results1DVar} and \ref{sec:Results4DVar}. We use the operational Met Office 1D-Var framework at the time of the experiments (July 2016), and consider how the results change for different choices of OEC matrix. Background profiles are obtained from the Unified Model (UM) background files for the corresponding configuration. A number of different times and dates for the six months between December 2015 and June 2016 were considered, but as results were similar across all trials we only present results from experiments for 16th June 2016 0000Z. \linebreak The correlated choices of $\textbf{R}$ are calculated using the method introduced in \citet{weston14}: applying the ridge regression method of reconditioning to the diagnosed matrix for a variety of choices of $\kappa_{max}$. The matrices estimated by the DBCP diagnostic depend considerably on the choice of background and observation error matrices. For all OEC matrices produced, the same $4$ days of IASI and background NWP data (03/12/15-06/12/15) were used as input data. We note that the estimated OEC matrix was obtained using background and OEC matrices from the 4D-Var assimilation routine rather than the 1D-Var routine. {\color{black} Although this is not theoretically consistent with the smaller error correlations that have been estimated for the 1D-Var problem in previous studies \citep{stewart14,weston11}, the use of 4D-Var error statistics allows us to better understand the impact that our changes are likely to have on 4D-Var.
We are using 1D-Var as a pre-processing step for 4D-Var to remove observations that are likely to cause convergence issues in the main assimilation algorithm.} \linebreak{ \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{FigBsd} \caption{Standard deviation values for the operational background error covariance matrices, $\textbf{B}$, for the northern hemisphere (solid line), tropics (dot-dashed line) and southern hemisphere (dashed line) for temperature (a) and ln(specific humidity) (b).} \label{fig:Bsd} \end{figure*} \begin{figure*} \centering \includegraphics[trim=30mm 0mm 30mm 0mm,clip,width=1\linewidth]{Bcorrelations} \caption{Correlation matrices for the operational background error covariance matrices, $\textbf{B}$, for the northern hemisphere (a), tropics (b) and southern hemisphere (c). Dashed vertical and horizontal lines separate inter and cross correlations between temperature, ln(specific humidity) and other variables (from left to right). ST is variable 72, CTP is 74 and CF is 75. } \label{fig:Bcorr} \end{figure*} \begin{table*} {\color{black}\centering \begin{tabular}{| l | c | c| c | c | c| } \hline Variable & CF & CTP (hPa) & ST (NH) (K) & ST (Tr) (K) & ST (SH) (K) \\ \hline Standard deviation & 1 & 1000 & 2.24 & 1.92 & 2.02\\ \hline \end{tabular} \caption{Background standard deviation values for variables not included in the 4D-Var state vector. } \label{tab:Bsd}} \end{table*} {\color{black}We use the operational background error covariance matrix, $\textbf{B}$, at the time of the experiments. This consists of three different choices of $\textbf{B}$ for the northern hemisphere (30N:90N), the tropics (30S:30N) and the southern hemisphere (90S:30S). Figure \ref{fig:Bsd} shows background standard deviation (BSD) values for temperature and humidity variables, and Table \ref{tab:Bsd} gives BSD values for CF, CTP and ST for each of the choices of $\textbf{B}$. We note that standard deviations for cloud variables are assumed to be very large so that the background is ignored for these variables \citep{Pavelin08}. In Sections \ref{sec:Results1DVar} and \ref{sec:Results4DVar} we will compare the standard deviations from the background error covariance matrix against retrieved standard deviations for the observations as well as differences between observations for different choices of $\textbf{R}$. Figure \ref{fig:Bcorr} shows that correlations corresponding to the three choices of $\textbf{B}$ are qualitatively very similar. Cross-correlations between variables are quite weak, with no correlations between temperature and specific humidity. Most correlations larger than 0.2 occur for adjacent model levels for temperature and specific humidity. Correlations greater than 0.2 also occur between surface temperature and ST and temperature for larger model level numbers, and surface specific humidity and specific humidity at larger model level numbers. CTP and CF are uncorrelated with all other variables. }\linebreak We apply the DBCP diagnostic to the subset of $137$ channels that are assimilated in the 4D-Var routine. The 1D-Var routine uses additional channels \citep{hilton09}, with a total of 183 channels being assimilated. Observation errors for these additional channels are assumed to be uncorrelated, and filled in with values from the diagonal error covariance matrix $\textbf{R}_{diag}$. If additional channels are included in future versions of the operational system, it would be advisable to recompute the DBCP diagnostic applied to all channels. 
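As noted in Section \ref{sec:MOsystem}, deleting rows and columns of a symmetric positive definite matrix cannot increase its condition number, so screening out channels does not worsen the conditioning values quoted below. This can be illustrated numerically with a synthetic SPD stand-in for the full OEC matrix (the sketch does not use a diagnosed IASI matrix, and the channel subset is chosen at random):
\begin{verbatim}
import numpy as np

# Illustration of the interlacing property: deleting channels (rows/columns)
# of an SPD matrix cannot increase its condition number.  Synthetic matrix only.
rng = np.random.default_rng(3)
k = 137
A = rng.standard_normal((k, k))
R_full = A @ A.T + 0.01 * np.eye(k)            # SPD stand-in for the full OEC matrix

keep = np.sort(rng.choice(k, size=100, replace=False))  # channels retained
R_sub = R_full[np.ix_(keep, keep)]             # principal submatrix actually used
print(np.linalg.cond(R_sub), "<=", np.linalg.cond(R_full))
\end{verbatim}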
\linebreak The seven different choices of the matrix $\textbf{R}$ that were tested are now listed: \begin{itemize} \item $\textbf{R}_{infl}$, which is an inflated diagonal matrix. This matrix was used prior to the introduction of correlated observation error in the 4D-Var assimilation scheme \citep{weston14}. In particular, variances are inflated to account for the fact that the assumption of uncorrelated errors is incorrect. The standard deviations (square root of the diagonal entries of $\textbf{R}_{infl}$) are shown in \citet[Figure 1]{weston14} by the black dashed line. The largest diagonal entry of $\textbf{R}_{infl}$ is $16$, and the smallest entry is $0.25$. The construction of $\textbf{R}_{infl}$ is described in \citet{hilton09}. \item $\bfR_{diag}$, the current operational matrix for 1D-Var retrievals, which is diagonal. The standard deviations are calculated as instrument noise plus $0.2K$ forward-model noise \citep{Collard07}. The variances of $\bfR_{diag}$ are shown in \citet[Figure 7]{stewart14} by the red line. The variances are much smaller than for $\textbf{R}_{infl}$; for the first $120$ channels, the diagonal elements of $\bfR_{diag}$ are all less than $0.27$ and the largest value of $\bfR_{diag}$ is given by $0.49$. \item $\textbf{R}_{est}$, the symmetrised raw output of the code that produces the DBCP diagnostic. This is computed by $\textbf{R}_{est} = \frac{1}{2}(\textbf{R}_{DBCP}+\textbf{R}_{DBCP}^T)$, where $\textbf{R}_{DBCP} \in \mathbb{R}^{137\times 137}$ is the output of the DBCP diagnostic. \item Reconditioned versions of $\textbf{R}_{est}$ so that the correlated submatrix has a condition number of $1500$, $1000$, $500$ and $67$, referred to respectively as $\textbf{R}_{1500}, \textbf{R}_{1000}, \textbf{R}_{500}$ and $\textbf{R}_{67}$. \end{itemize} We refer to the experiments using each choice of OEC matrix as $E$ with subscript corresponding to that of the OEC matrix (i.e. $E_{diag}, E_{est}, E_{1500}, E_{1000}, E_{500}, E_{67}$ and $E_{infl}$). \linebreak \begin{table*} \centering \begin{tabular}{| l | c | c| c | c | c|c|c| } \hline Experiment name & $E_{diag}$ & $E_{est}$ & $E_{1500}$ & $E_{1000}$ & $E_{500} $ &$E_{67}$& $E_{infl}$ \\ \hline Choice of $\textbf{R}$ & $\bfR_{diag}$ & $\textbf{R}_{est}$ & $\textbf{R}_{1500}$ & $\textbf{R}_{1000}$ & $\textbf{R}_{500} $ &$\textbf{R}_{67}$ & $\textbf{R}_{infl}$\\ \hline $\lambda_{min}(\textbf{R})$ & 0.025 & 0.00362 & 0.00482 & 0.007244 & 0.0145 & 0.1010 & 0.0625 \\ \hline $\kappa(\textbf{R})$ & 9.263 &2730 & 1500 & 1000 & 500 & 67& 64 \\ \hline \end{tabular} \caption{Minimum eigenvalues and condition numbers of $\textbf{R}$ for each experiment.} \label{tab:mineig} \end{table*} Details of the conditioning and minimum eigenvalues of each of the choices of $\textbf{R}$ can be found in Table \ref{tab:mineig}. We see that for the non-diagonal matrices, as we decrease the target condition number, we increase the minimum eigenvalue of $\textbf{R}$. This agrees with the theoretical results of \citet{tabeart17c}. We also see that of the two diagonal choices of $\textbf{R}$, $\textbf{R}_{infl}$ has the larger value of $\lambda_{min}(\textbf{R})$, suggesting that we might expect better convergence compared to $\bfR_{diag}$. We also notice that the largest value of $\lambda_{min}(\textbf{R})$ occurs for $\textbf{R}_{67}$. It will be of interest to consider whether the introduction of correlations has more effect on convergence and conditioning than the value of $\lambda_{min}(\textbf{R})$.
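For reference, the construction of the reconditioned matrices via Definition 1 can be sketched in a few lines of Python. The example below applies it to a synthetic SPD stand-in with a condition number of around $2730$; because the diagnosed IASI matrix is not reproduced, the resulting eigenvalues will not match Table \ref{tab:mineig}, but the qualitative trend is the same.
\begin{verbatim}
import numpy as np

# Sketch of ridge regression reconditioning (Definition 1) on a synthetic SPD
# stand-in with kappa ~ 2730; eigenvalues will not match the diagnosed matrix.
rng = np.random.default_rng(4)
p = 137
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
R_est_toy = Q @ np.diag(np.geomspace(0.004, 0.004 * 2730.0, p)) @ Q.T

def ridge_recondition(R, kappa_max):
    lam = np.linalg.eigvalsh(R)                      # ascending eigenvalues
    delta = (lam[-1] - lam[0] * kappa_max) / (kappa_max - 1.0)
    return R + delta * np.eye(len(R)), delta

for kappa_max in (1500, 1000, 500, 67):
    R_rr, delta = ridge_recondition(R_est_toy, kappa_max)
    print(f"kappa_max = {kappa_max:4d}: achieved kappa = {np.linalg.cond(R_rr):8.1f}, "
          f"delta = {delta:.2e}, lambda_min = {np.linalg.eigvalsh(R_rr)[0]:.2e}")
\end{verbatim}
As in Table \ref{tab:mineig}, $\lambda_{min}(\textbf{R}_{RR})$ grows as the target condition number is reduced.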
We note that the inclusion of $46$ extra channels in the 1D-Var algorithm, in addition to the $137$ channels used in the 4D-Var algorithm, could change the condition numbers presented in Table \ref{tab:mineig} by the introduction of very small or very large eigenvalues. \linebreak Our numerical experiments will be broadly split into two groups. Firstly we will consider the effect of changing the OEC matrix, $\textbf{R}$, on the 1D-Var procedure itself in Section \ref{sec:Results1DVar}. This includes the impact on retrieved values and the convergence of the 1D-Var assimilation. Secondly, in Section \ref{sec:Results4DVar}, we will consider the impact of these changes on the 4D-Var procedure, by looking at how the number of accepted observations varies, and how the retrieved values of skin temperature, cloud top pressure and cloud fraction are altered. \section{Impact on Met Office 1D-Var routine}\label{sec:Results1DVar} In this section we consider the impact of changing the OEC matrix used in the Met Office 1D-Var system on the conditioning of the Hessian and on individual retrievals of temperature and humidity. In particular, the conditioning of the Hessian is important in terms of speed of convergence of the minimisation procedure. We recall (Section \ref{sec:MOsystem}) that in 1D-Var information for each observation location is assimilated separately. Here a single observation corresponds to information from a column of IASI channels valid at one location. {\color{black} This corresponds to 97330 observations over the $4$ days of data discussed in Section \ref{sec:experimentaldesign} with objective functions that converge in 10 or fewer iterations for all choices of $E_{exp}$. For much of the discussion that follows we will consider statistics of this set of 97330 observations to understand how changing the OEC matrix affects 1D-Var for IASI observations.} \subsection{Influence of observation error covariance matrix on convergence and conditioning of the 1D-Var routine} \label{sec:MOHessian} We begin by investigating explicitly the effect of changing the OEC matrix, $\textbf{R}$, on the 1D-Var routine. We consider two variables: the number of iterations required for convergence of the minimisation routine and the condition number of the Hessian of the 1D-Var cost function.\linebreak Firstly we consider the number of iterations required for the minimisation of the 1D-Var cost function to reach convergence for each assimilated observation. For NWP centres, this is a variable of significant interest, as the extra expense of introducing correlated errors predominantly comes from the increase in the number of iterations needed before convergence in the case of interchannel errors \citep{weston11}. We note that this may not be the case for other types of error correlation such as spatial and temporal correlations (where the computation of matrix-vector products may require additional communication between processors \citep{simonin18}). The minimisation is deemed to have converged when the absolute value of the difference between each component of two successive estimates of the state vector is smaller than $0.4\sigma_{\textbf{B}}$, where $\sigma_{\textbf{B}}$ is the vector whose components are the background error standard deviations for each retrieved variable. Values deemed to be unphysical, such as temperature components falling outside the range $70K-340K$, are discarded.
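A schematic version of the acceptance checks just described (stop once every component of the state update is smaller than $0.4\sigma_{\textbf{B}}$, discard unphysical temperatures, and fail quality control after 10 iterations) is sketched below. The update function \texttt{minimise\_step}, the toy state sizes and the usage example are hypothetical; only the thresholds quoted in the text are taken from the description above, and the operational checks also involve the cost function and gradient as noted in Section \ref{sec:MOsystem}.
\begin{verbatim}
import numpy as np

# Schematic 1D-Var acceptance loop based on the checks described in the text;
# `minimise_step` (one Gauss-Newton update) and the toy usage are hypothetical.
MAX_ITER = 10

def run_1dvar(minimise_step, x0, sigma_b, temp_slice):
    x = x0.copy()
    for niter in range(1, MAX_ITER + 1):
        x_new = minimise_step(x)
        temps = x_new[temp_slice]
        if np.any(temps < 70.0) or np.any(temps > 340.0):
            return None, niter                     # unphysical temperature: discard
        if np.all(np.abs(x_new - x) < 0.4 * sigma_b):
            return x_new, niter                    # converged: observation accepted
        x = x_new
    return None, MAX_ITER + 1                      # too slow: fails quality control

# Toy usage: a damped step towards a fixed target state
target = np.array([250.0, 0.01])
step = lambda x: x + 0.5 * (target - x)
x_a, niter = run_1dvar(step, np.array([280.0, 0.02]), np.array([2.0, 0.005]), slice(0, 1))
print(niter, x_a)
\end{verbatim}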
\linebreak \begin{figure*} \centering \includegraphics[trim= 0mm 3mm 0mm 1mm,clip,width=0.75\linewidth]{Fig1niterLABEL} \caption[Number of iterations]{Number of iterations required for convergence of the minimisation of the 1D-Var cost function as a fraction of the total number of observations common to all choices of $\textbf{R}$. Symbols correspond to: $\bfR_{diag}$ ($\triangle$), $\textbf{R}_{est}$ ($\circ$), $\textbf{R}_{67}$ ($\square$) and $\textbf{R}_{infl}$ ($\diamond$). \label{fig:niter}} \end{figure*} \begin{table*} \centering \begin{tabular}{ l c c c c c c c } \hline & $ E_{diag}$ & $E_{est}$ & $E_{1500}$ & $E_{1000}$ & $E_{500}$ & $E_{67}$ &$ E_{infl}$ \\ \hline \small{}&\small{}&\small{}&\small{}&\small{}&\small{}&\small{}&\small{} \\ $\max \kappa(\textbf{S})$ & $3.01\times 10^{12}$ & $7.546\times 10^{11}$ & $7.469\times 10^{11}$ & $7.30\times 10^{11}$ & $7.02\times 10^{11}$ & $3.71\times 10^{11}$ & $1.74\times10^{11}$ \\ mean $\kappa(\textbf{S})$ &$2.78\times 10^{10}$ & $ 6.71\times 10^9$&$ 6.62\times 10^9$& $6.43\times10^9$& $6.00\times10^9$&$ 4.01\times10^9$ &$2.83\times10^9$ \\ median $\kappa(\textbf{S})$ &$2.09\times10^8$&$ 1.31\times10^8$&$ 1.32\times10^8$&$ 1.33\times10^8$&$ 1.37\times10^8$&$ 1.78\times10^8$&$ 2.89\times10^8$\\ \hline \end{tabular} \caption{Maximum, mean and median values of $\kappa(\textbf{S})$ for each experiment.} \label{table:condHess} \end{table*} For each observation, we store the number of iterations required for the corresponding 1D-Var objective function to converge, $niter$. Figure \ref{fig:niter} shows the fraction of observations that have objective functions that converge in $niter$ iterations for four choices of $\textbf{R}$. We note that the behaviour for the other correlated experiments is similar to the behaviour for $E_{est}$ and hence only the distributions for $E_{est}$ and $E_{67}$ are shown. We see that for all experiments $niter = 2$ is the modal class and contains over $50\%$ of the observations. We begin by considering experiments corresponding to correlated choices of the matrix $\textbf{R}$. Our results show that as the minimum eigenvalue of the matrix $\textbf{R}$ increases, there is a decrease in the required number of iterations. This agrees with the theoretical conclusions of \citet{tabeart17a}. However, the overall effect of reconditioning on convergence speed is less for 1D-Var than was observed in the case of 3D-Var or 4D-Var as described in \citet{weston11}. It is likely that this is because the average number of iterations is greater in 3D and 4D-Var, and the maximum permitted number of iterations is much larger than the $10$ allowed for the 1D-Var minimisation. \linebreak We now consider the two diagonal choices of OEC matrix, $\textbf{R}_{infl}$ and $\bfR_{diag}$. The distribution corresponding to $E_{diag}$ is more heavily weighted towards a higher number of iterations than any of the correlated cases. This is not what we might expect from an uncorrelated choice of OEC matrix, particularly as it is well-conditioned compared to most other choices of OEC matrix. In particular, $\lambda_{min}(\bfR_{diag})$ is greater than the minimum eigenvalue for all choices of correlated OEC matrix apart from $\textbf{R}_{67}$ (see Table \ref{tab:mineig}).
In contrast, for the experiment $E_{infl}$, convergence is faster than for any of the other experiments.\linebreak {\color{black} As we noted in Section \ref{sec:Recond}, the minimum eigenvalue of the matrix $\textbf{R}$ is not the only important property for determining the speed of convergence. The distribution of standard deviations for $E_{diag}$ and $E_{infl}$ is shown in Figure 1 of \citet{weston14}. As the standard deviations for $\textbf{R}_{infl}$ are much larger than the standard deviations for any other choice of $\textbf{R}$, the ratio of background variance to observation variance will be smaller for $E_{infl}$ than for other experiments, resulting in smaller condition numbers of the Hessian and hence faster convergence of the 1D-Var minimisation. We recall from Section \ref{sec:Recond} that the ratio of background to observation error variances appears in the bounds on the condition number of the Hessian given by \eqref{eq:mybound1} in \citet{tabeart17a} and similar bounds in \cite{haben11c}. It is clear from these bounds that decreasing the observation error variance will increase the value of the bounds. We can therefore explain the worse convergence seen for $E_{diag}$ by considering channels 107-121 and 128-137, where variances for $\bfR_{diag}$ are smaller than the variances for correlated choices of $\textbf{R}$.} These channels are sensitive to water vapour, and also correspond to the strongest positive correlations in $\textbf{R}_{est}$. Typically, inflation is used when correlated errors are not accounted for; here we have the opposite effect with smaller variances for uncorrelated $\bfR_{diag}$. In terms of the minimisation of the 1D-Var objective function, this means that $E_{diag}$ is pulling much closer to the observations for those channels than any of the correlated experiments. This makes it harder to find a solution, resulting in slower convergence. \linebreak We now consider how the condition number of the Hessian of the 1D-Var cost function, $\kappa(\textbf{S})$, changes with the experiment $E$. From theoretical results developed in \citet{tabeart17a}, in particular the result of Corollary~1, we expect $\kappa(\textbf{S})$ to decrease as $\lambda_{min}(\textbf{R})$ increases. The minimum eigenvalues for each choice of OEC matrix, $\textbf{R}$, discussed here can be seen in Table \ref{tab:mineig}. The condition number of $\textbf{S}$ is computed separately for each objective function. We can therefore consider the maximum, mean and median value of $\kappa(\textbf{S})$ over the 97330 observations for each experiment. This information is shown in Table~\ref{table:condHess}. As discussed in Section \ref{sec:BackgroundVar}, the condition number of any matrix is bounded below by one. We therefore do not include the minimum values of $\kappa(\textbf{S})$ in the table. We firstly note that the maximum values of $\kappa(\textbf{S})$ are extremely large, with the largest value occurring for the matrix $\bfR_{diag}$. For experiments with correlated OEC matrices, increasing $\lambda_{min}(\textbf{R})$ results in a decrease in the maximum value of $\kappa(\textbf{S})$. {\color{black} We note that the changes to the condition number for $\textbf{R}_{67}$ compared to $\textbf{R}_{500}$ are much larger than the difference in conditioning between other experiments.} The maximum value of $\kappa(\textbf{S})$ for the OEC matrix $\textbf{R}_{infl}$ is the smallest of all choices of OEC matrix.
A decrease in the maximum value of $\kappa(\textbf{S})$ corresponds to an iteration count distribution with increased weight at its lower end, as shown in Figure \ref{fig:niter}.\linebreak We now consider the mean and the median of $\kappa(\textbf{S})$. Firstly we note that the values of the mean and median differ by at least one order of magnitude. {\color{black} The distribution of $\kappa(\textbf{S})$ is not symmetric: it is bounded below by $1$, with very large maximum values. The mean is skewed by such outliers, and we note that for a boxplot of this data (not shown) the mean does not lie within the interquartile range (IQR) of the data for all experiments other than $E_{67}$ and $E_{infl}$. Both the maximum and mean of $\kappa(\textbf{S})$ decrease with increasing $\lambda_{min}(\textbf{R})$ for correlated OEC matrices. The largest values occur for the experiment $E_{diag}$, and the smallest for the experiment $E_{infl}$. In contrast, the median is largest for the experiment $E_{infl}$, and increasing $\lambda_{min}(\textbf{R})$ increases the median value of $\kappa(\textbf{S})$ for experiments with correlated choices of OEC matrix. Considering the deciles indicates that the spread of $\kappa(\textbf{S})$ across all observations reduces as more reconditioning is applied. }\linebreak We have seen that introducing correlated OEC matrices improves convergence and reduces $\kappa(\textbf{S})$ compared to the current operational choice. Additionally, reducing the target condition number results in further improvements. This behaviour agrees with the theoretical conclusions of \citet{tabeart17a} that were summarised in Section \ref{sec:Recond}. For a linear observation operator we expect the upper bound on the condition number of the Hessian to decrease as the minimum eigenvalue of the OEC matrix, $\textbf{R}$, increases. This is shown in \eqref{eq:newboundgen}. This equation also shows that the ratio between background and observation variance is important for the conditioning of $\textbf{S}$. The final column of Table \ref{table:condHess} shows that the range of $\kappa(\textbf{S})$ for the experiment $E_{infl}$ is less than the range for any experiment with a correlated choice of OEC matrix. The variances for $\textbf{R}_{infl}$ are much larger than the variances for any other OEC matrix considered in this work. We therefore conclude that the qualitative conclusions of \citet{tabeart17a}, as presented in Section \ref{sec:Recond}, hold in this framework, even in the case of a non-linear observation operator. \subsection{Effect of changing the observation error covariance matrix on 1D-Var Retrievals}\label{sec:MO1DVar} In this section we consider how changing the OEC matrix impacts the retrieved values of physical variables. In particular we focus on temperature and specific humidity, as we obtain profiles that occur across multiple model levels rather than individual values. We note that the retrieved temperature and humidity values are not passed to the 4D-Var assimilation procedure. However, studying how these variables change for different choices of OEC matrix helps us understand the impact of changing the OEC matrix, $\textbf{R}$, on the 1D-Var assimilation. Additionally, as part of the 1D-Var assimilation procedure, retrieved standard deviation (RSD) values for each of the retrieved variables are derived. The RSD values are calculated as the square roots of the diagonal entries of the inverse of the Hessian given by \eqref{eq:Hessian}, i.e.
the retrieved analysis error covariance in state variable space. For each 1D-Var assimilation we obtain a different value for RSD for each retrieved variable. We therefore consider the average RSD value for a given experiment and retrieved variable. For temperature and specific humidity this means that we obtain different RSD values for each model level. Comparing the range of differences between retrievals to the RSD values will allow us to determine whether the difference made when changing the OEC matrix, $\textbf{R}$, is of a similar order to expected variation, or much larger (and hence results in significant differences). We will also compare RSD and differences against BSD values as shown in Figure \ref{fig:Bsd}.\linebreak \begin{figure*}\centering \includegraphics[width=0.9\textwidth]{Fig0RetrievalBdiff.eps} \caption{Background minus retrieved profiles from observation at (-33.16N,-32.70E) for 16th June 2016 0000Z for (a) temperature (b) ln(specific humidity). Differences are shown for $E_{diag}$ (dot-dashed line), $E_{est}$ (dotted line), $E_{67}$ (solid line) and $E_{infl}$ (dashed line). \label{fig:1DVarRetrieval}} \end{figure*} Figure \ref{fig:1DVarRetrieval} shows {\color{black}background profiles minus retrieved profiles} for temperature and humidity for observations at the location (-33.16N,-32.70E). Retrievals are shown at pressure levels in the atmosphere. These model levels are determined by the $43$ evenly distributed pressure levels in the radiative transfer retrieval algorithm. Specific humidity is only calculated for the lowest $26$ model levels. We note that this is the configuration that was used at the time of the experiments (July 2016). {\color{black}Differences from the background are larger for specific humidity profiles than for temperature. However qualitative behaviour is similar for both variables. In both cases $E_{diag}$ is the most different from the background, implying that the use of correlated observation errors increases the weighted importance of the background. {\color{black} Increasing the amount of reconditioning used decreases the norm of the difference between the retrieved profile and the background for all correlated OEC matrices. Hence, applying a larger amount of reconditioning results in a retrieved profile that is closer to the background.} Finally, the retrieval corresponding to $E_{infl}$ is closest to the background for both variables. For this case, standard deviations have been inflated, meaning that we expect the retrieved profile to fit closer to the background. This is particularly evident for specific humidity where there is a large difference between background and retrieved values for model level 7 for $E_{diag}, E_{est} $ and $E_{67}$. This occurs due to large differences between background and retrieved brightness temperature for channels 128-137, which have water vapour mixing ratio Jacobians that peak at pressure level 7 \citep{stewart09}. We recall that these channels are sensitive to water vapour, and have the strongest positive correlations in $\textbf{R}_{est}$. This explains why specific humidity is particularly affected by changes to the OEC matrix for this model level, although we note that noticeable changes also occur for temperature for this model level. 
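The RSD calculation just described amounts to taking the square roots of the diagonal of the inverse Hessian. A minimal Python sketch, using synthetic matrices of illustrative size rather than operational values, is:
\begin{verbatim}
import numpy as np

# Sketch of the retrieved standard deviation (RSD) calculation: square roots
# of the diagonal of the inverse of the Hessian (2).  Synthetic matrices only.
rng = np.random.default_rng(5)
n, p = 54, 183                                 # illustrative sizes only
B = np.diag(rng.uniform(0.5, 4.0, n))          # toy background error covariance
R = np.diag(rng.uniform(0.05, 0.5, p))         # toy observation error covariance
H = 0.1 * rng.standard_normal((p, n))          # toy linearised observation operator

S = np.linalg.inv(B) + H.T @ np.linalg.solve(R, H)   # Hessian, Eq. (2)
rsd = np.sqrt(np.diag(np.linalg.inv(S)))             # RSD for each retrieved variable
print(rsd[:5])
\end{verbatim}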
}\linebreak \begin{figure*} \centering \includegraphics[trim=5mm 10mm 5mm 5mm, clip,width=\textwidth]{Figure2Mean} \caption[Differences in retrievals]{Differences in retrievals between $E_{diag}$ and $E_{67}$ for the trial on 16th June 0000Z for (a) temperature and (b) ln(specific humidity) for 97330 observations. Dashed lines and solid lines give the mean RSD values for $E_{diag}$ and $E_{67}$ respectively. Dashed lines with dots denote the median and solid lines with dots denote the mean for each pressure level. The solid box contains the middle $50\%$ of the data, and the whiskers (dashed horizontal lines) extend to the quartiles plus/minus $1.5$ times the interquartile range (IQR), i.e. the difference between the third and first quartiles. Outliers, which lie outside the range of the whiskers, are not shown. } \label{fig:rawdiffall} \end{figure*} We now consider the differences between retrieved values for $E_{diag}$ and $E_{67}$ for all $97330$ observations that were accepted by the 1D-Var routine for all choices of OEC matrix. Figures \ref{fig:rawdiffall}a and b are box plots showing the distribution of these differences across each model level for temperature and ln(specific humidity) profiles respectively. The qualitative behaviour for other experiments was very similar and is not shown here. Figure \ref{fig:rawdiffall}a shows that for most model levels the whiskers are contained within the average RSD values, and for model levels $1-41$ the central $50\%$ of differences lie within the average RSD. This indicates that changing from an uncorrelated to a correlated choice of OEC matrix has a generally small impact on temperatures for the majority of model levels compared to RSD.\linebreak Mean RSD values for $E_{67}$ and $E_{diag}$ are also very similar, with larger mean RSD values for $E_{67}$ than $E_{diag}$ for all model levels. This is observed for all correlated choices of OEC matrix; the mean RSD is increased for all model levels compared to $E_{diag}$. This suggests that using a correlated choice of OEC matrix increases the mean RSD for temperature, i.e. by introducing correlations we have less confidence in the retrieved values, or 1D-Var analysis. This increase to standard deviations is expected from theoretical and idealised studies \citep{stewart08,rainwater15,fowler17}. We also note that by including correlations we put less weight on the individual channels but allow more freedom to fit multivariate information arising from the combination of channels. {Comparing the RSD values to the BSD values given by Figure \ref{fig:Bsd}, we find that across all three choices of $\textbf{B}$, the standard deviation values are similar to RSD values for most model levels. For model levels where the BSDs are smaller than both $E_{diag}$ and experimental RSD, differences are small in comparison to all standard deviation values. }\linebreak Figure \ref{fig:rawdiffall}b shows the differences between retrieved values of specific humidity for $E_{diag}$ and $E_{67}$ for $23$ model levels. As was the case for temperature, the mean RSD values for all other choices of experiment are larger than those for $E_{diag}$. We note that differences for model levels $1$ and $18-23$ are very small compared to RSD. However, for model levels $5-12$, the whiskers lie outside the values for mean RSD.
This means there is a large proportion of model levels where changing the OEC matrix has a larger impact on retrieved specific humidity values than we would expect due to instrument noise and other quantified types of uncertainty. We also note that for these model levels we have non-zero and non-equal means and medians. This suggests that the distribution of differences is not symmetric. Again, BSD values are larger than RSD values for the majority of model levels for specific humidity. However, whiskers still extend past the BSD values for levels $5-10$ for all choices of $\textbf{B}$. \linebreak Changing the OEC matrix, $\textbf{R}$, seems to affect a larger proportion of the retrieved specific humidity values than temperature values. This coincides with the findings of \citet{bormann16,weston14}. They found large changes to humidity fields with the introduction of correlated OEC matrices in 4D-Var assimilation procedures, which resulted in improved NWP skill scores.
\section{Impact on variables that influence the 4D-Var routine}\label{sec:Results4DVar} In Section \ref{sec:Results1DVar} we showed that the choice of OEC matrix, $\textbf{R}$, does make a difference to the 1D-Var routine in terms of convergence and the individual retrieval values. We now consider variables that directly impact the main 4D-Var procedure that is used to initialise forecasts. Changes to the OEC matrix in the 1D-Var routine affect 4D-Var in two main ways: firstly by altering the observations that are accepted by the quality control procedure, and secondly via retrieved values of variables that are not analysed in the 4D-Var state vector. We will consider these two aspects in turn. \subsection{Changes to the quality control procedure}\label{sec:ExptvsCtrl} In Section \ref{sec:MOHessian} we showed that increasing $\lambda_{min}(\textbf{R})$ increases the speed of convergence of the 1D-Var routine. We now investigate whether changing the OEC matrix, $\textbf{R}$, alters the number of observations that pass the quality control step that was described in Section \ref{sec:MOsystem}. We also consider how the number of observations accepted by the experiment (respectively $E_{diag}$) and rejected by $E_{diag}$ (respectively the experiment) changes for different choices of OEC matrix. This information is presented in Table \ref{table:acceptedobs}.\linebreak We begin by considering in more detail why changing the OEC matrix would result in changes to the number of observations that pass quality control. Observations are rejected if the minimisation of the 1D-Var procedure requires more than 10 iterations to converge. In Section \ref{sec:MOHessian} we found that introducing correlated observation error reduces the number of iterations required for convergence, and that decreasing the target condition number increases convergence speed further. This suggests that introducing correlated OEC matrices and using reconditioning will result in a larger number of observations that converge fast enough to pass this aspect of quality control. We therefore expect the use of reconditioning methods to result in a larger number of accepted observations. \linebreak \begin{table*} \centering \begin{tabular}{ lc c c c c c } \hline &\multicolumn{6}{c}{ $E_{exp}$}\\ \hline Experiment & $E_{est}$ & $E_{1500}$ & $E_{1000}$ & $E_{500}$ & $E_{67}$ &$ E_{infl}$ \\ \hline No.
of accepted obs (T) & $100655$&$ 100795$& $101002$&$ 101341$& $102333$& $102859$ \\ No. of obs accepted by both $E_{diag}$ and $E_{exp}$ & $99039$&$ 99175$&$99352$& $99656$& $100382$& $100679$ \\ Accepted by $E_{exp}$, rejected by $E_{diag}$ & $1616$ &$1620$& $1650$& $1685$& $1951$&$2180$ \\ Accepted by $E_{diag}$, rejected by $E_{exp}$ &$1647$&$1511$&$1334$&$1030$&$304$&$7$\\ \hline \end{tabular} \caption{Number of observations accepted by the 1D-Var quality control for each experiment ($E_{exp}$) compared to $E_{diag}$. For $E_{diag}$ the total number of accepted observations is 100686. Here T refers to the total number of distinct observations (defined in Section \ref{sec:MOsystem}) accepted by $E_{exp}$ for each experiment. The number of observations accepted by all experiments is 97330.} \label{table:acceptedobs} \end{table*}
The first row of Table \ref{table:acceptedobs} shows that the number of accepted observations increases as $\lambda_{min}(\textbf{R})$ (see Table \ref{tab:mineig}) increases, and the largest number of accepted observations occurs for experiment $E_{infl}$. This coincides with what we would expect due to alterations in the quality control procedure. However, we note that the number of accepted observations is slightly larger for $E_{diag}$ than $E_{est}$ even though convergence for $E_{est}$ was faster than for $E_{diag}$ across the set of common observations. {\color{black} The second row of Table \ref{table:acceptedobs} shows that most observations are accepted by both $E_{exp}$ and $E_{diag}$. We see that the number of accepted observations increases with $\lambda_{min}(\textbf{R})$ for correlated choices of $\textbf{R}$. The largest number of observations is accepted by $E_{infl}$. The third and fourth rows of Table \ref{table:acceptedobs} show the number of observations that are accepted by $E_{exp}$ (respectively $E_{diag}$) and rejected by $E_{diag}$ (respectively $E_{exp}$). However, this number is smaller than 2.2\% of the total number of observations for all choices of $E_{exp}$.} For what follows we shall consider the large majority of observations that are accepted by both $E_{diag}$ and $E_{exp}$. Although observations that are accepted by only one of $E_{diag}$ and $E_{exp}$ are of interest, the fact that there are very few observations in either of these sets makes it hard to study their properties statistically. \subsection{Changes to retrieved values for variables that are not included in the 4D-Var control vector}\label{sec:FixedVariablesOverall} In this section we consider how altering the OEC matrix used in the 1D-Var routine alters the retrieved values of variables that are not included in the 4D-Var control vector. {\color{black} For all three variables, Figure \ref{fig:FixedvariablesRaw} shows that the majority of retrievals are changed by a small amount for each choice of experiment. The largest differences occur between $E_{diag}$ and $E_{exp}$ for ST, CF and CTP, where the IQR and whiskers are much larger than for any correlated choice of OEC matrix. For correlated OEC matrices, we see a reduction in IQR and whisker length as $\lambda_{min}(\textbf{R})$ increases. This indicates that as we increase the amount of reconditioning that is applied, the differences between $E_{diag}$ and $E_{exp}$ reduce. However, there are some differences between the variables.
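For concreteness, the whisker and outlier conventions used in Figure \ref{fig:FixedvariablesRaw} and in Tables \ref{table:outlierobs} and \ref{table:extremeobs} below can be sketched as follows. This is a minimal illustration only; the `large' outlier threshold is the variable-specific value defined later in the text (e.g. $5K$ for ST), and the synthetic differences here stand in for the real retrieval differences.
\begin{verbatim}
import numpy as np

def classify_differences(diff, large_threshold):
    """Flag outliers (outside Q1 - 1.5*IQR to Q3 + 1.5*IQR) and
    'large' outliers (absolute difference above a variable-specific
    threshold) for an array of retrieval differences."""
    q1, q3 = np.percentile(diff, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outlier = (diff < lower) | (diff > upper)
    large = np.abs(diff) > large_threshold
    return outlier, large

# Synthetic skin-temperature differences (K) and the 5 K threshold
# used for 'large' ST outliers in the text.
diff = np.random.default_rng(1).normal(0.0, 1.0, 10000)
outlier, large = classify_differences(diff, large_threshold=5.0)
print(100 * outlier.mean(), 100 * large.mean())   # percentages
\end{verbatim}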
\linebreak \begin{figure*} \centering \includegraphics[ width=\textwidth]{Fig3axislabels} \caption{Box plot showing differences between retrieved variables for $E_{diag}-E_{exp}$ for (a) ST (skin temperature), (b) CF (cloud fraction) and (c) CTP (cloud top pressure). The circle shows the median, the triangle depicts the mean, the solid box contains the central $50\%$ of data (the interquartile range), and the dashed horizontal lines show the whiskers, which extend to the quartile $\pm1.5\times IQR$. Vertical dashed lines show the mean retrieved standard deviation (RSD) values for the experiment, and the solid vertical lines show the mean RSD values for $E_{diag}$. Outliers (not shown) lie in the range (a) $\pm 33.52K$, (b) $\pm 1$ and (c) $\pm 913.25hPa$. {\color{black} The number of outliers and extreme outliers for these experiments is presented in Tables \ref{table:outlierobs} and \ref{table:extremeobs}}.} \label{fig:FixedvariablesRaw} \end{figure*}
Firstly, for ST all choices of OEC matrix yield whiskers that equal or exceed the RSD values corresponding to $E_{diag}$ (solid line), and all except $E_{67}$ exceed the RSD values for the corresponding experiment (dashed line). In contrast, the whiskers for correlated choices of OEC matrix are well within both RSD values for CF and CTP, as well as the BSD values given in Table \ref{tab:Bsd}. This shows that compared to expected observation variability, differences between CF and CTP retrievals are small for correlated choices of OEC matrix. However, we recall that BSD values for cloud variables were artificially inflated \citep{Pavelin08}.\linebreak For ST and CTP the values of the mean and median are close for all correlated choices of $E_{exp}$, and the box and whiskers are fairly symmetric about $0$. In contrast, for CF, differences between the mean and median occur, and the box extends further into the positive axis. Cloud errors are expected to vary greatly with the cloud state, meaning that it is difficult to interpret gross statistics \citep{Eyre89}. We include them here for completeness. }\linebreak \begin{table*} \centering \begin{tabular}{ l c c c c c c } \hline & $E_{est}$ & $E_{1500}$ & $E_{1000}$ & $E_{500}$ & $E_{67}$ &$ E_{infl}$ \\ \hline \% outliers (ST) &15.1&15.3&15.6&16.3&17.6 &15.9\\ \% outliers (CF) &23.9&24.02& 24.2& 24.6& 25.3& 21.4 \\ \% outliers (CTP) &22.8&22.8&23.0&22.9&21.4&18.8\\ \hline Maximum difference (ST (K)) &21.67& 21.12& 21.14& 22.38& 21.03& 26.83\\ Minimum difference (ST (K)) & -33.52& -33.01& -32.14& -29.76& -23.82& -20.88\\ \hline \end{tabular} \caption{Percentage of outliers for cloud fraction, cloud top pressure and skin temperature. Outliers are differences which fall outside the whiskers shown in Figure \ref{fig:FixedvariablesRaw}. Maximum and minimum differences are shown for skin temperature only; maximum differences for cloud fraction and cloud top pressure are $\pm1$ and $\pm913.25hPa$ respectively, for all choices of $\textbf{R}$.} \label{table:outlierobs} \end{table*} \begin{table*} \centering \begin{tabular}{ l c c c c c c } \hline & $E_{est}$ & $E_{1500}$ & $E_{1000}$ & $E_{500}$ & $E_{67}$ &$ E_{infl}$ \\ \hline \% extreme outliers ($|ST|>5K$) & 1.6&1.5&1.5&1.4&1.4&3.6\\ \% extreme outliers ($|CF| >0.25$)& 4.9&4.7&4.4&3.9&3.2&7.5 \\ \% extreme outliers ($|CTP| >225hPa$) &3.3&3.3&3.3&3.3&2.7&4.4\\ \hline \end{tabular} \caption{Percentage of large outliers for cloud fraction, cloud top pressure and skin temperature for each experiment.
Large outliers are defined as observations with absolute differences greater than $0.25$ for CF, $225hPa$ for CTP and $5K$ for ST. This corresponds to absolute differences greater than approximately $25\%$ of the maximum differences presented in Table \ref{table:outlierobs}. } \label{table:extremeobs} \end{table*}
For all variables, the majority of retrievals change by a small amount, relative to RSD, when comparing the experiment to $E_{diag}$. However, {Table \ref{table:outlierobs} shows that \color{black}over 15\% of observations are classed as outliers for all three variables. These outliers are defined as observations with retrieval differences that are not between $Q_1 - 1.5 IQR$ and $Q_3 + 1.5 IQR$, where $Q_1$ and $Q_3$ denote the first and third quartiles of the data respectively, and are not shown in Figure \ref{fig:FixedvariablesRaw}. Not all of these outliers represent large differences between retrieved values. Instead we consider `large' outliers, which we define in this setting as differences larger than $25\%$ of the maximum differences for each variable. For cloud variables the maximum difference is defined by the possible range of values: $\pm1$ and $\pm913.25hPa$ for CF and CTP respectively. For ST we use the maximum difference between retrievals from the data set. These values are given in Table \ref{table:outlierobs}. \linebreak Table \ref{table:extremeobs} shows the percentage of large outliers for each variable, which is much smaller than the total percentage of outliers for all variables and experiments. For all variables, the number of large outliers decreases with $\lambda_{min}(\textbf{R})$ for correlated experiments. The experiment $E_{infl}$ has a much greater number of large outliers than any experiment with a correlated choice of OEC matrix, agreeing with earlier findings that the qualitative and quantitative differences between $E_{infl}$ and $E_{diag}$ are much larger than for any other experiment. } \linebreak {\color{black} As background information has almost no weight for cloud variables, due to inflated BSD values, changing the OEC matrix could result in much larger differences between retrieved values for CF and CTP than for other variables. However, this is not the case for ST, where the maximum differences given in Table \ref{table:outlierobs} are extremely large compared to RSD and BSD values. The number of observations with such extreme retrieval differences is small: for correlated experiments fewer than 10 observations yield absolute differences larger than 20K. These observations can be considered as failures of the 1D-Var algorithm and should be removed by the quality control procedure. This emphasises that when altering the OEC matrix, the quality control procedure needs to be altered as well.} \linebreak Previous studies by \citet{stewart14,weston14,bormann16,campbell16} have shown that the largest impacts of applying the DBCP diagnostic to IASI occur for humidity sounding channels, which will affect clouds and retrieved values associated with clouds. Skin temperature is also sensitive to cloud; although in partly overcast conditions it is possible to retrieve estimates of skin temperature, errors in the modelling of cloud effects are likely to dominate the surface signal \citep{stewart14,Pavelin14}. In terms of impact, under cloudy conditions the 4D-Var assimilation procedure is less sensitive to skin temperature \citep{Pavelin14}, so it is possible that these large changes to retrievals will not result in large impacts when passed to 4D-Var.
However, further work is needed to understand the origin and consequences of these extreme differences fully.
\section{Conclusions}\label{sec:Conclusion} It is widely known that many observing systems in numerical weather prediction (NWP) have errors that are correlated \citep{janjic17}, for reasons including scale mismatch between observation and model resolution, approximations in the observation operator or correlations introduced by preprocessing. However, diagnosed error covariance matrices have been found to be extremely ill-conditioned, and cause convergence problems when used in existing NWP computer systems \citep{campbell16,weston14}. \citet{tabeart17a} established that increasing the minimum eigenvalue of the OEC matrix improves bounds on the conditioning of the associated linear variational data assimilation problem. This provided insights into possible reconditioning methods which could permit the inclusion of correlation information while ensuring computational efficiency \citep{tabeart17c}.\linebreak In this paper we have investigated the impact of changing the OEC matrix for the IASI instrument in the Met Office 1D-Var system, an operational non-linear assimilation system. In particular we have considered how reconditioning methods could permit the implementation of correlated observation error matrices. The 1D-Var system is used for quality control purposes and to retrieve values of variables that are not included in the 4D-Var state vector. As each observation is assimilated individually, it is more straightforward to understand and isolate the effects of using different choices of OEC matrix on retrieved variables and convergence compared to the more complicated 4D-Var procedure.\linebreak We found that: \begin{itemize} \item The current operational choice of observation error covariance (OEC) matrix for IASI results in the slowest convergence of the 1D-Var routine of all OEC matrices considered. Increasing the amount of reconditioning applied to correlated OEC matrices improves convergence of the 1D-Var routine, in accordance with the qualitative theoretical conclusions of \citet{tabeart17a,tabeart17c}. \item Most experimental choices of correlated OEC matrix resulted in a larger number of IASI observations that were accepted by the 1D-Var routine than the current diagonal operational choice. Increasing the amount of reconditioning applied to correlated OEC matrices increases the number of IASI observations that converge in fewer than 10 iterations, and hence pass the quality control component of 1D-Var. \item Retrieval differences for skin temperature, cloud fraction and cloud top pressure are smaller than retrieved standard deviation values for over 75\% of IASI observations for all choices of correlated OEC matrix. Up to 5\% of retrievals have large differences relative to the retrieved standard deviation. \item As the minimum eigenvalue of the OEC matrix is increased, the difference between $E_{diag}$ (using the current operational diagonal OEC matrix) and experimental retrieved values reduces. \end{itemize} {\color{black} We also find that for most variables studied, the RSD values are of a similar size to BSD values. We note that the BSD values for cloud variables are artificially inflated, and are hence an order of magnitude larger than the corresponding RSD values. This indicates that observation information carries a weight in the 1D-Var objective function that is as large as, or larger than, that of the background profiles.
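Since several of these conclusions rest on increasing $\lambda_{min}(\textbf{R})$ through reconditioning, we include a minimal sketch of one commonly used approach: a ridge-type adjustment that adds a constant to the diagonal of $\textbf{R}$ until a chosen target condition number is reached. This is illustrative only and may differ in detail from the reconditioning applied operationally.
\begin{verbatim}
import numpy as np

def ridge_recondition(R, target_condition):
    """Add delta*I to R so that the ratio of its largest to smallest
    eigenvalue equals the chosen target condition number.
    Illustrative only; other reconditioning methods exist."""
    eigvals = np.linalg.eigvalsh(R)          # ascending order
    lam_min, lam_max = eigvals[0], eigvals[-1]
    if lam_max / lam_min <= target_condition:
        return R                             # already well conditioned
    delta = (lam_max - target_condition * lam_min) / (target_condition - 1.0)
    return R + delta * np.eye(R.shape[0])

# Toy correlated OEC matrix with a large condition number,
# reconditioned to a target of 67 (as in E_67).
R = np.array([[1.0, 0.99], [0.99, 1.0]])
R67 = ridge_recondition(R, target_condition=67.0)
print(np.linalg.cond(R), np.linalg.cond(R67))
\end{verbatim}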
\linebreak} The qualitative conclusions from this work agree with the theoretical results of \citet{tabeart17a}, which prove that for a linear observation operator, increasing the minimum eigenvalue of the OEC matrix is important in terms of convergence of a variational data assimilation routine.\linebreak We emphasise that these convergence results contradict the common assumption that the use of correlated OEC matrices in a variational data assimilation scheme will cause convergence problems. In fact, one key benefit of using correlated OEC matrices in a 1D-Var framework is the increase in convergence speed, particularly when combined with reconditioning methods. At the Met Office the 1D-Var routine is run every 6 hours for the global model, so reducing the cost of the routine would save significant computational effort. Additionally, the faster convergence that is achieved by correlated choices of OEC could permit stricter convergence criteria, e.g. reducing the maximum number of iterations from $10$ to $8$, which would also result in computational savings. However, care needs to be taken to consider how this will interact with other aspects of the quality control procedure and ensure that `good' observations are not rejected. \linebreak Changes to OEC matrices also alter the quality control aspect of the 1D-Var procedure, so care needs to be taken to ensure that these changes to the system are well understood. In particular, reducing the number of iterations required for convergence of the 1D-Var routine means that a larger number of observations were accepted by our tests and passed to the 4D-Var routine. For observations that were accepted by all experiments, we considered changes to retrieved estimates for skin temperature, cloud top pressure and cloud fraction. Although changes to the retrieved values with different OEC matrices were small for the majority of observations, for a small percentage of observations, the differences between retrieved values were very large. As ST, CTP and CF are not estimated as part of the 4D-Var procedure, such large changes may have significant effects on the analysis for 4D-Var. {\color{black} The most extreme of these differences (particularly for ST) are unrealistic and can be viewed as 1D-Var failures. This highlights that changes to the 1D-Var system, such as the introduction of correlated OEC matrices, must be made in conjunction with tuning of the quality control procedures.} In general, improvements to convergence need to be balanced with impacts on other aspects of the assimilation system, such as changes to quality control, analysis fit and forecast skill. \section*{Acknowledgements} This work is funded in part by the EPSRC Centre for Doctoral Training in Mathematics of Planet Earth, the NERC Flooding from Intense Rainfall programme (NE/K008900/1), the EPSRC DARE project (EP/P002331/1) and the NERC National Centre for Earth Observation. \bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro} For decades, dark matter searches have been carried out by many experiments, although the nature of dark matter is yet to be clarified. Direct dark matter searches, which detect nuclear recoils induced by dark matter, are among the most useful methods to explore its nature. In particular, assuming that dark matter consists of Weakly Interacting Massive Particles (WIMPs), direct dark matter searches set the strongest limits~\cite{Arbey:2021gdg}. One of the search methods is to measure the deposited energy of recoil nuclei and evaluate any excess over the background-only hypothesis in the energy spectra. On the other hand, the annual modulation and the directionality are more characteristic signatures of WIMPs. The annual modulation of the WIMP-induced event rate comes from the orbital motion of the Earth around the Sun, while the directionality of the signals arises from the motion of the solar system around the Milky Way Galaxy. While the magnitude of the annual modulation could be a few percent~\cite{Baum:2018ekm}, the forward-backward ratio of nuclear recoil events with respect to the direction of this motion could be an order of magnitude~\cite{Spergel:1987kx}. Moreover, the directional method can distinguish WIMPs from the neutrino-induced background (CE$\nu$NS), which allows searches to explore beyond the neutrino floor~\cite{Billard:2013qya}. Therefore, the directionality measurement is a significant tool to reveal the nature of WIMP dark matter. One of the widely used methods of direction-sensitive direct dark matter search is to detect the trajectories of recoil nuclei using a gaseous time projection chamber (TPC). Several experiments such as DRIFT~\cite{DRIFT:2016utn} and NEWAGE~\cite{Ikeda:2021ckk} have carried out dark matter searches in underground observatories using their own gaseous TPCs. While NEWAGE uses CF$_{4}$ gas as both the TPC gas and the dark matter target, DRIFT uses CS$_{2}$ gas in the chamber. CS$_{2}$ is a so-called "negative ion gas", which has electro-negative characteristics. In negative ion gases, ionized electrons are captured by the gas molecules immediately after their production, and negative ions, instead of electrons, drift in the gas volume suffering less diffusion. This can improve the precision of track reconstruction. Furthermore, it turned out that negative ion gases such as CS$_{2}$ create "minority carriers" in the electron capture process. The creation of minority carriers allows tracks to be reconstructed with their absolute coordinate in the drift direction~\cite{DRIFT:2014bny}. After the "discovery" of the CS$_{2}$ gas, SF$_{6}$, which is safer than CS$_{2}$ and is suitable for spin-dependent searches using fluorine (F) recoils, was found to behave similarly~\cite{Phan:2016veo}. Although several studies on SF$_{6}$ gas had been conducted~\cite{Ikeda:2017jvy,Baracchini:2017ysg,Ligtenberg:2021viw} aiming for dark matter searches, no practical TPCs with SF$_{6}$ gas had been developed because of the difficulty of developing micro-patterned gaseous detectors (MPGDs) with readout electronics suitable for negative ion TPCs. Recently, a prototype negative ion micro TPC (NI$\mu$TPC) was developed with an MPGD and dedicated electronics~\cite{Ikeda:2020pex}, which demonstrated the first three-dimensional tracking of alpha particles with absolute coordinate reconstruction. This paper describes further studies on the NI$\mu$TPC performance, namely three-dimensional track reconstruction of nuclear recoils using a neutron source.
We developed a new low-noise preamplifier ASIC and implemented a self-triggering system in the back-end electronics. The development of the NI$\mu$TPC opened the way to dark matter searches using negative ion gases. The rest of this paper consists of three parts. Section~\ref{sec:detector} describes the NI$\mu$TPC together with its data acquisition system. Section~\ref{sec:experiment} shows experimental results using alpha and neutron sources. We conclude our studies in Section~\ref{sec:conclusion}.
\section{Detector} \label{sec:detector} In this section we describe the detector, including the data acquisition system, used to evaluate it as a device for dark matter searches. This section also explains the behavior of SF$_{6}$ gas in the TPC. \subsection{Negative ion micro TPC} \label{sec:NIuTPC} Figure~\ref{fig:NIuTPC} shows a schematic of our NI$\mu$TPC. We used one of the MPGD variations, a micro pixel chamber ($\mu$-PIC~\cite{Hashimoto:2020xly}; Dai Nippon Printing Co., Ltd.), as a readout device. It has 256~$\times$~256~pixels with 400~$\mu$m pitch in a region of 10~$\times$~10~cm$^{2}$. These electrodes are connected by 256~anode strips and 256~cathode strips, which cross each other orthogonally. This enables detection of the two-dimensional position where a charged particle deposits its energy. In this study, we used only 32~anode strips and 64~cathode strips placed in the center of the $\mu$-PIC due to the limited number of readout channels of the electronics. The anode strips have cylindrical electrodes with a diameter of 60~$\mu$m, and the cathode strips have circular openings with a diameter of 250~$\mu$m surrounding each anode electrode. Such a pixel structure forms a strong electric field around the anode electrodes, which induces avalanches and consequently amplifies the electrons in the gas. An additional gas amplification is provided by two 10~$\times$~10 cm$^{2}$ layers of gas electron multipliers (GEM~\cite{Sauli:2016eeu}; SciEnergy Co., Ltd.) above the $\mu$-PIC, with 3~mm transfer gaps. The GEM is a 100~$\mu$m thick liquid crystal polymer with 5~$\mu$m copper coatings on both sides. The GEM has 70~$\mu$m holes with a pitch of 140~$\mu$m over its entire surface~\cite{Tamagawa:2009xa}. The TPC volume, with a drift length of 144~mm, is formed by 12 copper rings with an inner diameter of 64~mm. The rings are connected via 50~M$\Omega$ resistors to form a uniform electric field within the TPC volume. The drift plane was made of a stainless-steel mesh. Voltages of -7.1~kV, -1500~V, -1300~V, -750~V, -550~V, 0~V, and 450~V are supplied to the drift plane, top and bottom copper layers of the two GEMs (four layers in total), and cathode and anode electrodes of the $\mu$-PIC, respectively. The total gas gain was 2800 under these conditions, and the electric field in the detection volume was 0.40~kV/cm. The outer vessel was made of stainless steel and was filled with pure SF$_{6}$ gas at 20~Torr. The gas was circulated at a rate of 0.1~L/min. Water vapor contamination is filtered out by Zeolum (A-3) installed in the circulation system at room temperature. The amount of water contamination was monitored with a dew point meter (Easidew Transmitter, MICHELL Instruments Ltd.). During the measurement, the water contamination level was kept below 2500~ppm.
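As a rough consistency check (assuming the drift field is simply the potential difference between the drift plane and the uppermost GEM electrode divided by the 144~mm drift length; the effective drift gap may differ slightly in practice),
\[
E_{\rm drift} \approx \frac{7100~{\rm V} - 1500~{\rm V}}{14.4~{\rm cm}} \approx 0.39~{\rm kV/cm},
\]
in line with the quoted value of 0.40~kV/cm.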
\begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{figure1.pdf} \caption{Schematic of the NI$\mu$TPC.} \label{fig:NIuTPC} \end{figure}
\subsection{Data acquisition system} \label{sec:DAQ} Figure~\ref{fig:DAQ} shows a schematic of the data acquisition (DAQ) system. Each electrode was AC-coupled to its amplifier with a 100~pF capacitor. The charge was amplified by a dedicated preamplifier ASIC (LTARS2018~\cite{Kishishita:2020skm}), an updated version of LTARS2014~\cite{Sakashita:2015qqa} and LTARS2016~\cite{Nakazawa:2019hvo}. The previous versions suffered from electrical noise, which was improved in the latest version, LTARS2018. The LTARS2018 reads out 16 strips per chip, and we developed a printed circuit board loaded with two LTARS2018 chips. Due to the limited number of available boards, one board was used for the anode strips (32~channels) and two for the cathode strips (64~channels) in this study. Amplified signals were digitized by general-purpose digitization boards. Each digitization board mainly consists of 2.5~MHz sampling 12~bit ADCs with a dynamic range of 2~V, a Xilinx Artix-7 FPGA, NIM I/O ports and an Ethernet interface. The digitized data were sent to a computer using the SiTCP technology~\cite{Uchida:2007cwb}. A self-triggering algorithm was developed and implemented in the firmware for this study. At least three channels with signals above a certain threshold were required to issue a trigger.
\begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{figure2.pdf} \caption{Diagram of the data acquisition system.} \label{fig:DAQ} \end{figure}
\subsection{Characteristics of SF$_{6}$ gas in TPC} \label{sec:SF6} The properties of SF$_{6}$ gas are described in reference~\cite{Phan:2016veo}. After the ionization caused by a charged particle, the ionized electrons are immediately attached to SF$_{6}$ molecules, creating metastable excited SF$_{6}^{-*}$ ions. Each metastable SF$_{6}^{-*}$ ion then forms a negative ion of SF$_{6}^{-}$ or SF$_{5}^{-}$. The branching fraction of SF$_{6}^{-}$ and SF$_{5}^{-}$ is known to be 97\% and 3\%, respectively~\cite{Phan:2016veo}, at room temperature. Figure~\ref{fig:sf6_property} illustrates the drift behavior of SF$_{6}^{-}$ and SF$_{5}^{-}$. Due to the difference in their masses, the drift velocity of SF$_{6}^{-}$ is less than that of SF$_{5}^{-}$, and consequently two pulses can appear with a time difference depending on the drift length. The right plot of Fig.~\ref{fig:sf6_property} shows such a characteristic signal.
\begin{figure}[htbp] \begin{center} \begin{tabular}{c} \begin{minipage}{0.57 \hsize} \begin{center} \includegraphics[clip, width=8.0cm]{figure3-1.pdf} \hspace{1.6cm} \end{center} \end{minipage} \begin{minipage}{0.43 \hsize} \begin{center} \includegraphics[clip, width=6.4cm]{figure3-2.pdf} \hspace{1.6cm} \end{center} \end{minipage} \end{tabular} \caption{Illustration of SF$_{6}^{-}$ and SF$_{5}^{-}$ ion drifts (left), and a typical pulse of a TPC with SF$_{6}$ gas (right)~\cite{Phan:2016veo}.} \label{fig:sf6_property} \end{center} \end{figure}
\section{Experiment and result} \label{sec:experiment} In this section, we describe the detector calibration, the measurement, and its results. Dark matter induced signals were emulated by nuclear recoil events caused by neutrons from a $^{252}$Cf source. Before the measurement, an energy calibration was performed.
\subsection{Energy calibration} \label{sec:calibration} The energy calibration was performed using an $^{241}$Am source, which emits 5.5~MeV alpha rays. The $^{241}$Am source was placed inside the chamber. Figure~\ref{fig:waveform_alpha} shows a typical waveform of an alpha-ray event for each anode strip. Here both SF$_{6}^{-}$ and SF$_{5}^{-}$ peaks are clearly seen in the anode strips. The energy deposition was measured by integrating the ADC values of the SF$_{6}^{-}$ pulses, and calibrated so that the peak of the ADC distribution corresponds to 128~keV, the value obtained from a Geant4 simulation~\cite{GEANT4:2002zbu,Allison:2006ve,Allison:2016lfl}. A track was reconstructed from the positions of the hit anode and cathode strips and their hit timings, defined with a certain threshold. The left plot of Fig.~\ref{fig:cal_alpha} shows the two-dimensional distribution of the deposited energy and the track length. The tail component on the higher energy side, which is due to events passing through the detection region diagonally, was removed by selecting events with length less than 1.3~cm. The right plot of Fig.~\ref{fig:cal_alpha} shows the projection onto deposited energy. The energy resolution, calculated from the distribution after the length cut, was 17\% (in sigma) at 128~keV. In the following discussions, energies are based on this alpha-ray calibration.
\begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{figure4.pdf} \caption{Waveforms of a typical alpha-ray event recorded through the anode (left) and cathode (right) strips.} \label{fig:waveform_alpha} \end{figure}
\begin{figure}[htbp] \begin{center} \begin{tabular}{c} \begin{minipage}{0.5 \hsize} \begin{center} \includegraphics[clip, width=7.4cm]{figure5-1.pdf} \hspace{1.6cm} \end{center} \end{minipage} \begin{minipage}{0.5 \hsize} \begin{center} \includegraphics[clip, width=7.4cm]{figure5-2.pdf} \hspace{1.6cm} \end{center} \end{minipage} \end{tabular} \caption{Distribution of deposited energy and track length (left) and its projection onto deposited energy before and after the length cut described in the text (right). The red line in the left plot shows the selection line for alpha-ray events that pass through the detection region perpendicular to the anode strips.} \label{fig:cal_alpha} \end{center} \end{figure}
\subsection{Nuclear recoil measurement} \label{sec:NRrun} The detection capability for nuclear recoil events was evaluated with a $^{252}$Cf neutron source. Measurements were carried out with the source placed at the position (0~mm, 133~mm, 89~mm) for a DAQ time of 447~minutes. Energy calibrations were performed regularly at intervals of about 120~min. The gas gain, gas pressure, dew point, and the voltages and currents of the GEMs and $\mu$-PIC were monitored and remained stable during the experiment. In order to reduce background events such as alpha rays or electron recoils caused by ambient gammas, the event selections described below were applied. First, events with signals at the edges of the detection region were rejected to reduce alpha-ray backgrounds. Figure~\ref{fig:enelen_nr} shows the distribution of energy and track length after the first cut. The distribution shows two components: one consists of events with a correlation between energy and length, and the other of uncorrelated events lying almost along the $y$ axis. The correlated events are nuclear recoils caused by neutrons, and the others are mainly electron recoil backgrounds induced by ambient gamma rays.
To reduce electron recoil backgrounds, we selected events below the red line drawn in Fig.~\ref{fig:enelen_nr} as a second event selection.
\begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{figure6.pdf} \caption{Distribution of the energy and track length in the $^{252}$Cf run after the first cut. The red line shows the cut line for the energy-length selection. Events below the red line were selected.} \label{fig:enelen_nr} \end{figure}
Figure~\ref{fig:waveform_nr} shows a typical waveform of a nuclear recoil event that passed the selections above. In this event, both SF$_{5}^{-}$ and SF$_{6}^{-}$ signals are visible at around 1020~$\mu$s and 1090~$\mu$s, respectively, in the anode strips. Signals in the cathode strips are smaller than those in the anode strips. This issue is specific to the $\mu$-PIC readout and is related to positive ion backflow.
\begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{figure7.pdf} \caption{Waveforms of a typical nuclear recoil event in anode (left) and cathode (right) strips.} \label{fig:waveform_nr} \end{figure}
The detection efficiency of SF$_{5}^{-}$ peaks was evaluated for the events passing all selections. The definition of SF$_{5}^{-}$ peak detection is as follows: \begin{itemize} \item The search region of interest was defined as [-180~$\mu$s, -30~$\mu$s] from the highest peak position for each channel. \item The highest peak above a certain threshold in the region of interest was defined as the SF$_{5}^{-}$ peak. \item At least one such channel was required to define an event as having a SF$_{5}^{-}$ peak. \end{itemize} Figure~\ref{fig:minorityEff} shows the detection efficiencies of SF$_{5}^{-}$ peaks for each energy slice. The overall efficiency reached 70~$\pm$~5\%, which is sufficient to apply this detector to dark matter searches. There are two reasons for the efficiency loss. One is a lack of deposited energy, so that the SF$_{5}^{-}$ peaks fall below the threshold, especially in the lower energy range. This issue will be improved by increasing the gas gain. The other is that the SF$_{5}^{-}$ peak merges into the SF$_{6}^{-}$ peak if a nuclear recoil occurs near the GEMs + $\mu$-PIC system. This occurs not only for lower energy but also for higher energy events. Since $\Delta t$ was required to be greater than 30~$\mu$s, the acceptance is estimated to be 80\% by taking the solid angle from the $^{252}$Cf source into account. Therefore we obtained reasonable efficiencies in the energy range of [100~keV, 400~keV].
\begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{figure8.pdf} \caption{Detection efficiencies of SF$_{5}^{-}$ peaks as a function of energy measured in the $^{252}$Cf run.} \label{fig:minorityEff} \end{figure}
The absolute position along the drift direction, defined as the $z$ coordinate, was calculated as \begin{equation} z = \frac{v_{{\rm SF}_{5}^{-}} \cdot v_{{\rm SF}_{6}^{-}}}{v_{{\rm SF}_{5}^{-}} - v_{{\rm SF}_{6}^{-}}} \Delta t, \end{equation} where $v_{{\rm SF}_{5}^{-}}$ and $v_{{\rm SF}_{6}^{-}}$ are the drift velocities of the SF$_{5}^{-}$ and SF$_{6}^{-}$ carriers, respectively, and $\Delta t$ is the time difference between the two peaks. Figure~\ref{fig:hitmap_projection} shows the projected hit maps of the absolute positions of reconstructed tracks in the $^{252}$Cf run. Events are distributed uniformly in the $x-y$ projection map. Since the $^{252}$Cf source was placed at $z = 89$~mm, the number of events reconstructed around $z = 89$~mm is slightly increased.
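The minority-peak search and the $z$ reconstruction described above can be summarised in a short sketch (illustrative only: the 0.4~$\mu$s sample spacing corresponds to the 2.5~MHz ADC of Section~\ref{sec:DAQ}, and the drift velocities are left as inputs to be taken from measurement rather than assumed here).
\begin{verbatim}
import numpy as np

DT_SAMPLE = 0.4   # microseconds per sample (2.5 MHz sampling assumed)

def find_sf5_peak(waveform, threshold):
    """Return Delta t (microseconds) between the main SF6- peak and an
    SF5- peak found in [-180, -30] microseconds before it, or None."""
    t6 = int(np.argmax(waveform))                  # main SF6- peak sample
    lo = max(0, t6 - int(180 / DT_SAMPLE))
    hi = max(lo, t6 - int(30 / DT_SAMPLE))
    window = waveform[lo:hi]
    if window.size == 0 or window.max() < threshold:
        return None
    t5 = lo + int(np.argmax(window))               # SF5- peak sample
    return (t6 - t5) * DT_SAMPLE

def z_from_dt(dt_us, v_sf5, v_sf6):
    """Absolute drift coordinate from the SF5-/SF6- time difference,
    using the formula above; v_sf5 and v_sf6 are the measured drift
    velocities in the chosen length unit per microsecond."""
    return v_sf5 * v_sf6 / (v_sf5 - v_sf6) * dt_us
\end{verbatim}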
The event reconstruction works successfully at any absolute spatial hit position. Although three events were mis-reconstructed outside the detection area along the $z$ axis due to the position resolution, the events were successfully reconstructed within the detection volume. Figure~\ref{fig:3Dhitmap} demonstrates the three-dimensional absolute hit position reconstruction. Here the $x$ and $y$ axes are defined as the directions perpendicular to the anode and cathode strips, respectively. The three-dimensional hit position reconstruction worked successfully without bias at any hit position.
\begin{figure}[htbp] \begin{center} \begin{tabular}{c} \begin{minipage}{0.5 \hsize} \begin{center} \includegraphics[clip, width=7.4cm]{figure9-1.pdf} \hspace{1.6cm} \end{center} \end{minipage} \begin{minipage}{0.5 \hsize} \begin{center} \includegraphics[clip, width=7.4cm]{figure9-2.pdf} \hspace{1.6cm} \end{center} \end{minipage} \end{tabular} \begin{tabular}{c} \begin{minipage}{0.5 \hsize} \begin{center} \includegraphics[clip, width=7.4cm]{figure9-3.pdf} \hspace{1.6cm} \end{center} \end{minipage} \begin{minipage}{0.5 \hsize} \begin{center} \includegraphics[clip, width=7.4cm]{figure9-4.pdf} \hspace{1.6cm} \end{center} \end{minipage} \end{tabular} \caption{Distribution of the absolute positions of reconstructed tracks in the $^{252}$Cf run: $x-y$ (left top), $x-z$ (right top), $y-z$ (left bottom) and $z$ (right bottom) projections. The blue boxes and line represent the detection area. Events after all selections are plotted.} \label{fig:hitmap_projection} \end{center} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{figure10.pdf} \caption{Distribution of three-dimensional absolute hit positions in the $^{252}$Cf run after all selections.} \label{fig:3Dhitmap} \end{figure}
We have thus demonstrated the first three-dimensional absolute position reconstruction of nuclear recoil tracks with a $\mu$TPC using a negative ion gas. In this study, the statistics were limited by the small detection area. In order to increase the size of the fiducial region, mass production of the electronics is required. Extending the fiducial region, which is equivalent to increasing the statistics, will also enable studies of the position and angular resolutions of tracks. With these studies accomplished, a directional dark matter search with NI$\mu$TPCs can be carried out.
\section{Conclusion} \label{sec:conclusion} We developed a prototype negative ion micro TPC using a $\mu$-PIC readout with a dedicated electronics system, filled with SF$_{6}$ gas at 20~Torr. We irradiated the detector with neutrons to emulate DM-induced nuclear recoils. We succeeded in reconstructing the three-dimensional absolute hit positions of nuclear recoil tracks with an efficiency of 70~$\pm$~5\%. We also demonstrated that tracks can be reconstructed independently of their absolute position along the drift direction. These results mark an important step for NI$\mu$TPCs towards practical use in directional dark matter searches. \section*{Acknowledgement} \label{sec:ack} This work was partially supported by KAKENHI Grants-in-Aid (16H02189, 19H05806, 21K13943 and 22H04574). \vspace{0.2cm} \noindent \let\doi\relax \bibliographystyle{unsrt}
\section{Introduction} The discovery of over twenty quasars at redshift $z \gtapprox 5.7$ (e.g. \citealt{Fan00,Fan01a,Fan03,Fan04,Fan06a,Goto06}), with masses estimated to be $ \gtapprox 10^9 \ensuremath{M_\odot}$ \citep{Barth03, Vestergaard04, Jiang06a, Kurk07}, challenges our current understanding of the early universe at the epoch of galaxy formation. The highest redshift quasar, SDSS J1148+5251 \citep{Fan03}, with $z \approx 6.43$ (corresponding to a cosmic time $t=0.87$ Gyr assuming the standard $\Lambda$CDM concordance cosmology), is thought to harbour a black hole of mass $M_\bullet \approx \mbox{ a few } \times 10^9 \ensuremath{M_\odot}$ \citep{Fan03, Barth03, Willott03, Haiman04}. This poses a significant constraint on viable black hole growth mechanisms. In $\Lambda$CDM cosmology, dark matter halos merge hierarchically, with black holes merging and accreting in the gaseous centre (see e.g. \citealt{VolonteriRees05, Shapiro05, Hopkins06, Li07}). During mergers of host galaxies, the nuclear black holes can coalesce or be ejected due to gravitational wave recoil. Gravitational recoil does not significantly impede black hole growth if accretion is Eddington-limited \citep{Haiman04, YooMiralda04, Volonteri07}. Coalescence should produce a linear relation between the halo mass and the black hole mass (see e.g. \citealt{Haenhelt98}). However, the observed relation between the black hole mass and the bulge velocity dispersion of galaxies \citep{FerrareseMerrit00} suggests instead that ongoing, sustained accretion, rather than coalescence as a result of mergers, is the primary process governing black hole growth. An important constraint on accretion growth models is that the mean radiative efficiency $\ensuremath{\epsilon_{\rm r}}$ must be sufficiently low to maximize the amount of matter accreted onto the black hole and minimize the amount of rest mass energy lost to radiation. However, the observed ratio of the quasar/AGN luminosity density to the local SMBH mass density (at redshift $z \ltapprox 5$) requires $\ensuremath{\epsilon_{\rm r}} \gtapprox 0.1$ \citep{Soltan82,YuTremaine02,Elvis02, AllerRichstone02, Marconi04}. This is an empirical, model-independent lower limit. Since standard disc accretion onto a Schwarzschild black hole has a radiative efficiency $\ensuremath{\epsilon_{\rm r}} \approx 0.06$ \citep{NovikovThorne}, it has been argued (e.g. \citet{Elvis02, YuTremaine02, Barger05}) that the Soltan relation implies that quasars harbour Kerr black holes. Indeed, models of cosmological black hole evolution \citep{Thorne74, Volonteri05} and merger-driven accretion \citep{diMatteo05}, constrained by the evolution of the luminosity function of quasars (see e.g. \citet{Hopkins07}), indicate that quasars undergo rapid spin-up. Furthermore, these models show that in order to grow cosmological black holes from an initial seed mass to that of a quasar by $z \gtapprox 3$, the amount of material accreted in a single accretion episode must be quite large. This acts to rapidly spin up the black holes of massive high-redshift quasars \citep{VSL07}, contrary to the suggested low-spin scenario. We note, however, that models for low-spin black holes have also been proposed \citep{King05, KingPringle06}. The radiative efficiency of a maximally spinning accreting black hole is $\approx 40\%$, which is too high to grow a SMBH by $z \approx 6$, unless implausibly high, super-Eddington accretion rates are invoked.
Observations indicate that the dimensionless mass accretion rate, $\dot{m} \equiv L / \ensuremath{L_{\rm Edd}}$, where $\ensuremath{L_{\rm Edd}} = 4 \pi G M_\bullet \mu m_{\rm p} c / \sigma_{\rm T}$ is the Eddington luminosity, is limited to $\dot{m} \ltapprox 2$ at redshifts approaching $z \approx 6$ \citep{Hopkins06}. Thus, it is difficult to reconcile standard disc accretion growth of cosmological, spinning SMBHs with the observational constraints $\ensuremath{\epsilon_{\rm r}} \gtapprox 0.1$ and $\dot{m} \simeq 1$. Mergers produce large-scale gravitational instabilities which can funnel gas down to the galactic core where it can be accreted \citep{MihosHernquist94a, MihosHernquist94b}. In order for efficient disc accretion to proceed, another process must facilitate the transport of angular momentum on smaller scales. Numerical simulations demonstrate that the effective viscosity produced by magnetohydrodynamical turbulence, generated by the magnetorotational instability, is insufficient to account for the very high mass accretion rates inferred in the most powerful accreting sources, such as quasars \citep{Hawley00, StonePringle01, Hawley01, HawleyBalbus02, KingPringle07}, unless a large-scale, systematic poloidal field is present in the accretion flow \citep{SteinackerHenning01, Campbell03, KigureShibata05, Salmeron07}. In this case, a magnetized jet forms and the overall mass accretion rate increases considerably as a result of enhanced angular momentum transport and negligible mass loss \citep{KuncicBicknell04, 2007a, 2007b}. This creates auspicious conditions for efficient mass growth of black holes. Approximately $60\%$ of all AGN display outflow phenomena \citep{GangulyBrotherton07}. Relativistic outflows in particular are a characteristic feature of accreting SMBHs. X-ray observations of galaxy clusters indicate that substantial amounts of mechanical energy may be deposited into the intracluster medium by powerful AGN jets (see e.g. \citealt{Birzan04, Allen06, Fabian06, Rafferty06, Taylor06}). If some fraction of the total accretion power is converted to jet kinetic power, then $\ensuremath{\epsilon_{\rm r}}$ is lower than that predicted by standard disc accretion for a given $\ensuremath{\dot{M}_{\rm a}}$. Hence, the black hole growth rate for jet-enhanced accretion is larger. Existing accretion growth models simply set the total accretion efficiency $\ensuremath{\epsilon_{\rm a}}$ equal to the radiative efficiency $\ensuremath{\epsilon_{\rm r}}$ and thereby do not take into account the conversion of accretion power into non-radiative form (e.g. mechanical energy). Relativistic jets, in particular, can carry away a substantial amount of kinetic power but very little mass. Furthermore, the positive correlation between radio-loudness and black hole mass \citep{Laor00, Lacy01, McLureDunlop02, Oshlack02, WooUrry02, Dunlop03, Marziani03, Shields03, McLureJarvis04, MetcalfMagliocchetti06, Liu06} suggests that jet-enhanced accretion growth results in more massive black holes. In this paper, we quantitatively determine the effect of jet-enhanced accretion on the cosmological growth of SMBHs. In Section \ref{sectionacc} we present results for a jet-enhanced accretion growth model. We discuss the implications of these results in Section \ref{sectiondisc} and give concluding remarks in Section \ref{sectionconc}. \section{Jet-Enhanced Accretion}\label{sectionacc} The black hole mass growth rate is given by (see e.g.
\citealt{Shapiro05}) \begin{equation}\label{dmdt} \frac{dM_{\bullet}}{dt} = (1-\ensuremath{\epsilon_{\rm a}})\ensuremath{\dot{M}_{\rm a}} \end{equation} where $\ensuremath{\epsilon_{\rm a}}$ is the total accretion efficiency and $M_\bullet$ is the black hole mass. The efficiency of conversion of rest mass energy to radiation is \begin{equation}\label{erad} \ensuremath{\epsilon_{\rm r}} = \frac{L} {\ensuremath{\dot{M}_{\rm a}} c^2} \end{equation} where $L$ is the luminosity. Existing accretion growth models simply set $\ensuremath{\epsilon_{\rm r}} = \ensuremath{\epsilon_{\rm a}}$ and hence do not take into account the conversion of accretion power into non-radiative forms, such as kinetic power in a relativistic jet. The total accretion efficiency should thus be more accurately expressed as \begin{equation} \ensuremath{\epsilon_{\rm a}} = \ensuremath{\epsilon_{\rm r}} + \ensuremath{\epsilon_{\rm j}} \end{equation} where $\ensuremath{\epsilon_{\rm j}}$ is the jet efficiency (see \citealt{JolleyKuncic07}). Accretion power can be converted to jet power via a magnetic torque that acts over the disc surface and vertically transports angular momentum and energy from the disc \citep{KuncicBicknell04}. The rate at which work is done against the disc by the magnetic torque is \citep{JolleyKuncic07} \begin{equation} P_{\rm j} = \frac{3}{2} c \int_{r_i}^\infty f_1(r) \left[ \int_{r_i}^r f_2(r) B_\phi B_z \, \rm{d}r \right]\, \rm{d}r \end{equation} where $f_1(r)$ and $f_2(r)$ are dimensionless functions of the disc radius, $r_i$ is the innermost stable orbit, and $B_\phi$ and $B_z$ are the azimuthal and vertical components of the magnetic field, respectively. Note that a large scale poloidal field is required to produce jets in numerical simulations (e.g. \citealt{SteinackerHenning01, Campbell03, KigureShibata05}). A non-zero $\ensuremath{\epsilon_{\rm j}} = P_{\rm j}/(\ensuremath{\dot{M}_{\rm a}} c^2)$ can enhance accretion growth because jets transport angular momentum and thus give rise to a higher mass accretion rate \citep{KuncicBicknell04}. For a fixed $\ensuremath{\epsilon_{\rm a}}$ (or a fixed $L$), this means that $\ensuremath{\epsilon_{\rm r}}$ is lower. The accretion efficiency $\ensuremath{\epsilon_{\rm a}}$ for a relativistic disc \citep{NovikovThorne} evolves with the dimensionless spin $a = J/M_\bullet$ of the black hole, where $J$ is the specific angular momentum. The large amounts of material that must be accreted to form a $z \gtapprox 3$ quasar lead to very rapid spin-up of a black hole \citep{VSL07}. Thus, we take the spin $a$ and hence, the accretion efficiency \ensuremath{\epsilon_{\rm a}}, to be approximately constant with time. If the average radiative efficiency and hence, average $\ensuremath{\epsilon_{\rm j}}$, are also approximately constant, then the time evolution of black hole accretion growth is \begin{equation} M_\bullet(t) = M_\bullet(t_0) \exp \left[ (t - t_0) \frac{(1-\ensuremath{\epsilon_{\rm a}})}{\ensuremath{\epsilon_{\rm r}}} \frac{4\pi G m_{\rm p}}{\sigma_{\scriptscriptstyle \rm T} c} \dot{m} \right] \end{equation} where $M_\bullet(t_0)$ is the initial mass of the black hole. Cosmological simulations suggest seed black holes with $M_\bullet(t_0) \approx 600 \ensuremath{M_\odot}$ at $z \approx 25$ \citep{MadauRees01, OmukaiPalla03, Yoshida03}.
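The scalings implied by this exponential solution can be explored with a few lines of code. This is a sketch only: the constants are standard CGS values, $\mu$ is taken to be unity, and the elapsed time between $z \approx 25$ and $z \approx 6$ is a placeholder of order 0.8~Gyr, since its exact value depends on the adopted cosmology.
\begin{verbatim}
import numpy as np

# Standard CGS constants.
G       = 6.674e-8     # gravitational constant
M_P     = 1.673e-24    # proton mass (g)
SIGMA_T = 6.652e-25    # Thomson cross-section (cm^2)
C       = 2.998e10     # speed of light (cm/s)
GYR     = 3.156e16     # seconds per Gyr

def final_mass(m_seed, eps_a, eps_r, mdot, dt_gyr):
    """Evaluate M(t) = M(t0) exp[ dt (1 - eps_a)/eps_r
    (4 pi G m_p)/(sigma_T c) mdot ], in solar masses."""
    rate = (1.0 - eps_a) / eps_r * 4.0 * np.pi * G * M_P / (SIGMA_T * C) * mdot
    return m_seed * np.exp(rate * dt_gyr * GYR)

# Illustrative parameters: a 600 Msun seed, eps_a = 0.29, Eddington-limited
# accretion, and a placeholder elapsed time of ~0.8 Gyr; the jet-enhanced
# case lowers the radiative efficiency to eps_r = 0.1.
print(final_mass(600.0, 0.29, 0.29, 1.0, 0.8))   # no jet (eps_r = eps_a)
print(final_mass(600.0, 0.29, 0.10, 1.0, 0.8))   # jet-enhanced
\end{verbatim}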
\begin{figure*} \centerline{\includegraphics[width=14truecm]{f1}} \caption{The effect of jet driven accretion on the final mass of a black hole at $z = 6$ growing from an initial mass $M_\bullet(t_0) = 600 \ensuremath{M_\odot}$ at redshift $z = 25$ for different values of the dimensionless spin $a$ and corresponding accretion efficiency $\ensuremath{\epsilon_{\rm a}}$, as shown. The parameter $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}}$ is the fractional accretion power removed by a jet. The solid line is for an Eddington ratio $\dot{m} = 2$, the dotted line is for $\dot{m} = 1$ and the dashed line for $\dot{m} = 0.5$. The shaded areas indicate regions where the radiative efficiency is $\ensuremath{\epsilon_{\rm r}} \ge 0.1$.} \label{fig1} \end{figure*} \begin{figure*} \centerline{\includegraphics[height=20truecm]{f2}} \caption{The black hole mass evolution as a function of redshift. The left column is for the case $\dot{m} = 0.5$, the middle column is for $\dot{m} = 1$, and the right column is for $\dot{m} = 2$. The top row is for a dimensionless spin $a=0.750$ (corresponding to an accretion efficiency $\ensuremath{\epsilon_{\rm a}} = 0.12$), the second row is for $a=0.950$ ($\ensuremath{\epsilon_{\rm a}} = 0.20$), the third row is for $a=0.990$ ($\ensuremath{\epsilon_{\rm a}} = 0.29$) and the bottom row is for $a=0.999$ ($\ensuremath{\epsilon_{\rm a}} = 0.39$). The solid line corresponds to the maximum possible value of $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}}$ such that $\ensuremath{\epsilon_{\rm r}} = 0.1$, and the dotted line is the case without a jet i.e. $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}} = 0$ and $\ensuremath{\epsilon_{\rm r}} = \ensuremath{\epsilon_{\rm a}}$. The shaded areas indicate regions where the radiative efficiency is $\ensuremath{\epsilon_{\rm r}} \ge 0.1$. } \label{fig2} \end{figure*} Figure \ref{fig1} shows the dependence of the final mass of a black hole on the fraction of accretion power removed by a jet for different values of the spin $a$ and Eddington ratio $\dot{m}$. Standard disc accretion, which neglects vertical transport of angular momentum and energy by magnetized jets, corresponds to the case $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}} = 0$. The presence of a jet results in a higher final black hole mass than the standard disc accretion model. The shaded regions in Fig. \ref{fig1} indicate values of $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}}$ where $\ensuremath{\epsilon_{\rm r}} \ge 0.1$, as implied by observations of the ratio of the AGN and quasar luminosity density to the SMBH mass density \citep{Soltan82}. Fig. \ref{fig1} shows that in order to grow a SMBH of mass $M_\bullet \approx 10^9 \ensuremath{M_\odot}$ by $z \approx 6$, a standard disc ($\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}} = 0$) requires a low spin, $a < 0.75$, or highly super-Eddington accretion, $\dot{m} \gg 2$. The presence of a jet significantly relaxes these constraints: Eddington-limited accretion can grow a rapidly spinning (with $a = 0.98$) black hole to $M_\bullet \approx 10^9 \ensuremath{M_\odot}$ by $z \approx 6$ and maintain a radiative efficiency $\ensuremath{\epsilon_{\rm r}} \approx 0.1$ if there is a jet that removes two-thirds of the accretion power. Figure \ref{fig2} shows the black hole accretion growth evolution as a function of $z$ for $25 \ge z \ge 6$ for different values of $\dot{m}$ and $a$. 
The solid curves are for jet-enhanced accretion growth with the largest possible fractional jet power $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}}$, corresponding to $\ensuremath{\epsilon_{\rm r}} = 0.1$. The dotted curves correspond to accretion growth via a standard disc (i.e. $\ensuremath{\epsilon_{\rm j}} = 0$). The shaded region indicates where $\ensuremath{\epsilon_{\rm r}} \ge 0.1$. Jet-enhanced accretion growth is evident in all cases, with the most enhancement occurring at high $a$ and $\dot{m} \gtapprox 1$. Note that for plausible physical parameters, black holes cannot grow to masses $\approx 10^9 \ensuremath{M_\odot}$ by $z \approx 6$ for sub-Eddington accretion rates even in the case of jet-enhanced accretion (Fig. \ref{fig2}; left column). For Eddington-limited accretion (Fig. \ref{fig2}; middle column), a black hole of $\approx 10^9 \ensuremath{M_\odot}$ at $z \approx 6$ can be formed for jet-enhanced accretion (solid line) provided $a \ltapprox 0.95$. By comparison, standard disc accretion at the Eddington rate (dotted line) requires an even lower spin $a \ltapprox 0.75$, which is difficult to reconcile with the rapid spin evolution of high-redshift quasars \citep{VSL07}. For super-Eddington accretion with $\dot{m} = 2$ (Fig. \ref{fig2}; right column), standard disc accretion still requires the spin to be below $0.95$ to grow a $10^9 \ensuremath{M_\odot}$ black hole by $z \approx 6$, whereas jet-enhanced accretion achieves this for all spins up to $a=0.999$. Clearly, the presence of a jet can substantially relax the requirements for growing cosmological SMBHs. The rapid evolution of black holes to a near-maximal spin \citep{Thorne74, Volonteri05} poses a serious problem to standard disc accretion, which cannot grow high-spin SMBHs rapidly enough without invoking implausibly high super-Eddington accretion rates. For example, $\dot{m} = 4$ is required by the standard model to grow a $M_\bullet \approx 10^9 \ensuremath{M_\odot}$ black hole with $a = 0.999$ by $z=6$. Jet-enhanced accretion requires just $\dot{m} = 1.3$, with two-thirds of the accretion power in this case used to power a jet, thereby reducing the radiative efficiency to $\ensuremath{\epsilon_{\rm r}} = 0.1$. \section{Discussion}\label{sectiondisc} The presence of a jet can greatly enhance the black hole mass growth rate because for a given mass accretion rate, the radiative efficiency is lower than that of a standard disc. Or equivalently, the accretion rate is higher for a given luminosity. This arises because jets enhance angular momentum transport; the additional accretion power, which depends on the accretion rate, drives the jet. An important constraint for jet-enhanced accretion growth of black holes is the average jet efficiency over the lifetime of the rapid growth phase ($\Delta t \approx 0.7$ Gyr). Recent studies (see e.g. \citealt{Binney07, Kording07}) suggest relativistic jets can remove well in excess of half the total accretion power (i.e. $\ensuremath{\epsilon_{\rm j}}/\ensuremath{\epsilon_{\rm a}} \gtapprox 50\%$), implying they are an "all or nothing" phenomenon. When combined with the estimated $\approx 18\%$ duty cycle for quasar activity for $M_\bullet \gtapprox 10^9 \ensuremath{M_\odot}$ and $z \ltapprox 2$ (see \citealt{Wang06} and references therein), this suggests a lower limit to the fractional jet power of $\langle \ensuremath{\epsilon_{\rm j}} \rangle / \ensuremath{\epsilon_{\rm a}} \gtapprox 10\%$.
For a black hole with $a = 0.99$ ($\ensuremath{\epsilon_{\rm a}} \approx 0.29$), this gives $\langle \ensuremath{\epsilon_{\rm j}} \rangle \gtapprox 0.03$, consistent with the value independently derived by \citet{Heinz07}. This implies that, on average, accreting black holes liberate most of their energy in the form of radiation rather than jet kinetic energy. However, black holes grow faster during jet-active phases because mass accretion is enhanced and we argue that such rapid growth episodes are necessary to explain the most massive cosmological black holes. If the quasar duty cycle extrapolates to high $z$, then we predict jet-enhanced accretion to grow a $\approx 10^9 \ensuremath{M_\odot}$ black hole by $z \approx 6$ for $\dot{m} \ltapprox 3$. For comparison, standard disc accretion would require an accretion rate that is at least $30\%$ higher. Furthermore, this may be an overly conservative estimate if jets were more prevalent at high $z$ and hence, the quasar activity duty cycle was higher. Existing data suggest that jets were indeed more common at $z \approx 2$ than today, but the radio luminosity function at $z \gtapprox 2$ is still not well constrained (see e.g. \citealt{Willott01, LiuZhang07}). For a $\approx 10^9 \ensuremath{M_\odot}$ accreting black hole with $a = 0.99$ ($\ensuremath{\epsilon_{\rm a}} \approx 0.29$) and $\langle \ensuremath{\epsilon_{\rm j}} \rangle = 0.03$ the average accretion rate needed to produce a luminosity $L = 3 L_{\rm Edd} \approx 4 \times 10^{47} {\rm erg \, s}^{-1}$ with an average radiative efficiency $\langle \ensuremath{\epsilon_{\rm r}} \rangle =0.26$ is $\approx 25 M_\odot \, {\rm yr}^{-1}$. The average jet power is $\langle P_{\rm j} \rangle \approx 4 \times 10^{46}\, \rm{ erg \,s}^{-1}$, which agrees well with jet kinetic powers inferred observationally (e.g. \citealt{MerloniHeinz07}; see also \citealt{CelottiGhisellini07} for a $z \approx 5.7$ jet). As jets have radiative efficiencies typically $\ltapprox 1\%$ (see e.g. \citealt{MerloniHeinz07}), virtually all the kinetic energy is deposited into the ISM or ICM. Over the rapid growth phase ($\Delta t \approx 0.7$ Gyr), the average total energy deposited is $\langle P_{\rm j} \Delta t \rangle \approx 10^{63}$ erg. This is consistent with the strong lower limit of $10^{60}$ erg inferred from X-ray cavities in rich cluster cores \citep{Birzan04, Binney07}. Finally, we can check whether spinning accreting black holes in AGN remain consistent with observations of the X-ray background (XRB) by matching the expected mass density of relic black holes deduced from the XRB (see \citealt{Marconi04} and references therein), \begin{equation}\label{rho} \rho_{\rm XRB} = (4.7 - 10.6) \times \left(\frac{1-\langle \ensuremath{\epsilon_{\rm a}} \rangle}{9\langle \ensuremath{\epsilon_{\rm r}} \rangle}\right)\times 10^5 \, \ensuremath{M_\odot} \, {\rm Mpc}^{-3} \end{equation} to the local black hole mass density $\rho_{\rm BH} = (4.4 - 5.9) \times 10^5 \, \ensuremath{M_\odot} \, {\rm Mpc}^{-3}$ \citep{GrahamDriver07}. \citet{Marconi04} show that $\rho_{\rm XRB}$ is compatible with $\rho_{\rm BH}$ without requiring radiative efficiencies considerably higher than $\sim 0.1$. Although they used an earlier estimate of $\rho_{\rm BH}$ with a slightly broader range, their result still holds for the more restrictive range deduced by \citet{GrahamDriver07}. However, as pointed out by \citet{Marconi04}, if some of the accretion energy emerges in non-radiative (i.e. mechanical) form (i.e. 
$\ensuremath{\epsilon_{\rm a}} \neq \ensuremath{\epsilon_{\rm r}}$), then it is possible to place some constraints on the mean accretion efficiency $\langle \ensuremath{\epsilon_{\rm a}} \rangle$ and thereby on the mean spin of accreting black holes in the AGN population. We do not expect the mean black hole spin across the whole AGN population to be necessarily as high as that of the high-$z$ population. Indeed, $\rho_{\rm XRB}$ and $\rho_{\rm BH}$ can be compatible with each other and with the Soltan relation, $\langle \ensuremath{\epsilon_{\rm r}} \rangle \gtapprox 0.1$, if the mean accretion efficiency in the AGN population is $\langle \ensuremath{\epsilon_{\rm a}} \rangle \ltapprox 0.2$, corresponding to black hole spins $a \ltapprox 0.95$. This also implies $\langle \ensuremath{\epsilon_{\rm j}} \rangle \ltapprox 0.1$, and since this is comparable to the mean jet efficiency we deduced at high-$z$, it suggests a fundamental jet production mechanism that remains remarkably the same across the entire accreting SMBH population. \section{Conclusions}\label{sectionconc} High-redshift accreting black holes are rapidly spinning and produce an enormous amount of power. The ubiquity of jets and outflow phenomena amongst accretion-powered sources provides clear evidence that not all of the accretion power is radiated away, as predicted by standard accretion disc theory. That is, for a given luminosity, the mass accretion rate of sources with jets is higher than that of sources without jets. This is because jets enhance angular momentum transport while minimizing mass loss. We have shown that the conditions required to grow a rapidly spinning black hole to a final mass $\approx 10^9 \ensuremath{M_\odot}$ by $z \approx 6$ are considerably more easily met for jet-enhanced disc accretion than for standard disc accretion. Our results indicate that while accreting black holes, on average, liberate most of their energy in radiation, the most massive cosmological black holes may undergo rapid growth episodes associated with enhanced rates of mass accretion driven by intermittent jet activity. This is consistent with the observed correlation between radio-loudness and black hole mass in AGN. It also implies that jets may have provided an important means of angular momentum and energy transport in the high-redshift universe. \section{Acknowledgments} E. J. D. J. acknowledges support from a University of Sydney Postgraduate Award. The authors wish to thank Katherine Blundell, Roberto Soria and Geoff Bicknell for valuable discussions and an anonymous referee for useful comments that helped to improve the paper.
\section{Introduction} The cosmic Dark Ages ended with the formation of the first stars in $10^5 - 10^6\, {\rm M_\odot}$ cosmological haloes at $z \sim 20 - 30$. Primordial (or Pop III) stars are the key to understanding primeval galaxies, the onset of early cosmological reionization and chemical enrichment, and the origins of the supermassive black holes found in most massive galaxies today. Unfortunately, in spite of their extreme luminosities \citep{s02} and the arrival of next-generation near infrared (NIR) observatories such as the \textit{James Webb Space Telescope} (\textit{JWST}) and the Thirty-Meter Telescope (TMT), individual Pop III stars will remain beyond the reach of direct detection for decades to come. For now, there are no observational constraints on either their masses or their rates of formation. On the simulation frontier, there has been a gradual shift in paradigm over the past decade from single $30 - 300\, {\rm M_\odot}$ stars forming in isolation in haloes \citep{bcl02,nu01,abn02, on07,2008ApJ...673...14O,2007ApJ...671.1559W} to binaries \citep{turk09} and more recently to the possibility of $20 - 40\, {\rm M_\odot}$ stars forming in small multiples of up to a dozen \citep{stacy10,clark11,get11,hos11, stacy11,get12}. However, these simulations do not estimate Pop III stellar masses by modeling the formation and evolution of the stars. Instead, they usually derive them by comparing infall rates at the centre of the halo at very early stages of collapse to Kelvin-Helmholtz contraction times to place upper limits on the final mass of the star. No simulation currently bridges the gap between the initial formation and fragmentation of a protostellar disk and its photoevaporation up to a million years later, so there are no firm numerical constraints on the Pop III IMF either \citep[see][for a recent review of primordial star formation]{whalen12}. Pop III stars profoundly transformed the haloes that gave birth to them, driving out most of their baryons in supersonic ionized outflows \citep{wan04,ket04,abs06,awb07} and later exploding as supernovae (SNe) \citep{byh03,ky05,get07,2008ApJ...682...49W}. Ionization fronts from these stars also engulfed nearby haloes, either promoting or suppressing star formation in them and regulating the rise of the first stellar populations \citep[e.g.][]{2004MNRAS.348..753S,il05,su06, 2008ApJ...679..925W,2009MNRAS.395.1280H,suh09,2010ApJ...712..101W}. As each halo hosted consecutive cycles of stellar birth, H II region formation and SN explosions \citep{2007ApJ...663..687Y}, gravity and accretion flows congregated these haloes into the first primitive galaxies with halo merger time-scales of $\sim$ 20 Myr at $z \sim$ 20 \citep[e.g.][]{jgb08,get08,get10,wise12}. At the same time, the first SNe enriched the IGM with metals and dust, triggering a transition from Pop III to Pop II star formation \citep[e.g.][]{mbh03,ss07,bsmith09,nho09,chiaki12}. How these two processes determined the masses and formation rates of stars in primeval galaxies at $z \sim$ 10 -- 15, and therefore their spectra and luminosities, is unknown. Stellar evolution models show that the final fates of the first stars rested upon their masses at birth \citep{hw02}:\ $15 - 40\, {\rm M_\odot}$ Pop III stars died in core-collapse SNe \citep{jet09b} and $40 - 140\, {\rm M_\odot}$ stars collapsed to black holes, perhaps with violent pulsational mass loss prior to death \citep{wbh07}. 
Pop III stars from 140 to 260~${\rm M_\odot}$ died as pair instability (PI) SNe, extremely energetic thermonuclear explosions with energies of up to 100 times those of Type Ia and Type II SNe (Joggerst \& Whalen 2011; Chatzopoulos \& Wheeler 2012 extend this lower limit down to $65\, {\rm M_\odot}$ for rotating stars). Some $40 - 60\, {\rm M_\odot}$ Pop III stars may have died as hypernovae, with energies intermediate to those of core-collapse and PI SNe \citep{Iwamoto2005}. Attempts have been made to indirectly constrain the masses of the first stars by reconciling the cumulative nucleosynthetic yield of their supernovae with the chemical abundances found in ancient, dim metal-poor stars in the Galactic halo \citep{bc05, fet05}, some of which may be contaminated with the ashes of the first generation. For example, \citet{jet09b} recently discovered that the average yields of $15 - 40\, {\rm M_\odot}$ Pop III SNe agree well with the fossil abundances measured in $\sim130$ stars with metallicities $Z < 10^{-4} Z_{\odot}$ \citep{Cayrel2004,Lai2008}, suggesting that low-mass primordial stars may have synthesized most metals at high redshift. However, stellar archaeology is still in its infancy because of small sample sizes, systematics in the measurements of some elements, and because the imprint of metals from first-generation stars on the second is not well understood. The direct detection of Pop III SNe may be the best prospect for probing the earliest generation of stars in the near term. Primordial supernovae may be 100,000 times brighter than their progenitors, or, at slightly lower redshifts, the primitive galaxies in which they reside. Their transience readily distinguishes them from early galaxies, with which they otherwise overlap in colour--colour space. Previous studies have investigated detection limits for PI SNe at $z \sim 6$ \citep{sc05,tet12}, for $6 < z < 15$ \citep[][Whalen, Even et al. in preparation]{pan11,moriya12,2012arXiv1209.5459W}, and in very approximate terms for $z \sim$ 30 \citep{2012ApJ...755...72H}. Whalen, Fryer et al. (in preparation) and Whalen, Frey et al. (in preparation) show that \textit{JWST} will detect PI SNe beyond $z \sim 30$ and that the \textit{Wide Field Infrared Survey Telescope} (\textit{WFIRST}) and the \textit{Wide-field Imaging Surveyor for High-Redshift} (\textit{WISH}) will detect them out to $z \sim$ 15 -- 20 in all-sky NIR surveys. Unfortunately, it may be a decade before such observations are possible, given that \textit{JWST} and \textit{WFIRST} will not be in operation before 2018 and 2021, respectively. In the meantime, it may be possible to discover the first generation of supernovae in cosmic backgrounds. Pair instability SNe deposit up to half of their energy into the CMB via inverse Compton scattering at $z \sim$ 20 \citep{ky05,2008ApJ...682...49W}. \citet{oh03} have found that Pop III PI SNe may impose excess power on the CMB on small scales via the Sunyaev-Zel'dovich effect. Primordial SNe may also be manifest as fluctuations in the NIR background because they are so much more luminous than their host protogalaxies at high redshift. 
In this paper, we show Pop III supernovae may also be revealed through the detection of the radio emission from their remnants by both current and future radio observatories such as the eVLA\footnote{\texttt{www.aoc.nrao.edu/evla/}}, eMERLIN\footnote{\texttt{www.jb.man.ac.uk/research/rflabs/eMERLIN.html}}, MeerKAT\footnote{\texttt{public.ska.ac.za/meerkat}}, ASKAP\footnote{\texttt{www.atnf.csiro.au/projects/askap}} and the Square Kilometre Array\footnote{\texttt{www.skatelescope.org}} (SKA). A review of planned deep continuum radio surveys, including some that would be capable of detecting the remnants discussed here, is provided by \citet{2012arXiv1210.7521N}. \citet{2005ApJ...619..684I} previously suggested the radio afterglows of high-redshift hypernovae would produce large fluxes while still relativistically expanding; however, this phase is brief, so the probability of discovering such systems in collapsing minihaloes is very low. The afterglows may be more common in primitive galaxies with established star formation. Previous studies of the observability of collapsing structures at $z>10$ in the radio have centred on detecting the 21cm signal of minihaloes during the cosmic dark ages \citep[e.g.][]{2002ApJ...579....1F, 2002ApJ...572L.123I, 2006ApJ...646..681S, 2011MNRAS.417.1480M}, not cosmic explosions at first light. There are several advantages to searching for the first supernovae through their radio synchrotron emission. Firstly, if only certain types of explosions contribute a radio signal, this may be used to place limits on the masses of their progenitors. Secondly, supernova (and hence star formation) rates may possibly be determined from the strength of the radio signals and the number counts of the sources. Finally, depending on the properties of this signal, it may be possible to pinpoint the redshift of the explosions. In this paper we calculate the long-term radio signal produced by the remnants of core-collapse SNe, hypernovae and PI SNe to assess the detectability of the remnants by current and future observatories. We also determine if this signal may constrain the masses and formation rates of Pop III stars in high-redshift protogalaxies. In $\S \, \ref{sec:sync}$ we review how Pop III SN explosions generate synchrotron emission that could be directly detected by current and planned radio telescopes. In $\S \, \ref{sec:obspred}$ we describe the expected radio light curves and number counts of the Pop III SN radio remnants. We summarise our conclusions in $\S \, \ref{sec:conclusions}$. Formulae are provided for synchrotron emission and self-absorption in terms of the energy densities of the relativistic electrons and the magnetic field in an appendix. The range in relativistic gamma factors is discussed in a second appendix. For numerical cosmological estimates, we adopt $\Omega_{\rm m}=0.27$, $\Omega_{\rm v}=0.73$, $H_0=100h\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ with $h=0.70$, $\sigma_{\rm 8h^{-1}}=0.81$ and $n=0.96$ for the total mass and vacuum energy density parameters, the Hubble constant, the linear density fluctuation amplitude on a scale of $8\,h^{-1}$Mpc and the spectral index, respectively, consistent with the best-fitting cosmological parameters for a flat universe as constrained by the 5-yr {\it Wilkinson Microwave Anisotropy Probe} ({\it WMAP}) measurements of the Cosmic Microwave Background \citep{2009ApJS..180..330K}. 
\section{The synchrotron signature of supernovae in minihaloes} \label{sec:sync} We begin by estimating the expected synchrotron flux from a supernova remnant. For relativistic electrons with energy density $u_e$ in a magnetic field of energy density $u_B$, the bolometric emissivity of synchrotron radiation is \begin{equation} \epsilon_{\rm bol}\sim u_e\tau_{\rm S}^{-1}, \label{eq:Psync_bol_ap} \end{equation} where $\tau_{\rm S}$ is the characteristic synchrotron cooling time \begin{equation} \tau_{\rm S}=\frac{3}{4}\frac{m_e c^2}{c \sigma_T \gamma_u u_B}, \label{eq:tau_sync} \end{equation} for the most energetic electrons with kinetic energy $(\gamma_u-1)m_ec^2$, assuming $\gamma_u\gg1$. Here $\sigma_T$ is the Thomson cross section for electron scattering and $m_e$ is the mass of an electron. Allowing for a fraction $f_e$ of the thermal energy to go into relativistic electrons, so that $u_e=f_e u_{\rm th}$ for a thermal energy density $u_{\rm th}$, and further assuming equipartition $u_B\simeq u_e$ \citep{1982ApJ...259..302C}, the bolometric synchrotron power emitted by a single halo is \begin{equation} P^{\rm sync}_{\rm bol}\simeq4\pi\gamma_u\frac{c \sigma_T}{m_e c^2}\int\,dr r^2 f_e(r)^2 u_{\rm th}^2 F(p_e;\gamma_l,\gamma_u), \label{eq:E_bol} \end{equation} where the function $F(p_e;\gamma_l,\gamma_u)$, described in Appendix~\ref{app:Sync}, allows for a power-law energy distribution of index $p_e$ for the relativistic electrons, and $\gamma_l$ and $\gamma_u$ are the lower and upper limits of the relativistic $\gamma$ factors of the electrons. The upper electron energies are generally believed to be due to acceleration within the region of the shock, although the precise mechanism is still unknown. Observations suggest values as high as $\gamma_u\simeq10^8$ are achieved \citep{1995Natur.378..255K}. For the supernova remnants examined here, synchrotron cooling is sufficiently strong downstream of the shock that $\gamma_u$ of a few hundred is more typical. A lower energy cut-off $\gamma_l$ is imposed by energy losses arising from the excitation of quantized plasma waves (plasmons). The computation of the range in relativistic electron energies used here is discussed in Appendix~\ref{app:gamrange}. The electron energy spectrum will generally evolve downstream of the shock due to synchrotron cooling, resulting in a steepening of the synchrotron spectrum above a break frequency around 100~GHz. The assumption of constant $f_e$ and $p_e$ is therefore an approximation, and these should be regarded only as effective values. The model none the less qualitatively recovers the effects of the high frequency break, as discussed in Appendix~\ref{app:gamrange}. The high frequency range of the spectrum, however, should be regarded as uncertain by factors of several. We illustrate the expected flux using the computation for a $40\,{\rm M_\odot}$ hypernova in a $1.2\times10^7\,{\rm M_\odot}$ halo at $z=17.3$ \citep{2008ApJ...682...49W}. The synchrotron power may be estimated from the post-shock thermal energy density of the supernova remnant, which exceeds a few ${\rm erg\,cm^{-3}}$ over a region around a tenth of a parsec across during the first few years. For $f_e=0.01$ \citep[e.g.][]{1982ApJ...259..302C}, this corresponds to a bolometric synchrotron power output of $P^{\rm sync}_{\rm bol}\simeq5\times10^{42}\,{\rm erg\,s^{-1}}$ for $\gamma_u=300$ and $p_e=2$. 
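As a rough consistency check on this number, the estimate may be evaluated directly, assuming for illustration that the emission arises from a thin spherical shell just behind the shock; the shell radius, thickness and thermal energy density used below are illustrative values rather than outputs of the hydrodynamical model, but they recover the order of magnitude of the quoted power.

\begin{verbatim}
import math

# Order-of-magnitude sketch of the bolometric synchrotron power, treating the
# emitting region as a thin spherical shell behind the shock (assumed geometry).
SIGMA_T = 6.652e-25      # Thomson cross-section [cm^2]
M_E_C2 = 8.187e-7        # electron rest energy [erg]
C = 2.998e10             # speed of light [cm/s]

u_th = 3.0               # post-shock thermal energy density [erg/cm^3] (assumed)
f_e = 0.01               # fraction of thermal energy in relativistic electrons
gamma_u = 300.0          # upper relativistic gamma factor
r_shock = 1.5e17         # shock radius ~ 0.05 pc [cm] (assumed)
dl_em = 3e15             # emitting-layer thickness [cm] (assumed, cf. Appendix B)

u_e = f_e * u_th
u_B = u_e                                               # equipartition
tau_S = 0.75 * M_E_C2 / (C * SIGMA_T * gamma_u * u_B)   # synchrotron cooling time
volume = 4.0 * math.pi * r_shock**2 * dl_em
P_bol = (u_e / tau_S) * volume                          # eps_bol ~ u_e / tau_S
print("tau_S = %.1e s, P_bol ~ %.1e erg/s" % (tau_S, P_bol))
\end{verbatim}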
The spectrum will extend to a lower characteristic cutoff wavelength, assuming an isotropic magnetic field, of $\lambda_c=(2\pi^{1/2}/3)(m_ec^2/e)\gamma_u^{-2}u_B^{-1/2}\simeq 2014\gamma_u^{-2}u_B^{-1/2}\,{\rm cm}\approx0.2\,{\rm cm}$, or upper cutoff frequency $\nu_c\simeq130$~GHz, corresponding to a characteristic specific power at $\nu_c$ of $2\times10^{31}\,{\rm erg\,s^{-1}\,Hz^{-1}}$. For a source at $z=17.3$, this corresponds to an observed radio flux of $\sim10\,\mu$Jy. The flux will vary with frequency as $\nu^{-(p_e-1)/2}=\nu^{-1/2}$. \begin{figure} \includegraphics[width=3.3in]{fig1.eps} \caption{Radio power from a $40\,{\rm M_\odot}$ hypernova in a $1.2\times10^7\,{\rm M_\odot}$ minihalo at $z=17.3$ (for $f_e=0.01$ and $p_e=2$). Shown are total (synchrotron and thermal free-free) (solid lines) and thermal free-free (dotted line) powers. The upper curve (magenta) corresponds to a model with no attenuation. The middle curves allow for synchrotron self-absorption (blue) and free-free absorption as well (cyan). The lowest curve (black) further adds hypothesized plasma synchrotron attenuation effects.} \label{fig:40Ms_halo3} \end{figure} The radio spectrum computed 4.7~yrs after the explosion is shown in Fig.~\ref{fig:40Ms_halo3}. The thermal free-free radiation is shown as well, but is generally much smaller than the synchrotron. The energy density in relativistic electrons is sufficiently high that synchrotron self-absorption (SSA) is significant at the lower frequencies. Allowing for SSA severely attenuates the spectrum at $\nu\lsim40$~GHz, as shown in Fig.~\ref{fig:40Ms_halo3}. Adding free-free absorption degrades the spectrum further at low frequencies. Free-free absorption was neglected from the fluid zone at the shock front, where the gas is becoming ionized but has not yet reached the post-shock temperature. Since the shock front is unresolved, the substantial free-free absorption from it is likely greatly over-estimated. For a shock front width on the order of the Coulomb mean free path, the free-free absorption from the shock front becomes negligible. It will therefore generally not be included here. It is remarked, however, that if cool circumstellar gas mixes in with the post-shock ionized gas, free-free absorption could be non-negligible. Such an effect is beyond the capacity of a spherically symmetric code to reproduce, and would be difficult to resolve in any case. The Razin-Tsytovich plasma effect on the synchrotron radiation mechanism may further degrade the escaping synchrotron radiation \citep[e.g.][]{1968ARA&A...6..321S, 1986rpa..book.....R}. We approximate the effect by cutting off the production of synchrotron radiation by the factor $\exp(-\nu_{\rm RT}/\nu)$, where \begin{equation} \nu_{\rm RT} = \gamma_u\nu_p\left(1+\gamma_u\frac{\nu_p}{\nu_c}\right), \label{eq:nuRT} \end{equation} and $\nu_p$ is the electron plasma frequency. This strongly cuts off most of the radiation below $\sim0.1\nu_c$, as shown in Fig.~\ref{fig:40Ms_halo3}. As the occurrence of the hypothesized effect is unclear, we conservatively neglect it further here except as noted, but remark that the detection of such greatly diminished low-frequency emission would support its reality. Radio observations of supernova remnants do not strongly constrain $f_e$, although values of $f_e\sim0.001-0.2$ are indicated \citep{1986ApJ...301..790W, 1998ApJ...499..810C, 2006ApJ...651.1005S}. Assuming equipartition, the power at $\nu=\nu_c$ is proportional to $f_e^{3/2}$. 
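For reference, the conversion from the rest-frame specific power quoted above to the observed flux density may be sketched as follows, using the flat cosmology adopted in the Introduction; the factor of $(1+z)$ accounts for the compression of the observed bandwidth, and the snippet simply reproduces the $\sim10\,\mu$Jy estimate.

\begin{verbatim}
import math
from scipy.integrate import quad

# Sketch: observed flux density from the quoted rest-frame specific power,
# for the flat cosmology of the Introduction (Omega_m = 0.27, Omega_v = 0.73, h = 0.7).
C_KM_S = 2.998e5
H0 = 70.0                      # km/s/Mpc
OM, OV = 0.27, 0.73
MPC_CM = 3.086e24

def comoving_distance_cm(z):
    integrand = lambda zp: 1.0 / math.sqrt(OM * (1.0 + zp)**3 + OV)
    dc_mpc = (C_KM_S / H0) * quad(integrand, 0.0, z)[0]
    return dc_mpc * MPC_CM

z = 17.3
L_nu = 2e31                    # erg/s/Hz at the rest-frame cutoff frequency
d_L = (1.0 + z) * comoving_distance_cm(z)
# flux density at the observed frequency nu_c/(1+z); (1+z) is the bandwidth factor
f_nu = L_nu * (1.0 + z) / (4.0 * math.pi * d_L**2)
print("f_nu ~ %.1f microJy" % (f_nu / 1e-29))   # 1 microJy = 1e-29 erg/s/cm^2/Hz
\end{verbatim}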
It is worth noting that for $f_e<<1$, an equipartition magnetic field will contribute negligibly to the pressure forces acting on the post-shock gas, so that the hydrodynamical models used should not be much affected. \section{Observational predictions} \label{sec:obspred} Deep radio surveys are currently able to reach the few $\mu$Jy level. The Very Large Array has achieved an {\it rms} noise level of $1.5\,\mu$Jy at 8.4~GHz \citep{2002AJ....123.2402F} and $2.7\,\mu$Jy at 1.4~GHz \citep{2008AJ....136.1889O}. Similar levels have been reached using eMERLIN \citep{2005MNRAS.358.1159M}. The Square Kilometre Array is projected to be a factor 100--1000 more sensitive. \begin{figure} \begin{center} \leavevmode \epsfig{file=fig2a.eps, height = 8cm} \epsfig{file=fig2b.eps, height = 8cm} \end{center} \caption{Top panel:\ Radio light curves for a $40\,{\rm M_\odot}$ hypernova in a $1.2\times10^7\,{\rm M_\odot}$ minihalo at $z=17.3$ (for $p_e=2$ and $f_e=0.01$). The curves correspond to the bands:\ 0.5 (dotted), 1.4 (solid), 3 (short-dashed), 10 (long-dashed), 25 (dot short-dashed) and 35 (dot long-dashed) GHz. Bottom panel:\ The corresponding spectral indices at 1.4 (solid) and 10 (long-dashed) GHz.} \label{fig:40Ms_lightcurves} \end{figure} \begin{figure} \begin{center} \leavevmode \epsfig{file=fig3a.eps, height = 8cm} \epsfig{file=fig3b.eps, height = 8cm} \end{center} \caption{Top panel:\ Radio light curves for a $15\,{\rm M_\odot}$ Type II supernova in a $1.2\times10^7\,{\rm M_\odot}$ minihalo at $z=17.3$ (for $p_e=2$ and $f_e=0.01$). The curves correspond to the bands:\ 0.5 (dotted), 1.4 (solid), 3 (short-dashed), 10 (long-dashed), 25 (dot short-dashed) and 35 (dot long-dashed) GHz. Bottom panel:\ The corresponding spectral indices at 1.4 (solid) and 10 (long-dashed) GHz.} \label{fig:15Ms_lightcurves} \end{figure} In Figs~\ref{fig:40Ms_lightcurves} and \ref{fig:15Ms_lightcurves}, we show the light curves for a $40\,{\rm M_\odot}$ hypernova and $15\,{\rm M_\odot}$ Type II supernova, respectively, in a $1.2\times10^7\,{\rm M_\odot}$ minihalo at $z=17.3$ in the observed frame for selected bands planned for the SKA. The bands lie within the eVLA frequency range. The eMERLIN L-band includes 1.4~GHz, while the C-band flux light curve would lie between those for the 3 and 10~GHz bands. For the $40\,{\rm M_\odot}$ hypernova, time-dilation produces a gradual rise in the observed flux $f_\nu$ extended over a 100~yr time-scale with an even slower decline. A flux exceeding $1\,\mu$Jy is expected to persist over a span of 300~yrs, with a peak flux of $\sim10\,\mu$Jy. Most notable is the relatively late rise of the flux in the 500~MHz band; it eventually overtakes the higher frequency fluxes for a period of 150~yrs while above $1\,\mu$Jy. Sizable fluctuations in the flux over time are found. These arise because the dominant emitting region lies just downstream of the shock front with a width governed by synchrotron and plasmon excitation losses, as described in Appendix~\ref{app:gamrange}. The width increases with decreasing gas density and magnetic field strength and decreases with decreasing shock velocity. As the shock slows and weakens, these factors work against each other, producing a fluctuating volume of emitting gas. For intermittent periods, the energy losses deplete the relativistic electron population except very near the shock front. 
Since radio observations are often quantified by the spectral index $\alpha_\nu=-d\log f_\nu/d\log \nu$, the spectral indices at two representative SKA bands are shown in the lower panel. The changing structure of the post-shock gas produces varying spectral indices in the period leading up to the peak emission. The low frequency attenuation results in $\alpha_\nu<0$ at frequencies below 3~GHz. At late times, after the peak emission, the emission is dominated by a restricted region downstream of the shock front for which $\gamma_u>1000$, and the spectral index at the lower frequencies settles to $\alpha_\nu=0.5$, the value expected for a power-law relativistic electron energy distribution with $p_e=2$. Similar trends are found for the $15\,{\rm M_\odot}$ supernova, although over the much shorter time-scale of decades and with flux values lower by a factor of $\sim100$. We also computed the emission following a $260\,{\rm M_\odot}$ pair instability supernova, but found it so efficiently sweeps through the minihalo that the resulting observed radio flux is at most $\sim0.1$~nJy. Similarly, we found the observed radio fluxes from the remnants produced in a $2\times10^6\,{\rm M_\odot}$ halo by all three supernova progenitor masses fell well below 1~nJy. \begin{figure} \begin{center} \leavevmode \epsfig{file=fig4a.eps, height = 8cm} \epsfig{file=fig4b.eps, height = 8cm} \end{center} \caption{Top panel:\ Radio light curves for a $40\,{\rm M_\odot}$ hypernova in a $1.2\times10^7\,{\rm M_\odot}$ minihalo at $z=17.3$ (for $p_e=2$ and $f_e=0.01$). The Razin-Tsytovich effect is included. The curves correspond to the bands:\ 0.5 (dotted), 1.4 (solid), 3 (short-dashed), 10 (long-dashed), 25 (dot short-dashed) and 35 (dot long-dashed) GHz. Bottom panel:\ The corresponding spectral indices at 1.4 (solid) and 10 (long-dashed) GHz.} \label{fig:40Ms_RT_lightcurves} \end{figure} Allowing for the Razin-Tsytovich effect much reduces the flux in the 500~MHz band, as shown in Fig.~\ref{fig:40Ms_RT_lightcurves}. While it continues to rise with time, it lies about 1~dex below the flux at 1.4~GHz for 50~yrs, and never overtakes it. Comparison with Fig.~\ref{fig:40Ms_lightcurves} suggests the measured flux in the 500~MHz band may be used to detect the presence of the Razin-Tsytovich effect. The spectrum at 500~MHz is very steep, with $\alpha_\nu<-2.5$ during the first 100~yrs. The number of sources visible per year is substantial. We estimate this from the halo collapse rate using the halo fitting function of \citet{2007MNRAS.374....2R}, adapted to the 5~yr {\it WMAP} cosmological parameters for a flat universe \citep{2009ApJS..180..330K}. 
The observed formation rate of haloes within a solid angle $\delta\Omega$ with a comoving number density $n_h^{\rm com}(z)$ for halo masses exceeding $M_h$, integrated over $z_1 < z < z_2$, is then \begin{eqnarray} {\dot N}_h^{\rm obs}(>M_h) &=& \int_{z_1}^{z_2} dz\,{\dot n}_h^{\rm com}(1+z)^3\frac{dV^{\rm prop}}{dz}\frac{1}{1+z}\nonumber\\ &=&-\int_{z_1}^{z_2}dz\, \frac{dn_h^{\rm com}}{dz}c(a_0r)^2\delta\Omega\nonumber\\ &=&\left[n_h^{\rm com}c(a_0r)^2\right]_{z_2}^{z_1}\delta\Omega\nonumber\\ &&+2\left(\frac{c}{H_0}\right)\int_{z_1}^{z_2}dz\,n_h^{\rm com}c\frac{a_0r}{[\Omega_{\rm m}(1+z)^3+\Omega_{\rm v}]^{1/2}}\delta\Omega\nonumber\\ &\simeq&n_h^{\rm com}(z_1)c[a_0r(z_1)]^2\delta\Omega, \label{eq:dotNh} \end{eqnarray} where $a_0r(z) = (c/H_0)\int_0^z dz\, [\Omega_{\rm m}(1+z)^3+\Omega_{\rm v}]^{-1/2}$ is the angular size distance for a flat universe with present day mass and vacuum energy density parameters $\Omega_{\rm m}$ and $\Omega_{\rm v}$, respectively, and Hubble constant $H_0$. For $z_1>3$, $(H_0/c)a_0r(z_1)\simeq1.5+1.9[1-2/(1+z_1)^{1/2}]$ to better than 2 percent accuracy. The approximation ${\dot n}_h^{\rm com}=(dn_h^{\rm com}/dz)(dz/dt)$ was made, although this could be modified by allowing for merger histories. In the final line, the integral in the line previous was neglected as was the halo density at $z=z_2$. A factor $1/(1+z)$ has been included to account for time-dilation. We find formation rates for haloes $M>10^7\, h^{-1}\,{\rm M_\odot}$ and $z>20$ of $0.0016\,{\rm deg^{-2}\,yr^{-1}}$, and $0.20\,{\rm deg^{-2}\,yr^{-1}}$ at $z>10$. These amount to about 70 to 8000 collapsing haloes per year over the sky. The rates are comparable to recent predictions for the total production rate of Pop~III pair instability supernovae, with progenitor masses between $140-260\,{\rm M_\odot}$, based on cosmological simulations \citep[e.g.][]{2012ApJ...755...72H, 2012arXiv1206.5824J}, assuming one PI SN per minihalo formed. On the other hand, small scale simulations suggest fragmentation may prevent the formation of Pop~III stars sufficiently massive to form a PI SN \citep[e.g.][]{stacy10}, although subsequent mergers of the protostars into more massive ones cannot be ruled out. The star formation may also be stochastic given the low number of massive stars formed, with hypernovae forming before a more massive PI SN progenitor is created. The effect of multiple supernovae on the gas in small haloes is unknown. The heat input of a supernova is sufficient to unbind the IGM of a minihalo with mass below $\sim10^7\,{\rm M_\odot}$, while, once the gas cools, it will fall back in a more massive system \citep{2008ApJ...682...49W, 2011MNRAS.417.1480M}. The remaining massive stars will further replenish the IGM through winds, so a gaseous environment may continue to persist on which further supernovae may impact, producing synchrotron emitting remnants. If each halo gives rise to at least one supernova with progenitor mass exceeding $40\, {\rm M_\odot}$, its remnant should then be detectable by eVLA or eMERLIN. If each produces one with a progenitor mass exceeding $15\, {\rm M_\odot}$, SKA should be able to detect the remnant. An estimate of the expected number counts of radio emitting remnants requires modelling the remnants over a broad redshift range. In the absence of simulations covering a wide range, we use those discussed above, presuming the behaviour of the gas is not very sensitive to redshift. 
This is a coarse approximation, as the gas content of the haloes is expected to be sensitive to the redshift of their formation. On the other hand, since much of the gas into which the supernova explodes is produced by stellar winds, the environment of the progenitor may not be very redshift dependent. An improved estimate would require a better understanding of star formation within primordial haloes. \begin{figure} \begin{center} \leavevmode \epsfig{file=fig5a.eps, height = 8cm} \epsfig{file=fig5b.eps, height = 8cm} \end{center} \caption{Number counts of radio remnants above a given flux, based on radio light curves for a $40\,{\rm M_\odot}$ hypernova in $1.2\times10^7\,{\rm M_\odot}$ minihaloes (for $p_e=2$ and $f_e=0.01$) at $z>20$ (upper panel) and $z>10$ (lower panel). The observed counts are shown for bands:\ 0.5 (dotted), 1.4 (solid), 3 (short-dashed), 10 (long-dashed), 25 (dot short-dashed) and 35 (dot long-dashed) GHz.} \label{fig:40Ms_counts} \end{figure} \begin{figure} \begin{center} \leavevmode \epsfig{file=fig6a.eps, height = 8cm} \epsfig{file=fig6b.eps, height = 8cm} \end{center} \caption{Number counts of radio remnants above a given flux, based on radio light curves for a $15\,{\rm M_\odot}$ Type II supernova in $1.2\times10^7\,{\rm M_\odot}$ minihaloes (for $p_e=2$ and $f_e=0.01$) at $z>20$ (upper panel) and $z>10$ (lower panel). The observed counts are shown for bands:\ 0.5 (dotted), 1.4 (solid), 3 (short-dashed), 10 (long-dashed), 25 (dot short-dashed) and 35 (dot long-dashed) GHz.} \label{fig:15Ms_counts} \end{figure} The number of sources with observed flux exceeding $f_\nu$ is given by modifying equation~(\ref{eq:dotNh}) to account for the duration $\delta t^{\rm eff}_{f_\nu}(z)$ the remnant is visible with a flux exceeding $f_\nu$: \begin{equation} N(>f_\nu)=-\int_{z_1}^{z_2}dz\, \frac{dn_h^{\rm com}}{dz}c(a_0r)^2 \delta t^{\rm eff}_{f_\nu}(z) \delta\Omega. \label{eq:dNdS} \end{equation} The resulting number counts for haloes collapsing at $z>20$ are shown in Fig.~\ref{fig:40Ms_counts}, based on one $40\,{\rm M_\odot}$ hypernova per collapsing halo with a mass exceeding $10^7\,h^{-1}\,{\rm M_\odot}$. In the lower frequency bands, the counts decline above $0.3\,\mu$Jy but are still substantial to $\sim10\,\mu$Jy. A source brighter than $1\,\mu$Jy should be visible in the range $0.5<\nu<3$GHz every 100 square degrees. Few young sources are found below $1\,\mu$Jy. This is because at a rest-frame age of about 21~yrs, the width of the emitting region diminishes to about $10^{15}\,{\rm cm}$, resulting in very little emission (see Appendix~\ref{app:gamrange}), and a plateau in the number counts. By an age of 32~yrs, as the shock further weakens, the emission zone broadens to about $3\times10^{16}\,{\rm cm}$, and the flux somewhat recovers (as is visible in Fig.~\ref{fig:40Ms_lightcurves} after 400~yrs in the observed frame). This produces a rise in the number counts at very low flux levels, giving about one source brighter than 1~nJy per 10 square degrees in the low frequency bands. At the earliest epochs of star formation, although the remnants would still have observed peak fluxes brighter than $1\,\mu$Jy, the frequency of their occurrence becomes very small. We find only one remnant at $z>30$ brighter than 1~nJy is expected over the entire sky. \citet{2012arXiv1206.5824J} suggest Pop~III star formation may persist to $z<10$. This substantially boosts the counts to a few per square degree brighter than $1\,\mu$Jy, and more than 10 per square degree brighter than 1~nJy. 
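The sky-rate conversions entering these estimates are straightforward; a minimal sketch (using the full-sky solid angle of $\simeq41\,253$ square degrees, with the halo formation rates simply taken from the text above) is:

\begin{verbatim}
# Conversion of the quoted collapse rates to all-sky numbers. The number counts
# then follow by weighting the formation rate with the (observed-frame) time
# each remnant spends above a given flux, as in the count integral above.
FULL_SKY_DEG2 = 41253.0
for z_min, rate in ((20, 0.0016), (10, 0.20)):   # deg^-2 yr^-1, from the text
    print("z > %d: %.0f haloes collapsing per year over the sky"
          % (z_min, rate * FULL_SKY_DEG2))
\end{verbatim}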
The sources may be identified by the weakness of the 25 and 35~GHz fluxes compared with the lower frequency fluxes, a signature of strong synchrotron cooling, although the fluxes at these high frequencies are difficult to predict because of the effects of energy loss on the energy distribution of the relativistic electrons (see Appendix~\ref{app:gamrange}). More definitive would be a survey campaign with repeat multiband observations extended over several years to trace the light curves, especially during the early rising phase. The corresponding number counts based on one $15\,{\rm M_\odot}$ Type II supernova per collapsing halo are shown in Fig.~\ref{fig:15Ms_counts}. While the fluxes are considerably weaker than for the more energetic hypernovae, the numbers are lower also as a result of their shorter durations. To detect the remnants at $z>20$, 100--200 square degree fields would be required to detect remnants as weak as 1~nJy. The counts for $z>10$ are considerably higher, requiring now 1--2 square degree fields to detect the supernova remnants at the 1~nJy level. We have also examined the radio absorption signature of the remnants against a bright background radio source. While synchrotron absorption will produce a strong absorption feature at rest frame frequencies below 20~GHz, it persists for less than $\sim100$~yr. The number of detectable features along a line of sight will be \begin{equation} \frac{dN(>\tau_\nu)}{dz}=n_h^{\rm com}\pi(r_0^{\rm com})^2(1+z) c\delta t^{\rm eff}_{\tau_\nu}(z), \label{eq:dNdz} \end{equation} where $r_0^{\rm com}$ is the comoving radius of the remnant out to which the line-of-sight absorption optical depth exceeds $\tau_\nu$ and $\delta t^{\rm eff}_{\tau_\nu}(z)$ is the effective duration of the feature. For a characteristic comoving radius of $\sim1$~pc, $dN/dz\sim3\times10^{-14}$ at $z=10$, so that the feature would be undiscoverable. At late times, at $t\sim0.8$~Myr, the outflowing gas cools sufficiently to produce a 21cm absorption feature that would be measurable by SKA. We find a characteristic 21cm absorption doublet arising from the cooling shell would form along lines of sight within $\sim5$~pc (proper) of the centre of the minihalo. The feature would survive about 1~Myr before transforming into a singlet absorption feature indistinguishable from those expected from minihaloes. This corresponds to $dN/dz \sim 10^{-6}$, again too small to be discovered as there would still not be nearly an adequate number of bright background radio sources to have a fair chance of seeing even one feature. \section{Conclusions} \label{sec:conclusions} We estimate the radio signatures of Pop~III supernovae in $\sim10^7\,{\rm M_\odot}$ minihaloes, sufficiently massive to retain their baryons and form supernova remnants, based on hydrodynamical computations of $15\,{\rm M_\odot}$ Type II supernovae, $40\,{\rm M_\odot}$ hypernovae and $260\,{\rm M_\odot}$ pair instability supernovae at $z=17.3$ within the haloes. We model the synchrotron emission and absorption assuming a power-law distribution of relativistic electron energies with a total energy density proportional to the thermal energy of the ionized gas, and equipartition between the relativistic electron and magnetic field energies. 
Allowing for a relativistic electron component with total energy one to ten percent of the gas thermal energy is sufficient for a hypernova to produce observable fluxes at $0.5-10$~GHz exceeding $1\,\mu$Jy and for the less massive Type II supernova to produce observable fluxes exceeding 10~nJy at $0.5-25$~GHz. The PI SN expels much of the halo gas, leaving behind a supernova remnant that produces fluxes of at most 0.1~nJy. A halo mass exceeding $10^8\,h^{-1}\,{\rm M_\odot}$ would be sufficient to retain the baryons following a PI SN. Although we do not currently have computations of PI SN in such massive haloes, we may estimate the halo formation rates:\ $0.011\,{\rm deg}^{-2}\,{\rm yr}^{-1}$ if forming at $z>10$, and $9.1\times10^{-6}\,{\rm deg}^{-2}\,{\rm yr}^{-1}$ for those forming at $z>20$. These are about a factor 20--200 smaller than the formation rates of minihaloes with masses exceeding $10^7\,h^{-1}\,{\rm M_\odot}$, but would still produce a detectable yield of PI SNe remnants over the sky provided their synchrotron fluxes were sufficiently high. Hypernovae at $z=17.3$ produce observed light curves that rise gradually over a period of 100~yrs, and decline slowly over the subsequent 200--300~yrs. By contrast, the Type II curves have rise times of 5--10~yrs at frequencies above 1.4~GHz, and decline abruptly after another 30~yrs. The 500~MHz flux lags the higher frequency bands for both supernovae and dominates the emission shortly after it peaks as the synchrotron spectrum reddens due to the weakening of the supernova shock. Light curves computed with and without the hypothesized Razin-Tsytovich effect show that the observed 500~MHz flux is substantially diminished by the effect, suggesting it may provide a useful means of testing for the presence of the effect. The light curves for both the hypernova and Type II supernova remnants in collapsing minihaloes may be distinguished from their galactic counterparts by the weakness of their high frequency fluxes. The flux decreases below the power-law prediction at rest-frame frequencies above 100~GHz as a consequence of intense synchrotron cooling downstream of the shock front in the high gas density environments of the minihaloes, curtailing the maximum synchrotron frequency. Predictions of the precise shape of the high frequency spectrum, however, are complicated by the transport of relativistic electrons in the presence of strong synchrotron and plasmon excitation energy losses and the unknown magnetic field distribution. Estimating the number counts of the supernovae from the minihalo formation rate at $z>20$, we find a supernova formation rate of about one per 600 square degrees per year. If the supernovae are able to form down to $z=10$, the rate increases to one per 5 square degrees per year. On average, one $z>20$ hypernova radio remnant with flux exceeding $1\,\mu$Jy should be detectable at any given time per 100 square degree field, and one with a flux above 1~nJy per 10 square degree field. If Pop III star formation persists to $z<10$, then a few hypernova remnants brighter than $1\,\mu$Jy would be visible per square degree, and more than ten per square degree brighter than 1~nJy. Because of their smaller explosion energies, Type II supernovae will produce weaker radio remnants with shorter durations. Fields of 100--200 square degrees would be required to detect remnants forming at $z>20$ with fluxes above 1~nJy. 
The numbers are much improved for supernovae forming down to $z<10$, requiring instead 1--2 square degree fields to detect their radio remnants. Pop~III supernova remnants with radio fluxes exceeding $1\,\mu$Jy should be visible in radio surveys using the eVLA or eMERLIN, or possibly the MIGHTEE survey conducted with the SKA pathfinder MeerKAT. Weaker few nJy sources would be detectable by the SKA in large numbers. An observing campaign extended over several years would be able to follow the light curves, yielding invaluable data on the formation of the first stars in the Universe and their impact on their local environments. The readily distinguishable differences between the radio light curves of hypernova and Type II supernova remnants provide the possibility of exploiting the radio emission to infer the initial mass function of the first stars. The frequency with which these explosions are detected over the sky will also constrain Pop III star formation rates through cosmic time along with all-sky NIR surveys by \textit{WFIRST} and \textit{WISH} and deep-field surveys by \textit{JWST}. Their discovery in the radio will permit followup by deep-field observations of the host galaxies by \textit{JWST} and the TMT. Surveys detecting Pop III radio remnants at $10 < z < 15$ may also constrain the multiplicity of their occurrence within individual primeval galaxies, and even the large-scale distribution of the galaxies themselves. The detection of primordial supernova remnants promises to be one of the most spectacular discoveries in high-redshift radio astronomy over the coming decade. \begin{appendix} \section{Synchrotron emission and absorption} \label{app:Sync} \begin{figure} \includegraphics[width=3.3in]{figA1.eps} \caption{Synchrotron function $F(p_e;\gamma_l,\gamma_u)$ for relativistic electron power-law energy distribution over $\gamma_l<\gamma<\gamma_u$ with index $p_e$. Shown for $p_e=1.5$ (solid line), $p_e = 2.0$ (dashed line), $p_e=2.5$ (dot-dashed line) and $p_e=3.0$ (dotted line). (A value $\gamma_l=1$ is adopted.)} \label{fig:Fp} \end{figure} The synchrotron emissivity for a power-law relativistic electron energy distribution $dN/d\gamma\sim\gamma^{-p_e}$ extending over the $\gamma$ factor range $\gamma_l<\gamma<\gamma_u$ may be expressed concisely as \begin{equation} \epsilon_\nu=\begin{cases}\frac{3-p_e}{2}\frac{\epsilon_{\rm bol}}{\nu_c} \left(\frac{\nu}{\nu_c}\right)^{-\frac{p_e-1}{2}} \qquad ;\frac{1}{3}<p_e<3 \\ \frac{1}{\log(\nu_c/\nu_{\rm min})}\frac{\epsilon_{\rm bol}}{\nu_c} \left(\frac{\nu}{\nu_c}\right)^{-1} \qquad ;p_e=3,\end{cases} \label{eq:epsnu} \end{equation} where $\nu_c$ is the synchrotron upper cutoff frequency \begin{equation} \nu_c=\frac{3}{4\pi}\frac{\gamma_u^2 eB\sin\alpha}{m_ec} \label{eq:nuc} \end{equation} for pitch angle $\alpha$, and $\epsilon_{\rm bol}$ is the bolometric emissivity \begin{equation} \epsilon_{\rm bol}= \frac{3}{4}u_e\tau_{\rm S}^{-1}F(p_e;\gamma_l,\gamma_u). 
\end{equation} Here, $u_B$ is the energy density of the magnetic field, the synchrotron cooling time $\tau_{\rm S}$ is given by equation~(\ref{eq:tau_sync}) and the function $F(p_e;\gamma_l,\gamma_u)$, shown in Fig.~\ref{fig:Fp}, is \begin{eqnarray} F(p_e;\gamma_l,\gamma_u) &=& 3^{1/2}\frac{9}{4\pi}\frac{2^{(p_e-1)/2}}{p_e+1} \Gamma\left(\frac{p_e}{4} + \frac{19}{12}\right) \Gamma\left(\frac{p_e}{4} - \frac{1}{12}\right)\nonumber\\ &&\times\begin{cases} \frac{2-p_e}{3-p_e}\frac{1-(\gamma_l/\gamma_u)^{3-p_e}} {1-(\gamma_l/\gamma_u)^{2-p_e}}\qquad ;p_e\neq2, p_e\neq3\\ \frac{1-\gamma_l/\gamma_u}{\log(\gamma_u/\gamma_l)}\qquad\qquad ;p_e=2\\ \frac{\log(\gamma_u/\gamma_l)}{\gamma_u/\gamma_l-1}\qquad\qquad ;p_e=3. \end{cases} \label{eq:Fp} \end{eqnarray} The inverse attenuation length due to synchrotron self-absorption may be expressed as \begin{equation} \alpha_\nu=r_0^{-1}\left(\frac{1}{\nu_c\tau_{\rm se}}\right) \left(\frac{\nu_c}{\nu}\right)^{(p_e+4)/2}G(p_e;\gamma_l,\gamma_u), \label{eq:alphanu} \end{equation} where $r_0$ is the classical electron radius, $\tau_{\rm se}=\gamma_u m_e c^2/c \sigma_T u_e$ is the characteristic electron scattering time, and \begin{eqnarray} G(p_e;\gamma_l,\gamma_u)&=&\frac{3^{1/2}}{16\pi}\frac{2^{p_e/2}}{\gamma_u^3} \Gamma\left(\frac{p_e}{4} + \frac{1}{6}\right) \Gamma\left(\frac{p_e}{4} + \frac{11}{6}\right)\nonumber\\ &&\times\begin{cases}\frac{2-p_e}{1-(\gamma_l/\gamma_u)^{2-p_e}}\qquad ;p_e\neq2\\ \frac{1}{\log(\gamma_u/\gamma_l)}\qquad ;p_e=2. \end{cases} \label{eq:Gp} \end{eqnarray} \section{Range in relativistic electron energies} \label{app:gamrange} The dominant energy loss processes of relativistic electrons in the supernova remnants examined are synchrotron cooling, bremsstrahlung losses through scattering off the background thermal distribution of ions, and the excitation of plasmon waves. We discuss each of these in turn, using the rate estimates from \citet{1975ApJ...196..689G}. \begin{figure} \includegraphics[width=3.3in]{figB1.eps} \caption{The time since shock passage (solid line) and the non-thermal bremsstrahlung cooling time (dotted line) for electrons with relativistic $\gamma_u$ factor (long dashed line) corresponding to the synchrotron cooling time matching the time since shock passage. Also shown is the lower limiting $\gamma_l$ factor (short dashed line) below which electrons lose energy to plasmon excitation on a time scale shorter than shock passage. Emission is dominated by the layers immediately downstream of the shock front at 0.047~pc. Shown for a $40\,{\rm M_\odot}$ hypernova 4.8~yr after the explosion, with $f_e=0.01$, $u_B=u_e$ and $p_e=2.0$ assumed.} \label{fig:tc_gam} \end{figure} No definitive acceleration mechanism for non-thermal relativistic electrons in supernova remnants has been identified, although mechanisms involving the diffusive acceleration of particles by shock waves, a form of first-order Fermi acceleration, are favoured. The characteristic acceleration time, assuming Bohm diffusion, is \begin{equation} t_{\rm acc}\simeq\frac{r_g c}{u^2_{\rm sh}}, \label{eq:tacc} \end{equation} where $r_g=\gamma m_ec^2/eB\simeq1700(\gamma/B)$~cm is the gyroradius of an electron with relativistic velocity factor $\gamma$ in a magnetic field $B$, and $u_{\rm sh}$ is the shock velocity \citep{2001RPPh...64..429M}. 
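For convenience, a direct Python transcription of the spectral-shape factors $F$ and $G$ of Appendix~\ref{app:Sync}, together with the Bohm-diffusion acceleration time just quoted, might look as follows; the numerical prefactors are copied from the equations above, the function names are ours, and CGS units are assumed throughout.

\begin{verbatim}
import math

# Transcription of the spectral-shape factors F and G (Appendix A) and of the
# Bohm-diffusion acceleration time (this appendix). CGS units assumed.
def F_sync(p, gl, gu):
    pref = math.sqrt(3.0) * (9.0 / (4.0 * math.pi)) * 2.0**((p - 1.0) / 2.0) \
           / (p + 1.0) * math.gamma(p / 4.0 + 19.0 / 12.0) \
           * math.gamma(p / 4.0 - 1.0 / 12.0)
    x = gl / gu
    if abs(p - 2.0) < 1e-12:
        tail = (1.0 - x) / math.log(gu / gl)
    elif abs(p - 3.0) < 1e-12:
        tail = math.log(gu / gl) / (gu / gl - 1.0)
    else:
        tail = (2.0 - p) / (3.0 - p) * (1.0 - x**(3.0 - p)) / (1.0 - x**(2.0 - p))
    return pref * tail

def G_sync(p, gl, gu):
    pref = math.sqrt(3.0) / (16.0 * math.pi) * 2.0**(p / 2.0) / gu**3 \
           * math.gamma(p / 4.0 + 1.0 / 6.0) * math.gamma(p / 4.0 + 11.0 / 6.0)
    if abs(p - 2.0) < 1e-12:
        tail = 1.0 / math.log(gu / gl)
    else:
        tail = (2.0 - p) / (1.0 - (gl / gu)**(2.0 - p))
    return pref * tail

def t_acc(gamma, B, u_sh):
    """Bohm-diffusion acceleration time [s]: r_g c / u_sh^2 (u_sh in cm/s)."""
    r_g = 1700.0 * gamma / B          # electron gyroradius [cm], as quoted above
    return r_g * 2.998e10 / u_sh**2

print(F_sync(2.0, 1.0, 300.0), G_sync(2.0, 50.0, 300.0))
\end{verbatim}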
Requiring the acceleration time to be shorter than the synchrotron cooling time (equation~[\ref{eq:tau_sync}]) imposes the upper limit \begin{equation} \gamma_u^{\rm(sync)} < 1.2\times10^6 B^{-1/2}\left(\frac{u_{\rm sh}/c}{0.01}\right), \label{eq:gam_u_sync} \end{equation} corresponding to a limiting electron energy of $\sim1$~TeV. Downstream from the shock, at distances sufficiently far from the shock front that the electrons are not rapidly returned by turbulent eddies, the electrons will cool in the absence of any other accelerating mechanism. At a time $t^{\rm p-sh}$ after a shock, synchrotron cooling will result in \begin{equation} \gamma_u^{\rm(p-sh,\, sync)} < 77.4B^{-2}\left[\frac{t^{\rm(p-sh)}}{\rm 10^7\,s}\right]^{-1}. \label{eq:gam_u_p_sh} \end{equation} The electrons will also lose energy through scatters off other electrons and the ions, generating non-thermal bremsstrahlung radiation. The resulting cooling time is given by \begin{equation} (t_{\rm cool}^{\rm n-th\, brem})^{-1}=\frac{3}{\pi}\alpha\sigma_T c(n_{\rm H} + 3n_{\rm He})\left[\log(2\gamma) - \frac{1}{3}\right], \label{eq:tcool_ntb} \end{equation} where $\alpha$ is the fine structure constant. Requiring $t_{\rm cool}^{\rm n-th\, brem}>t_{\rm acc}$ imposes \begin{equation} \gamma_u^{\rm(n-th-b)}<4\times10^{10}\left(\frac{n_{\rm H}}{10^7\,{\rm cm^{-3}}}\right)^{-1}\left(\frac{u_{\rm sh}/c}{0.01}\right)^2 B, \label{eq:gam_u_n_th-b} \end{equation} a less stringent constraint than given by synchrotron cooling for $B>0.001\{(n_{\rm H}/10^7{\rm cm^{-3}})[0.01/(u_{\rm sh}/c)]\}^{2/3}$~G. For $\gamma$ factors of order 100, the non-thermal bremsstrahlung cooling time approaches the synchrotron cooling time. Relativistic electrons will also lose energy through plasmon excitation. The associated cooling time is \begin{eqnarray} (t_{\rm cool}^{\rm e-pl})^{-1}&=&\frac{3}{2}\sigma_T n_e c\gamma^{-1}\left[\log\left(\gamma^{1/2}\frac{m_e c^2}{h \nu_p}\right)+0.216\right]\nonumber\\ &\simeq&2.99\times10^{-7}\frac{n_e}{10^7\,{\rm cm^{-3}}}\gamma^{-1}\nonumber\\ &&\times\left[(1/2)\log\left(1.89\times10^{25}\frac{10^7\,{\rm cm^{-3}}}{n_e}\gamma\right)+0.216\right]\,{\rm s^{-1}}. \label{eq:tcool_epl} \end{eqnarray} In this case, the lowest energy electrons cool most quickly. Generally $t_{\rm cool}^{\rm e-pl}\gg t_{\rm acc}$ for the supernova remnants considered. The requirement $t_{\rm cool}^{\rm e-pl}>t^{\rm p-sh}$ imposes the lower limit on $\gamma$ of \begin{equation} \gamma_l^{\rm(p-sh,\, e-pl)}\simeq94.5\left(\frac{n_e}{10^7\,{\rm cm^{-3}}}\right)\left(\frac{t^{\rm p-sh}}{10^7\,{\rm s}}\right), \label{eq:gam_l_e_pl} \end{equation} to within 5 percent accuracy for $t^{\rm p-sh}$ within an order of magnitude of $10^7\,{\rm s}$. \begin{figure} \includegraphics[width=3.3in]{figB2.eps} \caption{The evolution of the synchrotron emission spectrum allowing for synchrotron self-absorption and free-free absorption, with $\gamma_l$ limited by loss to plasmon excitation and $\gamma_u$ limited by synchrotron losses. Shown for a $40\,{\rm M_\odot}$ hypernova, with $f_e=0.01$, $u_B=u_e$ and $p_e=2.0$ assumed, at times 4.7 (solid; black), 17.4 (short-dashed; magenta), 21.4 (long dashed; blue) and 32.5~yrs (dot-dashed; cyan) after the explosion. 
Also shown are models at 4.7~yrs after the hypernova, allowing $f_e$ to decrease due to cooling from an initial value of $f_e^i=0.05$ with $u_B/u_{\rm therm}=0.05$ held fixed (upper dotted curve; black) and with the initial value of $f_e^i=0.1$ and equipartition $u_B=u_e$ maintained (lower dotted curve; black).} \label{fig:PSync_evol} \end{figure} The upper limit in the relativistic $\gamma$ factor for the electrons is restricted primarily by synchrotron cooling, as illustrated in Fig.~\ref{fig:tc_gam} for a $40\,{\rm M_\odot}$ hypernova in a $1.2\times10^7\,{\rm M_\odot}$ halo at $z=17.3$, assuming $f_e=0.01$, $u_B=u_e$ and $p_e=2.0$. Also shown are the upper and lower relativistic factors $\gamma_u$ and $\gamma_l$ given by equating the synchrotron cooling time and plasmon excitation energy loss time, respectively, to the time since shock passage. By comparison, cooling due to non-thermal bremsstrahlung losses is generally negligible. Most of the interior of the supernova remnant would deplete its relativistic electrons due to synchrotron and plasmon excitation losses. Almost all the emission originates in a thin layer just downstream of the shock front. The thickness $\Delta l_{\rm em}$ of the layer may be estimated by requiring $\gamma_l^{\rm(p-sh,\,e-pl)}<\gamma_u^{\rm(p-sh,\,sync)}$, giving $\Delta t_{\rm em}/{\rm 10^7\,s} < 0.90(n_e/{\rm 10^7\,cm^{-3}})^{-1/2}B^{-1}$, or \begin{equation} \Delta l_{\rm em}=u_{\rm sh}\Delta t_{\rm em}<2.7\times10^{15}\left(\frac{u_{\rm sh}/c}{0.01}\right)\left(\frac{n_e}{\rm 10^7\,cm^{-3}}\right)^{-1/2}B^{-1}\,{\rm cm}. \label{eq:dlem} \end{equation} Because of the steep rise in the thermal energy density just behind the shock front, the emission is not much increased by relaxing $\gamma_l$ to unity. The evolution of the spectrum for a $40\,M_\odot$ hypernova remnant in a $1.2\times10^7\,{\rm M_\odot}$ minihalo is shown at $z=17.3$ in Fig.~\ref{fig:PSync_evol} for $f_e=0.01$ and $p_e=2$, with $\gamma_l$ and $\gamma_u$ restricted by energy losses. The curve at the time $17.4$~yrs corresponds to density and temperature profiles shown in \citet{2008ApJ...682...49W}. The emission power decays with time as the shock weakens and the emitting region reduces in width according to Eq.~(\ref{eq:dlem}). At 21.4~yrs after the hypernova, the emitting zone has narrowed to $10^{15}\,{\rm cm}$. By 32.5~yrs, however, the shock has weakened sufficiently for the emitting zone to broaden to $3\times10^{16}\,{\rm cm}$, and the flux slightly recovers. Because of energy losses by the relativistic electrons, a model with a constant value for $f_e$ is an approximation. While the reduction in $\gamma_u$ from synchrotron losses will little affect the total energy content in relativistic electrons for $p_e>2$ because most of the energy is at the lower energy end of the spectrum, it will reduce the flux for $p_e<2$. For $p_e=2$ the energy is logarithmically distributed. For an initial range $\gamma_l^i<\gamma<\gamma_u^i$, a final range $\gamma_l<\gamma<\gamma_u$ will retain a fraction $\log(\gamma_u/\gamma_l)/\log(\gamma_u^i/\gamma_l^i)$ of the original energy. For $\gamma_l^i=1$, $\gamma_u^i=10^6$, $\gamma_l=50$, $\gamma_u=300$, this gives a fraction 13\% of the original energy. Thus an initial fraction $f_e^i=0.1$ would be reduced to $f_e=0.013$. Since the magnetic energy density would not directly be reduced by the losses, equipartition may not apply, in which case $u_B>u_e$ may be possible, depending on the time to establish equipartition. 
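The retained-energy fraction quoted above for $p_e=2$ follows from a one-line calculation; a trivial numerical check, using the same $\gamma$ ranges, is:

\begin{verbatim}
import math

# Check of the retained-energy fraction for p_e = 2 quoted above.
def retained_fraction(gl, gu, gl_i=1.0, gu_i=1e6):
    return math.log(gu / gl) / math.log(gu_i / gl_i)

frac = retained_fraction(50.0, 300.0)
print("retained fraction = %.2f -> f_e = %.3f from f_e^i = 0.1" % (frac, 0.1 * frac))
\end{verbatim}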
Spectra allowing for the energy loss since the time the gas was shocked are also shown in Fig.~\ref{fig:PSync_evol} at 4.7~yrs after the explosion for two alternative models, one assuming an initial post-shock value $f_e^i=0.05$ and a post-shock magnetic energy density to thermal energy density ratio fixed at $0.05$ and the second assuming an initial post-shock value $f_e^i=0.1$ and a magnetic energy density that maintains equipartition with the evolving energy density of the relativistic electrons. Both these models well match the constant $f_e=0.01$ model, so that the constant $f_e$ model is a good approximation, noting that the value for $f_e$ is effectively an average value. At frequencies above $\sim100$~GHz, the models allowing for an evolving $f_e$ due to cooling show an enhanced flux over the constant effective $f_e$ model in Fig.~\ref{fig:PSync_evol}. In fact even in an evolving $f_e$ model, the flux is expected to steepen at high frequencies due to synchrotron losses. Under the assumptions of a uniform shock velocity gas with a uniform magnetic field and diffusion coefficient independent of electron momentum, \citet{1987MNRAS.225..335H} solve the steady-state transport equation for the population of relativistic electrons in the planar limit in the presence of strong synchrotron cooling. They show a power-law emission spectrum $\nu^{-0.5}$ will steepen to $\nu^{-1}$ above a break frequency $\nu_b$ before cutting off above the maximum frequency given by $\gamma_u$. The steepening is argued for on the general grounds that the distance over which electrons cool out of the distribution varies like $\gamma^{-1}$ for synchrotron cooling. The break frequency depends on the specific model. They find $\nu_b\simeq 15\,B^{-3}[(u_{\rm sh}/c)/0.01]^2(\Delta l_{\rm em}/10^{15}\,{\rm cm})^{-2}$~GHz. For $B=0.1$~G and $\Delta l_{\rm em}=10^{16}$~cm, this gives $\nu_b\simeq150$~GHz. While the diffusion coefficient is generally expected to be proportional to the electron momentum \citep{2001RPPh...64..429M}, the estimated break frequency is comparable to that at which the constant effective $f_e$ model lies below the evolving $f_e$ models and so may give a more realistic representation of the spectrum. A more precise prediction would require solving the relativistic electron transport equation, allowing for a non-uniform shock speed and magnetic field as well as a momentum-dependent diffusion coefficient; this is well beyond the scope of this paper, and the predicted synchrotron spectrum would in any case still depend on the assumed magnetic field properties and initial electron spectrum. In the absence of a definitive theory of relativistic electron acceleration and equipartition, the values $f_e=0.01$ and $u_B=u_e$ are assumed for the estimates here, noting that these are only meant to represent effective values. As indicated by the above discussion, the results computed in this paper will be quantitatively affected by several unknown factors, including the relativistic electron acceleration mechanism, the generation of turbulence at the shock front and its effects on the relativistic electron energy spectrum and possibly on the propagation of the shock front itself, and the subsequent transport of the relativistic electrons as they cool due both to synchrotron and plasmon excitation losses. By scaling the uncertainties based on observed supernova remnants, we expect to minimize the effects of these uncertainties on our predictions, but naturally cannot eliminate them. 
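As a numerical check of the break-frequency scaling quoted above (with $u_{\rm sh}/c=0.01$ assumed), the quoted example follows directly:

\begin{verbatim}
# Check of the break-frequency scaling quoted above (u_sh/c = 0.01 assumed).
def nu_break_ghz(B, dl_em_cm, ush_over_c=0.01):
    return 15.0 * B**-3 * (ush_over_c / 0.01)**2 * (dl_em_cm / 1e15)**-2

print("nu_b = %.0f GHz" % nu_break_ghz(0.1, 1e16))   # -> 150 GHz
\end{verbatim}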
\end{appendix} \section*{acknowledgments} We thank Philip Best, Jim Dunlop and Jeff Peterson for valuable discussions. We thank the referee for useful comments. DJW was supported by the Bruce and Astrid McWilliams Center for Cosmology at Carnegie Mellon University. All ZEUS-MP simulations were performed on Institutional Computing (IC) platforms at LANL (Coyote). \bibliographystyle{mn2e-eprint}
\subsection{Introduction} Each of the usual cohomology theories $X\rightsquigarrow H^{j}(X,r)$ on algebraic varieties arises from a functor $R\Gamma$ taking values in a triangulated category $\mathsf{D}$ equipped with a $t$-structure and a Tate twist $N\rightsquigarrow N(r)$. The heart of $\mathsf{D}$ has a tensor structure and, in particular, an identity object ${1\mkern-7mu1}$. The cohomology theory satisfies \begin{equation} H^{j}(X,r)\simeq H^{j}(R\Gamma(X)(r))\text{,} \label{eq2} \end{equation} and there is an absolute cohomology theory \begin{equation} H_{\mathrm{abs}}^{j}(X,r)\simeq\Hom_{\mathsf{D}(k)}({1\mkern-7mu1},R\Gamma(X)(r)[j]). \label{eq1} \end{equation} (see, for example, \cite{deligne1994}, \S 3). Let $k$ be a base field of characteristic $p$. For the $\ell$-adic \'{e}tale cohomology, $\mathsf{D}$ is the category of bounded constructible $\mathbb{Z}{}_{\ell}$-complexes (\cite{ekedahl1990}). For the $p$-cohomology, it is the category $\mathsf{D}_{c}^{b}(R)$ of coherent complexes of graded modules over the Raynaud ring. This category was defined in \cite{illusieR1983}, and its properties were developed in \cite{ekedahl1984, ekedahl1985, ekedahl1986}. We study homological algebra in this category and, when $k$ is finite, we prove relations between $\Ext$s and zeta functions. Let $k=\mathbb{F}{}_{q}$ with $q=p^{a}$. The $\Ext$ of two objects $M$, $N$ of $\mathsf{D}_{c}^{b}(R)$ is defined by the usual formula \[ \Ext^{j}(M,N)=\Hom_{\mathsf{D}_{c}^{b}(R)}(M,N[j]). \] Using that $k$ is finite, we construct a canonical complex \[ E(M,N)\colon\quad\cdots\rightarrow\Ext^{j-1}(M,N)\rightarrow\Ext^{j}(M,N)\rightarrow\Ext^{j+1}(M,N)\rightarrow\cdots \] of abelian groups for each pair $M,N$ in $\mathsf{D}_{c}^{b}(R)$. An object $P$ of $\mathsf{D}_{c}^{b}(R)$ can be regarded as a double complex of $W_{\sigma}[F,V]$-modules. On tensoring $P$ with $\mathbb{Q}{}$ and forming the associated simple complex, we obtain a bounded complex $sP_{\mathbb{Q}{}}$ whose cohomology groups $H^{j}(sP_{\mathbb{Q}{}})$ are $F$-isocrystals over $k$. We define the zeta function $Z(P,t)$ of $P$ to be the alternating product of the characteristic polynomials of $F^{a}$ acting on these $F$-isocrystals. It lies in $\mathbb{Q}{}_{p}(t)$. Attached to each $P$ in $\mathsf{D}_{c}^{b}(R)$, there is a bounded complex $R_{1}\otimes_{R}^{L}P$ of graded $k$-vector spaces whose cohomology groups have finite dimension. The Hodge numbers $h^{i,j}(P)$ of $P$ are defined to be the dimensions of the $k$-vector spaces $H^{j}(R_{1}\otimes_{R}^{L}P)^{i}$. Finally, we let $R\underline{\Hom}(-,-)$ denote the internal Hom in $\mathsf{D}_{c}^{b}(R)$. \begin{theorem} \label{a0}Let $M,N\in\mathsf{D}_{c}^{b}(R)$ and let $P=R\underline{\Hom}(M,N)$. Let $r\in\mathbb{Z}{}$, and assume that $q^{r}$ is not a multiple root of the minimum polynomial of $F^{a}$ acting on $H^{j}(sP_{\mathbb{Q}{}})$ for any integer $j$. \begin{enumerate} \item The groups $\Ext^{j}(M,N(r))$ are finitely generated $\mathbb{Z}{}_{p}$-modules, and the alternating sum of their ranks is zero. \item The zeta function $Z(P,t)$ of $P$ has a pole at $t=q^{-r}$ of order \[ \rho=\sum\nolimits_{j}(-1)^{j+1}\cdot j\cdot\rank_{\mathbb{Z}{}_{p}}(\Ext^{j}(M,N(r))).
\] \item The cohomology groups of the complex $E(M,N(r))$ are finite, and the alternating product of their orders $\chi(M,N(r))$ satisfies \[ \left\vert \lim_{t\rightarrow q^{-r}}Z(P,t)\cdot(1-q^{r}t)^{\rho}\right\vert _{p}^{-1}=\chi(M,N(r))\cdot q^{\chi(P,r)} \] where \[ \chi(P,r)=\sum_{i,j\,\,(i\leq r)}(-1)^{i+j}(r-i)\cdot h^{i,j}(P). \] \end{enumerate} \end{theorem} \noindent Here $|\cdot|_{p}$ is the $p$-adic valuation, normalized so that $|p^{r}\frac{m}{n}|_{p}^{-1}=p^{r}$ if $m$ and $n$ are prime to $p$. We identify the identity object of $\mathsf{D}_{c}^{b}(R)$ with the ring $W$ of Witt vectors. Then $R\underline{\Hom}(W,N)\simeq N$. Each algebraic variety (or log variety or stack) defines several objects in $\mathsf{D}_{c}^{b}(R)$ (see \S 6). Let $M(X)$ be one of the objects of $\mathsf{D}_{c}^{b}(R)$ attached to an algebraic variety $X$ over $k$, and define the absolute cohomology of $X$ to be \[ H_{\mathrm{abs}}^{j}(X,\mathbb{Z}{}_{p}(r))=\Hom_{\mathsf{D}_{c}^{b}(R)}(W,M(X)(r)[j]). \] The complex $E(W,M(X)(r))$ becomes \[ E(X,r)\colon\quad\cdots\rightarrow H_{\mathrm{abs}}^{j-1}(X,\mathbb{Z}{}_{p}(r))\rightarrow H_{\mathrm{abs}}^{j}(X,\mathbb{Z}{}_{p}(r))\rightarrow H_{\mathrm{abs}}^{j+1}(X,\mathbb{Z}{}_{p}(r))\rightarrow\cdots. \] \begin{theorem} \label{b04}Assume that $q^{r}$ is not a multiple root of the minimum polynomial of $F^{a}$ acting on $H^{j}(sM(X)_{\mathbb{Q}{}})$ for any $j$. \begin{enumerate} \item The groups $H_{\mathrm{abs}}^{j}(X,\mathbb{Z}_{p}(r))$ are finitely generated $\mathbb{Z}{}_{p}$-modules, and the alternating sum of their ranks is zero. \item The zeta function $Z(M(X),t)$ of $M(X)$ has a pole at $t=q^{-r}$ of order \[ \rho=\sum\nolimits_{j}(-1)^{j+1}\cdot j\cdot\rank_{\mathbb{Z}{}_{p}}\left( H_{\mathrm{abs}}^{j}(X,\mathbb{Z}{}_{p}(r))\right) \text{.} \] \item The cohomology groups of the complex $E(X,r)$ are finite, and the alternating product of their orders $\chi(X,\mathbb{Z}{}_{p}(r))$ satisfies \[ \left\vert \lim_{t\rightarrow q^{-r}}Z(M(X),t)\cdot(1-q^{r}t)^{\rho}\right\vert _{p}^{-1}=\chi(X,\mathbb{Z}{}_{p}(r))\cdot q^{\chi(M(X),r)}. \] \end{enumerate} \end{theorem} Let $X$ be a smooth projective variety over $k$, and let $M(X)=R\Gamma(X,W\Omega_{X}^{\bullet})$. Then $H^{j}(sM(X)_{\mathbb{Q}{}})=H_{\mathrm{crys}}^{j}(X/W)_{\mathbb{Q}{}}$ and $H_{\mathrm{abs}}^{j}(X,\mathbb{Z}_{p}(r))$ is the group $H^{j}(X,\mathbb{Z}{}_{p}(r))$ defined in (\ref{n7}) below. Moreover, the zeta function and the Hodge numbers of $M(X)$ agree with those of $X$, and so, in this case, Theorem \ref{b04} becomes the $p$-part of the main theorem of \cite{milne1986}. See p.\pageref{eq65} below. \subsection{Remarks} \begin{plain} \label{a03}Let $\zeta(P,s)=Z(P,q^{-s})$, $s\in\mathbb{C}$. Then $\rho$ is the order of the pole of $\zeta(P,s)$ at $s=r$, and \[ \lim_{t\rightarrow q^{-r}}Z(P,t)\cdot(1-q^{r}t)^{\rho}=\lim_{s\rightarrow r}\zeta(P,s)\cdot(s-r)^{\rho}\cdot(\log q)^{\rho}\text{.} \] \end{plain} \begin{plain} \label{a04}We expect that the $F$-isocrystals $H^{j}(sP_{\mathbb{Q}{}})$ are always semisimple (so $F^{a}$ always acts semisimply) when $P$ arises from algebraic geometry. If this fails, there will be spurious extensions over $\mathbb{Q}{}$ that will have to be incorporated into the statement of (\ref{a0}). \end{plain} \begin{plain} \label{a04a}The statement of Theorem 0.1 depends only on $\mathsf{D}_{c}^{b}(R)$ as a triangulated category with a dg-lifting.
\end{plain} \begin{plain} \label{a05}We leave it as an (easy) exercise for the reader to prove the analogue of (\ref{a0}) for $\ell\neq p$ (the indolent may refer to the article below). \end{plain} \begin{plain} \label{a06}In a second article, we apply (\ref{a0}) to study the analogous statement in a triangulated category of motivic complexes (\cite{milneR2013b}). \end{plain} \subsection{Outline of the article} In \S 1 and \S 3 we review some of the basic theory of the category $\mathsf{D}_{c}^{b}(R)$ (Ekedahl, Illusie, Raynaud), and in \S 2 we prove a relation between the numerical invariants of an object of $\mathsf{D}_{c}^{b}(R)$. In \S 4 we begin the study of the homological algebra of $\mathsf{D}_{c}^{b}(R)$, and in \S 5 we take the ground field to be finite and prove Theorem \ref{a0}. In the final section we study applications of Theorem \ref{a0} to algebraic varieties. \subsection{Notations} Throughout, $k$ is a perfect field of characteristic $p\neq0$, and $W$ is the ring of Witt vectors over $k$. As usual, $\sigma$ denotes the automorphism of $W$ inducing $a\mapsto a^{p}$ modulo $p$. We use a bar to denote base change to an algebraic closure $\bar{k}$ of $k$. For example, $\bar{W}$ denotes the Witt vectors over $\bar{k}$. We use $\simeq$ to denote a canonical, or specific, isomorphism. \section{Coherent complexes of graded $R$-modules} In this section, we review some definitions and results of Ekedahl, Illusie, and Raynaud, for which \cite{illusie1983} is a convenient reference. \begin{plain} \label{a2}The \emph{Raynaud ring} is the graded $W$-algebra $R=R^{0}\oplus R^{1}$ generated by $F$ and $V$ in degree $0$ and $d$ in degree $1$, subject to the relations \begin{align} FV & =p=VF,\quad Fa=\sigma a\cdot F,\quad aV=V\cdot\sigma a,\label{eq5}\\ d^{2} & =0,\quad FdV=d,\quad ad=da\quad(a\in W). \label{eq6} \end{align} In other words, $R^{0}$ is the Dieudonn\'{e} ring $W_{\sigma}[F,V]$ and $R$ is generated as an $R^{0}$-algebra by a single element $d$ of degree $1$ satisfying (\ref{eq6}). For $m\geq1$, \begin{equation} R_{m}\overset{\textup{{\tiny def}}}{=}R/(V^{m}R+dV^{m}R)\text{.} \label{eq42} \end{equation} \end{plain} \begin{plain} \label{a3}To give a graded $R$-module $M=\bigoplus\nolimits_{i\in\mathbb{Z}{}}M^{i}$ is the same as giving a complex \[ M^{\bullet}\colon\quad\cdots\rightarrow M^{i-1}\overset{d}{\longrightarrow}M^{i}\overset{d}{\longrightarrow}M^{i+1}\rightarrow\cdots \] of $W$-modules whose components $M^{i}$ are $R^{0}$-modules and whose differentials $d$ satisfy $FdV=d$. For $n\in\mathbb{Z}{}$, $M\{n\}$ is the graded $R$-module deduced from $M$ by a shift of degree,\footnote{Illusie et al.\ write $M(n)$ for the degree shift of $M$, but this conflicts with our notation for Tate twists.} i.e., $M\{n\}^{i}=M^{n+i}$ and $d_{M\{n\}}^{i}=(-1)^{n}d_{M}^{n+i}$. The graded $R$-modules and graded homomorphisms of degree $0$ form an abelian category $\Mod(R)$ with derived category $\mathsf{D}(R)$. The bifunctor $M,N\rightsquigarrow\Hom(M,N)$ of graded $R$-modules derives to a bifunctor \[ R\Hom\colon\mathsf{D}(R)^{\mathrm{opp}}\times\mathsf{D}^{+}(R)\rightarrow\mathsf{D}(\mathbb{Z}{}_{p}) \] (denoted by $R\Hom_{R}$ in \cite{illusie1983}, 2.6.2, and \cite{ekedahl1986}, p.8, and by $R\underline{\Hom}_{R}$ in \cite{ekedahl1985}, p.73). \end{plain} \begin{plain} \label{a4}A graded $R$-module is said to be \emph{elementary} (\cite{illusie1983}, 2.2.2, p.30) if it is one of the following two types.
\begin{description} \item[Type I] The module is concentrated in degree zero, finitely generated over $W$, and $V$ is topologically nilpotent on it. In other words, it is a $W_{\sigma}[F,V]$-module whose $p$-torsion submodule has finite length over $W$, and whose torsion-free quotient is finitely generated and free over $W$ with slopes lying in the interval $[0,1[$. \item[Type II] The module is isomorphic t \[ U_{l}\colon\quad\underset{\vphantom{A^{A^A}}\deg0}{\prod\nolimits_{n\geq 0}kV^{n}}\overset{d}{\longrightarrow}\underset{\vphantom{A^{A^A}}\deg 1}{\prod\nolimits_{n\geq l}kdV^{n} \] for some $l\in\mathbb{Z}$. Here $F$ (resp. $V)$ acts as zero on $U_{l}^{0}$ (resp. $U_{l}^{1}$), and $dV^{n}$ should be interpreted as $F^{-n}d$ when $n<0$. In more detail, $U_{l}^{0}$ is the $W_{\sigma}[F,V]$-module $k[[V]]$ with $F$ acting as zero. When $l\geq0$, $U_{l}^{1}$ consists of the formal sum \[ a_{l}dV^{l}+a_{l+1}dV^{l+1}+\cdots\quad\quad(a_{l}\in k), \] and when $l<0$, $U_{l}^{1}$ consists of the formal sums \[ a_{-l}F^{-l}d+\cdots+a_{-1}F^{-1}d+a_{0}d+a_{1}dV+a_{2}dV^{2}+\cdots\quad \quad(a_{l}\in k). \] \end{description} \end{plain} \begin{plain} \label{a5}A graded $R$-module $M$ is said to be \emph{coherent} if it admits a finite filtration $M\supset\cdots\supset0$ whose quotients are degree shifts of elementary modules (i.e., of the form $M\{n\}$ with $M$ elementary and $n\in\mathbb{Z}{}$). Coherent $R$-modules need not be noetherian or artinian---the object $U_{0}$ is obviously neither. \end{plain} \begin{plain} \label{a6}A complex $M$ of $R$-modules is said to be \emph{coherent} if it is bounded with coherent cohomology. Let $\mathsf{D}_{c}^{b}(R)$ denote the full subcategory of $\mathsf{D}(R)$ consisting of coherent complexes. Ekedahl has given a criterion for a complex to lie in $\mathsf{D}_{c}^{b}(R)$, from which it follows that $\mathsf{D}_{c}^{b}(R)$ is a triangulated subcategory of $\mathsf{D}(R)$; in particular, the coherent modules form an abelian subcategory of $\Mod(R)$ closed under extensions (\cite{illusie1983}, 2.4.8). In more detail (ibid. 2.4), define a graded $R_{\bullet}$-module to be a projective syste \[ M_{\bullet}=\left( M_{1}\leftarrow\cdots\leftarrow M_{m}\leftarrow M_{m+1}\leftarrow\cdots\right) \] equipped with maps $F\colon M_{m+1}\rightarrow M_{m}$ and $V\colon M_{m}\rightarrow M_{m+1}$ of degree zero satisfying (\ref{eq5}) and (\ref{eq6}); here $M_{m}$ is a graded $W_{m}[d]$-module. The graded $R_{\bullet}$-modules form an abelian category. The functor $M_{\bullet }\rightsquigarrow\varprojlim M_{m}\colon\Mod(R_{\bullet})\rightarrow\Mod(R)$ derives to a functo \[ R\varprojlim\colon\mathsf{D}(R_{\bullet})\rightarrow\mathsf{D}(R). \] On the other hand, the functor sending a graded $R$-module $M$ to the $R_{\bullet}$-module $(R_{m}\otimes_{R}M)_{m\geq1}$ derives to a functo \[ R_{\bullet}\otimes_{R}^{L}-\colon\mathsf{D}(R)\rightarrow\mathsf{D (R_{\bullet}) \] These functors compose to a functo \[ M\rightsquigarrow\hat{M}\colon\mathsf{D}(R)\rightarrow\mathsf{D}(R). \] For $M$ in $\mathsf{D}^{-}(R)$, there is a natural map $M\rightarrow\hat{M}$ inducing isomorphisms $R_{m}\otimes_{R}^{L}M\rightarrow R_{m}\otimes_{R ^{L}\hat{M}$ for all $m$, and $M$ is said to be \emph{complete} if this map is an isomorphism. 
Ekedahl's criterion states: \begin{quote} A bounded complex of graded $R$-modules $M$ lies in $\mathsf{D}_{c}^{b}(R)$ if and only if $M$ is complete and $R_{1}\otimes_{R}^{L}M$ is a bounded complex such that $H^{i}(R_{1}\otimes_{R}^{L}M)$ is finite-dimensional over $k$ for all $i$. \end{quote} \end{plain} \begin{plain} \label{a8}Let $T$ be the functor of graded $R$-modules such that $(TM)^{i}=M^{i+1}$ and $T(d)=-d$, i.e., $TM=M\{1\}$ (degree shift). It is exact and defines a self-equivalence $T\colon\mathsf{D}_{c}^{b}(R)\rightarrow\mathsf{D}_{c}^{b}(R)$. The \emph{Tate twist} of a coherent complex of graded $R$-modules $M$ is defined as \[ M(r)=T^{r}(M)[-r]=M\{r\}[-r]; \] thus $M(r)^{i,j}=M^{i+r,j-r}$ (cf. \cite{milneR2005}, \S 2). \end{plain} \begin{plain} \label{a9}Following \cite{illusie1983}, 2.1, we view a complex of graded $R$-modules \[ M\colon\quad\quad\cdots\rightarrow M^{\bullet,j}\rightarrow M^{\bullet,j+1}\rightarrow\cdots \] as a bicomplex $M^{\bullet,\bullet}$ of $R^{0}$-modules in which the first index corresponds to the $R$-gradation. Thus the $j$th row $M^{\bullet,j}$ of the bicomplex is a graded $R$-module and the $i$th column $M^{i\bullet}$ is a complex of $R^{0}$-modules \begin{equation} \begin{tikzcd}[column sep=scriptsize, row sep=scriptsize] \vdots & & &\vdots & \vdots & \vdots\\ M^{\bullet j+1}\arrow{u}: && \cdots \arrow{r} & M^{i-1,j+1}\arrow{r}{d}\arrow{u} & M^{i,j+1} \arrow{r}{d}\arrow{u} & M^{i+1,j+1} \arrow{r}\arrow{u} & \cdots\\ M^{\bullet j}\arrow{u}: && \cdots \arrow{r} & M^{i-1,j}\arrow{r}{d}\arrow{u} & M^{i,j} \arrow{r}{d}\arrow{u} & M^{i+1,j} \arrow{r}\arrow{u} & \cdots\\ \vdots\arrow{u} & & & \vdots\arrow{u} &\vdots\arrow{u} & \vdots\arrow{u} & \end{tikzcd}\label{e1} \end{equation} In this diagram, the squares commute, the vertical differentials commute with $F$ and $V$, and the horizontal differentials satisfy $FdV=d$. The cohomology modules of $M$ are obtained by passing to the cohomology in the columns: \[ H^{j}(M)\colon\quad\quad\cdots\rightarrow H^{j}(M^{i-1,\bullet})\overset{d}{\longrightarrow}H^{j}(M^{i,\bullet})\overset{d}{\longrightarrow}H^{j}(M^{i+1,\bullet})\rightarrow\cdots. \] In other words, for a complex $M=M^{\bullet,\bullet}$ of graded $R$-modules, $H^{j}(M)$ is the graded $R$-module with $H^{j}(M)^{i}=H^{j}(M^{i,\bullet})$. By definition, $M\{m\}[n]$ is the bicomplex with \begin{equation} (M\{m\}[n])^{i,j}=M^{i+m,j+n} \label{e25} \end{equation} and with the appropriate sign changes on the differentials. \end{plain} \begin{plain} \label{a11}With any complex $M$ of graded $R$-modules, there is an associated simple complex $sM$ of $W$-modules with \[ \left( sM\right)^{n}=\bigoplus\nolimits_{i+j=n}M^{i,j},\quad dx^{ij}=d^{\prime}x^{ij}+(-1)^{i}d^{\prime\prime}x^{ij}\text{.} \] The functor $s$ extends to a functor $s\colon\mathsf{D}^{+}(R)\rightarrow\mathsf{D}(W)$. If $M\in\mathsf{D}_{c}^{b}(R)$, then $sM$ is a perfect complex of $W$-modules (\cite{illusie1983}, p.34). \end{plain} \begin{plain} \label{b0}For a coherent complex $M$ of graded $R$-modules, the filtration of $sM$ by the first degree defines a spectral sequence \begin{equation} E_{1}^{ij}=H^{j}(M)^{i}\implies H^{i+j}(sM) \label{eq7} \end{equation} called the \emph{slope spectral sequence}. The slope spectral sequence degenerates at $E_{1}$ modulo torsion and at $E_{2}$ modulo $W$-modules of finite length. In particular, for $r\geq2$, $E_{r}^{ij}$ is a finitely generated $W$-module of rank equal to that of $H^{j}(M)^{i}/\mathrm{torsion}$.
This was proved by \cite{bloch1977} and \cite{illusieR1983} for the complex $M=R\Gamma(X,W\Omega_{X}^{\bullet})$ attached to a smooth complete variety $X$, and by Ekedahl for a general $M$ (see \cite{illusie1983}, 2.5.4). \end{plain} \begin{plain} \label{a12}Let $K=W\otimes\mathbb{Q}{}$ (field of fractions of $W$). Then $K\otimes_{W}W_{\sigma}[F,V]\simeq K_{\sigma}[F]$. Recall that an $F$-isocrystal is a $K_{\sigma}[F]$-module that is finite-dimensional as a $K$-vector space and such that $F$ is bijective. The $F$-isocrystals form an abelian subcategory of $\Mod(K_{\sigma}[F])$ closed under extensions, and so the subcategory $\mathsf{D}_{\text{\textrm{iso}}}^{b}(K_{\sigma}[F])$ of $\mathsf{D}^{b}(K_{\sigma}[F])$ consisting of bounded complexes whose cohomology modules are $F$-isocrystals is triangulated. \end{plain} \begin{plain} \label{a13}Let $M$ be a complex of graded $R$-modules with only nonnegative first degrees, and let $F^{\prime}$ act on $M^{i,j}$ as $p^{i}F$. The condition $FdV=d$ implies that $pFd=dF$, and so both differentials in the diagram (\ref{e1}) commute with the action of $F^{\prime}$. Therefore $s(M)$ is a complex of $W_{\sigma}[F^{\prime}]$-modules. If $M\in\mathsf{D}_{c}^{b}(R)$, then $s(M)_{K}$ lies in $\mathsf{D}_{\text{\textrm{iso}}}^{b}(K_{\sigma}[F^{\prime}])$.\medskip\ From the degeneration of the slope spectral sequence at $E_{1}$, we get isomorphisms \begin{equation} (H^{j}(M)_{K}^{i},p^{i}F)\simeq\left( H^{i+j}(sM)_{K}\right)_{[i,i+1[} \label{eq7a} \end{equation} for $M\in\mathsf{D}_{c}^{b}(R)$. This can also be written\footnote{For each $n$, we have $H^{n}(sM)_{K}=\bigoplus H^{j}(M)_{K}^{i}$ where the sum is over pairs $(i,j)$ with $i+j=n$. Our assumption on $M$ says that $i\geq0$, and so only $H^{n}(M)_{K}^{0}$, $H^{n-1}(M)_{K}^{1}$, \ldots, $H^{0}(M)_{K}^{n}$ contribute. Each of these (with the map $F$) is an isocrystal with slopes in $[0,1[$. But with the map $p^{i}F$, the slopes of $H^{n-i}(M)_{K}^{i}$ are in $[i,i+1[$. The slopes of distinct summands do not overlap. Hence we get (\ref{eq7b}). Cf. \cite{illusie1983}, p.64.} \begin{equation} (H^{n-i}(M)_{K}^{i},p^{i}F)\simeq\left( H^{n}(sM)_{K}\right)_{[i,i+1[}\text{.} \label{eq7b} \end{equation} \end{plain} \begin{plain} \label{a14}A \emph{domino} $N$ is a graded $R$-module that admits a finite filtration $N\supset\cdots\supset0$ whose quotients are elementary of type II. Let $N$ be elementary of type II, say $N=U_{l}$. Then $N^{0}=k_{\sigma}[[V]]$, and so $V\colon N^{0}\rightarrow N^{0}$ is injective with cokernel $N^{0}/V=k_{\sigma}[[V]]/(V)\simeq k$. Similarly, $F\colon N^{1}\rightarrow N^{1}$ is surjective with kernel $kdV^{l}$ ($l\geq0$) or $kF^{-l}d$ ($l<0$). Let $N$ be a domino, and suppose that $N$ admits a filtration of length $l(N)$ with elementary quotients. Induction on $l(N)$ shows that \begin{enumerate} \item the map $V\colon N^{0}\rightarrow N^{0}$ is injective with cokernel of dimension $l(N)$ (as a $k$-vector space) and $F|N^{0}$ is nilpotent; \item the map $F\colon N^{1}\rightarrow N^{1}$ is surjective with kernel of dimension $l(N)$ and $V|N^{1}$ is nilpotent. \end{enumerate} \noindent Therefore the number of quotients in such a filtration is independent of the filtration, and equals the common dimension of the $k$-vector spaces $N^{0}/V$ and of $\Ker\left( F\colon N^{1}\rightarrow N^{1}\right)$. This number is called the \emph{dimension} of $N$. \end{plain} \begin{plain} \label{a17}Let $M$ be a graded $R$-module.
Then $Z^{i (M)\overset{\textup{{\tiny def}}}{=}\Ker\left( d\colon M^{i}\rightarrow M^{i+1}\right) $ is stable under $F$ but not in general under $V$, whereas $B^{i}(M)\overset{\textup{{\tiny def}}}{=}\im(d\colon M^{i-1}\rightarrow M^{i})$ is stable under $V$ but not in general under $F$. Instead, one put \begin{align*} V^{-\infty}Z^{i} & (M)=\{x\in M^{i}\mid V^{n}x\in Z^{i}(M)\text{ for all }n\},\\ F^{\infty}B^{i} & (M)=\{x\in M^{i}\mid x\in F^{n}B^{i}(M)\text{ for some }n\}. \end{align*} Then $V^{-\infty}Z^{i}$ is the largest $R^{0}$-submodule of $Z^{i}(M),$ and $F^{\infty}B^{i}$ is the smallest $R^{0}$-submodule of $M^{i}$ containing $B^{i}M$ \begin{equation} B^{i}\subset F^{\infty}B^{i}\subset V^{-\infty}Z^{i}\subset Z^{i}. \label{eq58 \end{equation} The homomorphism of $W$-modules $d\colon M^{i}\rightarrow M^{i+1}$ factors as \begin{equation} \begin{tikzcd} M^{i} \arrow{r}{d}\arrow{d} &M^{i+1}\\ M^{i}/V^{-\infty}Z^{i} \arrow{r}{d} & F^{\infty}B^{i+1}\arrow{u} \end{tikzcd} \label{e24 \end{equation} When $M$ is coherent, the lower row in (\ref{e24}) is an $R$-module admitting a finite filtration whose quotients are of the form $U_{l}\{-i\}$; in other words, $($\textrm{lower\thinspace\thinspace}\textrm{row}$\mathrm{)}\{i\}$ is a domino (\cite{illusie1983}, 2.5.2). \end{plain} \begin{plain} \label{a18}The \emph{heart } of a graded $R$-module $M$ is the graded $R_{0 $-module $\heartsuit(M)=\bigoplus\heartsuit^{i}(M)$ with $\heartsuit ^{i}(M)=V^{-\infty}Z^{i}/F^{\infty}B^{i}$ (see (\ref{eq58})). When $M$ is coherent, $\heartsuit(M)$ is finitely generated as a $W$-module; moreover, $Z^{i}/V^{-\infty}Z^{i}$ and $F^{\infty}B^{i}/B^{i}$ are of finite length, and s \[ \heartsuit^{i}(M)_{K}\simeq\left( Z^{i}(M)/B^{i}(M)\right) _{K \] (\cite{illusie1983}, 2.5.3). \end{plain} \begin{example} \label{b1}Let $X$ be a smooth variety over a perfect field $k$. The de Rham-Witt comple \[ W\Omega_{X}^{\bullet}\colon\quad W\mathcal{O}{}_{X}\longrightarrow \cdots\longrightarrow W\Omega_{X}^{i}\overset{d}{\longrightarrow}W\Omega _{X}^{i+1}\longrightarrow\cdots \] is a sheaf of graded $R$-modules on $X$ for the Zariski topology. On applying $R\Gamma$ to this complex, we get a complex $R\Gamma(X,W\Omega_{X}^{\bullet})$ of graded $R$-modules, which we regard as a bicomplex with $(i,j)$th term $R\Gamma(X,W\Omega_{X}^{i})^{j}$. When we replace each vertical complex with its cohomology, the $j$th row of the bicomplex become \[ R^{j}\Gamma(X,W\Omega_{X}^{\bullet})\colon\quad H^{j}(X,W\mathcal{O}{ _{X})\rightarrow\cdots\rightarrow H^{j}(X,W\Omega_{X}^{i )\overset{d}{\rightarrow}H^{j}(X,W\Omega_{X}^{i+1})\rightarrow\cdots. \] The complex $R\Gamma(X,W\Omega_{X}^{\bullet})$ is bounded and complete (\cite{illusie1983}, 2.4), and becomes $R\Gamma(X,\Omega_{X}^{\bullet})$ when tensored with $R_{1}$, and so $R\Gamma(X,W\Omega_{X}^{\bullet})$ is coherent when $X$ is complete. In this case, $R\Gamma(X/W)\overset{\textup{{\tiny def }}{=}s(R\Gamma(X,W\Omega_{X}^{\bullet}))$ is a perfect complex of $W$-modules such that \[ H^{j}(R\Gamma(X/W))\simeq H_{\mathrm{crys}}^{j}(X/W)\quad(\text{isomorphism of }W_{\sigma}[F]\text{-modules}) \] (ibid. 1.3.5), and the slope spectral sequence (\ref{eq7}) become \begin{equation} E_{1}^{ij}=H^{j}(X,W\Omega_{X}^{i})\implies H^{i+j}(X,W\Omega_{X}^{\bullet })\quad\quad(\simeq H_{\mathrm{crys}}^{\ast}(X/W)\text{.} \label{eq33 \end{equation} \end{example} \section{The numerical invariants of a coherent complex} \subsection{Definition of the invariants} Let $M$ be a coherent graded $R$-module. 
The dimension of the domino attached to $d\colon M^{i}\rightarrow M^{i+1}$ (see (\ref{a17})) is denoted by $T^{i}(M)$. It is equal to the number of quotients of the form $U_{l}\{-i\}$ (varying $l$) in a filtration of $M$ with elementary quotients. \begin{lemma} \label{a1a}For a coherent graded $R$-module, \begin{equation} T^{i}(M)=\mathrm{length}_{W((V))}W((V))\otimes_{W[[V]]}M^{i}\text{.} \label{eq24 \end{equation} \end{lemma} \begin{proof} It suffices to prove this for an elementary graded $R$-module $M$. If $M$ is elementary of type I, then $V$ is topologically nilpotent on it, and so when we invert $V$, $M$ becomes $0$; this agrees with $T^{0}(M)=0$. If $M$ is elementary of type II , say $M=U_{l}$, then $W((V))\otimes M^{0}\simeq W((V))$ and $W((V))\otimes M^{1}=0$, agreeing with $T^{0}(M)=1$ and $T^{i}(M)=0$ for $i\neq0$. \end{proof} Let $M$ be an object of $\mathsf{D}_{c}^{b}(R)$. Ekedahl (1986, p.14)\nocite{ekedahl1986} defines the \emph{slope numbers} of $M$ to b \[ m^{i,j}(M)={\mathrm{dim}}_{k}\frac{H^{j}(M)^{i}}{H^{j}(M)_{p\text{- \mathrm{tors}}^{i}+V\left( H^{j}(M)^{i}\right) }+{\mathrm{dim}}_{k \frac{H^{j+1}(M)^{i-1}}{H^{j+1}(M)_{p\text{-}\mathrm{tors}}^{i-1}+F\left( H^{j+1}(M)^{i-1}\right) \] where $X_{p\text{-}\mathrm{tors}}$ denotes the torsion submodule of $X$ regarded as a $W$-module. Se \[ T^{i,j}(M)=T^{i}(H^{j}(M)). \] Ekedahl (ibid., p.85) defines the \emph{Hodge-Witt numbers} of $M$ to be \[ h_{W}^{i,j}(M)=m^{i,j}(M)+T^{i,j}(M)-2T^{i-1,j+1}(M)+T^{i-2,j+2}(M) \] (see also \cite{illusie1983}, 6.3). Note that the invariants $m^{i,j}(M)$ and $T^{i,j}(M)$ (hence also $h_{W}^{i,j}(M)$) depend only on the finite sequence $(H^{j}(M))_{j\in\mathbb{Z}{}}$ of graded $R$-modules. It follows from (\ref{e25}) tha \begin{equation} h_{W}^{i,j}(M\{m\}[n])=h_{W}^{i+m,j+n}(M). \label{eq56 \end{equation} In particular (see \ref{a8}) \begin{equation} h_{W}^{i,j}(M(r))=h_{W}^{i+r,j-r}(M). \label{eq57 \end{equation} \begin{example} \label{b2}We compute these invariants for certain $M\in\mathsf{D}_{c}^{b}(R)$. (a) Suppose that $H^{j}(M)^{i}$ has finite length over $W$ for all $i,j$. Then $H^{j}(M)^{i}=H^{j}(M)_{p\text{\textrm{-}}\mathrm{tors}}^{i}$, and so $m^{i,j}(M)$ is zero for all $i$, $j$. Moreover $V$ is nilpotent on $H^{j}(M)^{i}$, and so $T^{i,j}(M)=0$. It follows that $h_{W}^{i,j}(M)$ is also zero for all $i$, $j$. (b) Suppose tha \[ H^{j}(M)^{i}=\left\{ \begin{array} [c]{ll R^{0}/R^{0}(F^{r-s}-V^{s}) & \text{if }(i,j)=(i_{0},j_{0})\\ 0 & \text{otherwise \end{array} \right. \] for some $r>s\geq0$. The \[ m^{i.j}(M)=\left\{ \begin{array} [c]{ll \dim_{k}(W_{\sigma}[F]/(F^{r-s})=r-s & \text{if }(i,j)=(i_{0},j_{0})\\ \dim_{k}(W_{\sigma}[V]/(V^{s})=s & \text{if }(i,j)=(i_{0}+1,j_{0}-1)\\ 0 & \text{otherwise. \end{array} \right. \] Note that \[ \left( R^{0}/R^{0}(F^{r-s}-V^{s})\right) \otimes K\simeq K_{\sigma }[F]/(F^{r}-p^{s}), \] which is an $F$-isocrystal of slope $\lambda=s/r$ with multiplicity $m=r$. As the dominoes attached to the $H^{j}(M)^{i}$ are obviously all zero, we see tha \[ h_{W}^{i,j}(M)=m^{i.j}(M)=\left\{ \begin{array} [c]{ll m(1-\lambda)\quad & \text{if }(i,j)=(i_{0},j_{0})\\ m\lambda & \text{if }(i,j)=(i_{0}+1,j_{0}-1)\\ 0 & \text{otherwise \end{array} \right. \] where $\lambda$ is the unique slope of the $F$-isocrystal $(H^{j_{0} (M)_{K}^{i_{0}},F)$ and $m$ is its multiplicity. 
(c) Suppose tha \[ \left( H^{j_{0}}(M)^{i_{0}}\overset{d}{\longrightarrow}H^{j_{0}}(M)^{i_{0 +1}\right) =\left( \underset{\vphantom{A^{A^A}}\deg\text{ }i_{0 }{\prod\nolimits_{n\geq0}kV^{n}}\overset{d}{\longrightarrow \underset{\vphantom{A^{A^A}}\deg\text{ }i_{0}+1}{\prod\nolimits_{n\geq l}kdV^{n}}\right) =U_{l}\{-i_{0}\}, \] and that $H^{j}(M)^{i}=0$ for all other values of $i$ and $j$. Then $H^{j}(M)^{i}=H^{j}(M)_{p\text{-}\mathrm{tors}}^{i}$, and so $m^{i,j}(M)$ is zero for all $i$, $j$. The only nonzero $T$ invariant is $T^{i_{0},j_{0 }(M)=1$. It follows that the only nonzero Hodge-Witt numbers ar \[ h_{W}^{i_{0},j_{0}}(M)=1,\quad h_{W}^{i_{0}+1,j_{0}-1}(M)=-2,\quad h_{W}^{i_{0}+2,j_{0}-2}=1. \] \end{example} \subsection{Weighted Hodge-Witt Euler characteristics} \begin{theorem} \label{b5} For every $M$ in $\mathsf{D}_{c}^{b}(R)$ and $r\in\mathbb{Z}{}$ \begin{equation} \sum_{i,j\,\,(i\leq r)}(-1)^{i+j}(r-i)h_{W}^{i,j}(M)=e_{r}(M) \label{eq36 \end{equation} wher \begin{equation} e_{r}(M)=\sum_{j}(-1)^{j-1}T^{r-1,j-r}(M)+\sum_{i,j,l\,\,\,(\lambda _{i,j,l}\leq r-i)}(-1)^{i+j}(r-i-\lambda_{i,j,l}). \label{e26 \end{equation} \end{theorem} The sum in (\ref{eq36}) is over the pairs of integers $(i,j)$ such that $i\leq r$, and the first sum in (\ref{e26}) is over the integers $j$. In the second sum in (\ref{e26}), $(\lambda_{i,j,l})_{l}$ is the family of slopes (with multiplicities) of the $F$-isocrystal $H^{j}(M)_{K}^{i}$ and the sum is over the triples $(i,j,l)$ such that $\lambda_{i,j,l}\leq r-i$. \begin{example} \label{b4a}Let $M$ be a graded $R$-module, regarded as an element of $\mathsf{D}_{c}^{b}(R)$ concentrated in degree $j$. Let $F^{\prime}$ act on $M^{i}$ as $p^{i}F$ (assuming only nonnegative $i$'s occur). Then $F^{\prime}$ is a $\sigma$-linear endomorphism of $M$ regarded as a complex of $R^{0 $-module \[ \begin{tikzcd} \cdots\arrow{r} &M^{i-1}\arrow{d}{p^{i-1}F}\arrow{r}{d} &M^{i}\arrow{d}{p^{i}F}\arrow{r}{d} &M^{i+1}\arrow{d}{p^{i+1}F}\arrow{r}{d} &\cdots\\ \cdots\arrow{r} &M^{i-1}\arrow{r}{d} &M^{i}\arrow{r}{d} &M^{i+1}\arrow{r}{d} &\cdots, \end{tikzcd} \] and the second term in (\ref{e26}) equals \[ \sum\nolimits_{i,l\text{\thinspace\thinspace\thinspace}(\lambda_{i,l}\leq r)}(-1)^{i+j}(r-\lambda_{i,l}) \] where $(\lambda_{i,l})_{l}$ is the family of slopes of the $F$-isocrystal $(M^{i},p^{i}F)_{K}$. \end{example} \begin{lemma} \label{b4}For every distinguished triangle $M^{\prime}\rightarrow M\rightarrow M^{\prime\prime}\rightarrow M^{\prime}[1]$ in $\mathsf{D}_{c}^{b}(R)$ \begin{align*} m^{i,j}(M) & =m^{i.j}(M^{\prime})+m^{i,j}(M^{\prime\prime})\\ T^{i,j}(M) & =T^{i,j}(M^{\prime})+T^{i,j}(M^{\prime\prime}), \end{align*} and s \[ h_{W}^{i.j}(M)=h_{W}^{i,j}(M^{\prime})+h_{W}^{i,j}(M^{\prime\prime})\text{. \] \end{lemma} \begin{proof} The distinguished triangle gives rise to an exact sequence of graded $R$-module \[ \cdots\rightarrow H^{j}(M^{\prime})\rightarrow H^{j}(M)\rightarrow H^{j}(M^{\prime\prime})\rightarrow\cdots. \] with only finitely many nonzero terms. It suffices to show that $m$ and $T$ are additive on short exact sequence \begin{equation} 0\rightarrow M^{\prime}\rightarrow M\rightarrow M^{\prime\prime}\rightarrow0 \label{eq23 \end{equation} of coherent graded $R$-modules. But $m^{ij}(M)$ depends only on $\bar {K}\otimes_{W}M$ where $\bar{K}$ is the field of fractions of $\bar{W}$, and the sequence (\ref{eq23}) splits when tensored with $\bar{K}$. The additivity of $T$ follows from the description of $T^{i}$ in Lemma \ref{a1a}. 
\end{proof} \begin{lemma} \label{b4b}For every distinguished triangle $M^{\prime}\rightarrow M\rightarrow M^{\prime\prime}\rightarrow M^{\prime}[1]$ in $\mathsf{D}_{c ^{b}(R)$ \begin{equation} e_{r}(M)=e_{r}(M^{\prime})+e_{r}(M^{\prime\prime})\text{.} \label{eq8 \end{equation} \end{lemma} \begin{proof} The same argument as in the proof of Lemma \ref{b4} applies. \end{proof} \subsection{Proof of Theorem \ref{b5}} The numbers do not change under extension of the base field, and so we may suppose that $k$ is algebraically closed. First note that, if $M^{\prime }\rightarrow M\rightarrow M^{\prime\prime}\rightarrow M^{\prime}[1]$ is a distinguished triangle in $\mathsf{D}_{c}^{b}(R)$ and (\ref{eq36}) holds for $M^{\prime}$ and $M^{\prime\prime}$, then it holds for $M$ (apply \ref{b4} and \ref{b4b}). A complex $M$ in $\mathsf{D}_{c}^{b}(R)$ has only finitely many nonzero cohomology groups, and each has a finite filtration whose quotients are elementary graded $R$-modules. By using induction on the sum of the lengths of the shortest such filtrations, one sees that it suffices to prove the formula for a complex $M$ having only one nonzero cohomology module, which is a degree shift of an elementary graded $R$-module, i.e., we may assume $M=H^{j_{0 }(M)=N\{-i_{0}\}$ where $N$ is elementary. Assume that $N$ is elementary of type I. If $N$ is torsion, then both sides are zero. We may suppose that $N$ is a Dieudonne module of slope $\lambda \in\lbrack0,1[$ with multiplicity $m$ (because $N$ is isogenous to a direct sum of such modules --- recall that $k$ is algebraically closed). In this case (see \ref{b2}b), the only nonzero Hodge-Witt invariants of $M$ ar \begin{align*} h_{W}^{i_{0},j_{0}}(M) & =m^{i_{0},j_{0}}(M)=m(1-\lambda)\\ h_{W}^{i_{0}+1,j_{0}-1}(M) & =m^{i_{0}+1,j_{0}-1}(M)=m\lambda. \end{align*} Both sides of (\ref{eq36}) are zero if $r\leq i_{0}$, and so we may suppose that $r>i_{0}$. Then the left hand side (\ref{eq36}) is \begin{align*} & (-1)^{i_{0}+j_{0}}(r-i_{0})h^{i_{0},j_{0}}+(-1)^{i_{0}+1+j_{0}-1 (r-i_{0}-1)h^{i_{0}+1,j_{0}-1}\\ = & (-1)^{i_{0}+j_{0}}(r-i_{0})(1-\lambda)m+(-1)^{i_{0}+j_{0} (r-i_{0}-1)\lambda m\\ = & (-)^{i_{0}+j_{0}}(r-i_{0}-\lambda)m. \end{align*} On the other hand, the isocrystal $H^{j}(M)_{K}^{i}$ is zero for $(i,j)\neq(i_{0},j_{0})$ and $H^{j_{0}}(M)_{K}^{i_{0}}$ is an isocrystal with slope $\lambda$ of multiplicity $m$, and s \[ e_{r}(M)=(-1)^{i_{0}+j_{0}}(r-i_{0}-\lambda)m. \] If $N$ is of type II, i.e., $H^{j_{0}}(M)=U_{l}\{-i_{0}\}$, then $T^{i_{0},j_{0}}=1$ is the only non-zero $T$-invariant (see \ref{b2}c), and s \[ e_{r}(M)=\left\{ \begin{array} [c]{ll (-1)^{i_{0}+j_{0}} & \text{if }r=i_{0}+1\\ 0 & \text{otherwise \end{array} \right. \] The nonzero $h_{W}$-invariants are \[ h_{W}^{i_{0},j_{0}}=1\quad h_{W}^{i_{0}+1,j_{0}-1}=-2\quad h_{W ^{i_{0}+2,j_{0}-2}=1, \] from which (\ref{eq36}) follows by an elementary calculation. \begin{aside} \label{b5c}Here is an alternative proof of Theorem \ref{b5}. Le \[ L(r)=\sum\nolimits_{i,j\,\,(i\leq r)}(-1)^{i+j}(r-i)\left( T^{i,j (M)-2T^{i-1,j+1}(M)+T^{i-2,j+2}(M)\right) \text{. \] The contribution of $T^{i_{0},j_{0}}$ to this sum is $(-1)^{i_{0}+j_{0 }T^{i_{0},j_{0}}$ if $i_{0}=r-1$ and $0$ otherwise. Therefor \begin{equation} \begin{aligned} L(r) & =\sum\nolimits_{j}(-1)^{r-1+j}T^{r-1,j}\\ & =\sum\nolimits_{j}(-1)^{j-1}T^{r-1,j-r}. \label{eq39} \end{aligned} \end{equation} For an $F$-crystal $P$, let $P_{[i,i+1[}=(K\otimes_{W}P)_{[i,i+1[}$ (part with slopes $\lambda$, $i\leq\lambda<i+1$). 
From the degeneration of the slope spectral sequence (\ref{b0}) at $E_{1}$ modulo torsion, we find tha \[ H^{n}(sM)_{[i,i+1[}\simeq(H^{n-i}(M)_{K}^{i},p^{i}F). \] From this, it follows tha \[ m^{i,n-i}(M)=\sum_{\lambda\in\lbrack i,i+1[}(i+1-\lambda)h_{\lambda}^{n -\sum_{\lambda\in\lbrack i-1,i[}(i-1-\lambda)h_{\lambda}^{n \] where $h_{\lambda}^{n}$ is the multiplicity of $\lambda$ as a slope of $H^{n}(sM)$ (cf. \cite{illusie1983}, 6.2). Using these two statements, we find tha \begin{equation} \sum_{i,j\,\,(i\leq r)}(-1)^{i+j}(r-i)m^{i,j}(M)=\sum_{i,j,l\,\,\,(\lambda _{i,j,l}\leq r-i)}(-1)^{i+j}(r-i-\lambda_{i,j,l}). \label{eq40 \end{equation} On adding (\ref{eq39}) and (\ref{eq40}), we obtain (\ref{eq36}). \end{aside} \subsection{Weighted Hodge Euler characteristics} Following Ekedahl (1986, p.14), we define the \emph{Hodge numbers} of an $M$ in $\mathsf{D}_{c}^{b}(R)$ to b \[ h^{i,j}(M)=\dim_{k}(H^{j}(R_{1}\otimes_{R}^{L}M)^{i}). \] \begin{theorem} \label{b5a}For every $M$ in $\mathsf{D}_{c}^{b}(R)$ and $i\in\mathbb{Z}{}$ \begin{equation} \sum\nolimits_{j}(-1)^{j}h_{W}^{i,j}(M)=\sum\nolimits_{j}(-1)^{j h^{i,j}(M)\text{.} \label{eq25 \end{equation} \end{theorem} \begin{proof} As for Theorem \ref{b5}, it suffices to prove this for an elementary $R$-module, where it can be checked directly. See \cite{ekedahl1986}, IV, Theorem 3.2. \end{proof} For $M=R\Gamma(W\Omega_{X}^{\bullet})$, the formula (\ref{eq25}) was found independently by Crew and Milne (cf. ibid. p.86). \begin{theorem} \label{b5b}For every $M$ in $\mathsf{D}_{c}^{b}(R), \begin{equation} \sum_{i,j\,\,\,(i\leq r)}(-1)^{i+j}(r-i)h^{i,j}(M)=e_{r}(M). \label{eq27 \end{equation} \end{theorem} \begin{proof} We hav \begin{align*} \text{LHS} & \,\,=\sum\nolimits_{i\leq r}(-1)^{i}(r-i)\left( \sum \nolimits_{j}(-1)^{j}h^{i,j}(M)\right) \\ & \overset{\text{(\ref{eq25})}}{=}\sum\nolimits_{i\leq r}(-1)^{i}(r-i)\left( \sum\nolimits_{j}(-1)^{j}h_{W}^{i,j}(M)\right) =\text{RHS. \end{align*} \end{proof} \section{Internal Homs and tensor products in $\mathsf{D}_{c}^{b}(R)$} We review some constructions from \cite{ekedahl1985}. \subsection{The internal tensor product} Let $M$ and $N$ be graded $R$-modules. Ekedahl (1985, p.69) defines $M\ast N$ to be the largest quotient of $M\otimes_{W}N$ \[ x\otimes y\mapsto x\ast y\colon M\otimes_{W}N\rightarrow M\ast N\text{, \] in which the following relations hold: $Vx\ast y=V(x\ast Fy)$, $x\ast Vy=V(Fx\ast y)$, $F(x\ast y)=Fx\ast Fy$, $d(x\ast y)=dx\ast y+(-1)^{\deg (x)}x\ast dy$. Regard $W$ as a graded $R$-module concentrated in degree zero with $F$ acting as $\sigma$. Then \begin{equation} W\ast M\simeq M\simeq M\ast W, \label{eq62 \end{equation} and so $W$ plays the role of the identity object ${1\mkern-7mu1}$. The bifunctor $(M,N)\rightsquigarrow M\ast N$ of graded $R$-modules derives to a bifuncto \[ \ast^{L}\colon\mathsf{D}^{-}(R)\times\mathsf{D}^{-}(R)\rightarrow \mathsf{D}^{-}(R)\text{. \] If $M$ and $N$ are in $\mathsf{D}_{c}^{b}(R)$, then so also is \[ M\hat{\ast}N\overset{\textup{{\tiny def}}}{=}\widehat{M\ast^{L}N}\text{. \] See \cite{ekedahl1985}, I, 4.8; \cite{illusie1983}, 2.6.1.10. \subsection{The internal Hom} For graded $R$-modules $M,N$, we let $\Hom^{d}(M,N)$ denote the set of graded $R$-homomorphisms $M\rightarrow N$ of degree $d$, and we let $\Hom^{\bullet }(M,N)=\bigoplus_{d}\Hom^{d}(M,N)$. Let $_{R}R$ denote the ring $R$ regarded as a graded left $R$-module. 
The internal $\Hom$ of two graded $R$-modules $M,N$ is \[ \underline{\Hom}(M,N)\overset{\textup{{\tiny def}}}{=}\Hom^{\bullet}(_{R}R\ast M,N). \] This graded $\mathbb{Z}{}_{p}$-module becomes a graded $R$-module thanks to the right action of $R$ on $_{R}R$, and $\underline{\Hom}$ derives to a bifunctor \[ R\underline{\Hom}\colon\mathsf{D}(R)^{\mathrm{opp}}\times\mathsf{D}^{+}(R)\rightarrow\mathsf{D}(R) \] (denoted by $R\underline{\Hom}_{R}$ in \cite{illusie1983}, 2.6.2.6, by $R\underline{\Hom}_{R}^{!}$ in \cite{ekedahl1985}, p.73, and by $R\Hom_{R}^{!}$ in \cite{ekedahl1986}, p.8). The functor $R\underline{\Hom}(M,N)$ commutes with extension of the base field. For $M$ in $\mathsf{D}^{-}(R)$ and $N$ in $\mathsf{D}^{+}(R)$, \begin{align} R\underline{\Hom}(W,N)\overset{\text{(\ref{eq62})}}{\simeq}R\Hom^{\bullet}(_{R}R,N) & \simeq N\label{eq28c}\\ R\Hom(W,R\underline{\Hom}(M,N)) & \simeq R\Hom(M,N).\label{eq28} \end{align} (isomorphisms in $\mathsf{D}_{c}^{b}(R)$ and $\mathsf{D}(\mathbb{Z}{}_{p})$ respectively). Ekedahl shows that \[ R_{1}\otimes_{R}^{L}R\underline{\Hom}(M,N)\simeq R\Hom(R_{1}\otimes_{R}^{L}M,R_{1}\otimes_{R}^{L}N) \] (isomorphism in $\mathsf{D}(k[d])$) and that \begin{equation} \widehat{R\underline{\Hom}(M,N)}\simeq R\underline{\Hom}(\hat{M},\hat{N})\text{,}\label{eq41} \end{equation} and so his criterion (see \ref{a6}) shows that $R\underline{\Hom}(M,N)$ lies in $\mathsf{D}_{c}^{b}(R)$ when both $M$ and $N$ do. See \cite{illusie1983}, 2.6.2. \section{Homological algebra in the category $\mathsf{D}_{c}^{b}(R)$} Throughout this section, $S=\Spec k$, and $\Lambda_{m}=\mathbb{Z}{}/p^{m}\mathbb{Z}{}$. \subsection{The perfect site} An $S$-scheme $U$ is \emph{perfect} if its absolute Frobenius map $F_{\mathrm{abs}}\colon U^{(1/p)}\rightarrow U$ is an isomorphism. The \emph{perfection} $T^{\mathrm{pf}}$ of an $S$-scheme $T$ is the limit of the projective system $T\overset{F_{\mathrm{abs}}}{\longleftarrow}T^{(1/p)}\overset{F_{\mathrm{abs}}}{\longleftarrow}\cdots$. The scheme $T^{\mathrm{pf}}$ is perfect, and for any perfect $S$-scheme $U$, the canonical map $T^{\mathrm{pf}}\rightarrow T$ defines an isomorphism \[ \Hom_{S}(U,T^{\mathrm{pf}})\rightarrow\Hom_{S}(U,T). \] Let $\mathsf{Pf}/S$ denote the category of perfect affine schemes over $S$. A \emph{perfect group scheme} over $S$ is a representable functor $\mathsf{Pf}/S\rightarrow\mathsf{Gp}$. For any affine group scheme $G$ over $S$, the functor $U\rightsquigarrow G(U)\colon\mathsf{Pf}/S\rightarrow\mathsf{Gp}$ is a perfect group scheme represented by $G^{\mathsf{pf}}$. We say that a perfect group scheme is \emph{algebraic} if it is represented by an algebraic $S$-scheme. Let $\mathcal{S}{}$ denote the category of sheaves of commutative groups on $\left( \mathsf{Pf}/S\right)_{\mathrm{et}}$. The commutative perfect algebraic group schemes killed by some power of $p$ form an abelian subcategory $\mathcal{G}$ of $\mathcal{S}{}$ which is closed under extensions. Let $G\in\mathcal{G}$. The identity component $G^{\circ}$ of $G$ has a finite composition series whose quotients are isomorphic to $\mathbb{G}{}_{a}^{\mathrm{pf}}$, and the quotient $G/G^{\circ}$ is \'{e}tale. The dimension of $G$ is the dimension of any algebraic group whose perfection is $G^{\circ}$. The category $\mathcal{G}{}$ is artinian. See \cite{milne1976}, \S 2, or \cite{berthelot1981}, II. \begin{example} \label{n7}Let $f\colon X\rightarrow S$ be a smooth scheme over $S$. The functors $U\rightsquigarrow\Gamma(U,W_{m}\Omega_{X}^{i})$ are sheaves for the \'{e}tale topology on $X$.
The composit \[ W_{m+1}\Omega_{X}^{i}\overset{F}{\longrightarrow}W_{m}\Omega_{X ^{i}\rightarrow W_{m}\Omega_{X}^{i}/d(W_{m}\Omega_{X}^{i-1}) \] factors through $W_{m}\Omega_{X}^{i}$, and so defines a homomorphis \[ F\colon W_{m}\Omega_{X}^{i}\rightarrow W_{m}\Omega_{X}^{i}/d(W_{m}\Omega _{X}^{i-1})\text{. \] The sheaf $\nu_{m}(i)$ on $X_{\mathrm{et}}$ is defined to be the kernel o \[ 1-F\colon W_{m}\Omega_{X}^{i}\rightarrow W_{m}\Omega_{X}^{i}/d(W_{m}\Omega _{X}^{i-1}) \] (\cite{milne1976}, \S 1; \cite{berthelot1981}, p.209). The map $W_{m+1 \Omega_{X}^{i}\rightarrow W_{m}\Omega_{X}^{i}$ defines a surjective map $\nu_{m+1}(i)\rightarrow\nu_{m}(i)$ with kernel $\nu_{1}(i)$. Assume that $f$ is proper. The sheaves $R^{i}f_{\ast}\nu_{m}(r)$ lie in $\mathcal{G}{}$. When $m=1$, this is proved in \cite{milne1976}, 2.7, and the general case follows by induction on $m$. Following \cite{milne1986}, p.309, we defin \begin{align*} H^{i}(X,(\mathbb{Z}{}/p^{m}\mathbb{Z}{})(r)) & =H^{i-r}(X_{\mathrm{et} ,\nu_{m}(r))\\ H^{i}(X,\mathbb{Z}_{p}(r)) & =\varprojlim H^{i}(X,(\mathbb{Z}{ /p^{m}\mathbb{Z}{})(r))\text{. \end{align*} \end{example} \subsection{The functor $M\rightsquigarrow M^{F}$} For a complex $M$ of graded $R$-modules, we defin \begin{equation} M^{F}=R\Hom(W,M). \label{eq55 \end{equation} Then $M\rightsquigarrow M^{F}$ is a functor $\mathsf{D}^{+}(R)\rightarrow \mathsf{D}(\mathbb{Z}{}_{p})$. Let $\hat{R}$ denote the completion $\varprojlim R_{m}$ of $R$. From \[ W\simeq R^{0}/R^{0}(1-F)\simeq R/R(1-F), \] we get an exact sequenc \begin{equation} 0\rightarrow\hat{R}\overset{1-F}{\longrightarrow}\hat{R}\rightarrow W\rightarrow0 \label{eq20 \end{equation} of graded $R$-modules (\cite{ekedahl1985}, III, 1.5.1, p.90). If $M$ is in $\mathsf{D}_{c}^{b}(R)$, then, because $M$ is complete \begin{equation} R\Hom(\hat{R},M)\simeq R\Hom(R,M)\simeq M^{0} \label{eq20a \end{equation} (isomorphisms of graded $R$-modules; ibid. I, 5.9.3ii, p.78). Now (\ref{eq20}) gives a canonical isomorphism \begin{equation} M^{F}\simeq s(M^{0}\overset{1-F}{\longrightarrow}M^{0}) \label{eq22 \end{equation} (ibid. I, 1.5.4(i), p.90), which explains the notation. Note tha \begin{equation} s(M^{0}\overset{1-F}{\longrightarrow}M^{0})=\text{\textrm{Cone}}(1-F\colon M^{0}\rightarrow M^{0})[-1]\text{.} \label{eq22a \end{equation} For $M,N$ in $\mathsf{D}_{c}^{b}(R)$, we hav \begin{equation} R\Hom(M,N)\overset{\text{(\ref{eq28})}}{\simeq}R\Hom(W,R\underline{\Hom (M,N))\overset{\textup{{\tiny def}}}{=}R\underline{\Hom}(M,N)^{F}\label{eq38 \end{equation} in $\mathsf{D}(\mathbb{Z}{}_{p})$. \subsection{The functor $M\rightsquigarrow\mathcal{M}_{\bullet}^{F}$} Let $\mathcal{S}_{\bullet}$ denote the category of projective systems of sheaves $(P_{m})_{m\in\mathbb{N}}$ on $(\mathsf{Pf}/S)_{\mathrm{et}}$ with $P_{m}$ a sheaf of $\Lambda_{m}$-modules, and let $\mathcal{G}{}_{\bullet}$ denote the full subcategory of systems $(P_{m})_{m\in\mathbb{N}{}}$ with $P_{m}$ in $\mathcal{G}{}$. Then $\mathcal{G}{}_{\bullet}$ is an abelian subcategory of $\mathcal{S}{}_{\bullet}$ closed under extensions. Let $M$ be a graded $R$-module, and let $M_{m}=R_{m}\otimes_{R}M$. Let $\mathcal{M}_{m}^{i}{}$ denote the sheaf $\Spec(A)\rightsquigarrow M_{m ^{i}\otimes_{W}WA$ on $(\mathsf{Pf}/S)_{\mathrm{e}\mathrm{t}}$, and let $\mathcal{\mathcal{M}{}}^{i}$ denote the projective system $(\mathcal{M}{ _{m}^{i})_{m\in\mathbb{N}{}}$. Thus $\mathcal{M}{}^{i}\in\mathcal{S {}_{\bullet}$. Let $F$ (resp. 
$V$) denote the endomorphism of $\mathcal{M {}^{i}$ defined by $F\otimes\sigma$ (resp. $V\otimes\sigma^{-1}$) on $(M_{m}^{i}\otimes_{W}WA)_{m}$. In this way, we get an $R_{\bullet}$-modul \[ \mathcal{M}{}_{\bullet}\colon\quad\cdots\rightarrow\mathcal{M}{}_{\bullet ^{i}\overset{d}{\longrightarrow}\mathcal{M}{}_{\bullet}^{i+1}\rightarrow\cdots \] in $\mathcal{S}{}_{\bullet}$. Cf. \cite{illusieR1983} IV, 3.6.3. \begin{example} \label{b14}Let $M=M^{0}$ be an elementary graded $R$-module of type I. For each $m$, the map $1-F\colon\mathcal{M}{}_{m}\rightarrow\mathcal{M}{}_{m}$ is surjective with kernel the \'{e}tale group scheme $\mathcal{M}{}_{m}^{F}$ over $k$ corresponding to the natural representation of $\Gal(\bar{k}/k)$ on $(M\otimes_{W}\bar{W})^{F\otimes\sigma}$ . Therefore $\mathcal{M}{}_{\bullet }^{F}$ is a pro-\'{e}tale group scheme over $k$ wit \[ \mathcal{M}{}_{\bullet}^{F}(\bar{k})\overset{\textup{{\tiny def} }{=}\varprojlim\mathcal{M}{}_{m}^{F}(\bar{k})=(M\otimes_{W}\bar{W )^{F\otimes\sigma}. \] Cf. (\ref{e4}) below. \end{example} \begin{example} \label{b15}Let $M$ be an elementary graded $R$-module of type II. Then $1-F\colon\mathcal{M}{}_{\bullet}^{i}\rightarrow\mathcal{M}{}_{\bullet}^{i}$ is bijective for $i=0$, and it is surjective with kernel canonically isomorphic to $\mathbb{G}{}_{a}^{\mathrm{pf}}$ for $i=1$ (\cite{illusieR1983}, IV, 3.7, p.195). \end{example} \begin{proposition} \label{b6}Let $M$ be a coherent graded $R$-module. For each $i$, the map $1-F\colon\mathcal{M}{}_{\bullet}^{i}\rightarrow\mathcal{M}{}_{\bullet}^{i}$ is surjective, and its kernel $(\mathcal{M}{}_{\bullet}^{i})^{F}$ lies in $\mathcal{G}{}_{\bullet}$. There is an exact sequenc \[ 0\rightarrow U^{i}\rightarrow(\mathcal{M}{}_{\bullet}^{F})^{i}\rightarrow D^{i}\rightarrow0 \] with $U^{i}$ a connected unipotent perfect algebraic group of dimension $T^{i-1}(M)$ and $D^{i}$ the profinite \'{e}tale group corresponding to the natural representation of $\Gal(\bar{k}/k)$ on $(\heartsuit^{i}M\otimes _{W}\bar{W})^{F\otimes\sigma}$. \end{proposition} \begin{proof} When $M$ is an elementary graded $R$-module, the proposition is proved in the two examples. The proof can be extended to all coherent graded $R$-modules by using \cite{illusieR1983}, IV 3.10, 3.11, p.196. \end{proof} \begin{corollary} \label{b6d}Let $M$ be a coherent graded $R$-module, and let $H^{i (M)=Z^{i}(M)/B^{i}(M)$. Then $D^{i}(\bar{k})\overset{\textup{{\tiny def} }{=}\varprojlim_{m}D_{m}^{i}(\bar{k})$ is a finitely generated $\mathbb{Z {}_{p}$-module, an \[ D^{i}(\bar{k})\otimes_{\mathbb{Z}{}_{p}}\mathbb{Q}{}_{p}\simeq(H^{i (M)\otimes_{W}\bar{K})^{F\otimes\sigma}\text{. \] \end{corollary} \begin{proof} According to (\ref{b6}), $D^{i}(\bar{k})\simeq(\heartsuit^{i}M\otimes_{W \bar{W})^{F\otimes\sigma}$. Now the statement follows from (\ref{a18}). \end{proof} Let $\Gamma(S_{\mathrm{et}},-)$ denote the functor \[ (M_{m})_{m\in\mathbb{N}{}}\rightsquigarrow\varprojlim\Gamma(S_{\mathrm{et },M_{m})\colon\mathcal{S}{}_{\bullet}\rightarrow\Mod(\mathbb{Z}{}_{p})\text{. \] It derives to a functor $R\Gamma(S_{\mathrm{et}},-)\colon\mathsf{D (\mathcal{S}{}_{\bullet})\rightarrow\mathsf{D}(\mathbb{Z}{}_{p})$. For a coherent graded $R$-module $M$, the system $\mathcal{M}_{\bullet}{}$ depends only on the projective system $M_{\bullet}=(M_{m})_{m}$. 
The functor $M_{\bullet}\rightsquigarrow\mathcal{M}{}_{\bullet}\colon\Mod(R_{\bullet })\rightarrow\mathcal{S}{}_{\bullet}$ is exact, and so it defines a functo \[ M_{\bullet}\rightsquigarrow\mathcal{M}{}_{\bullet}\colon\mathsf{D}(R_{\bullet })\rightarrow\mathsf{D}(\mathcal{S}{}_{\bullet}). \] Let \begin{equation} \mathcal{M}{}_{\bullet}^{F}=\text{\textrm{Cone}}(\mathcal{M}_{\bullet ^{0}\overset{1-F}{\longrightarrow}{}\mathcal{M}_{\bullet}^{0})[-1]. \label{eq54 \end{equation} \begin{proposition} \label{b6e}The following diagram commutes: \[ \begin{tikzcd}[column sep=large, row sep=large] \mathsf{D}_{c}^{b}(R)\arrow{r}{M\rightsquigarrow M_{\bullet}}\arrow{rd}[swap] {M\rightsquigarrow\hat{M}} & \mathsf{D}^{b}(R_{\bullet})\arrow{r}{(-)^{F}}\arrow{d}{R\varprojlim} & \mathsf{D}^{b}(\mathcal{S {}_{\bullet})\arrow{d}{R\Gamma(S_{\mathrm{et}},-)} \\ & \mathsf{D}^{b}(R)\arrow{r}{(-)^{F}} & \mathsf{D}^{b}(\mathbb{Z}{}_{p}). \end{tikzcd} \] The functor $(-)^{F}$ on the top row (resp. bottom row) is that defined in (\ref{eq54}) (resp. (\ref{eq55}). In other words, for $M$ in $\mathsf{D _{c}^{b}(R)$ \[ R\Gamma(S_{\mathrm{et}},\mathcal{M}{}_{\bullet}^{F})\simeq M^{F}. \] \end{proposition} \begin{proof} This follows directly from the definitions and the isomorphism (\ref{eq22}, \ref{eq22a} \[ M^{F}\simeq\text{\textrm{Cone}}(1-F\colon M^{0}\rightarrow M^{0})[-1]\text{. \] \end{proof} \begin{proposition} \label{b6b}Let $M\in\mathsf{D}_{c}^{b}(R)$, and let $r\in\mathbb{Z}{}$. For each $j,$ there is an exact sequenc \[ 0\rightarrow U^{j}\rightarrow H^{j}(\mathcal{M(}r){}_{\bullet}^{F})\rightarrow D^{j}\rightarrow0 \] with $U^{j}$ a connected unipotent perfect algebraic group of dimension $T^{r-1,j-r}$ and $D^{j}$ the profinite \'{e}tale group corresponding to the natural representation of $\Gal(\bar{k}/k)$ on $(\heartsuit^{r}\left( H^{j}(M)\right) \otimes\bar{W})^{F\otimes\sigma}$. \end{proposition} \begin{proof} Apply (\ref{b6}) to $H^{j}(M(r))$ with $i=0$. \end{proof} \begin{corollary} \label{b6f}The $\mathbb{Z}{}_{p}$-module $D^{j}(\bar{k})$ is finitely generated, an \begin{equation} D^{j}(\bar{k})\otimes_{\mathbb{Z}{}_{p}}\mathbb{Q}{}_{p}\simeq(H^{r (H^{j}(M))\otimes\bar{K})^{F\otimes\sigma}\text{.} \label{eq60 \end{equation} Here $H^{r}(H^{j}(M))$ is the $E_{2}^{r,j}$ term in the slope spectral sequence for $M$. \end{corollary} \begin{proof} Apply (\ref{b6d}) to $H^{j}(M)$. \end{proof} \subsection{The functors $R\mathcal{H}om$} If $M,N$ in $\mathsf{D}_{c}^{b}(R)$, then $P\overset{\textup{{\tiny def} }{=}R\underline{\Hom}(M,N)$ lies in $\mathsf{D}_{c}^{b}(R)$ (see \S 3). Le \[ R\mathcal{H}{}om(M,N)=\mathcal{P}{}_{\bullet}^{F}\text{. \] Then $R\mathcal{H}{}om$ is a bifuncto \[ R\mathcal{H}{}om\colon\mathsf{D}_{c}^{b}(R)\times\mathsf{D}_{c}^{b (R)\rightarrow\ \mathsf{D}_{\mathcal{G}{}_{\bullet}}^{b}(\mathcal{S {}_{\bullet}) \] (denoted by $R\underline{\Hom}_{R}$ in \cite{ekedahl1986}, p.11, except that he allows graded homomorphisms of any degree). \begin{proposition} \label{b6a}For $M,N\in\mathsf{D}_{c}^{b}(R)$ \begin{equation} R\Gamma(S_{\mathrm{et}},R\mathcal{H}om(M,N))\simeq R\Hom(M,N). \label{eq59 \end{equation} \end{proposition} \begin{proof} From (\ref{b6e}) with $P=R\underline{\Hom}(M,N)$, we find tha \[ R\Gamma(S_{\mathrm{et}},R\mathcal{H}om(M,N))\simeq R\underline{\Hom (M,N)^{F}\text{. \] But $R\underline{\Hom}(M,N)^{F}\simeq R\Hom(M,N)$ (see (\ref{eq38})). 
\end{proof} For $M,N$ in $\mathsf{D}_{c}^{b}(R)$, we le \begin{align*} \Ext^{j}(M,N) & =H^{j}(R\Hom(M,N))\\ \underline{\Ext}^{j}(M,N) & =H^{j}(R\underline{\Hom}(M,N))\\ \mathcal{E}{}xt^{j}(M,N) & =H^{j}(R\mathcal{H}{}om(M,N)). \end{align*} The first is a $\mathbb{Z}{}_{p}$-module, the second is a coherent graded $R$-module, and the third is an object of $\mathcal{G}{}_{\bullet}$. From (\ref{eq28}) and (\ref{eq59}) we get spectral sequence \begin{align*} \Ext^{i}(W,\underline{\Ext}^{j}(M,N)) & \implies\Ext^{i+j}(M,N)\\ R^{i}\Gamma(S_{\mathrm{et}},\mathcal{E}{}xt^{j}(M,N)) & \implies \Ext^{i+j}(M,N)\text{. \end{align*} The identity component of $\mathcal{E}{}xt^{j}(M,N)$ is a perfect algebraic group of dimension $T^{i-1,j}(M,N)$ wher \[ T^{i,j}(M,N)\overset{\textup{{\tiny def}}}{=}T^{i,j}(R\underline{\Hom (M,N))=T^{i}(\underline{\Ext}^{j}(M,N))\text{. \] For example, it follows from (\ref{eq28c}) tha \begin{align*} \underline{\Ext}^{j}(W,M) & =H^{j}(M)\\ \mathcal{E}{}xt^{j}(W,M) & =H^{j}(\mathcal{M}{}_{\bullet}^{F}),\quad \text{and}\\ T^{i,j}(W,M) & =T^{i,j}(M). \end{align*} \begin{aside} \label{b6c}If $M\in\mathsf{D}_{c}^{b}(R)$, then the dua \[ D(M)\overset{\textup{{\tiny def}}}{=}R\underline{\Hom}(M,W) \] of $M$ also lies in $\mathsf{D}_{c}^{b}(R)$. If $M,N\in D_{c}^{b}(R)$, the \[ D(M)\hat{\ast}N\simeq R\underline{\Hom}(M,N) \] (see \cite{illusie1983}, 2.6.3.4). In particular \[ T^{i,j}(M,N)=T^{i,j}(D(M)\hat{\ast}N). \] \end{aside} \section{The proof of the main theorem} Throughout this section, $\Gamma$ is a profinite group isomorphic to $\mathbb{\hat{Z}}{}$, and $\gamma$ is a topological generator for $\Gamma$. For a $\Gamma$-module $M$, the kernel and cokernel of $1-\gamma\colon M\rightarrow M$ are denoted by $M^{\Gamma}$ and $M_{\Gamma}$ respectively. \subsection{Elementary preliminaries} Let $[S]$ denote the cardinality of a set $S$. For a homomorphism $f\colon M\rightarrow N$ of abelian groups, we le \[ z(f)=\frac{[\Ker(f)]}{[\Coker(f)] \] when both cardinalities are finite. \begin{lemma} \label{a1}Let $M$ be a finitely generated $\mathbb{Z}{}_{p}$-module with an action of $\Gamma$, and let $f\colon M^{\Gamma}\rightarrow M_{\Gamma}$ be the map induced by the identity map on $M$. Then $z(f)$ is defined if and only if $1$ is not a multiple root of the minimum polynomial $\gamma$ on $M$, in which case $M^{\Gamma}$ has rank equal to the multiplicity of $1$ as an eigenvalue of $\gamma$ on $M_{\mathbb{Q}{}_{p}}$ an \[ z(f)=\left\vert \prod\nolimits_{i,\,\,a_{i}\neq1}(1-a_{i})\right\vert _{p \] where $\left( a_{i}\right) _{i\in I}$ is the family of eigenvalues of $\gamma$ on $M_{\mathbb{Q}{}_{p}}$. \end{lemma} \begin{proof} Elementary and easy.\noindent \end{proof} \begin{lemma} \label{b10}Consider a commutative diagra \[ \begin{CD} \cdots@>>>C^{j-1}@.C^{j}@>{f^{j}}>>C^{j+1}@.\\ @.@VV{g^{j-1}}V@AA{h^{j}}A@VV{g^{j+1}}V\\ \cdots @>>> A^{j-1} @>{d^{j-1}}>> A^{j} @>{d^{j}}>> A^{j+1} @>>> \cdots\\ @.@VV{h^{j-1}}V@AA{g^{j}}A@VV{h^{j+1}}V\\ @.B^{j-1}@>{f^{j-1}}>>B^{j}@.B^{j+1}@>>>\cdots \end{CD} \] in which $A^{\bullet}$ is a bounded complex of abelian groups and each column is a short exact sequence (in particular, the $g$'s are injective and the $h$'s are surjective). The cohomology groups $H^{j}(A^{\bullet})$ are all finite if and only if the numbers $z(f^{j})$ are all defined, in which cas \[ \prod\nolimits_{j}[H^{j}(A^{\bullet})]^{(-1)^{j}}=\prod\nolimits_{j z(f^{j})^{(-1)^{j}}. 
\] \end{lemma} \begin{proof} Because $h^{j-1}$ is surjective, $g^{j}$ maps the image of $f^{j-1}$ into the image of $d^{j-1}$. Because $g^{j+1}$ is injective and $h^{j}$ is surjective, $h^{j}$ maps the kernel of $d^{j}$ onto the kernel of $f^{j}$. The snake lemma applied t \[ \begin{CD} @.\im(f^{j-1})@>{g^{j}}>>\im(d^{j-1}) @>>> 0 \\ @.@VVV@VVV@VVV@.\\ 0 @>>> B^{j} @>g^{j}>> \Ker(d^{j}) @>{h^j}>> \Ker(f^{j}) @>>> 0 \end{CD} \] gives an exact sequenc \[ 0\rightarrow\Coker(f^{j-1})\rightarrow H^{j}(A^{\bullet})\rightarrow \Ker(f^{j})\rightarrow0. \] Therefore $H^{j}(A^{\bullet})$ is finite if and only if $\Coker(f^{j-1})$ and $\Ker(f^{j})$ are both finite, in which cas \[ \lbrack H^{j}(A^{\bullet})]=[\Coker(f^{j-1})]\cdot\lbrack\Ker(f^{j})]. \] On combining these statements for all $j$, we obtain the lemma. \end{proof} \subsection{Cohomological preliminaries} Let $\Lambda$ be a finite ring, and let $\Lambda\Gamma$ be the group ring. For a $\Lambda$-module $M$, we let $M_{\ast}$ denote the corresponding co-induced module. Thus $M_{\ast}$ consists of the locally constant maps $f\colon \Gamma\rightarrow M$ and $\tau\in\Gamma$ acts on $f$ according to the rule $(\tau f)(\sigma)=f(\sigma\tau)$. When $M$ is a discrete $\Gamma$-module, there is an exact sequenc \begin{equation} 0\rightarrow M\longrightarrow M_{\ast}\overset{\alpha_{\gamma }{\longrightarrow}M_{\ast}\rightarrow0, \label{eq43 \end{equation} in which the first map sends $m\in M$ to the map $\sigma\mapsto\sigma m$ and the second map sends $f\in M_{\ast}$ to $\sigma\mapsto f(\sigma\gamma)-\gamma f(\sigma)$. Let $F$ be the functor $M\rightsquigarrow M^{\Gamma \colon\Mod(\Lambda\Gamma)\rightarrow\Mod(\Lambda)$. The class of co-induced $\Lambda\Gamma$-modules is $F$-injective, and so (\ref{eq43}) defines isomorphisms \[ RF(M)\simeq F(M_{\ast}\overset{\alpha_{\gamma}}{\longrightarrow}M_{\ast })\simeq(M\overset{1-\gamma}{\longrightarrow}M) \] in $\mathsf{D}^{+}(\Lambda)$. For the second isomorphism, note that $M_{\ast }^{\Gamma}$ is the set of constant functions $\Gamma\rightarrow M$, and if $f$ is the constant function with value $m$, then $(\alpha_{\gamma}f)(\sigma )=f(\sigma\gamma)-\gamma f(\sigma)=m-\gamma m$. Now let $\Mod(\Lambda_{\bullet}\Gamma)$ denote the category of projective systems $(M_{m})_{m\in\mathbb{N}{}}$ with $M_{m}$ a discrete $\Gamma$-module killed by $p^{m}$, and let $F$ be the functor $\Mod(\Lambda_{\bullet \Gamma)\rightarrow\Mod(\mathbb{Z}_{p})$ sending $(M_{m})_{m}$ to $\varprojlim M_{m}^{\Gamma}$. We say that an object $(M_{m})_{m}$ of $\Mod(\Lambda _{\bullet}\Gamma)$ is co-induced if $M_{m}$ is co-induced for each $m$. For every complex $X=(X_{m})_{m}$ of $\Lambda_{\bullet}\Gamma$-modules, there is an exact sequenc \begin{equation} 0\rightarrow X\rightarrow X_{\ast}\overset{\alpha_{\gamma}}{\longrightarrow }X_{\ast}\rightarrow0\label{eq46 \end{equation} of complexes with $X_{\ast}^{j}=(X_{m\ast}^{j})_{m}$ for all $j,m$. The class of co-induced $\Lambda_{\bullet}\Gamma$-modules is $F$-injective, and so (\ref{eq46}) defines isomorphism \begin{equation} RF(X)\simeq s(F(X_{\ast}\longrightarrow X_{\ast}))\simeq s(\vec{X \overset{1-\gamma}{\longrightarrow}\vec{X})\label{eq47 \end{equation} in $\mathsf{D}^{+}(\mathbb{Z}{}_{p})$ where $\vec{X}=(R\varprojlim)(X)$ and $\vec{X}\overset{1-\gamma}{\longrightarrow}\vec{X}$ is a double complex with $\vec{X}$ as both its zeroth and first column. 
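For example, if $\vec{X}$ is (quasi-isomorphic to) a single module $M$ placed in degree zero, then (\ref{eq47}) identifies $RF(X)$ with the two-term complex $M\overset{1-\gamma}{\longrightarrow}M$ placed in degrees $0$ and $1$, so that \[ R^{0}F(X)\simeq M^{\Gamma},\quad R^{1}F(X)\simeq M_{\Gamma},\quad R^{j}F(X)=0\text{ for }j\geq2, \] in the notation fixed at the beginning of this section.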
From (\ref{eq47}), we get a long exact sequence \begin{equation} \cdots\rightarrow H^{j-1}(\vec{X})\overset{1-\gamma}{\longrightarrow}H^{j-1}(\vec{X})\rightarrow R^{j}F(X)\rightarrow H^{j}(\vec{X})\overset{1-\gamma}{\longrightarrow}H^{j}(\vec{X})\rightarrow\cdots .\label{eq48} \end{equation} If $(M_{m})_{m}$ is a $\Lambda_{\bullet}\Gamma$-module satisfying the Mittag-Leffler condition, then \[ R^{j}F((M_{m})_{m})\simeq H_{\mathrm{cts}}^{j}(\Gamma,\varprojlim M_{m}) \] (continuous cohomology). Let $\Lambda_{\bullet}=(\mathbb{Z}{}/p^{m}\mathbb{Z}{})_{m}$. Then \[ R^{1}F(\Lambda_{\bullet})\simeq H_{\mathrm{cts}}^{1}(\Gamma,\mathbb{Z}{}_{p})\simeq\Hom_{\mathrm{cts}}(\Gamma,\mathbb{Z}{}_{p})\text{,} \] which has a canonical element $\theta$, namely, that mapping $\gamma$ to $1$. We can regard $\theta$ as an element of \[ \Ext^{1}(\Lambda_{\bullet},\Lambda_{\bullet})=\Hom_{\mathsf{D}^{+}(\Lambda_{\bullet}\Gamma)}(\Lambda_{\bullet},\Lambda_{\bullet}[1]). \] Thus, for $X$ in $\mathsf{D}^{+}(\Lambda_{\bullet}\Gamma)$, we obtain maps \begin{align*} \theta\colon X & \rightarrow X[1]\\ R\theta\colon RF(X) & \rightarrow RF(X)[1]. \end{align*} The second map is described explicitly by the following map of double complexes \[ \begin{tikzpicture}[baseline=(current bounding box.center), text height=1.5ex, text depth=0.25ex] \node (a) at (0,0) {$RF(X)$}; \node (b) at (2,0) {}; \node (c) at (4,0) {$\vec{X}$}; \node (d) at (6,0) {$\vec{X}$}; \node (e) [below=of a] {$RF(X)[1]$}; \node (f)[below=of b] {$\vec{X}$}; \node (g)[below=of c] {$\vec{X}$}; \node (h)[below=of d] {}; \node at (2,-2.05) {$\scriptstyle{-1}$}; \node at (4,-2.05) {$\scriptstyle{0}$}; \node at (6,-2.05) {$\scriptstyle{1}$}; \draw[->,font=\scriptsize,>=angle 90] (a) edge node[right]{$R\theta$} (e) (c) edge node[right]{$\gamma$} (g) (c) edge node[above]{$1-\gamma$} (d) (f) edge node[above]{$1-\gamma$} (g); \end{tikzpicture} \] For all $j$, the following diagram commutes \begin{equation} \begin{tikzcd} R^{j}F(X)\arrow{d}\arrow{r}{d^{j}}& R^{j+1}F(X)\\ H^{j}(\vec{X})\arrow{r}{\id} & H^{j}(\vec{X})\arrow{u} \end{tikzcd} \label{eq50} \end{equation} where $d^{j}=H^{j}(R\theta)$ and the vertical maps are those in (\ref{eq48}). The sequence \begin{equation} \cdots\longrightarrow R^{j-1}F(X)\overset{d^{j-1}}{\longrightarrow}R^{j}F(X)\overset{d^{j}}{\longrightarrow}R^{j+1}F(X)\longrightarrow \cdots\label{eq51} \end{equation} is a complex because $R\theta\circ R\theta=0$. \subsection{Review of $F$-isocrystals} Let $V$ be an $F$-isocrystal over $k$. The $\bar{K}$-module $\bar{V}\overset{\textup{{\tiny def}}}{=}\bar{K}\otimes_{K}V$ becomes an $F$-isocrystal over $\bar{k}$ with $\bar{F}$ acting as $\sigma\otimes F$. \begin{plain} \label{c2} Let $\lambda$ be a nonnegative rational number, and write $\lambda=s/r$ with $r,s\in\mathbb{N}{}$, $r>0$, $(r,s)=1$. Define $E^{\lambda}$ to be the $F$-isocrystal $K_{\sigma}[F]/\left( K_{\sigma}[F](F^{r}-p^{s})\right) $. When $k$ is algebraically closed, every $F$-isocrystal is semisimple, and the simple $F$-isocrystals are exactly the $E^{\lambda}$ with $\lambda \in\mathbb{Q}_{\geq0}$. Therefore an $F$-isocrystal has a unique (slope) decomposition \begin{equation} V=\bigoplus\nolimits_{\lambda\geq0}V_{\lambda} \label{eq63} \end{equation} with $V_{\lambda}$ a sum of copies of $E^{\lambda}$. See \cite{demazure1972}, IV. When $k$ is merely perfect, the decomposition (\ref{eq63}) of $\bar{V}$ is stable under $\Gal(\bar{k}/k)$, and so arises from a (slope) decomposition of $V$.
In other words, $V=\bigoplus\nolimits_{\lambda}V_{\lambda}$ with $\overline{V_{\lambda}}=\bar{V}_{\lambda}$. If $\lambda=s/r$ with $r,s$ as above, then $V_{\lambda}$ is the largest $K$-submodule of $V$ such that $F^{r}V_{\lambda}=p^{s}V_{\lambda}$. The $F$-isocrystal $V_{\lambda}$ is called the part of $V$ with slope $\lambda$, and $\{\lambda\mid V_{\lambda}\neq0\}$ is the set of slopes of $V$. \end{plain} \begin{plain} \label{e3}Let $V$ be an $F$-isocrystal over $k$. The characteristic polynomial \[ P_{V,\alpha}(t)\overset{\textup{{\tiny def}}}{=}\det(1-\alpha t|V) \] of an endomorphism $\alpha$ of $V$ lies in $\mathbb{Q}_{p}[t]$. Let $k=\mathbb{F}{}_{q}$ with $q=p^{a}$, so that $F^{a}$ is an endomorphism of $(V,F)$, and let \[ P_{V,F^{a}}(t)=\prod\nolimits_{i\in I}(1-a_{i}t),\quad a_{i}\in\mathbb{\bar{Q}}_{p}\text{.} \] According to a theorem of Manin, $\left( \ord_{q}(a_{i})\right) _{i\in I}$ is the family of slopes of $V$. Here $\ord_{q}$ is the $p$-adic valuation on $\mathbb{\bar{Q}}_{p}$ normalized so that $\ord_{q}(q)=1$. See \cite{demazure1972}, pp.89-90. \end{plain} \begin{plain} \label{e4}Let $(V,F)$ be an $F$-isocrystal over $k=\mathbb{F}{}_{p^{a}}$, and let $\lambda\in\mathbb{N}{}$. Let \[ V_{(\lambda)}=\{v\in\bar{V}\mid\bar{F}v=p^{\lambda}v\}\quad\text{(}\mathbb{Q}{}_{p}\text{-subspace of }\bar{V}\text{).} \] Then $V_{(\lambda)}$ is a $\mathbb{Q}{}_{p}$-structure on $V_{\lambda}$. In other words, $V_{\lambda}$ has a basis of elements $e$ with the property that $\bar{F}e=p^{\lambda}e$, and hence \[ \left( \gamma\otimes F^{a}\right) e=\bar{F}^{a}e=q^{\lambda}e. \] Therefore, as $c$ runs over the eigenvalues of $F^{a}$ on $V$ with $\ord_{q}(c)=\lambda$, the quotient $q^{\lambda}/c$ runs over the eigenvalues of $\gamma$ on $V_{(\lambda)}$; moreover, $c$ is a multiple root of the minimum polynomial of $F^{a}$ on $V_{\lambda}$ if and only if $q^{\lambda}/c$ is a multiple root of the minimum polynomial of $\gamma$ on $V_{(\lambda)}$. See \cite{milne1986}, 5.3. \end{plain} \begin{plain} \label{e5}Let $(V,F)$ be an $F$-isocrystal over $k=\mathbb{F}{}_{p^{a}}$. If $F^{a}$ is a semisimple endomorphism of $V$ (as a $K$-vector space), then $\End(V,F)$ is semisimple, because it is a $\mathbb{Q}{}_{p}$-form of the centralizer of $F^{a}$ in $\End(V)$; it follows that $(V,F)$ is semisimple. Conversely, if $(V,F)$ is semisimple, then $F^{a}$ is semisimple, because it lies in the centre of the semisimple algebra $\End(V,F)$. Let $V$ and $V^{\prime}$ be nonzero $F$-isocrystals; then $V\otimes V^{\prime}$ is semisimple if and only if both $V$ and $V^{\prime}$ are semisimple. \end{plain} \subsection{A preliminary calculation} In this subsection, $k$ is the finite field $\mathbb{F}_{q}$ with $q=p^{a}$, and $\Gamma=\Gal(\bar{k}/k)$. We take the Frobenius element $x\mapsto x^{q}$ to be the generator $\gamma$ of $\Gamma$. Recall that for $P$ in $\mathsf{D}_{c}^{b}(R)$, $H^{j}(sP)_{K}$ is an $F$-isocrystal. \begin{proposition} \label{b9c}Let $M,N\in\mathsf{D}_{c}^{b}(R)$, let $P=R\underline{\Hom}(M,N)$, and let $r\in\mathbb{Z}{}$. For each $j$, let \[ f_{j}\colon\Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}\rightarrow\Ext^{j}(\bar{M},\bar{N}(r))_{\Gamma} \] be the map induced by the identity map.
Then $z(f_{j})$ is defined if and only if $q^{r}$ is not a multiple root of the minimum polynomial of $F^{a}$ on $H^{j}(sP)_{K}$, in which case \[ z(f_{j})=\left\vert \prod_{a_{j,l}\neq q^{r}}\left( 1-\frac{a_{j,l}}{q^{r}}\right) \right\vert _{p}~~\left\vert \prod_{\mathrm{ord}_{q}(a_{j,l})<r}\frac{q^{r}}{a_{j,l}}\right\vert _{p}q^{T^{r-1,j-r}(P)} \] where $(a_{j,l})_{l}$ is the family of eigenvalues of $F^{a}$ acting on $H^{j}(sP)_{K}$. \end{proposition} \begin{proof} (Following the proof of \cite{milne1986}, 6.2.) Let $G^{j}$ denote the perfect pro-group scheme $\mathcal{E}{}xt^{j}(M,N(r))\overset{\textup{{\tiny def}}}{=}H^{j}(\mathcal{P}(r){}_{\bullet}^{F})$. There is an exact sequence \[ 0\rightarrow U^{j}\rightarrow G^{j}\rightarrow D^{j}\rightarrow0 \] in which $U^{j}$ is a connected unipotent perfect algebraic group of dimension $T^{r-1,j-r}(P)$ and $D^{j}$ is a pro-\'{e}tale group such that $D^{j}(\bar{k})$ is a finitely generated $\mathbb{Z}{}_{p}$-module and \begin{equation} D^{j}(\bar{k})\otimes_{\mathbb{Z}{}_{p}}\mathbb{Q}{}_{p}\simeq H^{j}(sP(r)_{K})_{(0)}\simeq H^{j}(sP_{K})_{(r)} \label{eq61} \end{equation} (see \ref{b6b}, \ref{b6f}). The map $1-\gamma\colon U^{j}(\bar{k})\rightarrow U^{j}(\bar{k})$ is surjective because it is \'{e}tale and $U^{j}$ is connected. On applying the snake lemma to \[ \begin{CD} 0 @>>> U^{j}(\bar{k}) @>>> G^{j}(\bar{k}) @>>> D^{j}(\bar{k}) @>>> 0\\ @.@VV{1-\gamma}V @VV{1-\gamma}V @VV{1-\gamma}V\\ 0@>>>U^{j}(\bar{k})@>>> G^{j}(\bar{k}) @>>> D^{j}(\bar{k}) @>>> 0, \end{CD} \] and using that the first vertical arrow is surjective, we obtain the upper and lower rows of the following exact commutative diagram \begin{equation} \begin{CD} 0 @>>> U^j(\bar{k})^{\Gamma} @>>> G^j (\bar{k})^{\Gamma} @>>> D^j (\bar{k})^{\Gamma} @>>> 0\\ @. @VV{f^{\prime}_j}V @VV{f_j}V @VV{f^{\prime\prime}_j}V @.\\ 0 @>>> 0 @>>> G^j (\bar{k})_{\Gamma} @>>> D^j (\bar{k})_{\Gamma} @>>> 0. \end{CD} \label{eq2a} \end{equation} Because $U^{j}$ has a composition series whose quotients are isomorphic to $\mathbb{G}{}_{a}^{\mathrm{pf}}$, \[ \lbrack U^{j}(k)]=q^{\dim(U^{j})}=q^{T^{r-1,j-r}}\text{.} \] On the other hand, it follows from (\ref{e4}) that the eigenvalues of $\gamma$ acting on $D^{j}(\bar{k})_{\mathbb{Q}{}_{p}}$ are the quotients $q^{r}/a_{j,l}$ with $\ord_{q}(a_{j,l})=r$. Therefore, (\ref{a1}) and (\ref{e4}) show that $z(f_{j}^{\prime\prime})$ is defined if and only if the minimum polynomial of $F^{a}$ on $H^{j}(sP)_{K}$ does not have $q^{r}$ as a multiple root, in which case \[ z(f_{j}^{\prime\prime})=\left\vert \prod_{l}\left( 1-\frac{q^{r}}{a_{j,l}}\right) \right\vert _{p} \] where the product is over the $a_{j,l}$ such that $\ord_{q}(a_{j,l})=r$ but $a_{j,l}\neq q^{r}$. Note that \[ \left\vert 1-\frac{a_{j,l}}{q^{r}}\right\vert _{p}=\left\vert 1-\frac{q^{r}}{a_{j,l}}\right\vert _{p}\quad\text{if }\ord_{q}(a_{j,l})=r, \] and \[ \left\vert 1-\frac{a_{j,l}}{q^{r}}\right\vert _{p}=\left\{ \begin{array}[c]{ll} |a_{j,l}/q^{r}|_{p} & \text{if }\ord_{q}(a_{j,l})<r\\ 1 & \text{if }\ord_{q}(a_{j,l})>r. \end{array} \right. \] Therefore \[ z(f_{j}^{\prime\prime})=\left\vert \prod_{a_{j,l}\neq q^{r}}\left( 1-\frac{a_{j,l}}{q^{r}}\right) \right\vert _{p}~~\left\vert \prod_{\mathrm{ord}_{q}(a_{j,l})<r}\frac{q^{r}}{a_{j,l}}\right\vert _{p} \] where both products are over all $a_{j,l}$ satisfying the conditions.
The snake lemma applied to (\ref{eq2a}) shows $z(f_{j})$ is defined if and only if both $z(f_{j}^{\prime})$ and $z(f_{j}^{\prime\prime})$ are defined, in which case $z(f_{j})=z(f_{j}^{\prime})\cdot z(f_{j}^{\prime\prime})$. The proposition now follows. \end{proof} \subsection{Definition of the complex $E(M,N(r))$} Recall (\ref{b6a}) that the bifunctor \[ R\Hom\colon\mathsf{D}(R)^{\mathrm{opp}}\times\mathsf{D}^{+}(R)\rightarrow \mathsf{D}(\mathbb{Z}{}_{p}) \] factors canonically through \[ R\Gamma(S_{\mathrm{et}},-)\colon\mathsf{D}^{+}(\mathcal{S}{}_{\bullet})\rightarrow\mathsf{D}(\mathbb{Z}{}_{p}) \] where $\Gamma(S_{\mathrm{et}},-)$ is the functor $(P_{m})_{m}\rightsquigarrow \varprojlim\Gamma(S_{\mathrm{et}},P_{m})$. Since $R\Gamma(S_{\mathrm{et}},-)$ obviously factors through \[ RF\colon\mathsf{D}^{+}(\Lambda_{\bullet}\Gamma)\rightarrow\mathsf{D}(\mathbb{Z}{}_{p}),\quad F=\left( (M_{m})_{m}\rightsquigarrow\varprojlim M_{m}^{\Gamma}\right) ,\quad\Gamma=\Gal(\bar{k}/k), \] so also does $R\Hom$. Therefore, for $M,N\in\mathsf{D}_{c}^{b}(R)$, there exists a well-defined object $X$ in $\mathsf{D}^{+}(\Lambda_{\bullet}\Gamma)$ such that $RF(X)=R\Hom(M,N(r))$. For an algebraically closed base field $k$, $RF(X)=\vec{X}$, and so, for a general $k$, $\vec{X}=R\Hom(\bar{M},\bar{N}(r))$. Now let $k$ be $\mathbb{F}{}_{q}$ with $q=p^{a}$. With $X$ as in the last paragraph, the sequence (\ref{eq48}) gives us short exact sequences \begin{equation} 0\rightarrow\Ext^{j-1}(\bar{M},\bar{N}(r))_{\Gamma}\rightarrow\Ext^{j}(M,N(r))\rightarrow\Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}\rightarrow0\text{.} \label{eq52} \end{equation} Moreover, (\ref{eq51}) becomes a complex \[ E(M,N(r))\colon\quad\cdots\rightarrow\Ext^{j-1}(M,N(r))\rightarrow \Ext^{j}(M,N(r))\rightarrow\Ext^{j+1}(M,N(r))\rightarrow\cdots \] This is the unique complex for which the following diagram commutes \begin{equation} \minCDarrowwidth5pt\begin{CD} @.@.\Ext^j(\bar{M},\bar{N}(r))^{\Gamma}@>f^j>> \Ext^{j}(\bar{M},\bar{N}(r))_{\Gamma}@.\\ @.@.@AAA@VVV\\ \cdots @>>> \Ext^{j-1}(M,N(r)) @>{d^{j-1}}>> \Ext^{j}(M,N(r)) @>{d^{j}}>> \Ext^{j+1}(M,N(r)) @>>> \cdots\\ @.@VVV@AAA@.\\ @.\Ext^{j-1}(\bar{M},\bar{N}(r))^{\Gamma}@>f^{j-1}>>\Ext^{j-1}(\bar{M},\bar{N}(r))_{\Gamma}@.@. \end{CD} \label{e30} \end{equation} (the vertical maps are those in (\ref{eq52}) and the maps $f^{j}$ are induced by the identity map). Let $P\in\mathsf{D}_{c}^{b}(R)$. The zeta function $Z(P,t)$ of $P$ is the alternating product of the characteristic polynomials of $F^{a}$ acting on the isocrystals $H^{j}(sP)_{K}$: \[ Z(P,t)=\prod\nolimits_{j}\det(1-F^{a}t\mid H^{j}(sP)_{K})^{(-1)^{j+1}}. \] \subsection{Proof of Theorem \ref{a0}} We first note that the condition on the minimum polynomial of $F^{a}$ implies that the minimum polynomial of $\gamma$ on $H^{j}(sP_{\bar{K}})_{(r)}$ does not have $1$ as a multiple root (see \ref{e4}). Let \[ P_{j}(t)=\det(1-F^{a}t\mid H^{j}(sP)_{K})=\prod\nolimits_{l}(1-a_{j,l}t). \] (a) We have $\Ext^{j}(\bar{M},\bar{N}(r))=\mathcal{E}{}xt^{j}(M,N(r))(\bar{k})$ where $\mathcal{E}{}xt^{j}(M,N(r))$ is a pro-algebraic group such that the identity component of $\mathcal{E}{}xt^{j}(M,N(r))$ is algebraic and the quotient of $\mathcal{E}{}xt^{j}(M,N(r))$ by its identity component is a pro-\'{e}tale group $(D_{m}^{j})_{m}$ such that $\varprojlim_{m}D_{m}^{j}(\bar{k})$ is a finitely generated $\mathbb{Z}{}_{p}$-module (see \ref{b6b}, \ref{b6f}).
Hence the $\mathbb{Z}{}_{p}$-modules $\Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}$ and $\Ext^{j}(\bar{M},\bar{N}(r))_{\Gamma}$ are finitely generated. Now \[ \rank(\Ext^{j}(M,N(r)))=\rank(\Ext^{j-1}(\bar{M},\bar{N}(r))_{\Gamma})+\rank(\Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}). \] The hypothesis on the action of the Frobenius element implies that \[ \Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}\otimes\mathbb{Q}{}\simeq\Ext^{j}(\bar{M},\bar{N}(r))_{\Gamma}\otimes\mathbb{Q}{} \] for all $j$, and so \[ \rank(\Ext^{j}(M,N(r)))=\rank(\Ext^{j-1}(\bar{M},\bar{N}(r))^{\Gamma})+\rank(\Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}). \] Therefore \[ \sum\nolimits_{j}(-1)^{j}\rank(\Ext^{j}(M,N(r)))=0. \] (b) Let $\rho_{j}$ be the multiplicity of $q^{r}$ as an inverse root of $P_{j}$. Then \[ \rho_{j}=\rank\Ext^{j}(\bar{M},\bar{N}(r))^{\Gamma}=\rank\Ext^{j}(\bar{M},\bar{N}(r))_{\Gamma}\text{,} \] and so \begin{align*} \sum\nolimits_{j}(-1)^{j+1}\cdot j\cdot\rank(\Ext^{j}(M,N(r))) & =\sum\nolimits_{j}(-1)^{j+1}\cdot j\cdot(\rho_{j-1}+\rho_{j})\\ & =\sum\nolimits_{j}(-1)^{j}\rho_{j}\\ & =\rho\text{.} \end{align*} (c) From Lemma \ref{b10} applied to the diagram (\ref{e30}), we find that \[ \chi(M,N(r))=\prod\nolimits_{j}z(f^{j})^{(-1)^{j}}\text{.} \] According to Proposition \ref{b9c}, \[ z(f^{j})=\left\vert \prod_{a_{j,l}\neq q^{r}}\left( 1-\frac{a_{j,l}}{q^{r}}\right) \right\vert _{p}~~\left\vert \prod_{\mathrm{ord}_{q}(a_{j,l})<r}\frac{q^{r}}{a_{j,l}}\right\vert _{p}q^{T^{r-1,j-r}(P)} \] where $(a_{j,l})_{l}$ is the family of eigenvalues of $F^{a}$ acting on $H^{j}(sP(r))_{\mathbb{Q}{}}$. Note that \[ \prod_{a_{j,l}\neq q^{r}}\left( 1-\frac{a_{j,l}}{q^{r}}\right) =\lim_{t\rightarrow q^{-r}}\frac{P_{j}(t)}{(1-q^{r}t)^{\rho_{j}}}\text{.} \] According to (\ref{e3}), \[ \left\vert \prod_{\mathrm{ord}_{q}(a_{j,l})<r}\frac{q^{r}}{a_{j,l}}\right\vert_{p}^{-1}=q^{\sum_{l\,\,(\lambda_{j,l}<r)}(r-\lambda_{j,l})} \] where $(\lambda_{j,l})_{l}$ is the family of slopes of $H^{j}(sP(r))_{\mathbb{Q}{}}$. Therefore \[ \chi(M,N(r))=\left\vert \lim_{t\rightarrow q^{-r}}Z(M,N,t)\cdot(1-q^{r}t)^{\rho}\right\vert _{p}^{-1}q^{-e_{r}(P)}. \] Theorem \ref{b5b} completes the proof. \section{Applications to algebraic varieties} Throughout, $S=\Spec(k)$ where $k$ is a perfect field of characteristic $p>0$. Recall that the zeta function of an algebraic variety $X$ over a finite field $\mathbb{F}{}_{q}$ is defined to be the formal power series $Z(X,t)\in \mathbb{Q}{}[[t]]$ such that \begin{equation} \log(Z(X,t))=\sum_{n>0}\frac{N_{n}t^{n}}{n},\quad N_{n}=\#(X(\mathbb{F}{}_{q^{n}})), \label{eq3} \end{equation} and that Dwork (1960)\nocite{dwork1960} proved that $Z(X,t)\in\mathbb{Q}{}(t)$. \subsection{Smooth complete varieties} Let $X$ be a smooth complete variety over a perfect field $k$, and let \[ M(X)=R\Gamma(X,W\Omega_{X}^{\bullet})\in\mathsf{D}_{c}^{b}(R) \] (see \ref{b1}). For all $j\geq0$, \begin{equation} H^{j}(s\left( M(X)\right) )\simeq H_{\mathrm{crys}}^{j}(X/W) \label{eq65} \end{equation} (isomorphism of $F$-isocrystals; see \ref{b1}), and so \[ Z(M(X),t)=\prod\nolimits_{j}\det(1-F^{a}t\mid H_{\mathrm{crys}}^{j}(X/W)_{\mathbb{Q}{}})^{(-1)^{j+1}}. \] That this equals $Z(X,t)$ is proved in \cite{katzM1974} for $X$ projective, and the complete case can be deduced from the projective case by using de Jong's theory of alterations (\cite{suh2012}). Moreover, $H_{\mathrm{crys}}^{j}(X/W)_{\mathbb{Q}{}}$ can be replaced by $H_{\mathrm{rig}}^{j}(X)$ (see \ref{r2} below).
Finally, $H_{\mathrm{abs}}^{j}(X,\mathbb{Z}{}_{p}(r))$ is the group $H^{j}(X,\mathbb{Z}_{p}(r))$ defined in (\ref{n7}) (\cite{milneR2005}), and \[ h^{i,j}(M(X))=h^{i,j}(X)\overset{\textup{{\tiny def}}}{=}\dim H^{j}(X,\Omega_{X}^{i}), \] because $R_{1}\otimes_{R}^{L}M(X)\simeq R\Gamma(X,\Omega_{X}^{\bullet})$ (see \ref{b1}). Therefore, when $X$ is projective, Theorem \ref{b04} becomes the $p$-part of Theorem 0.1 of \cite{milne1986}. \subsection{Rigid cohomology} Before considering more general algebraic varieties, we briefly review the theory of rigid cohomology. This was introduced in the 1980s by Pierre Berthelot as a common generalization of crystalline and Monsky-Washnitzer cohomology. The book \cite{lestum2007} is a good reference for the foundations. We write $H_{\mathrm{rig}}^{i}(X)$ (resp. $H_{\mathrm{rig},c}^{i}(X)$) for the rigid cohomology (resp. rigid cohomology with compact support) of a variety $X$ over a perfect field $k$. \begin{plain} \label{r1}Both $H_{\mathrm{rig}}^{i}(X)$ and $H_{\mathrm{rig},c}^{i}(X)$ are $F$-isocrystals over $k$; in particular, they are finite-dimensional vector spaces over $K$. Cohomology with compact support is contravariant for proper maps and covariant for open immersions; ordinary cohomology is contravariant for all regular maps. The K\"{u}nneth theorem is true for both cohomology theories. (\cite{berthelot1997a}, \cite{berthelot1997b}, \cite{gk2002}). \end{plain} \begin{plain} \label{r2}When $X$ is a smooth complete variety, \[ H_{\mathrm{rig}}^{i}(X)\simeq H_{\mathrm{crys}}^{i}(X)_{\mathbb{Q}{}} \] (canonical isomorphism of $F$-isocrystals). (\cite{berthelot1986}). \end{plain} \begin{plain} \label{r3}Let $U$ be an open subvariety of $X$ with closed complement $Z$; then there is a long exact sequence \[ \cdots\rightarrow H_{\mathrm{rig},c}^{i}(U)\rightarrow H_{\mathrm{rig},c}^{i}(X)\rightarrow H_{\mathrm{rig},c}^{i}(Z)\rightarrow\cdots \] (\cite{berthelot1986}, 3.1). \end{plain} \begin{plain} \label{r4}Rigid cohomology is a Bloch-Ogus theory. In particular, there is a theory of rigid homology and cycle class maps. (\cite{petr2003}.) \end{plain} \begin{plain} \label{r5}Rigid cohomology is a mixed-Weil cohomology theory and hence factors through the triangulated category of complexes of mixed motives. (\cite{cd2012, cd2013}). \end{plain} \begin{plain} \label{r6}Rigid cohomology satisfies proper cohomological descent (\cite{tsuz2003}). \end{plain} \begin{plain} \label{r7}Rigid cohomology (with compact support) can be described in terms of the logarithmic de Rham-Witt cohomology of smooth simplicial schemes (Lorenzon, Mokrane, Tsuzuki, Shiho, Nakkajima). We explain this below. \end{plain} \begin{plain} \label{r8}When $k$ is finite, say, $k=\mathbb{F}{}_{p^{a}}$, \[ Z(X,t)=\prod\nolimits_{j}\det(1-F^{a}t\mid H_{\mathrm{rig},c}^{j}(X))^{(-1)^{j+1}} \] (\cite{etesse1993}). \end{plain} \begin{plain} \label{r9}When $k$ is finite, the $F$-isocrystals $H_{\mathrm{rig}}^{i}(X)$ and $H_{\mathrm{rig},c}^{i}(X)$ are mixed; in particular, the eigenvalues of $\Phi=F^{a}$ are Weil numbers. \end{plain} The functors $X\rightsquigarrow H_{\mathrm{rig}}^{i}(X)$ and $X\rightsquigarrow H_{\mathrm{rig},c}^{i}(X)$ arise from functors to $\mathsf{D}_{\text{\textrm{iso}}}^{b}(K_{\sigma}[F])$, which we denote $h_{\mathrm{rig}}(X)$ and $h_{\mathrm{rig},c}(X)$ respectively. \subsection{Varieties with log structure} Endow $S$ with a fine log structure, and let $X$ be a complete log-smooth log variety of Cartier type over $S$ (\cite{kato1989}).
In this situation, Lorenzon\nocite{lorenzon2002} (2002, Theorem 3.1) defines a complex $M(X)\overset{\textup{{\tiny def}}}{=}R\Gamma(X,W\Omega_{X}^{\bullet})$ of graded $R$-modules, and proves that it lies in $D_{c}^{b}(R)$. Therefore, Theorem \ref{b04} applies to $X$. \subsection{Smooth varieties} Let $V=X\smallsetminus E$ be the complement of a divisor with normal crossings $E$ in a smooth complete variety $X$ of dimension $n$, and let $m_{X}$ be the canonical log structure on $X$ defined by $E$, \[ m_{X}=\{f\in\mathcal{O}{}_{X}\mid f\text{ is invertible outside }E\} \] (\cite{kato1989}, 1.5). Then $(X,m_{X})$ is log-smooth (ibid. \S 3). Define $M(V)\in\mathsf{D}_{c}^{b}(R)$ to be the complex of graded $R$-modules attached to $(X,m_{X})$ as above \[ M(V)=R\Gamma((X,m_{X}),W\Omega_{X}^{\bullet})=R\Gamma(X,W\Omega_{X}^{\bullet}(\log E)). \] We caution that this definition of $M(V)$ uses the presentation of $V$ as $X\smallsetminus E$ --- it is not known at present that $M(V)$ depends only on $V$. However, \[ H^{i}(s(M(V))_{\mathbb{Q}{}})\simeq H_{\mathrm{rig}}^{i}(V) \] (\cite{nakk2012}, 1.0.18, p.13), and so $s(M(V))_{\mathbb{Q}{}}$ is independent of the compactification $X$ of $V$. We define $M_{c}(V)$ to be the Tate twist of the dual of $M(V)$, \[ M_{c}(V)=D(M(V))(-n)\text{.} \] (see \ref{b6c}). From Berthelot's duality of rigid cohomology (\cite{berthelot1997a}; \cite{nakk2008}, 3.6.0.1), we have the following isomorphism of $F$-isocrystals \[ H_{\mathrm{rig},c}^{j}(V)\simeq\Hom_{K}(H_{\mathrm{rig}}^{2n-j}(V),K(-n)). \] It follows that \begin{equation} H^{j}(s(M_{c}(V))_{\mathbb{Q}{}})\simeq H_{\mathrm{rig},c}^{j}(V). \label{eq66} \end{equation} We define \[ H_{c}^{j}(V,\mathbb{Z}{}_{p}(r))=\Hom(W,M_{c}(V)(r)[j])\text{.} \] Now take $k=\mathbb{F}{}_{p^{a}}$. It follows from (\ref{r8}) and (\ref{eq66}) that \[ Z(V,t)=Z(M_{c}(V),t)\text{.} \] Moreover, \[ R_{1}\otimes_{R}^{L}M(V)\simeq R\Gamma(X,\Omega_{X}^{\bullet}(\log E)). \] (\cite{lorenzon2002}, 2.17, or \cite{nakk2008}, p.184). Therefore, in this case, Theorem \ref{b04} becomes the following statement. \begin{theorem} \label{b05}Assume that $q^{r}$ is not a multiple root of the minimum polynomial of $F^{a}$ acting on $H_{\mathrm{rig}}^{j}(V)$ for any $j$. \begin{enumerate} \item The groups $H_{c}^{j}(V,\mathbb{Z}_{p}(r))$ are finitely generated $\mathbb{Z}{}_{p}$-modules, and the alternating sum of their ranks is zero. \item The zeta function $Z(V,t)$ of $V$ has a pole at $t=q^{-r}$ of order \[ \rho=\sum\nolimits_{j}(-1)^{j+1}\cdot j\cdot\rank_{\mathbb{Z}{}_{p}}\left( H_{c}^{j}(V,\mathbb{Z}{}_{p}(r))\right) \text{.} \] \item The cohomology groups of the complex \[ E(V,r)\colon\quad\cdots\rightarrow H_{c}^{j-1}(V,\mathbb{Z}{}_{p}(r))\rightarrow H_{c}^{j}(V,\mathbb{Z}{}_{p}(r))\rightarrow H_{c}^{j+1}(V,\mathbb{Z}{}_{p}(r))\rightarrow\cdots \] are finite, and the alternating product of their orders $\chi(V,\mathbb{Z}{}_{p}(r))$ satisfies \[ \left\vert \lim_{t\rightarrow q^{-r}}Z(V,t)\cdot(1-q^{r}t)^{\rho}\right\vert _{p}^{-1}=\chi(V,\mathbb{Z}{}_{p}(r))\cdot q^{\chi(V,r)} \] where $\chi(V,r)=\sum_{i\leq r,j}(-1)^{i+j}(r-i)h^{i,j}(V)$. \end{enumerate} \end{theorem} We caution the reader that it is not known that every smooth variety $U$ can be expressed as the complement of a normal crossings divisor in a smooth complete variety.
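As an elementary illustration of (\ref{eq3}) and of the pole at $t=q^{-r}$ appearing in Theorem \ref{b05} (included here only for orientation), take $V=\mathbb{G}_{m}=\mathbb{P}^{1}\smallsetminus\{0,\infty\}$ over $\mathbb{F}{}_{q}$. Then $N_{n}=q^{n}-1$, and so \[ Z(V,t)=\exp\Big(\sum_{n>0}\frac{(q^{n}-1)t^{n}}{n}\Big)=\frac{1-t}{1-qt}, \] which, for $r=1$, has a pole at $t=q^{-1}$ of order $\rho=1$.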
\subsection{General varieties} \subsubsection{Philosophy} With each variety $V$ over $k$, there should be associated objects $M(V),$ $M_{c}(V)$, $M^{BM}(V)$, $M^{h}(V)$ in $\mathsf{D}_{c}^{b}(R)$ arising as the $p$-adic realizations of the various motives of $V$. See the discussion \cite{voevodsky2000}, pp.181--182. At present, it does not seem to be known whether there exists a $W$-linear cohomology theory underlying Berthelot's rigid cohomology, i.e., a cohomology theory that gives finitely generated $W$-modules $H_{c}^{j}(V)$ stable under $F$ with $\mathbb{Q}\otimes_{\mathbb{Z}{}}H_{c}^{j}(V)=H_{\mathrm{rig},c}^{j}(V)$ for each variety $V$. Deligne's technique of cohomological descent in Hodge theory has been transplanted to rigid and log de Rham-Witt theory by the brave efforts of N. Tsuzuki, A. Shiho, and Y. Nakkajima. While their results do not provide the invariants of $V$ above, they are still sufficient for applications to zeta functions. Even though $M_{c}(V)$ is the only relevant object for zeta values, we consider both $M(V)$ and $M_{c}(V)$. \subsubsection{The ordinary cohomology object $M(V)$} Let $V$ be a variety of dimension $n$ over $k$ equipped with an embedding $V\hookrightarrow V^{\prime}$ of $V$ into a proper scheme $V^{\prime}$. Then (see \cite{nakk2012}, especially 1.0.18, p.13), there is a simplicial proper hypercovering $(U_{\bullet},X_{\bullet})$ of $(V,V^{\prime})$ with $X_{\bullet}$ a proper smooth simplicial scheme over $k$ and $U_{\bullet}$ the complement of a simplicial strict divisor with normal crossings $E_{\bullet}$ on $X_{\bullet}$; moreover, \[ H_{\mathrm{rig}}^{i}(V)\simeq H^{i}(X_{\bullet},W\Omega_{X_{\bullet}}^{\bullet}(\log E_{\bullet}))_{\mathbb{Q}{}}. \] For each $j\geq0$, \[ R\Gamma(X_{j},W\Omega_{X_{j}}^{\bullet}(\log E_{j}))\in D_{c}^{b}(R) \] (\cite{lorenzon2002}). As $\mathsf{D}_{c}^{b}(R)$ is a triangulated subcategory of $D(R)$, this implies that \[ R\Gamma(X_{\leq d},W\Omega_{X_{\leq d}}^{\bullet}(\log E_{\leq d}))\in\mathsf{D}_{c}^{b}(R) \] for each truncation $X_{\leq d}$ of the simplicial scheme $X_{\bullet}$. The inclusion $X_{\leq d}\rightarrow X_{\bullet}$ induces an isomorphism \[ H^{i}(X_{\bullet},W\Omega_{X_{\bullet}}^{\bullet}(\log E_{\bullet}))_{\mathbb{Q}{}}\simeq H^{i}(X_{\leq d},W\Omega_{X_{\leq d}}^{\bullet}(\log E_{\leq d}))_{\mathbb{Q}{}} \] for all $i$ provided $d>(n+1)(n+2)$ because both terms are isomorphic to $H_{\mathrm{rig}}^{i}(V)$. For the left hand side, this follows from 1.0.8 or 12.9.1 of \cite{nakk2012}. For the right hand side, we apply Theorem 3.5.4, p.243, of \cite{nakk2008}: $H_{\mathrm{rig}}^{i}(V)=0$ for $i>2n$ and the spectral sequence 3.5.4.1 degenerates at $E_{1}$, implying that only finitely many $X_{j}$'s contribute to the rigid cohomology of $V$. The bound on $d$ comes from the arguments following the isomorphism 3.5.0.4 on p.242 ibid. See also pp.122-125 of \cite{nakk2012}. We let \[ M(V)=R\Gamma(X_{\leq d},W\Omega_{X_{\leq d}}^{\bullet}(\log E_{\leq d})) \] for any integer $d>(n+1)(n+2)$. While $M(V)$ may depend on $d$, $X_{\bullet}$, and the embedding into $V^{\prime}$, the object $s(M(V))_{\mathbb{Q}{}}$ is independent of these choices up to canonical isomorphism because $H^{i}(s(M(V))_{\mathbb{Q}{}})\simeq H_{\mathrm{rig}}^{i}(V)$. Recall that $H_{\mathrm{rig}}^{i}(V)=0$ for $i>2n$. We need to truncate because it is not clear that the object $R\Gamma(X_{\bullet},W\Omega_{X_{\bullet}}^{\bullet}(\log E_{\bullet}))$ lies in $\mathsf{D}_{c}^{b}(R)$.
\subsubsection{The cohomology object with compact support $M_{c}(V)$} Let $V\hookrightarrow V^{\prime}$ be as in the last subsubsection. Let $\iota\colon Z\hookrightarrow V^{\prime}$ denote the inclusion of the reduced closed complement $Z$ of $V$. One can find a proper hypercovering $Y_{\bullet}\rightarrow Z$ and a morphism $f\colon Y_{\bullet}\rightarrow X_{\bullet}$ lifting $\iota$. Applying the results of the previous subsubsection to $Z$ and fixing an integer $d>(n+1)(n+2)$, we get $M(Z)$ and a map $f^{\ast}\colon M(V^{\prime})\rightarrow M(Z)$. We define $M_{c}(V)[1]$ to be the mapping cone of $f^{\ast}$. This is an object of $\mathsf{D}_{c}^{b}(R)$. \begin{lemma} \label{b06}For all $V\hookrightarrow V^{\prime}$ as above \[ H^{i}(s(M_{c}(V))_{\mathbb{Q}{}})\simeq H_{\mathrm{rig},c}^{i}(V)\,. \] \end{lemma} \begin{proof} As the map $f$ lifts $\iota$, the map \[ f^{\ast}\colon H^{i}(s(M(V^{\prime}))_{\mathbb{Q}{}})\rightarrow H^{i}(s(M(Z))_{\mathbb{Q}{}}) \] can be identified with the map $\iota^{\ast}\colon H_{\mathrm{rig}}^{i}(V^{\prime})\rightarrow H_{\mathrm{rig}}^{i}(Z)$. But as $V^{\prime}$ and $Z$ are proper, rigid cohomology is the same as rigid cohomology with compact support. The lemma now follows from the long exact sequence (\ref{r3}). \end{proof} Combining the lemma with the result of Etesse-Le Stum above, we obtain that the zeta function $Z(V,t)$ of $V$ is equal to the zeta function of $M_{c}(V)$. Therefore, from Theorem \ref{a0} we obtain Theorem \ref{b05} for $V$. \subsection{Application of strong resolution of singularities} Geisser (2006)\nocite{geisser2006} has shown how the assumption of a strong form of resolution of singularities leads to a definition of groups $H_{c}^{i}(V,\mathbb{Z}{}(r))$ for an arbitrary variety $V$ over $k$, which, when $k$ is finite, are closely connected to special values of zeta functions. His definition involves the eh-topology, where the coverings are generated by \'{e}tale coverings and abstract blow-ups (ibid. 2.1). We now sketch how his argument provides an object $M_{c}(V)\in\mathsf{D}_{c}^{b}(R)$. For a complete $V$, we define $M(V)=R\Gamma(V_{\mathrm{eh}},\rho^{\ast}W\Omega_{V}^{\bullet})$ where $\rho^{\ast}$ denotes pullback from eh-sheaves on the category of smooth varieties over $k$ to eh-sheaves on all varieties over $k$. We show that $M(V)\in\mathsf{D}_{c}^{b}(R)$ by using induction on the dimension of $V$. Resolution of singularities gives us a proper map $V^{\prime}\rightarrow V$ inducing an isomorphism from an open subvariety $U^{\prime}$ of the smooth variety $V^{\prime}$ onto an open subvariety $U$ of $V$. Moreover, $U^{\prime}$ is the complement in $V^{\prime}$ of a divisor with normal crossings, and so we can define $M(U^{\prime})\in\mathsf{D}_{c}^{b}(R)$ as above (using the eh-topology). Now $M(V)\in\mathsf{D}_{c}^{b}(R)$ because $M(U)\overset{\textup{{\tiny def}}}{=}M(U^{\prime})\in\mathsf{D}_{c}^{b}(R)$ and $M(V\smallsetminus U)\in\mathsf{D}_{c}^{b}(R)$ (by induction). To define $M_{c}(V)$ for an arbitrary $V$, choose a compactification $V^{\prime}$ of $V$, and let \[ M_{c}(V)=\mathrm{Cone}(M(V^{\prime})\rightarrow M(Z))[-1],\quad Z\overset{\textup{{\tiny def}}}{=}V^{\prime}\smallsetminus V\text{.} \] Clearly, $M_{c}(V)\in\mathsf{D}_{c}^{b}(R)$. The eh-topology is crucial for proving that this definition is independent of the compactification (ibid. 3.4). Given $M_{c}(V)$, we define \[ H_{c}^{i}(V,\mathbb{Z}{}_{p}(r))=\Hom_{\mathsf{D}_{c}^{b}(R)}(W,M_{c}(V)(r)[i]).
\] This agrees with Geisser's group tensored with $\mathbb{Z}{}_{p}$, because the two agree for smooth complete varieties and satisfy the same functorial properties. \subsection{Deligne-Mumford Stacks} Olsson (2007, first three chapters) extends the theory of crystalline cohomology to certain algebraic stacks. He also shows (ibid. Chapter 4) that the crystalline definition (\cite{illusie1983}, 1.1(iv)) of the de Rham-Witt complex can be extended to stacks. Let $\mathcal{S}/W$ be a flat algebraic stack equipped with a lift of the Frobenius endomorphism from $\mathcal{S}{}_{0}$ compatible with the action of $\sigma$ on $W$. Let $\mathcal{X}{}\rightarrow\mathcal{S}{}$ be a smooth morphism of algebraic stacks with $\mathcal{X}{}$ a Deligne-Mumford stack. Then $W\Omega_{\mathcal{X}{}/S}^{\bullet}$ is a complex of sheaves of $R$-modules on $\mathcal{X}{}$, and there is a canonical isomorphism \begin{equation} H^{j}(s(R\Gamma(W\Omega_{\mathcal{X}{}/S}^{\bullet})))_{\mathbb{Q}{}}\simeq H_{\mathrm{crys}}^{j}(\mathcal{X}{}/W)_{\mathbb{Q}{}} \label{eq34} \end{equation} (\cite{olsson2007}, 4.4.17). Under certain hypotheses on $\mathcal{S}{}$ and $\mathcal{X}$ (ibid. 4.5.1), Ekedahl's criterion (see \ref{a6}) can be used to show that $R\Gamma(W\Omega_{\mathcal{X}{}/S}^{\bullet})\in \mathsf{D}_{c}^{b}(R)$ (ibid. 4.5.19) and that (\ref{eq34}) is an isomorphism of $F$-isocrystals. Now assume that $k=\mathbb{F}{}_{q}$, $q=p^{a}$. The zeta function $Z(\mathcal{X}{},t)$ of a stack $\mathcal{X}$ over $k$ is defined by (\ref{eq3}), but with \[ N_{m}=\sum_{x\in\lbrack\mathcal{X}(\mathbb{F}{}_{q^{m}})]}\frac{1}{\#\Aut_{x}\mathbb{F}{}_{q^{m}}} \] (see \cite{sun2012}, p.49). Assume that $\mathcal{X}{}$ is a Deligne-Mumford stack over $S$ satisfying Olsson's conditions, and let $M(\mathcal{X}{})=R\Gamma(W\Omega_{\mathcal{X}{}/S}^{\bullet})\in\mathsf{D}_{c}^{b}(R)$. From (\ref{eq34}), we see that \[ Z(M(\mathcal{X}{}),t)=\prod\nolimits_{j}\det(1-F^{a}t\mid H_{\mathrm{crys}}^{j}(\mathcal{X}{}/W)_{\mathbb{Q}{}})^{(-1)^{j+1}}\text{.} \] We expect that the two zeta functions agree (see ibid. 1.1 for the $\ell$-version of this). Then Theorem \ref{b05} will hold for $\mathcal{X}{}$ with \[ H^{j}(\mathcal{X}{},\mathbb{Z}{}_{p}(r))\overset{\textup{{\tiny def}}}{=}\Hom_{\mathsf{D}_{c}^{b}(R)}(W,M(\mathcal{X}{})(r)[j])\text{.} \] \subsection{Crystals} Let $X$ be a smooth scheme over $S$, and let $E$ be a crystal on $X$. Etesse (1988a, II, 1.2.5)\nocite{etesse1988a} defines a de Rham-Witt complex $E\otimes W\Omega_{X/S}^{\bullet}$ on $X$, and, under some hypotheses on $X$ and $E$, he proves that $M(X,E)\in\mathsf{D}_{c}^{b}(R)$ (ibid., II, 1.2.7) and that there is a canonical isomorphism \[ H^{j}(R\Gamma(E\otimes W\Omega_{X/S}^{\bullet}))\simeq H_{\mathrm{crys}}^{j}(X/S,E) \] (ibid. II, 2.7.1). Let \[ M(X,E)=R\Gamma(E\otimes W\Omega_{X/S}^{\bullet}). \] When $k$ is finite, Theorem \ref{b04} for $M(X,E)$ becomes Theorem (0.1)$^{\prime}$ of \cite{etesse1988b}. \bibliographystyle{cbe}
\section{Introduction} The topic of the in-medium properties of strange hadrons is a subject of intense research in the context of the heavy ion collision experiments as well as in the study of the bulk matter in nuclear astrophysical objects, e.g., (proto) neutron stars \cite{Tolos_Prog_Part_Nucl_Phys_112_103770_2020}. The topic is important due to its relevance to the experimental observables, e.g., the collective flow as well as the yield and spectra of these hadrons resulting from the high energy nuclear collision experiments \cite{C_Hartnack_Phys_Rep_510_2012_119}. The strange baryons could exist in the high density bulk matter in neutron stars and there could be a possibility of antikaon condensation \cite{kaplan} in the interior of the neutron stars. There have been extensive studies of the $K$ and $\bar K$ mesons in the literature. In the early work on the study of kaons and antikaons based on chiral perturbation theory \cite{kaplan}, the masses of these mesons were obtained by solving the dispersion relations derived using the leading order Weinberg-Tomozawa term ($\sim(\bar N \gamma^\mu N) (\bar K (\partial_\mu K) - (\partial_\mu \bar K) K)$) and the next to leading order $KN$-sigma term ($\sim\Sigma_{KN} (\bar N N)(\bar K K)$). The leading term leads to a rise (drop) in the mass of the kaon (antikaon). On the other hand, the second term is attractive for both kaons and antikaons and is largely responsible for the possibility of the $K^-$ condensation in the interior of a neutron star \cite{kaplan}. Using a meson exchange model \cite{Glendenning_Schaffner_PRC_60_025803_1999,Debades_PRC86_045803_2012}, the $K$ and $\bar K$ mesons in hadronic matter have been studied, where their interactions with the meson fields arise in the same manner as the baryon-baryon interactions (via exchange of $\sigma$, $\omega$, $\rho$ mesons) in the relativistic Quantum Hadrodynamics (QHD) model \cite{Serot_QHD}. The interaction due to scalar meson exchange is attractive for both kaons and antikaons, leading to a mass drop of these mesons in the medium, similar to the mass shift of the baryons in QHD, which arises due to the scalar potential experienced by the baryon in the hadronic medium. The open strange mesons have also been studied using the Quark meson coupling (QMC) model. In the QMC model, the light ($u$, $d$) quark (antiquark) constituents of the hadron interact via the exchange of the light mesons ($\sigma$, $\omega$ and $\rho$), and the mass shifts of the hadrons arise dominantly due to the scalar potentials experienced by these constituents. The hadrons with the same light quark and antiquark constituents, e.g., the $K$ and $K^*$ mesons, are thus observed to have very similar mass shifts \cite{Krein_Prog_Part_Nucl_Phys,qmc_phi}. The vector potential felt by a hadron, $h$, in the medium is given as $U_v^h=(n_q-n_{\bar q})V_\omega^q+I_3^h V_\rho^q$, where $n_q$ and $n_{\bar q}$ refer to the number of light quarks and antiquarks present inside the hadron, and $I_3^h$ is the third component of the isospin of the hadron. It might be noted here that the vector potential for the non-strange quark (antiquark), $\pm V_\omega^q$, in the $K(\bar K)$ meson in the Quark meson coupling (QMC) model has to be increased by a factor of $(1.4)^2$ to reproduce the repulsive $K^+$--nucleus interaction \cite{qmc_phi}. The coupled channel approach has been used extensively in the literature to study the spectral functions of the $K$ and $\bar K$ mesons.
Within this approach, the $\Lambda (1405)$ resonance is generated dynamically from the $\bar K N$ interaction, using the lowest order meson-baryon interaction in a chiral Lagrangian and a cut-off to regularize the loop integrals \cite{Oset_RamosNPA635_1998_99}. The spectral function of $\bar K$ in the medium is obtained from the $\bar K N$ scattering amplitude, $T_{\bar KN}$, by a self-consistent solution of the coupled channel Lippmann-Schwinger equations. The modifications of the masses in hot hadronic matter include the effects from the Pauli blocking, the mean field binding potentials for the baryons, as well as the self-energies of the $\bar K$ and $\pi$ \cite{Oset_Ramos_NPA_671_481_2000}. For the $K N$ interaction, due to the absence of any resonance near the threshold, the $T\rho$ approximation is a good approximation for the study of the self-energies of the kaons at low densities, and a self-consistent calculation of the scattering amplitude leads to only small modifications of the results obtained using this approximation. The medium modifications of the spectral functions of the $K$, $\bar K$, as well as of the vector strange mesons, $K^*$ and $\bar {K^*}$, calculated using the self-consistent coupled channel approach, have been used to study the dynamics of $K^*$ and $\bar {K^*}$ in heavy ion collision experiments, using the Parton-Hadron-String-Dynamics (PHSD) transport model, which incorporates partonic as well as hadronic degrees of freedom \cite{Elena_16,Elena_17}. In Ref. \cite{Elena_16}, the PHSD calculations for Au+Au collisions at $\sqrt {s_{NN}}$=200 GeV are observed to be in good agreement with the data from the STAR Collaboration at RHIC \cite{STAR_Kstr}. These PHSD calculations show that the production of $K^*$($\bar {K^*}$) by hadronization from the QGP phase is quite small and that these vector mesons are dominantly produced at the later hadronic stage from $K(\bar K)\pi$ scatterings. The medium effects on the spectral functions of these mesons are observed to be quite small due to the small densities and temperatures attained by the medium when the $K^*$ and ${\bar {K^*}}$ mesons are produced. The PHSD calculations for Pb+Pb collisions at $\sqrt {s_{NN}}$=2.76 TeV for the particle spectra and particle ratios \cite{Elena_17} have been compared to the experimental data obtained by the ALICE Collaboration at the LHC \cite{ALICE_Kstr}. These calculations show that at the LHC too, the main source of the $K^*$ and ${\bar {K^*}}$ is the later hadronic stage, from $K(\bar K)\pi$ scattering, and the production from the QGP phase by hadronization is very small. In \cite{Elena_17}, the PHSD calculations have also been performed for heavy ion collision experiments at low beam energies relevant for the FAIR and NICA conditions, where the medium effects may become visible due to the high baryon density matter which will be created at these facilities. The kaons and antikaons in hadronic matter have been studied using a chiral effective model based on a non-linear realization of chiral symmetry and broken scale invariance \cite{paper3}. The in-medium masses of $K$ and $\bar K$ have been studied retaining the leading order Weinberg-Tomozawa term, as well as the next to leading order terms (the scalar exchange and the range terms) in the chiral perturbative expansion, consistent with the kaon-nucleon scattering lengths \cite{kmeson1,isoamss,isoamss1,isoamss2}.
The model has been used recently to study the in-medium decay widths of the $\phi$, $K^*$ and $\bar {K^*}$ to their dominant decay modes of $K\bar K$, $K\pi$ and $\bar K \pi$ respectively, from the mass modifications of the open strange mesons, and the effects of the isospin asymmetry and the strangeness fraction of the strange hadronic medium on the spectral functions and the production cross-sections of the vector mesons $\phi$, $K^*$, ${\bar {K^*}}$ have also been studied \cite{strangedecaywidths}. Recently, there have been a lot of studies on the properties of the hadrons in the presence of strong magnetic fields. This is due to the estimates that the magnetic fields produced in peripheral ultra-relativistic heavy ion collision experiments are huge \cite{Tuchin_Review_Adv_HEP_2013}. Also, strong magnetic fields exist in astrophysical compact objects, e.g., magnetars, where the magnetic field is of the order of $10^{15}-10^{16}$ Gauss at the surface, and the magnitude of the magnetic field could be much larger in the interior of magnetars \cite{magnetars}. The created magnetic field has been estimated to be of the order of $eB\sim 5 m_\pi^2$ for Au-Au collisions at $\sqrt {s_{NN}}$=200 GeV at RHIC \cite{Kharzeev_NPA803_227_2008,Skokov_Illarionov_Tonnev_IJMPA24_5925_2009} and $eB \sim 15 m_\pi^2$ at LHC \cite{Skokov_Illarionov_Tonnev_IJMPA24_5925_2009}. It was suggested that in the presence of strong magnetic fields in heavy ion collision experiments, the chiral magnetic effect (CME) can lead to preferential emission of charged particles along the direction of the magnetic field \cite{Kharzeev_NPA803_227_2008,Kharzeev_PRD78_074033_2008}. An electric charge asymmetry of produced particles above and below the reaction plane has been observed by the STAR Collaboration at RHIC \cite{STAR_CME}, which could be due to the separation of charge along the direction of the magnetic field resulting from the chiral magnetic effect. A study of the creation and evolution of the magnetic field in heavy ion collision experiments, employing Hadron String Dynamics (HSD) for non-central Au-Au collisions at $\sqrt {s_{NN}}$=200 GeV \cite{Voronyuk}, however, does not show much effect of the electromagnetic field on the experimental observables. As has already been mentioned, the magnetic fields resulting from the non-central ultra-relativistic heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) at BNL and the Large Hadron Collider (LHC) at CERN are estimated to be huge. The magnetic fields created in the heavy ion collisions drop rapidly after the collision. In the presence of a medium, the drop in the magnetic field leads to induced currents ($\sim \sigma {\bf E}$, with $\sigma$ as the electrical conductivity of the medium) being created, which tend to slow down the decay of the magnetic field. However, the time evolution of the magnetic field \cite{Tuchin_Review_Adv_HEP_2013} is still an open problem and requires the solution of the magnetohydrodynamic equations \cite{Ajit_MHD}, with a proper estimate of the electrical conductivity of the medium. The effects of a magnetic field on the light mesons, e.g., the $\pi$, $\rho$, $K$ and $\phi$ mesons, have been studied in the literature \cite{Aguirre_light_mesons,Aguirre_neutral_K_phi,Pradip_rho_BT,Chernodub}.
In the presence of an ultra-strong magnetic field, $B>B_{crit}$, with $B_{crit}={m_\rho}^2/e$, where $m_\rho$ is the mass of the $\rho$ meson, the interesting possibility of vacuum superconductivity with condensation of the charged $\rho$ mesons has been suggested \cite{Chernodub}. Also, the effects of the magnetic field on the strange mesons, the $\phi$ and the neutral kaons, have been studied in Ref. \cite{Aguirre_neutral_K_phi} for neutron star matter for a range of magnetic fields ($10^{15}-10^{19}$ Gauss). The study shows an oscillatory behaviour around the vacuum value for the decay width of $\phi \rightarrow K^+ K^-$ for small values of the magnetic field, and a critical field ($B_{crit}\sim 2 \times 10^{18}$ Gauss) above which the decay width vanishes. This is due to the positive Landau level contributions to the masses of the charged kaons and antikaons in the presence of a magnetic field. In the case of a large magnetic field in the interior of the neutron star, the increase in the mass of $K^-$ due to Landau contributions could have implications for the possibility of $K^-$ condensation inside the neutron star. There have been a lot of studies of the heavy flavour mesons in the recent past \cite{Hosaka_Prog_Part_Nucl_Phys}, and, in the presence of a magnetic field, the mixing of the pseudoscalar and the vector mesons \cite{charmonium_mag_QSR,charmonium_mag_lee,Gubler_D_mag_QSR,Alford_Strickland_2013,Suzuki_Lee_2017} has been shown to lead to dominant modifications to the masses of these mesons. The study of the heavy flavour charmonium states as well as of the $D$ mesons in the presence of a magnetic field within a QCD sum rule approach \cite{charmonium_mag_QSR,charmonium_mag_lee,Gubler_D_mag_QSR} as well as within an effective potential approach \cite{Alford_Strickland_2013} shows the effects of the mixing on the masses to be quite pronounced. A study of the mixing effects on the formation times of the vector and pseudoscalar charmonium states ($J/\psi-\eta_c$ and $\psi'-\eta'_c$ mixing) shows that the mixing leads to faster (slower) formation times for the pseudoscalar (vector) states \cite{Suzuki_Lee_2017}. Due to the mixing with the vector charmonium states, the pseudoscalar mesons $\eta_c$ and $\eta'_c$ might show up as peaks in the dilepton spectra, arising from the anomalous decay modes $\eta_c,\eta'_c \rightarrow l^+l^-$, and can act as a probe of a strong magnetic field at the early stage \cite{Suzuki_Lee_2017}. The changes in the masses of the charmonium states \cite{charmonium_PV_amspm} and the open charm mesons \cite{dmeson_PV_amspm} in the presence of strong magnetic fields due to the pseudoscalar-vector mixing effects, in addition to the Landau level contributions to the charged mesons, are observed to be quite appreciable. The effects of the magnetic field on the decay widths $\psi(3770)\rightarrow D\bar D$ \cite{charmonium_PV_amspm} as well as $D^* \rightarrow D\pi$ \cite{dmeson_PV_amspm} are calculated from their mass modifications in the presence of a magnetic field, using a field theoretical model of composite hadrons with constituent quarks (antiquarks) \cite{spm781,spm782}. The model of composite hadrons uses the light quark-antiquark pair creation term of the free Dirac Hamiltonian in terms of the constituent quark fields and explicit constructions of the initial and final states to study the decay processes.
In the absence of a magnetic field, the model has been used to calculate the decay widths of the charmonium states to $D\bar D$ and $D^*\rightarrow D\pi$ \cite{amspmwg}, as well as to study the decay widths of bottomonium states to $B\bar B$ \cite{amspm_upsilon} in asymmetric strange hadronic matter. The charmonium decay widths to $D\bar D$ were also calculated using a model with light quark-antiquark pair creation in the $^3P_0$ state \cite{friman,amarvepja}, namely, the $^3P_0$ model \cite{3p0,3p0_1}. The modifications of the decay widths were computed from the mass modifications of the charmonium states as well as of the $D$ and $\bar D$ mesons, calculated using a chiral effective model \cite{amarvepja}. When the internal structure of the mesons in the initial and final states is taken into account, within the composite model of hadrons \cite{amspmwg}, as well as using the $^3P_0$ model \cite{friman}, the heavy quarkonium decay widths were observed to vanish at specific densities (so-called nodes) \cite{friman,amarvepja}. In both models, the decay proceeds through the creation of a light quark-antiquark pair, and the constituent quark (antiquark) of the decaying meson combines with the created light antiquark (quark) to form the final state mesons. The study of the heavy quarkonium decay widths in asymmetric hyperonic matter using the field theoretical model of composite hadrons \cite{amspmwg,amspm_upsilon} shows that the effects of density are the dominant medium effects as compared to the effects from isospin asymmetry as well as strangeness. The mass modifications of the hidden and open charm mesons in the presence of strong magnetic fields, arising from the pseudoscalar-vector meson mixing, in addition to the Landau contributions to the masses of the charged mesons, and their effects on the decay widths of the charmonium states to $D\bar D$ and of $D^*\rightarrow D\pi$ \cite{charmonium_PV_amspm,dmeson_PV_amspm}, have been studied using the model of composite hadrons \cite{spm781,spm782,spmdiffscat}. In the present work, we study the mass modifications of the strange $K$, $K^*$, $\phi$ mesons in the presence of a magnetic field due to the mixing of the pseudoscalar ($P$) and vector ($V$) mesons ($K-K^*$ and $\phi-\eta'$ mixing), including the Landau level contributions for the charged mesons, and their subsequent effects on the decay widths $K^* \rightarrow K \pi$ and $\phi \rightarrow K \bar K$. The outline of the paper is as follows. In section II, the mass modifications of the strange $K$, $K^*$ and $\phi$ mesons and their effects on the decay widths $K^* \rightarrow K\pi$ and $\phi \rightarrow K \bar K$ are investigated in the presence of a uniform magnetic field. The mass modifications arise from the pseudoscalar-vector meson mixing ($K-K^*$ and $\phi-\eta '$) in the presence of a magnetic field. For the charged $K$ and $K^*$ mesons, the Landau level contributions to these masses are also taken into account. The effects of these mass modifications on the decay widths of $K^*\rightarrow K\pi$ as well as of the $\phi$ meson to $K \bar K$ ($K^+K^-$ and $K^0 \bar {K^0}$) are then investigated. These decay widths are studied using a field theoretical model of composite hadrons with quark and antiquark constituents. We discuss the results for the masses and decay widths of the strange mesons in the presence of strong magnetic fields in section III. Section IV summarizes the results of the present study.
\section{Strange mesons in presence of a magnetic field} \subsection{MASSES:} In this section, the mass modifications of the $K$ and $K^*$ mesons, as well as of the $\phi$ and $\eta '$, arising from the pseudoscalar-vector meson mixing in the presence of a magnetic field, are studied. The effect of the mixing is considered through a phenomenological Lagrangian interaction \cite{charmonium_mag_lee,Gubler_D_mag_QSR} \begin{equation} {\cal L}_{PV\gamma}=\frac{g_{PV}}{m_{\rm {av}}} e {\tilde F}_{\mu \nu} (\partial ^\mu P) V^\nu, \label{PVgamma} \end{equation} where $m_{\rm {av}}$ is the average of the masses of the pseudoscalar and vector mesons. The coupling strengths of the radiative decays of the vector mesons, $V$, to the pseudoscalar mesons, $P$, as described by the interaction given by (\ref{PVgamma}), are determined from the observed decay widths of $V\rightarrow P\gamma$ in vacuum. For the charged $K$ and $K^*$ mesons, the contributions of the mixing to the masses of these mesons are in addition to the lowest Landau level contributions to their masses. The masses of the charged pseudoscalar and vector mesons in their ground states in a background magnetic field are given as \cite{Chernodub} $m_{P} (B)=(m_{P}^2+ eB)^{1/2}$ and $m_{V} (B)=(m_{V}^2 - eB)^{1/2}$, with $m_{P,V}$ as the masses of these mesons at zero magnetic field. For the vector meson, the gyromagnetic ratio is taken to be 2 \cite{Gubler_D_mag_QSR,Chernodub}. For the neutral mesons, $m_{V,P}(B)=m_{V,P}$, as there are no Landau level contributions. In the presence of a magnetic field, there is mixing of the pseudoscalar and the longitudinal component of the vector mesons, which modifies their masses to \begin{equation} m^{(PV)}_{V^{||},P}=\Bigg [\frac{1}{2} \Bigg ( M_+^2 +\frac{c_{PV}^2}{m_{\rm {av}}^2} \pm \sqrt {M_-^4+\frac{2c_{PV}^2 M_+^2}{m_{\rm {av}}^2} +\frac{c_{PV}^4}{m_{\rm {av}}^4}} \Bigg) \Bigg]^{1/2}, \label{mpv_long} \end{equation} where $M_{\pm}^2=m^2_V(B) \pm m^2_P (B)$, $c_{PV}= g_{PV} eB$ and $m_{\rm {av}}=(m_V (B) +m_P(B))/2$. \subsection{DECAY WIDTHS:} The decay widths of $K^* \rightarrow K \pi$ and $\phi \rightarrow K\bar K$ are modified in the presence of a magnetic field due to the changes in the masses of these mesons arising from the pseudoscalar-vector meson mixing effects, in addition to the Landau level contributions for the charged mesons. These decay widths are studied using a field theoretical model of composite hadrons with quark/antiquark constituents \cite{spm781,spm782,spmdiffscat}. The matrix element of the light quark ($q=u,d$) antiquark pair creation term of the free Dirac Hamiltonian is evaluated between the initial and final meson states to calculate the decay widths. The initial and final meson states are constructed explicitly in terms of the constituent quark fields, assuming a harmonic oscillator potential between the quark and antiquark constituents. Using Lorentz boosting, the constituent quark field operators of the hadron in motion are obtained from the constituent quark field operators of the hadron at rest. We investigate the decay processes ${K^*}^+ \rightarrow K\pi (K^+\pi^0,K^0 \pi^+)$, ${K^*}^0 \rightarrow K\pi (K^0\pi^0, K^+\pi^-)$ and $\phi \rightarrow K\bar K (K^+K^-,K^0 \bar {K^0})$, within the composite model of hadrons in the present work. For the vector mesons $K^*$ and $\phi$ decaying at rest, the outgoing pseudoscalar mesons ($K$ and $\pi$, or $K$ and $\bar K$) have finite momenta.
The explicit construction of the final state pseudoscalar meson, $P$, with momentum ${\bf p}$, assuming a harmonic oscillator wave function, is given as \begin{equation} |P ({\bf p})\rangle = \frac{1}{\sqrt{6}} \Bigg (\frac {R_P^2}{\pi} \Bigg)^{\frac{3}{4}} \int d{\bf k} \exp \Big(-\frac {R_P^2 {\bf k}^2}{2}\Big) {Q_1}_r^i({\bf k}+\lambda_2 {\bf p}) ^\dagger u_r^\dagger {\tilde {Q_2}}_s^i (-{\bf k} +\lambda_1 {\bf p})v_s |vac\rangle, \label{pseudoscalar_meson} \end{equation} where ${{Q_1}_r^i}({\bfs k})^\dagger$ (${{\tilde {Q_2}}_r^i}({\bfs k})$) is the quark (antiquark) creation operator of flavour $Q_1$ ($Q_2$) with spin $r$, color $i$ and momentum ${\bf k}$, and $u_r$ and $v_s$ are the two-component spinors. For the meson $P(\equiv K^+,K^0, K^-,\bar {K^0}, \pi^+,\pi^-,\pi^0)$, the quark-antiquark constituents are given as $(Q_1,{\bar {Q_2}})\equiv (u,\bar s), (d,\bar s), (s,\bar u), (s,\bar d), (u,\bar d), (d,\bar u), \frac {1}{\sqrt 2}\Big[(u,\bar u) -(d, \bar d)\Big]$. The decaying vector meson, $V (\equiv {K^*}^+, {K^*}^0, \phi)$, at rest, with polarisation $m$, is constructed as \cite{amspmwg} \begin{eqnarray} |{V}^m ({\bf 0})\rangle & =& \frac{1}{\sqrt{6}} \Bigg (\frac {R_{V}^2}{\pi} \Bigg)^{\frac{3}{4}} \int d{\bf k} \exp\Big(-\frac {R_{V}^2 {\bf k}^2}{2}\Big) {{Q_1}_r}^{i}({\bf k}) ^\dagger u_r^\dagger \sigma^m \tilde {{Q_2}_s}^{i} (-{\bf k})v_s |vac\rangle, \label{kstr_phi} \end{eqnarray} with the quark-antiquark constituents for the vector meson $V(\equiv {K^*}^+, {K^*}^0, \phi)$ given by $(Q_1,{\bar {Q_2}}) \equiv (u,\bar s), (d,\bar s), (s,\bar s)$. The parameters $R_{P}$ and $R_V$ in equations (\ref{pseudoscalar_meson}) and (\ref{kstr_phi}) correspond to the harmonic oscillator strengths for the pseudoscalar and vector mesons. In the pseudoscalar state $|P({\bf p})\rangle$ given by equation (\ref{pseudoscalar_meson}), $\lambda_1$ and $\lambda_2$ are the fractions of the mass (energy) of the pseudoscalar meson at rest (in motion) carried by the constituent antiquark, ${\tilde {Q_2}}$, and the constituent quark, $Q_1$, respectively. The picture of the constituent quark (antiquark) of the hadron occupying a specific energy level is similar to that in the MIT bag model \cite{MIT_bag}. The fractions of the energy carried by the constituent quark (antiquark) are calculated by assuming that the share of the binding energy of the meson carried by the quark (antiquark) is inversely proportional to the quark (antiquark) mass \cite{spm782}. For the pion states, $\pi^+$ and $\pi^0$, which are light ($q=u,d$) quark-antiquark bound states, the fractions of energy carried by the quark and antiquark are the same, i.e., $\lambda_1=\lambda_2=1/2$, as the masses of the $u$ and $d$ quarks are assumed to be the same. The decay widths for the processes $K^* \rightarrow K\pi$ and $\phi \rightarrow K \bar K$ are obtained from the matrix element of the light quark-antiquark pair creation term of the free Dirac Hamiltonian density between the initial and the final states \cite{amspmwg}. The matrix element is multiplied by a strength parameter, $\gamma_{V(=K^*,\phi)}$, which is fitted to the observed decay width of the vector meson in vacuum.
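Before specializing to the individual channels, we note that the mass formula (\ref{mpv_long}), together with the lowest Landau level shifts of the charged mesons, is straightforward to evaluate numerically. A minimal Python sketch is given below purely for illustration; the coupling $g_{PV}$, the value of $eB$ and the vacuum masses used in it are approximate placeholder inputs (in the present work, $g_{PV}$ is instead fixed from the observed radiative decay width of $V\rightarrow P\gamma$).
\begin{verbatim}
import math

# Minimal numerical sketch of eq. (mpv_long): masses of the pseudoscalar
# meson and of the longitudinal vector meson after P-V mixing in a
# magnetic field, including the lowest-Landau-level shifts for charged
# mesons (valid for eB < m_V^2).  Units: GeV for masses, GeV^2 for eB.
# The coupling g_PV below is a placeholder, not the fitted value.

def landau_shifted(m_P, m_V, eB, charged):
    """m_P(B) and m_V(B) in the lowest Landau level (g-factor 2 for V)."""
    if not charged:
        return m_P, m_V
    return math.sqrt(m_P**2 + eB), math.sqrt(m_V**2 - eB)

def mixed_masses(m_P, m_V, eB, g_PV, charged=False):
    """Masses of P and of the longitudinal V from eq. (mpv_long)."""
    mP_B, mV_B = landau_shifted(m_P, m_V, eB, charged)
    m_av = 0.5 * (mP_B + mV_B)
    c = g_PV * eB
    M_plus2 = mV_B**2 + mP_B**2
    M_minus2 = mV_B**2 - mP_B**2
    root = math.sqrt(M_minus2**2 + 2.0*c**2*M_plus2/m_av**2 + c**4/m_av**4)
    m_P_mix = math.sqrt(0.5*(M_plus2 + c**2/m_av**2 - root))
    m_V_long = math.sqrt(0.5*(M_plus2 + c**2/m_av**2 + root))
    return m_P_mix, m_V_long

# Example: neutral K0 - K*0 mixing (no Landau shift) at eB = 5 m_pi^2.
eB = 5.0 * 0.14**2            # ~0.098 GeV^2
g_PV = 2.0                    # placeholder coupling
mK0, mK0star = 0.498, 0.896   # approximate vacuum masses (GeV)
print(mixed_masses(mK0, mK0star, eB, g_PV, charged=False))
\end{verbatim}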
\subsubsection{\bf{DECAY WIDTH OF $K^*\rightarrow K \pi$}} \noindent In the absence of the pseudoscalar-vector meson mixing, the decay width for $K^* \rightarrow K\pi$ is obtained from the matrix element of the free Dirac Hamiltonian between the initial and final states as \cite{amspmwg} \begin{eqnarray} \Gamma\left(K^{*} ({\bf 0})\rightarrow K ({\bf p})+\pi ({-\bf p})\right) = \gamma_{K^*}^2 g^2\frac{8\pi^2p_K^0p_\pi^0}{3m_{K^*}}A_{K^*} (|\bfs p|)^2|\bfs p|^3, \label{gammakstr} \end{eqnarray} where $p_K^0=(|{\bf p}|^2+m_K^2)^{1/2}$ and $p_\pi^0=(|{\bf p}|^2+m_\pi^2)^{1/2}$ are the energies of the outgoing $K$ meson and pion respectively, in terms of the magnitude of the 3-momentum of the $K(\pi)$ meson, $|{\bf p}|$, given as \begin{equation} |\bfs p|=\Bigg (\frac{m_{K^{*}}^2}{4}-\frac{m_{K}^2+m_{\pi}^2}{2} +\frac{\left(m_{K}^2-m_{\pi}^2\right)^2}{4m_{K^{*}}^2} \Bigg )^{1/2}. \label{mokp} \end{equation} In equation (\ref{gammakstr}), $A_{K^*}(|{\bf p}|)$ is given as \begin{eqnarray} A_{K^*}(|{\bf p}|)=6c_{K^*} \Big(\frac{\pi}{a_{K^*}}\Big)^{{3}/{2}} \exp\left[\Big (a_{K^*}b_{K^*}^2 -\frac{1}{2}\left(\lambda_2^2 R_K^2 +\frac{1}{4}R_\pi^2\right)\Big){|\bf p|}^2\right] \times \Big[{F_0}_{K^*}+\Big (\frac {3{F_1}_{K^*}}{2a_{K^*}}\Big )\Big], \label{apkstr} \end{eqnarray} where, \begin{eqnarray} {F_0}_{K^*}&=&(b_{K^*}-1)\left(1-\frac{1}{8M_q^2}|{\bfs p}|^2 (\lambda_2-\frac{1}{2})^2\right) \nonumber \\ &-&(b_{K^*}-\lambda_2)\left(\frac{1}{2}+\frac{1}{4M_q^2} {|\bfs p|}^2 \left(\frac{3}{4}b_{K^*}^2-\frac{5}{4}b_{K^*} +\frac{7}{16}\right)\right) \nonumber\\ &-&(b_{K^*}-\frac{1}{2})\left[\frac{1}{2}+\frac{1}{4M_q^2}{|\bfs p|}^2 \left(\frac{3}{4}b_{K^*}^2-(1+\frac{1}{2}\lambda_2)b_{K^*} +\lambda_2-\frac{1}{4}\lambda_2^2\right)\right] \label{c0kstr} \\ {F_1}_{K^*}&=&-\frac{1}{4M_q^2}\left[\frac{5}{2}b_{K^*}-\frac{9}{8} -\frac{11}{12}\lambda_2\right]. \label{c1kstr} \end{eqnarray} The parameters $a_{K^*}$, $b_{K^*}$, $c_{K^*}$ are given as \begin{eqnarray} a_{K^*}=\frac{\left(R_{K^*}^2+R_K^2+R_\pi^2\right)}{2},\;\; b_{K^*}=\frac{1}{2a_{K^*}}\left(R_K^2\lambda_2 +\frac{1}{2}R_\pi^2\right),\;\; c_{K^*}=\frac{1}{12\sqrt 3} \left(\frac{R_{K^*}^2 R_K^2 R_\pi^2}{\pi^3}\right)^{{3}/{4}}. \label{abckstr} \end{eqnarray} In equations (\ref{c0kstr})--(\ref{c1kstr}), $M_q$ is the constituent light quark ($q=u,d$) mass. As has already been mentioned, the expression for the decay width for $K^* \rightarrow K\pi$ as given by equation (\ref{gammakstr}) (modulo $\gamma_{K^*}^2$) is obtained from the matrix element of the free Dirac Hamiltonian between the initial and final states. For $\pi^0$ ($\pi^\pm$) in the final state, the value of $g^2$ is 1 (2). The decay amplitude is multiplied by $\gamma_{K^*}$, which is the production strength of $K\pi$ from the decay of the $K^*$ meson, and is fitted to its vacuum decay width. The decay width depends on the masses of the decaying and outgoing mesons through $|{\bf p}|$, and, as can be seen from equation (\ref{gammakstr}), this dependence is mainly through a polynomial part multiplied by an exponential part. The expression for the decay width of $K^*\rightarrow K\pi$ given by equation (\ref{gammakstr}) does not account for the mixing of the $K$ and $K^*$ mesons in the presence of the magnetic field. The mixing of the pseudoscalar mesons and the vector mesons leads to a drop (increase) in the mass of the $K$ meson (longitudinal component of the $K^*$ meson), given by equation (\ref{mpv_long}).
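Since the meson masses enter the width (\ref{gammakstr}) essentially through the momentum $|{\bf p}|$ of equation (\ref{mokp}), the kinematics can be summarized in a short schematic sketch (Python; the function name is purely illustrative, the vacuum masses are in MeV, and $m_{\pi^+}\simeq 139.57$ MeV is assumed). The sketch also makes explicit that the decay becomes kinematically forbidden when the parent mass drops below the sum of the daughter masses, which is relevant for the behaviour in a magnetic field discussed in the results section.

\begin{verbatim}
import numpy as np

def momentum_cm(m_parent, m1, m2):
    # Center-of-mass momentum |p| of a two-body decay, equation (mokp);
    # returns 0 if the decay is kinematically forbidden (m_parent < m1 + m2).
    p_sq = (m_parent**2 / 4.0
            - (m1**2 + m2**2) / 2.0
            + (m1**2 - m2**2)**2 / (4.0 * m_parent**2))
    return np.sqrt(p_sq) if p_sq > 0.0 else 0.0

# Vacuum example: K*+ -> K0 pi+ (masses in MeV)
p = momentum_cm(891.66, 497.61, 139.57)
e_K = np.sqrt(p**2 + 497.61**2)    # p^0_K
e_pi = np.sqrt(p**2 + 139.57**2)   # p^0_pi
print(round(p, 1), round(e_K, 1), round(e_pi, 1))   # |p| is about 286 MeV
# The width (gammakstr) scales as |p|^3 p^0_K p^0_pi / m_K* times A_K*(|p|)^2,
# so mass shifts in a magnetic field feed into the width mainly through |p|.
\end{verbatim}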
These mixing-induced mass modifications lead to the expression for the decay width of $K^*\rightarrow K\pi$ being modified to \cite{dmeson_PV_amspm} \begin{eqnarray} &&\Gamma^{PV}(K^* ({\bf 0}) \rightarrow K({\bf p}) {\pi} (-{\bf p})) =\gamma_{K^*}^2 g^2\frac{8\pi^2}{3} \Bigg [ \Bigg(\frac{2}{3} |{\bf p}|^3 \frac {p^0_K (|{\bf p}|) p^0_{\pi}(|{\bf p}|)}{m_{K^*}} A^{K^*}(|{\bf p}|)^2 \Bigg) \nonumber \\ &+&\Bigg(\frac{1}{3} |{\bf p}|^3 \frac {p^0_K(|{\bf p}|) p^0_{\pi}(|{\bf p}|)}{m_{K^*}^{PV}} A^{K^*}(|{\bf p}|)^2 \Bigg) \Big({|{\bf p}|\rightarrow |{\bf p|} (m_{K^*} = m_{K^*}^{PV}, m_{K} = m_{K}^{PV} )}\Big) \Bigg]. \label{gammakstr_mix} \end{eqnarray} The first term, corresponding to the transverse polarizations of the vector $K^*$ meson, is unaffected by the mixing of the $K$ and $K^*$ states, whereas the second term has the masses of the $K$ and the longitudinal $K^*$ mesons modified due to the $K-K^*$ mixing. \subsubsection{\bf {DECAY WIDTH OF $\phi\rightarrow K \bar K$}} The decay width of $\phi \rightarrow K \bar K$ is calculated to be \begin{eqnarray} \Gamma(\phi ({\bf 0})\rightarrow K ({\bf P})\bar K(-{\bf P})) = \gamma_\phi^2\frac{8\pi^2}{3}|{\bf P}|^3 \frac {{P^0}_{K} {P^0}_{\bar K}}{m_{\phi}} A_{\phi}(|{\bf P}|)^2 \label{gammaphikkbar} \end{eqnarray} in the absence of the pseudoscalar-vector meson mixing effects. In equation (\ref{gammaphikkbar}) for the decay width of $\phi$ to $K\bar K$, $P^0_{K(\bar K)}=\big(m_{K (\bar K)}^2 +|{\bf P}|^2\big)^{\frac{1}{2}}$, and $|\bfs P|$ is the magnitude of the momentum of the outgoing $K(\bar K)$ meson, given as \begin{equation} |{\bf P}|=\Bigg (\frac{{m_\phi}^2}{4}-\frac {{m_K}^2+{m_{\bar K}}^2}{2} +\frac {({m_K}^2-{m_{\bar K}}^2)^2}{4 {m_\phi}^2}\Bigg)^{1/2}. \label{pkkbar} \end{equation} In the above, $A_{\phi}(|\bfs P|)$ is given as \begin{equation} A_{\phi}(|{\bf P}|)=6c_\phi\exp[(a_\phi {b_\phi}^2 -R_K^2\lambda_2^2)|{\bf P}|^2] \cdot\Big(\frac{\pi}{a_\phi}\Big)^\frac{3}{2} \Big[F^{\phi}_{0}+F^{\phi}_{1}\frac{3}{2a_\phi} \Big], \label{ap} \end{equation} with \begin{eqnarray} &&F^{\phi}_0=(\lambda_2-1)-\frac{1}{2M_q^2} |\bfs P|^2(b_{\phi}-\lambda_2) \left(\frac{3}{4}b_{\phi}^2-(1+\frac{1}{2}\lambda_2)b_{\phi} +\lambda_2-\frac{1}{4}\lambda_2^2\right), \nonumber\\ &&F^{\phi}_1=\frac{1}{4 M_q^2} \left[-\frac{5}{2}b_{\phi}+\frac{2}{3} +\frac{11}{6}\lambda_2\right], \label{c012phi} \end{eqnarray} and the parameters $a_\phi$, $b_\phi$ and $c_\phi$ are given as \begin{equation} a_\phi=\frac{1}{2}R_{\phi}^2+R_K^2, \;\; b_\phi=R_K^2\lambda_2/a_\phi,\;\; c_{\phi}=\frac{1}{6\sqrt{6}}\cdot \left(\frac{R_\phi^2}{\pi}\right)^{{3}/{4}} \cdot\left(\frac{R_K^2}{\pi}\right) ^{{3}/{2}}. \label{abcphi} \end{equation} When the pseudoscalar meson--vector meson mixing is included, the masses of the $K$ and $\bar K$ mesons and the longitudinal component of the $\phi$ meson are modified due to $K-K^*$, $\bar K-\bar {K^*}$ and $\phi-\eta'$ mixings.
The modified expression for the decay width $\phi\rightarrow K\bar K$ is given as \cite{charmonium_PV_amspm} \begin{eqnarray} && \Gamma ^{PV}(\phi ({\bf 0})\rightarrow K ({\bf P})\bar K(-{\bf P})) = \gamma_\phi^2\frac{8\pi^2}{3} \Bigg [ \Bigg ( \frac{2}{3} |{\bf P}|^3 \frac {{P^0}_{K} {P^0}_{\bar K}}{m_{\phi}} A_{\phi}(|{\bf P}|)^2 \Bigg)\nonumber \\ &+ & \Bigg ( \frac{1}{3} |{\bf P}|^3 \frac {{P^0}_{K} {P^0}_{\bar K}}{m_{\phi}} A_{\phi}(|{\bf P}|)^2 \Bigg) \Big(|{\bf P}|\rightarrow |{\bf P}| (m_{\phi}=m_{\phi}^{PV}, m_{K(\bar K)}=m_{K(\bar K)}^{PV})\Big) \Bigg]. \label{gammaphikkbar_mix} \end{eqnarray} \section{Results and Discussions} \begin{figure} \includegraphics[width=16.4cm,height=16.4cm]{fig1.pdf} \caption{(Color online) The masses of the $K$ and the longitudinal components of the $K^*$ mesons are plotted as functions of $eB/{m_\pi^2}$. The effects of the pseudoscalar--vector (PV) meson mixing on these masses are shown for the charged and neutral mesons in panels (a) and (b) respectively. The Landau contributions to the masses of the charged $K^+$ and ${K^*}^+$ mesons are shown as the dot-dashed lines. } \label{mKKstr} \end{figure} \begin{figure} \includegraphics[width=16.4cm,height=16.4cm]{fig2.pdf} \caption{(Color online) The decay widths for $K^*\rightarrow K\pi$ for the charged ${K^*}^+$ and neutral ${K^*}^0$ are plotted as functions of $eB/{m_\pi^2}$ in panels (a) and (b) respectively. These are shown including both the pseudoscalar--vector meson (PV) mixing as well as Landau level contributions for the charged mesons. These are also shown when only the mixing effects (indicated as PV) are taken into account. } \label{dwFT_Kstr} \end{figure} \begin{figure} \includegraphics[width=16.4cm,height=16.4cm]{fig3_rev.pdf} \caption{(Color online) The mass modifications of the $\phi$ and $\eta '$ mesons arising from the pseudoscalar--vector meson mixing are shown as functions of $eB/{m_\pi^2}$ in panel (a). Panel (b) shows the effects of the magnetic field on the partial decay widths of $\phi$ to (I) $K^+ K^-$ and (II) $K^0 {\bar K}^0$ respectively. The Landau contributions to the masses of the charged mesons $K ^ \pm $ are taken into account and these are compared to the case when these effects are not considered. } \label{mass_dwFT_phi} \end{figure} In the presence of a uniform magnetic field, the modifications of the masses of the $K$, $K^*$, $\phi$ and $\eta '$ due to the mixing of the pseudoscalar and vector mesons are investigated. These are in addition to the Landau level contributions for the charged mesons. The mixing is taken into account through a phenomenological Lagrangian interaction given by equation (\ref{PVgamma}). The coupling strength parameter $g_{PV}$ for the radiative decay of the vector meson, $V$, to the pseudoscalar meson, $P$, described by the interaction Lagrangian (\ref{PVgamma}), is determined from the observed decay width of $V\rightarrow P\gamma$ in vacuum. For the processes ${K^*}^+ \rightarrow K^+ \gamma$, ${K^*}^0 \rightarrow K^0 \gamma$ and ${\phi} \rightarrow \eta ' \gamma$, the observed decay widths of 50.292 keV, 46.827 keV and 0.26429 keV \cite{pdg_2019_update} determine the coupling parameters, $g_{{K^+}{K^*}^+}$, $g_{{K^0}{K^*}^0}$, and $g_{{\eta '}{\phi}}$, to be 0.5793, 0.5611 and 0.7043 respectively. The vacuum values for the masses (in MeV) of these mesons are taken to be $m_{{K^*}^+}=891.66,\,\,m_{{K^*}^0}=895.55,\,\, m_{{K}^+}=493.677,\,\,m_{{K}^0}=497.61,\,\,m_{\phi}=1019.461,\,\, m_{\eta '}=957.78$ \cite{pdg_2019_update}.
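For orientation, the mass formula (\ref{mpv_long}) can be evaluated with these couplings and vacuum masses in a short schematic sketch (Python; the function name is illustrative, and $eB$ is expressed in units of $m_\pi^2$ assuming $m_\pi \simeq 139.57$ MeV). With the mixing alone it reproduces the values of about 897 MeV and 491 MeV quoted below for the ${K^*}^+$ and $K^+$ masses at $eB=5\,m_\pi^2$; for the charged mesons the lowest Landau level shifts can be switched on in addition.

\begin{verbatim}
import numpy as np

M_PI = 139.57  # MeV, assumed value used to express eB in units of m_pi^2

def mixed_masses(m_V, m_P, g_PV, eB, landau=False):
    # Masses of the longitudinal vector meson and the pseudoscalar meson
    # from equation (mpv_long). For charged mesons the lowest Landau level
    # shifts m_P^2 + eB and m_V^2 - eB are applied first when landau=True.
    if landau:
        m_P = np.sqrt(m_P**2 + eB)
        m_V = np.sqrt(m_V**2 - eB)
    m_av = 0.5 * (m_V + m_P)
    c = g_PV * eB
    M_plus2, M_minus2 = m_V**2 + m_P**2, m_V**2 - m_P**2
    root = np.sqrt(M_minus2**2 + 2 * c**2 * M_plus2 / m_av**2 + c**4 / m_av**4)
    m_V_long = np.sqrt(0.5 * (M_plus2 + c**2 / m_av**2 + root))
    m_P_mix = np.sqrt(0.5 * (M_plus2 + c**2 / m_av**2 - root))
    return m_V_long, m_P_mix

# K*+ / K+ with g = 0.5793, PV mixing only, at eB = 5 m_pi^2: ~897 and ~491 MeV
print(mixed_masses(891.66, 493.677, 0.5793, 5 * M_PI**2))
\end{verbatim}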
The presence of a magnetic field leads to the mixing of the pseudoscalar meson and the longitudinal component of the vector meson, with their modified masses given by equation (\ref{mpv_long}). As has already been mentioned, for the charged mesons, these modifications in the masses are in addition to the contribution arising from the lowest Landau levels, due to the direct interactions of the charged mesons with the external magnetic field. The masses of the charged and neutral open strange mesons are plotted in figure \ref{mKKstr}. The mixing of the pseudoscalar and vector mesons (indicated as PV) is observed to lead to a monotonic drop (rise) in the mass of the $K^+(K^0)$ (longitudinal component of the vector ${K^*}^+({K^*}^0$)) meson. The modifications of the masses of the charged as well as neutral $K$ and $K^*$ mesons are observed to be quite small due to the (PV) mixing effects, with the modified values of the ${K^*}^+$, ${K}^+$, ${K^*}^0$ and ${K}^0$ masses being 897 (912.46), 490.75 (482.4), 900.49 (914.87) and 494.88 (487.1) MeV respectively, at $eB=5\, (10)\, m_\pi^2$. For the maximum value of the magnetic field considered in the present work, $eB=10 m_\pi^2$, the mass shifts (in MeV) for the ${K^*}^+$, ${K}^+$, ${K^*}^0$ and ${K}^0$ mesons are thus observed to be around 21, 11, 19 and 10.5 respectively. The Landau level contributions are observed to lead to a monotonic drop (rise) in the mass of the ${K^*}^+$ ($K^+$) with increase in magnetic field. From panel (a) of figure \ref{mKKstr}, it is observed that there is a drop (increase) in the mass of the ${K^*}^+$(${K}^+$) meson with increase in the magnetic field, up to a value of $eB$ of around $5m_\pi^2$, where the Landau contributions dominate over the mixing effects. As the mass difference of these mesons becomes smaller for larger values of the magnetic field, the mixing effect is observed to become more appreciable and starts dominating over the contributions from the Landau levels. This is observed to lead to quite a dominant rise (drop) of the mass of the ${K^*}^+$ ($K^+$) meson for $eB$ greater than about $6 m_\pi^2$. The mass modifications of the charged $K$ and $K^*$ mesons are thus observed to be much more pronounced as compared to the neutral mesons, due to the additional effects from the Landau level contributions. These mass modifications of the charged open strange mesons are observed to modify appreciably the decay widths of ${K^*}^+ \rightarrow K\pi$, as can be seen from figure \ref{dwFT_Kstr}. The mass modifications of the neutral mesons, ${K^*}^0$ and $K^0$, due to the magnetic field, which arise only due to the pseudoscalar-vector meson mixing, are shown in panel (b) of figure \ref{mKKstr}. These mass changes are observed to be moderate due to the small mixing coupling parameter, $g_{K^0{K^*}^0}$=0.5611. It might be worthwhile to compare the effects of the mixing on the masses of the open strange mesons as studied in the present work with the mass modifications of the open charm mesons due to the mixing effects \cite{dmeson_PV_amspm}. The total width of the neutral $D^*$ meson is not yet measured experimentally, but the branching ratio of the two modes ${D^*}^0\rightarrow D^0 \pi^0$ and ${D^*}^0\rightarrow D^0 \gamma$ is measured to be 64.7:35.3 \cite{pdg_2019_update}.
The decay width of ${D^*}^0\rightarrow D^0 \pi^0$ (and hence of ${D^*}^0\rightarrow D^0 \gamma$) is obtained \cite{Gubler_D_mag_QSR,dmeson_PV_amspm} by taking the coupling strengths of the decays ${D^*}^0\rightarrow D^0 \pi^0$ and ${D^*}^+\rightarrow D^+ \pi^0$ to be the same. This is observed to lead to the pseudoscalar--vector meson mixing parameter for the neutral open charm mesons, $g_{D^0{D^*}^0}$, being quite large, about 4 times larger than the mixing parameter for ${D^*}^+ - D^+$ mixing \cite{Gubler_D_mag_QSR,dmeson_PV_amspm}. The mass modifications for the neutral open charm mesons due to the mixing effects are thus observed to be much more pronounced compared to the mass modifications of the charged $D$ and $D^*$ mesons. On the other hand, in the present investigation, the mass modifications of the neutral open strange mesons are moderate. The much larger value of the mixing parameter in the neutral open charm sector could be due to the availability of a single channel for $D^*\rightarrow D\pi$, which is ${D^*}^0 \rightarrow D^0 \pi^0$, in addition to the radiative decay channel ${D^*}^0 \rightarrow D^0 \gamma$, and these are the only decay modes of the ${D^*}^0$ meson. The radiative decay width is comparable to the decay width of $D^*\rightarrow D\pi$ for the neutral $D^*$. On the other hand, for the open strange meson sector, there are two channels ${K^*}^0 \rightarrow K^0 \pi^0$ and ${K^*}^0 \rightarrow K^+ \pi^-$, and the decay width of ${K^*}^0 \rightarrow (K\pi)^0$ is about 99.754 \% of the total width of the ${K^*}^0$ meson. The radiative decay width of ${K^*}^0 \rightarrow K^0 \gamma$ is about 0.246 \% of its total width, which makes the ${K^*}^0-K^0$ mixing quite small. Also, for the charged ${K^*}^+$ meson, the decay is dominated by ${K^*}^+ \rightarrow (K\pi)^+$ and the decay to $K^+\gamma$ is around $0.099 \%$ of its total width. These lead the mass modifications of the $K$ and $K^*$ mesons (both charged and neutral) due to the $K$-$K^*$ mixings to be quite moderate, as can be seen in figure \ref{mKKstr}. The masses of the charged open strange mesons, ${K^*}^\pm$ and $K^\pm$, as modified due to the Landau level contributions, are $({m_{{K^*}^\pm}^2-eB})^{1/2}$ and $({m_{K^\pm}^2+eB})^{1/2}$ respectively \cite{Chernodub}. The squared mass of the charged vector meson, $V$, becoming negative above a critical magnetic field, $(eB)_{crit}=m_V^2$, leads to condensation of the charged vector meson \cite{Chernodub}. For magnetic fields higher than the critical magnetic field, $(eB)_{crit}=m_\rho^2 \sim 30 m_\pi^2$, there is condensation of the charged $\rho$ mesons ($u\bar d (d\bar u)$ bound states), arising from the gluon mediated attractive interaction of the quark and antiquark of different flavours in the spin 1 state \cite{Chernodub}. For still stronger magnetic fields, $eB$ greater than $(eB)_{crit}=m_{K^*}^2 \simeq 40 m_\pi^2$, there should be condensation of the charged $K^*$ mesons ($u \bar s (s \bar u)$ bound states) as well. The decay widths of $K^* \rightarrow K \pi$ and $\phi \rightarrow K\bar K$, as modified due to the mass modifications of these mesons in the presence of a magnetic field, are studied using a field theoretical model of composite hadrons with constituent quarks and antiquarks. These decay processes are studied using a light quark-antiquark pair creation term, which is the quark-antiquark creation term of the free Dirac Hamiltonian in terms of the constituent quark fields.
The matrix element of this term between the initial and final states of the decay process is calculated to compute the decay widths of the processes $K^* \rightarrow K \pi$ and $\phi \rightarrow K\bar K$. As has already been mentioned, $\lambda_1$ and $\lambda_2$ in the state of the $K(K^+,K^0)$ meson ($q \bar s$ bound state, $q=(u,d)$) given by equation (\ref{pseudoscalar_meson}) are the fractions of the mass (energy) of the $K$ meson at rest (in motion) carried by the constituent strange antiquark and the constituent light quark ($u$, $d$), and $\lambda_1+\lambda_2$=1. These are calculated by assuming the binding energy of the hadron as shared by the quark (antiquark) to be inversely proportional to the quark (antiquark) mass \cite{spm782}. With the vacuum mass (in MeV) of the $K^+ (K^0)$ meson given as 493.677 (497.61), $M_{u,d}$=330 MeV \cite{amspmwg}, and $M_s$=480 MeV, the value of $\lambda_2$ is obtained as 0.71. The harmonic oscillator strengths of the pseudoscalar mesons ($K$, $\pi$) and vector mesons ($\phi$ and $K^*$) are needed to calculate the decay widths of $K^* \rightarrow K\pi$ and $\phi \rightarrow K \bar K$. The value of $R_\pi$=(211 MeV)$^{-1}$ \cite{spm782,amspmwg} was fitted from the value of the charge radius squared of the pion given as $0.4 {\rm fm}^2$. We determine the harmonic oscillator strength parameter for the $K$ meson, $R_K$, assuming the ratio $R_K/R_\pi$ to be the same as the ratio of their charge radii, $({r_{ch}})_K/({r_{ch}})_\pi$. Taking $({r_{ch}})_K$=0.56 fm \cite{pdg_2019_update}, the value of $R_K$ is obtained as (238.3 MeV)$^{-1}$. The value of $R_\phi$ is obtained from the observed decay width of $\phi \rightarrow e^+e^-$ of 1.26377 keV. The expression for the decay width is given as \cite{Van_Royen} \begin{equation} \Gamma (\phi \rightarrow e^+ e^-) =\frac {16 \pi \alpha ^2}{9 m_\phi^2} |\psi({\bf r}={\bf 0})|^2, \end{equation} with $\alpha=1/137$. This yields the value of $R_\phi$ to be (290.7 MeV)$^{-1}$. The value of $R_{K^*}$ is assumed to be the same as $R_K$ in the present work. The values of $\gamma_{K^*}$ for the decays ${K^*}^+\rightarrow (K\pi)^+$ and ${K^*}^0\rightarrow (K\pi)^0$, as fitted to their observed vacuum decay widths of 50.75 and 47.18 MeV \cite{pdg_2019_update}, are obtained as 2.4742 and 2.347 respectively. These yield the values of the decay widths for the subchannels of the ${K^*}^+$ meson to $K^+\pi^0$ and $ K^0 \pi^+$ to be 16.98 and 33.77 MeV, and of the decay widths of ${K^*}^0$ to $K^0\pi^0$ and $K^+ \pi^-$ to be 15.87 and 31.31 MeV respectively. The effects of the magnetic field on the decay widths of the charged and neutral $K^*\rightarrow K\pi$ are plotted in figure \ref{dwFT_Kstr}. The decay widths for ${K^*}^+$ to $K^0 \pi^+$ as well as to $K^+ \pi^0$ are observed to increase monotonically with the magnetic field when the mass modifications are taken into account only due to the mixing of the $K$ and $K^*$ mesons (indicated as PV in figure \ref{dwFT_Kstr}), and these modifications are seen to be moderate. The vacuum values of these decay widths of 33.77 and 16.98 MeV are observed to be modified to 40.63 and 20.34 MeV at $eB=10 m_\pi^2$ for ${K^*}^+\rightarrow K^0\pi^+$ and ${K^*}^+\rightarrow K^+\pi^0$ respectively. When the Landau level contributions are also taken into account for the charged mesons, ${K^*}^+$ and $\pi^+$, there is observed to be a drastic drop in the decay width of ${K^*}^+ \rightarrow K^0 \pi^+$, mainly due to the drop (rise) in the mass of the charged vector (pseudoscalar) meson from the Landau level contributions.
The value of the decay width vanishes at around $eB = 5 m_\pi^2$ and remains zero up to around $eB = 7 m_\pi^2$. As the magnetic field is further increased, the dominant increase in the mass of ${K^*}^+$ due to mixing effects leads to an increase in the decay width of ${K^*}^+ \rightarrow K^0 \pi^+$. The decay width of ${K^*}^+ \rightarrow K^+ \pi^0$ is observed to have a similar behaviour when the Landau level contributions as well as mixing effects are taken into account. The masses of the $\phi$ and $\eta'$ due to their mixing in the presence of a magnetic field are shown in panel (a) in figure \ref{mass_dwFT_phi}. There is observed to be an increase (drop) in the $\phi (\eta')$ meson mass due to this mixing. There are no Landau level contributions to their masses as these mesons are charge neutral. The values of $\gamma_\phi$ for the decays $\phi \rightarrow K^+K^-$ and $\phi \rightarrow K^0 {\bar {K^0}}$ are obtained as 2.38 and 2.41, as fitted to their observed vacuum decay widths of 2.0905 and 1.53 MeV \cite{pdg_2019_update}. The decay widths of $\phi$ to $K^+ K^-$ as well as to the neutral $K^0 \bar {K^0}$ are shown in panel (b) in figure \ref{mass_dwFT_phi}. The effects of the magnetic field on the decay width to the neutral $K\bar K$ pair arise from the mass modifications of the $\phi$ meson as well as of the $K^0$ (and $\bar {K^0}$) mesons due to the $\phi-\eta'$, ${K^*}^0-{K^0}$ (and $\bar {K^*}^0-\bar {K^0}$) mixings. These mixing effects lead to a rise in the mass of the decaying $\phi$ and a drop in the masses of the outgoing pseudoscalar $K^0$ and ${\bar {K^0}}$ mesons. These mass modifications lead to a monotonic increase in the decay width of $\phi\rightarrow K^0 {\bar {K^0}}$ with increase in the magnetic field. The decay width of $\phi\rightarrow K^+ K^-$ also shows a similar trend when the Landau contributions to the masses of the charged kaons and antikaons are not taken into consideration. The Landau level contributions lead to an increase in the masses of the charged $K\bar K$, due to which it is observed that the decay width vanishes at around $eB=m_\pi^2$ and remains zero up to around $eB=8.1 m_\pi^2$. As the magnetic field is increased further, the decay to $K^+K^-$ is observed to again become kinematically possible, as the masses of the charged $K$ and $ \bar K$ decrease at high magnetic field due to the dominance of the mixing ($K^\pm - {K^*}^\pm$) effects. This can be seen in figure \ref{mKKstr} for the ${K^*}^+ - K^+$ mixing in the presence of a magnetic field. The present study of the effects of strong magnetic fields on the strange vector mesons is complementary to the recent work on the spectral functions and the production cross-sections of the strange vector mesons ($\phi$, $K^*(\bar {K^*})$) from $K\bar K$, $K(\bar K)\pi$ scatterings in isospin asymmetric strange hadronic matter \cite{strangedecaywidths}. It may be mentioned here that in the relativistic heavy ion collision experiments, contrary to the heavy flavour mesons, which are produced at the early stage when the magnetic field produced can still be high, the strange vector mesons $\phi$ and $K^*$ are produced at the later stages of evolution when the magnetic field is expected to be very small. Hence it is not feasible to study the effects of strong magnetic fields on these strange mesons from the experimental observables of heavy ion collision experiments.
However, apart from being a topic of theoretical interest, the study of the effects of a magnetic field on the properties of the strange mesons, e.g., the kaons and $\phi$ mesons, could have implications for the composition of neutron star matter due to the possible existence of large magnetic fields in the interior of magnetars \cite{Aguirre_neutral_K_phi}. \section{Summary} In the present work, we have studied the modifications of the masses of the vector mesons ($\phi$ and $K^*$) and the pseudoscalar mesons ($\eta'$, $K$) in the presence of strong magnetic fields, due to $\phi-\eta'$, $K^*-K$ mixings. The modifications in the masses of the charged mesons in their ground states are in addition to the lowest Landau level contributions. The effects of the mass modifications due to the magnetic field on the decay widths of $K^*\rightarrow K\pi$ and $\phi\rightarrow K \bar K$ are investigated using a field theoretic model of composite hadrons. For the neutral open strange mesons, $K^0$ and ${K^*}^0$, as well as the charged mesons $K^+$ and ${K^*}^+$, there are observed to be marginal modifications of the masses, leading to a drop (increase) in the mass of the pseudoscalar (longitudinal component of the vector) meson. The contributions of the Landau levels for the charged mesons are observed to lead to a rise (drop) in the mass of the $K^+ ({K^*}^+)$ meson, which gives rise to a smaller mass difference between the $K^+$ and ${K^*}^+$ with increase of the magnetic field. As the magnetic field is increased further, the mixing effects become more important, leading to a dominant increase (drop) in the ${K^*}^+$(${K}^+$) mass. This leads the decay width of the charged $K^*$ meson to $K\pi$ (${K^*}^+\rightarrow K^0 \pi^+$ as well as ${K^*}^+\rightarrow K^+ \pi^0$) to have an initial drop, followed by a moderate change and then a sharp rise as $eB$ is further increased. The decay width ${K^*}^+\rightarrow K^0 \pi^+$ is observed to be zero for values of $eB/m_\pi^2$ between 5 and 7, due to the increase in the mass of the charged pion in the final state. The decay width of ${K^*}^0 \rightarrow K^+ \pi^-$ is observed to have a sharp drop with increase in the magnetic field and becomes zero at around $eB=4.2 m_\pi^2$. This is due to the positive contributions from the Landau levels to the masses of both the charged mesons in the final state. For the decay width of $\phi \rightarrow K \bar K$, the Landau contributions are observed to lead to the vanishing of the decay to the charged $K^+K^-$ pair for $eB$ above around $m_\pi^2$. The value of the decay width for the charged $K\bar K$ remains zero up to around 8$m_\pi^2$. On the other hand, the decay width of $\phi \rightarrow K^0 {\bar {K^0}}$ is observed to increase monotonically with the increase in the magnetic field. \section*{Acknowledgements} One of the authors (AM) is grateful to ITP, University of Frankfurt, for warm hospitality when this work was initiated. AM also acknowledges financial support from the Department of Science and Technology (DST), Government of India (project no. CRG/2018/002226).
\section{Introduction} \label{sec:intro} \vspace{-1mm} Consider a conventional neural network trained to predict whether a random person would classify a drawing as either a `rabbit' or a `duck'. As illustrated in Figure \ref{fig:nn_example}, given a single drawing, the network generates a marginal prediction that assigns probabilities to the two classes. If the probabilities are each $0.5$, it remains unclear whether this is because opinions across the population are equally divided or the neural network would learn a definitive class if subsequently trained on additional data. These two possibilities would be distinguished by a joint prediction across two identical inputs. As illustrated in tables to the right of the neural network output, such a joint prediction takes the form of a two-by-two table. If opinions are divided due to ambiguity of the image, probabilities would be equally distributed across the four cells. On the other hand, if uncertainty would be completely resolved with further training, the probability of labels differing across identical images is zero. \begin{figure}[!ht] \vspace{-1mm} \centering \includegraphics[width=0.8\textwidth]{figures/rabbit_duck_3.png} \vspace{-1mm} \caption{Conventional neural networks generate marginal predictions, which do not distinguish genuine ambiguity from insufficiency of data.} \label{fig:nn_example} \vspace{-1mm} \end{figure} This example indicates that joint predictions express the extent to which uncertainty will be resolved by additional data, whereas marginal predictions do not. Understanding whether and how uncertainty can be resolved is critical to a number of intelligent behaviors such as efficient exploration and adaptation. As elucidated by \citet{wen2022predictions}, such challenges call for the ability to produce accurate joint predictions. In this paper, we introduce the {\it epistemic neural network} (ENN) -- an interface for models that represent uncertainty as required to generate useful joint predictions. While prior approaches to uncertainty modeling such as Bayesian neural networks can be expressed as ENNs, the new interface facilitates assessment and comparison of joint predictions and the design of novel architectures and algorithms. In particular, we introduce the {\it epinet}: an architecture that can supplement any conventional neural network to estimate uncertainty more effectively than state-of-the-art approaches. For example, with an epinet, conventional neural networks can outperform very large `deep ensembles' \citep{lakshminarayanan2017simple}, consisting of hundreds or more particles, with orders of magnitude less computation. Figure~\ref{fig:imagenet_overall} offers a preview of results of Section~\ref{sec:supervised_image}, where we compare these approaches on ImageNet, evaluating each model on out-of-sample data after training. The quality of the ResNet's marginal predictions -- measured in terms of classification error or marginal log-loss -- does not change much if supplemented with an epinet. However, as a function of the number of model parameters, the epinet-enhanced ResNet dramatically reduces joint log-loss. 
\begin{figure}[!ht] \centering \includegraphics[width=0.99\textwidth]{figures/large_base_epinet_plot3.pdf} \caption{Quality of marginal and joint predictions across models on ImageNet (Section~\ref{sec:supervised_image}).} \label{fig:imagenet_overall} \end{figure} \subsection{Epistemic neural networks} \label{sec:intro_enn} Given an input $x$ and parameters $\theta$, a conventional NN produces an output $f_\theta(x)$. The output $f_\theta(x, z)$ of an ENN depends additionally on an \textit{epistemic index} $z$. An ENN specifies a fixed reference distribution $P_Z$ for the epistemic index. Typical choices include a uniform distribution over a finite set or a standard Gaussian over a vector space. The index $z$ is used to express epistemic uncertainty. In particular, variation of the network output with $z$ indicates uncertainty that might be resolved by future data. As we will see, the introduction of an epistemic index allows us to represent the kind of uncertainty required to generate useful joint predictions. In typical classification settings, a conventional neural network produces a marginal prediction that assigns a probability $\softmax(f_\theta(x))_y$ to each class $y$. A joint prediction across inputs $x_1,\ldots,x_\tau$ would assign a probability to each class combination $y_1,\ldots,y_\tau$. While conventional neural networks are not designed to provide joint predictions, joint predictions can be produced by multiplying marginal predictions: \begin{equation} \label{eq:nn_joint} \prod_{t=1}^\tau \softmax \left(f_{\theta}(x_t)\right)_{y_t}. \end{equation} However, as discussed earlier, such use of marginal predictions fails to distinguish ambiguity from insufficiency of data. ENNs address this by enabling more expressive joint predictions through integrating over epistemic indices: \begin{equation} \label{eq:enn_joint} \int_z P_Z(dz) \prod_{t=1}^\tau \softmax \left(f_{\theta}(x_t,z)\right)_{y_t}. \end{equation} This integration introduces dependencies so that joint predictions are not necessarily just the product of marginals. Figure~\ref{fig:all_nn_diagrams} illustrates the difference between conventional and epistemic neural network architectures. \begin{figure}[!ht] \centering \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=.99\linewidth]{figures/nn_diagram.jpg} \caption{Conventional neural network.} \label{fig:nn_diagram} \end{subfigure} \hspace{2mm} \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=.99\linewidth]{figures/enn_diagram.jpg} \caption{Epistemic neural network.} \label{fig:enn_diagram} \end{subfigure} \caption{An ENN samples an epistemic index $Z$ from a reference distribution $P_Z$. This enables expression of joint predictions beyond the product of marginal predictions.} \label{fig:all_nn_diagrams} \end{figure} Although the ENN framing and notation are new, all existing approaches to uncertainty estimation can be expressed as ENNs. In particular, any Bayesian neural network (BNN) can easily be written as an ENN. Where a conventional NN produces output $f_\theta(x)$, a BNN then maintains a posterior distribution over weights $\theta$ \citep{neal2012bayesian}. For any posterior distribution, we can define reference distribution $P_Z$ and parameterized class of functions $g_{\theta'}$ such that a prediction $f_{g_{\theta'}(Z)}(x)$, with $Z$ sampled from $P_Z$, reflects predictive uncertainty equivalent to that of the original BNN. 
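As a minimal numerical illustration of the difference between the joint predictions (\ref{eq:nn_joint}) and (\ref{eq:enn_joint}), the following schematic sketch (plain Python, not the released library implementation) treats a toy two-particle ensemble as an ENN with a uniform reference distribution and evaluates both joint predictions over two identical inputs, mirroring the two-by-two tables of Figure~\ref{fig:nn_example}.

\begin{verbatim}
import numpy as np

def softmax(logits, axis=-1):
    logits = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(axis=axis, keepdims=True)

# Toy ENN for a single input x: two epistemic indices (ensemble particles),
# two classes. Particle 0 is confident in class 0, particle 1 in class 1,
# so the marginal prediction is close to (0.5, 0.5).
logits = np.array([[+3.0, -3.0],   # f_theta(x, z=0)
                   [-3.0, +3.0]])  # f_theta(x, z=1)
probs = softmax(logits)            # shape [num_indices, num_classes]
marginal = probs.mean(axis=0)      # ~[0.5, 0.5]

# Joint prediction over two identical inputs (x, x):
joint_product = np.outer(marginal, marginal)                  # equation (nn_joint)
joint_enn = np.mean([np.outer(p, p) for p in probs], axis=0)  # equation (enn_joint)

print(joint_product)  # ~0.25 in every cell: looks like irreducible ambiguity
print(joint_enn)      # ~0.5 on the diagonal: uncertainty resolvable by more data
\end{verbatim}

The product of marginals spreads probability almost evenly over the four label combinations, whereas integrating over the epistemic index concentrates it on agreeing labels, expressing that the uncertainty could be resolved by additional data.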
Concretely, a `deep ensemble' can be expressed as an ENN by taking $Z$ to index particles of the ensemble and the reference distribution $P_Z$ to be uniform across these indices. For Monte Carlo dropout, we could take $Z$ to identify the dropout mask \citep{Gal2016Uncertainty}. See Appendix~\ref{app:enn_examples} for full details and link to open-source code. While ENNs and BNNs are closely related, there are some important differences. Where BNNs typically view neural network weights as unknown parameters to be inferred, ENNs focus on the resultant predictive distributions. Though ENNs maintain parameters and adapt them to data, the approach ascribes no semantic meaning to them. The sole role of ENN parameters is in driving effective predictions. While all BNNs can be naturally framed as ENNs, it is not the case that all ENNs are natural to describe as BNNs. The epinet, which we will introduce in Section~\ref{sec:epinet}, serves as an example of such an ENN. \subsection{Contributions} \label{sec:intro_contributions} \textbf{We introduce epistemic neural networks (ENNs)}, and we view this new perspective as an important contribution of this paper. This new interface brings focus to joint predictions rather than inference of network weight distributions, which is emphasized in Bayesian deep learning. Only data is {\it real}, and as such, joint predictions of unobserved data capture all relevant uncertainty. While the Bayesian deep learning literature views weights as unknowns to be inferred, ENNs attribute no semantic meaning to weights and treat them only as artifacts that facilitate computation. This ENN perspective sidesteps unnecessary restrictions that pose challenges for Bayesian deep learning algorithms. While all BNNs can be expressed as ENNs, we develop an ENN that is not naturally expressed as a BNN but can produce accurate joint predictions with orders of magnitude less compute than BNN approaches. In Section~\ref{sec:epinet}, \textbf{we introduce the epinet: an ENN architecture that can supplement any conventional NN and be trained to estimate uncertainty}. At its core, the epinet training procedure uses randomized prior functions to generate approximate posterior samples \citep{osband2018rpf}. However, unlike previous work, which generates multiple samples independently via large ensembles, the epinet can be trained with modest incremental computation. In the remainder of our paper, \textbf{we demonstrate the efficacy of the epinet across a range of experiments}. Section~\ref{sec:supervised} shows that an epinet can match the performances of state-of-the-art approaches to joint predictions at orders of magnitude lower computational cost. These results are consistent across data generated by random neural networks (Section~\ref{sec:supervised_testbed}) and large scale image benchmarks (Section~\ref{sec:supervised_image}). Section~\ref{sec:rl} shows that these improvements in the quality of joint predictions drive improved performance in some reinforcement learning tasks. We show that agents using an epinet can outperform existing BNN approaches at much lower computational cost. These results are consistent across tasks generated by random neural networks (Section~\ref{sec:rl_bandit}) and a reinforcement learning benchmark (Section~\ref{sec:rl_bsuite}). As part of this research effort, \textbf{we release the core open-source code necessary to reproduce our experimental results in Appendix~\ref{app:code}}. 
This includes an optimized Haiku-based library for epistemic neural networks, as well as high-quality implementations of baselines and experiments \citep{jax2018github, deepmind2020jax}. These research tools should enable the community to contribute to ENN technology and together make advances in uncertainty modeling. \subsection{Related work} \label{sec:intro_related} Our work builds on the subject of Bayesian neural networks (BNNs) \citep{mackay1992practical, neal2012bayesian}, which has produced a rich literature. Research in this area has developed and analyzed methods that estimate uncertainty \citep{welling2011bayesian, blundell2015weight,mandt2018stochastic} and highlighted differing roles and interpretations of epistemic and aleatoric uncertainty \citep{der2009aleatory, KendallGal17WhatUncertainties}. Our work is motivated by the importance of joint predictions in driving decision, exploration, and adaptation \citep{wang2021beyond, osband2022neural,wen2022predictions}. Our experimental results rely on dyadic sampling to evaluate the quality of joint predictions in high dimensions \citep{osband2022evaluating}. There is a long history of research in reinforcement learning that develops representations that aim to track what the agent knows or does not know (see, e.g., \citealp{li2011knows,lu2021reinforcement}). ENNs facilitate comparison and assessment of such representations, without worrying about `whether XYZ is Bayesian' \citep{izmailov2021bayesian}. \section{The epinet} \label{sec:epinet} This section introduces the \textit{epinet}, which can supplement any conventional neural network to produce an effective ENN. Our approach reuses standard deep learning blocks and training algorithms. As we will see, it is straightforward to add an epinet to any existing model, even one that has been pretrained. \subsection{Architecture} \label{sec:epinet_network} Consider a conventional neural network, which we will refer to as the {\it base network}. Given base parameters $\zeta$ and an input $x$, denote the output by $\mu_{\zeta}(x)$. If this is a classification model, the associated class probabilities would be $\softmax(\mu_{\zeta}(x))$. An epinet is a neural network with privileged access to inputs and outputs of activation units in the base network. A subset of these inputs and outputs, denoted by $\phi_\zeta(x)$, are taken as input to the epinet along with an epistemic index $z$. For epinet parameters $\eta$, the epinet outputs $\sigma_\eta(\phi_\zeta(x), z)$. To produce an ENN, the output of the epinet is added to that of the base network, though with a ``stop gradient'': \begin{equation} \label{eq:epinet} \underbrace{f_\theta(x, z)}_{\text{ENN}} = \underbrace{\mu_\zeta(x)}_{\text{base net}} + \underbrace{\sigma_\eta(\mathrm{sg}[\phi_\zeta(x)], z)}_{\text{epinet}}. \end{equation} The ENN parameters $\theta = (\zeta, \eta)$ include those of the base network and epinet. The ``stop gradient'' notation $\mathrm{sg}[\cdot]$ indicates that when computing a gradient the argument is treated as fixed. For example, $$\nabla_\theta f_\theta(x,z) = \left[\begin{array}{c}\nabla_\zeta \mu_\zeta(x) \\ \nabla_\eta \sigma_\eta(\phi_\zeta(x), z)\end{array}\right].$$ The epinet used to produce the ImageNet results of Figure \ref{fig:imagenet_overall} serves as an illustrative example. In that case, the base network is a ResNet and the input $\phi_\zeta(x)$ of the epinet is a concatenation of the ResNet's input $x$ and last-layer features. 
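Schematically, the forward pass of equation (\ref{eq:epinet}) can be sketched as follows (illustrative NumPy code with made-up layer sizes, not the released Haiku implementation; the stop gradient only matters during training and is therefore indicated only by a comment).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
num_classes, feature_dim, index_dim, hidden = 10, 32, 8, 15

# Illustrative stand-ins: a linear head as the base-network output mu_zeta(x),
# acting on last-layer features phi_zeta(x), and a small MLP as the epinet sigma_eta.
W_base = rng.normal(size=(feature_dim, num_classes)) / np.sqrt(feature_dim)
W1 = rng.normal(size=(feature_dim + index_dim, hidden)) / np.sqrt(feature_dim + index_dim)
W2 = rng.normal(size=(hidden, num_classes)) / np.sqrt(hidden)

def enn_forward(features, z):
    # f_theta(x, z) = mu_zeta(x) + sigma_eta(sg[phi_zeta(x)], z), equation (eq:epinet).
    # The stop gradient sg[.] blocks gradients from the epinet into the base network
    # during training; in this forward-only sketch there is nothing to stop.
    base_out = features @ W_base                    # mu_zeta(x)
    epi_in = np.concatenate([features, z])          # epinet input: [sg[phi_zeta(x)], z]
    epi_out = np.maximum(epi_in @ W1, 0.0) @ W2     # sigma_eta(., z)
    return base_out + epi_out

features = rng.normal(size=feature_dim)   # features of one input x
zs = rng.normal(size=(100, index_dim))    # indices drawn from P_Z (standard normal)
logits = np.stack([enn_forward(features, z) for z in zs])
print(logits.std(axis=0))  # variation with z expresses epistemic uncertainty
\end{verbatim}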
Each epistemic index is a $30$-dimensional vector, and its reference distribution is standard normal. Before training, variation of the ENN output $f_\theta(x, z)$ as a function of $z$ reflects prior uncertainty. Since the base network does not depend on $z$, this variation must derive from the epinet. In our experiments, we induce this initial variation using \textit{prior networks} \citep{osband2018rpf}. In particular, for $\tilde{x} = \mathrm{sg}[\phi_\zeta(x)]$, our epinets take the form \begin{equation} \label{eq:prior_fn} \underbrace{\sigma_\eta(\tilde{x}, z)}_{\text{epinet}} = \underbrace{\sigma_\eta^L(\tilde{x}, z)}_{\text{learnable}} + \underbrace{\sigma^P(\tilde{x}, z)}_{\text{prior net}}. \end{equation} The prior network $\sigma^P$ represents prior uncertainty and has no trainable parameters. In our experiments, prior networks are suitably architected neural networks with weights randomly sampled such that outputs exhibit appropriate variation with the epistemic index $z$. For all the experiments in this paper, we use a simple MLP architecture for the learnable component $\sigma_\eta^L$. In general, the learnable component $\sigma_\eta^L$ is designed to initially output near-zero values. After training on data, the resultant sum $\sigma_\eta$ represents an estimate of posterior uncertainty. \subsection{Training} \label{sec:epinet_training} We describe in this section how we train the ENN in supervised learning tasks. In Section \ref{sec:rl_bsuite}, we discuss an extension to reinforcement learning. Given a set $\mathcal{D}$ of data pairs, consider a loss function \begin{equation} \label{eq:loss} \mathcal{L}(\theta, \mathcal{D}) = \int_z P_Z(dz) \left(\overbrace{\sum_{(x, y) \in \mathcal{D}} \ell(\theta, x, y, z)}^{\text{data loss}} + \overbrace{\Psi(\theta, z)}^{\text{regularization}} \right). \end{equation} This is similar to loss functions employed in the training of conventional neural networks, but with the addition of an integral over the epistemic index $z$. Parameters are adapted by stochastic gradient descent, but with the ``stop gradient'' introduced in Equation (\ref{eq:epinet}), which freezes the epinet input when computing gradients with respect to the base network parameters $\zeta$. Each step is based on a minibatch $\tilde{\mathcal{D}} \subset \mathcal{D}$ of data along with a minibatch $\tilde{\mathcal{Z}}$ of indices sampled i.i.d. from the reference distribution $P_Z$. Each stochastic gradient sample takes the form \begin{equation} \label{eq:loss_minibatch} \nabla_{\theta} \left(\frac{1}{|\tilde{\mathcal{Z}}|} \sum_{z \in \tilde{\mathcal{Z}}} \left(\frac{|\mathcal{D}|}{|\tilde{\mathcal{D}}|} \sum_{(x, y) \in \tilde{\mathcal{D}}} \ell(\theta, x, y, z) + \Psi(\theta, z) \right)\right). \end{equation} In our experiments, we use standard loss functions that are common to conventional neural network training. In particular, for regression problems, we use mean squared error $\ell^{\rm MSE}(\theta, x, y, z) = \left(f_\theta(x,z) - y \right)^2$. For classification problems, we use the cross-entropy loss $\ell^{\rm XENT}(\theta, x, y, z) = - \ln\left(\softmax(f_\theta(x,z))_y\right)$. In both cases, we use ridge regularization: $\Psi^{L_2}(\theta, z) = \lambda \|\theta\|_2^2$. The coefficient $\lambda$ is a tunable hyperparameter that weights regularization relative to data loss. It is natural to consider using different coefficients for base network versus epinet parameters. 
However, that difference did not play a significant role in our experiments and we used a single coefficient. As elucidated by \cite{wen2022predictions}, when the signal-to-noise ratio of the observed label $y$ varies substantially with the input $x$, in addition to tuning the regularization coefficient $\lambda$, perturbing the data loss function in a manner that depends on the epistemic index $z$ can improve out-of-sample joint predictions. Effective approaches to perturbation based on versions of the statistical bootstrap have been proposed in the literature \citep{osband2015bootstrapped,lu2017ensemble}. Such perturbations were not helpful in our experiments, however, perhaps because each of our domains did not present substantial variation in signal-to-noise ratios. One useful feature of the epinet is that it can supplement a pretrained base network and be trained with frozen base network weights. This makes the epinet suitable as an addition to large models that are expensive to train, such as pretrained language models \citep{brown2020language}. Further, producing uncertainty estimates by sampling and evaluating outputs across many epistemic indices does not require multiple forward passes through the base network, which does not depend on the epistemic index, but only the epinet, which can be much smaller and thus economical. \vspace{-2mm} \section{Supervised learning} \label{sec:supervised} \vspace{-1mm} In this section, we study the performance of epinet architectures in supervised learning. We begin with a benchmark that produces synthetic data using a neural-network-based generative model. We then treat image classification benchmarks, including ImageNet. In each setting, we examine the empirical trade-off realized by a variety of agents between statistical loss and computational cost. We find the epinet to be particularly attractive. For example, it attains joint log-loss competitive with very large ensembles that include hundreds or thousands of particles, but with orders of magnitude less computation. \vspace{-1mm} \subsection{The neural testbed} \label{sec:supervised_testbed} \vspace{-2mm} \textit{The Neural Testbed} is an open-source benchmark that evaluates the quality of predictive distributions on classification problems, with synthetic data produced by neural-network-based generative models \citep{osband2022neural}. It serves as a unit test for sanity-checking agents in a controlled environment. Table~\ref{tab:agent_summary} lists agents that we study and compare as well as hyperparameters that we tune. In our experiments, we optimize these hyperparameters via grid search. We use the open-source github repository \url{https://github.com/deepmind/neural\_testbed} ~and will submit our new \texttt{epinet} agent to this library. 
\vspace{-2mm} \begin{table}[!ht] \begin{center} \caption{Summary of benchmark agents, full details in Appendix \ref{app:testbed_agents}.} \label{tab:agent_summary} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|l|} \hline \rowcolor{tableHeader} \textcolor{white}{\textbf{agent}} & \textcolor{white}{\textbf{description}} & \textcolor{white}{\textbf{hyperparameters}} \\[0.5ex] \hline \textbf{\texttt{mlp}} & Vanilla MLP & $L_2$ decay \\ \textbf{\texttt{ensemble}} & `Deep Ensemble' \citep{lakshminarayanan2017simple} & $L_2$ decay, ensemble size \\ \textbf{\texttt{dropout}} & Dropout \citep{Gal2016Dropout} & $L_2$ decay, network, dropout rate \\ \textbf{\texttt{bbb}} & Bayes by Backprop \citep{blundell2015weight} & prior mixture, network, early stopping \\ \textbf{\texttt{hypermodel}} & Hypermodel \citep{Dwaracherla2020Hypermodels} & $L_2$ decay, prior, bootstrap, index dimension \\ \textbf{\texttt{ensemble+}} & Ensemble + prior functions \citep{osband2018rpf} & $L_2$ decay, ensemble size, prior scale, bootstrap \\ \textbf{\texttt{sgmcmc}} & Stochastic Langevin MCMC \citep{welling2011bayesian} & learning rate, prior, momentum \\ \textbf{\texttt{epinet}} & MLP + MLP epinet (this paper) & $L_2$ decay, network, prior, index dimension \\ \hline \end{tabular}% } \end{center} \end{table} \vspace{-1mm} For our \texttt{epinet} agent, we initialize a random base network $\mu_\zeta(x)$ as per the baseline \texttt{mlp} agent. We take $\phi_\zeta(x)$ to be the concatenation of $x$ and the last-layer features of the base network. Let $C$ denote the number of classes and $D_Z$ denote the index dimension. The learnable network $\sigma^L_\eta(\phi_\zeta(x), z) = g_\eta([\phi_\zeta(x), z])^T z$, where $g_\eta(\cdot)$ is a 2-layer MLP with outputs in $\mathds{R}^{D_Z \times C}$, and $[\phi_\zeta(x), z]$ is concatenation of $\phi_\zeta(x)$ and $z$. The prior network $\sigma^P$ is an ensemble of $D_Z$ particles sampled from the distribution of the data generating model that act directly on the input $x$. After tuning, we chose epinet hidden layer widths $(15, 15)$, with an index dimension of $8$ and standard Gaussian reference distribution. Full details, together with open source code, are provided in Appendix~\ref{app:testbed_agents}. \begin{figure}[!ht] \vspace{-4mm} \centering \includegraphics[width=0.99\textwidth]{figures/neural_testbed_scaling.pdf} \vspace{-2mm} \caption{Comparison of agents on the neural testbed.} \label{fig:neural_testbed_scaling} \vspace{-2mm} \end{figure} Figure~\ref{fig:neural_testbed_scaling} examines the trade-offs between statistical loss and computational cost for each of the agents after tuning. The bars indicate one standard error, estimated over the testbed's random seeds. In terms of classification error, most of the agents perform similarly after tuning. Most agents also perform similarly in marginal log-loss, though \texttt{sgmcmc} agent reduces loss relative to others at a high computational cost. The key differences in performance come when examining the \textit{joint} log-loss, which we evaluate over ten inputs selected via dyadic sampling (\citet{osband2022evaluating}, Appendix~\ref{app:dyadic}). In terms of joint log-loss, the \texttt{epinet} is competitive with top-performing agents at orders of magnitude lower computational cost. Our results show that, in terms of joint predictions, the \texttt{epinet} can dramatically improve joint predictions and/or dramatically reduce computational cost relative to any of the other agents. 
These effects are huge, even when compared to popular Bayesian deep learning approaches: \texttt{bbb}, \texttt{dropout}, and \texttt{ensemble}. This highlights a blind-spot in the development of existing agents: the focus has been on marginal rather than joint predictions. \subsection{Image classification} \label{sec:supervised_image} The benefits of the \texttt{epinet} scale up to more complex image data. In fact, these benefits become even more substantial. This section focuses on experiments around the ImageNet dataset \citep{deng2009imagenet}, although we produce qualitatively similar results for both CIFAR-10 and CIFAR-100 in Appendix~\ref{app:image_agents}. This appendix also compares our agents against the \textit{uncertainty baselines} of \citet{nado2021uncertainty}. Even after tuning for joint log-loss, none of these agents match epinet performance. For our experiments in this section, we first train several baseline ResNet architectures on ImageNet. In particular, we tune ResNet-$L$ with $L \in \{50, 101, 152, 200\}$ over learning rate, weight decay and temperature rescaling \citep{wenzel2020good} in the Jaxline framework \citep{deepmind2020jax}. The ensemble agent only uses the ResNet-$50$ architecture. After tuning hyperparameters, we independently initialize and train $100$ ResNet-50 models to serve as ensemble particles. These models are then used to form ensembles of sizes $1$, $3$, $10$, $30$, and $100$. For our epinet agent, we take the pretrained ResNet as the base network and freeze its weights. The input to the learnable network $\sigma^L_\eta$ includes the last-layer features of the base ResNet and the epistemic index. Similar to Section~\ref{sec:supervised_testbed}, the learnable network $\sigma^L_\eta(\phi, z) = g_\eta([\phi, z])^T z$, where $\phi$ is the last-layer features, $z$ is the epistemic index, $[\phi, z]$ is the concatenation of $\phi$ and $z$, and $g_\eta(\cdot)$ is a 1-layer MLP with 50 hidden units and output $\in \mathds{R}^{D_Z \times C}$ for $C$ classes. The fixed prior $\sigma^P$ consists of a network with the same architecture and initialization as $\sigma^L_\eta$, together with an ensemble of small random convolutional networks that directly take the image as inputs. We choose the index dimension $D_Z=30$ and $P_Z$ as standard Gaussian. We train the epinet using cross-entropy loss with ridge regularization. We push the details on hyperparameters and ablations, together with open-source code, to Appendix~\ref{app:image_agents}. \begin{figure}[!ht] \centering \includegraphics[width=0.99\textwidth]{figures/large_base_epinet_plot3.pdf} \repeatcaption{fig:imagenet_overall}{Quality of marginal and joint predictions on ImageNet.} \label{fig:imagenet_overall_repeat} \end{figure} \textbf{Figure~\ref{fig:imagenet_overall_repeat} presents a key result of this paper: relative to large ensembles, epinets greatly improve joint predictions at orders of magnitude lower computational cost.} Just as in the experiments of Section~\ref{sec:supervised_testbed}, we assess joint predictions over inputs selected by a dyadic sampling heuristic, which we review in Appendix~\ref{app:dyadic}. The figure plots performance of three agents with respect to three notions of loss as a function of model size in the spirit of \cite{kaplan2020scaling}. For these agents, the qualitative results are unchanged for alternative measures of computational cost such as FLOPs (Appendix \ref{app:image_agents}). 
The first two plots assess marginal prediction performance in terms of classification error and log-loss. Performance of the ResNet scales similarly whether or not supplemented with the epinet. Performance of the ensemble does not scale as well as that of the ResNet. The third plot pertains to performance of joint predictions and exhibits dramatic benefits afforded by the epinet. While joint log-loss incurred by the ensemble agent improves with model size more so than the ResNet, the epinet agent outperforms both alternatives by an enormous margin. \section{Reinforcement learning} \label{sec:rl} Efficient exploration and adaptation in reinforcement learning rely on an agent's ability to decide whether and how uncertainty can be resolved by additional data. Such abilities, as elaborated in \cite{wen2022predictions}, are highly related to whether an agent can produce accurate joint predictions. In this section, we demonstrate that the improvements in the quality of joint predictions achieved by epinets in Section~\ref{sec:supervised} translate to improved performance in some reinforcement learning tasks. \subsection{Enhanced DQN Agents} \label{sec:rl_intro} Common reinforcement learning agents, such as the deep Q-networks (DQNs) of \cite{mnih2015human}, use conventional neural networks to estimate a value function. Such a value function generates a vector of predictions $f_\theta(s)$ as a function of state $s$. Each component $f_\theta(s)_a$ represents a prediction of discounted future rewards contingent on next executing action $a$. Use of an epistemic rather than conventional neural network enables an agent to infer whether particular predictions ought to improve with more data. We fix an algorithm that selects each action based on the ENN used to represent the value function, and we compare performance across ENNs. We use a variant of DQN similar to that proposed in \citep{osband2016rlsvi}, which applies to episodic environments. This approach selects actions using approximate Thompson sampling that draws a random value function at the start of each episode. In particular, the agent samples an epistemic index $Z_k \sim P_Z$ at the beginning of each $k$th episode. The agent then selects an action $A_t$ that maximizes $f_\theta(S_t, Z_k)_{A_t}$ at each time $t$ within the episode. Similar to the DQN agent, we store state transition data tuples of the form $(S_t,A_t,R_{t+1},S_{t+1})$ in a replay buffer, we update the network parameters via stochastic gradient steps, and we use a target network $f_{\theta^{\rm target}}$ with periodically updated target parameters $\theta^{\rm target}$ to stabilize gradients. Each gradient step is based on a minibatch $\tilde{\mathcal{D}}$ of samples from the replay buffer and a minibatch $\tilde{\mathcal{Z}}$ of epistemic indices drawn i.i.d. from the reference distribution:\, \begin{equation} \label{eq:dqn_loss} \nabla_\theta \left(\frac{1}{|\tilde\mathcal{Z}|} \sum_{z \in \tilde \mathcal{Z}} \frac{1}{|\tilde\mathcal{D}|} \sum_{(s,a,r,s') \in \tilde\mathcal{D}} \left( f_\theta(s,z)_a - r - \gamma \max_{a'} f_{\theta^{\rm target}}(s', z)_{a'} \right)^2 \right), \end{equation} where $\gamma \in (0,1)$ is a discount factor. Full details are presented in Appendix~\ref{app:rl}. \subsection{Neural bandit} \label{sec:rl_bandit} The neural bandit \citep{osband2022neural} is an environment where rewards are generated by neural-network-based generating processes. We take the 2-layer MLP generative model from the Neural Testbed (Section~\ref{sec:supervised_testbed}). 
We consider $N = 1000$ actions, drawn i.i.d. from a $100$-dimensional standard normal distribution. At each timestep, the reward of selecting an action $a$ is generated by first forwarding the vector $a$ through the MLP, which gives $2$ logit outputs. The reward $\in \{0, 1\}$ is then sampled according to the class probabilities obtained from applying softmax to the logits. Our agents re-use the ENN architectures from Section~\ref{sec:supervised_testbed} to estimate value functions that predict immediate rewards (i.e., applying discount factor $0$). We run the agents for $50,000$ timesteps and average results over $30$ random seeds. The computational costs of each ENN scale similarly to Section~\ref{sec:supervised_testbed}; see Appendix~\ref{app:rl_bandit}.
Figure~\ref{fig:bandit_regret} displays the average regret through time attained using different ENNs. We can see that all agents are able to learn from their experience, but the quality of their performance is strongly impacted by the choice of ENN. The epinet achieves the lowest average regret through time, while also using much less computation. The scatter plots of Figure~\ref{fig:bandit_corr} report the correlation between prediction quality on the Neural Testbed and bandit performance, together with the 5th and 95th percentiles obtained by bootstrap resampling. The multiple points for any given agent represent results generated with different random seeds. Agents that produced accurate joint predictions performed well in the neural bandit. However, the quality of marginal predictions showed no strong relation with performance on bandit problems.
\begin{figure}[!ht] \vspace{-8mm} \centering \hspace{-10mm} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=4.7cm]{figures/average_regret_plot.pdf} \caption{Bandit performance.} \label{fig:bandit_regret} \end{minipage} \hspace{2mm} \begin{minipage}{.7\textwidth} \begin{figure}[H] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \vspace{-3mm} \includegraphics[height=4.7cm]{figures/bandit_corr_marginal.pdf} \end{subfigure}% \hspace{1mm} \begin{subfigure}[t]{0.45\textwidth} \centering \vspace{-3mm} \includegraphics[height=4.7cm]{figures/bandit_corr_joint.pdf} \end{subfigure} \vspace{-1mm} \caption{Relating bandit performance to prediction quality.} \label{fig:bandit_corr} \end{figure} \end{minipage}% \hspace{-10mm} \vspace{-3mm} \end{figure}
\subsection{Behaviour suite for reinforcement learning} \label{sec:rl_bsuite} The behaviour suite for reinforcement learning, or \texttt{bsuite} for short, is a collection of environments carefully-designed to investigate core capabilities of RL agents \citep{osband2020bsuite}. We repeat our analysis of ENNs applied to these environments. We use the ENNs from Section \ref{sec:supervised_testbed} to estimate value functions with discount $\gamma=0.99$. For all agents using prior functions (\texttt{ensemble+}, \texttt{hypermodel}, and \texttt{epinet}) we scale the value prior to have mean 0 and variance 1 based on the observations over the first 100 timesteps under a random action policy. Full details are available in Appendix~\ref{app:rl_bsuite}. In \texttt{bsuite}, an agent is assigned a score for each experiment. Figure~\ref{fig:bsuite_bar} plots the ``bsuite loss,'' which we define to be one minus the average score, against computational cost. Once again, the epinet performs similarly to large ensembles, but at orders of magnitude less computational cost.
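The prior-scaling step above can be realized in more than one way; as a minimal sketch, assuming that scaling means standardizing the recorded prior outputs, the following Python snippet rescales a fixed prior value function so that its outputs over the first 100 timesteps of a random-action policy have mean 0 and variance 1. The function names are illustrative and are not taken from our released code.
\begin{verbatim}
import numpy as np

def fit_prior_scaling(prior_values):
    # prior_values: prior value-function outputs recorded over the first
    # 100 timesteps under a uniformly random action policy.
    v = np.asarray(prior_values, dtype=float)
    return v.mean(), v.std() + 1e-8              # (offset, scale)

def standardized_prior(prior_fn, offset, scale):
    # Wrap the fixed prior so that its outputs have mean 0 and variance 1.
    return lambda s, z: (prior_fn(s, z) - offset) / scale
\end{verbatim}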
We include a more detailed breakdown of agent performance by competency in Appendix \ref{app:bsuite_report}. Figure~\ref{fig:bsuite_corr} reports the correlation between prediction quality and \texttt{bsuite}\ performance, together with the 5th and 95th percentiles obtained by bootstrap resampling. The multiple points for any given agent represent results generated with different random seeds. The results closely mirror those for the neural bandit. Agents that produced accurate joint predictions performed well in \texttt{bsuite}. However, the quality of marginal predictions showed no strong relation with performance on \texttt{bsuite}.
\begin{figure}[!ht] \centering \vspace{-3mm} \hspace{-10mm} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=4.7cm]{figures/bsuite_compute.pdf} \caption{\texttt{bsuite}\ performance.} \label{fig:bsuite_bar} \end{minipage} \hspace{2mm} \begin{minipage}{.7\textwidth} \begin{figure}[H] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \vspace{-3mm} \includegraphics[height=4.7cm]{figures/bsuite_corr_marginal.pdf} \end{subfigure}% ~ \begin{subfigure}[t]{.45\textwidth} \centering \vspace{-3mm} \includegraphics[height=4.7cm]{figures/bsuite_corr_joint.pdf} \end{subfigure} \vspace{-1mm} \caption{Relating \texttt{bsuite}\ performance to prediction quality.} \label{fig:bsuite_corr} \end{figure} \end{minipage}% \hspace{-10mm} \vspace{-3mm} \end{figure}
\section{Conclusion} \label{sec:conclusion} \vspace{-1mm} We have introduced the ENN interface for uncertainty modeling. This facilitates design of new ENNs and comparison of joint predictions. We have also introduced the epinet, a novel ENN that greatly outperforms alternatives in terms of joint prediction quality, computational requirements, or both. In particular, for large models, the epinet enables joint predictions that are significantly better than an ensemble of 100 particles at a computational cost marginally exceeding that of a single particle. The epinet improves performance across applications ranging from image classification to reinforcement learning.
\begin{ack} We thank John Maggs for organization and management of this research effort and Rich Sutton, Yee Whye Teh, Geoffrey Irving, Koray Kavukcuoglu, and Satinder Singh for helpful discussions and feedback. \end{ack}
{ \small \bibliographystyle{apalike}
\subsection{Agent definition} \label{app:bsuite-agents} In these experiments we use the DQN variants defined in \texttt{enn\_acme/experiments/bsuite}. These agents differ principally in terms of their ENN definitions, which are taken directly from \texttt{neural\_testbed/agents/factories} as tuned on the Neural Testbed. We provide a brief summary of the ENNs used by agents: \begin{itemize}[noitemsep, nolistsep] \item \texttt{mlp}: A `classic' DQN network with 2-layer MLP. \item \texttt{ensemble}: An ensemble of DQN networks which only differ in initialization. \item \texttt{dropout}: An MLP with dropout used as ENN \citep{Gal2016Dropout}. \item \texttt{hypermodel}: A linear hypermodel \citep{Dwaracherla2020Hypermodels}. \item \texttt{ensemble+}: An ensemble of DQN networks with additive prior \citep{osband2016deep,osband2018rpf}. \item \texttt{epinet}: The epinet architecture outlined in Appendix~\ref{app:rl_bsuite}. \end{itemize}
\subsection{Summary scores} \label{app:bsuite-scores} Each \texttt{bsuite}\ experiment outputs a summary score in [0,1]. We aggregate these scores according to key experiment type, following the standard analysis notebook.
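As a small illustration of this aggregation (and not the analysis notebook itself), the Python sketch below averages per-experiment scores within each experiment type; the mapping from experiment name to type is a made-up example.
\begin{verbatim}
def aggregate_scores(scores, experiment_type):
    # scores: dict experiment_name -> summary score in [0, 1]
    # experiment_type: dict experiment_name -> type tag (e.g. 'exploration')
    totals, counts = {}, {}
    for name, score in scores.items():
        tag = experiment_type[name]
        totals[tag] = totals.get(tag, 0.0) + score
        counts[tag] = counts.get(tag, 0) + 1
    return {tag: totals[tag] / counts[tag] for tag in totals}

scores = {'deep_sea/0': 0.2, 'deep_sea/1': 0.4, 'catch/0': 1.0}
types = {'deep_sea/0': 'exploration', 'deep_sea/1': 'exploration',
         'catch/0': 'basic'}
print(aggregate_scores(scores, types))   # exploration ~0.3, basic 1.0
\end{verbatim}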
\ifx\newgeometry\undefined\vspace{-2mm}\else\fi \begin{figure}[!ht] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3.6cm]{figures/bsuite_mlp.png} \caption{\texttt{mlp}} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3.6cm]{figures/bsuite_dropout.png} \caption{\texttt{dropout}} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3.6cm]{figures/bsuite_ensemble.png} \caption{\texttt{ensemble}} \end{subfigure} \hfill \vspace{5mm} \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3.6cm]{figures/bsuite_hypermodel.png} \caption{\texttt{hypermodel}} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3.6cm]{figures/bsuite_epinet.png} \caption{\texttt{epinet}} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=3.6cm]{figures/bsuite_ensemble+.png} \caption{\texttt{ensemble+}} \end{subfigure} \caption{Radar plots give a snapshot of agent capabilities.} \label{fig:radar} \end{figure}
\begin{figure}[!ht] \centering \includegraphics[width=0.7\textwidth]{figures/bsuite_bar_plot.png} \captionof{figure}{Summary score for each \texttt{bsuite}\ experiment.} \label{fig:bar} \end{figure} \ifx\newgeometry\undefined\vspace{-2mm}\else\fi \newpage
\subsection{Results commentary} \label{app:bsuite-commentary} \begin{itemize}[noitemsep, nolistsep, leftmargin=*] \item \texttt{mlp} performs well on basic tasks, and quite well on credit assignment, generalization, noise and scale. However, DQN performs extremely poorly across memory and exploration tasks. Our results match the high-level performance of the \texttt{bsuite/baselines}. \item \texttt{ensemble} performs similarly to the {\tt mlp} agent. The additional diversity provided by random initialization in ensemble particles is insufficient to drive significantly different behaviour. \item \texttt{dropout} performs very similarly to the {\tt mlp} agent. Different dropout masks are not sufficient to drive significantly different behaviour on \texttt{bsuite}. \item \texttt{hypermodel} performs better than the {\tt mlp}, {\tt ensemble}, and {\tt dropout} agents on exploration tasks, but the performance does not scale to the most challenging tasks in \texttt{bsuite}. \item \texttt{ensemble+}, also known as Bootstrapped DQN \citep{osband2016deep, osband2018rpf}, mostly performs similarly to the {\tt ensemble} agent, except for exploration, where it greatly outperforms the {\tt mlp}, {\tt ensemble}, and {\tt dropout} agents. The addition of prior functions is crucial to this performance. \item \texttt{epinet} performs similarly to the {\tt ensemble+} agent, but with much lower compute. We do see some evidence that, compared to other approaches, the \texttt{epinet} agent is less robust to problem \textit{scale}. This matches our observation in supervised learning that epinet performance is somewhat sensitive to the chosen scaling of the prior networks $\sigma^P$. \end{itemize} None of the agents we consider have a mechanism for memory as they use feed-forward networks. We could incorporate memory by considering modifications to the agents, but we don't explore that here.
\section{Introduction} \label{sec:intro} \vspace{-1mm} Consider a conventional neural network trained to predict whether a random person would classify a drawing as either a `rabbit' or a `duck'. As illustrated in Figure \ref{fig:nn_example}, given a single drawing, the network generates a marginal prediction that assigns probabilities to the two classes. If the probabilities are each $0.5$, it remains unclear whether this is because opinions across the population are equally divided or the neural network would learn a definitive class if subsequently trained on additional data. These two possibilities would be distinguished by a joint prediction across two identical inputs. As illustrated in tables to the right of the neural network output, such a joint prediction takes the form of a two-by-two table. If opinions are divided due to ambiguity of the image, probabilities would be equally distributed across the four cells. On the other hand, if uncertainty would be completely resolved with further training, the probability of labels differing across identical images is zero.
\begin{figure}[!ht] \vspace{-1mm} \centering \includegraphics[width=0.8\textwidth]{figures/rabbit_duck_3.png} \vspace{-1mm} \caption{Conventional neural networks generate marginal predictions, which do not distinguish genuine ambiguity from insufficiency of data.} \label{fig:nn_example} \vspace{-1mm} \end{figure}
This example indicates that joint predictions express the extent to which uncertainty will be resolved by additional data, whereas marginal predictions do not. Understanding whether and how uncertainty can be resolved is critical to a number of intelligent behaviors such as efficient exploration and adaptation. As elucidated by \citet{wen2022predictions}, such challenges call for the ability to produce accurate joint predictions. In this paper, we introduce the {\it epistemic neural network} (ENN) -- an interface for models that represent uncertainty as required to generate useful joint predictions. While prior approaches to uncertainty modeling such as Bayesian neural networks can be expressed as ENNs, the new interface facilitates assessment and comparison of joint predictions and the design of novel architectures and algorithms. In particular, we introduce the {\it epinet}: an architecture that can supplement any conventional neural network to estimate uncertainty more effectively than state-of-the-art approaches. For example, with an epinet, conventional neural networks can outperform very large `deep ensembles' \citep{lakshminarayanan2017simple}, consisting of hundreds or more particles, with orders of magnitude less computation.
Figure~\ref{fig:imagenet_overall} offers a preview of results of Section~\ref{sec:supervised_image}, where we compare these approaches on ImageNet, evaluating each model on out-of-sample data after training. The quality of the ResNet's marginal predictions -- measured in terms of classification error or marginal log-loss -- does not change much if supplemented with an epinet. However, as a function of the number of model parameters, the epinet-enhanced ResNet dramatically reduces joint log-loss. \begin{figure}[!ht] \centering \includegraphics[width=0.99\textwidth]{figures/large_base_epinet_plot3.pdf} \caption{Quality of marginal and joint predictions across models on ImageNet (Section~\ref{sec:supervised_image}).} \label{fig:imagenet_overall} \end{figure} \subsection{Epistemic neural networks} \label{sec:intro_enn} Given an input $x$ and parameters $\theta$, a conventional NN produces an output $f_\theta(x)$. The output $f_\theta(x, z)$ of an ENN depends additionally on an \textit{epistemic index} $z$. An ENN specifies a fixed reference distribution $P_Z$ for the epistemic index. Typical choices include a uniform distribution over a finite set or a standard Gaussian over a vector space. The index $z$ is used to express epistemic uncertainty. In particular, variation of the network output with $z$ indicates uncertainty that might be resolved by future data. As we will see, the introduction of an epistemic index allows us to represent the kind of uncertainty required to generate useful joint predictions. In typical classification settings, a conventional neural network produces a marginal prediction that assigns a probability $\softmax(f_\theta(x))_y$ to each class $y$. A joint prediction across inputs $x_1,\ldots,x_\tau$ would assign a probability to each class combination $y_1,\ldots,y_\tau$. While conventional neural networks are not designed to provide joint predictions, joint predictions can be produced by multiplying marginal predictions: \begin{equation} \label{eq:nn_joint} \prod_{t=1}^\tau \softmax \left(f_{\theta}(x_t)\right)_{y_t}. \end{equation} However, as discussed earlier, such use of marginal predictions fails to distinguish ambiguity from insufficiency of data. ENNs address this by enabling more expressive joint predictions through integrating over epistemic indices: \begin{equation} \label{eq:enn_joint} \int_z P_Z(dz) \prod_{t=1}^\tau \softmax \left(f_{\theta}(x_t,z)\right)_{y_t}. \end{equation} This integration introduces dependencies so that joint predictions are not necessarily just the product of marginals. Figure~\ref{fig:all_nn_diagrams} illustrates the difference between conventional and epistemic neural network architectures. \begin{figure}[!ht] \centering \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=.99\linewidth]{figures/nn_diagram.jpg} \caption{Conventional neural network.} \label{fig:nn_diagram} \end{subfigure} \hspace{2mm} \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=.99\linewidth]{figures/enn_diagram.jpg} \caption{Epistemic neural network.} \label{fig:enn_diagram} \end{subfigure} \caption{An ENN samples an epistemic index $Z$ from a reference distribution $P_Z$. This enables expression of joint predictions beyond the product of marginal predictions.} \label{fig:all_nn_diagrams} \end{figure} Although the ENN framing and notation are new, all existing approaches to uncertainty estimation can be expressed as ENNs. In particular, any Bayesian neural network (BNN) can easily be written as an ENN. 
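As a concrete illustration, the following minimal Python sketch treats a deep ensemble as an ENN (the epistemic index selects a particle and the reference distribution $P_Z$ is uniform over particles) and uses Equation (\ref{eq:enn_joint}) to estimate a joint prediction by Monte Carlo sampling of the index. The particle functions are placeholders, and the sketch is illustrative rather than our open-source implementation.
\begin{verbatim}
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def ensemble_enn(particles, x, z):
    # A deep ensemble as an ENN: the integer index z selects one particle.
    return particles[z](x)

def joint_prob(particles, xs, ys, num_samples=1000, seed=0):
    # Monte Carlo estimate of the joint prediction: average over sampled
    # indices of the product of per-input class probabilities.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = int(rng.integers(len(particles)))      # P_Z uniform over particles
        total += np.prod([softmax(ensemble_enn(particles, x, z))[y]
                          for x, y in zip(xs, ys)])
    return total / num_samples
\end{verbatim}
Applied to two identical inputs, this kind of computation reproduces the two-by-two joint tables of Figure \ref{fig:nn_example}.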
Where a conventional NN produces output $f_\theta(x)$, a BNN then maintains a posterior distribution over weights $\theta$ \citep{neal2012bayesian}. For any posterior distribution, we can define reference distribution $P_Z$ and parameterized class of functions $g_{\theta'}$ such that a prediction $f_{g_{\theta'}(Z)}(x)$, with $Z$ sampled from $P_Z$, reflects predictive uncertainty equivalent to that of the original BNN. Concretely, a `deep ensemble' can be expressed as an ENN by taking $Z$ to index particles of the ensemble and the reference distribution $P_Z$ to be uniform across these indices. For Monte Carlo dropout, we could take $Z$ to identify the dropout mask \citep{Gal2016Uncertainty}. See Appendix~\ref{app:enn_examples} for full details and link to open-source code. While ENNs and BNNs are closely related, there are some important differences. Where BNNs typically view neural network weights as unknown parameters to be inferred, ENNs focus on the resultant predictive distributions. Though ENNs maintain parameters and adapt them to data, the approach ascribes no semantic meaning to them. The sole role of ENN parameters is in driving effective predictions. While all BNNs can be naturally framed as ENNs, it is not the case that all ENNs are natural to describe as BNNs. The epinet, which we will introduce in Section~\ref{sec:epinet}, serves as an example of such an ENN. \subsection{Contributions} \label{sec:intro_contributions} \textbf{We introduce epistemic neural networks (ENNs)}, and we view this new perspective as an important contribution of this paper. This new interface brings focus to joint predictions rather than inference of network weight distributions, which is emphasized in Bayesian deep learning. Only data is {\it real}, and as such, joint predictions of unobserved data capture all relevant uncertainty. While the Bayesian deep learning literature views weights as unknowns to be inferred, ENNs attribute no semantic meaning to weights and treat them only as artifacts that facilitate computation. This ENN perspective sidesteps unnecessary restrictions that pose challenges for Bayesian deep learning algorithms. While all BNNs can be expressed as ENNs, we develop an ENN that is not naturally expressed as a BNN but can produce accurate joint predictions with orders of magnitude less compute than BNN approaches. In Section~\ref{sec:epinet}, \textbf{we introduce the epinet: an ENN architecture that can supplement any conventional NN and be trained to estimate uncertainty}. At its core, the epinet training procedure uses randomized prior functions to generate approximate posterior samples \citep{osband2018rpf}. However, unlike previous work, which generates multiple samples independently via large ensembles, the epinet can be trained with modest incremental computation. In the remainder of our paper, \textbf{we demonstrate the efficacy of the epinet across a range of experiments}. Section~\ref{sec:supervised} shows that an epinet can match the performances of state-of-the-art approaches to joint predictions at orders of magnitude lower computational cost. These results are consistent across data generated by random neural networks (Section~\ref{sec:supervised_testbed}) and large scale image benchmarks (Section~\ref{sec:supervised_image}). Section~\ref{sec:rl} shows that these improvements in the quality of joint predictions drive improved performance in some reinforcement learning tasks. 
We show that agents using an epinet can outperform existing BNN approaches at much lower computational cost. These results are consistent across tasks generated by random neural networks (Section~\ref{sec:rl_bandit}) and a reinforcement learning benchmark (Section~\ref{sec:rl_bsuite}). As part of this research effort, \textbf{we release the core open-source code necessary to reproduce our experimental results in Appendix~\ref{app:code}}. This includes an optimized Haiku-based library for epistemic neural networks, as well as high-quality implementations of baselines and experiments \citep{jax2018github, deepmind2020jax}. These research tools should enable the community to contribute to ENN technology and together make advances in uncertainty modeling. \subsection{Related work} \label{sec:intro_related} Our work builds on the subject of Bayesian neural networks (BNNs) \citep{mackay1992practical, neal2012bayesian}, which has produced a rich literature. Research in this area has developed and analyzed methods that estimate uncertainty \citep{welling2011bayesian, blundell2015weight,mandt2018stochastic} and highlighted differing roles and interpretations of epistemic and aleatoric uncertainty \citep{der2009aleatory, KendallGal17WhatUncertainties}. Our work is motivated by the importance of joint predictions in driving decision, exploration, and adaptation \citep{wang2021beyond, osband2022neural,wen2022predictions}. Our experimental results rely on dyadic sampling to evaluate the quality of joint predictions in high dimensions \citep{osband2022evaluating}. There is a long history of research in reinforcement learning that develops representations that aim to track what the agent knows or does not know (see, e.g., \citealp{li2011knows,lu2021reinforcement}). ENNs facilitate comparison and assessment of such representations, without worrying about `whether XYZ is Bayesian' \citep{izmailov2021bayesian}. \section{The epinet} \label{sec:epinet} This section introduces the \textit{epinet}, which can supplement any conventional neural network to produce an effective ENN. Our approach reuses standard deep learning blocks and training algorithms. As we will see, it is straightforward to add an epinet to any existing model, even one that has been pretrained. \subsection{Architecture} \label{sec:epinet_network} Consider a conventional neural network, which we will refer to as the {\it base network}. Given base parameters $\zeta$ and an input $x$, denote the output by $\mu_{\zeta}(x)$. If this is a classification model, the associated class probabilities would be $\softmax(\mu_{\zeta}(x))$. An epinet is a neural network with privileged access to inputs and outputs of activation units in the base network. A subset of these inputs and outputs, denoted by $\phi_\zeta(x)$, are taken as input to the epinet along with an epistemic index $z$. For epinet parameters $\eta$, the epinet outputs $\sigma_\eta(\phi_\zeta(x), z)$. To produce an ENN, the output of the epinet is added to that of the base network, though with a ``stop gradient'': \begin{equation} \label{eq:epinet} \underbrace{f_\theta(x, z)}_{\text{ENN}} = \underbrace{\mu_\zeta(x)}_{\text{base net}} + \underbrace{\sigma_\eta(\mathrm{sg}[\phi_\zeta(x)], z)}_{\text{epinet}}. \end{equation} The ENN parameters $\theta = (\zeta, \eta)$ include those of the base network and epinet. The ``stop gradient'' notation $\mathrm{sg}[\cdot]$ indicates that when computing a gradient the argument is treated as fixed. 
For example, $$\nabla_\theta f_\theta(x,z) = \left[\begin{array}{c}\nabla_\zeta \mu_\zeta(x) \\ \nabla_\eta \sigma_\eta(\phi_\zeta(x), z)\end{array}\right].$$ The epinet used to produce the ImageNet results of Figure \ref{fig:imagenet_overall} serves as an illustrative example. In that case, the base network is a ResNet and the input $\phi_\zeta(x)$ of the epinet is a concatenation of the ResNet's input $x$ and last-layer features. Each epistemic index is a $30$-dimensional vector, and its reference distribution is standard normal. Before training, variation of the ENN output $f_\theta(x, z)$ as a function of $z$ reflects prior uncertainty. Since the base network does not depend on $z$, this variation must derive from the epinet. In our experiments, we induce this initial variation using \textit{prior networks} \citep{osband2018rpf}. In particular, for $\tilde{x} = \mathrm{sg}[\phi_\zeta(x)]$, our epinets take the form \begin{equation} \label{eq:prior_fn} \underbrace{\sigma_\eta(\tilde{x}, z)}_{\text{epinet}} = \underbrace{\sigma_\eta^L(\tilde{x}, z)}_{\text{learnable}} + \underbrace{\sigma^P(\tilde{x}, z)}_{\text{prior net}}. \end{equation} The prior network $\sigma^P$ represents prior uncertainty and has no trainable parameters. In our experiments, prior networks are suitably architected neural networks with weights randomly sampled such that outputs exhibit appropriate variation with the epistemic index $z$. For all the experiments in this paper, we use a simple MLP architecture for the learnable component $\sigma_\eta^L$. In general, the learnable component $\sigma_\eta^L$ is designed to initially output near-zero values. After training on data, the resultant sum $\sigma_\eta$ represents an estimate of posterior uncertainty. \subsection{Training} \label{sec:epinet_training} We describe in this section how we train the ENN in supervised learning tasks. In Section \ref{sec:rl_bsuite}, we discuss an extension to reinforcement learning. Given a set $\mathcal{D}$ of data pairs, consider a loss function \begin{equation} \label{eq:loss} \mathcal{L}(\theta, \mathcal{D}) = \int_z P_Z(dz) \left(\overbrace{\sum_{(x, y) \in \mathcal{D}} \ell(\theta, x, y, z)}^{\text{data loss}} + \overbrace{\Psi(\theta, z)}^{\text{regularization}} \right). \end{equation} This is similar to loss functions employed in the training of conventional neural networks, but with the addition of an integral over the epistemic index $z$. Parameters are adapted by stochastic gradient descent, but with the ``stop gradient'' introduced in Equation (\ref{eq:epinet}), which freezes the epinet input when computing gradients with respect to the base network parameters $\zeta$. Each step is based on a minibatch $\tilde{\mathcal{D}} \subset \mathcal{D}$ of data along with a minibatch $\tilde{\mathcal{Z}}$ of indices sampled i.i.d. from the reference distribution $P_Z$. Each stochastic gradient sample takes the form \begin{equation} \label{eq:loss_minibatch} \nabla_{\theta} \left(\frac{1}{|\tilde{\mathcal{Z}}|} \sum_{z \in \tilde{\mathcal{Z}}} \left(\frac{|\mathcal{D}|}{|\tilde{\mathcal{D}}|} \sum_{(x, y) \in \tilde{\mathcal{D}}} \ell(\theta, x, y, z) + \Psi(\theta, z) \right)\right). \end{equation} In our experiments, we use standard loss functions that are common to conventional neural network training. In particular, for regression problems, we use mean squared error $\ell^{\rm MSE}(\theta, x, y, z) = \left(f_\theta(x,z) - y \right)^2$. 
For classification problems, we use the cross-entropy loss $\ell^{\rm XENT}(\theta, x, y, z) = - \ln\left(\softmax(f_\theta(x,z))_y\right)$. In both cases, we use ridge regularization: $\Psi^{L_2}(\theta, z) = \lambda \|\theta\|_2^2$. The coefficient $\lambda$ is a tunable hyperparameter that weights regularization relative to data loss. It is natural to consider using different coefficients for base network versus epinet parameters. However, that difference did not play a significant role in our experiments and we used a single coefficient. As elucidated by \cite{wen2022predictions}, when the signal-to-noise ratio of the observed label $y$ varies substantially with the input $x$, in addition to tuning the regularization coefficient $\lambda$, perturbing the data loss function in a manner that depends on the epistemic index $z$ can improve out-of-sample joint predictions. Effective approaches to perturbation based on versions of the statistical bootstrap have been proposed in the literature \citep{osband2015bootstrapped,lu2017ensemble}. Such perturbations were not helpful in our experiments, however, perhaps because each of our domains did not present substantial variation in signal-to-noise ratios. One useful feature of the epinet is that it can supplement a pretrained base network and be trained with frozen base network weights. This makes the epinet suitable as an addition to large models that are expensive to train, such as pretrained language models \citep{brown2020language}. Further, producing uncertainty estimates by sampling and evaluating outputs across many epistemic indices does not require multiple forward passes through the base network, which does not depend on the epistemic index, but only the epinet, which can be much smaller and thus economical. \vspace{-2mm} \section{Supervised learning} \label{sec:supervised} \vspace{-1mm} In this section, we study the performance of epinet architectures in supervised learning. We begin with a benchmark that produces synthetic data using a neural-network-based generative model. We then treat image classification benchmarks, including ImageNet. In each setting, we examine the empirical trade-off realized by a variety of agents between statistical loss and computational cost. We find the epinet to be particularly attractive. For example, it attains joint log-loss competitive with very large ensembles that include hundreds or thousands of particles, but with orders of magnitude less computation. \vspace{-1mm} \subsection{The neural testbed} \label{sec:supervised_testbed} \vspace{-2mm} \textit{The Neural Testbed} is an open-source benchmark that evaluates the quality of predictive distributions on classification problems, with synthetic data produced by neural-network-based generative models \citep{osband2022neural}. It serves as a unit test for sanity-checking agents in a controlled environment. Table~\ref{tab:agent_summary} lists agents that we study and compare as well as hyperparameters that we tune. In our experiments, we optimize these hyperparameters via grid search. We use the open-source github repository \url{https://github.com/deepmind/neural\_testbed} ~and will submit our new \texttt{epinet} agent to this library. 
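Before turning to results, the following minimal Python sketch summarizes how the pieces of Section~\ref{sec:epinet} fit together: the ENN forward pass of Equation (\ref{eq:epinet}) and the stochastic objective of Equation (\ref{eq:loss_minibatch}) with cross-entropy data loss and ridge regularization. All function and variable names are placeholders; the stop gradient is indicated only by a comment, since in practice gradients of this scalar would be taken by automatic differentiation.
\begin{verbatim}
import numpy as np

def log_softmax(logits):
    logits = logits - np.max(logits)
    return logits - np.log(np.sum(np.exp(logits)))

def enn_forward(base_net, base_features, epinet, x, z):
    mu = base_net(x)                   # base-network logits mu_zeta(x)
    phi = base_features(x)             # selected base activations phi_zeta(x)
    # sg[phi]: phi would be held constant when differentiating the data loss
    # with respect to the base parameters zeta.
    return mu + epinet(phi, z)         # ENN output f_theta(x, z)

def objective(base_net, base_features, epinet, theta_sq_norm,
              data_batch, index_batch, num_data, ridge_coeff):
    # theta_sq_norm: squared L2 norm of all ENN parameters (ridge term).
    total = 0.0
    for z in index_batch:                          # indices drawn i.i.d. from P_Z
        data_loss = 0.0
        for (x, y) in data_batch:                  # cross-entropy data loss
            logits = enn_forward(base_net, base_features, epinet, x, z)
            data_loss -= log_softmax(logits)[y]
        total += ((num_data / len(data_batch)) * data_loss
                  + ridge_coeff * theta_sq_norm)   # ridge regularization
    return total / len(index_batch)
\end{verbatim}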
\vspace{-2mm} \begin{table}[!ht] \begin{center} \caption{Summary of benchmark agents, full details in Appendix \ref{app:testbed_agents}.} \label{tab:agent_summary} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|l|} \hline \rowcolor{tableHeader} \textcolor{white}{\textbf{agent}} & \textcolor{white}{\textbf{description}} & \textcolor{white}{\textbf{hyperparameters}} \\[0.5ex] \hline \textbf{\texttt{mlp}} & Vanilla MLP & $L_2$ decay \\ \textbf{\texttt{ensemble}} & `Deep Ensemble' \citep{lakshminarayanan2017simple} & $L_2$ decay, ensemble size \\ \textbf{\texttt{dropout}} & Dropout \citep{Gal2016Dropout} & $L_2$ decay, network, dropout rate \\ \textbf{\texttt{bbb}} & Bayes by Backprop \citep{blundell2015weight} & prior mixture, network, early stopping \\ \textbf{\texttt{hypermodel}} & Hypermodel \citep{Dwaracherla2020Hypermodels} & $L_2$ decay, prior, bootstrap, index dimension \\ \textbf{\texttt{ensemble+}} & Ensemble + prior functions \citep{osband2018rpf} & $L_2$ decay, ensemble size, prior scale, bootstrap \\ \textbf{\texttt{sgmcmc}} & Stochastic Langevin MCMC \citep{welling2011bayesian} & learning rate, prior, momentum \\ \textbf{\texttt{epinet}} & MLP + MLP epinet (this paper) & $L_2$ decay, network, prior, index dimension \\ \hline \end{tabular}% } \end{center} \end{table} \vspace{-1mm} For our \texttt{epinet} agent, we initialize a random base network $\mu_\zeta(x)$ as per the baseline \texttt{mlp} agent. We take $\phi_\zeta(x)$ to be the concatenation of $x$ and the last-layer features of the base network. Let $C$ denote the number of classes and $D_Z$ denote the index dimension. The learnable network $\sigma^L_\eta(\phi_\zeta(x), z) = g_\eta([\phi_\zeta(x), z])^T z$, where $g_\eta(\cdot)$ is a 2-layer MLP with outputs in $\mathds{R}^{D_Z \times C}$, and $[\phi_\zeta(x), z]$ is concatenation of $\phi_\zeta(x)$ and $z$. The prior network $\sigma^P$ is an ensemble of $D_Z$ particles sampled from the distribution of the data generating model that act directly on the input $x$. After tuning, we chose epinet hidden layer widths $(15, 15)$, with an index dimension of $8$ and standard Gaussian reference distribution. Full details, together with open source code, are provided in Appendix~\ref{app:testbed_agents}. \begin{figure}[!ht] \vspace{-4mm} \centering \includegraphics[width=0.99\textwidth]{figures/neural_testbed_scaling.pdf} \vspace{-2mm} \caption{Comparison of agents on the neural testbed.} \label{fig:neural_testbed_scaling} \vspace{-2mm} \end{figure} Figure~\ref{fig:neural_testbed_scaling} examines the trade-offs between statistical loss and computational cost for each of the agents after tuning. The bars indicate one standard error, estimated over the testbed's random seeds. In terms of classification error, most of the agents perform similarly after tuning. Most agents also perform similarly in marginal log-loss, though \texttt{sgmcmc} agent reduces loss relative to others at a high computational cost. The key differences in performance come when examining the \textit{joint} log-loss, which we evaluate over ten inputs selected via dyadic sampling (\citet{osband2022evaluating}, Appendix~\ref{app:dyadic}). In terms of joint log-loss, the \texttt{epinet} is competitive with top-performing agents at orders of magnitude lower computational cost. Our results show that, in terms of joint predictions, the \texttt{epinet} can dramatically improve joint predictions and/or dramatically reduce computational cost relative to any of the other agents. 
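For readers who want to reproduce the flavor of this agent, the sketch below assembles the testbed epinet described above from small NumPy pieces: a learnable head $g_\eta([\phi,z])^T z$ with two hidden layers of width 15, an index of dimension $D_Z=8$, and a fixed prior formed from $D_Z$ particles. The way the prior particles are combined with the index (a linear combination weighted by the index components), as well as all dimensions and initialization scales, are stated here as assumptions for illustration; the exact configuration is in the open-source code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_Z, C = 12, 8, 2          # assumed input dim, index dim, classes
D_PHI = D_IN + 10                # assumed feature size: input + last-layer units

def mlp(widths, rng, scale=0.1):
    # Build a small MLP with ReLU hidden layers; returns a forward function.
    dims = list(widths)
    params = [(rng.normal(scale=scale, size=(a, b)), np.zeros(b))
              for a, b in zip(dims[:-1], dims[1:])]
    def forward(v):
        for i, (W, b) in enumerate(params):
            v = v @ W + b
            if i < len(params) - 1:
                v = np.maximum(v, 0.0)
        return v
    return forward

g_eta = mlp([D_PHI + D_Z, 15, 15, D_Z * C], rng)             # learnable component
priors = [mlp([D_IN, 15, 15, C], rng) for _ in range(D_Z)]   # fixed prior particles

def epinet(x, phi, z):
    learnable = g_eta(np.concatenate([phi, z])).reshape(D_Z, C).T @ z
    prior = sum(z[i] * priors[i](x) for i in range(D_Z))     # assumed combination
    return learnable + prior

x = rng.normal(size=D_IN)
phi = np.concatenate([x, rng.normal(size=10)])               # stand-in features
print(epinet(x, phi, rng.normal(size=D_Z)).shape)            # (2,)
\end{verbatim}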
\section{INTRODUCTION} In MIMO (multiple-input multiple-output) systems, channel capacity grows linearly with the number of spatial degrees of freedom (DoF), which depends on both the scattering environment and the antenna array geometry. Recently, the concept of \textit{holographic MIMO} has drawn increasing attention and is regarded as one of the candidate technologies for 6G communications \cite{b1}. A holographic MIMO array consists of a massive (possibly infinite) number of antennas integrated into a compact space. It can be seen as the ultimate form of a spatially-constrained MIMO system and modeled as a spatially-continuous electromagnetic (EM) aperture with asymptotically infinitely many antennas. Thus, a fundamental question of holographic communications is to determine the number of spatial DoF and the capacity of a holographic MIMO system. To answer this question, many previous works have investigated continuous-space channel modeling under different propagation conditions and array geometries, e.g., \cite{b2}-\cite{b4}. However, these models are all based on the far-field assumption, which might not be applicable as antenna apertures grow and operating frequencies increase, making near-field communications the practical scenario. Thus, near-field channel modeling and the corresponding spatial DoF and capacity limits are needed. Some works have tried to fill this gap using the Green's function and electromagnetic theory, such as \cite{b5}, \cite{b6} and \cite{b7}. In these works, the authors derive channel models and estimate near-field communication performance from the Green's function, which corresponds to a relatively simple communication scenario. However, to the best of the authors' knowledge, near-field analysis based on channel models corresponding to complex scattering environments has not been investigated.
In this letter, we consider transceivers with spatially-constrained antenna apertures of rectangular symmetry in a spatially-stationary random monochromatic scattering propagation environment. Unlike \cite{b8}, where a Fourier orthonormal expansion was derived to compute the DoF limit in the far-field region, we focus on the near-field communication scenario and compute the DoF and capacity gains obtained by considering evanescent waves. To the best of our knowledge, this is the first work to analyze the DoF and capacity limits of near-field communications under a generalized stochastic spatially-stationary monochromatic channel. Moreover, we demonstrate our proposition through numerical simulations and verify its coherence with traditional results, i.e., the maximum DoF and capacity of the far-field region provided in \cite{b8} and \cite{b9}. Specifically, we exploit the full potential of evanescent waves in near-field holographic communications and derive the extra DoF and capacity based on the Fourier plane-wave series expansion. Numerical results show that the DoF and capacity of near-field communications can each be improved by about 30\% compared with the traditional far-field scenario.
\section{SYSTEM MODEL} We study a holographic communication system where the transmitter and the receiver are equipped with planar holographic MIMO surfaces.
Considering electromagnetic wave propagation in every direction (i.e., isotropic propagation) through a homogeneous, isotropic, and infinite random scattering medium, the 3D small-scale fading channel can be modeled as a space-frequency scalar random field \cite{b9},\cite{b10}:
\begin{equation} \{h_\omega(x,y,z):(x,y,z)\in\mathbb{R}^3,\ \omega\in(-\infty,\infty)\} \end{equation}
which is a function of frequency and spatial position $(x,y,z)$. Since we only consider monochromatic waves in this letter, the variable $\omega$ can be omitted. According to \cite{b10}, $h(x,y,z)$ can be modeled as a zero-mean, spatially-stationary Gaussian random field. This channel is, according to \cite{b10}, completely described by
\begin{equation} c_h(x,y,z)=\mathbb{E} \{h^*(x',y',z')h(x+x',y+y',z+z')\} \end{equation}
\begin{equation} S_h(k_x,k_y,k_z)=\iiint c_h(x,y,z)e^{-j(k_xx+k_yy+k_zz)}\,dx\,dy\,dz \end{equation}
where $c_h(x,y,z)$ is the spatial auto-correlation function in the spatial domain and $S_h(k_x,k_y,k_z)$ is the power spectral density in the wave-number domain.
\subsection{Fourier Plane-Wave Spectral Representation} In a source-free environment, the EM nature of the small-scale fading requires realizations of $h(x,y,z)$ to satisfy the scalar Helmholtz equation in the frequency domain, which can be seen as a constraint on $h(x,y,z)$: $(\nabla^2+\kappa^2)h(x,y,z)=0$, where $\kappa=\frac{2\pi}{\lambda}$ is the modulus of the wave-number vector and $\lambda$ is the wavelength. Following the derivation in \cite{b10},\cite{b11}, the channel's power spectral density in the wave-number domain takes the following form under the Helmholtz equation constraint:
\begin{equation} S_h(k_x,k_y,k_z)=\frac{4\pi^2}{\kappa}\delta(k_x^2+k^2_y+k^2_z-\kappa^2) \label{3Dpowerspectral} \end{equation}
which is an impulsive function with wave-number support on the surface of a sphere of radius $\kappa$. The field $h(x,y,z)$ is decomposed into the sum of two random fields:
\begin{equation} h(x,y,z)=h_+(x,y,z)+h_-(x,y,z) \end{equation}
which is further expressed via the \textit{Fourier plane-wave spectral representation}:
\begin{equation} \begin{aligned} h_\pm(x,y,z)&=\frac{1}{4\pi\sqrt{\pi}}\iint \sqrt{S_h(k_x,k_y)}W^\pm(k_x,k_y) \\ &\times e^{j(k_xx+k_yy\pm\gamma(k_x,k_y)z)}\,dk_x\,dk_y \label{fourier-basedrepresentation} \end{aligned} \end{equation}
where $W^+(k_x,k_y)$ and $W^-(k_x,k_y)$ are two 2D independent, zero-mean, complex-valued, white-noise Gaussian random fields and $S_h(k_x,k_y)$ is the 2D power spectral density of $h_\pm(x,y,z=0)$. Conventionally, $S_h(k_x,k_y)$ is defined over the compact support $k_x^2+k^2_y\leq\kappa^2$, a disk of radius $\kappa$ centered at the origin, which excludes purely imaginary $\gamma(k_x,k_y)=\sqrt{\kappa^2-k^2_x-k^2_y}$; the corresponding components are called evanescent waves because they do not contribute to far-field propagation. This limits the bandwidth of the conventional $h(x,y,z)$ (in the wave-number domain) to $\pi\kappa^2$. However, as we will demonstrate in Section \uppercase\expandafter{\romannumeral4}, the far-field assumption might not be applicable in some short-range communication scenarios because of the larger antenna apertures and higher operating frequencies of next-generation communications. As a consequence, it is possible to leverage evanescent EM waves to transmit information in near-field communications.
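To visualize the wave-number support discussed above, the short Python sketch below evaluates $\gamma(k_x,k_y)=\sqrt{\kappa^2-k_x^2-k_y^2}$ on a grid: $\gamma$ is real inside the disk $k_x^2+k_y^2\leq\kappa^2$ (propagating plane waves) and purely imaginary outside it (evanescent waves), and the propagating support has area $\pi\kappa^2$. The carrier frequency is the 5 GHz value used in the indoor example of the next section; the grid resolution is arbitrary.
\begin{verbatim}
import numpy as np

wavelength = 0.06                          # 5 GHz carrier, lambda = 6 cm
kappa = 2 * np.pi / wavelength             # wave-number magnitude

k = np.linspace(-2 * kappa, 2 * kappa, 401)
kx, ky = np.meshgrid(k, k)
gamma = np.sqrt(kappa**2 - kx**2 - ky**2 + 0j)   # complex square root

propagating = np.isclose(gamma.imag, 0.0)        # k_x^2 + k_y^2 <= kappa^2
print("propagating fraction of the sampled square:", propagating.mean())
print("wave-number bandwidth pi*kappa^2 =", np.pi * kappa**2)
\end{verbatim}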
\subsection{4D Fourier Plane-Wave Series Expansion} In \cite{b9}, the authors propose the 4D Fourier plane-wave representation and the 4D Fourier plane-wave series expansion of EM channels. Like the 2D representation, the 4D version is derived from the Helmholtz equation and the Green's function. Specifically, the 4D Fourier plane-wave representation can be written as \begin{equation} \begin{aligned} h(\textbf{r},\textbf{s}) = &\frac{1}{(2\pi)^2}\iiiint a_r(\textbf{k},\textbf{r})H_a(k_x,k_y,\kappa_x,\kappa_y)a_s(\boldsymbol{\kappa},\textbf{s})\\ &\times dk_xdk_yd\kappa_xd\kappa_y \label{4D1} \end{aligned} \end{equation} where $a_r(\textbf{k},\textbf{r})=e^{j\textbf{k}^T\textbf{r}}$ and $a_s(\boldsymbol{\kappa},\textbf{s})=e^{-j\boldsymbol{\kappa}^T\textbf{s}}$ are the receive and source plane-wave responses, respectively, and $H_a(k_x,k_y,\kappa_x,\kappa_y)$ is the angular response. (\ref{4D1}) can be approximated by the 4D Fourier plane-wave series expansion: \begin{equation} h(\textbf{r},\textbf{s}) \approx \sum_{(l_x,l_y)}\sum_{(m_x,m_y)}a_r(l_x,l_y,\textbf{r})H_a(l_x,l_y,m_x,m_y)a_s(m_x,m_y,\textbf{s}) \end{equation} where $a_r(l_x,l_y,\textbf{r}) = e^{j(\frac{2\pi}{L_{R,x}}l_xr_x+\frac{2\pi}{L_{R,y}}l_yr_y+\gamma_r(l_x,l_y)r_z)}$ and $a_s(m_x,m_y,\textbf{s}) = e^{-j(\frac{2\pi}{L_{S,x}}m_xs_x+\frac{2\pi}{L_{S,y}}m_ys_y+\gamma_s(m_x,m_y)s_z)}$. The channel matrix of holographic MIMO communications can be written as \begin{equation} \textbf{H} = \boldsymbol{\Phi}_r\Tilde{\textbf{H}}\boldsymbol{\Phi}_s^H = \boldsymbol{\Phi}_r e^{j\boldsymbol{\Gamma}_r} \textbf{H}_a e^{-j\boldsymbol{\Gamma}_s} \boldsymbol{\Phi}_s^H \end{equation} where $\boldsymbol{\Gamma}_r = \mathrm{diag}(\boldsymbol{\gamma}_r)r_z$ and $\boldsymbol{\Gamma}_s = \mathrm{diag}(\boldsymbol{\gamma}_s)s_z$, and $\boldsymbol{\Phi}_r$ and $\boldsymbol{\Phi}_s$ are 2D spatial-frequency Fourier harmonic matrices. In the far-field communication scenario, the angular matrix $\textbf{H}_a$ is $\textit{semi-unitarily equivalent}$ to $\textbf{H}$ and statistically equivalent to $\Tilde{\textbf{H}}$. \section{DOF AND CAPACITY OF NEAR-FIELD HOLOGRAPHIC MIMO COMMUNICATIONS} In electromagnetics, an evanescent field, or evanescent wave, is an oscillating electric and/or magnetic field that does not propagate as an EM wave but whose energy is spatially concentrated in the vicinity of the source. The DoF of holographic MIMO channels in far-field communication scenarios has been computed in \cite{b8}, but evanescent waves were not considered there since they attenuate exponentially with distance. When we move to beyond-5G or 6G communications with holographic MIMO surfaces operating at higher frequencies, i.e., millimeter wave or terahertz, the original far-field communication range falls into the near-field range according to the Rayleigh distance $\frac{2D^2}{\lambda}$, where $D$ is the maximum linear dimension of the antenna and $\lambda$ is the wavelength of the EM waves. One typical scenario is an indoor room whose whole wall is equipped with holographic MIMO surfaces acting as the transmitter at a sufficiently high carrier frequency, which might make the whole indoor environment fall within the transmitter's near-field region. For example, using a carrier frequency $f=5$ GHz (i.e., $\lambda=6$ cm) and a holographic MIMO surface aperture length $L_x=L_y=1$ m, this roughly provides a near-field range of $2D^2/\lambda\approx33.3$ m. \par When holographic MIMO surfaces are used to control the EM wave, the wave is often reflected off the surface at an angle greater than the critical angle, at which evanescent waves are formed.
This only happens in the near field since the intensity of evanescent waves decays exponentially with the distance from the interface at which they are formed. Therefore, it is natural to consider using evanescent waves to carry information in addition to the ordinary plane waves that can be applied for information transmission in the original far-field range. In this letter, we compute the limit of the average number of spatial DoF and the capacity when evanescent waves are included. Furthermore, we will show that evanescent waves can bring considerable DoF and capacity gains in the reactive near-field region, from both theoretical and numerical perspectives. \subsection{DEGREES OF FREEDOM} \subsubsection{Far-field Communications}\label{AA} It is well known that a band-limited field admits an orthonormal series expansion with an essentially finite number of significant coefficients, whose cardinality determines the space dimension \cite{b8}, i.e., the available DoF. In the following, we take planar arrays and volumetric arrays as examples to derive their DoF. \begin{itemize} \item Planar arrays \end{itemize} \par Assume $h(x,y,z)$ is observed over a 2D rectangle $\mathcal{V}_2$ of side lengths $L_x$ and $L_y$. According to the 2D Fourier plane-wave series expansion, $h(x,y) = h(x,y,0)$ is given by \begin{equation} h(x,y) \approx \mathop{\sum\sum}\limits_{(l,m) \in \mathcal{E}}c_{l,m}\varphi_{l,m}(x,y) \quad (x,y) \in \mathcal{V}_2\label{dof_fourier_2Dexpansion} \end{equation} where $\varphi_{l,m}(x,y) = \varphi_l(x)\varphi_m(y)$ is the 2D Fourier basis, and $c_{l,m} = \sqrt{L_xL_y}H_{lm}(0)$. The average number of DoF is limited by the cardinality of $\{H_{lm}\}$, which is the number of lattice points falling into the 2D \textbf{inner} lattice ellipse shown in Fig.~\ref{ellipse}, i.e., approximately the Lebesgue measure of $\mathcal{E}$. This yields \begin{equation} \eta_2=\frac{\pi}{\lambda^2}L_xL_y\label{dof_2D} \end{equation} \begin{itemize} \item Volumetric arrays \end{itemize} \par Assume $h(x,y,z)$ is observed over a 3D parallelepiped $\mathcal{V}_3$ of side lengths $L_x$, $L_y$, and $L_z < \min(L_x,L_y)$. The 2D Fourier plane-wave series expansion of $h(x,y,z)$ for a fixed $z$ is \begin{equation} h(x,y) \approx \mathop{\sum\sum}\limits_{(l,m) \in \mathcal{E}}c_{l,m}(z)\varphi_{l,m}(x,y) \quad (x,y) \in \mathcal{V}_2\label{dof_fourier_3Dexpansion} \end{equation} where $c_{l,m}(z) = \sqrt{L_xL_y}H_{lm}(z)$ with $H_{lm}(z) = H_{lm}^{+}e^{j\gamma_{lm}z}$ $+H_{lm}^{-}e^{-j\gamma_{lm}z}$. Given the pair $(l,m)$ and a fixed $z$, these two exponential functions are completely known and do not carry any information. Hence, we expect that the number of DoF over a 3D volume does not scale proportionally to $L_z$, as it does for $L_x$ and $L_y$. Collecting the samples along the $z$-axis into a single random vector, the 2D random vector field $\bold{h}(x,y) = h(x,y,\bold{z})$ is given by \begin{equation} \bold{h}(x,y) \approx \mathop{\sum\sum}\limits_{(l,m) \in \mathcal{E}} \bold{c}_{l,m} \varphi_{l,m}(x,y) \end{equation} where $\bold{c}_{lm}=\bold{\Gamma}_{lm}\bold{a}_{lm} \in \mathbb{C}^{N_z}$ with \begin{equation} \bold{\Gamma}_{lm} = [e^{j\gamma_{lm}\bold{z}},e^{-j\gamma_{lm}\bold{z}}] \in \mathbb{C}^{N_z\times2} \end{equation} and $\bold{a}_{lm} = \sqrt{L_xL_y}[H_{lm}^{+},H_{lm}^{-}]^{T} \in \mathbb{C}^{2}$ a random vector of statistically-independent elements. The number of DoF is obtained as the product of (\ref{dof_2D}) and the rank of $\bold{\Gamma}_{lm}$. The latter is 2 since the two columns of $\bold{\Gamma}_{lm}$ are linearly independent regardless of the choice of $\bold{z}$.
Thus, the DoF is \begin{equation} \eta_3=\frac{2\pi}{\lambda^2}L_xL_y \end{equation} \subsubsection{Near-field Communications} \begin{figure}[htbp] \centering \includegraphics[width=0.40\textwidth]{ellipse.pdf} \caption{The 2D lattice ellipse $\mathcal{E}$: wave-number spectral support of $h(x,y,z)$.} \label{ellipse} \end{figure}\vspace{-0mm} We now consider evanescent waves for near-field communications and compute the number of DoF over volumetric ($\mathcal{V}_3$) aperture spaces. \newtheorem{definition}{Definition} \begin{definition} The amplitude of an evanescent wave attenuates exponentially with the propagation distance, and the ratio of its received to transmitted power can be written as \begin{equation} \frac{\mathcal{P}_{receive}}{\mathcal{P}_{send}} = e^{-2k_zz} \end{equation} \textit{where $k_z$ is the modulus of the imaginary wave-number component along the $z$ axis.} \end{definition} \begin{theorem} \textit{The additional DoF that can be obtained in near-field communications is given as follows; it vanishes as the distance between the transmitter and receiver planes increases.} \begin{equation} \!\!DoF_{evanescent}\!=\!DoF_{far-field}*\frac{1}{(4z\pi)^2}*\lambda^2*\ln^2{\frac{\mathcal{P}_{send}}{\mathcal{P}_{noise}}} \label{theorem} \end{equation} \end{theorem} \begin{proof} We assume that the impact of evanescent waves is only taken into account when the received signal power is larger than the channel noise power. From Definition~1, we obtain that the largest usable $k_z$ satisfies the following equation: \begin{equation*} \frac{P_{noise}}{P_{send}} = e^{-2k_zz} \end{equation*} from which we can calculate $k_z$ as \begin{equation*} 2k_zz = \ln\Big(\frac{P_{send}}{P_{noise}}\Big) \end{equation*} \begin{equation} k_z = \frac{1}{2z}\ln\Big(\frac{P_{send}}{P_{noise}}\Big) \label{kz} \end{equation} When the EM wave is evanescent, its wave-numbers on the three axes satisfy the following equation: \begin{equation*} \kappa^2 = \kappa_x^2 + \kappa_y^2 - \kappa_z^2 \end{equation*} Assume the outer ellipse in Fig.~\ref{ellipse} is defined by \begin{equation*} t^2 = \kappa_x^2 + \kappa_y^2 \end{equation*} Then the additional DoF can be calculated by \begin{equation*} DoF_{evanescent}=DoF_{far-field}*(\frac{{t}^2}{\kappa^2}-1) \end{equation*} \begin{equation} \begin{aligned} DoF_{evanescent}&=DoF_{far-field}*(\frac{{\kappa}^2+\kappa_z^2}{\kappa^2}-1)\\ &=DoF_{far-field}*\frac{{k_z}^2}{\kappa^2}\\ \end{aligned} \label{DoFimprove} \end{equation} Substituting $k_z$ from (\ref{kz}) into (\ref{DoFimprove}) with $\kappa=2\pi/\lambda $ gives the result (\ref{theorem}). \end{proof} \iffalse \textit{When the receiver plane is at the medium position of the reactive near-field region, i.e., $\frac{0.62\sqrt{\frac{D^3}{\lambda}}}{2}$, the corresponding $k_z$ and additional DoF are:} \begin{equation*} k_z = \frac{1}{0.62}\sqrt{\frac{\lambda}{D^3}}ln(\frac{P_{send}}{P_{noise}}) \end{equation*} \begin{equation} DoF_{evanescent}=DoF_{far-field}*\frac{1}{{1.5376\pi}^2}*\frac{\lambda^3}{D^3}*\ln^2{\frac{\mathcal{P}_{send}}{\mathcal{P}_{noise}}}. \end{equation} \fi \subsection{CHANNEL CAPACITY EVALUATION} In this subsection, we analyze the channel capacity when evanescent waves are taken into account.
Channel capacity in far-field communication scenarios can be given by \cite{b9}: \begin{equation} C = \rm \mathop{\max}_{ \textbf{Q}_a:tr(\textbf{Q}_a)\leq1} \mathbb{E}\{log_2det( \textbf{I}_{n_r}+snr \textbf{H}_a \textbf{Q}_a \textbf{H}^H_a)\} \label{traditional_capacity} \end{equation} where $\textbf{Q}_a = \mathbb{E}\{\textbf{x}_a \textbf{x}_a^{\rm H} \}$ is the input covariance matrix, and $\textbf{I}_{n_r}$ is an identity matrix of dimension $n_r$. $\textbf{H}_a \in \mathbb{C}^{n_r \times n_s}$ is the angular response matrix describing the channel coupling between source and receive propagation directions, assuming that there are $n_s$ and $n_r$ sampling points on the transmitting and receiving side, respectively. However, Eq. (\ref{traditional_capacity}) is based on the assumption that $\bm{\widetilde{{\rm H}}}$ is statistically equivalent to $\bm{{\rm H}}_a$, which does not hold when considering evanescent waves. Therefore, we will replace $\bm{{\rm H}}_a$ with $\bm{\widetilde{{\rm H}}}$ to derive the channel capacity in near-field communications. As shown in Fig.~\ref{ellipse}, when considering evanescent waves, the angular response matrix can be expanded to include wave-numbers outside the 2D lattice ellipse. In other words, the numbers of sampling points $n_s$ and $n_r$ become much larger. We assume that the channels are independent and identically distributed (i.i.d.), which means the variances of the elements of $\textbf{H}_a$ naturally decouple: \begin{equation} \sigma^2(l_x,l_y,m_x,m_y) = \sigma^2_s(m_x,m_y)\sigma^2_r(l_x,l_y) \end{equation} where $\sigma^2_s(m_x,m_y)$ and $\sigma^2_r(l_x,l_y)$ account for the power transfer at the source and the receiver, respectively. Therefore, the matrix consisting of the $n_rn_s$ variances can be written as \begin{equation} \bm{\Sigma} = \mathrm{vec}\{\bm{\Sigma}_r\} \mathrm{vec}\{\bm{\Sigma}_s\}^{\rm H} \label{sigma_decomposition} \end{equation} where $\bm{\Sigma}_r$ and $\bm{\Sigma}_s$ are the variance matrices at the receiver and the source, respectively. To clarify the impact of evanescent waves, we decompose $\bm{\Sigma}_r$ = $\bm{\Sigma}_{rin}$ + $\bm{\Sigma}_{rout}$, where $\bm{\Sigma}_{rin}$ and $\bm{\Sigma}_{rout}$ represent the variance matrices of the sampling points inside and outside the traditional support ellipse, respectively. As shown in Lemma 1 of \cite{b9}, the angular random matrix can be obtained as \begin{equation} \bm{{\rm H_a}} = \bm{\Sigma}\odot\bm{{\rm W}} \end{equation} where $\textbf{W}$ is a matrix with i.i.d. circularly-symmetric, complex-Gaussian random entries.
Thus, the capacity of near-field communications with evanescent waves can be calculated by: \begin{equation} C = \rm \mathop{\max}_{ \textbf{Q}_a:tr(\textbf{Q}_a)\leq1} \mathbb{E}\{log_2det( \textbf{I}_{n_r}+snr \bm{\widetilde{{\rm H}}} \textbf{Q}_a \bm{\widetilde{{\rm H}}}^{\rm H})\} \label{near_field_capacity} \end{equation} where $\bm{\widetilde{{\rm H}}} = {\rm e}^{j\bm{{\rm\Gamma_r}}} \bm{{\rm H_a}} {\rm e}^{-j\bm{{\rm\Gamma_s}}}.$ \iffalse Here, $\rm \textbf{e}^{j\bm{{\rm\Gamma_r}}}$ and $\rm \textbf{e}^{-j\bm{{\rm\Gamma_s}}}$ are diagonal matrices with $\bm{{\rm\Gamma_r}} = diag(\bm{\gamma_r} r_z)$ and $\bm{{\rm\Gamma_s}} = diag(\bm{\gamma_s} s_z).$ \fi If the sampling points are inside the traditional 2D ellipse region, $\gamma_r$ and $\gamma_s$ are real and $\bm{\widetilde{{\rm H}}}$ is statistically equivalent to $\bm{{\rm H_a}}.$ However, when the sampling points are outside the 2D ellipse region (i.e., evanescent waves are used for transmission), $\gamma_r$ and $\gamma_s$ are imaginary and the signal power decays exponentially with the transmission distance (i.e., $\bm{\widetilde{{\rm H}}}$ is not statistically equivalent to $\bm{{\rm H_a}})$. For simplicity, we assume: (1) instantaneous channel state information is only available at the receiver; (2) communications happen only in the interior 2D ellipse region or in the exterior 2D ellipse region (i.e., there are no cross-region communications). Under assumption 1, the ergodic capacity in (\ref{near_field_capacity}) is achieved by an i.i.d. input vector $\bm{x}_a$ with $\bm{{\rm Q}}_a = \frac{1}{n_s}\bm{{\rm I}}_{n_s}$, and the channel capacity is given by \begin{equation} C = \sum_{i=1}^{{\rm rank}(\widetilde{\boldsymbol{{\rm H}}})}\mathbb{E}\{ {\rm log_2}(1+\frac{{\rm snr}}{n_s}\lambda_i(\widetilde{\boldsymbol{{\rm H}}}\widetilde{\boldsymbol{{\rm H}}}^{\rm H})) \} \label{near_field_capacity_average_power} \end{equation} where $\{\lambda_i(\bm{{\rm A}})\}$ denote the eigenvalues of a matrix $\bm{{\rm A}}$. Assuming the transmitting array lies in the $x$--$y$ plane (i.e., $s_z = 0$), (\ref{near_field_capacity_average_power}) can be decomposed into: \begin{equation} C = \sum_{i=1}^{{\rm rank}(\widetilde{\boldsymbol{{\rm H}}})}\mathbb{E}\{{\rm log_2}[1+\frac{{\rm snr}}{n_s}\lambda_i(e^{j\boldsymbol{\Gamma_r}}\boldsymbol{\Sigma}\odot \boldsymbol{{\rm W}} {(e^{j\boldsymbol{\Gamma_r}}\boldsymbol{\Sigma}\odot \boldsymbol{{\rm W}} )}^{\rm H})] \} \label{capacity_decomposition} \end{equation} Applying (\ref{sigma_decomposition}), $\bm{\Sigma}_r$ = $\bm{\Sigma}_{rin}$ + $\bm{\Sigma}_{rout}$, and the analogous splitting $\bm{\Sigma}_s$ = $\bm{\Sigma}_{sin}$ + $\bm{\Sigma}_{sout}$ at the source, we can obtain: \begin{equation*} \boldsymbol{\Sigma} = [vec(\boldsymbol{\Sigma}_{rin})+vec(\boldsymbol{\Sigma}_{rout})][{vec(\boldsymbol{\Sigma}_{sin})}^{\rm H}+{vec(\boldsymbol{\Sigma}_{sout})}^{\rm H}] \end{equation*} Under assumption 2, we can simplify the above equation to: \begin{equation} \boldsymbol{\Sigma} = vec(\boldsymbol{\Sigma}_{rin}){vec(\boldsymbol{\Sigma}_{sin})}^{\rm H}+vec(\boldsymbol{\Sigma}_{rout}){vec(\boldsymbol{\Sigma}_{sout})}^{\rm H} \label{sigma_decomposition_2} \end{equation} The capacity simulation results in Section \ref{simulation} are obtained by plugging (\ref{sigma_decomposition_2}) into (\ref{capacity_decomposition}). \section{SIMULATION RESULTS} \label{simulation} Numerical simulations are performed to validate our theoretical results for a rectangular space $\mathcal{V}_3$.
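Before turning to the specific parameter choices, we sketch how an expression of the form (\ref{near_field_capacity_average_power})--(\ref{capacity_decomposition}) can be approximated by Monte-Carlo averaging over channel realizations. The snippet below only illustrates the structure of such an evaluation; the array sizes, the wave-number grid, the receiver offset and the variance profile are placeholders and not the actual simulation setup of this letter.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_s, n_r, snr, trials = 16, 16, 100.0, 200      # placeholder sizes and SNR
lam, r_z = 0.1, 0.05                            # wavelength and receiver offset [m]
kappa = 2*np.pi/lam

# Placeholder receive-side wave-numbers: some inside, some outside the disk |k| <= kappa
k_xy  = np.linspace(0.2*kappa, 1.4*kappa, n_r)
gamma = np.emath.sqrt(kappa**2 - k_xy**2)       # real (propagating) or imaginary (evanescent)
phase = np.exp(1j*gamma*r_z)                    # e^{j gamma r_z}: rotation or e^{-|gamma| r_z}
Sigma = np.ones((n_r, n_s))                     # placeholder variance profile

cap = 0.0
for _ in range(trials):
    W  = (rng.normal(size=(n_r, n_s)) + 1j*rng.normal(size=(n_r, n_s)))/np.sqrt(2)
    Ht = phase[:, None]*(Sigma*W)               # tilde(H) = e^{j Gamma_r} (Sigma o W), s_z = 0
    ev = np.linalg.eigvalsh(Ht @ Ht.conj().T).real
    cap += np.sum(np.log2(1.0 + snr/n_s*np.clip(ev, 0.0, None)))/trials

print(cap)                                       # ergodic capacity estimate [bit/s/Hz]
\end{verbatim}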
Initially, we set the simulation parameters as in Table~\ref{parameter_set}; the DoF improvement percentage of near-field communications relative to conventional far-field communication is then computed through (\ref{theorem}), varying one parameter at a time. \begin{table}[htbp] \caption{SIMULATION PARAMETERS} \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Simulation Parameters}} \\ \hline \textbf{Antenna linear dimension $D$} & \textbf{Transmission distance} \\ \hline $D=0.5$ m& $z=0.5$ m $^{\mathrm{*}}$\\ \hline \textbf{Operating frequency}& \textbf{Power ratio} \\ \hline $f=3$ GHz $^{\mathrm{*}}$& $P_{send}/P_{noise}=125.56$ dB $^{\mathrm{*}}$\\ \hline \multicolumn{2}{l}{Parameters marked with $^{\mathrm{*}}$ are varied in the simulations.} \end{tabular} \label{parameter_set} \end{center} \end{table} In Fig.~\ref{distance}, it is shown that as the transmission distance becomes larger, the additional DoF provided by evanescent waves decreases rapidly. When the receiver is in the corresponding far-field region of this setting, the DoF improvement can be neglected, which validates that the extra DoF gain can only be obtained in the near-field region. In Fig.~\ref{powerratio}, it is shown that as the transmission power increases, the extra DoF gain increases rapidly at first but converges to a constant mainly determined by the transmission distance, where the extra DoF gain can achieve more than a 30\% improvement. This indicates that the additional DoF gain is not mainly driven by transmit power consumption, but by the EM nature of near-field communication. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{distance.pdf} \caption{Improved DoF percentage as a function of transmission distance.} \label{distance} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.33\textwidth]{sendpower.pdf} \caption{Improved DoF percentage as a function of the ratio between transmission power and noise power.} \label{powerratio} \end{figure} Next, we set the simulation parameters as in Table~\ref{parameter_set_2}; the channel capacity improvement of near-field communications is then computed through (\ref{traditional_capacity}) and (\ref{capacity_decomposition}). \begin{table}[htbp] \caption{CAPACITY SIMULATION PARAMETERS} \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Simulation Parameters}} \\ \hline \textbf{Antenna linear dimension $D$} & \textbf{Operating frequency} \\ \hline $D=10\lambda$ & $f=300$ MHz$/900$ MHz$/3$ GHz\\ \hline \multicolumn{2}{|c|}{\textbf{Transmission Distance}} \\ \hline \multicolumn{2}{|c|} {$z=0.01$--$1$ m $^{\mathrm{*}}$} \\ \hline \multicolumn{2}{l}{Parameters marked with $^{\mathrm{*}}$ are varied in the simulations.} \end{tabular} \label{parameter_set_2} \end{center} \end{table} In Fig.~\ref{capacity}, it is shown that as the transmission distance becomes larger, the additional channel capacity provided by evanescent waves decreases rapidly. Moreover, as the operating frequency increases, the extra capacity drops off faster. When the receiver reaches the corresponding far-field region of this setting, the capacity improvement can be neglected, which further validates that the extra benefit can only be obtained in the near-field region.
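For reference, the DoF-gain curves discussed above follow directly from (\ref{theorem}). The short sketch below (our illustration) evaluates the improvement factor as a function of the transmission distance using the Table~\ref{parameter_set} values, assuming that the quoted power ratio is converted from dB to a linear scale before taking the natural logarithm.
\begin{verbatim}
import numpy as np

f    = 3e9                      # operating frequency [Hz] (Table I)
lam  = 3e8/f                    # wavelength [m]
P_dB = 125.56                   # P_send / P_noise in dB (Table I)
lnP  = np.log(10**(P_dB/10))    # natural log of the linear power ratio

z    = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])     # transmission distance [m]
gain = lam**2*lnP**2/(4*np.pi*z)**2                   # DoF_evanescent / DoF_far-field

for zi, gi in zip(z, gain):
    print(f"z = {zi:4.2f} m  ->  extra DoF = {100*gi:6.1f} %")
\end{verbatim}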
\begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{capacity.pdf} \caption{Improved capacity percentage as a function of transmission distance.} \label{capacity} \end{figure} \section{CONCLUSION} Based on the Fourier plane-wave series representation of a spatially-stationary scattering channel, we derived the theoretical DoF and capacity gains achievable in the near-field region by considering evanescent waves. This theoretical result captures the essence of EM propagation and also illuminates the possible benefits brought by evanescent waves. The analysis is limited to an isotropic scattering environment but can be extended to the non-isotropic case through the linear-system theoretic interpretation of plane-wave propagation. Numerical simulations demonstrate the validity of the derivations and their coherence with the conventional DoF analysis in the far-field region.
\section{...} Violation of the combined Charge-conjugation and Parity symmetries ($CP$) in the standard model (SM) is produced by a non-vanishing phase in the Cabibbo-Kobayashi-Maskawa flavor-mixing matrix~\cite{CKM}, where the violation may be observed as a non-zero $CP$ asymmetry defined as \begin{equation} A^{D\rightarrow f}_{CP}=\frac {\Gamma(D\rightarrow f)-\Gamma(\bar{D}\rightarrow\bar{f})} {\Gamma(D\rightarrow f)+\Gamma(\bar{D}\rightarrow\bar{f})}, \label{EQ:ACP} \end{equation} where $\Gamma$ is the partial decay width, $D$ denotes a charmed meson, and $f$ is a final state. In this presentation, we report $CP$ asymmetries of charmed mesons in the decays $D^+\rightarrow K^0_S\pi^+$, $D^0\rightarrow K^+K^-$, and $D^0\rightarrow\pi^+\pi^-$~\cite{CC}, as well as the $CP$ asymmetry difference between $D^0\rightarrow K^+K^-$ and $D^0\rightarrow\pi^+\pi^-$. These are updates of our previous publications~\cite{ACPKSH, D0hhBelle} using the full data sample collected with the Belle detector~\cite{BELLE} at the KEKB~\cite{KEKB} asymmetric-energy $e^+e^-$ collider. The $D^+\rightarrow K^0_S\pi^+$ final state is a coherent sum of Cabibbo-favored and doubly Cabibbo-suppressed decays, for which no SM $CP$ violation in the charm decay is expected, while a $CP$ violation of $(-0.332\pm0.006)\%$~\cite{PDG2012} due to $K^0-\bar{K}^0$ mixing (denoted by $A^{\bar{K}^0}_{CP}$) is expected because of the neutral kaon in the final state. The $D^0\rightarrow h^+h^-$ final states, where $h$ denotes $K$ or $\pi$, are singly Cabibbo-suppressed decays in which both direct ($a^{\rm dir}_{CP}$) and indirect ($a^{\rm ind}_{CP}$) $CP$ violation are expected in the SM, while the $CP$ asymmetry difference between the two decays, $\Delta A^{hh}_{CP}=A^{KK}_{CP}-A^{\pi\pi}_{CP}$, reveals approximately the direct $CP$ violation, assuming the universality of indirect $CP$ violation in charm decays~\cite{YAY}. The data were recorded at the $\Upsilon(nS)$ resonances ($n=1,2,3,4,5$) or near the $\Upsilon(4S)$ resonance, and the integrated luminosity is $\sim$1 ab$^{-1}$. We determine the quantity $A^{D\rightarrow f}_{CP}$ defined in Eq.~(\ref{EQ:ACP}) by measuring the asymmetry in the signal yield \begin{equation} A^{D\rightarrow f}_{\rm rec}~=~\frac{N_{\rm rec}^{D\rightarrow f}-N_{\rm rec}^{\bar{D}\rightarrow\bar{f}}}{N_{\rm rec}^{D\rightarrow f}+N_{\rm rec}^{\bar{D}\rightarrow\bar{f}}} ~=~A^{D\rightarrow f}_{CP}~+~A_{FB}~+~A^{f}_{\epsilon}, \label{EQ:ARECON} \end{equation} where $N_{\rm rec}$ is the number of reconstructed decays, $A_{FB}$ is the forward-backward asymmetry in the $e^+e^-\rightarrow c\bar{c}$ process, and $A^{f}_{\epsilon}$ is the final-state particle detection asymmetry; the latter depends on the final-state particles while the former does not. The slow-pion detection asymmetry, which enters the $D^0\rightarrow h^+h^-$ reconstruction via the $D^{*+}$ tag, is corrected for using the method described in our previous publication~\cite{D0hhBelle}. The fast-pion detection asymmetry, which enters the $D^+\rightarrow K^0_S\pi^+$ reconstruction, is corrected for using the method described in Ref.~\cite{KSPIPRL}. Under the assumption that $A_{FB}$ is the same for all charmed mesons, Refs.~\cite{D0hhBelle, KSPIPRL} use large-statistics, $CP$-violation-free data samples to correct for $A^{f}_{\epsilon}$.
For the final state with a neutral kaon, additional corrections have to be taken into account: the asymmetry due to the different interactions of $K^0$ and $\bar{K}^0$ with the detector material~\cite{K0MAT}, and the experiment-dependent $A^{\bar{K}^0}_{CP}$, which depends on the $K^0_S$ decay-time acceptance~\cite{GROSSMAN_NIR}. Once $A^{f}_{\epsilon}$ is corrected for, $A^{D\rightarrow f}_{CP}$ is extracted in bins of the polar angle of the charmed-meson momentum in the center-of-mass system (c.m.s.), exploiting the antisymmetry of $A_{FB}$ in this polar angle. Figure~\ref{FIG:ACPKSPI} shows the invariant mass distributions of $D^{\pm}\rightarrow K^0_S\pi^{\pm}$ candidates together with the fits, which yield $\sim$1.74M reconstructed decays, and the measured $A_{CP}$ in bins of the polar angle of the $D^+$ momentum in the c.m.s. From the right plot in Fig.~\ref{FIG:ACPKSPI}, we obtain $A^{D^+\rightarrow K^0_S\pi^+}_{CP}=(-0.363\pm0.094\pm0.067)\%$, which deviates from zero by 3.2$\sigma$. This is the first evidence for $CP$ violation in charm decays from a single decay mode; the measured asymmetry is consistent with $A^{\bar{K}^0}_{CP}$. After subtracting the experiment-dependent $A^{\bar{K}^0}_{CP}$~\cite{GROSSMAN_NIR}, the $CP$ violation due to the change of charm, $A^{\Delta C}_{CP}$, is measured to be $(-0.024\pm0.094\pm0.067)\%$~\cite{KSPIPRL}. \begin{figure}[htbp] \mbox{ \includegraphics[width=0.35\textwidth, height=0.4\textwidth]{mKsPI_paper.eps} } \mbox{ \includegraphics[width=0.65\textwidth, height=0.4\textwidth]{Acponly_kspi.eps} } \caption{$M(K^0_S\pi^+)$ (left top) and $M(K^0_S\pi^-)$ (left bottom) distributions, where the shaded and hatched regions show the $D^+_s\rightarrow K^0_S K^+$ background due to particle misidentification and the combinatorial background, respectively. The right plot shows $A_{CP}$ as a function of $\cos\theta^{\rm c.m.s.}_{D^+}$, where the thick line is the mean value of $A_{CP}$ and the hatched band is the $\pm1\sigma_{\rm total}$ interval, where $\sigma_{\rm total}$ is the total uncertainty.} \label{FIG:ACPKSPI} \end{figure} \begin{figure}[htbp] \mbox{ \includegraphics[width=0.4\textwidth, height=0.35\textwidth]{signals_d0hh_cs5osx.eps} } \mbox{ \includegraphics[width=0.6\textwidth, height=0.35\textwidth]{final_plot_d0hh_cs5osx.eps} } \caption{The left four plots show the reconstructed signal distributions described in the text and the right two plots show preliminary results for $A_{CP}$ as a function of the polar angle of the $D^{*+}$ momentum in the c.m.s.} \label{FIG:ACPD0hh} \end{figure} Figure~\ref{FIG:ACPD0hh} shows the reconstructed signal distributions, containing 14.7M $D^0\rightarrow K^-\pi^+$, 3.1M $D^{*+}$-tagged $D^0\rightarrow K^-\pi^+$, 282k $D^{*+}$-tagged $D^0\rightarrow K^+K^-$, and 123k $D^{*+}$-tagged $D^0\rightarrow\pi^+\pi^-$ decays, all with high signal purity, together with the measured $A_{CP}$ in bins of the polar angle of the $D^{*+}$ momentum in the c.m.s. From the right plot in Fig.~\ref{FIG:ACPD0hh}, we obtain $A^{KK}_{CP}=(-0.32\pm0.21\pm0.09)\%$ and $A^{\pi\pi}_{CP}=(+0.55\pm0.36\pm0.09)\%$, where the former shows the best sensitivity to date. From the two measurements, we obtain $\Delta A^{hh}_{CP}=(-0.87\pm0.41\pm0.06)\%$, which deviates from zero by 2.1$\sigma$ and is consistent with the recent LHCb and CDF measurements~\cite{ACP_LHCB, ACP_CDF}. Combining the LHCb, CDF, and Belle results, the average of $\Delta A^{hh}_{CP}$ becomes $(-0.74\pm0.15)\%$.
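The polar-angle extraction used above can be summarized compactly: since $A_{FB}$ is antisymmetric in $\cos\theta^{\rm c.m.s.}$ while $A_{CP}$ is not, the two components are separated by adding and subtracting the corrected asymmetries measured in mirrored $\cos\theta^{\rm c.m.s.}$ bins. The following sketch illustrates the arithmetic with toy numbers; it is not the Belle analysis code.
\begin{verbatim}
import numpy as np

# Toy corrected asymmetries (A_rec - A_eps) in mirrored cos(theta) bins; illustrative only
a_plus  = np.array([ 0.002,  0.004,  0.007,  0.010])   # bins at +cos(theta)
a_minus = np.array([-0.008, -0.011, -0.013, -0.016])   # mirrored bins at -cos(theta)

a_cp = 0.5*(a_plus + a_minus)    # symmetric part      -> CP asymmetry per bin
a_fb = 0.5*(a_plus - a_minus)    # antisymmetric part  -> forward-backward asymmetry

print("A_CP per bin :", a_cp, " simple mean:", a_cp.mean())
print("A_FB per bin :", a_fb)
\end{verbatim}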
With the help of Marco Gersabeck from the Heavy Flavor Averaging Group (HFAG), Fig.~\ref{FIG:ACPhhICHEP} shows the $\Delta A_{CP}$ and $A_{\Gamma}$ fit including the new Belle results reported in this presentation, which yields $\Delta a^{\rm dir}_{CP}=(-0.678\pm0.147)\%$ and $a^{\rm ind}_{CP}=(+0.027\pm0.163)\%$~\cite{HFAG}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{deltaACP_AGamma_fit_ICHEP2012.eps} \caption{$\Delta A_{CP}$ and $A_{\Gamma}$ fit from HFAG.} \label{FIG:ACPhhICHEP} \end{center} \end{figure} In summary, we observe evidence for $CP$ violation in the decay $D^+\rightarrow K^0_S\pi^+$, which is consistent with the expected $CP$ violation due to $K^0-\bar{K}^0$ mixing. No evidence for $CP$ violation in $D^0\rightarrow h^+h^-$ is observed, and $\Delta A^{hh}_{CP}$ is measured to be $(-0.87\pm0.41\pm0.06)\%$.
\section{Mean Field analysis}\label{A} If the field $\Phi_\perp =0$, the GP functional (\ref{HH},\ref{grad}) becomes (at $\vec{\eta}=0$): \begin{widetext} \begin{eqnarray} H=\int dxdy\left[i(\Phi^*\dot{\Phi}-c.c.) - \frac{1}{2m}|\vec{\nabla} \Phi|^2 + \left(\frac{ (\vec{\nabla}\theta)^2}{2m} -\mu - n_0d^2_0\right) |\Phi|^2 - 3g|\Phi|^4 \right]. \label{HHsm} \end{eqnarray} \end{widetext} Thus, the gauge effect of the field $\vec{\nabla}\theta$ vanishes. The remaining term $\sim (\vec{\nabla}\theta)^2$ contributes to the suppression of the local chemical potential. In other words, this term introduces diagonal disorder. However, in the limit under consideration the contribution $\sim (\nabla \theta)^2$ can be ignored in comparison with $\mu + n_0d^2_0$. This implies that $\Phi$ practically decouples from the external gauge field $\vec{\nabla}\theta$ and forms algebraic order at $\mu + n_0d^2_0 - \langle (\vec{\nabla}\theta)^2/(2m)\rangle_\theta \approx \mu + n_0d^2_0 >0$. In contrast, the total field $\vec{\psi}= \vec{n} \Phi $, Eq.(\ref{Phi}), shows no such order because $\vec{n}=(\cos\theta,\sin\theta)$ winds in space and washes out any coherence contributed by $\Phi$. This is the mechanism which destroys 1PSF and induces 2PSF upon crossing the solid line from the normal phase in Fig.~1. The 2PSF can be characterized by the order parameter $\vec{\psi}\cdot\vec{\psi} = \Phi^2$, which exhibits algebraic order consistent with the functional (\ref{HHsm}). In the case when both fields are condensed, the mean field solution for (\ref{HH},\ref{grad}) gives \begin{eqnarray} \Phi=\Phi_1 \exp({\rm i} \varphi),\, \Phi_\perp=\Phi_2\exp({\rm i} \varphi),\,\, \nonumber \\ \Phi^2_1=\frac{2\mu + 3n_0d^2_0}{16g},\,\, \Phi_2^2=\frac{2\mu - n_0d^2_0}{16g}>0, \label{MF} \end{eqnarray} which is valid for $\mu > 0.5n_0d^2_0$. Here the phase $\varphi$ is to be determined from the minimum of the gradient part \begin{widetext} \begin{eqnarray} \Delta H= \int dxdy \frac{1}{2m}\left[(\Phi_1 \vec{\nabla} \varphi + \Phi_2 \vec{\nabla}\theta)^2 + (\Phi_2 \vec{\nabla} \varphi + \Phi_1 \vec{\nabla}\theta)^2\right] \label{grrr} \end{eqnarray} \end{widetext} of the functional (\ref{HH}). Let us assume that the field $ \vec{\nabla} \theta $ has just a single vortex. Then, the minimum of $\Delta H$ can be reached either for $\vec{\nabla}\varphi=0$ (which corresponds to 2PSF) or for $\vec{\nabla}\varphi =- \vec{\nabla} \theta$, which (as will be explained below) corresponds to circularly polarized 1PSF. Comparison of these two energies leads to the condition \begin{eqnarray} \frac{\Phi^2_2}{\Phi^2_1} < 7-4\sqrt{3} \label{cond} \end{eqnarray} for the existence of the 2PSF. Using the solution (\ref{MF}) in Eq.(\ref{cond}) gives \begin{eqnarray} \mu < \frac{3\sqrt{3} -5}{2\sqrt{3}-3} n_0d^2_0 \approx 0.423 n_0 d_0^2, \end{eqnarray} which conflicts with the requirement $\mu > 0.5n_0d^2_0$ needed to have $\Phi^2_2 >0$. Thus, for $\mu > 0.5n_0d^2_0$ the minimum of $\Delta H$ is achieved for $\vec{\nabla}\varphi = - \vec{\nabla} \theta$, that is, when the phase of the fields $\Phi, \Phi_\perp$ contains an antivortex.
In this case the components of the full field $\vec{\psi}$ become \begin{eqnarray} \psi_x &=& \left[\frac{\Phi_1 + \Phi_2}{2} + \frac{\Phi_1 - \Phi_2}{2}\exp(2{\rm i} \theta)\right] \exp({\rm i} \tilde{\varphi}), \\ \psi_y &=& -{\rm i} \left[\frac{\Phi_1 + \Phi_2}{2} + \frac{\Phi_2 - \Phi_1}{2}\exp(2{\rm i} \theta)\right] \exp({\rm i} \tilde{\varphi}), \end{eqnarray} where $\tilde{\varphi}$ is the non-winding part of $\varphi$, that is, the part which accounts for non-topological excitations -- phonons of the photonic condensate. The terms $\sim \exp(2{\rm i} \theta)$ containing the winding field are washed out at large distances, and the coherent part of the total field $\vec{\psi}$ becomes \begin{eqnarray} \psi_x &=& \frac{1}{2}(\Phi_1 + \Phi_2) \exp({\rm i} \tilde{\varphi}), \\ \psi_y &=& - \frac{{\rm i}}{2}(\Phi_1 + \Phi_2) \exp({\rm i} \tilde{\varphi}), \end{eqnarray} which describes 1PSF with circular polarization, as depicted in Eq.(\ref{coh}). Such a transformation renders the gauge field $\vec{\nabla}\theta$ irrelevant, and the algebraic part of the OPDM becomes \begin{eqnarray} \sim \langle \exp( i [\tilde{\varphi}(\vec{x}) - \tilde{\varphi}(\vec{x}')]) \rangle , \end{eqnarray} which, after Gaussian averaging over the phonons, produces algebraic decay at large distances (see Ref.[\onlinecite{BKT}]). Thus, for $\mu >0.5n_0d^2_0$, the resulting phase is 1PSF and it is characterized by circular polarization. According to the mean field analysis it occurs as soon as both fields are condensed -- that is, above the dotted line in Fig.~1.
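For convenience, the numerical values of the two thresholds used in the argument above can be checked directly; the following minimal sketch (added for illustration) evaluates $7-4\sqrt{3}$ and $(3\sqrt{3}-5)/(2\sqrt{3}-3)$ and compares the latter with the bound $\mu>0.5\,n_0d^2_0$ required for $\Phi^2_2>0$.
\begin{verbatim}
import math

ratio_threshold = 7 - 4*math.sqrt(3)                         # bound on Phi_2^2 / Phi_1^2
mu_threshold    = (3*math.sqrt(3) - 5)/(2*math.sqrt(3) - 3)  # bound on mu/(n0*d0^2)

print(round(ratio_threshold, 4))   # ~ 0.0718
print(round(mu_threshold, 3))      # ~ 0.423 < 0.5, i.e. incompatible with Phi_2^2 > 0
\end{verbatim}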
\section{Introduction} A quasitoric manifold $M$ over a simple polytope $P$, which was introduced by Davis and Januszkiewicz \cite{DJ91}, is a $2n$-dimensional smooth manifold with a locally standard $T^n=(S^1)^n$-action for which the orbit space is identified with $P$. Quasitoric manifolds are defined as a topological counterpart of toric varieties. Actually, just as toric varieties are in one-to-one correspondence with fans, the quasitoric manifolds over $P$ are in one-to-one correspondence with certain combinatorial objects, called characteristic maps on $P$. Moreover, any smooth projective toric variety turns out to be a quasitoric manifold if we let $T^n$ act on it through the inclusion into $(\mathbb{C}^{\times})^n$. Concerning the classification of quasitoric manifolds, Masuda posed the following \textit{cohomological rigidity problem for quasitoric manifolds} in \cite{M08}, where he affirmatively solved the equivariant version of it. \begin{prob} Are two quasitoric manifolds homeomorphic if their cohomology rings are isomorphic as graded rings? \end{prob} Since then toric topologists have studied the topological classification of quasitoric manifolds from the viewpoint of cohomological rigidity, and now we have some classification results which give partial affirmative answers to this problem. First, the cohomological rigidity of quasitoric manifolds over the simplex $\Delta^n\,(n=1,2,\ldots)$ is shown in \cite{DJ91}. Second, the cohomological rigidity of quasitoric manifolds over the convex $m$-gon ($m=4,5,\ldots$) is an immediate corollary of the classification theorem of Orlik and Raymond \cite{OR70}. Third, over the product of two simplices, the cohomological rigidity is proved by Choi, Park, and Suh \cite{CPS12}. Finally, over the dual cyclic polytope $C^n(m)^*$ ($n\geq 4$ or $m-n=3$), it is shown by the author \cite{H15}. In addition, there are some results on the cohomological rigidity of Bott manifolds, a special subclass of quasitoric manifolds over cubes, by Choi, Masuda, and Suh \cite{CMS}, Choi \cite{Choi} and Choi, Masuda, and Murai \cite{CMM}. On the other hand, no counterexample to this problem has been found so far. In this article we mainly consider the cohomological rigidity of ``bundle-type'' quasitoric manifolds over the cube $I^n$, of which we give the precise definition later. Bundle-type quasitoric manifolds form a large subclass of quasitoric manifolds. For instance, up to weakly equivariant homeomorphism, the equivariant connected sum $\ccp{2}$ is the only quasitoric manifold over $I^2$ which is not bundle-type (Proposition \ref{only ccp{2}} and Remark \ref{rem:kappa^2}), and there are only four quasitoric manifolds over $I^3$ which are not bundle-type (Lemma \ref{lem:summary up to w.e.homeo.}). Note that there are infinitely many quasitoric manifolds over $I^n$ ($n\geq 2$) up to weakly equivariant homeomorphism. The goal of this article is to show the following two theorems. Here a ($\mathbb{C}P^2\sharp\mathbb{C}P^2$)-bundle type quasitoric manifold means an iterated ($\mathbb{C}P^2\sharp\mathbb{C}P^2$)-bundle over a point equipped with a good torus action, of which the precise definition is given in Section 2.2. \begin{thm}\label{main thm 1} Suppose that there is a graded ring isomorphism $\varphi\colon\thinspace H^*(M';\mathbb{Z})\to H^*(M;\mathbb{Z})$ between the cohomology rings of two $(\ccp{2})$-bundle type quasitoric manifolds $M$ and $M'$.
Then there exists a weakly equivariant homeomorphism $f\colon\thinspace M\to M'$ which induces $\varphi$ in cohomology. \end{thm} \begin{thm}\label{main thm 2} Suppose that there is a graded ring isomorphism $\varphi\colon\thinspace H^*(M';\mathbb{Z})\to H^*(M;\mathbb{Z})$ between the cohomology rings of two quasitoric manifolds $M$ and $M'$ over $I^3$. Then there exists a homeomorphism $f\colon\thinspace M\to M'$ which induces $\varphi$ in cohomology. \end{thm} This article is organized as follows. In Section 2, we review the basics of quasitoric manifolds and give the precise definitions of the terms bundle-type quasitoric manifold and so on. In Section 3, we prove the key lemma of this article (Lemma \ref{lem:sublemma, filtration lemma 2-2}) and prove Theorem \ref{main thm 1}. In Section 4, we classify the quasitoric manifolds over $I^3$ up to weakly equivariant homeomorphism. Finally, we give the proof of Theorem \ref{main thm 2} in Section 5. \section{Preliminaries} \subsection{Basics of quasitoric manifolds} First, let us begin with the definition of a quasitoric manifold. The reader can find more detailed explanations in, e.g., Buchstaber and Panov \cite{BP02} and \cite{H15}. Here we always assume that $\mathbb{C}^n$ is equipped with the standard $T^n$-action, i.e. the action defined by $t z:=(t_1 z_1,\ldots,t_n z_n)$ where $t=(t_1,\ldots,t_n)\in T^n$ and $z=(z_1,\ldots,z_n)\in \mathbb{C}^n$ respectively. For two $T^n$-spaces $X$ and $Y$, a map $f\colon\thinspace X\to Y$ is called \textit{weakly equivariant} if there exists $\psi\in\mathrm{Aut}(T^n)$ such that $f(t x)=\psi(t) f(x)$ for any $t\in T^n$ and $x\in X$, where $\mathrm{Aut}(T^n)$ denotes the group of continuous automorphisms of $T^n$. We say a smooth $T^n$-action on a $2n$-dimensional differentiable manifold $M$ is \textit{locally standard} if for each $z\in M$ there exists a triad $(U,V,\varphi)$ consisting of a $T^n$-invariant open neighborhood $U$ of $z$, a $T^n$-invariant open subset $V$ of $\mathbb{C}^n$, and a weakly equivariant diffeomorphism $\varphi\colon\thinspace U \to V$. The orbit space of a locally standard $T^n$-action is naturally regarded as a \textit{manifold with corners}, by which we mean a Hausdorff space locally homeomorphic to an open subset of $(\mathbb{R}_{\geq 0})^n$ with the transition functions preserving the depth. Here $\mathrm{depth}\,x$ of $x\in (\mathbb{R}_{\geq 0})^n$ is defined as the number of zero components of $x$. By definition, for a manifold with corners $X$, we can define the \textit{depth of} $x\in X$ by $\mathrm{depth}\,x:=\mathrm{depth}\,\varphi(x)$ where $\varphi$ is an arbitrary local chart around $x$. Then a map $f$ between two manifolds with corners is said to \textit{preserve the corners} if $\mathrm{depth}\circ f=\mathrm{depth}$. An $n$-dimensional convex polytope is called \textit{simple} if it has exactly $n$ facets at each vertex. We regard a simple polytope as a manifold with corners in the natural way, and define a quasitoric manifold as follows. \begin{df}\label{df:qt mfd} A \textit{quasitoric manifold over a simple polytope $P$} is a pair $(M,\pi)$ consisting of a $2n$-dimensional smooth manifold $M$ equipped with a locally standard $T^n$-action and a continuous surjection $\pi\colon\thinspace M\to P$ which descends to a homeomorphism from $M/T^n$ to $P$ preserving the corners. We omit the projection $\pi$ unless it is misleading. \end{df} Next we recall the two ways to construct a quasitoric manifold.
In this section $P$ always denotes an $n$-dimensional simple polytope with exactly $m$ facets and $\mathcal{F}(P)$ denotes the face poset of $P$. In addition, we define $\mathfrak{T}^n$ as the set of subtori of $T^n$. \begin{df}\label{df:ch map} A \textit{characteristic map on $P$} is a map $\ell\colon\thinspace \mathcal{F}(P)\to \mathfrak{T}^n$ such that \begin{itemize} \item[(i)] $\dim \ell(F)=n-\dim F$ for each face $F$, \item[(ii)] $\ell(F)\subseteq \ell(F')$ if $F'\subseteq F$, and \item[(iii)] if a face $F$ is the intersection of $k$ distinct facets $F_1,\ldots,F_k$, then the inclusions $\ell(F_i)\to\ell(F)$ ($i=1,\ldots,k$) induce an isomorphism $\ell(F_1)\times\cdots\times\ell(F_k)\to\ell(F)$. \end{itemize} \end{df} \begin{rem} For each face $F$ of $P$, we denote the relative interior of $F$ by $\mathrm{relint}\,F$. Given a quasitoric manifold $M$ over $P$, we obtain a characteristic map $\ell_M$ on $P$ by \[ \ell_M(F):= (T^n)_z \] where $z$ is an arbitrary point of $\pi^{-1}(\mathrm{relint}\, F)$ and $(T^n)_z$ denotes the isotropy subgroup at $z$. Actually we can easily check the conditions of Definition \ref{df:ch map} using the local standardness of the action. \end{rem} \begin{const}\label{const:ch map} For each point $q\in P$, we denote the minimal face containing $q$ by $G(q)$. Then we obtain a quasitoric manifold $(M(\ell),\pi)$ over $P$ by setting \[M(\ell):=(T^n\times P)/\!\sim_{\ell},\] where $(t_1,q_1)\sim_{\ell}(t_2,q_2)$ if and only if $q_1=q_2$ and $t_1t_2^{-1}\in \ell(G(q_1))$, and $\pi\colon\thinspace M(\ell)\to P$ denotes the map induced by $\mathrm{pr}_2\colon\thinspace T^n\times P\to P$. Obviously the $T^n$-action on $T^n\times P$ by multiplication on the first component descends to a $T^n$-action on $M(\ell)$. We can define a differentiable structure on $M(\ell)$ as follows. We regard $P$ as a subset of $\mathbb{R}^n$ and denote the hyperplane $\{(x_1,\ldots,x_n)\in\mathbb{R}^n\,\vert\,x_i=0\}$ by $H_i$ ($i=1,\ldots,n$). For a vertex $v$ of $P$, we denote by $U_v$ the open subset of $P$ obtained by deleting all faces not containing $v$ from $P$, and take $n$ facets $F_1,\ldots,F_n$ of $P$ such that $v=F_1\cap\cdots\cap F_n$. Additionally, we take an affine transformation $\bar{\varphi}_v$ of $\mathbb{R}^n$ which maps $U_v$ onto an open subset of $(\mathbb{R}_{\geq 0})^n$ and $F_i$ into $H_i$. If we take an automorphism $\psi_v$ of $T^n$ which maps $\ell(F_{i})$ into the $i$-th coordinate subtorus for each $i=1,\ldots,n$, then the map $\psi_v\times\bar{\varphi}_v\colon\thinspace T^n\times U_v\to T^n\times (\mathbb{R}_{\geq 0})^n$ descends to a weakly equivariant homeomorphism $\varphi_v$ from $\pi^{-1}(U_v)$ to some $T^n$-invariant open subset of $\mathbb{C}^n$. We can check that the atlas $\{(\pi^{-1}(U_v),\varphi_v)\}$ gives a differentiable structure on $M(\ell)$. Clearly the $T^n$-action on $M(\ell)$ is locally standard and the orbit space is identified with $P$, i.e. $M(\ell)$ is a quasitoric manifold over $P$. Moreover, by definition, we have $\ell=\ell_{M(\ell)}$. \end{const} In this article, we define an isomorphism of quasitoric manifolds as follows: for two quasitoric manifolds $(M,\pi)$ and $(M',\pi')$ over $P$, a map $f\colon\thinspace M\to M'$ is called an \textit{isomorphism of quasitoric manifolds} if it is a $T^n$-equivariant homeomorphism such that $\pi'\circ f=\pi$.
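As a concrete illustration of Definition \ref{df:ch map} (and anticipating the characteristic matrices introduced below, where each $\ell(F_i)$ is recorded by a primitive integer vector), the following short script checks the vertex condition for a hypothetical Hirzebruch-type example over the square $I^2$: at every vertex $F_i\cap F_j$ the corresponding pair of vectors must form a basis of $\mathbb{Z}^2$, i.e. have determinant $\pm1$. The facet labeling and the vectors are illustrative choices, not data taken from this article.
\begin{verbatim}
import numpy as np

# Facets of the square I^2: F1 (x=0), F2 (y=0), F3 (x=1), F4 (y=1);
# its vertices are the intersections of the facet pairs listed below.
vertices = [(1, 2), (2, 3), (3, 4), (4, 1)]

# Hypothetical characteristic vectors attached to the facets (Hirzebruch-type, k an integer)
k = 2
lam = {1: (1, 0), 2: (0, 1), 3: (1, k), 4: (0, 1)}

def is_characteristic(lam, vertices):
    # vertex condition for a 2-polytope: the two vectors form a basis of Z^2 (det = +-1)
    return all(np.isclose(abs(np.linalg.det(np.array([lam[i], lam[j]], float))), 1.0)
               for i, j in vertices)

print(is_characteristic(lam, vertices))                   # True for the choice above
print(is_characteristic({**lam, 3: (2, k)}, vertices))    # False: the vertices on F3 fail
\end{verbatim}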
By using the blow-up method of Davis \cite{D78}, we see that for any quasitoric manifold $M$ over $P$ there exists a $T^n$-equivariant surjection $T^n\times P\to M$ which descends to an isomorphism $M(\ell_M)\to M$ of quasitoric manifolds. Thus we obtain the following. \begin{prop}\label{prop:fundamental} The correspondence $\ell \mapsto M(\ell)$ gives a bijection from the set of characteristic maps on $P$ to the set of isomorphism classes of quasitoric manifolds over $P$, and the inverse is given by $M\mapsto \ell_M$. \end{prop} The second way to construct a quasitoric manifold uses a characteristic matrix and a moment-angle manifold. Below the term \textit{facet labeling of} $P$ means a bijection from $\{1,\ldots,m\}$ to the set of the facets of $P$. \begin{df} An $(n\times m)$-matrix $\lambda=(\lambda_1,\ldots,\lambda_m)$ of integers is called a \textit{characteristic matrix on $P$ with respect to the facet labeling $F_1,\ldots,F_m$} if it satisfies the following \textit{nonsingularity condition}: if $F_{i_1},\ldots,F_{i_n}$ meet at a vertex, then $\det(\lambda_{i_1},\ldots,\lambda_{i_n})=\pm 1$. \end{df} Hereafter, unless mentioned otherwise, we fix a facet labeling $F_1,\ldots,F_m$ of $P$. \begin{rem} Given a characteristic matrix $\lambda$ on $P$, we can define a characteristic map $\ell_{\lambda}$ by \[ \ell_\lambda(F_{i_1}\cap \ldots \cap F_{i_k}):= \mathrm{im}\,(\lambda_{i_1},\ldots,\lambda_{i_k}) \] where we identify $S^1$ with $\mathbb{R}/\mathbb{Z}$ and regard $(\lambda_{i_1},\ldots,\lambda_{i_k})$ as a homomorphism from $T^k$ to $T^n$. Obviously, any characteristic map is obtained from some characteristic matrix in this way. \end{rem} \begin{const}\label{const:moment-angle mfd} Let $K_P$ be the simplicial complex on $[m]:=\{1,\ldots,m\}$ defined by $K_P:=\{J_F\,|\,F\in\mathcal{F}(P)\}$ where $J_F:=\{i\in[m]\,|\,F\subseteq F_i\}$. We regard $D^2$ as the unit disc of $\mathbb{C}$ and define $$ (D^2,S^1)^{J} := \{ (z_1,\ldots,z_m) \in (D^2)^m \,\vert\, |z_i|=1\text{ if } i\not\in J \} $$ for each $J \subseteq [m]$. Then the \textit{moment-angle manifold} $\mathcal{Z}_P$ is defined as the union $$\bigcup_{J \in K_P}(D^2,S^1)^{J} \subseteq (D^2)^m,$$ which is equipped with the $T^m$-action defined by $(t_1,\ldots,t_m)\cdot(z_1,\ldots,z_m)=(t_1 z_1,\ldots,t_m z_m)$. We can define an embedding $\varepsilon\colon\thinspace P\to \mathcal{Z}_P$ as follows. Denote the barycentric subdivision of $K_P$ by $K'_P$. If we take $b_F\in \mathrm{relint}\, F$ for each face $F$, then the correspondence $J_F\mapsto b_F$ gives a triangulation of $P$ by $K'_P$. Then we define $\varepsilon\colon\thinspace |K'_P|\to \mathcal{Z}_P$ so that $\varepsilon(J_F)=(c_1(F),\ldots,c_m(F))$ for the vertices and it restricts to an affine map on each simplex, where $c_i(F)=0$ if $F\subseteq F_i$ and $c_i(F)=1$ otherwise. Note that $\varepsilon$ descends to a homeomorphism from $P\cong|K'_P|$ to $\mathcal{Z}_P/T^m$. If we define $G(q)$ as in Construction \ref{const:ch map} and $\ell_P\colon\thinspace \mathcal{F}(P)\to\mathfrak{T}^m$ by $$ \ell_P(F):=\{(t_1,\ldots,t_m)\in T^m\,\vert\, t_i =1\text{ if }F\not\subseteq F_i\}, $$ then the correspondence $(t,q)\mapsto t\cdot \varepsilon(q)$ gives an equivariant homeomorphism from $ (T^m\times P)/\!\sim$ to $\mathcal{Z}_P $, where $(t_1,q_1)\sim(t_2,q_2)$ if and only if $q_1=q_2$ and $t_1t_2^{-1}\in \ell_P(G(q_1))$.
Moreover, we can define a differentiable structure on $(T^m\times P)/\!\sim\ \cong \mathcal{Z}_P$ in the same way as Construction \ref{const:ch map}, and then the $T^m$-action on $\mathcal{Z}_P$ is smooth. Let $\lambda$ be a characteristic matrix on $P$. If we regard $\lambda$ as a homomorphism from $T^m$ to $T^n$, then we can check that $T_\lambda:=\ker \lambda$ acts on $\mathcal{Z}_P$ freely. Thus we obtain a manifold $M(\lambda):=\mathcal{Z}_P/T_\lambda$ with a smooth action of $T^m/T_{\lambda}\cong T^n$ where the isomorphism is induced by $\lambda$. We can easily check that this $T^n$-action on $M(\lambda)$ is locally standard. Actually, the map $\lambda\times\mathrm{id}_P\colon\thinspace T^m\times P\to T^n\times P$ descends to an equivariant diffeomorphism from $M(\lambda)$ to $M(\ell_\lambda)$. We define $\pi\colon\thinspace M(\lambda)\to P$ as the composite of the quotient map $M(\lambda)\to M(\lambda)/T^n=\mathcal{Z}_P/T^m$ and $\varepsilon^{-1}$, and then $(M(\lambda),\pi)$ is a quasitoric manifold over $P$. \end{const} Clearly, we have the following proposition. \begin{prop} For a characteristic matrix $\lambda$ on $P$, the two quasitoric manifolds $M(\lambda)$ and $M(\ell_\lambda)$ are smoothly isomorphic. \end{prop} \begin{df} For a quasitoric manifold $M$ over $P$, a \textit{characteristic matrix of $M$} means a characteristic matrix $\lambda$ on $P$ such that $M(\lambda)$ is isomorphic to $M$. In other words, $\lambda$ is called a characteristic matrix of $M$ if $\ell_{\lambda}=\ell_M$. \end{df} Next we consider the weakly equivariant homeomorphisms between quasitoric manifolds. We denote by $[m]_{\pm}$ the set of $2m$ integers $\pm1,\ldots,\pm m$ and regard that $\mathbb{Z}/2$ acts on $[m]_\pm$ by multiplication with $-1$. Additionally, we define a map $\mathrm{sgn}\colon\thinspace [m]_\pm\to \mathbb{Z}/2$ so that $x=\mathrm{sgn}(x)\cdot|x|$ where we identify $\mathbb{Z}/2$ with the multiplicative group $\{\pm 1\}$. \begin{df}\label{df:iota} We define $R_m$ as the group of ($\mathbb{Z}/2$)-equivariant permutations of $[m]_\pm$ and $p\colon\thinspace R_m\to \mathfrak{S}_m$ as the canonical surjection to the symmetric group. In addition, we define $\iota\colon\thinspace R_m\to GL_m(\mathbb{Z})$ so that $e_i\cdot\iota(\rho)=\mathrm{sgn}_i(\rho)\cdot e_{\sigma(i)}$ ($i=1,\ldots,m$) where $e_1,\ldots,e_m$ denote the standard basis of $\mathbb{Z}^m$, $\sigma:=p(\rho)$, and $\mathrm{sgn}_i(\rho):=\mathrm{sgn}(\rho(i))$. \end{df} \begin{rem} The map $\iota\colon\thinspace R_m\to GL_m(\mathbb{Z})$ defined above is an antihomomorphism. Actually, if we take $\rho_i\in R_m$ and put $\sigma_i:=p(\rho_i)$ ($i=1,2$), we can check $\iota(\rho_1\circ \rho_2)=\iota(\rho_2)\cdot\iota(\rho_1)$ as follows. For a fixed $j\in\{1,\ldots,m\}$, if we put $k:=\sigma_2(j)$, then $\mathrm{sgn}_j(\rho_1\circ\rho_2)=\mathrm{sgn}_j(\rho_2)\cdot\mathrm{sgn}_k(\rho_1)$. Therefore \begin{align*} e_j\cdot\iota(\rho_1\circ\rho_2)&=\mathrm{sgn}_j(\rho_1\circ\rho_2)\cdot e_{\sigma_1\circ\sigma_2(j)}=\mathrm{sgn}_j(\rho_2)\cdot(\mathrm{sgn}_k(\rho_1)\cdot e_{\sigma_1(k)}) \\ &=\mathrm{sgn}_j(\rho_2)\cdot (e_k\cdot\iota(\rho_1))=(\mathrm{sgn}_j(\rho_2)\cdot e_{\sigma_2(j)})\cdot\iota(\rho_1)=e_j\cdot\iota(\rho_2)\cdot\iota(\rho_1). \end{align*} \end{rem} \begin{df}\label{df:action on Lambda_P} For a simple polytope $P$, we denote by $\mathrm{Aut}(P)$ the group of combinatorial self-equivalences of $P$ and regard it as a subgroup of the symmetric group $\mathfrak{S}_m$ by using the facet labeling. 
Then we denote by $R(P)$ the subgroup $p^{-1}(\mathrm{Aut}(P))$ of $R_m$. Moreover, we define $\Lambda_P$ as the set of characteristic matrices on $P$ and a left action of $GL_n(\mathbb{Z})\times R(P)$ on $\Lambda_P$ by $(\psi,\rho)\cdot \lambda:=\psi\cdot\lambda\cdot\iota(\rho)$. \end{df} \begin{df}\label{df:rep of wehomeo} Let $P$ be a simple polytope, $\lambda$, $\lambda'$ be characteristic matrices on $P$, and $f\colon\thinspace M(\lambda)\to M(\lambda')$ be a weakly equivariant homeomorphism. We denote by $\bar{f}$ the corner-preserving self-homeomorphism of $P$ induced by $f$. Then a pair $(\psi,\rho)\in GL_n(\mathbb{Z})\times R(P)$ is called the \textit{representation of} $f$ if the following (i), (ii), and (iii) hold. \begin{itemize} \item[(i)] $f(t x)=\psi(t)f(x)$ for any $t\in T^n$ and $x\in M(\lambda)$, where we identify $GL_n(\mathbb{Z})$ with $\mathrm{Aut}(T^n)$ through the left action on $\mathbb{R}^n/\mathbb{Z}^n=T^n$. \item[(ii)] If we denote by $\sigma\!_f$ the combinatorial self-equivalence of $P$ induced by $\bar{f}$, then $\sigma\!_f=p(\rho)$. \item[(iii)] $\lambda'=(\psi,\rho)\cdot \lambda$. \end{itemize} \end{df} It is easy to see that for any weakly equivariant homeomorphism $f\colon\thinspace M(\lambda)\to M(\lambda')$ there exists a unique representation of $f$. Conversely, we have the following proposition. \begin{prop}\label{realization of rep} For any pair $(\psi,\rho)\in GL_n(\mathbb{Z})\times R(P)$ and a characteristic matrix $\lambda$ on $P$, there exists a weakly equivariant homeomorphism $f\colon\thinspace M(\lambda)\to M(\lambda')$ of which the representation is $(\psi,\rho)$. Here $\lambda'$ denotes the characteristic matrix $(\psi,\rho)\cdot\lambda$ on $P$. \end{prop} \begin{proof} First, by using the triangulation of $P$ given in Construction \ref{const:moment-angle mfd}, we can construct a corner-preserving self-homeomorphism $\bar{f}$ of $P$ which induces $p(\rho)$. Since $\lambda'=\psi\cdot \lambda\cdot \iota(\rho)$, we see $\psi(\ell(F))\subseteq \ell'(\sigma(F))$ for each face $F$ of $P$, where $\sigma:=p(\rho)$, $\ell:=\ell_\lambda$ and $\ell':=\ell_{\lambda'}$. It implies that $(\psi(t_1),\bar{f}(q_1))\sim_{\ell'}(\psi(t_2),\bar{f}(q_2))$ if $(t_1,q_1)\sim_\ell(t_2,q_2)$, where $\sim_{\ell}$ and $\sim_{\ell'}$ are defined in the same way as Construction \ref{const:ch map}. Thus we see that $\psi\times\bar{f}\colon\thinspace T^n\times P\to T^n\times P$ descends to a weakly equivariant homeomorphism $f\colon\thinspace M(\ell)\to M(\ell')$, of which the representation is obviously $(\psi,\rho)$. \end{proof} \begin{cor}\label{classification up to wehomeo} If we denote by $\mathcal{M}_P^\mathrm{weh}$ the set of weakly equivariant homeomorphism classes of quasitoric manifolds over $P$, then the correspondence $\lambda\mapsto M(\lambda)$ gives a bijection from $\Lambda_P/(GL_n(\mathbb{Z})\times R(P))$ to $\mathcal{M}_P^\mathrm{weh}$. \end{cor} Then we consider the cohomology ring of a quasitoric manifold $M=M(\lambda)$ over $P$. The following computation is due to \cite{DJ91}. Let us define the \textit{Davis-Januszkiewicz space} $DJ_P$ as the union $$\bigcup_{J \in K_P}BT^J \subseteq BT^m=(\mathbb{C} P^{\infty})^m$$ where $ BT^{J} := \{ (y_1,\ldots,y_m) \in BT^m \,\vert\, y_i=*\text{ if }i\not\in J \} $ and $*$ denotes the basepoint of $\mathbb{C} P^{\infty}$. $K_P$ is the simplicial complex defined in Construction \ref{const:moment-angle mfd}.
Denote the Borel constructions of $M$ and $\mathcal{Z}_P$ by $\mathcal{B}_{T^n} (M)$ and $\mathcal{B}_{T^m} (\mathcal{Z}_P)$ respectively, i.e. $\mathcal{B}_{T^n} (M)$ (resp. $\mathcal{B}_{T^m} (\mathcal{Z}_P)$) denotes the quotient of $ET^n\times M$ (resp. $ET^m\times \mathcal{Z}_P$) by the action of $T^n$ (resp. $T^m$) defined by $t\cdot(x,y):=(x t,t^{-1}y)$. Then we have a homotopy commutative diagram \[ \xymatrix { \mathcal{Z}_P \ar[1,0] \ar[0,1] & M \ar[1,0] \\ \mathcal{B}_{T^m} (\mathcal{Z}_P) \ar[1,0] \ar[0,1] & \mathcal{B}_{T^n} (M) \ar[1,0] \\ BT^m \ar[0,1]^{B\lambda} & BT^n } \] where the columns are fiber bundles, the middle horizontal map is a homotopy equivalence, and the bottom one is the map induced by $\lambda\colon\thinspace T^m\to T^n$. By using homotopy colimit, we can construct a homotopy equivalence from $DJ_P$ to $\mathcal{B}_{T^m} (\mathcal{Z}_P)$ such that the diagram \[ \xymatrix { & \mathcal{B}_{T^m} (\mathcal{Z}_P) \ar[1,0] \\ DJ_P \ar[0,1] \ar[-1,1] & BT^m } \] commutes up to homotopy, where the horizontal arrow is the inclusion. Thus we obtain the following theorem. \begin{thm}[Davis and Januszkiewicz]\label{thm:qt mfd is a homotopy fiber} Let $P$ be an $n$-dimensional simple polytope with $m$ facets and $\lambda$ be a characteristic matrix on $P$. Then $M(\lambda)$ is the homotopy fiber of the map $B\lambda\circ \mathrm{incl}\colon\thinspace DJ_P \to BT^n$, where $\mathrm{incl}$ denotes the inclusion into $BT^m$. \end{thm} Since it is also shown by Davis and Januszkiewicz (in the proof of \cite[Theorem 3.1]{DJ91}) that any quasitoric manifold has a CW structure without odd dimensional cells, we immediately obtain the following corollary. \begin{cor}[Davis and Januszkiewicz]\label{cor:cohomology ring of a quasitoric manifold} Let $P$ be an $n$-dimensional simple polytope with $m$ facets, $\lambda=(\lambda_{i,j})$ be a characteristic matrix on $P$, and put $M:=M(\lambda)$. Then the integral cohomology ring of $M$ is given by \[ H^*(M;\mathbb{Z})=\mathbb{Z}[v_1,\ldots,v_m] /(\mathcal{I}_P+\mathcal{J}_{\lambda}).\] Here $v_i:=j^* t_i \in H^2(M;\mathbb{Z})\,(i=1,\ldots,m)$ where $j\colon\thinspace M\rightarrow DJ_P$ is the inclusion of fiber and $t_i$'s are the canonical basis of $H^2(DJ_P;\mathbb{Z})$, and $\mathcal{I}_P$, $\mathcal{J}_{\lambda}$ are the ideals below: \begin{align*} \mathcal{I}_P &= (v_{i_1}\cdots v_{i_k}\,\vert \, F_{i_1}\cap \ldots \cap F_{i_k}=\emptyset ),\\ \mathcal{J}_{\lambda} &= (\lambda_{i,1}v_1+\cdots +\lambda_{i,m}v_m\, \vert \, i=1,\ldots,n). \end{align*} \end{cor} \begin{lem}\label{generators of cohomology} For each $i=1,\ldots,m$, the generator $v_i\in H^2(M;\mathbb{Z})$ of Corollary $\ref{cor:cohomology ring of a quasitoric manifold}$ is equal to the Poincar\'{e} dual of the submanifold $M_i:=\pi^{-1}(F_i)$ . \end{lem} We make some preparations before the proof of this lemma. For the sake of simplicity, we make the following conventions. \begin{itemize} \item Unless otherwise mentioned, a space means a Hausdorff space and a map means a continuous map. An action is also assumed to be continuous. \item A structure group $\mathfrak{F}$ of a fiber bundle with fiber $F$ is always assumed to act on $F$ effectively. Moreover, $\mathfrak{F}$ is assumed to have the following property: for a space $X$ and a possibly non-continuous map $f\colon\thinspace X \to \mathfrak{F}$, $f$ is continuous if the map $X\times F\to F$ defined by $(x,y)\mapsto f(x)\cdot y$ is continuous. 
\end{itemize} \begin{df}\label{df:equivariant bundle} Let $G$, $\mathfrak{F}$ be two topological groups and regard that $\mathfrak{F}$ acts on a space $F$. A \textit{$G$-equivariant fiber bundle with fiber $F$ and structure group $\mathfrak{F}$} is a map $p\colon\thinspace E\to B$ between $G$-spaces satisfying the following conditions: \begin{itemize} \item[(i)] $p$ is a fiber bundle with fiber $F$ and structure group $\mathfrak{F}$; \item[(ii)] $p$ is $G$-equivariant; \item[(iii)] for each $g\in G$ and $x\in B$, if we take local trivializations $\phi\colon\thinspace U\times F\to p^{-1}(U)$ and $\phi'\colon\thinspace U'\times F\to p^{-1}(U')$ around $x$ and $g x$ respectively, then there exists $f\in\mathfrak{F}$ such that the following diagram commutes. \[\xymatrix{ p^{-1}(x) \ar[r]^g & p^{-1}(g x) \\ \{x\}\times F \ar[u]^{\phi|_{\{x\}\times F}} \ar[r]^{g\times f} & \{g x\}\times F \ar[u]_{\phi'|_{\{g x\}\times F}} }\] \end{itemize} If $G$ is a Lie group, then a $G$-equivariant fiber bundle is called smooth if it is smooth as a fiber bundle, and the $G$-actions on the total space and the base space are smooth. Here we say a fiber bundle is smooth if the fiber, the total space, and the base space are differentiable manifolds and the local trivializations can be chosen to be diffeomorphisms. \end{df} \begin{lem}\label{lem:equivariant bundle} Let $G$ be a topological group, $H$ be a closed normal subgroup of $G$, $p\colon\thinspace E\to P$ be a $G$-equivariant fiber bundle with fiber $F$ and structure group $\mathfrak{F}$, and put $\overline{E}:=E/H$, $B:=P/H$. If the quotient map $q\colon\thinspace P\to B$ is a principal $H$-bundle, then $\bar{p}\colon\thinspace\overline{E}\to B$ induced by $p$ can be equipped with a $G/H$-equivariant fiber bundle structure with fiber $F$ and structure group $\mathfrak{F}$ so that the quotient map $\tilde{q}\colon\thinspace E\to\overline{E}$ is a bundle map covering $q$. \end{lem} \begin{proof} To summarize the setting of the lemma, we have the following commutative diagram. \[\xymatrix{ F \ar[r] & E \ar[r]^{p} \ar[d]_{\tilde{q}} & P \ar[d]^{q} \\ & \overline{E} \ar[r]^{\bar{p}} & B }\] Let $\mathcal{A}$ be the set consisting of triads $(U,s,\beta)$ where $U$ is an open subset of $B$, $s$ is a section of $q\colon\thinspace q^{-1}(U)\to U$, and $\beta\colon\thinspace V\times F\to p^{-1}(V)$ is a local trivialization of $p$ such that $s(U)\subseteq V$. If we define $\phi_\alpha\colon\thinspace U\times F\to \bar{p}^{-1}(U)$ by $\phi_\alpha(x,y):=\tilde{q}\circ\beta(s(x),y)$ for each $\alpha=(U,s,\beta)\in\mathcal{A}$, then it is clearly bijective. Note that, since $\overline{E}$ is a quotient by a group action, $\tilde{q}$ is an open map and restricts to a quotient map $\tilde{q}^{-1}(W)\to W$ for any open subset $W$ of $\overline{E}$. Since $\beta\circ(s\times\mathrm{id}_F)$ is a topological embedding and $\tilde{q}\circ\beta\circ(s\times\mathrm{id}_F)\circ\phi_\alpha^{-1}=\mathrm{id}_{\bar{p}^{-1}(U)}$ is continuous, $\phi_\alpha^{-1}$ is also continuous. Thus we see that $\bar{p}$ is a fiber bundle with fiber $F$. Next, let us show that the transition functions associated with the local trivializations $\{(U,\phi_\alpha)\}_{\alpha\in\mathcal{A}}$ take values in $\mathfrak{F}$. Take $\alpha=(U,s,\beta),\, \alpha'=(U',s',\beta')\in\mathcal{A}$ and assume $U\cap U'\neq \emptyset$. 
Due to the second convention made before Definition \ref{df:equivariant bundle}, we only have to show that for each $x\in U\cap U'$ there exists $f\in\mathfrak{F}$ such that $\phi_\alpha(x,y)=\phi_{\alpha'}(x,f\cdot y)$ for any $y\in F$. Fix $x\in U\cap U'$ and take $h\in H$ such that $s'(x)=h\cdot s(x)$. Since $p$ is a $G$-equivariant fiber bundle, there exists $f\in\mathfrak{F}$ such that $h\cdot\beta(s(x),y)=\beta'(s'(x),f\cdot y)$ for any $y\in F$. Then, since $\tilde{q}(h\cdot\beta(s(x),y))=\tilde{q}(\beta(s(x),y))$, we have $\phi_\alpha(x,y)=\phi_{\alpha'}(x,f\cdot y)$ for any $y\in F$. Thus we see that $\bar{p}$ is a fiber bundle with structure group $\mathfrak{F}$. Finally, we show that the condition (iii) of Definition \ref{df:equivariant bundle} holds for $\bar{p}$. Fix $g\in G$, $x\in B$ and take $\alpha=(U,s,\beta),\, \alpha'=(U',s',\beta')\in\mathcal{A}$ so that $x\in U$, $gx\in U'$. We can take $h\in H$ such that $s'(g x)=h\cdot (g\cdot s(x))$. If we put $g':=h g$, since $p$ is a $G$-equivariant fiber bundle, there exists $f\in \mathfrak{F}$ such that $g'\cdot\beta(s(x),y)=\beta'(s'(g x),f\cdot y)$ for any $y\in F$. Then, since $G$ acts on $\overline{E}$ via $G/H$, we have $g\cdot\phi_\alpha(x,y)=\phi_{\alpha'}(g x,f\cdot y)$. Thus the proof is completed. \end{proof} \begin{proof}[proof of Lemma \ref{generators of cohomology}] Fix $i\in\{1,\ldots,m\}$ and let $X$ be the inverse image of $M_i$ under the quotient map from $\mathcal{Z}_P$ to $M=M(\lambda)$. Then $X=\{(z_1,\ldots,z_m)\in\mathcal{Z}_P\,\vert\,z_i=0\}$ (see Construction \ref{const:moment-angle mfd}). We define a normal bundle $\nu(X)$ of $X$ in $\mathcal{Z}_P$ by $\nu(X):=\{(z_1,\ldots,z_m)\in\mathcal{Z}_P\,\vert\,|z_i|<1\}$ where the projection $\nu(X)\to X$ is given by $(z_1,\ldots,z_m)\mapsto (z_1,\ldots,z_{i-1},0,z_{i+1},\ldots,z_m)$. Then $\mathrm{pr}_i\colon\thinspace \mathcal{Z}_P\to D^2$ restricts to a bundle map $\nu(X) \to \mathrm{Int}\,D^2$ covering $X\to\{0\}$. By Lemma \ref{lem:equivariant bundle}, since $\nu(X)$ is a $T^m$-equivariant vector bundle and $\mathcal{Z}_P\to M(\lambda)$ is a principal $T_\lambda$-bundle, $\nu(M_i):=\nu(X)/T_\lambda$ gives a $T^n$-equivariant normal bundle of $M_i$ in $M$. Moreover, by using Lemma \ref{lem:equivariant bundle} again, we see that $\mathcal{B}_{T^n}(\nu(M_i))\to \mathcal{B}_{T^n}(M_i)$ and $\mathcal{B}_{T^m}(\nu(X))\to \mathcal{B}_{T^m}(X)$ also have vector bundle structures. Thus we have the following diagram where each square is a bundle map. \[\xymatrix{ \nu(M_i) \ar[r] \ar[d] & \mathcal{B}_{T^n}(\nu(M_i)) \ar[d] & \mathcal{B}_{T^m}(\nu(X)) \ar[d] \ar[l] \ar[r] & \mathcal{B}_{T^1}(\mathrm{Int}\,D^2) \ar[d] \\ M_i \ar[r] & \mathcal{B}_{T^n}(M_i) & \mathcal{B}_{T^m}(X) \ar[l] \ar[r] & B T^1 }\] Then, let us put $(A,B)^c:=(A,A\setminus B)$, $\mathcal{B}_{T^k}(A,B):=(\mathcal{B}_{T^k}A,\mathcal{B}_{T^k}B)$ for a pair $(A,B)$ of $T^k$-spaces, and consider the following commutative diagram. \[\xymatrix{ H^2(\mathcal{B}_{T^1}(D^2,\{0\})^c) \ar[d]_{r_1} \ar[r]^{\mathrm{pr}_i^*} & H^2(\mathcal{B}_{T^m}(\mathcal{Z}_P,X)^c) \ar[d] & H^2(\mathcal{B}_{T^n}(M,M_i)^c) \ar[r] \ar[d] \ar[l]_{\cong} & H^2((M,M_i)^c) \ar[d]^{r_2} \\ H^2(\mathcal{B}_{T^1} (D^2)) \ar[r]^{\mathrm{pr}_i} & H^2(\mathcal{B}_{T^m}(\mathcal{Z}_P)) & H^2(\mathcal{B}_{T^n} (M)) \ar[r] \ar[l]_{\cong} & H^2(M) }\] Here $H^*(\,\cdot\,)$ denotes the integral cohomology and each vertical arrow denotes the restriction. 
Let us denote the Thom class of $\mathcal{B}_{T^1}(\mathrm{Int}\,D^2)$ by $\tau$ and regard it as an element of $H^2(\mathcal{B}_{T^1}(D^2,\{0\})^c)$ through the excision isomorphism. Moreover, we denote the composite of the upper (resp. lower) horizontal arrows by $\gamma_1$ (resp. $\gamma_2$). Due to the above diagram of bundle maps, $\gamma_1$ maps $\tau$ to the Thom class of $\nu(M_i)$, and therefore $r_2\circ\gamma_1(\tau)$ is the Poincar\'{e} dual of $M_i$. Moreover, since $r_1(\tau)$ is the canonical generator of $H^2(\mathcal{B}_{T^1} (D^2))\cong H^2(B T^1)$, $\gamma_2\circ r_1(\tau)=v_i$. Thus the proof is completed. \end{proof} Note that, with the notation of Definition \ref{df:rep of wehomeo}, a weakly equivariant homeomorphism $f$ maps $\pi^{-1}(F_i)$ to $\pi'^{-1}(F_{\sigma\!_f(i)})$ for each $i=1,\ldots,m$, where $\pi$ (resp. $\pi'$) denotes the projection from $M(\lambda)$ (resp. $M(\lambda')$) to $P$. By taking into account the orientations of the normal bundles, we have the following. \begin{cor}\label{induced by rep} Let $\lambda$, $\lambda'$ be two characteristic matrices on $P$ and $f\colon\thinspace M(\lambda)\to M(\lambda')$ be a weakly equivariant homeomorphism represented by $(\psi,\rho)\in GL_n(\mathbb{Z})\times R(P)$. Then, if we take generators $v_1,\ldots,v_m\in H^*(M(\lambda);\mathbb{Z})$ and $v'_1,\ldots,v'_m\in H^*(M(\lambda');\mathbb{Z})$ as in Corollary $\ref{cor:cohomology ring of a quasitoric manifold}$, we have $$ f^*(v'_1, \ldots , v'_m)=(v_1, \ldots, v_m) \cdot \iota(\rho)^{-1}.$$ \end{cor} To close this subsection, we introduce two theorems which we will use for the classification of quasitoric manifolds over $I^3$. \begin{thm}[{\cite[Corollary 6.8]{DJ91}}]\label{thm:characteristic classes of a quasitoric manifold} With the notation in Corollary $\ref{cor:cohomology ring of a quasitoric manifold}$, we have the following formulae for the total Stiefel-Whitney class and the total Pontrjagin class: \begin{align*} w(M) &= \prod_{i=1}^m (1+v_i),\\ p(M) &= \prod_{i=1}^m (1-v_i^2). \end{align*} \end{thm} \begin{thm}[Jupp's classification of certain $6$-manifolds, \cite{Jup73}]\label{thm:classification theorem of 6-manifolds} Let $M,N$ be closed, one-connected, smooth $6$-manifolds with torsion-free cohomology. If a graded ring isomorphism $\alpha\colon\thinspace H^*(N;\mathbb{Z})\to H^*(M;\mathbb{Z})$ preserves the second Stiefel-Whitney classes and the first Pontrjagin classes, then there exists a homeomorphism $f\colon\thinspace M\to N$ which induces $\alpha$ in cohomology. \end{thm} \subsection{Bundle-type quasitoric manifolds} Given a quasitoric manifold $M$, we denote by $\mathfrak{D}(M)$ the group of smooth automorphisms of $M$ equipped with the compact-open topology (recall that an isomorphism between quasitoric manifolds $(M,\pi)$ and $(M',\pi')$ means an equivariant homeomorphism $f\colon\thinspace M\to M'$ satisfying $\pi'\circ f=\pi$). The following proposition is immediate from the definition of a smooth equivariant fiber bundle (Definition \ref{df:equivariant bundle}). \begin{prop}\label{prop:associated torus action} Let $M_i$ be a quasitoric manifold acted on by $T_i$ ($i=1,2$) and suppose that $p\colon\thinspace M \to M_2$ is a smooth $T_2$-equivariant fiber bundle with fiber $M_1$ and structure group $\mathfrak{D}(M_1)$. Then there is a unique $T_1$-action on $M$ such that $t_1\cdot\phi(x,y)=\phi(x,t_1 y)$ for any local trivialization $\phi\colon\thinspace U\times M_1\to p^{-1}(U)$ of $p$ and $t_1\in T_1$.
Moreover, this action of $T_1$ on $M$ is smooth and commutes with the action of $T_2$. \end{prop} \begin{df}\label{df:qt bundle} Let $M_i$ be a quasitoric manifold over $P_i$ acted on by $T_i$ ($i=1,2$). Then a \textit{quasitoric $M_1$-bundle over $M_2$} is a smooth $T_2$-equivariant fiber bundle $p\colon\thinspace M \to M_2$ with fiber $M_1$, structure group $\mathfrak{D}(M_1)$, and total space equipped with the action of $T:=T_1\times T_2$ defined by $(t_1,t_2)\cdot x:=t_1(t_2 x)$, where the $T_1$-action is the one defined in Proposition \ref{prop:associated torus action}. \end{df} We prove later that the quasitoric bundle $M$ is a quasitoric manifold over $P_1\times P_2$. \begin{df} Let $\mathcal{M}$ be a class of quasitoric manifolds and consider a sequence \[\xymatrix{ B_l\ar[r]^(0.44){p_{l-1}}&B_{l-1}\ar[r]^(0.55){p_{l-2}}&\cdots\ar[r]^(0.52){p_1}&B_1\ar[r]^{p_0}&B_0 }\] where $B_0$ is a point. Then $B_l$ is called an \textit{$l$-stage $\mathcal{M}$-bundle type quasitoric manifold} if $p_i$ is a quasitoric $M_i$-bundle for some $M_i\in \mathcal{M}$ ($i=0,\ldots,l-1$). \end{df} Now the term $(\ccp{2})$-bundle type quasitoric manifold in Theorem \ref{main thm 1} is defined as follows: let us define $\mathcal{M}(\ccp{2})$ as the class of quasitoric manifolds which are homeomorphic to $\ccp{2}$, and use the term \textit{$(\ccp{2})$-bundle type quasitoric manifold} instead of $\mathcal{M}(\ccp{2})$-bundle type quasitoric manifold. Suppose that the facets of $P_i$ are labeled by $F_{i,1},\ldots,F_{i,m_i}$ and $\lambda_i$ is a characteristic matrix of $M_i$ with respect to this facet labeling ($i=1,2$). If we give a facet labeling of $P_1\times P_2$ by $F_1,\ldots,F_{m_1+m_2}$ where \[F_i:=\left\{\begin{array}{l l} F_{1,j}\times P_2 & (1\leq j\leq m_1) \\ P_1\times F_{2,j-m_1} & (m_1+1\leq j\leq m_1+m_2), \end{array}\right.\] then we have the following. \begin{prop}\label{prop:qt bundle} Let $(M_i,\pi_i)$ be a quasitoric manifold over $P_i$ acted on by $T_i$ ($i=1,2$) and $p\colon\thinspace M\to M_2$ be a quasitoric $M_1$-bundle over $M_2$. Then $M$ is a quasitoric manifold over $P_1\times P_2$, which has a characteristic matrix in the form \[\left(\begin{array}{c c} \lambda_1 & * \\ 0 & \lambda_2 \end{array}\right).\] Conversely, if a quasitoric manifold $M$ over $P_1\times P_2$ has a characteristic matrix in the above form, then $\lambda_i$ is a characteristic matrix on $P_i$ ($i=1,2$) and $M$ is isomorphic to the total space of a quasitoric $M(\lambda_1)$-bundle over $M(\lambda_2)$. \end{prop} We use the following lemma to prove this proposition. \begin{lem}\label{sublem:qt bundle} Let $(M_i,\pi_i)$ be a quasitoric manifold over $P_i$ acted on by $T_i$ ($i=1,2$) and $p\colon\thinspace M\to M_2$ be a quasitoric $M_1$-bundle over $M_2$. Moreover, take $x\in M_2$ and put $M_x:=p^{-1}(x)$, $T':=\mathrm{pr}_2^{-1}((T_2)_x)$ where $(T_2)_{x}$ denotes the isotropy subgroup at $x\in M_2$. Then the action of $T$ on $M$ restricts to a $T'$-action on the fiber $M_x$, and there exists a homomorphism $\rho\colon\thinspace T' \to T_1$ such that $t\cdot y=\rho(t)\cdot y$ for any $t\in T'$ and $y\in M_x$. In particular, for each $z\in M_x$, there is a split exact sequence \[\xymatrix{ 0 \ar[r] & (T_1)_z \ar[r] & T_z \ar[r] & (T_2)_{x} \ar[r] & 0. }\] \end{lem} \begin{proof} Take a local trivialization $\phi\colon\thinspace U\times M_1\to p^{-1}(U)$ of $p$ around $x$ and define $\varphi\colon\thinspace M_1\to M_x$ by $\varphi(y):=\phi(x,y)$. 
Since $p$ is a $T_2$-equivariant fiber bundle with structure group $\mathfrak{D}(M_1)$, there exists a map $\gamma\colon\thinspace T'\to \mathfrak{D}(M_1)$ such that $t\cdot \varphi(y)=\varphi(\gamma(t)(y))$ for any $t\in T'$ and $y\in M_1$, which is clearly a homomorphism. Moreover, since $T_1$ acts on ${\pi_1}^{-1}(\mathrm{int}\,P_1)$ freely and each $\gamma(t)$ ($t\in T'$) descends to $\mathrm{id}_{P_1}$, there is a unique $s_{q,y,t}\in T_1$ for each $q\in\mathrm{int}\,P_1$, $y\in {\pi_1}^{-1}(q)$, and $t\in T'$ such that $\gamma(t)(y)=s_{q,y,t}\cdot y$. Since $\gamma(t)$ is $T_1$-equivariant, $s_{q,y,t}$ does not depend on $y$ and therefore we can put $s(q,t):=s_{q,y,t}$. Moreover, for each $q$, the correspondence $t\mapsto s(q,t)$ gives a homomorphism from $T'$ to $T_1$. Thus we see that the correspondence $q\mapsto s(q,\,\cdot\,)$ gives a map from $\mathrm{int}\,P_1$ to $\mathrm{Hom}(T',T_1)$, the set of continuous homomorphisms equipped with the compact-open topology. Since $\mathrm{Hom}(T',T_1)$ is discrete and $\mathrm{int}\,P_1$ is connected, this map is constant. If we define $\rho$ as the value of this map, then $\gamma(t)(y)=\rho(t)\cdot y$ for any $t\in T'$ and $y\in{\pi_1}^{-1}(\mathrm{int}\,P_1)$. This identity holds for any $t\in T'$ and $y\in M_1$ since ${\pi_1}^{-1}(\mathrm{int}\,P_1)$ is dense in $M_1$. Thus we obtain the former part of the lemma. For each $z\in M_x$, the correspondence $t\mapsto (\rho(t)^{-1},t)$ gives a section of $\mathrm{pr}_2\colon\thinspace T_z\to (T_2)_x$, and $(T_1)_z$ clearly coincides with the kernel of $\mathrm{pr}_2\colon\thinspace T_z \to T_2$. Thus we obtain the latter part of the lemma. \end{proof} \begin{proof}[proof of Proposition \ref{prop:qt bundle}] First, we show that the $T$-action on $M$ is locally standard. Recall that any quasitoric manifold has a CW structure without odd dimensional cells, and therefore it is simply connected and its odd degree cohomology vanishes. By using the long exact sequence of homotopy groups and the Serre spectral sequence associated with $p$, we see that $M$ is also simply connected and has vanishing odd degree cohomology. Then the local standardness follows immediately from the following theorem of Masuda: a torus manifold with vanishing odd degree cohomology is locally standard (\cite[Theorem 4.1]{M06}). Here a torus manifold means an even-dimensional closed connected orientable smooth manifold equipped with an effective smooth action of the half-dimensional torus which has at least one fixed point. Next, we prove that $M/T$ is homeomorphic to $P_1\times P_2$ as a manifold with corners. It is clear that $p$ descends to a $T_2$-equivariant fiber bundle $\bar{p}\colon\thinspace M/T_1\to M_2$ with fiber $P_1$ and structure group $\{\mathrm{id}_{P_1}\}$ by the definition of $\mathfrak{D}(M_1)$. Thus we see that there is a $T_2$-equivariant homeomorphism $f\colon\thinspace M/T_1\to P_1\times M_2$, where $T_2$ acts on $P_1\times M_2$ by the action on the second component, such that (i) for any local trivialization $\bar{\phi}\colon\thinspace U\times P_1 \to \bar{p}^{-1}(U)$ of $\bar{p}$ and any $x\in U$ the map $P_1\to P_1$ defined by $q\mapsto \mathrm{pr}_1\circ f\circ\bar{\phi}(x,q)$ is the identity, and (ii) $\bar{p}=\mathrm{pr}_2\circ f$. Then $f$ clearly descends to a homeomorphism $\bar{f}\colon\thinspace M/T\to P_1\times P_2$. We can prove that $\bar{f}$ preserves corners as follows.
If we take $x\in M$ and denote by $\bar{x}\in M/T$ the equivalence class containing $x$, then $\mathrm{depth}\,\bar{x}=\dim T_x$ by definition. On the other hand, $\mathrm{depth}\,\bar{f}(\bar{x})=\mathrm{depth}\,\bar{x}_1+\mathrm{depth}\,\bar{x}_2$ where $\bar{f}(\bar{x})=(\bar{x}_1,\bar{x}_2)\in P_1\times P_2$. If we take a local trivialization $\phi\colon\thinspace U\times M_1\to p^{-1}(U)$ around $p(x)$ and put $(x_2,x_1):=\phi^{-1}(x)$, then $\mathrm{depth}\,\bar{x}_i=\dim (T_i)_{x_i}$ for $i=1,2$ since $\bar{x}_i=\pi_i(x_i)$. Then we have $\mathrm{depth}\,\bar{x}=\mathrm{depth}\,\bar{f}(\bar{x})$ by Lemma \ref{sublem:qt bundle}. Next, we consider the characteristic matrix $\lambda$ of $M$. We denote by $\pi$ the projection $M\to M/T\cong P_1\times P_2$. Take $x\in M$, a local trivialization $\phi\colon\thinspace U\times M_1\to p^{-1}(U)$ of $p$ around $p(x)$, and put $S_j:=\ell_M(F_j)$ ($j=1,\ldots,m_1+m_2$) where $\ell_M$ denotes the characteristic map associated with $M$. If $x\in \pi^{-1}(\mathrm{relint}\, F_j)$ for some $j\in \{1,\ldots,m_1\}$, then $\mathrm{pr}_2(S_{j})\subseteq T_2$ fixes $p(x)\in \pi_2^{-1}(\mathrm{int}\,P_2)$ and therefore $S_j\subseteq T_1$. Since $\varphi$ is $T_1$-equivariant on each fiber, we see $S_j=\ell_{M_1}(F_{1,j})$ for $j=1,\ldots,m_1$. On the other hand, if $x\in \pi^{-1}(\mathrm{relint}\,F_{j})$ for some $j\in\{m_1+1,\ldots,m_1+m_2\}$, then $(T_1)_x=\{0\}$ and $\mathrm{pr}_2(S_j)$ fixes $p(x)\in\pi_2^{-1}(\mathrm{relint}\,F_{2,j-m_1})$. Therefore we have $\mathrm{pr}_2(S_j)=\ell_{M_2}(F_{2,j-m_1})$. Thus we obtain the former part of the proposition. Finally, we prove the latter part. It is clear that $\lambda_i$ is a characteristic matrix on $P_i$ ($i=1,2$). We can assume that $M=M(\lambda):=\mathcal{Z}_{P_1\times P_2}/T_\lambda$ (see Construction \ref{const:moment-angle mfd}) since they are isomorphic. Put $m:=m_1+m_2$ and identify $T^{m_1}$ with $T^{m_1}\times\{0\}\subseteq T^m$. Then $T_{\lambda_1}\subseteq T_\lambda$ and $\overline{T}\!_\lambda:=T_\lambda/T_{\lambda_1}$ is isomorphic to $T_{\lambda_2}$ through the projection to $T^{m_2}$. If we regard that $T^m$ acts on $\mathcal{Z}_{P_2}$ through the projection to $T^{m_2}$, then, since $\mathcal{Z}_{P_1\times P_2}=\mathcal{Z}_{P_1}\times \mathcal{Z}_{P_2}$ and $M(\lambda)=(M(\lambda_1)\times \mathcal{Z}_{P_2})/\overline{T}\!_\lambda$, we have the following commutative diagram. \[\xymatrix{ M(\lambda_1) \ar[r] & M(\lambda_1)\times \mathcal{Z}_{P_2} \ar[r]^(0.65){\mathrm{pr}_2} \ar[d] & \mathcal{Z}_{P_2} \ar[d] \\ & M(\lambda) \ar[r] & M(\lambda_2) }\] Here the upper row is a $T^m/T_{\lambda_1}$-equivariant fiber bundle with structure group $\mathfrak{D}(M_1)$ and the right vertical arrow is a principal $\overline{T}\!_\lambda$-bundle. Thus the proof is completed by Lemma \ref{lem:equivariant bundle}. \end{proof} Let $P_i$ be an $n_i$-dimensional simple polytope with a facet labeling $F_{i,1},\ldots,F_{i,m_i}$ ($i=1,\ldots,l$), put $n:=\sum n_i$, $m:=\sum m_i$, and $P:=P_1\times\cdots\times P_l$. 
Given $(\psi_i,\rho_i)\in GL_{n_i}(\mathbb{Z})\times R(P_i)$ ($i=1,\ldots,l$), we define $(\psi_1,\rho_1)\times\cdots\times(\psi_l,\rho_l)\in GL_n(\mathbb{Z})\times R(P)$ as follows: define $\psi\in GL_n(\mathbb{Z})$ and $\rho\in R(P)$ so that $$ \psi=\left( \begin{array}{c c c c} \psi_1&0 &\cdots & 0\\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &0 \\ 0 &\cdots &0 &\psi_l \end{array} \right),\quad \iota(\rho)=\left( \begin{array}{c c c c} \iota(\rho_1)&0 &\cdots &0 \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &0 \\ 0 &\cdots &0 &\iota(\rho_l) \end{array} \right), $$ and put $(\psi_1,\rho_1)\times\cdots\times(\psi_l,\rho_l):=(\psi,\rho)$. Here we label the facets of $P$ by $F_1,\ldots,F_m$ where $$F_{m_1+\cdots+m_{i-1}+j}:=P_1\times\cdots\times P_{i-1}\times F_{i,j}\times P_{i+1}\times\cdots\times P_l \ (i=1,\ldots,l,\ j=1,\ldots,m_i).$$ \begin{lem}\label{wehomeo on each fiber} Let $M_i$ be a quasitoric manifold over an $n_i$-dimensional simple polytope $P_i$ ($i=1,\ldots,l$) and consider a sequence \[\xymatrix{ B_l\ar[r]^(0.44){p_{l-1}}&B_{l-1}\ar[r]^(0.55){p_{l-2}}&\cdots\ar[r]^(0.52){p_2}&B_2 \ar[r]^{p_1} & M_1 }\] where each $p_i$ is a quasitoric $M_{i+1}$-bundle. Take a characteristic matrix $\lambda_i$ of $M_i$, $(\psi_i,\rho_i)\in GL_{n_i}(\mathbb{Z})\times R(P_i)$, and put $\lambda'_i:=(\psi_i,\rho_i)\cdot\lambda_i$ for $i=1,\ldots,l$. Then there exist a sequence \[\xymatrix{ B'_l\ar[r]^(0.44){p'_{l-1}}&B'_{l-1}\ar[r]^(0.55){p'_{l-2}}&\cdots\ar[r]^(0.52){p'_2}&B'_2\ar[r]^(0.45){p'_1}&M(\lambda'_1), }\] where each $p'_i$ is a quasitoric $M(\lambda'_{i+1})$-bundle, and a weakly equivariant homeomorphism from $B_l$ to $B'_l$ represented by $(\psi_l,\rho_l)\times\cdots\times(\psi_1,\rho_1)$. \end{lem} \begin{proof} Put $(\psi'_i,\rho'_i):=(\psi_i,\rho_i)\times(\psi_{i-1},\rho_{i-1})\times\cdots\times(\psi_1,\rho_1)$ ($i=1,\ldots,l$). By an iterated use of Proposition \ref{prop:qt bundle}, we can take a characteristic matrix $\mu_i$ of $B_i$ ($i=1,\ldots,l$) in the form $$ \left( \begin{array}{c c c c} \lambda_i&* &\cdots &* \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &* \\ 0 &\cdots &0 &\lambda_1 \end{array} \right). $$ Then, if we put $\mu'_i:=(\psi'_i,\rho'_i)\cdot\mu_i$ ($i=1,\ldots,l$), we see that each $M(\mu'_{i+1})$ is a quasitoric $M(\lambda'_{i+1})$-bundle over $M(\mu'_{i})$ by Proposition \ref{prop:qt bundle}. The proof is completed by setting $B'_i:=M(\mu'_i)$. \end{proof} \subsection{Quasitoric manifolds over $I^n$} Now we restrict ourselves to the case $P=I^n$. Hereafter, we always use the facet labeling $F_1,\ldots,F_{2n}$ of $I^n$ defined by \begin{align*} F_{i}&:=\{(x_1,\ldots,x_n)\in I^n\,\vert\, x_i=0\},\\ F_{n+i}&:=\{(x_1,\ldots,x_n)\in I^n\,\vert\, x_i=1\} \end{align*} for $i=1,\ldots,n$. Note that this facet labeling is different from the one used in the previous subsection. We easily see that $\mathrm{Aut}(I^n)$ is generated by $\rho_{i,j}:=(i\ j)(i+n\ j+n)$ and $\rho_k:=(k\ k+n)$ ($i,j,k=1,\ldots,n$) where we regard $\mathrm{Aut}(I^n)$ as a subgroup of the symmetric group $\mathfrak{S}_{2n}$ by using the facet labeling, as in Definition \ref{df:action on Lambda_P}. \begin{df} Let $\xi$ be a square matrix of order $n$. We call $\xi$ a \textit{characteristic square on} $I^n$ if each diagonal component of $\xi$ is equal to $1$ and $(E_n\:\xi)$ is a characteristic matrix on $I^n$. We denote by $\Xi_n$ the set of characteristic squares on $I^n$.
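For example, one checks directly from the nonsingularity condition that $\left(\begin{array}{c c} 1 & 2 \\ 1 & 1 \end{array}\right)$ is a characteristic square on $I^2$.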
For the convenience of notation, we identify a characteristic square $\xi$ with the characteristic matrix $(E_n\:\xi)$, for example, we write $M(\xi)$ instead of $M((E_n\,\xi))$. \end{df} \begin{rem} Due to Proposition \ref{realization of rep}, any quasitoric manifold over $I^n$ is weakly equivariantly homeomorphic to $M(\xi)$ for some characteristic square $\xi$. \end{rem} \begin{df}\label{H^*(xi)} For a characteristic square $\xi=(\xi_{i,j})$ on $I^n$, we define a graded ring $H^*(\xi)$, which is canonically isomorphic to $H^*(M(\xi);\mathbb{Z})$, as follows. Let $\mathbb{Z}[X_1,\ldots,X_n]$ be the polynomial ring of which the generators have degree 2, and $\mathcal{I}_{\xi}$ be the ideal generated by $u_i(\xi) X_i\,(i=1,\ldots,n)$ where $u_i(\xi):=\sum_{j=1}^n \xi_{i,j} X_j$. Then $H^*(\xi)$ is defined by $$ H^*(\xi):=\mathbb{Z}[X_1,\ldots,X_n]/\mathcal{I}_{\xi} .$$ \end{df} Next, we consider the bundle-type quasitoric manifolds over $I^n$. \begin{df} Let $\xi$ be a characteristic square on $I^n$ and $n_1,\ldots,n_l$ be positive integers summing up to $n$. Then $\xi$ is called \textit{$(\xi_1,\ldots,\xi_l)$-type} if it is in the form $$ \left( \begin{array}{c c c c} \xi_1&* &\cdots &* \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &* \\ 0 &\cdots &0 &\xi_{l} \end{array} \right) $$ where each $\xi_i$ is a characteristic square on $I^{n_i}$. \end{df} \begin{lem}\label{bundle-type and type of square} Let $\xi_i$ be a characteristic square on $I^{n_i}$ ($i=1,\ldots,l$) and consider a sequence \[\xymatrix{ B_l\ar[r]^(0.44){p_{l-1}}&B_{l-1}\ar[r]^(0.55){p_{l-2}}&\cdots\ar[r]^(0.52){p_2}&B_2\ar[r]^{p_1}&B_1 }\] where $B_1=M(\xi_1)$. If each $p_i$ is a quasitoric $M(\xi_{i+1})$-bundle, then $B_l$ is weakly equivariantly homeomorphic to $M(\xi)$ for some $(\xi_l,\ldots,\xi_1)$-type characteristic square $\xi$. \end{lem} \begin{proof} Denote the sum of $n_i$ ($i=1,\ldots,l$) by $n$. By an iterated use of Proposition \ref{prop:qt bundle}, we see that $B_l$ has a characteristic matrix $\lambda$ in the form $$ \left( \begin{array}{c c c c c c c c} E_{n_l}&*&\cdots&*&\xi_l&* &\cdots &* \\ 0&\ddots&\ddots&\vdots&0 &\ddots &\ddots&\vdots \\ \vdots&\ddots&\ddots&*&\vdots &\ddots &\ddots &* \\ 0&\cdots&0&E_{n_1}&0 &\cdots &0 &\xi_1 \end{array} \right). $$ We denote the left $n\times n$ part of $\lambda$ by $A$. Then, since $A^{-1}$ is also in the form $$ \left( \begin{array}{c c c c} E_{n_l}&*&\cdots&* \\ 0&\ddots&\ddots&\vdots \\ \vdots&\ddots&\ddots&* \\ 0&\cdots&0&E_{n_1} \end{array} \right), $$ $A^{-1}\lambda=(E_n\,\xi)$ for some $(\xi_l,\ldots,\xi_1)$-type characteristic square $\xi$. Thus the proof is completed by Proposition \ref{realization of rep}. \end{proof} \section{$(\ccp{2})$-bundle type quasitoric manifolds} In this section we give the proof of Theorem \ref{main thm 1}. We use the facet labeling $F_1,\ldots,F_{2n}$ of $I^n$ defined in Section 2.3. Let us begin with the following proposition. \begin{prop}\label{only ccp{2}} Any quasitoric manifold over $I^2$ is weakly equivariantly homeomorphic to $M(\chi)$ where $\chi$ denotes a characteristic square in the following form: \begin{equation}\label{ch mat on I^2} \left(\begin{array}{c c} 1 & a \\ 0 & 1 \end{array}\right)\text{ or } \left(\begin{array}{c c} 1 & 2 \\ 1 & 1 \end{array}\right). \end{equation} \end{prop} \begin{proof} Let $\lambda$ be a characteristic matrix of a quasitoric manifold $M$ over $I^2$. 
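Recall that the nonsingularity condition requires that, for every pair of facets of $I^2$ meeting at a vertex, the corresponding two columns of $\lambda$ form a basis of $\mathbb{Z}^2$, i.e. the $2\times 2$ matrix formed by these columns has determinant $\pm 1$.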
Since $\lambda$ satisfies the nonsingularity condition, there is a pair $(\psi,\rho)\in GL_2(\mathbb{Z})\times (\mathbb{Z}/2)^4$ such that \[\psi\cdot\lambda\cdot\rho=\left(\begin{array}{c c c c} 1 & 0 & 1 & a \\ 0 & 1 & b & 1 \end{array}\right)\] where $a$ and $b$ are integers satisfying $a b=1\pm 1$, i.e. $(a,b)=(0,b),(a,0),\pm(1,2),\pm(2,1)$. Moreover, by multiplying the first row, the first column and the third column by $-1$ if necessary, we can assume that $b\geq 0$. If we put $\lambda':=\psi\cdot\lambda\cdot\rho$ and $\sigma:=(1\,2)(3\,4)\in \mathrm{Aut}(I^2)$, then we have \[\left(\begin{array}{c c} 0 & 1 \\ 1 & 0 \end{array}\right)\cdot\lambda'\cdot\iota(\sigma)=\left(\begin{array}{c c c c} 1 & 0 & 1 & b \\ 0 & 1 & a & 1 \end{array}\right).\] Thus the proof is completed by Proposition \ref{realization of rep}. \end{proof} Throughout this section, we denote by $\kappa^2$ the latter characteristic square in (\ref{ch mat on I^2}). \begin{rem}\label{rem:kappa^2} We easily see that $M(\kappa^2)$ is homeomorphic to $\ccp{2}$. For instance, since $(E_2 \,\kappa^2)$ is decomposed into a connected sum (see \cite[Section 3.2]{H15}) and $\cp{2}$ is the only one quasitoric manifold over $\Delta^2$ (\cite[Example 1.18]{DJ91}), $M(\kappa^2)$ is homeomorphic to $\ccp{2}$ or $\cp{2}\sharp \overline{\cp{2}}$. $H^*(\kappa^2)$ (Definition \ref{H^*(xi)}) is not isomorphic to $H^*(\cp{2}\sharp \overline{\cp{2}};\mathbb{Z})$, implying $M(\kappa^2)\cong \ccp{2}$. Note that, by \cite[Proposition 6.2]{CMS10} of Choi, Masuda, and Suh, the other quasitoric manifolds $M(\chi)$ of Proposition \ref{only ccp{2}} are the Hirzebruch surfaces. In particular, they are not homeomorphic to $\ccp{2}$. \end{rem} Next, we consider the graded ring automorphisms of $H^*(\kappa^2)$. If we put $x:=X_2$ and $y:=u_2(\kappa^2)=X_1+X_2$ with the notation of Definition \ref{H^*(xi)}, then $$H^*(\kappa^2)=\mathbb{Z}[x,y]/(x^2-y^2,x y).$$ Let us denote by $\mathrm{Aut}(H^*(\kappa^2))$ the group of graded ring automorphisms of $H^*(\kappa^2)$ and regard it as a subgroup of $GL_2(\mathbb{Z})$ by identifying an automorphism $\varphi$ with the matrix $A$ defined by $$ \varphi\left( \begin{array}{c} x \\ y \end{array} \right)= A \left( \begin{array}{c} x \\ y \end{array} \right).$$ \begin{lem}\label{Aut(kappa^2)} $\mathrm{Aut}(H^*(\kappa^2))=\left\{\left(\begin{array}{c c} \pm 1 &0 \\ 0 &\pm 1 \end{array} \right) ,\ \left(\begin{array}{c c} 0 &\pm 1 \\ \pm 1 &0 \end{array} \right)\right\}$. \end{lem} \begin{proof} Let $\varphi$ be an automorphism of $H^*(\kappa^2)$ identified with $$ A=\left(\begin{array}{c c} a &b \\ c &d \end{array} \right). $$ Since $\varphi(x)\varphi(y)=(a c+b d)y^2=0$ in $H^*(\kappa^2)$, we have $a c=-b d$. In particular, if $a\neq 0$, we see that $a$ divides $d$ and vice versa, implying $a=\pm d$. We put $\epsilon:=d/a=\pm 1$, and then obtain $c=-\epsilon b$. Moreover, since $\varphi$ is an automorphism, $\det A=\epsilon (a^2+b^2)=\pm 1$. Thus we have $a=\epsilon d=\pm 1$ and $b=c=0$. Similarly, if we assume $b\neq 0$, then we have $|b|=|c|=1$ and $a=d=0$. Thus the proof is completed. \end{proof} Recall that we put $[m]_\pm:=\{\pm 1,\ldots,\pm m\}$ and define $R_m$ as the group of $(\mathbb{Z}/2)$-equivariant permutations of $[m]_\pm$ (Definition \ref{df:iota}). Let us describe $\rho\in R_m$ by $\rho=(\rho(1),\ldots,\rho(m))$. 
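For instance, $(-1,2,3,4)\in R_4$ denotes the element which sends $1\mapsto -1$ (and hence $-1\mapsto 1$) and fixes $\pm 2$, $\pm 3$ and $\pm 4$.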
Then, if we put $\tau_1:=(-1,-2,-3,-4)$, $\tau_2:=(3,-2,1,4)$ and $\tau_3:=(-1,4,3,2)$, they belong to $R(I^2)$ and there are $\psi_i\in GL_2(\mathbb{Z})$ ($i=1,2,3$) such that $(\psi_i,\tau_i)\cdot(E_2\,\kappa^2)=(E_2\,\kappa^2)$. By Proposition \ref{realization of rep}, there are weakly equivariant self-homeomorphisms $f_i$ ($i=1,2,3$) of $M(\kappa^2)$ represented by $(\psi_i^{-1},\tau_i^{-1})$, and by Corollary \ref{induced by rep}, we have $$ f_1^*:=\left(\begin{array}{cc} -1 &0 \\ 0 &-1 \end{array} \right) ,\, f_2^*:=\left(\begin{array}{cc} 1 &0 \\ 0 &-1 \end{array} \right) ,\, f_3^*:=\left(\begin{array}{cc} 0 &-1 \\ -1 &0 \end{array} \right),$$ where we canonically identify $H^*(M(\kappa^2);\mathbb{Z})$ with $H^*(\kappa^2)$ (note that, in the notation of Corollary \ref{cor:cohomology ring of a quasitoric manifold}, $x=v_4$ and $y=-v_2$). Since these matrices generate $\mathrm{Aut}(H^*(\kappa^2))$ by Lemma \ref{Aut(kappa^2)}, we have the following. \begin{lem}\label{realization of Aut(kappa^2)} Any graded ring automorphism of $H^*(M(\kappa^2);\mathbb{Z})$ is induced by a weakly equivariant self-homeomorphism of $M(\kappa^2)$. \end{lem} Then we consider the isomorphisms between the cohomology rings of $(\ccp{2})$-bundle type quasitoric manifolds. \begin{df}\label{df:calK} We denote by $\mathcal{K}_n$ the set of $(\kappa^2,\ldots,\kappa^2)$-type characteristic squares on $I^{2n}$. For $\xi=(\xi_{i,j})\in\mathcal{K}_n$ and integers $h,k$ such that $0<h\leq k\leq n$, we define $\xi_{[h,k]}:=(\xi_{i,j})_{i,j=2h-1,\ldots,2k}$, which belongs to $\mathcal{K}_{k-h+1}$. We identify $H^*(\xi_{[h,n]})$ with the subring of $H^*(\xi)$ generated by $X_{2h-1},\ldots,X_{2n}$ and $H^*(\xi_{[h,k]})$ with the quotient ring $H^*(\xi_{[h,n]})/(X_{2k+1},\ldots,X_{2n})$, where we use the notation of Definition \ref{H^*(xi)}. \end{df} \begin{lem}\label{lem:calK} Any $n$-stage $(\ccp{2})$-bundle type quasitoric manifold is weakly equivariantly homeomorphic to $M(\xi)$ for some $\xi\in\mathcal{K}_n$. \end{lem} \begin{proof} By Proposition \ref{only ccp{2}} and Remark \ref{rem:kappa^2}, if a quasitoric manifold $M$ over $I^2$ is homeomorphic to $\ccp{2}$, then $M$ is weakly equivariantly homeomorphic to $M(\kappa^2)$. Therefore, by Lemma \ref{wehomeo on each fiber}, any ($\ccp{2}$)-bundle type quasitoric manifold is weakly equivariantly homeomorphic to a $\{M(\kappa^2)\}$-bundle type quasitoric manifold. Then the proof is completed by Lemma \ref{bundle-type and type of square}. \end{proof} Thus we see that we only have to consider $M(\xi)$ ($\xi\in\mathcal{K}_n$) to prove Theorem \ref{main thm 1}. Next, let $\xi'$ be a characteristic square on $I^n$ ($n> 2$) which is $(\kappa^2,\xi'_0)$-type for some characteristic square $\xi'_0$ on $I^{n-2}$, and denote the first and second rows of $\xi'$ by $(1,2,s_3,\ldots,s_n)$ and $(1,1,t_3,\ldots,t_n)$ respectively. We continue to use the notation of Definition \ref{H^*(xi)}. \begin{lem}\label{lem:sublemma, filtration lemma 2-2} Let $\varphi\colon\thinspace \mathbb{Z}[X_1,X_2]\to \mathbb{Z}[X_1,\ldots,X_n]$ be a graded ring monomorphism which maps $\mathcal{I}_{\kappa^2}$ into $\mathcal{I}_{\xi'}$. Additionally, we put $\varphi(X_1)=\sum_{i=1}^n a_i X_i$, $\varphi(X_2)=\sum_{i=1}^n b_i X_i$ and assume that for any prime $p$ the $\bmod\, p$ reductions of $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$ are linearly independent. 
Then either of the following $({\rm i})$ and $({\rm ii})$ holds: \begin{enumerate} \item[(\,i\,)] $a_i=b_i=s_i=t_i=0$ for $i=3,4,\ldots,n$; \item[(ii)] $a_1=a_2=b_1=b_2=0$. \end{enumerate} \end{lem} \begin{proof} We prove the lemma by showing (i) under the assumption $(a_1,a_2,b_1,b_2) \neq (0,0,0,0)$. Since $\varphi( X_1 ( X_1 + 2 X_2 ) )$ and $\varphi( X_2 ( X_1 + X_2 ) )$ belong to $\mathcal{I}_{\xi'}$, we have \begin{align*} \varphi( X_1 ( X_1 + 2 X_2 ) ) &= \left( \sum_{i=1}^n a_i X_i \right)\left\{ \sum_{i=1}^n ( a_i + 2 b_i ) X_i \right\} \\ &\equiv \alpha_1 X_1 \left( X_1 + 2 X_2 + \sum_{j=3}^n s_j X_j \right) + \beta_1 X_2 \left( X_1 + X_2 + \sum_{j=3}^n t_j X_j \right) \bmod W, \\ \varphi( X_2 ( X_1 + X_2 ) ) &= \left( \sum_{i=1}^n b_i X_i \right)\left\{ \sum_{i=1}^n ( a_i + b_i ) X_i \right\} \\ &\equiv \alpha_2 X_1 \left( X_1 + 2 X_2 + \sum_{j=3}^n s_j X_j \right) + \beta_2 X_2 \left( X_1 + X_2 + \sum_{j=3}^n t_j X_j \right) \bmod W \end{align*} for some integers $\alpha_i, \beta_i\, (i=1,2)$, where $W$ denotes the submodule spanned by $\{ X_p X_q \,\vert\, p,q \geq 3 \}$. Since the coefficients of ${X_1}^2$ in $\varphi( X_1 ( X_1 + 2 X_2 ) )$ and $\varphi( X_2 ( X_1 + X_2 ) )$ are $a_1(a_1 + 2 b_1)$ and $b_1(a_1 + b_1)$ respectively, we obtain $\alpha_1=a_1(a_1 + 2 b_1)$ and $\alpha_2=b_1(a_1 + b_1)$. Similarly, we see $\beta_1=a_2(a_2+2 b_2)$ and $\beta_2=b_2(a_2+b_2)$. Thus we obtain the following equations. \begin{eqnarray*} a_1(a_2+2 b_2) + a_2(a_1+2 b_1) & = & 2 a_1(a_1+2 b_1) + a_2(a_2+2 b_2) \\ a_1(a_i+2 b_i) + a_i(a_1+2 b_1) & = & a_1(a_1+2 b_1) s_i\quad(i\geq 3) \\ a_2(a_i+2 b_i) + a_i(a_2+2 b_2) & = & a_2(a_2+2 b_2) t_i\quad(i\geq 3) \\ b_1(a_2+b_2) + b_2(a_1+b_1) & = & 2 b_1(a_1+b_1) + b_2(a_2+b_2) \\ b_1(a_i+b_i) + b_i(a_1+b_1) & = & b_1(a_1+b_1) s_i\quad(i\geq 3) \\ b_2(a_i+b_i) + b_i(a_2+b_2) & = & b_2(a_2+b_2) t_i\quad(i\geq 3) \end{eqnarray*} For the convenience, we rewrite these equations as follows. \begin{eqnarray}\label{eq:1} (a_1-a_2)(a_2+2 b_2) & = & (2 a_1-a_2)(a_1+2 b_1) \\ \label{eq:2} a_1(a_i+2 b_i) & = & (s_i a_1 -a_i)(a_1+2 b_1)\quad(i\geq 3) \\ \label{eq:3} a_2(a_i+2 b_i) & = & (t_i a_2-a_i)(a_2+2 b_2)\quad(i\geq 3) \\ \label{eq:4} (b_1-b_2)(a_2+b_2) & = & (2 b_1-b_2)(a_1+b_1) \\ \label{eq:5} b_1(a_i+b_i) & = & (s_i b_1-b_i)(a_1+b_1)\quad(i\geq 3) \\ \label{eq:6} b_2(a_i+b_i) & = & (t_i b_2-b_i)(a_2+b_2)\quad(i\geq 3) \end{eqnarray} First, we assume that all of $a_1-a_2$, $2 a_1-a_2$, $b_1-b_2$ and $2 b_1-b_2$ are non-zero. Let $k>0$ be the greatest common divisor of $a_1-a_2$ and $2 a_1-a_2$, and $l>0$ be that of $b_1-b_2$ and $2 b_1-b_2$. Suppose that $r$ divides $a_1+2 b_1$ and $a_2+2 b_2$. If we assume that $r$ does not divide $k$, then there is a prime number $r'$ which divides $r/(k,r)$ but does not divide $k/(k,r)$, where $(k,r)$ means the greatest common divisor. Then, by (\ref{eq:2}) and (\ref{eq:3}), $r'$ divides $a_i+2 b_i$ for $i=1,2,\ldots,n$, but it contradicts the assumption (b). Thus we see that any common divisor of $a_1+2 b_1$ and $a_2+2 b_2$ divides $k$. In particular, $a_1+2 b_1, a_2+2 b_2 \neq 0$. Similarly, we shall show that any common divisor of $a_1+b_1$ and $a_2+b_2$ divides $l$. Let $p>0$ be the greatest common divisor of $a_1+2 b_1$ and $a_2+2 b_2$, and $q>0$ be that of $a_1+b_1$ and $a_2+b_2$. Since $\frac{a_1-a_2}{k}$ and $\frac{2 a_1-a_2}{k}$ (resp. 
$\frac{a_1+2 b_1}{p}$ and $\frac{a_2+2 b_2}{p}$) are prime to each other, we obtain $$ \cfrac{\; \cfrac{a_1-a_2}{k}\; }{\; \cfrac{a_1+2 b_1}{p}\; } = \cfrac{\; \cfrac{2 a_1-a_2}{k}\; }{\; \cfrac{a_2+2 b_2}{p}\; } = \pm 1 $$ from (\ref{eq:1}). It can be written as \begin{equation}\label{eq:7} \frac{a_1-a_2}{a_1+2 b_1} = \frac{2 a_1-a_2}{a_2+2 b_2} = \pm \frac{k}{p} \in \mathbb{Z} . \end{equation} Similarly, we obtain \begin{equation}\label{eq:8} \frac{b_1-b_2}{a_1+b_1} = \frac{2 b_1-b_2}{a_2+b_2} = \pm \frac{l}{q} \in \mathbb{Z} . \end{equation} Define $$ k':=\frac{a_1-a_2}{a_1+2 b_1},\ l':=\frac{b_1-b_2}{a_1+b_1}\in \mathbb{Z} .$$ Then, from (\ref{eq:7}) and (\ref{eq:8}), we have the following equations. \begin{eqnarray}\label{eq:9} \left\{ \begin{array}{l} (1-k')a_1-a_2-2 k' b_1 = 0 \\ 2 a_1+(-1-k')a_2-2 k' b_2 = 0 \\ -l' a_1+(1-l')b_1-b_2 = 0 \\ -l' a_2+2 b_1+(-1-l')b_2 = 0 \end{array} \right. \end{eqnarray} Since we assume $a_i,b_i\neq 0\,(i=1,2)$, the determinant of the matrix $$ A:= \left( \begin{array}{c c c c} 1-k' &-1 &-2 k' &0 \\ 2 &-1-k' &0 &-2 k' \\ -l' &0 &1-l' &-1 \\ 0 &-l' &2 &-1-l' \end{array} \right) $$ equals $0$. Therefore, since $\det A=(k'+l')^2+(k' l'+1)^2$, we obtain $(k',l')=(1,-1),(-1,1)$. If $(k',l')=(1,-1)$, we have $$ a_1=b_2-2 b_1,\ a_2=-2 b_1 $$ from (\ref{eq:9}). If $b_1=0$, we easily obtain $a_j=b_j=s_j=t_j=0\,(j\geq 2)$ from (\ref{eq:2}), (\ref{eq:3}), (\ref{eq:5}) and (\ref{eq:6}). On the other hand, if we assume $b_2=0$, we similarly obtain $a_j=b_j=s_j=t_j=0\,(j\geq 2)$ (but this contradicts the assumption (b) since $(a_1,\ldots,a_n)\equiv 0 \bmod 2$). Then we can assume that $b_1,b_2\neq 0$. Putting $b'_i:=b_i/l$ ($i=1,2$), we obtain the following equations from (\ref{eq:2}), (\ref{eq:3}), (\ref{eq:5}) and (\ref{eq:6}). \begin{eqnarray}\label{eq:10} (b'_2-2 b'_1)(a_i+2 b_i) & = & \{s_i (b_2-2 b_1) -a_i\}b'_2\quad(i\geq 3) \\ \label{eq:11} b'_1(a_i+2 b_i) & = & (-2 t_i b_1-a_i)(b'_1-b'_2)\quad(i\geq 3) \\ \label{eq:12} b'_1(a_i+b_i) & = & (s_i b_1-b_i)(b'_2-b'_1)\quad(i\geq 3) \\ \label{eq:13} b'_2(a_i+b_i) & = & (t_i b_2-b_i)(-2 b'_1+b'_2)\quad(i\geq 3) \end{eqnarray} If $b_2$ is odd, $b'_2-2 b'_1$ and $b'_2$ (resp. $b'_1$ and $b'_1-b'_2$) are prime to each other, and hence we obtain the following. \begin{eqnarray}\label{eq:14} \frac{a_i+2 b_i}{b'_2} & = & s_i l - \frac{a_i}{b'_2-2 b'_1}\in\mathbb{Z}\quad(i\geq 3) \\ \label{eq:15} \frac{a_i+2 b_i}{b'_1-b'_2} & = & -2 t_i l - \frac{a_i}{b'_1}\in\mathbb{Z}\quad(i\geq 3) \\ \label{eq:16} \frac{a_i+b_i}{b'_2-b'_1} & = & s_i l - \frac{b_i}{b'_1}\in\mathbb{Z}\quad(i\geq 3) \\ \label{eq:17} \frac{a_i+b_i}{b'_2-2 b'_1} & = & t_i l - \frac{b_i}{b'_2}\in\mathbb{Z}\quad(i\geq 3) \end{eqnarray} In particular, $b'_1$ divides $a_i$ and $b_i$ for $i=3,4,\ldots,n$. Putting $a'_i:=a_i/b'_1$ and $b'_i:=b_i/b'_1$ ($i\geq 3$), from (\ref{eq:14}) and (\ref{eq:16}) (resp. (\ref{eq:15}) and (\ref{eq:17})), we obtain \begin{eqnarray}\label{eq:18} \{(a'_i+2 b'_i)(b'_2-2 b'_1) + a'_i b'_2\}(b'_2-b'_1)b'_1 & = & \{(a'_i+b'_i)b'_1 + b'_i(b'_2-b'_1)\}(b'_2-2 b'_1)b'_2, \\ \label{eq:19} \{(a'_i+2 b'_i)b'_1 + a'_i(b'_1-b'_2)\}(b'_2-2 b'_1)b'_2 & = & -2\{(a'_i+b'_i)b'_2 + b'_i(b'_2-2 b'_1)\}b'_1(b'_1-b'_2) \end{eqnarray} for $i=3,\ldots,n$. Since $b'_1$ is prime to $b'_2-2 b'_1$, $b'_2$ and $b'_2-b'_1$ respectively, we see that $b'_1$ divides $b'_i\,(i=3,\ldots,n)$ from (\ref{eq:18}) and divides $a'_i\,(i=3,\ldots,n)$ from (\ref{eq:19}). Repeating this procedure, we see that any power of $b'_1$ divides $a_i,b_i\,(i=3,\ldots,n)$.
By similar arguments, we can show that any powers of $b'_2$, $2 b'_1-b'_2$ and $b'_1-b'_2$ divide $a_i,b_i\,(i=3,\ldots,n)$. $b'_1$, $b'_2$, $2 b'_1-b'_2$ and $b'_1-b'_2$ cannot be $\pm 1$ simultaneously, and hence $a_i,b_i=0\,(i=3,\ldots,n)$. Then we obtain $s_i,t_i=0\,(i=3,\ldots,n)$ from (\ref{eq:14}) and (\ref{eq:15}). Otherwise, if $b'_2$ is even (and hence $b'_1$ is odd), put $b''_2:=b'_2/2$. Then we have the following. \begin{eqnarray*} (b''_2-b'_1)(a_i+2 b_i) & = & \{s_i (b_2-2 b_1) -a_i\}b''_2\quad(i\geq 3) \\ b'_1(a_i+2 b_i) & = & (-2 t_i b_1-a_i)(b'_1-2 b''_2)\quad(i\geq 3) \\ b'_1(a_i+b_i) & = & (s_i b_1-b_i)(2 b''_2-b'_1)\quad(i\geq 3) \\ b''_2(a_i+b_i) & = & (t_i b_2-b_i)(b''_2-b'_1)\quad(i\geq 3) \end{eqnarray*} By an argument similar to above, we obtain $a_i,b_i,s_i,t_i=0\,(i=3,\ldots,n)$ again. We shall obtain (i) in the same way if $(k',l')=(-1,1)$. Finally, we consider the case where at least one of $a_1-a_2$, $2 a_1-a_2$, $b_1-b_2$ and $2 b_1-b_2$ equals zero. Note that ${X_2}^2$ and $X_p X_q$ ($p=1,2$, $q=3,\ldots,n$) form a basis of $H^4(\xi')/W$. Then, by considering the equation $\varphi(X_2(X_1+X_2))=0$ in $H^4(\xi')/W$, we see that $(a_1,a_2)=(0,0)$ implies $(b_1,b_2)=(0,0)$ and vice versa. Since we assume $(a_1,a_2,b_1,b_2) \neq (0,0,0,0)$ as mentioned at the beginning of this proof, we have $(a_1,a_2)\neq (0,0)$ and $(b_1,b_2)\neq(0,0)$. If $a_1-a_2=0$, then we have $0=a_1(a_1+2b_1)$ by (\ref{eq:1}), which implies $a_1+2b_1=0$ since $(a_1,a_2)\neq 0$. Similarly, we obtain $a_i+2b_i=0$ ($i\geq 3$) by (\ref{eq:2}), but this contradicts the assumption of linear independence since $(a_1,\ldots,a_n)\equiv 0 \bmod 2$. If $2a_1-a_2=0$, then we have $a_2+2b_2=0$ by (\ref{eq:1}), which implies $b_2=-a_1$. By (\ref{eq:3}) and (\ref{eq:2}), we have $a_i+2b_i=0$ ($i\geq 3$) and then $s_i a_1-a_i=0$ ($i\geq 3$). Moreover, $a_i+t_i b_2=0$ by (\ref{eq:6}). We have $b_1(a_1+b_1)=0$ by (\ref{eq:4}). If $b_1=0$, then (\ref{eq:5}) implies $b_i=0$ ($i\geq 3$), and therefore we obtain $a_i=s_i=t_i=0$ ($i\geq 3$) since both $a_1$ and $b_2$ are non-zero. If $a_1+b_1=0$, then (\ref{eq:5}) implies $a_i+b_i=0$ ($i\geq 3$), and therefore $a_i=b_i=s_i=t_i=0$ ($i\geq 3$). We can show $a_i=b_i=s_i=t_i=0$ ($i\geq 3$) similarly in the other two cases. \end{proof} For $\rho\in \mathfrak{S}_m$, the symmetric group, and a positive integer $k$, we define $\rho[k]\in \mathfrak{S}_{k m}$ so that $\rho[k](i k-j)=\rho(i)\cdot k-j$ for $i=1,\ldots,m$ and $j=0,\ldots,k-1$. For example, $(1\ 2)[2]=(1\ 3)(2\ 4)$ in $\mathfrak{S}_4$. \begin{lem}\label{lem:permutation} Let $\xi'\in\mathcal{K}_n$, $s_{i,j}$ be its $(i,j)$-th entry, and assume that there exist two integers $p$, $q$ which satisfy the following: \begin{enumerate} \item[(a)] $0<p<q\leq n$; \item[(b)] $s_{i,j}=0$ if $i=2p-1,2p$ and $j=2p+1,\ldots,2q$. \end{enumerate} Then there exist $\eta'\in\mathcal{K}_n$ and a weakly equivariant homeomorphism $f\colon\thinspace M(\eta')\rightarrow M(\xi')$ such that $f^*(X_i)=X_{\sigma[2](i)}$ ($i=1,\ldots,2n$) where $\sigma$ denotes the cyclic permutation $(q\ q-1\ \cdots\ p)$. \end{lem} \begin{proof} Recall that we put $\rho_{i,j}:=(i\ j)(i+n\ j+n)\in \mathrm{Aut}(I^n)$. If we put $\omega_k:=\rho_{k,k+1}[2]$ and $\omega_{p,q}:=\omega_{q-1}\circ\cdots\circ\omega_p$, then $\omega_{p,q}\in \mathrm{Aut}(I^{2n})$. Additionally, we put $\psi:=\iota(\sigma[2])^{-1}\in GL_{2n}(\mathbb{Z})$ and $\sigma_k:=(k\,k+1)\in \mathfrak{S}_n$.
Since $\iota$ is an antihomomorphism, $$(\psi,\omega_{p,q})\cdot(E_{2n}\,\xi')= \iota(\sigma_{q-1}[2])^{-1}\cdots\iota(\sigma_p[2])^{-1}\cdot(E_{2n}\,\xi')\cdot\iota(\omega_p)\cdots\iota(\omega_{q-1}).$$ We can easily check that $\iota(\sigma_p[2])^{-1}\cdot(E_{2n}\,\xi')\cdot\iota(\omega_p)=(E_{2n}\,\xi'')$ where $\xi''=(s'_{i,j})\in\mathcal{K}_n$ and it satisfies the following: $s'_{i,j}=0$ if $i=2p+1,2p+2$ and $j=2p+3,\ldots,2q$. By induction, we see that $(\psi,\omega_{p,q})\cdot(E_{2n}\,\xi')=(E_{2n}\,\eta')$ for some $\eta'\in\mathcal{K}_n$. Then the proof is completed by Proposition \ref{realization of rep} and Corollary \ref{induced by rep}. \end{proof} \begin{lem}\label{lem:filtration lemma 2-2} Let $\xi,\xi'\in\mathcal{K}_n$ and $\varphi \colon\thinspace H^*(\xi) \rightarrow H^*(\xi')$ be a graded ring isomorphism. Then there exist $\eta'\in\mathcal{K}_n$ and a weakly equivariant homeomorphism $f\colon\thinspace M(\eta')\rightarrow M(\xi')$ such that $f^* \circ \varphi$ preserves the ideal $(X_{2i-1},\ldots,X_{2n})$ for each $i=1,\ldots,n$. \end{lem} \begin{proof} First, by Lemma \ref{lem:sublemma, filtration lemma 2-2} and Lemma \ref{lem:permutation}, there are $\eta''\in\mathcal{K}_n$ and a weakly equivariant homeomorphism $f'\colon\thinspace M(\eta'')\to M(\xi')$ such that $f'^*\circ\varphi$ preserves the ideal $(X_{2n-1},X_{2n})$. Therefore, without loss of generality, we can assume that $\varphi$ preserves $(X_{2n-1},X_{2n})$. We prove the lemma by induction on $n$. The lemma is trivial if $n=1$. Suppose that the lemma holds for $n-1$. If $\varphi$ preserves the ideal $(X_{2n-1},X_{2n})$, then it descends to a graded ring isomorphism $\bar{\varphi}\colon\thinspace H^*(\xi_{[1,n-1]})\to H^*(\xi'_{[1,n-1]})$ (see Definition \ref{df:calK}). By the induction hypothesis, there are $\eta'_0\in\mathcal{K}_{n-1}$ and a weakly equivariant homeomorphism $f_0\colon\thinspace M(\eta'_0)\rightarrow M(\xi'_{[1,n-1]})$ such that $f_0^* \circ\bar{\varphi}$ preserves the ideal $(X_{2i-1},\ldots,X_{2n-2})$ for each $i=1,\ldots,n-1$. Let $(\psi_0,\rho_0)$ be the representation of $f_0$ and put $(\psi,\rho):=(\psi_0,\rho_0)\times (E_2,e)\in GL_{2n}(\mathbb{Z})\times R(I^{2n})$, where $e$ denotes the identity element of $R(I^2)$. The product $(\psi_0,\rho_0)\times (E_2,e)$ is defined before Lemma \ref{wehomeo on each fiber}, but we should note that now we use a different facet labeling. If we define a characteristic square $\eta'$ on $I^{2n}$ so that $(\psi,\rho)\cdot (E_{2n}\,\eta')=(E_{2n}\,\xi')$, then there exists a weakly equivariant homeomorphism $f\colon\thinspace M(\eta')\rightarrow M(\xi')$ represented by $(\psi,\rho)$. We can easily check that $\eta'\in\mathcal{K}_n$ and $f^* \circ \varphi$ preserves the ideal $(X_{2i-1},\ldots,X_{2n})$ for each $i=1,\ldots,n$. \end{proof} \begin{cor}\label{1-I} Let $\xi,\xi'\in\mathcal{K}_n$ and $\varphi \colon\thinspace H^*(\xi) \rightarrow H^*(\xi')$ be a graded ring isomorphism. 
Then there exist $\eta'\in\mathcal{K}_n$ and a weakly equivariant homeomorphism $f\colon\thinspace M(\eta')\rightarrow M(\xi')$ such that $$ f^* \circ \varphi\left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right)= \left( \begin{array}{c c c c} E_2 &* &\cdots &* \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &* \\ 0 &\cdots &0 &E_2 \end{array} \right) \left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right).$$ \end{cor} \begin{proof} By Lemma \ref{lem:filtration lemma 2-2}, there exist $\eta''\in\mathcal{K}_n$ and a weakly equivariant homeomorphism $f_1\colon\thinspace M(\eta'')\rightarrow M(\xi')$ such that $$ f_1^* \circ \varphi\left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right)= \left( \begin{array}{c c c c} \alpha_1 &* &\cdots &* \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &* \\ 0 &\cdots &0 &\alpha_n \end{array} \right) \left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right),$$ where each $\alpha_i$ ($i=1,\ldots,n$) gives an automorphism of $H^*(\kappa^2)$ since $\xi_{[i,i]}=\xi'_{[i,i]}=\kappa^2$. Moreover, by Lemma \ref{realization of Aut(kappa^2)}, there is a weakly equivariant self-homeomorphism $h_i\colon\thinspace M(\kappa^2)\to M(\kappa^2)$ such that $h_i^*=\alpha_i$. Take the representation $(\psi_i,\rho_i)$ of $h_i$ ($i=1,\ldots,n$) and put $(\psi,\rho):=(\psi_1,\rho_1)\times\cdots\times(\psi_n,\rho_n)$. If we define a characteristic square $\eta'$ on $I^{2n}$ so that $(E_{2n}\,\eta')=(\psi,\rho)\cdot(E_{2n}\,\eta'')$ and take a weakly equivariant homeomorphism $f_2\colon\thinspace M(\eta')\to M(\eta'')$ represented by $(\psi^{-1},\rho^{-1})$, then $\eta'\in\mathcal{K}_n$ and $$ f_2^* \left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right)= \left( \begin{array}{c c c c} \alpha_1^{-1} &0 &\cdots &* \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &0 \\ 0 &\cdots &0 &\alpha_n^{-1} \end{array} \right) \left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right).$$ If we put $f:=f_1\circ f_2$, then it satisfies the condition of the lemma. \end{proof} \begin{lem}\label{lemma for 1-II} Let $\xi_0$ be a characteristic square on $I^{n-2}$ ($n\geq 3$), $\xi,\xi'$ be two $(\kappa^2,\xi_0)$-type characteristic squares on $I^n$, and $\varphi \colon\thinspace H^*(\xi) \rightarrow H^*(\xi')$ be a graded ring isomorphism such that $$ \varphi\left( \begin{array}{c} X_1 \\ \vdots \\ X_n \end{array} \right)= \left( \begin{array}{c c} E_2 &A \\ 0 &E_{n-2} \end{array} \right) \left( \begin{array}{c} X_1 \\ \vdots \\ X_n \end{array} \right)$$ where $A$ denotes some $(2\times (n-2))$-matrix of integers. Then we have $A=0$ and $\xi=\xi'$. \end{lem} \begin{proof} We denote the first rows of $A$, $\xi$, $\xi'$ by $(a_3,\ldots,a_n)$, $(1,2,s_3,\ldots,s_n)$, $(1,2,s'_3,\ldots,s'_n)$ respectively. Similarly, we denote their second rows by $(b_3,\ldots,b_n)$, $(1,1,t_3,\ldots,t_n)$, $(1,1,t'_3,\ldots,t'_n)$. 
Then, in $H^*(\xi')$, we have \begin{align*} & \varphi(X_1(X_1+2 X_2+s_3 X_3+\cdots+s_n X_n)) \\ & = ( X_1+a_3 X_3+\cdots+a_n X_n )\{ X_1+2 X_2+ (a_3+2 b_3+s_3) X_3+\cdots+ (a_n+2 b_n+s_n) X_n \} \\ & = X_1 \{ (2 a_3+2 b_3+s_3-s'_3)X_3+\cdots+(2 a_n+2 b_n+s_n-s'_n)X_n \} + \\ & \quad \quad \quad \quad \quad \quad 2 X_2\{ a_3 X_3+\cdots+a_n X_n \} + (\text{a polynomial in } X_3,\ldots,X_n)=0 \end{align*} and \begin{align*} & \varphi(X_2(X_1+X_2+t_3 X_3+\cdots+t_n X_n)) \\ & = ( X_2+b_3 X_3+\cdots+b_n X_n )\{ X_1+X_2+ (a_3+b_3+t_3) X_3+\cdots+ (a_n+b_n+t_n) X_n \} \\ & = X_2 \{ (a_3+2 b_3+t_3-t'_3)X_3+\cdots+(a_n+2 b_n+t_n-t'_n)X_n \} + \\ & \quad \quad \quad \quad \quad \quad X_1\{ b_3 X_3+\cdots+b_n X_n \} + (\text{a polynomial in } X_3,\ldots,X_n) =0. \end{align*} If we define $W$ as the submodule of $H^4(\xi')$ generated by $X_p X_q$ ($p,q\geq 3$), then ${X_2}^2$ and $X_i X_j$ ($i=1,2,\,j=3,\ldots,n$) form a basis of $H^4(\xi')/W$. Therefore we obtain $a_i=b_i=s_i-s'_i=t_i-t'_i=0$, i.e. $A=0$ and $\xi=\xi'$. \end{proof} \begin{cor}\label{1-II} Let $\xi,\xi'\in\mathcal{K}_n$ and $\varphi \colon\thinspace H^*(\xi) \rightarrow H^*(\xi')$ be a graded ring isomorphism such that $$ \varphi\left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right)= \left( \begin{array}{c c c c} E_2 &A_{1,2} &\cdots &A_{1,n} \\ 0 &\ddots &\ddots&\vdots \\ \vdots &\ddots &\ddots &A_{n-1,n} \\ 0 &\cdots &0 &E_2 \end{array} \right) \left( \begin{array}{c} X_1 \\ \vdots \\ X_{2n} \end{array} \right).$$ Then $A_{i,j}=0$ ($1\leq i<j\leq n$) and $\xi=\xi'$. In particular, $\varphi$ is induced by $\mathrm{id}_{M(\xi)}$. \end{cor} \begin{proof} We prove the corollary by induction on $n$. If $n=2$, the corollary is immediate from Lemma \ref{lemma for 1-II}. Suppose that the corollary holds for $n-1$. Since $\varphi$ restricts to a graded ring isomorphism from $H^*(\xi_{[2,n]})$ to $H^*(\xi'_{[2,n]})$, by the induction hypothesis, we obtain $A_{i,j}=0$ for $2\leq i<j\leq n$ and $\xi_{[2,n]}=\xi'_{[2,n]}$. Then we have $A_{1,j}=0$ ($1<j\leq n$) and $\xi=\xi'$ by Lemma \ref{lemma for 1-II}. \end{proof} Then the following theorem is immediate from Corollary \ref{1-I} and Corollary \ref{1-II}. \begin{thm}\label{main thm 1'} Let $\xi,\xi'\in\mathcal{K}_n$ and $\varphi \colon\thinspace H^*(\xi) \rightarrow H^*(\xi')$ be a graded ring isomorphism. Then there exists a weakly equivariant homeomorphism $f\colon\thinspace M(\xi')\to M(\xi)$ such that $\varphi=f^*$. \end{thm} By Lemma \ref{lem:calK} and Theorem \ref{main thm 1'}, we obtain Theorem \ref{main thm 1}. \section{Computation of $\mathcal{M}^{\rm weh}_{I^3}$} Toward the proof of Theorem \ref{main thm 2}, in this section, we list all the quasitoric manifolds over $I^3$ up to weakly equivariant homeomorphism. We denote by $\mathcal{M}^{\rm weh}_{I^3}$ the set of weakly equivariant homeomorphism classes of quasitoric manifolds over $I^3$, as in Corollary \ref{classification up to wehomeo}. \begin{notation} To compute $\mathcal{M}_{I^3}^{\rm weh}$, we use the following notations. Recall that we denote by $\Xi_3$ the set of characteristic squares on $I^3$. \begin{itemize} \item We denote by $\phi \colon\thinspace \Xi_3\to \mathcal{M}^{\rm weh}_{I^3}$ the surjection given by $\xi\mapsto M(\xi)$.
\item For $V_1,V_2,V_3\subseteq \mathbb{Z}^2$, we define $$\Xi(V_1,V_2,V_3):=\Set{ \left( \begin{array}{ccc} 1 &x_1 &x_2 \\ y_1 &1 &x_3 \\ y_2 &y_3 &1 \end{array} \right) \in \Xi_3 | \left( \begin{array}{c} x_i \\ y_i \end{array} \right)\in V_i\text{ for } i=1,2,3 }.$$ \item We put $P_+:=\Set{\left( \begin{array}{c} k \\ 0 \end{array} \right)| k\in\mathbb{Z}}$, $P_-:=\Set{\left( \begin{array}{c} 0 \\ k \end{array} \right)| k\in\mathbb{Z}}$, $N_+:=\left\{\pm\left( \begin{array}{c} 2 \\ 1 \end{array} \right)\right\}$, $N_-:=\left\{\pm\left( \begin{array}{c} 1 \\ 2 \end{array} \right)\right\}$, $C_0 := P_+ \cup P_-$, and $C_2:=N_+ \cup N_-$. \item We put $C_{\epsilon_1,\epsilon_2,\epsilon_3}:=\Xi(C_{\epsilon_1},C_{\epsilon_2},C_{\epsilon_3})$ for $(\epsilon_1,\epsilon_2,\epsilon_3)\in \{0,2\}^3$. \item We define $\sigma_i,\tau_i \in \mathrm{Aut} (I^3)$ ($i=1,2,3$) by $$ \sigma_1:=(1\ 2)(4\ 5),\ \sigma_2:=(1\ 3)(4\ 6),\ \sigma_3:=(2\ 3)(5\ 6),\ \tau_i:=(i\ i+3)$$ and $\delta_i\in GL_3(\mathbb{Z})$ ($i=1,2,3$) by $$ \delta_1:=\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right),\ \delta_2:=\left(\begin{array}{ccc}0&0&1\\0&1&0\\1&0&0\end{array}\right),\ \delta_3:=\left(\begin{array}{ccc}1&0&0\\0&0&1\\0&1&0\end{array}\right).$$ Additionally, we take $\mu_i\in(\mathbb{Z}/2)^6$ ($i=1,2,3$) such that the $i$-th and ($i+3$)-th components are $-1$ and the other components are $1$, and $\nu_i\in GL_3(\mathbb{Z})$ ($i=1,2,3$) which acts on $\mathbb{Z}^3$ by multiplication by $-1$ on the $i$-th component. \end{itemize} \end{notation} \begin{rem} Since $(\delta_i,\sigma_i)\cdot(E_3\, \xi)$ and $(\nu_i,\mu_i)\cdot(E_3\, \xi)$ (see Definition \ref{df:action on Lambda_P}) are in the form $(E_3\ \xi')$ for each $\xi\in\Xi_3$ and $i=1,2,3$, we can regard that $(\delta_i,\sigma_i)$ and $(\nu_i,\mu_i)$ act on $\Xi_3$. \end{rem} \begin{lem}\label{lem:first restriction} The restriction of $\phi$ to $C_{0,0,0}\cup C_{0,0,2}\cup C_{0,2,2}\cup C_{2,2,2}$ is surjective. \end{lem} \begin{proof} Let $\xi$ be a characteristic square on $I^3$ and write $$\xi=\left( \begin{array}{ccc} 1 &x_1 &x_2 \\ y_1 &1 &x_3 \\ y_2 &y_3 &1 \end{array} \right).$$ From the nonsingularity condition, $1-x_i y_i=\pm 1$ ($i=1,2,3$). This implies that each ${}^t(x_i,y_i)$ belongs to $C_0$ or $C_2$. Therefore we obtain $$\Xi_3=\bigcup_{\epsilon_1,\epsilon_2,\epsilon_3\in\{0,2\}}C_{\epsilon_1,\epsilon_2,\epsilon_3}.$$ Since $(\delta_1,\sigma_1)\cdot C_{\epsilon_1,\epsilon_2,\epsilon_3} =C_{\epsilon_1,\epsilon_3,\epsilon_2}$, we have $\phi(C_{\epsilon_1,\epsilon_2,\epsilon_3})=\phi(C_{\epsilon_1,\epsilon_3,\epsilon_2})$. Similarly, we have $\phi(C_{\epsilon_1,\epsilon_2,\epsilon_3})=\phi(C_{\epsilon_3,\epsilon_2,\epsilon_1})$ and $\phi(C_{\epsilon_1,\epsilon_2,\epsilon_3})=\phi(C_{\epsilon_2,\epsilon_1,\epsilon_3})$. Hence we see that $$ \mathcal{M}_{I^3}^{\rm weh}=\bigcup_{\epsilon_1,\epsilon_2,\epsilon_3\in\{0,2\}}\phi(C_{\epsilon_1,\epsilon_2,\epsilon_3}) =\phi(C_{0,0,0})\cup \phi(C_{0,0,2})\cup \phi(C_{0,2,2})\cup \phi(C_{2,2,2}) .$$ Thus we obtain the lemma. \end{proof} Let us put $P_{s_1,s_2,s_3}:=\Xi(P_{s_1},P_{s_2},P_{s_3})$ where $s_i\in\{+,-\}$ ($i=1,2,3$). Then we have $$C_{0,0,0}=\bigcup_{s_1,s_2,s_3\in \{+,-\}} P_{s_1,s_2,s_3}. $$ Moreover, $(\delta_i,\sigma_i)$ ($i=1,2,3$) act as follows. 
\[ \xymatrix { P_{+,+,+} \ar[1,0]_{(\delta_3,\sigma_3)} \ar[1,2]^(0.6){(\delta_2,\sigma_2)} \ar[0,2]^{(\delta_1,\sigma_1)} && P_{-,+,+} \ar[1,2]^(0.6){(\delta_3,\sigma_3)} \ar[0,2]^{(\delta_2,\sigma_2)} \ar[0,-2] && P_{-,-,+} \ar[0,-2] & P_{+,-,+} \ar[1,0]^{(\delta_2,\sigma_2)} \\ P_{+,+,-} \ar[-1,0] &&P_{-,-,-} \ar[-1,-2] && P_{+,-,-} \ar[-1,-2] & P_{-,+,-} \ar[-1,0] } \] Thus we obtain the following lemma. \begin{lem} $\phi(C_{0,0,0})=\phi(P_{+,+,+}\cup P_{+,-,+})$. \end{lem} Suppose that $$\xi=\left( \begin{array}{ccc} 1 &x_1 &0 \\ 0 &1 &x_3 \\ x_2 &0 &1 \end{array} \right) \in P_{+,-,+}.$$ Then, from the nonsingularity condition, we have $x_1 x_2 x_3 =-1\pm 1$. \begin{itemize} \item If $x_1 x_2 x_3 =0$, then $\xi \in P_{-,-,+}\cup P_{+,+,+} \cup P_{+,-,-}$. \item If $x_1 x_2 x_3 =-2$, then there exists $\psi\in GL_3(\mathbb{Z})$ such that $(\psi,\tau_1)\cdot(E_3\ \xi)=(E_3\ \xi')$ where $$\xi'= \left( \begin{array}{ccc} 1 &x_1 &0 \\ 0 &1 &x_3 \\ -x_2 &-x_1 x_2 &1 \end{array} \right)\in C_{0,0,2}. $$ \end{itemize} Thus we obtain the following lemma. \begin{lem}\label{lem:C_{0,0,0}} Put $A_1:=P_{+,+,+}$. Then we have $\phi(C_{0,0,0})\subseteq \phi(A_1) \cup \phi(C_{0,0,2})$. \end{lem} For $C_{0,0,2}$, we prove the following lemma first. Put $$C'_{0,0,2}:=\left\{\left( \begin{array}{ccc} 1 &x_1 &x_2 \\ y_1 &1 &2 \\ y_2 &1 &1 \end{array} \right)\in C_{0,0,2} \right\}.$$ \begin{lem} $\phi(C'_{0,0,2})=\phi(C_{0,0,2})$. \end{lem} \begin{proof} Since $(\delta_3,\sigma_3)$ gives a bijection between $\Xi(C_0,C_0,N_+)$ and $\Xi(C_0,C_0,N_-)$, we have $\phi(C_{0,0,2})=\phi(\Xi(C_0,C_0,N_+))$. By using $(\nu_3,\mu_3)$, we see that $\phi(C'_{0,0,2})=\phi(\Xi(C_0,C_0,N_+))$. \end{proof} Suppose that $$\xi=\left( \begin{array}{ccc} 1 &x_1 &0 \\ 0 &1 &2 \\ y_2 &1 &1 \end{array} \right)\in C'_{0,0,2}.$$ From the nonsingularity condition, we have $2 x_1 y_2 =1\pm 1$. \begin{itemize} \item If $x_1 y_2 =0$, then $\xi \in \Xi(P_+,P_+,N_+) \cup \Xi(P_-,P_-,N_+)$. \item If $x_1 y_2 =1$, then $x_1=\pm 1$ and $$\xi=\left( \begin{array}{ccc} 1 &x_1 &0 \\ 0 &1 &2 \\ x_1 &1 &1 \end{array} \right).$$ \end{itemize} Similarly, if we assume $$\xi=\left( \begin{array}{ccc} 1 &0 &x_2 \\ y_1 &1 &2 \\ 0 &1 &1 \end{array} \right)\in C'_{0,0,2},$$ then we see that $\xi \in \Xi(P_+,P_+,N_+) \cup \Xi(P_-,P_-,N_+)$ or $$\xi=\left( \begin{array}{ccc} 1 &0 &2a \\ a &1 &2 \\ 0 &1 &1 \end{array} \right),\left( \begin{array}{ccc} 1 &0 &b \\ 2b &1 &2 \\ 0 &1 &1 \end{array} \right)$$ where $a$ and $b$ are $\pm 1$. By using the action of $(\nu_1,\mu_1)$, we obtain the following. \begin{lem}\label{lem:C_{0,0,2}} Put $A_2:=C'_{0,0,2}\cap \Xi(P_+,P_+,N_+)$, $A_3:=C'_{0,0,2}\cap \Xi(P_-,P_-,N_+)$, $$\chi_1:=\left( \begin{array}{ccc} 1 &0 &2 \\ 1 &1 &2 \\ 0 &1 &1 \end{array} \right),\,\chi_2:=\left( \begin{array}{ccc} 1 &0 &1 \\ 2 &1 &2 \\ 0 &1 &1 \end{array} \right),\,\chi_3:=\left( \begin{array}{ccc} 1 &1 &0 \\ 0 &1 &2 \\ 1 &1 &1 \end{array} \right).$$ Then we have $\phi(C_{0,0,2})=\phi(A_2\cup A_3\cup \{\chi_1,\chi_2,\chi_3\})$. \end{lem} For $C_{0,2,2}$, we have the following. 
\begin{lem}\label{lem:C_{0,2,2}} Put \begin{align*} \chi_4 &:=\left( \begin{array}{ccc} 1 &1 &1 \\ 0 &1 &1 \\ 2 &2 &1 \end{array} \right), & \chi_5 &:=\left( \begin{array}{ccc} 1 &2 &1 \\ 0 &1 &1 \\ 2 &2 &1 \end{array} \right), & \chi_6 &:=\left( \begin{array}{ccc} 1 &1 &1 \\ 0 &1 &2 \\ 2 &1 &1 \end{array} \right), & \chi_7 &:=\left( \begin{array}{ccc} 1 &2 &2 \\ 0 &1 &1 \\ 1 &2 &1 \end{array} \right), \\ \chi_8 &:=\left( \begin{array}{ccc} 1 &4 &2 \\ 0 &1 &1 \\ 1 &2 &1 \end{array} \right), & \chi_9 &:=\left( \begin{array}{ccc} 1 &1 &2 \\ 0 &1 &2 \\ 1 &1 &1 \end{array} \right), & \chi_{10} &:=\left( \begin{array}{ccc} 1 &2 &2 \\ 0 &1 &2 \\ 1 &1 &1 \end{array} \right). \end{align*} Then we have $\phi(C_{0,2,2})=\phi(\{\chi_4,\ldots,\chi_{10}\})$. \end{lem} \begin{proof} Take $\xi \in C_{0,2,2}$. By using $(\delta_1,\sigma_1)$, $(\nu_2,\mu_2)$ and $(\nu_3,\mu_3)$, we can assume that $\xi$ is in one of the following forms: $$ \xi=\left( \begin{array}{ccc} 1 &x &1 \\ 0 &1 &1 \\ 2 &2 &1 \end{array} \right),\ \left( \begin{array}{ccc} 1 &x &1 \\ 0 &1 &2 \\ 2 &1 &1 \end{array} \right),\ \left( \begin{array}{ccc} 1 &x &2 \\ 0 &1 &1 \\ 1 &2 &1 \end{array} \right),\ \left( \begin{array}{ccc} 1 &x &2 \\ 0 &1 &2 \\ 1 &1 &1 \end{array} \right).$$ If $\xi$ is in the first form, then we have $x=1,2$ by the nonsingularity condition. Similarly, $x=1$ if $\xi$ is in the second form, $x=2,4$ in the third form, and $x=1,2$ in the forth form. Thus we obtain the lemma. \end{proof} Similarly, we have the following lemma for $C_{2,2,2}$. \begin{lem}\label{lem:C_{2,2,2}} Put $$ \chi_{11}:=\left( \begin{array}{ccc} 1 &2 &2 \\ 1 &1 &2 \\ 1 &1 &1 \end{array} \right).$$ Then we have $\phi(C_{2,2,2})=\{ \phi(\chi_{11}) \}$. \end{lem} \begin{proof} Take $$ \xi=\left( \begin{array}{ccc} 1 &x_1 &x_2 \\ y_1 &1 &x_3 \\ y_2 &y_3 &1 \end{array} \right)\in C_{2,2,2}. $$ By using $(\delta_1,\sigma_1)$, $(\nu_2,\mu_2)$ and $(\nu_3,\mu_3)$, we can assume $x_1=2$, $y_1=1$ and $x_2,y_2>0$. From the nonsingularity condition, we have $2 y_2 x_3 + x_2 y_3 = 5\pm 1$. Then it is straightforward to obtain $$ \xi=\left( \begin{array}{ccc} 1 &2 &2 \\ 1 &1 &2 \\ 1 &1 &1 \end{array} \right),\left( \begin{array}{ccc} 1 &2 &2 \\ 1 &1 &1 \\ 1 &2 &1 \end{array} \right),\left( \begin{array}{ccc} 1 &2 &1 \\ 1 &1 &1 \\ 2 &2 &1 \end{array} \right). $$ Moreover, if we put them $\xi_1$, $\xi_2$ and $\xi_3$, then we have $(\delta_3,\sigma_3)\cdot\xi_1=\xi_2$ and $(\delta_2,\sigma_2)\cdot\xi_2=\xi_3$. \end{proof} Taking $\tau_i$ ($i=1,2,3$) into account, we have the following diagram. 
\[ \xymatrix { \chi_1 \ar[1,0]_{\tau_3} \ar[1,1]^{\tau_2} \ar[0,1]^{\tau_1} & \gamma_1 \ar[1,1]^{\tau_3} \ar[0,1]^{\tau_2} & \gamma_2 \ar[0,1]^{\sigma_1} & \chi_3 & \chi_4 \ar[1,0]^{\tau_1} & \chi_6 \ar[1,0]^{\tau_3} \\ \gamma_3 \ar[1,0]_{\sigma_3} &\gamma_4 \ar[1,0]_{\tau_3} \ar[1,1]^{\sigma_1} & \gamma_5 \ar[0,1]^{\sigma_3 \circ \sigma_2} & \chi_2 & \gamma_6 \ar[1,0]^{\sigma_3 \circ \sigma_1} & \gamma_7 \ar[1,0]^{\sigma_1} \\ \chi_7 & \chi_{11} & \chi_9 & & \chi_3 & \chi_8 } \] Here the arrow $\xi_1\xrightarrow{\rho}\xi_2$ means that there exist $\psi\in GL_3(\mathbb{Z})$ and $\mu\in(\mathbb{Z}/2)^6$ such that $(\psi,\mu\circ\rho)\cdot(E_3\ \xi_1)=(E_3\ \xi_2)$, and $\gamma_i$ ($i=1,\dots,7$) denote the following characteristic squares: \begin{align*} \gamma_1 &:=\left( \begin{array}{ccc} 1 &0 &2 \\ -1 &1 &0 \\ 1 &1 &1 \end{array} \right), & \gamma_2 &:=\left( \begin{array}{ccc} 1 &0 &2 \\ 1 &1 &0 \\ 1 &1 &1 \end{array} \right), & \gamma_3 &:=\left( \begin{array}{ccc} 1 &2 &2 \\ 1 &1 &2 \\ 0 &1 &1 \end{array} \right), & \gamma_4 &:=\left( \begin{array}{ccc} 1 &0 &2 \\ 1 &1 &2 \\ 1 &1 &1 \end{array} \right), \\ \gamma_5 &:=\left( \begin{array}{ccc} 1 &2 &2 \\ 1 &1 &0 \\ 0 &1 &1 \end{array} \right), & \gamma_6 &:=\left( \begin{array}{ccc} 1 &1 &1 \\ 0 &1 &1 \\ 2 &0 &1 \end{array} \right), & \gamma_7 &:=\left( \begin{array}{ccc} 1 &0 &1 \\ 4 &1 &2 \\ 2 &1 &1 \end{array} \right). \end{align*} Summarizing Lemma \ref{lem:first restriction}, Lemma \ref{lem:C_{0,0,0}}, Lemma \ref{lem:C_{0,0,2}}, Lemma \ref{lem:C_{0,2,2}}, Lemma \ref{lem:C_{2,2,2}} and the above, we obtain the following. Note that $\chi_3$ appears twice in the diagram and $\chi_5$, $\chi_{10}$ do not appear. \begin{lem}\label{lem:summary up to w.e.homeo.} $\mathcal{M}^{\rm weh}_{I^3}= \phi(A_1\cup A_2\cup A_3\cup \{ \chi_1,\chi_5,\chi_6,\chi_{10} \})$. \end{lem} \begin{df} We define $\xi_{s,t}\,(s,t\in\mathbb{Z})$ by $$\xi_{s,t}:=\left( \begin{array}{ccc} 1 &s &t \\ 0 &1 &2 \\ 0 &1 &1 \end{array} \right) \in A_2,$$ and $\xi^{s,t}\,(s,t\in\mathbb{Z})$ by $$\xi^{s,t}:=\left( \begin{array}{ccc} 1 &0 &0 \\ s &1 &2 \\ t &1 &1 \end{array} \right) \in A_3.$$ Note that $A_2=\{\xi_{s,t}\}_{s,t\in\mathbb{Z}}$ and $A_3=\{\xi^{s,t}\}_{s,t\in\mathbb{Z}}$. \end{df} \section{Strong cohomological rigidity of $\mathcal{M}_{I^3}$} In this section, for $\xi\in\Xi_3$, we denote the generators of $H^*(\xi)$ by $X$, $Y$ and $Z$ instead of $X_1$, $X_2$ and $X_3$ (see Definition \ref{H^*(xi)}). Additionally, we define $H^*(\xi;\mathbb{Z}/2):=H^*(\xi)/2$, $w_2(\xi):=\sum_{i=1}^6 u_i(\xi) \in H^2(\xi;\mathbb{Z}/2)$, $p_1(\xi):=-\sum_{i=1}^6 u_i(\xi)^2\in H^4(\xi)$ and identify $w_2(\xi)$, $p_1(\xi)$ with $w_2(M(\xi))$, $p_1(M(\xi))$ respectively through the canonical isomorphism between $H^*(\xi)$ and $H^*(M(\xi);\mathbb{Z})$ (see Theorem \ref{thm:characteristic classes of a quasitoric manifold}). \begin{df} Let $\mathcal{M}_{I^3}$ be the set of homeomorphism classes of quasitoric manifolds over $I^3$ and $\phi_1$ be the canonical surjection from $\mathcal{M}^\mathrm{weh}_{I^3}$ to $\mathcal{M}_{I^3}$. We define subsets $\mathcal{M}_1$, $\mathcal{M}_2$ and $\mathcal{M}_3$ of $\mathcal{M}_{I^3}$ by $\mathcal{M}_1:=\phi_1 \circ \phi (A_1)$, $\mathcal{M}_2:=\phi_1 \circ \phi (A_2\setminus \{ \xi_{0,0} \})$ and $\mathcal{M}_3:=\phi_1 \circ \phi (A_3)$. 
In addition, we define $\mathcal{M}^\mathrm{ceq}_{I^3}$ as the quotient $\mathcal{M}_{I^3}/\!\sim$ where $M\sim M'$ if and only if $H^*(M;\mathbb{Z})\cong H^*(M';\mathbb{Z})$ as graded rings, and denote the quotient map by $\phi_2\colon\thinspace \mathcal{M}_{I^3} \to \mathcal{M}^\mathrm{ceq}_{I^3}$. \end{df} \begin{df} A class $\mathcal{C}$ of topological spaces is called \textit{strongly cohomologically rigid} if for any graded ring isomorphism $\varphi$ between the cohomology rings of $X,Y\in\mathcal{C}$ there exists a homeomorphism $f$ between them such that $\varphi=f^*$. \end{df} \begin{rem}\label{rem:strong cohomological rigidity of M_1} By \cite[Proposition 6.2]{CMS10}, we see that $\mathcal{M}_1$ corresponds with the class of 3-stage Bott manifolds. Then we obtain the strong cohomological rigidity of $\mathcal{M}_1$ by \cite[Theorem 3.1]{Choi} which shows the strong cohomological rigidity of 3-stage Bott manifolds. \end{rem} \begin{lem}\label{lem:M(chi_5),M(chi_6) in M_2} $M(\chi_5),M(\chi_6),M(\chi_{10})\in\mathcal{M}_2$. Therefore $\mathcal{M}_{I^3}=\mathcal{M}_1\cup \mathcal{M}_2\cup \mathcal{M}_3\cup \{\chi_1\}$. \end{lem} \begin{proof} Define graded ring automorphisms $\alpha_5,\alpha_6,\alpha_{10}$ of $\mathbb{Z}[X,Y,Z]$ so that $$ \alpha_i\left(\begin{array}{c}X\\Y\\Z\end{array}\right)= A_i\left(\begin{array}{c}X\\Y\\Z\end{array}\right) $$ where $A_i$ ($i=5,6,10$) denote the following matrices: $$A_5 :=\left(\begin{array}{ccc} 1 &0 &0 \\ 0 &0 &1 \\ 1 &1 &0 \end{array} \right),\, A_6:=\left( \begin{array}{ccc} -1 &0 &0 \\ 2 &1 &0 \\ 0 &0 &1 \end{array} \right),\, A_{10}:=\left( \begin{array}{ccc} 1 &0 &0 \\ 1 &1 &0 \\ 0 &0 &1 \end{array} \right). $$ Then these $\alpha_i$'s descend to isomorphisms $\alpha_5\colon\thinspace H^*(\xi_{-1,-2})\to H^*(\chi_5)$, $\alpha_6\colon\thinspace H^*(\xi_{1,1})\to H^*(\chi_6)$, $\alpha_{10}\colon\thinspace H^*(\xi_{-2,-2})\to H^*(\chi_{10})$ and they preserve the second Stiefel-Whitney classes and the first Pontrjagin classes. Thus we obtain the lemma by Theorem \ref{thm:classification theorem of 6-manifolds}. \end{proof} \begin{lem} Let $\mathbb{Z}[Y,Z]$ be the polynomial ring generated by $Y$ and $Z$ of degree $2$, and $R$ be the quotient ring $\mathbb{Z}[Y,Z]/(Y(Y+2Z),Z(Y+Z))$. Then $R$ has no non-zero element of degree $2$ of which the square is equal to $0$. \end{lem} \begin{proof} Let $W=s Y+t Z$ be an element of which the square is $0$. Then $0=W^2=(s Y+t Z)^2=(-2s^2+2s t-t^2)YZ=-\{s^2+(s-t)^2\}YZ$, so we have $s=s-t=0$, i.e. $W=0$. \end{proof} \begin{rem}\label{rem:nil-square element} For any $\xi \in \Xi(\mathbb{Z}^2,\mathbb{Z}^2,C_2)$, since $H^*(\xi)/(X)$ is isomorphic to $R$, the set $\{W\in H^2(\xi)\,|\,W^2=0\}$ is equal to $\mathbb{Z} X$ or $\{0\}$. \end{rem} This remark immediately yields the following lemma. \begin{lem}\label{lem:M_1,M_3/M_2,M_4} Let $M$ be a quasitoric manifold over $I^3$. Then there exists non-zero $W \in H^2(M;\mathbb{Z})$ such that $W^2=0$ if and only if $M\in \mathcal{M}_1 \cup \mathcal{M}_3$. In particular, $\phi_2(\mathcal{M}_1 \cup \mathcal{M}_3) \cap \phi_2(\mathcal{M}_2 \cup \{\chi_1\})=\emptyset$. \end{lem} \begin{lem}\label{lem:M_1/M_3} $\phi_2(\mathcal{M}_1)\cap\phi_2(\mathcal{M}_3)=\emptyset$. \end{lem} \begin{proof} Let $\xi_1 \in A_1,\,\xi_3 \in A_3$ and suppose that there exists an isomorphism $\alpha \colon\thinspace H^*(\xi_1)\to H^*(\xi_3)$. 
Since $\alpha$ preserves the elements of which the squares are 0, $\alpha$ descends to an isomorphism $\bar{\alpha}\colon\thinspace H^*(\xi_1)/(Z) \to H^*(\xi_3)/(X)$. However, $H^*(\xi_1)/(Z)$ has non-zero degree 2 elements of which the squares are zero, but $H^*(\xi_3)/(X)\cong R$ does not. This is a contradiction. \end{proof} \begin{lem}\label{lem:M_2/chi_1} Let $\xi_0$ be a characteristic square on $I^{n-1}$ ($n\geq 3$), $\xi$ be a $(1,\xi_0)$-type characteristic square on $I^n$, and $\varphi\colon\thinspace \mathbb{Z}[X,Y,Z]\to \mathbb{Z}[X_1,\ldots,X_n]$ be a graded ring monomorphism which maps $X$, $Y$ and $Z$ to $\sum_{i=1}^n a_i X_i$, $\sum_{i=1}^n b_i X_i$ and $\sum_{i=1}^n c_i X_i$ respectively. Moreover, we assume the following: \begin{enumerate} \item[(a)] $\varphi$ maps $\mathcal{I}_{\chi_1}$ into $\mathcal{I}_{\xi}$; \item[(b)] for each prime $p$ the $\bmod p$ reductions of $(a_1,\ldots,a_n)$, $(b_1,\ldots,b_n)$ and $(c_1,\ldots,c_n)$ are linearly independent. \end{enumerate} Then we have $a_1=b_1=c_1=0$. In particular, there exists no graded ring isomorphism from $H^*(\chi_1)$ to $H^*(\xi_{s,t})$ for any integers $s$ and $t$. \end{lem} \begin{proof} Denote the first row of $\xi$ by $(1,s_2,s_3,\ldots,s_n)$. Since $\xi$ is $(1,\xi_0)$-type and \begin{align*} & \varphi(X(X+2 Z)) = \left(\sum_{i=1}^n a_i X_i\right)\left\{ \sum_{i=1}^n (a_i+2 c_i) X_i \right\} \\ & = X_1 \left\{ a_1(a_1+2 c_1) X_1 + \sum_{i=2}^n \{a_1(a_i+2 c_i)+a_i(a_1+2 c_1)\} X_i \right\}+(\text{a polynomial in }X_2,\ldots,X_n) \\ & = X_1 \sum_{i=2}^n \{ a_1(a_i+2 c_i)+(a_i-s_i a_1)(a_1+2 c_1) \} X_i +(\text{a polynomial in }X_2,\ldots,X_n)=0 \end{align*} in $H^*(\xi)$, we obtain $a_1(a_i+2 c_i)=(s_i a_1 -a_i)(a_1+2 c_1)$ for $i=2,\ldots,n$. Note that, by the assumption of linear independence, $a_1+2c_1=0$ if $a_1=0$ and vice versa. If $a_1$ and $a_1+2 c_1$ are non-zero, denoting by $k$ the greatest common divisor of $a_1$ and $a_1+2 c_1$, we see that $a_1/k$ and $(a_1+2 c_1)/k$ divide $s_i a_1 -a_i$ and $a_i +2 c_i$ ($i=2,\ldots,n$) respectively. By the assumption (b), we obtain $a_1/k=\pm 1$ and $(a_1+2 c_1)/k=\pm 1$, namely, $a_1+c_1=0$ or $c_1=0$. This holds also in the case $a_1=a_1+2 c_1=0$. Similarly, we have $b_1(a_i+b_i+2 c_i)=(s_i b_1-b_i)(a_1+b_1+2 c_1)$ and $c_1(b_i+ c_i)=(s_i c_1-c_i)(b_1+ c_1)$ for $i=2,\ldots,n$ from $\varphi(Y(X+Y+2 Z))=0$ and $\varphi(Z(Y+Z))=0$ respectively, and then obtain $a_1+2 c_1=0$ or $a_1+2 b_1+2 c_1=0$, and, $b_1=0$ or $b_1+2 c_1=0$ in the same way as above. We solve these equations to see $a_1=b_1=c_1=0$. \end{proof} \begin{rem}\label{rem:remark on proof over I^3} By Remark \ref{rem:strong cohomological rigidity of M_1}, Lemma \ref{lem:M(chi_5),M(chi_6) in M_2}, Lemma \ref{lem:M_1,M_3/M_2,M_4}, Lemma \ref{lem:M_1/M_3} and Lemma \ref{lem:M_2/chi_1}, to show the strong cohomological rigidity of $\mathcal{M}_{I^3}$, we only have to show that of $\mathcal{M}_2$, $\mathcal{M}_3$, and $\{ M(\chi_1) \}$ respectively. \end{rem} \begin{lem}\label{filtration lemma for M_2} Let $\xi_0$ be a characteristic square on $I^{n-1}$ ($n\geq 2$), $\xi$ be a $(1,\xi_0)$-type characteristic square on $I^n$, and $\varphi\colon\thinspace \mathbb{Z}[X_1,X_2]\to \mathbb{Z}[X_1,\ldots,X_n]$ be a graded ring monomorphism which maps $X_1$ and $X_2$ to $\sum_{i=1}^n a_i X_i$ and $\sum_{i=1}^n b_i X_i$ respectively. 
Moreover, we assume the following: \begin{enumerate} \item[(a)] $\varphi$ maps $\mathcal{I}_{\kappa^2}$ into $\mathcal{I}_{\xi}$; \item[(b)] for any prime $p$ the $\bmod p$ reductions of $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$ are linearly independent. \end{enumerate} Then we have $a_1=b_1=0$. \end{lem} \begin{proof} Denote the first row of $\xi$ by $(1,s_2,s_3,\ldots,s_n)$. Since $\xi$ is $(1,\xi_0)$-type and \begin{align*} & \varphi(X_1(X_1+2 X_2)) = \left(\sum_{i=1}^n a_i X_i\right)\left\{ \sum_{i=1}^n (a_i+2 b_i) X_i \right\} \\ & = X_1 \left\{ a_1(a_1+2 b_1) X_1 + \sum_{i=2}^n \{a_1(a_i+2 b_i)+a_i(a_1+2 b_1)\} X_i \right\}+(\text{a polynomial in }X_2,\ldots,X_n) \\ & = X_1 \sum_{i=2}^n \{ a_1(a_i+2 b_i)+(a_i-s_i a_1)(a_1+2 b_1) \} X_i +(\text{a polynomial in }X_2,\ldots,X_n)=0 \end{align*} in $H^*(\xi)$, we obtain $a_1(a_i+2 b_i)=(s_i a_1 -a_i)(a_1+2 b_1)$ for $i=2,\ldots,n$. In the same way as the proof of Lemma \ref{lem:M_2/chi_1}, we obtain $a_1+b_1=0$ or $b_1=0$, which implies that the coefficient of $X_1$ in $\varphi(X_1+X_2)$ or $\varphi(X_2)$ is zero. Then we easily see that $\varphi(X_2(X_1+X_2))\neq 0$ in $H^*(\xi)$ unless both $b_1$ and $a_1+b_1$ are zero. \end{proof} \begin{lem}\label{lem:strong cohomological rigidity of M_2} $\mathcal{M}_2$ is strongly cohomologically rigid. \end{lem} \begin{proof} Let $\varphi\colon\thinspace H^*(\xi_{s,t})\to H^*(\xi_{x,y})$ be a graded ring isomorphism. By Lemma \ref{filtration lemma for M_2}, $$\varphi\left(\begin{array}{c}X\\Y\\Z\end{array}\right)=\left( \begin{array}{c | c c} a &b &c \\ \hline 0 & \multicolumn{2}{c}{\raisebox{-11pt}[0pt][0pt]{\LARGE $\theta$}} \\ 0 & \end{array} \right)\left(\begin{array}{c}X\\Y\\Z\end{array}\right)$$ where $a=\pm 1$ and $\theta$ is an automorphism of $H^*(\kappa^2)$. By Lemma \ref{realization of Aut(kappa^2)}, $\theta$ can be realized as a weakly equivariant self-homeomorphism of $M(\kappa^2)$, and therefore we can construct a weakly equivariant homeomorphism $f$ from $M(\xi_{x,y})$ to some $M(\xi_{x',y'})$ such that $$f^*\left(\begin{array}{c}X\\Y\\Z\end{array}\right)=\left( \begin{array}{c | c c} a &0 &0 \\ \hline 0 & \multicolumn{2}{c}{\raisebox{-11pt}[0pt][0pt]{\LARGE $\theta$}} \\ 0 & \end{array} \right)\left(\begin{array}{c}X\\Y\\Z\end{array}\right)$$ in the similar way to the proof of Corollary \ref{1-I}. Thus we see that we can assume $a=1$ and $\theta=E_2$. Since $\varphi$ maps $\mathcal{I}_{\xi_{s,t}}$ into $\mathcal{I}_{\xi_{x,y}}$, \begin{align*} & \varphi(X(X+s Y+t Z)) = (X+ b Y+c Z)\{X+(s+b)Y+(t+c)Z\} \\ & = X \{ (s-x+2 b)Y+(t-y+2 c)Z \}+\{ -2 b(s+b) -c(t+c) +b(t+c) +c(s+b) \} YZ \\ & =0 \end{align*} in $H^*(\xi_{x,y})$. Thus we obtain $$b=\frac{x-s}{2},\,c=\frac{y-t}{2},\,(s-t)^2+s^2=(x-y)^2+x^2.$$ In particular, $s\equiv x$ and $t\equiv y$ modulo 2. Then we have $$ \varphi(w_2(\xi_{s,t}))=\varphi((s+1) Y+t Z))=(s+1)Y+t Z=w_2(\xi_{x,y}) $$ in $H^*(\xi_{x,y};\mathbb{Z}/2)$. Similarly, since $\varphi(2 X+sY+t Z)-(2 X+x Y+y Z)=0$, we have \begin{align*} & p_1(\xi_{x,y}) - \varphi(p_1(\xi_{s,t})) = \varphi(X)^2+\varphi(X+sY+t Z)^2-X^2-(X+x Y+y Z)^2 \\ & = \varphi(2 X+sY+t Z)^2-(2 X+x Y+y Z)^2 \\ & = \{ \varphi(2 X+sY+t Z)+(2 X+x Y+y Z) \} \{ \varphi(2 X+sY+t Z)-(2 X+x Y+y Z) \} \\ & = 0 \end{align*} in $H^*(\xi_{x,y})$. Thus we obtain the lemma by Theorem \ref{thm:classification theorem of 6-manifolds}. 
\end{proof} \begin{lem}\label{lem:strong cohomological rigidity of M_3} Any graded ring isomorphism between the cohomology rings of two members of $\phi(A_3)$ is induced by a weakly equivariant homeomorphism. In particular, $\mathcal{M}_3$ is strongly cohomologically rigid. \end{lem} \begin{proof} Note that $(\psi_3\psi_2,\sigma_3\circ\sigma_2)\cdot\xi^{s,t}$ is a $(\kappa^2,1)$-type characteristic square. Let $\xi$ and $\xi'$ be two $(\kappa^2,1)$-type characteristic squares and $\varphi\colon\thinspace H^*(\xi)\to H^*(\xi')$ be a graded ring isomorphism. Since $\varphi$ preserves the elements of degree 2 of which the squares are zero, we have $$\varphi\left(\begin{array}{c}X\\Y\\Z\end{array}\right)=\left( \begin{array}{c c | c} \multicolumn{2}{c|}{\raisebox{-11pt}[0pt][0pt]{\LARGE $\theta$}} & a\\ && b \\ \hline 0 & 0 & c \end{array} \right)\left(\begin{array}{c}X\\Y\\Z\end{array}\right).$$ As in the proof of the previous lemma, we can assume $c=1$ and $\theta=E_2$. Then we have $\xi=\xi'$ and $\varphi=(\mathrm{id}_{M(\xi)})^*$ by Lemma \ref{lemma for 1-II}. \end{proof} \begin{lem}\label{lem:automorphisms of chi_1} Let $\varphi$ be a graded ring automorphism of $H^*(\chi_1)$. Then $\varphi= \pm \mathrm{id} $. \end{lem} \begin{proof} Take $A\in GL_3(\mathbb{Z})$ so that $$\varphi\left(\begin{array}{c}X\\Y\\Z\end{array}\right) =A\left(\begin{array}{c}X\\Y\\Z\end{array}\right)$$ and denote the $i$-th row of $A$ by $(a_i,b_i,c_i)$ ($i=1,2,3$). Then $A$ satisfies the following equations. \begin{align} \label{eq:chi_1,1} (a_1-b_1)(b_1+2 b_3) &= -b_1(a_1+2 a_3) \\ (c_1-2b_1)(b_1+2 b_3) &= (c_1-b_1)(c_1+2 c_3) \\ (c_1-2a_1)(a_1+2a_3) &= -a_1(c_1+2c_3) \\ (a_2-b_2)(b_1+b_2+2 b_3) &= -b_2(a_1+a_2+2 a_3) \\ (c_2-2b_2)(b_1+b_2+2 b_3) &= (c_2-b_2)(c_1+c_2+2 c_3) \\ (c_2-2a_2)(a_1+a_2+2a_3) &= -a_2(c_1+c_2+2c_3) \\ (a_3-b_3)(b_2+b_3) &= -b_3(a_2+a_3) \\ (c_3-2b_3)(b_2+b_3) &= (c_3-b_3)(c_2+c_3) \\ \label{eq:chi_1,9} (c_3-2a_3)(a_2+a_3) &= -a_3(c_2+c_3) \end{align} By solving these equations modulo $2$, we obtain $$A \equiv \left( \begin{array}{ccc} 1 &0 &0 \\ 0 &1 &0 \\ 0 &b_3 &1 \end{array} \right) \bmod 2. $$ Since $a_1$ is odd and $b_1$ is even, we have $$ \frac{b_1}{2}+b_3\equiv -\frac{b_1}{2} \bmod 2$$ from the equation (\ref{eq:chi_1,1}), which implies $b_3\equiv 0\bmod 2$. Moreover, we obtain $c_2-b_2,\,c_3-b_3,\,c_3-2b_3=\pm 1$ as follows. Note that $c_2-b_2$, $c_3-b_3$, and $c_3-2b_3$ are odd. Let $p$ be an odd prime, and consider the equations $(\ref{eq:chi_1,1}),\ldots,(\ref{eq:chi_1,9})$ and $\det A = \pm 1$ modulo $p$. Then, by a direct calculation, one can show that there exists no solution with $c_2-b_2\equiv 0$, $c_3-b_3\equiv 0$, or $c_3-2b_3\equiv 0$ modulo $p$. This implies that no prime divides them, i.e. they are equal to $\pm 1$ respectively. Then we can solve $(\ref{eq:chi_1,1}),\ldots,(\ref{eq:chi_1,9})$ straightforwardly and obtain the lemma. \end{proof} The following theorem is immediate from Remark \ref{rem:remark on proof over I^3}, Lemma \ref{lem:strong cohomological rigidity of M_2}, Lemma \ref{lem:strong cohomological rigidity of M_3} and Lemma \ref{lem:automorphisms of chi_1}, which is a paraphrase of Theorem \ref{main thm 2}. \begin{thm} $\mathcal{M}^{\rm homeo}_{I^3}$ is strongly cohomologically rigid. \end{thm}
\chapter*{Abstract} This dissertation concerns the classification of groupoid and higher-rank graph $C^*$-algebras and has two main components. Firstly, for a groupoid it is shown that the notions of strength of convergence in the orbit space and measure-theoretic accumulation along the orbits are equivalent ways of realising multiplicity numbers associated to a sequence of induced representations of the groupoid $C^*$-algebra. Examples of directed graphs are given, showing how to determine the multiplicity numbers associated to various sequences of induced representations of the directed graph $C^*$-algebras. The second component of this dissertation uses path groupoids to develop various characterisations of the $C^*$-algebras of higher-rank graphs. Necessary and sufficient conditions are developed for the Cuntz-Krieger $C^*$-algebras of row-finite higher-rank graphs to be liminal and to be postliminal. When Kumjian and Pask's path groupoid is principal, it is shown precisely when these $C^*$-algebras have bounded trace, are Fell, and have continuous trace. Necessary and sufficient conditions are provided for the path groupoids of row-finite higher-rank graphs without sources to have closed orbits and to have locally closed orbits. When these path groupoids are principal, necessary and sufficient conditions are provided for them to be integrable, to be Cartan and to be proper. \mainmatter \chapter{Introduction}\label{chapter_introduction} This thesis is broken up into 5 chapters. After this short introductory chapter, Chapter \ref{chapter_groupoid_intro} gives an overview of groupoids and groupoid $C^*$-algebras. Chapter \ref{chapter_graph_algebra_intro} gives an overview of the Cuntz-Krieger $C^*$-algebras of directed graphs and higher-rank graphs. The key research for this thesis is presented in Chapters \ref{chapter_strength_of_convergence} and \ref{chapter_categorising_higher-rank_graphs}, with the content of Chapter \ref{chapter_strength_of_convergence} published as \cite{Hazlewood-anHuef2011} and the content of Chapter \ref{chapter_categorising_higher-rank_graphs} soon to be submitted for publication. The following Sections \ref{intro_strength_of_convergence} and \ref{intro_categorising_higher-rank_graphs} introduce the research Chapters \ref{chapter_strength_of_convergence} and \ref{chapter_categorising_higher-rank_graphs}, respectively. This chapter will end with the definition of various classes of $C^*$-algebras. \section{Introduction: Strength of convergence in the orbit space of a groupoid}\label{intro_strength_of_convergence} Suppose $H$ is a locally-compact Hausdorff group acting freely and continuously on a locally-compact Hausdorff space $X$, so that $(H,X)$ is a free transformation group. In \cite[pp.~95--96]{Green1977} Green gives an example of a free non-proper action of $H=\mathbb R$ on a subset $X$ of $\mathbb R^3$; the non-properness comes down to the existence of $z\in X$, $\braces{x_n}\subset X$, and two sequences $\braces{s_n}$ and $\braces{t_n}$ in $H$ such that \begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}\renewcommand{\labelenumi}{(\theenumi)} \item $s_n^{-1}\cdot x_n\rightarrow z$ and $t_n^{-1}\cdot x_n\rightarrow z$; and \item $t_ns_n^{-1}\rightarrow\infty$ as $n\rightarrow\infty$, in the sense that $\braces{t_ns_n^{-1}}$ has no convergent subsequence. \end{enumerate} In \cite[Definition~2.2]{Archbold-Deicke2005}, and subsequently in \cite[p.~2]{Archbold-anHuef2006}, the sequence $\{x_n\}$ is said to converge $2$-times in the orbit space to $z\in X$. 
Each orbit $H\cdot x$ gives an irreducible induced representation $\mathrm{Ind}\,\epsilon_{x}$ of the associated transformation-group $C^*$-algebra $C_0(X)\rtimes H$, and the $k$-times convergence of $\{x_n\}$ in the orbit space to $z\in X$ translates into statements about various multiplicity numbers associated to $\mathrm{Ind}\,\epsilon_z$ in the spectrum of $C_0(X)\rtimes H$, as in \cite[Theorem~2.5]{Archbold-Deicke2005}, \cite[Theorem~1.1]{Archbold-anHuef2006} and \cite[Theorem~2.1]{Archbold-anHuef2008}. Upper and lower multiplicity numbers associated to irreducible representations $\pi$ of a $C^*$-algebra $A$ were introduced by Archbold \cite{Archbold1994} and extended to multiplicity numbers relative to a net of irreducible representations by Archbold and Spielberg \cite{Archbold-Spielberg1996}. The upper multiplicity $M_U(\pi)$ of $\pi$, for example, counts `the number of nets of orthogonal equivalent pure states which can converge to a common pure state associated to $\pi$' \cite[p.~26]{aklss2001}. The definition of $k$-times convergence and \cite[Theorem~2.5]{Archbold-Deicke2005} were very much motivated by a notion of $k$-times convergence in the dual space of a nilpotent Lie group \cite{Ludwig1990} and its connection with relative multiplicity numbers (see, for example, \cite[Theorem~2.4]{aklss2001} and \cite[Theorem~5.8]{Archbold-Ludwig-Schlichting2007}). Theorem~1.1 of \cite{Archbold-anHuef2006} shows that the topological property of a sequence $\{x_n\}$ converging $k$-times in the orbit space to $z\in X$ is equivalent to (1) a measure-theoretic accumulation along the orbits $H\cdot x_n$ and (2) that the lower multiplicity of $\mathrm{Ind}\,\epsilon_z$ relative to the sequence $\{\mathrm{Ind}\,\epsilon_{x_n}\}$ is at least $k$. In Chapter \ref{chapter_strength_of_convergence} we prove that the results of \cite{Archbold-anHuef2006} generalise to principal groupoids. In our main arguments we have tried to preserve as much as possible the structure of those in \cite{Archbold-anHuef2006}, although the arguments presented here are often more complicated in order to cope with the partially defined product in a groupoid and with the set of measures that forms a Haar system, as compared to the fixed Haar measure used in the transformation-group case. Our theorems have led us to a new class of examples exhibiting $k$-times convergence in groupoids that are not based on transformation groups, thus justifying our level of generality. Given a row-finite directed graph $E$, Kumjian, Pask, Raeburn and Renault in \cite{kprr1997} used the set of all infinite paths in $E$ to construct an $r$-discrete groupoid $G_E$, called a {\em path groupoid}. We prove that $G_E$ is principal if and only if $E$ contains no cycles (Proposition~\ref{prop_principal_iff_no_cycles}). We then exhibit principal $G_E$ with Hausdorff and non-Hausdorff orbit spaces, respectively, both with a $k$-times converging sequence in the orbit space. In particular, our examples can be used to find a groupoid $G_E$ whose $C^*$-algebra has non-Hausdorff spectrum and distinct upper and lower multiplicity counts among its irreducible representations. The directed graphs demonstrating $k$-times convergence in the orbit space of the associated path groupoids were carefully chosen so that the path groupoids would be principal, integrable and not Cartan. The search for examples of directed graphs with these properties provided the initial motivation for the research that is introduced in the following section. 
\section[Intro: Categorising higher-rank graphs using the path groupoid]{Introduction: Categorising the operator algebras of higher-rank graphs using the path groupoid}\label{intro_categorising_higher-rank_graphs} Kumjian and Pask in \cite{Kumjian-Pask2000} introduced the notion of a higher-rank graph and, for a row-finite higher-rank graph $\Lambda$ without sources, defined the Cuntz-Krieger higher-rank graph $C^*$-algebra, denoted $C^*(\Lambda)$. These $C^*$-algebras were introduced as generalisations of the directed-graph $C^*$-algebras developed by Kumjian, Pask, Raeburn and Renault in \cite{kprr1997}. The class of higher-rank graph $C^*$-algebras is significantly larger than the already large class of $C^*$-algebras associated to directed graphs (see Raeburn's book \cite{Raeburn2005} for an overview). As with the $C^*$-algebras associated to directed graphs, the study of the $C^*$-algebras of higher-rank graphs initially used the theory of groupoids. A more recent ``bare hands'' approach to studying higher-rank graphs has been developed that avoids the use of groupoids, with the aim of simplifying arguments. These groupoid models have nevertheless recently proved to be a useful source of examples, having been used to illustrate groupoid results in \cite{Clark-anHuef2012} and in Chapter \ref{chapter_strength_of_convergence}. Suppose $\Lambda$ is a row-finite $k$-graph without sources and suppose $G_\Lambda$ is Kumjian and Pask's path groupoid associated to $\Lambda$; recall that $C^*(G_\Lambda)=C^*(\Lambda)$. In this part of the thesis we explore when $C^*(\Lambda)$ is postliminal, liminal and Fell, and when it has bounded- and continuous-trace. Clark in \cite[Theorem~6.1]{Clark2007-2} showed that a groupoid $C^*$-algebra is liminal if and only if the orbits in the groupoid are closed and the $C^*$-algebras of each of the stability subgroups are liminal. In Section \ref{sec_liminal_postliminal} we establish a necessary and sufficient condition on $\Lambda$ for the orbits in $G_\Lambda$ to be closed; we use this to show exactly when $C^*(\Lambda)$ is liminal. Similarly we provide a condition on $\Lambda$ that is necessary and sufficient for the orbits in $G_\Lambda$ to be locally closed and use another of Clark's results, \cite[Theorem~7.1]{Clark2007-2}, to show precisely when $C^*(\Lambda)$ is postliminal. In Section \ref{sec_bded-trace_cts-trace_Fell} we will develop necessary and sufficient conditions on $\Lambda$ for $G_\Lambda$ to be integrable, to be Cartan and to be proper. With the assumption that $G_\Lambda$ is principal we will use results by Clark and an Huef in \cite{Clark-anHuef2008}, Clark in \cite{Clark2007} and Muhly and Williams in \cite{Muhly-Williams1990} to determine exactly when $C^*(\Lambda)$ has bounded trace, is Fell, and has continuous trace. In Section \ref{sec_boundary_paths_and_desourcification} we introduce the desourcification construction that Webster developed in \cite{Webster2011} based on Farthing's construction \cite{Farthing2008}, and in Section \ref{sec_desourcification} we use the desourcification construction to provide similar characterisations of $C^*(\Lambda)$ while permitting the $k$-graphs to have sources. Finally, in Section \ref{section_directed_graphs} it will be shown how these results can be simplified for directed graphs and how they relate to the directed graph characterisations developed by Ephrem in \cite{Ephrem2004} and by Deicke, Hong and Szyma{\'n}ski in \cite{dhs2003}. 
It should be noted that Farthing, Muhly and Yeend in \cite{Farthing-Muhly-Yeend2005,Yeend2007} developed groupoid models of row-finite higher-rank graphs that may have sources. Developing characterisations for one of these groupoids would have made the section using the Farthing-Webster desourcification, Section \ref{sec_desourcification}, unnecessary. This extra level of groupoid generality does, however, come at a cost to their simplicity, making them less attractive as a source of groupoid examples. By separating the groupoid arguments from the arguments dealing with row-finite higher-rank graphs that may have sources, Webster's desourcification also enables us to simplify our arguments. \chapter{An introduction to groupoids}\label{chapter_groupoid_intro} This chapter provides a brief overview of the theory of groupoids, beginning with Renault's definition of a groupoid from \cite{Renault1980}. Examples are then given before introducing important technical results and defining the Haar system of a groupoid. A fix for a mistake in Renault's \cite{Renault1980} is given (see Remark \ref{remark_problems_Renault_results}). Section \ref{section_representations_and_groupoid_algebras} describes induced representations and shows the construction of groupoid $C^*$-algebras. Sections \ref{section_transformation-group_groupoid} and \ref{sec_path_groupoid} define the groupoids that can be constructed from transformation groups and the infinite paths of directed graphs, respectively. Finally, Section \ref{sec_proper_Cartan_integrable} defines proper, Cartan and integrable groupoids, and shows the relationship between them and the classes of $C^*$-algebras from Section \ref{sec_algebra_classes}. \section{Groupoids and Haar systems} \begin{definition}[{\cite[Definition~1.1]{Renault1980}}]\label{def_groupoid}\notationindex{G@$G, G^{(2)}$} A {\em groupoid} $G$ is a set endowed with a subset $G^{(2)}$ of $G\times G$, a map $(\alpha,\beta)\mapsto \alpha\beta$ from $G^{(2)}$ to $G$ and an involution $\alpha\mapsto \alpha^{-1}$ on $G$ such that: \begin{enumerate}\renewcommand{\theenumi}{G\arabic{enumi}} \item\label{G1} if $(\alpha,\beta)$ and $(\beta,\gamma)$ are in $G^{(2)}$, then so are $(\alpha\beta,\gamma)$ and $(\alpha,\beta\gamma)$, and $(\alpha\beta)\gamma=\alpha(\beta\gamma)$; \item\label{G2} $(\alpha^{-1},\alpha)\in G^{(2)}$ for every $\alpha\in G$; and \item\label{G3} $\alpha^{-1}(\alpha\beta)=\beta$ and $(\alpha\beta)\beta^{-1}=\alpha$ for every $(\alpha,\beta)\in G^{(2)}$. \end{enumerate} The elements of $G^{(2)}$ are called {\em composable pairs}\index{composable pairs}, the map $G^{(2)}\rightarrow G$ is called the {\em composition map}\index{composition map}, the involution is called the {\em inverse map}\index{inverse map} and $\alpha^{-1}$ is called the {\em inverse}\index{inverse} of $\alpha\in G$. The maps $r,s:G\rightarrow G$ defined by $r(\alpha)=\alpha\alpha^{-1}$ and $s(\alpha)=\alpha^{-1}\alpha$ for every $\alpha\in G$ are respectively called the {\em range}\index{range!in a groupoid} and {\em source}\index{source!in a groupoid} maps. Then $r(\alpha)$ and $s(\alpha)$ are respectively called the {\em range} and {\em source} of $\alpha$. The range and source maps have a common image in $G$ which is called the {\em unit space}\index{unit space} and is denoted by $G^{(0)}$\notationindex{G0@$G^{(0)}$}. 
\end{definition} \begin{remark}\label{remark_composable_iff_range_source} The point to the range and source maps is that they provide a simpler way of determining when composition makes sense: it follows from Definition \ref{def_groupoid} that $(\alpha,\beta)\in G^{(2)}$ if and only if $s(\alpha)=r(\beta)$. \end{remark} By the definition of the range and source maps we can see that $r(\alpha)\alpha=\alpha$ and $\alpha s(\alpha)=\alpha$ for every $\alpha\in G$, and that elements of $G^{(0)}$ are characterised by the equation $\alpha=\alpha^{-1}=\alpha^2$. \begin{example}\label{example_group_is_groupoid} Every group $G$ with identity $e$ can be considered to be a groupoid with $G^{(2)}=G\times G$ and $G^{(0)}=\braces{e}$. \end{example} \begin{example}\label{example_equivalence_relation_groupoid} There is a natural groupoid that can be constructed from any set $X$ with an equivalence relation $\sim$. Let $G=\{(x,y)\in X\times X: x\sim y\}$ and \[G^{(2)}=\big\{\big((x,y), (y,z)\big): x,y,z\in X\text{ with }x\sim y\sim z\big\}.\] Define composition by $(x,y)(y,z)=(x,z)$ and involution by $(x,y)^{-1}=(y,x)$. Then $G$ is a groupoid with $r(x,y)=(x,x)$, $s(x,y)=(y,y)$ and $G^{(0)}=\{(x,x):x\in X\}$. \end{example} Suppose $G$ is a groupoid. The subset $\{(r(\alpha),s(\alpha)):\alpha\in G\}$ of $G^{(0)}\times G^{(0)}$ is an equivalence relation on $G^{(0)}$ that is referred to as the {\em natural equivalence relation}\index{natural equivalence relation on $G^{(0)}$} on $G^{(0)}$. The {\em orbit}\index{orbit} of $u\in G^{(0)}$ is the equivalence class of $u$ with respect to the natural equivalence relation on $G^{(0)}$ and is denoted by $[u]$. A subset $A$ of $G^{(0)}$ is {\em $G$-invariant}\index{G-invariant@$G$-invariant} if it is saturated with respect to the natural orbit equivalence relation on $G^{(0)}$. In other words, $A$ is $G$-invariant if it is a union of orbits. A groupoid $G$ is {\em principal}\index{principal groupoid} if $\alpha,\beta\in G$ are equal whenever $r(\alpha)=r(\beta)$ and $s(\alpha)=s(\beta)$. \begin{example}\label{example_equivalence_relation_groupoid_properties} Suppose $X=\{1,2,3,4,5\}$ and $\sim$ is the equivalence relation on $X$ with equivalence classes $\{1,2,3\}$ and $\{4,5\}$. Let $G$ be the groupoid as in Example \ref{example_equivalence_relation_groupoid}. Then $[(1,1)]=\{(1,1),(2,2),(3,3)\}$ and $[(4,4)]=\{(4,4),(5,5)\}$. The subset $\{(4,4),(5,5)\}$ of $G^{(0)}$ is $G$-invariant whereas $\{(1,1)\}$ is not. The groupoid $G$ is principal since the range and source of each $\alpha\in G$ completely determines $\alpha$: If $r(\alpha)=(x,x)$ and $s(\alpha)=(y,y)$, then $\alpha=(x,y)$. \end{example} A {\em groupoid homomorphism}\index{groupoid!homomorphism} from a groupoid $G$ to a groupoid $F$ is a map $\phi:G\rightarrow F$ such that for every $(\alpha,\beta)\in G^{(2)}$, we have $\big(\phi(\alpha),\phi(\beta)\big)\in F^{(2)}$ and $\phi(\alpha\beta)=\phi(\alpha)\phi(\beta)$. If $A$ is a non-empty subset of $G^{(0)}$, we define $G^A=r^{-1}(A)$, $G_A=s^{-1}(A)$ and $G|_A=r^{-1}(A)\cap s^{-1}(A)$. If $u\in G^{(0)}$ then we define $G_u=G_{\{u\}}$, $G^u=G^{\{u\}}$ and $G|_u=G|_{\{u\}}$. \begin{prop}[{\cite[Proposition~2.8]{Muhly-book}}]\label{prop_Muhly_2.8} Suppose $G$ is a groupoid and $A$ is a non-empty subset of $G^{(0)}$. Then $G|_A$ is a groupoid with $G|_A^{(2)}=G^{(2)}\cap (G|_A\times G|_A)$ and operations that are the restrictions of those on $G$. 
\end{prop} \begin{example} Suppose $G$ is the groupoid from Example \ref{example_equivalence_relation_groupoid_properties}. Let $A$ be the subset $\{(1,1),(2,2),(3,3),(4,4)\}$ of $G^{(0)}$ so that $G|_A$ is a groupoid by Proposition \ref{prop_Muhly_2.8}. If $F$ is the groupoid constructed as in Example \ref{example_equivalence_relation_groupoid} with the set $\{1,2,3,4\}$ and equivalence classes $\{1,2,3\}$ and $\{4\}$, then $G|_A=F$. \end{example} The {\em stability subgroups}\index{stability subgroup} of a groupoid $G$ are the groups of the form $G|_u=r^{-1}(u)\cap s^{-1}(u)$ for $u\in G^{(0)}$; $G$ is principal if and only if each of the stability subgroups is trivial, that is, if $G|_u=\{u\}$ for every $u\in G^{(0)}$. Stability subgroups are also commonly known as {\em isotropy subgroups}\index{isotropy subgroup}. \begin{definition}[{\cite[Definition~2.25]{Muhly-book}}]\label{def_topological_groupoid} Suppose $G$ is a groupoid with a topology on the set $G$ and let $G^{(2)}$ have the subspace topology from $G\times G$ when equipped with the product topology. Then $G$ is called a {\em topological groupoid}\index{topological groupoid}\index{groupoid!topology} if the composition and inversion maps are continuous. \end{definition} \begin{remark}\label{remark_following_topological_groupoid_def} It follows that the range and source maps of a topological groupoid are continuous. Then, since any continuous involution is open, the inversion map in a topological groupoid is a homeomorphism. It follows from the inverse map being open that the range map is open if and only if the source map is open. Whenever we refer to a topology on a groupoid, we assume that this is a topological groupoid unless otherwise stated. \end{remark} \begin{example} A locally compact group is a locally compact groupoid in the sense of Example \ref{example_group_is_groupoid}. \end{example} To construct a $C^*$-algebra from a groupoid we need an analogue of the Haar measure on a group. \needspace{6\baselineskip} \begin{definition}[{\cite[Definition~2.28]{Muhly-book}}]\label{def_Haar_system}Suppose $G$ is a second countable, locally compact, Hausdorff groupoid. A {\em right Haar system}\index{Haar system} on $G$ is a set $\{\lambda_x:x\in G^{(0)}\}$ of non-negative Radon measures on $G$ such that \begin{enumerate}\renewcommand{\theenumi}{H\arabic{enumi}} \item\label{H1} $\mathrm{supp}\,\lambda_x=G_x$ $\big(=s^{-1}(\{x\})\big)$\quad for all $x\in G^{(0)}$; \item\label{H2} for $f\in C_c(G)$, the function $x\mapsto \int f\,d\lambda_x$ on $G^{(0)}$ is in $C_c(G^{(0)})$; and \item\label{H3} for $f\in C_c(G)$ and $\gamma\in G$, \[\int f(\alpha\gamma)\,d\lambda_{r(\gamma)}(\alpha)=\int f(\alpha)\,d\lambda_{s(\gamma)}(\alpha).\] \end{enumerate} We will refer to \eqref{H2} as the {\em continuity of the Haar system} and to \eqref{H3} as {\em Haar-system invariance}. The collection $\{\lambda^x:x\in G^{(0)}\}$ of measures where $\lambda^x(E):=\lambda_x(E^{-1})$ is a {\em left Haar system}, which is a system of measures such that $\mathrm{supp}\,\lambda^x=G^x$ and, for $f\in C_c(G)$, $x\mapsto \int f\,d\lambda^x$ is continuous and $\int f(\gamma\alpha)\,d\lambda^{s(\gamma)}(\alpha)=\int f(\alpha)\,d\lambda^{r(\gamma)}(\alpha)$. 
Given that we can easily convert a right Haar system $\{\lambda_x\}$ into a left Haar system $\{\lambda^x\}$ and vice versa, we will simply refer to a {\em Haar system} $\lambda$ and use subscripts to refer to elements of the right Haar system $\{\lambda_x\}$ and superscripts to refer to elements of the left Haar system $\{\lambda^x\}$. \end{definition} The next few lemmas establish some properties of Haar systems which we will use frequently. \begin{lemma} Let $G$ be a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$. If $K\subset G$ is compact and $\gamma\in G$ with $s(\gamma)=x$ and $r(\gamma)=y$, then $\lambda_x(K\gamma)=\lambda_y(K)$ and $\lambda^x(\gamma^{-1} K)=\lambda^y(K)$. \begin{proof} Let $\braces{U_n}$ be a decreasing sequence of open neighbourhoods of $K$ so that if $\gamma\in G\backslash K$, then $\gamma\notin U_n$ eventually. $G$ is locally compact, so there is a compact neighbourhood of $K$, and we can assume that $\overline{U_0}$ is compact. By Urysohn's Lemma for each $n$ there exists $f_n\in C_c(G)$ with range in $[0,1]$ such that $f_n$ is identically $1$ on $K$ and $\mathrm{supp}\, f_n\subseteq U_n$. For each $n$, define $g_n:G\rightarrow \mathbb C$ by $g_n(\alpha)=f_n(\alpha\gamma^{-1})$ for each $\alpha\in G$, so that each $g_n\in C_c(G)$ and $g_n(\alpha)\rightarrow \chi_K(\alpha\gamma^{-1})$ as $n\rightarrow\infty$ for every $\alpha\in G$. Let $h = \chi_{U_0\gamma}$, noting that $\mathrm{supp}\, h$ is compact since $U_0$ is relatively compact and the composition map is continuous. Then \[ g_n(\alpha) = f_n(\alpha\gamma^{-1}) \leqslant \chi_{U_n}(\alpha\gamma^{-1}) \leqslant \chi_{U_0}(\alpha\gamma^{-1}) = \chi_{U_0\gamma}(\alpha) = h(\alpha) \] for every $\alpha\in G$ and $n\in\mathbb N$. Since $h$ and each $g_n$ have compact support with $g_n(\alpha)\leqslant h(\alpha)$ for each $\alpha$, and since $g_n(\alpha)\rightarrow \chi_K(\alpha\gamma^{-1})$ for each $\alpha$, by the Dominated Convergence Theorem we have \begin{align*} \int f_n(\alpha&\gamma^{-1})\, d\lambda_x(\alpha)=\int g_n(\alpha)\, d\lambda_x(\alpha)\\ &\rightarrow\int \chi_K(\alpha\gamma^{-1})\, d\lambda_x(\alpha) = \int \chi_{K\gamma}(\alpha)\, d\lambda_x(\alpha)=\lambda_x(K\gamma). \end{align*} By Haar-system invariance \eqref{H3} we can see that \[ \int f_n(\alpha\gamma^{-1})\, d\lambda_x(\alpha)=\int f_n(\alpha)\, d\lambda_y(\alpha)\rightarrow \int \chi_K(\alpha)\, d\lambda_y(\alpha)=\lambda_y(K), \] and it follows that $\lambda_x(K\gamma)=\lambda_y(K)$. Since $\lambda^x(E)=\lambda_x(E^{-1})$ for every measurable set $E$, we can deduce that \[ \lambda^x(\gamma^{-1} K)=\lambda_x(K^{-1}\gamma)=\lambda_y(K^{-1})=\lambda^y(K).\qedhere \] \end{proof} \end{lemma} \begin{lemma}\label{astrids_lim_sup} Let $G$ be a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$ and let $K$ be a compact subset of $G$. If $\{x_n\}\subset G^{(0)}$ is a sequence that converges to $z\in G^{(0)}$, then \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(K)\leqslant\lambda_z(K). \] \begin{proof} Fix $\epsilon>0$. By the outer regularity of $\lambda_z$, there exists an open neighbourhood $U$ of $K$ such that \[ \lambda_z(K)\leqslant \lambda_z(U)<\lambda_z(K)+\epsilon/2. \] By Urysohn's Lemma there exists $f\in C_c(G)$ with $0\leqslant f\leqslant 1$ such that $f$ is identically one on $K$ and zero off $U$. In particular we have \begin{equation}\label{measure_of_f_near_measure_of_K} \lambda_z(K)\leqslant\int f\,d\lambda_z<\lambda_z(K)+\epsilon/2. 
\end{equation} The continuity of the Haar system \eqref{H2} implies $\int f\,d\lambda_{x_n}\rightarrow \int f\,d\lambda_z$, so there exists $n_0$ such that $n\geqslant n_0$ implies \[ \int f\,d\lambda_z-\epsilon/2 < \int f\,d\lambda_{x_n}<\int f\,d\lambda_z +\epsilon/2. \] By our choice of $f$ we have $\lambda_{x_n}(K)\leqslant \int f\,d\lambda_{x_n}$, so \[ \lambda_{x_n}(K)\leqslant\int f\,d\lambda_{x_n}<\int f\,d\lambda_z+\epsilon/2. \] Combining this with \eqref{measure_of_f_near_measure_of_K} enables us to observe that for $n\geqslant n_0$, we have $\lambda_{x_n}(K)<\lambda_z(K)+\epsilon$, completing the proof. \end{proof} \end{lemma} \begin{lemma}\label{corollary_astrid_lemma} Let $G$ be a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$ and let $K$ be a compact subset of $G$. For every $\epsilon>0$ and $z\in G^{(0)}$ there exists a neighbourhood $U$ of $z$ in $G^{(0)}$ such that $x\in U$ implies $\lambda_x(K)<\lambda_z(K)+\epsilon$. \begin{proof} Fix $\epsilon>0$ and $z\in G^{(0)}$. Let $\{U_n\}$ be a decreasing neighbourhood basis for $z$ in $G^{(0)}$. If our claim is false, then each $U_n$ contains an element $x_n$ such that $\lambda_{x_n}(K)\geqslant\lambda_z(K)+\epsilon$. But since each $x_n\in U_n$, $x_n\rightarrow z$, and so by Lemma \ref{astrids_lim_sup} there exists $n_0$ such that $n\geqslant n_0$ implies $\lambda_{x_n}(K)<\lambda_z(K)+\epsilon$, a contradiction. \end{proof} \end{lemma} Haar systems do not always exist. A following result, Proposition \ref{Renault_I_2_8i_sub}, can be used to show that some groupoids admit a Haar system. The next lemma is a small result that is frequently used in the groupoid literature. \begin{lemma}\label{lemma_urysohn_corollary} Let $G$ be a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$. Then for every $x\in G^{(0)}$ there exists $f\in C_c(G)$ such that $\int_G f\, d\lambda_x> 0$ and the range of $f$ is contained in $[0,1]$. \begin{proof} Let $K$ be a compact neighbourhood of $x$ in $G$ and recall that the support of a Radon measure is the complement of the union of all open sets with measure zero. Then, since $x\in \mathrm{int}\, K\cap G_x=\mathrm{int}\, K\cap \mathrm{supp}\,\lambda_x$ and $\lambda_x$ is non-negative, we must have $\lambda_x(K)\geqslant\lambda_x(\mathrm{int}\, K)>0$. By Urysohn's Lemma (see, for example, \cite[Proposition~1.7.5]{Pedersen1989}) there exists $f\in C_c(G)$ such that $f$ is identically $1$ on $K$ and the range of $f$ is contained in $[0,1]$. Then $\int_G f\,d\lambda_x\geqslant \int_G \chi_K\, d\lambda_x>0$. \end{proof} \end{lemma} \begin{prop}[{\cite[Proposition~I.2.4]{Renault1980}, \cite[Proposition~2.2.1]{Paterson1999}}]\label{prop_RenaultI1.2.4} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with a Haar system. Then the range and source maps of $G$ are open. \end{prop} \begin{lemma}[{\cite[p. 361]{Ramsay1990}}]\label{lemma_composition_map_open} If a locally compact, Hausdorff groupoid has open range and source maps, then the composition map is open. \begin{proof} Suppose $G$ is a locally compact, Hausdorff groupoid with open range map. Let $A$ and $B$ be open subsets of $G$. It suffices to show that $AB$ is open. If $AB$ is empty, then $AB$ is open and we are done. Suppose $AB$ is non-empty, so that there exist $\alpha\in A$ and $\beta\in B$ such that $s(\alpha)=r(\beta)$. Let $\circ:G^{(2)}\rightarrow G$ be the composition map. 
Since the composition map is continuous (Definition~\ref{def_topological_groupoid}), by the definition of the topology on $G^{(2)}$ there exist open neighbourhoods $H,K$ of $\alpha^{-1}, \alpha\beta$ respectively, such that $(H\times K)\cap G^{(2)}$ is contained in the open set $\circ^{-1}(B)$. Note that $HK$ is a subset of $B$. Let $U=H^{-1}\cap A$, noting that $U$ is an open neighbourhood of $\alpha$ since the inverse map is open (Remark~\ref{remark_following_topological_groupoid_def}). Now let $V=r^{-1}\big(r(U)\big)\cap K$, noting that $V$ is an open neighbourhood of $\alpha\beta$ since the range map is open and continuous. The set $U^{-1}V$ is a subset of $B$ as $U^{-1}\subset H$, $V\subset K$ and $HK$ is a subset of $B$. Fix $\gamma\in V$. Since $r(V)\subset r(U)=s(U^{-1})$ and $U^{-1}V\subset B$, there exist $\delta\in U$ and $\epsilon\in B$ such that $\delta^{-1}\gamma = \epsilon$. Then $\gamma=\delta\epsilon$, so $V\subset UB\subset AB$. The set $V$ is open with $\alpha\beta\in V\subset AB$ so, since $\alpha\beta$ was chosen arbitrarily in $AB$, the set $AB$ is open and thus the composition map is open. \end{proof} \end{lemma} \begin{lemma}\label{lemma_alpha_mapsto_gamma_alpha_homeomorphism} Suppose $G$ is a topological groupoid. Then, for every $\gamma\in G$, \begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}} \item\label{lemma_alpha_mapsto_gamma_alpha_homeomorphism_1} the map from $G^{s(\gamma)}$ to $G^{r(\gamma)}$ such that $\alpha\mapsto\gamma\alpha$, and \item\label{lemma_alpha_mapsto_gamma_alpha_homeomorphism_2} the map from $G_{r(\gamma)}$ to $G_{s(\gamma)}$ such that $\alpha\mapsto\alpha\gamma$ \end{enumerate} are homeomorphisms. \begin{proof} Fix $\gamma\in G$ and note that it follows immediately from the definition of a groupoid (Definition \ref{def_groupoid}) that the map $\alpha\mapsto\gamma\alpha$ is injective and surjective. To see that the map $\alpha\mapsto\gamma\alpha$ is continuous, first suppose that $\{\alpha_i\}_{i\in I}$ is a net in $G^{s(\gamma)}$ that converges to some $\alpha\in G^{s(\gamma)}$. Since $\{\gamma\}_{i\in I}$ trivially converges to $\gamma$, it follows that $\{(\gamma,\alpha_i)\}_{i\in I}$ converges to $(\gamma,\alpha)$ in $G\times G$ with the product topology. Since $s(\gamma)=r(\alpha)$, by Remark \ref{remark_composable_iff_range_source} we can see that $(\gamma,\alpha)\in G^{(2)}$. Similarly each $(\gamma,\alpha_i)$ is in $G^{(2)}$, so $\{(\gamma,\alpha_i)\}_{i\in I}$ converges to $(\gamma,\alpha)$ in $G^{(2)}$. The composition map of a groupoid is continuous, so $\gamma\alpha_i$ converges to $\gamma\alpha$ in $G$, and the map $\alpha\mapsto\gamma\alpha$ is continuous. Since the inverse of $\alpha\mapsto\gamma\alpha$ is $\alpha\mapsto\gamma^{-1}\alpha$, the same argument shows that this inverse map is continuous, thus showing that $\alpha\mapsto\gamma\alpha$ is a homeomorphism. A similar argument can be used to establish the second component of this lemma. \end{proof} \end{lemma} A topological groupoid $G$ is {\em $r$-discrete}\index{$r$-discrete groupoid} if $G^{(0)}$ is an open subset of $G$. \begin{lemma}[{\cite[Lemma~I.2.7(i)]{Renault1980}}]\label{lemma_Renault_I.2.7_1} Let $G$ be an $r$-discrete groupoid. For every $u\in G^{(0)}$, the subspace topologies on $G_u$ and $G^u$ are discrete. \begin{proof} Fix $u\in G^{(0)}$ and $\gamma\in G^u$. Then, since $G^{(0)}$ is open, the set $G^{(0)}\cap G^{s(\gamma)}=\{s(\gamma)\}$ is an open subset of $G^{s(\gamma)}$. 
Since $\alpha\mapsto\gamma\alpha$ is a homeomorphism of $G^{s(\gamma)}$ onto $G^{r(\gamma)}$ (Lemma \ref{lemma_alpha_mapsto_gamma_alpha_homeomorphism}), it follows that $\{\gamma s(\gamma)\}=\{\gamma\}$ is an open subset of $G^{r(\gamma)}=G^u$. A similar argument can be used to show that $G_u$ is discrete. \end{proof} \end{lemma} \begin{remark}\label{remark_problems_Renault_results} The proofs of two useful results, \cite[Lemma~I.2.7, Proposition~I.2.8]{Renault1980}, omit details that appear to be important. In \cite[Lemma~I.2.7]{Renault1980} it is claimed that if an $r$-discrete groupoid has a Haar system, then that Haar system is `essentially the system of counting measures'. Component (ii) of the proof of this lemma constructs a system of counting measures from the given Haar system but does not establish the invariance \eqref{H3} of this system. In \cite[Proposition~I.2.8]{Renault1980} it is claimed that if the range map of a groupoid is a local homeomorphism, then the groupoid is $r$-discrete and admits a Haar system. The (iv)$\implies$(i) component of the proof of this proposition constructs a system of measures; however, the continuity \eqref{H2} of this system, which is necessary for it to be a Haar system, is not established. \end{remark} We use the next result as an alternative to \cite[Lemma~I.2.7, Proposition~I.2.8]{Renault1980}; its proof uses ideas from \cite[Lemma~I.2.7, Proposition~I.2.8]{Renault1980}. \begin{prop}[{based on \cite[Proposition~I.2.8]{Renault1980}}]\label{Renault_I_2_8i_sub} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid. Then the following are equivalent. \begin{enumerate} \item\label{Renault_I_2_8i_sub_i} $G$ is $r$-discrete and admits a Haar system of counting measures; \item\label{Renault_I_2_8i_sub_ii} the source map is a local homeomorphism; \item\label{Renault_I_2_8i_sub_iii} the range map is a local homeomorphism; and \item\label{Renault_I_2_8i_sub_iv} $G$ is $r$-discrete and the composition map is a local homeomorphism. \end{enumerate} \end{prop} The proof of this proposition will use the following technical lemma. \begin{lemma}\label{lemma_used_for_Renault_I_2_8i_sub_ii_implies_i} Suppose $G$ is a locally compact, Hausdorff groupoid and that $U$ is an open set in $G$ such that $s|_U$ is a homeomorphism onto the open set $s(U)$. Then for every $\gamma\in U$ there is an open neighbourhood $V$ of $\gamma$ in $G$ such that $\overline{V}\subset U$ and $s(\overline{V})$ is closed in $G^{(0)}$. \begin{proof} Fix $\gamma\in U$. Since $G$ is locally compact and Hausdorff, $\gamma$ has a base of compact neighbourhoods. Then there is a compact neighbourhood $N$ of $\gamma$ in $G$ such that $N\subset U$. Let $V=\mathrm{int}\,N$, so that $V$ is an open neighbourhood of $\gamma$ in $G$ with $\overline{V}\subset U$. The set $s(\overline{V})$ is compact since $s$ is continuous. Finally, since $G$ and $G^{(0)}$ are Hausdorff, the compact sets $\overline{V}$ and $s(\overline{V})$ are closed. \end{proof} \end{lemma} To prove Proposition \ref{Renault_I_2_8i_sub}, it is easiest to deal with either left or right Haar systems; here we choose right Haar systems due to personal preference. In this light, condition \eqref{Renault_I_2_8i_sub_i} is most closely related to \eqref{Renault_I_2_8i_sub_ii}, so we will begin by establishing their equivalence. Conditions \eqref{Renault_I_2_8i_sub_ii} and \eqref{Renault_I_2_8i_sub_iii} are very closely related, so demonstrating their equivalence will follow. 
For the remainder of the proof we will show that \eqref{Renault_I_2_8i_sub_ii} and \eqref{Renault_I_2_8i_sub_iv} are equivalent. \begin{proof}[Proof of Proposition \ref{Renault_I_2_8i_sub}] \eqref{Renault_I_2_8i_sub_i}$\implies$\eqref{Renault_I_2_8i_sub_ii}. Suppose $G$ is $r$-discrete and $\lambda$ is a Haar system of counting measures on $G$. Since $s$ is continuous by the definition of a topological groupoid and open by Proposition \ref{prop_RenaultI1.2.4}, to show that $s$ is a local homeomorphism it suffices to show that it is locally injective. Fix $\gamma\in G$ and let $K$ be a compact neighbourhood of $\gamma$ in $G$. Since $G$ is $r$-discrete, $G_{s(\gamma)}$ is discrete by Lemma \ref{lemma_Renault_I.2.7_1}, so $K$ and $G_{s(\gamma)}$ must have a finite intersection; write $K\cap G_{s(\gamma)}=\{\gamma_1,\gamma_2,\ldots,\gamma_p\}$. For every $\gamma_i$ with $\gamma_i\ne \gamma$, there is a compact neighbourhood $K_i$ of $\gamma$ such that $\gamma_i\notin K_i$ and $K_i\subset K$. By replacing $K$ by $K_1\cap K_2\cap\ldots \cap K_p$, we can assume that $K\cap G_{s(\gamma)}=\{\gamma\}$, and so $\lambda_{s(\gamma)}(K)=1$. By Lemma \ref{corollary_astrid_lemma} there is an open neighbourhood $U$ of $s(\gamma)$ in $G^{(0)}$ such that $y\in U$ implies $\lambda_y(K)<\lambda_{s(\gamma)}(K)+0.1=1.1$. Let $K'$ be a compact neighbourhood of $\gamma$ with $K'\subset s^{-1}(U)\cap K$. Replace $K$ by $K'$ so that $\lambda_y(K)<1.1$ for every $y\in s(K)$. For every $y\in s(K)$ there exists $\delta\in K$ such that $s(\delta)=y$, so $\{\delta\}\subset G_y\cap K$ and $\lambda_y(K)\geqslant 1$. Thus $\lambda_y(K)=1$ for every $y\in s(K)$. So each $G_y\cap K$ has exactly one element and so the restriction $s|_K$ is injective. Thus $s$ is a local homeomorphism. \eqref{Renault_I_2_8i_sub_ii}$\implies$\eqref{Renault_I_2_8i_sub_i}. Suppose \eqref{Renault_I_2_8i_sub_ii}. We will first show that $G$ is $r$-discrete. By \eqref{Renault_I_2_8i_sub_ii}, for any $x\in G^{(0)}$ there is an open neighbourhood $U_x$ of $x$ such that $s|_{U_x}$ is a homeomorphism. We claim that $U_xU_x^{-1}\subset G^{(0)}$ for each $x\in G^{(0)}$. Fix $y\in G^{(0)}$ and $\gamma\in U_yU_y^{-1}$. There exist $\alpha,\beta\in U_y$ with $s(\alpha)=s(\beta)$ such that $\gamma=\alpha\beta^{-1}$. Then $\alpha=\beta$ since $s|_{U_y}$ is injective, so $\gamma=\alpha\alpha^{-1}=r(\alpha)$ and so $U_yU_y^{-1}\subset G^{(0)}$. Note that $U_yU_y^{-1}$ is open since the composition and inverse maps are open (Lemma \ref{lemma_composition_map_open} and Remark \ref{remark_following_topological_groupoid_def}). Then $\cup_{x\in G^{(0)}}U_xU_x^{-1}$ is open and is all of $G^{(0)}$, so $G$ is $r$-discrete. Since $G$ is $r$-discrete, the spaces $G_x$ for $x\in G^{(0)}$ are discrete by Lemma \ref{lemma_Renault_I.2.7_1}. For each $x\in G^{(0)}$ and every Borel set $E\subset G$, let $\lambda_x(E)$ be the possibly infinite number of elements in $G_x\cap E$. We claim that $\{\lambda_x:x\in G^{(0)}\}$ is a right Haar system for $G$. Since each $G_x$ is discrete, it is easy to see that each $\lambda_x$ is a Radon measure and it remains to show that \eqref{H1}, \eqref{H2} and \eqref{H3} are satisfied. Condition \eqref{H1} is an immediate consequence of the definition of the system $\{\lambda_x\}$. Fix $f\in C_c(G)$. Before establishing \eqref{H2} and \eqref{H3} we use that $s$ is a local homeomorphism to make an assumption about $f$. 
Since $s$ is a local homeomorphism, for every $\gamma\in G$ there exists an open neighbourhood $U_\gamma$ of $\gamma$ in $G$ such that $s|_{U_\gamma}$ is a homeomorphism onto the open set $s(U_\gamma)$. By Lemma \ref{lemma_used_for_Renault_I_2_8i_sub_ii_implies_i} for each $\gamma$ there exists an open neighbourhood $V_\gamma$ of $\gamma$ in $G$ such that $\overline{V_\gamma}\subset U_\gamma$ and $s(\overline{V_\gamma})$ is closed in $G^{(0)}$. Now $\{V_\gamma:\gamma\in \mathrm{supp}\, f\}$ is an open cover of the compact space $\mathrm{supp}\, f$ so there exists a finite subset $\{\gamma_1,\gamma_2,\ldots,\gamma_p\}$ of $\mathrm{supp}\, f$ such that $\mathrm{supp}\, f\subset\cup_{i=1}^p V_{\gamma_i}$. By a partition of unity (see, for example, \cite[Proposition~1.7.12]{Pedersen1989}) there exist $f_1,f_2,\ldots,f_p\in C_c(G)$ such that $f(\gamma)=\sum_{i=1}^p f_i(\gamma)$ for every $\gamma\in G$ and $f_i(\gamma)=0$ for every $\gamma\notin V_{\gamma_i}$. To establish \eqref{H2} and \eqref{H3} we may thus assume that $f=f_j$ for some $1\leqslant j\leqslant p$. Let $V=V_{\gamma_j}$ and $U=U_{\gamma_j}$. We will now establish \eqref{H2}. Suppose $\{x_i\}_{i\in I}$ is a net in $G^{(0)}$ that converges to some $x\in G^{(0)}$. For \eqref{H2} we need to show that $\int f\,d\lambda_{x_i}\rightarrow\int f\, d\lambda_x$. If $x$ is not in the open subset $s(U)$ of $G^{(0)}$, then $x_i$ is eventually not in the closed subset $s(\overline{V})$ of $G^{(0)}$ and thus not in $s(V)$. Since $f(\gamma)=0$ whenever $\gamma\notin V$, it follows that $\int f\, d\lambda_x=0=\int f\,d\lambda_{x_i}$ eventually. Now suppose $x$ is in the open set $s(U)$. Then $x_i$ is eventually in $s(U)$ so we may assume that $x_i\in s(U)$ for every $i\in I$. Let $\gamma_i=s|_{U}^{-1}(x_i)$ for $i\in I$ and let $\gamma=s|_{U}^{-1}(x)$. Since $x_i\rightarrow x$ in $s(U)$, the map $s|_{U}^{-1}$ is continuous and so $\{\gamma_i\}_{i\in I}$ converges to $\gamma$ in $U$. Furthermore, since $U$ is an open subset of $G$, $\{\gamma_i\}_{i\in I}$ converges to $\gamma$ in $G$. Now, for every $i\in I$, \begin{align*} \int f(\alpha)\, d\lambda_{x_i}(\alpha)&=\int f(\alpha)\chi_{\{\gamma_i\}}(\alpha)\, d\lambda_{x_i}(\alpha)\\ &=f(\gamma_i)\lambda_{x_i}(\{\gamma_i\})\\ &= f(\gamma_i)\quad(\lambda_{x_i}\text{ is the counting measure on }G_x) \end{align*} and similarly $\int f\,d\lambda_x=f(\gamma)$. Since $\gamma_i\rightarrow\gamma$ in $G$ and $f\in C_c(G)$, \[ \int f\,d\lambda_{x_i}=f(\gamma_i)\rightarrow f(\gamma)=\int f\,d\lambda_x. \] Thus $x\mapsto \int f\,d\lambda_x$ is continuous. To show that the map $x\mapsto \int f\, d\lambda_x$ from \eqref{H2} has compact support, first suppose $x\in G^{(0)}$ satisfies $\int f\,d\lambda_x\ne 0$. Since $f\in C_c(G)$, the space $s(\mathrm{supp}\,f)$ is compact so it will suffice to show that $x\in s(\mathrm{supp}\, f)$. The intersection $G_x\cap U$ must be non-empty since $\mathrm{supp}\,\lambda_x=G_x$ and $f(\alpha)\ne 0$ requires $\alpha\in U$. Let $\eta=s|_{U}^{-1}(x)$. Then \[ \int f\,d\lambda_x=\int f(\alpha)\chi_{\{\eta\}}(\alpha)\,d\lambda_x(\alpha)=f(\eta)\lambda_x(\{\eta\})=f(\eta),\] so $f(\eta)$ is non-zero and thus $\eta\in \mathrm{supp}(f)$. Then $s(\eta)=x\in s(\mathrm{supp}\,f)$ so the support of $x\mapsto \int f\,d\lambda_x$ is compact, establishing \eqref{H2}. Fix $\alpha\in G$. 
To establish \eqref{H3} we will show that \[\int f(\beta\alpha)\,d\lambda_{r(\alpha)}(\beta)=\int f(\beta)\,d\lambda_{s(\alpha)}(\beta).\] If $s(\alpha)\notin s(U)$, then $\beta\alpha\notin U$ for all $\beta\in G_{r(\alpha)}$ so $\int f(\beta\alpha)\,d\lambda_{r(\alpha)}(\beta)=0$ and similarly $\int f(\beta)\,d\lambda_{s(\alpha)}(\beta)=0$. Suppose $s(\alpha)\in s(U)$ and let $\delta=s|_{U}^{-1}(s(\alpha))$. The integrand of $\int f(\beta\alpha)\,d\lambda_{r(\alpha)}(\beta)$ is zero unless $\beta\alpha\in U$ and hence $\beta\alpha=\delta$. Now we have {\allowdisplaybreaks\begin{align*} \int f(\beta\alpha)\,d\lambda_{r(\alpha)}(\beta)&=\int f(\beta\alpha)\chi_{\{\delta\}}(\beta\alpha)\,d\lambda_{r(\alpha)}(\beta)\\ &=\int f(\delta)\chi_{\{\delta\alpha^{-1}\}}(\beta)\,d\lambda_{r(\alpha)}(\beta)\\ &=f(\delta)\lambda_{r(\alpha)}(\{\delta\alpha^{-1}\}) = f(\delta). \end{align*} }The integrand of $\int f(\beta)\,d\lambda_{s(\alpha)}(\beta)$ is zero unless $\beta\in U$ and hence $\beta=\delta$, so we have {\allowdisplaybreaks \begin{align*} \int f(\beta)\,d\lambda_{s(\alpha)}(\beta)&=\int f(\beta)\chi_{\{\delta\}}(\beta)\,d\lambda_{s(\alpha)}(\beta)\\ &=f(\delta)\lambda_{s(\alpha)}(\{\delta\}) = f(\delta), \end{align*} }establishing \eqref{H3}. Thus $\{\lambda_x:x\in G^{(0)}\}$ is a right Haar system of counting measures for $G$, establishing \eqref{Renault_I_2_8i_sub_i}. \eqref{Renault_I_2_8i_sub_ii}$\implies$\eqref{Renault_I_2_8i_sub_iii}. Suppose \eqref{Renault_I_2_8i_sub_ii} and fix $\gamma\in G$. Since $s$ is a local homeomorphism there is an open neighbourhood $U$ of $\gamma^{-1}$ such that $s|_U$ is a homeomorphism onto the open set $s(U)$. Since the inverse map is open, $U^{-1}$ is an open neighbourhood of $\gamma$. Note that $r(U^{-1})$ is open since $r(U^{-1})=s(U)$. To show that $r|_{U^{-1}}$ is a homeomorphism onto $r(U^{-1})$, we need to show that $r|_{U^{-1}}$ is open and injective. Suppose $V\subset U^{-1}$ is open. Then $V^{-1}\subset U$ is open, so $s(V^{-1})$ is open since $s|_{U}$ is open. It follows that $r|_{U^{-1}}$ is open since $r(V)=s(V^{-1})$. Now suppose $\alpha,\beta\in U^{-1}$ satisfy $r(\alpha)=r(\beta)$. Then $s(\alpha^{-1})=s(\beta^{-1})$. Since $\alpha^{-1},\beta^{-1}\in U$ and $s|_U$ is injective, we must have $\alpha^{-1}=\beta^{-1}$, so $\alpha=\beta$ and $r|_{U^{-1}}$ is injective, establishing \eqref{Renault_I_2_8i_sub_iii}. \eqref{Renault_I_2_8i_sub_iii}$\implies$\eqref{Renault_I_2_8i_sub_ii}. A proof establishing this claim is similar to the \eqref{Renault_I_2_8i_sub_ii}$\implies$\eqref{Renault_I_2_8i_sub_iii} proof. \eqref{Renault_I_2_8i_sub_ii}$\implies$\eqref{Renault_I_2_8i_sub_iv}. Suppose \eqref{Renault_I_2_8i_sub_ii}. It was shown that $G$ is $r$-discrete in the \eqref{Renault_I_2_8i_sub_ii}$\implies$\eqref{Renault_I_2_8i_sub_i} component of this proof, so it remains to show that the composition map is a local homeomorphism. Let $\circ:G^{(2)}\rightarrow G$ be the composition map (i.e. the map such that $\circ(\alpha,\beta)=\alpha\beta$ for every $(\alpha,\beta)\in G^{(2)}$). Fix $(\alpha,\beta)\in G^{(2)}$. By \eqref{Renault_I_2_8i_sub_ii} and the previously proved claim that \eqref{Renault_I_2_8i_sub_ii}$\implies$\eqref{Renault_I_2_8i_sub_iii}, the maps $r$ and $s$ are local homeomorphisms. Thus there exist open neighbourhoods $U$ and $V$ of $\alpha$ and $\beta$, respectively, such that $r|_U$ and $s|_V$ are homeomorphisms. Then $(U\times V)\cap G^{(2)}$ is an open neighbourhood of $(\alpha,\beta)$ in $G^{(2)}$. 
To see that $\circ|_{(U\times V)\cap G^{(2)}}$ is injective, suppose $(\gamma_1,\delta_1),(\gamma_2,\delta_2)\in (U\times V)\cap G^{(2)}$ satisfy $\circ(\gamma_1,\delta_1)=\circ(\gamma_2,\delta_2)$. Then $\gamma_1\delta_1=\gamma_2\delta_2$, so $r(\gamma_1)=r(\gamma_2)$ and $s(\delta_1)=s(\delta_2)$. Since $\gamma_1,\gamma_2\in U$ and $\delta_1,\delta_2\in V$ with $r|_U$ and $s|_V$ homeomorphisms, we must have $\gamma_1=\gamma_2$ and $\delta_1=\delta_2$. Thus $\circ|_{(U\times V)\cap G^{(2)}}$ is injective and it follows that $\circ$ is a local homeomorphism since $\circ$ is continuous (Definition \ref{def_topological_groupoid}) and open (Lemma \ref{lemma_composition_map_open}). \eqref{Renault_I_2_8i_sub_iv}$\implies$\eqref{Renault_I_2_8i_sub_ii}. We begin by showing that $s$ is an open map. Let $U$ be an open subset of $G$. Then $U^{-1}$ is open (Remark \ref{remark_following_topological_groupoid_def}) and so $(U^{-1}\times U)\cap G^{(2)}$ is an open subset of $G^{(2)}$. Since every local homeomorphism is open and $\circ$ is a local homeomorphism, $\circ\big((U^{-1}\times U)\cap G^{(2)}\big)$ is an open subset of $G$. We claim that $s(U)=\circ\big((U^{-1}\times U)\cap G^{(2)}\big)\cap G^{(0)}$. Fix $x\in \circ\big((U^{-1}\times U)\cap G^{(2)}\big)\cap G^{(0)}$. Then there exist $\alpha,\beta\in U$ with $r(\alpha)=r(\beta)$ and such that $x=\alpha^{-1}\beta$; note in particular that $x=s(\alpha)$. Then $\alpha x=\alpha(\alpha^{-1}\beta)=\beta$, while $\alpha x=\alpha$ because $x=s(\alpha)$, so $\alpha=\beta$. We now have $x=\alpha^{-1}\alpha=s(\alpha)\in s(U)$. Now fix $y\in s(U)$. Then there exists $\eta\in U$ with $y=s(\eta)$. Now $(\eta^{-1},\eta)\in (U^{-1}\times U)\cap G^{(2)}$, so \[ s(\eta)=\eta^{-1}\eta\in \circ\big((U^{-1}\times U)\cap G^{(2)}\big), \] and $s(U)=\circ\big((U^{-1}\times U)\cap G^{(2)}\big)\cap G^{(0)}$. Since $\circ\big((U^{-1}\times U)\cap G^{(2)}\big)$ is open and $G^{(0)}$ is open because $G$ is $r$-discrete, $s(U)$ is open and so $s$ is an open map. Since $s$ is continuous (Remark \ref{remark_following_topological_groupoid_def}), to show that $s$ is a local homeomorphism it remains to show that it is locally injective. Fix $\gamma\in G$. Since $\circ$ is locally injective and $(\gamma^{-1},\gamma)\in G^{(2)}$, there is an open neighbourhood $A$ of $(\gamma^{-1},\gamma)$ in $G^{(2)}$ such that $\circ|_A$ is injective. By the definition of the topology on $G^{(2)}$ there exist open neighbourhoods $U$ and $V$ of $\gamma^{-1}$ and $\gamma$, respectively, in $G$ such that $(U\times V)\cap G^{(2)}\subset A$. Let $W=U^{-1}\cap V$, so that $W$ is an open neighbourhood of $\gamma$ in $G$ with \[ (\gamma^{-1},\gamma)\in (W^{-1}\times W)\cap G^{(2)}\subset (U\times V)\cap G^{(2)}. \] Suppose $\eta,\zeta\in W$ satisfy $s(\eta)=s(\zeta)$. Then $\eta^{-1}\eta=\zeta^{-1}\zeta$ implies $(\eta^{-1},\eta)=(\zeta^{-1},\zeta)$ since composition is injective on $(W^{-1}\times W)\cap G^{(2)}$. Then $\eta=\zeta$ and so $s$ is locally injective. Thus $s$ is a local homeomorphism. \end{proof} \section{Representations and the groupoid \texorpdfstring{$C^*$}{\it C*}-algebra}\label{section_representations_and_groupoid_algebras} Suppose $G$ is a topological groupoid with Haar system $\lambda$. For $f,g\in C_c(G)$ and $\alpha\in G$, define \[ f\ast g(\alpha):=\int f(\beta)g(\beta^{-1}\alpha)\,d\lambda^{r(\alpha)}(\beta)\notationindex{FG@$f\ast g$} \] and $f^*(\alpha)=\overline{f(\alpha^{-1})}$.
Then $C_c(G)$ with respect to these operations and the inductive limit topology\footnote{An overview of the inductive limit topology can be found in Section D.2 of Raeburn and Williams' book \cite{Raeburn-Williams1998}} is a topological $\ast$-algebra. Since the Haar system is a key part of this structure, it is both common in the literature, and indeed more precise, to write this $\ast$-algebra as $C_c(G,\lambda)$\notationindex{CGlambda@$C_c(G,\lambda)$}. In this thesis we usually write $C_c(G)$ when the Haar system is clear from the context. \begin{remark}\label{remark_alternative_convolution} By considering \eqref{H3} and the definition of a left Haar system (Definition \ref{def_Haar_system}), we can see that for $f,g\in C_c(G)$ and $\alpha\in G$, \[ f\ast g(\alpha) = \int f(\alpha \beta^{-1})g(\beta)\,d\lambda_{s(\alpha)}(\beta). \] \end{remark} \begin{definition}[{\cite[Definition~2.41]{Muhly-book}}] A {\em representation} of $C_c(G,\lambda)$\index{representation!of $C_c(G,\lambda)$} on a Hilbert space $\mathcal H$ is a $\ast$-homomorphism $\pi$ from $C_c(G,\lambda)$ into $B(\mathcal H)$ that is continuous with respect to the inductive limit topology on $C_c(G,\lambda)$ and the weak operator topology on $B(\mathcal H)$, and that is nondegenerate in the sense that the span of $\{\pi(f)h:f\in C_c(G,\lambda), h\in \mathcal H\}$ is a dense subset of $\mathcal H$. \end{definition} For $f\in C_c(G,\lambda)$, based on Hahn's definition of the $I$-norm in \cite[p. 38]{Hahn1978}, Renault in \cite[p. 50]{Renault1980} defined a quantity $\|f\|_I$\notationindex{0I@$\lVert\cdot\rVert_I,\lVert\cdot\rVert$} by \[ \|f\|_I=\max\left\{\sup_{x\in G^{(0)}}\int_G\lvert f\rvert\,d\lambda^x,\sup_{x\in G^{(0)}}\int_G\lvert f\rvert\,d\lambda_x\right\}. \] Proposition~II.1.4 from \cite{Renault1980} shows that $\|\cdot\|_I$ is a $\ast$-algebra norm on $C_c(G,\lambda)$. \begin{definition}[{\cite[Definition~II.1.5]{Renault1980}}] Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with Haar system $\lambda$. A representation $\pi$ of $C_c(G,\lambda)$ is {\em bounded}\index{bounded representation} if $\|\pi(f)\|\leqslant\|f\|_I$ for every $f\in C_c(G,\lambda)$. \end{definition} Following the definition of a bounded representation, for each $f\in C_c(G,\lambda)$ Renault in \cite[p. 51]{Renault1980} defines a quantity \[ \|f\|:=\sup\{\|\pi(f)\|:\pi\text{ is a bounded representation of }C_c(G,\lambda)\}. \] After noting that $\|\cdot\|$ is a $C^*$-semi-norm on $C_c(G,\lambda)$, Renault goes on to show that $\|\cdot\|$ is a $C^*$-norm on $C_c(G,\lambda)$. \begin{definition}[{\cite[Definition~II.1.12]{Renault1980}}] \notationindex{C*Glambda@$C^*(G,\lambda)$}\index{groupoid C*-algebra@groupoid $C^*$-algebra}Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with Haar system $\lambda$. The {\em groupoid $C^*$-algebra} of $G$ with respect to $\lambda$, denoted $C^*(G,\lambda)$, is the completion of $C_c(G,\lambda)$ with the $C^*$-norm $\|\cdot\|$. \end{definition} A more recent definition of the groupoid $C^*$-algebra, as used by Muhly in \cite[Theorem~2.42]{Muhly-book}, by Paterson in \cite[p.~101]{Paterson1999} and by Anantharaman-Delaroche and Renault in \cite[p.~146]{Anantharaman-Renault2000}, defines $C^*(G,\lambda)$ to be the closure of $C_c(G,\lambda)$ with the norm such that \[ \|f\|=\sup\{\|\pi(f)\|:\pi\text{ is a representation of }C_c(G,\lambda)\} \] for each $f\in C_c(G,\lambda)$.
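Before continuing, it may help to see these constructions in the simplest possible case; the following observation is a routine sanity check of ours and is not part of the development cited above. If $G=\Gamma$ is a discrete group, regarded as a groupoid with the single unit $e$ and with the counting measure $\lambda^e$ as Haar system, then $r(\alpha)=e$ for every $\alpha\in\Gamma$ and the formulas above become
\[
f\ast g(\alpha)=\sum_{\beta\in\Gamma}f(\beta)g(\beta^{-1}\alpha)
\quad\text{and}\quad
f^*(\alpha)=\overline{f(\alpha^{-1})}\qquad(f,g\in C_c(\Gamma)),
\]
the usual convolution and involution on the group algebra $\mathbb C[\Gamma]$; moreover $\|f\|_I=\sum_{\beta\in\Gamma}\lvert f(\beta)\rvert$ is the $\ell^1$-norm, and $C^*(G,\lambda)$ recovers the full group $C^*$-algebra $C^*(\Gamma)$.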
It follows from Renault's disintegration theorem \cite[Proposition~4.2]{Renault1987} (or \cite[Theorem~3.32]{Muhly-book}) that every representation of $C_c(G,\lambda)$ is bounded and that $\|\cdot\|$ is a $C^*$-norm on $C_c(G,\lambda)$, so this more recent definition is compatible with Renault's original definition. It was shown by Muhly, Renault and Williams in \cite[Theorem~2.8]{Muhly-Renault-Williams1987} that if $\lambda$ and $\mu$ are Haar systems for a second countable, locally compact groupoid $G$, then $C^*(G,\lambda)$ is Morita equivalent to $C^*(G,\mu)$. When the Haar system is clear from the context we will usually write $C^*(G)$ instead of $C^*(G,\lambda)$. We now show how a Radon measure on the unit space of a groupoid $G$ with a Haar system $\lambda$ induces a representation of $C^*(G,\lambda)$. These induced representations will be used heavily in Chapter \ref{chapter_strength_of_convergence}. The next definition is from \cite{Muhly-book}. Alternative descriptions may be found in \cite[p.~234]{Muhly-Williams1990} and \cite[pp.~81--82]{Renault1980}. \begin{definition}[{\cite[Definition~2.45]{Muhly-book}}]\label{def_induced_representation}\index{induced representation}\index{representation!induced} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$ and let $\mu$ be a Radon measure on $G^{(0)}$. \begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item We write $\nu=\mu\circ\lambda=\int \lambda^x \, d\mu$ for the measure on $G$ defined for every Borel-measurable function $f:G\rightarrow\mathbb C$ by \[\int_G f(\gamma)\, d\nu(\gamma)=\int_{G^{(0)}}\int_G f(\gamma)\, d\lambda^x(\gamma)\, d\mu(x).\] We call $\nu$ the measure induced by $\mu$, and we write $\nu^{-1}$ for the image of $\nu$ under the homeomorphism $\gamma\mapsto \gamma^{-1}$. \item For $f\in C_c(G)$, $\mathrm{Ind}\,\mu(f)$\notationindex{INDMU@$\mathrm{Ind}\, \mu$} is the operator on $L^2(G,\nu^{-1})$ defined by the formula \begin{align*} \big(\mathrm{Ind}\,\mu(f)\xi\big)(\gamma)&=\int_G f(\alpha)\xi(\alpha^{-1}\gamma)\, d\lambda^{r(\gamma)}(\alpha)\\ &=\int_G f(\gamma \alpha)\xi(\alpha^{-1})\, d\lambda^{s(\gamma)}(\alpha). \end{align*} \end{enumerate} \end{definition} For each $x\in G^{(0)}$, define $\mathrm{L}^x$\notationindex{Lx@$\mathrm{L}^x$} to be the representation induced by the point-mass measure $\delta_x$\notationindex{DELTAX@$\delta_x$} on $G^{(0)}$ as in \cite{Muhly-Williams1990,Clark-anHuef2008}. \begin{remark}\label{measure_induced_epsilon_x} It follows from the definition of the induced measure that for $x\in G^{(0)}$, the measure $\nu$ induced by $\delta_x$ is equal to $\lambda^x$. In particular we have $\nu^{-1}=\lambda_x$, so $\mathrm{L}^x$ acts on $L^2(G,\lambda_x)$. The operator $\mathrm{L}^x(f)$ is then given by \[ \big(\mathrm{L}^x(f)\xi\big)(\gamma)=\int_G f(\gamma \alpha^{-1})\xi(\alpha)\, d\lambda_x(\alpha) \] for all $\xi\in L^2(G,\lambda_x)$ and all $\gamma\in G$. There is a close relationship between the convolution on $C_c(G)$ and these induced representations: recall from Remark \ref{remark_alternative_convolution} that for $f,g\in C_c(G)$, the convolution $f\ast g\in C_c(G)$ is given by \[ f\ast g(\gamma)=\int_G f(\gamma\alpha^{-1})g(\alpha)\,d\lambda_{s(\gamma)}(\alpha)\quad\text{for all }\gamma\in G, \] so that \[ \big(\mathrm{L}^x(f)g\big)(\gamma)=f\ast g(\gamma)\quad\text{for any }x\in G^{(0)}\text{ and }\gamma\in G_x. 
\] \end{remark} We denote the norm in $L^2(G,\lambda_x)$ by $\|\cdot\|_x$\notationindex{0I2@$\lVert\cdot\rVert_x$}. Finally note that when $G$ is a second-countable locally-compact principal groupoid that admits a Haar system, each $\mathrm{L}^x$ is irreducible by \cite[Lemma~2.4]{Muhly-Williams1990}. The irreducible representations $\mathrm{L}^x$ will be a key focus of chapter \ref{chapter_strength_of_convergence}. \section{The transformation-group groupoid}\label{section_transformation-group_groupoid} Renault in \cite[Examples~1.2]{Renault1980} described a groupoid that can be constructed from a group acting on the right of a space. The following is a modification that describes a groupoid constructed from a left group action. \begin{definition}[{\cite[Example~3.3]{Clark-anHuef2008}}]\label{def_transformation-group_groupoid}\index{transformation-group groupoid} Let $(H,X)$ be a transformation group with $H$ acting on the left of the space $X$. Then $G=H\times X$ with \[ G^{(2)}=\big\{\big((h,x),(k,y)\big)\in G\times G:y=h^{-1}\cdot x\big\} \] and operations $(h,x)(k,h^{-1}\cdot x)=(hk,x)$ and $(h,x)^{-1}=(h^{-1},h^{-1}\cdot x)$ is called the {\em transformation-group groupoid}. If $H$ and $X$ are topological spaces, then $G$ is considered to have the product topology. \end{definition} We identify the space $\{e\}\times X$ with $X$, so that the range and source maps $r,s:H\times X\rightarrow X$ are given by $r(h,x)=x$ and $s(h,x)=h^{-1}\cdot x$. A transformation group $(H,X)$ is {\em free}\index{free transformation group} if whenever $h\in H$ and $x\in X$ satisfy $h\cdot x=x$, then $h$ is the identity $e$ of $H$. \begin{lemma} Suppose $(H,X)$ is a transformation group with $H$ acting on the left of $X$ and let $G$ be the transformation-group groupoid $H\times X$. Then $G$ is principal if and only if $(H,X)$ is free. \begin{proof} Suppose $G$ is principal and that $h\in H$ and $x\in X$ satisfy $h\cdot x=x$. Then $(h^{-1},x)$ and $(e,x)$ are elements of $G$ with $r(h^{-1},x)=x=r(e,x)$ and $s(h^{-1},x)=h\cdot x=x=s(e, x)$. Since $G$ is principal it follows that $(h^{-1},x)=(e,x)$, so $h=e$ and $(H,X)$ is free. Conversely, suppose $(H,X)$ is free and that $(h,x),(k,y)\in G$ have equal range and source. Then $x=r(h,x)=r(k,y)=y$. Similarly $h^{-1}\cdot x=s(h,x)=s(k,y)=k^{-1}\cdot y$ so, since $x=y$, we have $kh^{-1}\cdot x=x$ and since $(H,X)$ is free we have $kh^{-1}=e$. Then $h=k$ and so $(h,x)=(k,y)$. \end{proof} \end{lemma} \begin{remark}\label{remark_Haar_system_for_transformation_group} Suppose $(H,X)$ is a locally compact, Hausdorff transformation group with $H$ acting on the left of the space $X$. If $\delta_x$ is the point-mass measure on $X$ and $\mu$ is a left Haar measure on $H$, then $\{\lambda^x:=\mu\times\delta_x:x\in X\}$ is a left Haar system for the transformation-group groupoid $H\times X$. \end{remark} We will now introduce the representations induced by evaluation maps on $C_0(X)$ as in \cite{Archbold-anHuef2006, anHuef2002, Williams1981, Williams1981-2}. Suppose that $H$ acts freely on $X$ and let $\nu$ be the right Haar measure such that $\nu(E)=\mu(E^{-1})$ for every Borel subset $E$ of $H$. 
For each $x\in X$, we define $\mathrm{Ind}\,\epsilon_x$ to be the representation $\tilde{\epsilon}_x\rtimes\lambda$ of the transformation group $C^*$-algebra $C_0(X)\rtimes H$ on the Hilbert space $L^2(H,\nu)$, where $\big(\tilde{\epsilon}_x(f)\xi\big)(h)=f(h\cdot x)\xi(h)$ and $(\lambda_k\xi)(h)=\Delta(k)^{1/2}\xi(k^{-1}h)$ for $f\in C_0(X)$, $\xi\in L^2(H,\nu)$ and $h,k\in H$ (see \cite[Lemma~4.14]{Williams1981}). It follows that \[ \big(\mathrm{Ind}\,\epsilon_x(f)\xi\big)(h)=\int_H f(k,h\cdot x)\xi(k^{-1}h)\Delta(k)^{1/2}\,d\nu(k) \] for $f\in C_c(H\times X)$ and $\xi\in L^2(H,\nu)$. \begin{remark}\label{remark_algebras_isomorphic} If $(H, X)$ is a second countable, free transformation group, then the representations $\mathrm{L}^x$ of the transformation-group groupoid $C^*$-algebra are unitarily equivalent to the representations $\mathrm{Ind}\,\epsilon_x$. Specifically, let $\nu$ be a choice of right Haar measure on $H$ and $\Delta$ the associated modular function. The map $\iota:C_c(H\times X)\to C_c(H\times X)$ defined by $\iota(f)(h,x)=f(h,x)\Delta(h)^{1/2}$ extends to an isomorphism $\iota$ of the groupoid $C^*$-algebra $C^*(H\times X)$ onto the transformation-group $C^*$-algebra $C_0(X)\rtimes H$ \cite[p.~58]{Renault1980}. Fix $x\in X$. Then there is a unitary $U_x:L^2(H,\nu)\to L^2(H\times X,\lambda_x)$, characterised by $U_x(\xi)(h, y)=\xi(h)\delta_x(h^{-1}\cdot y)$ for $\xi\in C_c(H)$, and $U_x\,\mathrm{Ind}\,\epsilon_x(\iota(f))\,U_x^*=\mathrm{L}^x(f)$ for $f\in C^*(H\times X)$. \end{remark} \section{The path groupoid of a directed graph}\label{sec_path_groupoid} A {\em directed graph}\index{directed graph} $E=(E^0,E^1,r_E,s_E)$\notationindex{E@$E, E^0, E^1$}\notationindex{RE@$r_E$}\notationindex{SE@$s_E$} consists of two countable sets $E^0,E^1$ and functions $r_E,s_E:E^1\rightarrow E^0$. We call elements of $E^0$ and $E^1$ {\em vertices}\index{vertex!in a directed graph} and {\em edges}\index{edge!in a directed graph} respectively. For each edge $e$, we call $s_E(e)$ the {\em source}\index{source!of an edge in directed graph} of $e$ and call $r_E(e)$ the {\em range}\index{range!of an edge in a directed graph} of $e$. Where the graph is clear from the context, we usually write $s$ instead of $s_E$ and $r$ instead of $r_E$. \begin{example} Let $E$ be the directed graph such that $E^0=\{a,b\}$, $E^1=\{x,y\}$ and $r,s$ are defined by: $r(x)=a$, $s(x)=b$, $r(y)=b$ and $s(y)=b$. This directed graph is fully described by the following picture. \begin{center}\begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \clip (-0.3em,-1.8em) rectangle (12.6em,1.9em); \node (a) at (0em,0em) {$\scriptstyle a$}; \node (b) at (8em,0em) {$\scriptstyle b$}; \draw[->] (b) to node [midway,above=-0.1em] {$\scriptstyle x$} (a); \draw [<-] (b) .. controls (13em,5em) and (13em,-5em) .. (b) node [midway,right=-0.2em] {$\scriptstyle y$}; \end{tikzpicture} \end{center} \end{example} A {\em finite path}\index{finite path!in a directed graph} $\alpha$ in a directed graph $E$ is a finite sequence $\alpha_1\alpha_2\cdots\alpha_k$ of edges $\alpha_i$ with $s_E(\alpha_j)=r_E(\alpha_{j+1})$ for $1\leqslant j\leqslant k-1$, or a vertex $v\in E^0$; we write $s_E(\alpha)=s_E(\alpha_k)$\notationindex{SE@$s_E$}, $r_E(\alpha)=r_E(\alpha_1)$\notationindex{RE@$r_E$} and $s_E(v)=r_E(v)=v$. We define the {\em length}\index{length!of a finite path} $\lvert\alpha\rvert$\notationindex{ALPHAABS@$\lvert\alpha\rvert$} of $\alpha$ to be $k$ and the length $\lvert v\rvert$ of $v$ to be zero.
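To make these path conventions concrete, consider again the two-vertex graph of the example above (this illustration is ours). Since $s(x)=b=r(y)$ and $s(y)=b=r(y)$, the finite paths of $E$ are the vertices $a$ and $b$ together with the paths
\[
y^k\ (k\geqslant 1)\qquad\text{and}\qquad xy^k\ (k\geqslant 0),
\]
where $y^k$ denotes the path consisting of $k$ copies of the edge $y$. For instance, $\alpha=xyy$ is a finite path with $\lvert\alpha\rvert=3$, $r(\alpha)=r(x)=a$ and $s(\alpha)=s(y)=b$.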
An {\em infinite path} $x=x_1x_2\cdots$ is defined similarly (with $|x|=\infty$), although $s_E(x)$ remains undefined. Let $E^*$ and $E^\infty$\notationindex{E*@$E^*,E^\infty$} denote the set of all finite paths and infinite paths in $E$ respectively. If $\alpha=\alpha_1\cdots\alpha_k$ and $\beta=\beta_1\cdots\beta_j$ are finite paths then, provided $s_E(\alpha)=r_E(\beta)$, let $\alpha\beta$ be the path $\alpha_1\cdots\alpha_k\beta_1\cdots\beta_j$. When $x\in E^\infty$ with $s_E(\alpha)=r_E(x)$ define $\alpha x$ similarly. We of course simplify notation and write $s$ for $s_E$ and $r$ for $r_E$. Note that this notation is somewhat strange: a path $\alpha=\alpha_1\alpha_2\alpha_3$ is drawn as \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.2em,-0.2em) rectangle (16.7em, 0.8em); \foreach \x in {0,1,2,3} { \node (v\x) at (\cellwidth*\x em,0em) {}; \fill[black] (v\x) circle (0.15em); } \foreach \x / \y in {0/1,1/2,2/3} \draw [<-] (v\x) to node[above=-0.2em] {$\alpha_{\y}$} (v\y); \end{tikzpicture} \end{center} where the arrows are reversed from what seems natural. In Kumjian, Pask, Raeburn and Renault's original paper they used the seemingly more natural approach, however the definition of a path that we use is consistent with the later development of higher-rank graphs, where edges become morphisms in a category and the new convention ensures that ``composition of morphisms is compatible with multiplication of operators in $B(\mathcal H)$'' \cite[p.~2]{Raeburn2005}. This new convention is used elsewhere in work on directed graphs including in Raeburn's book \cite{Raeburn2005}. A directed graph $E$ is {\em row-finite}\index{row finite!directed graph} if $r^{-1}(v)$ is finite for every $v\in E^0$. For any finite path $\alpha$ in a directed graph $E$, let $\alpha E^\infty$\notationindex{ALPHAEINFTY@$\alpha E^\infty$}\label{def_alphaEinfty} be the set of all paths in $E^\infty$ of the form $\alpha y$ for some $y\in E^\infty$. \begin{prop}[{\cite[Corollary~2.2]{kprr1997}}]\label{prop_topology_on_Einfty} Suppose $E$ is a row-finite directed graph. The sets $\alpha E^\infty$ for $\alpha\in E^*$ form a basis of compact open sets for a locally compact, $\sigma$-compact, totally disconnected, Hausdorff topology on $E^\infty $ \end{prop} Readers already familiar with graph algebras will recall that the key results in Kumjian, Pask, Raeburn and Renault's \cite{kprr1997} relate to directed graphs without sources (that is, vertices $v$ with $r^{-1}(v)=\emptyset$), Proposition \ref{prop_topology_on_Einfty} applies to any row-finite directed graph $E$ since the set $E^\infty$ only relates to the infinite paths of $E$. Before presenting an example that illustrates convergence in the topology on the infinite path space $E^\infty$, we will introduce some notation that will only be used in this thesis for related examples. \notationindex{VFinfty@$[v,f]^\infty$}\label{def_vfinfty}When $v$ is a vertex, $f$ is an edge, and there is exactly one infinite path with range $v$ that includes the edge $f$, then we denote this infinite path by $[v,f]^\infty$. Define $\mathbb P:=\mathbb N\backslash\{0\}$\notationindex{P@$\mathbb P\ (=\mathbb N\char`\\\braces{0})$}. \needspace{6\baselineskip} \begin{example} Let $E$ be the following row-finite directed graph. 
\begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-2em,-5.8em) rectangle (4*\cellwidth em + 2em,0.7em); \node (x1y0) at (0 em,0 em) {$\scriptstyle v$}; \foreach \x in {2,3,4,5} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {} \foreach \x in {1,2,3,4,5} \node (x\x y1) at (\cellwidth*\x em-\cellwidth em,-3 em) {}; \foreach \x in {1,2,3,4,5} \node (x\x y2) at (\cellwidth*\x em-\cellwidth em,-4.5 em) {}; \foreach \x in {2,3,4,5} \foreach \y in {0} \fill[black] (x\x y\y) circle (0.15em); \foreach \x in {1,2,3,4,5} \foreach \y in {1,2} \fill[black] (x\x y\y) circle (0.15em); \foreach \x / \z in {1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y0) to node[above=-0.2em] {$\scriptstyle x_{\x}$} (x\z y0); \foreach \x in {1,2,3,4,5} \draw [<-] (x\x y0) to node[right=-0.2em] {$\scriptstyle f_{\x}$} (x\x y1); \foreach \x in {1,2,3,4,5} \draw [<-] (x\x y1) -- (x\x y2); \foreach \x in {1,2,3,4,5} { \node (endtail\x) at (\cellwidth*\x em-\cellwidth em, -6.0em) {}; \draw [dotted,thick] (x\x y2) -- (endtail\x); } \node(endtailone) at (4*\cellwidth em + 2em,0em) {}; \draw[dotted,thick] (x5y0) -- (endtailone); \end{tikzpicture} \end{center} The sequence $\{[v,f_n]^\infty\}_{n\in\mathbb P}$ in $E^\infty$ converges to the infinite path $x=x_1x_2x_3\cdots$. \begin{proof} Let $U$ be a neighbourhood of $x$ in $E^\infty$. By Proposition \ref{prop_topology_on_Einfty} there exists $\alpha\in E^*$ such that $x\in \alpha E^\infty\subset U$. In order to have $x\in\alpha E^\infty$, we must have $\alpha=x_1x_2\cdots x_p$ for some $p\in\mathbb P$. For every $n>p$ we have $[v,f_n]^\infty\in x_1x_2\cdots x_pE^\infty$, so $[v,f_n]^\infty$ is eventually in $\alpha E^\infty\subset U$, establishing the convergence. \end{proof} \end{example} We say $x,y\in E^\infty$ are {\em shift equivalent}\index{shift equivalence!in a directed graph} with lag $n\in\mathbb Z$ (written $x\sim_n y$\notationindex{XSIMY@$x\sim_n y$}) if there exists $m\in\mathbb N$ such that $x_i=y_{i-n}$ for all $i\geqslant m$. \begin{example} The following graph is row finite. In this graph $x\sim_1y$ and $y\sim_{-1} x$. 
\begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\scale{3}; \clip (-0.3em,-1.2*\scale em) rectangle (5.6*\scale em,0.4em); \node (x0) at (0,0) {}; \node (x1) at (\scale em,-1/3*\scale em) {}; \node (y0) at (\scale em, -\scale em) {}; \node (z0) at (2*\scale em, -2/3*\scale em) {}; \node (z1) at (3*\scale em, -2/3*\scale em) {}; \node (z2) at (4*\scale em, -2/3*\scale em) {}; \node (z3) at (5*\scale em, -2/3*\scale em) {}; \node (endtail) at (5.7*\scale em,-2/3*\scale em) {}; \draw [black,<-] (x0) to node[anchor=south] {$\scriptstyle x_1$} (x1); \draw [black,<-] (x1) to node[anchor=south] {$\scriptstyle x_2$} (z0); \draw [black,<-] (y0) to node[anchor=north] {$\scriptstyle y_1$} (z0); \draw [black,<-] (z0) to node[anchor=north] {$\scriptstyle y_2$} (z1); \draw [black,<-] (z1) to node[anchor=north] {$\scriptstyle y_3$} (z2); \draw [black,<-] (z2) to node[anchor=north] {$\scriptstyle y_4$} (z3); \node [anchor=south] at ($(z0)!0.5!(z1)$) {$\scriptstyle x_3$}; \node [anchor=south] at ($(z1)!0.5!(z2)$) {$\scriptstyle x_4$}; \node [anchor=south] at ($(z2)!0.5!(z3)$) {$\scriptstyle x_5$}; \fill[black] (x0) circle (0.15em); \fill[black] (x1) circle (0.15em); \fill[black] (y0) circle (0.15em); \foreach \z in {0,1,2,3} \fill[black] (z\z) circle (0.15em); \draw [dotted,thick] (z3) -- (endtail); \end{tikzpicture} \end{center} \end{example} This definition of shift equivalence is a modification of the definition of shift equivalence in \cite[p.~509]{kprr1997} which required $x_i=y_{i+n}$ for all $i\geqslant m$. This change has been made because later work on higher-rank graphs by Kumjian and Pask in \cite[Definition~2.7]{Kumjian-Pask2000} involved constructing a groupoid in a way that is consistent with this modification. The change is harmless because the groupoids constructed with these two different notions of shift equivalence are isomorphic. Suppose $E$ is a row-finite directed graph. We refer to the groupoid constructed from $E$ by Kumjian, Pask, Raeburn and Renault in \cite{kprr1997} as the {\em path groupoid}\index{path groupoid} of $E$. Before describing this construction we remind any reader going back to examine \cite{kprr1997}: the notion of shift equivalence has changed slightly and the definition of a path has changed (swapping range and source) to achieve consistency with the more recent higher-rank graphs. The path groupoid $G_E$\notationindex{GE@$G_E$} constructed from $E$ is defined as follows: let \[ G_E:=\braces{(x,n,y)\in E^\infty\times \mathbb Z\times E^\infty :x\sim_n y}. \] For elements of \[ G_E^{(2)}:=\braces{\big((x,n,y),(y,m,z)\big):(x,n,y),(y,m,z)\in G_E}, \] Kumjian, Pask, Raeburn and Renault defined \[ (x,n,y)\cdot (y,m,z):=(x,n+m,z), \] and for arbitrary $(x,n,y)\in G_E$, defined $(x,n,y)^{-1}:=(y,-n,x)$. For each $\alpha,\beta\in E^*$ with $s(\alpha)=s(\beta)$, let $Z(\alpha,\beta)$ be the set \[ \braces{(x,n,y):x\in \alpha E^\infty, y\in \beta E^\infty, n=\lvert\alpha\rvert-\lvert\beta\rvert, x_i=y_{i-n}\text{ for } i>\lvert\alpha\rvert}. \] \begin{prop}[{\cite[Proposition 2.6]{kprr1997}}] Let $E$ be a row-finite directed graph. The collection of sets \[\braces{Z(\alpha,\beta):\alpha,\beta\in E^*, s(\alpha)=s(\beta)}\] is a basis of compact open sets for a second countable, locally compact, Hausdorff topology on $G_E$ that makes $G_E$ $r$-discrete.
The groupoid $G_E$ has a Haar system of counting measures and the map from $E^\infty$ onto $G_E^{(0)}$ defined by $x\mapsto (x,0,x)$ is a homeomorphism. \end{prop} In the proof of \cite[Proposition 2.6]{kprr1997} it is shown that $r$ is a local homeomorphism before using Renault's \cite[Lemma~2.7(ii), Proposition~2.8]{Renault1980} to show that the groupoid is $r$-discrete and admits a Haar system of counting measures. There are issues with the proofs of \cite[Lemma~2.7(ii), Proposition~2.8]{Renault1980} (see Remark \ref{remark_problems_Renault_results}); however, Proposition \ref{Renault_I_2_8i_sub} can be used instead. The point of examining the path groupoid is that properties of the groupoid are related to the combinatorial properties of the directed graph. In order to present our first result describing such a relationship, first define a {\em cycle}\index{cycle!of a directed graph} to be a finite path $\alpha$ of non-zero length with $r(\alpha)=s(\alpha)$ and $s(\alpha_i)\ne s(\alpha_j)$ for $i\ne j$. \begin{prop}\label{prop_principal_iff_no_cycles} Suppose $E$ is a row-finite directed graph. The path groupoid $G_E$ is principal if and only if $E$ contains no cycles. \begin{proof} Let $G=G_E$. We first show that if $E$ contains no cycles, then $G$ is principal. Suppose $G$ is not principal. Then there exists $x\in E^\infty$ such that the stability subgroup $G|_x$ is non-trivial and so there is a non-zero $n\in\mathbb Z$ of least absolute value with $x\sim_n x$. We may assume that $n>0$ since $x\sim_{-n}x$. As $x\sim_n x$ there exists $m\in\mathbb N$ such that $x_i=x_{i-n}$ for all $i\geqslant m$. Fix $i\geqslant\max\{m,n\}$. Then $s(x_i)=r(x_{i+1})=r(x_{i-n+1})$ because $x_{i+1}=x_{i+1-n}$, so $x_{i-n+1}x_{i-n+2}\cdots x_i$ is a finite path of length $n$ with equal range and source; a closed path of non-zero length must contain a cycle (pass to a closed subpath of minimal length), so $E$ contains a cycle. Conversely, suppose $E$ contains the cycle $\alpha=\alpha_1\alpha_2\cdots \alpha_n$. Then $x:=\alpha\alpha\cdots$ is in $E^\infty $ with $x\sim_n x$, so the stability subgroup $G|_x$ is non-trivial and $G$ is not principal. \end{proof} \end{prop} For a row-finite directed graph $E$, Section \ref{section_algebras_of_directed_graphs} describes the directed-graph Cuntz-Krieger $C^*$-algebra, $C^*(E)$, of $E$. Readers already familiar with the directed-graph Cuntz-Krieger $C^*$-algebras may be interested to know that the gauge-invariant uniqueness theorem (Theorem \ref{thm_gauge_invariant_uniqueness}) tells us that $C^*(G_E)$ is isomorphic to $C^*(E)$ when $E$ has no sources (i.e. no vertices $v$ with $r^{-1}(v)=\emptyset$). Example \ref{example_directed_graph_algebras_not_isomorphic} demonstrates why this `no sources' condition is necessary. \section{Proper, Cartan and integrable groupoids and their \texorpdfstring{$C^*$}{\it C*}-algebras}\label{sec_proper_Cartan_integrable} \begin{definition} A groupoid $G$ is {\em proper}\index{proper groupoid} if the map $\Phi:G\rightarrow G^{(0)}\times G^{(0)}$ defined by $\Phi(\gamma)=\big(r(\gamma),s(\gamma)\big)$ for every $\gamma\in G$ is proper. In other words, $\Phi^{-1}(K)$ is compact for every compact subset $K$ of $G^{(0)}\times G^{(0)}$, where $G^{(0)}\times G^{(0)}$ is equipped with the product topology. \end{definition} \begin{lemma}[{\cite[Lemma~7.2]{Clark2007}}] A Hausdorff groupoid $G$ is proper if and only if $G|_N$ is compact for every compact subset $N$ of $G^{(0)}$. \end{lemma} \begin{definition}[{\cite[Definition~7.1]{Clark2007}}] Suppose $G$ is a groupoid. A subset $N$ of $G^{(0)}$ is said to be {\em wandering}\index{wandering} if $G|_N=r^{-1}(N)\cap s^{-1}(N)$ is relatively compact.
We say $G$ is {\em Cartan}\index{Cartan groupoid} if every $x\in G^{(0)}$ has a wandering neighbourhood. \end{definition} Note that it follows immediately that every proper groupoid is Cartan. \begin{definition}[{\cite[Definition~3.1]{Clark-anHuef2008}}] Suppose $G$ is a locally compact, Hausdorff groupoid with Haar system $\lambda$. We say $G$ is {\em integrable}\index{integrable groupoid} if for every compact subset $N$ of $G^{(0)}$, $\sup_{x\in N}\braces{\lambda_x(G^N)}<\infty$. \end{definition} \begin{lemma}\label{lemma_integrable_if_Cartan} Every first countable, locally compact, Hausdorff, Cartan groupoid with a Haar system is integrable. \begin{proof} Let $G$ be a first countable, locally compact, Hausdorff, Cartan groupoid with a Haar system $\lambda$ and let $N$ be a compact subset of $G^{(0)}$. Since $G$ is Cartan, for every $x\in N$ there is a compact neighbourhood $K_x$ of $x$ such that $G|_{K_x}$ is compact. Since each $\lambda_x$ is a Radon measure, every $\lambda_x(G|_{K_x})$ is finite. Since $G$ is locally compact and Hausdorff, $x$ has a neighbourhood base of compact sets. So by Lemma \ref{corollary_astrid_lemma}, for every $x\in N$ there exists a compact neighbourhood $W_x$ of $x$ such that $y\in W_x$ implies $\lambda_y(G|_{K_x})<\lambda_x(G|_{K_x})+1$. For every $x\in N$, let $N_x$ be the compact neighbourhood $K_x\cap W_x$ of $x$. Now $\braces{\mathrm{int}\,N_x:x\in N}$ is an open cover of the compact set $N$ so there exist $x_1,x_2,\ldots,x_p\in N$ such that $N\subset \cup_{i=1}^p N_{x_i}$. We claim that $\lambda_x(G^N)\leqslant\sum_{i=1}^p\lambda_{x_i}(G|_{K_{x_i}})+p$ for every $x\in N$. Fix $x\in N$ and observe that \[ \lambda_x(G^N)\leqslant\lambda_x\Big(\bigcup_{i=1}^p G^{N_{x_i}}\Big)\leqslant\sum_{i=1}^p\lambda_x(G^{N_{x_i}}). \] It will suffice to show that $\lambda_x(G^{N_{x_i}})\leqslant \lambda_{x_i}(G|_{K_{x_i}})+1$ for every $i$. Fix $i$ with $1\leqslant i\leqslant p$. Since the support of $\lambda_x$ is $s^{-1}(x)$, in order for $\lambda_x(G^{N_{x_i}})$ to be non-zero there must exist $\gamma\in G$ such that $s(\gamma)=x$ and $r(\gamma)\in N_{x_i}$. Then \[ \lambda_x(G^{N_{x_i}})=\lambda_x(G^{N_{x_i}}\gamma^{-1}\gamma)=\lambda_{r(\gamma)}(G^{N_{x_i}}\gamma^{-1})=\lambda_{r(\gamma)}(G^{N_{x_i}})=\lambda_{r(\gamma)}(G|_{N_{x_i}}). \] Since $r(\gamma)\in N_{x_i}\subset W_{x_i}$ and $N_{x_i}\subset K_{x_i}$, \[ \lambda_x(G^{N_{x_i}})=\lambda_{r(\gamma)}(G|_{N_{x_i}})\leqslant \lambda_{r(\gamma)}(G|_{K_{x_i}}) \leqslant \lambda_{x_i}(G|_{K_{x_i}})+1, \] and the result follows. \end{proof} \end{lemma} \chapter{Strength of convergence in the orbit space of a groupoid}\label{chapter_strength_of_convergence} In Section \ref{sec_upper_lower_multiplicities} we introduce the upper and lower multiplicity numbers that Archbold and Spielberg in \cite{Archbold1994,Archbold-Spielberg1996} associated to irreducible representations. In Section \ref{sec_AaH} we will introduce the notion of $k$-times convergence in the orbit space of a transformation group and state the theorems by Archbold and an Huef in \cite{Archbold-anHuef2006} that relate $k$-times convergence in a free transformation group to upper and lower multiplicity numbers of the transformation group $C^*$-algebra; we will provide an example based on the transformation group that Green described in \cite[pp. 95--96]{Green1977}.
In Sections \ref{sec_lower_multiplicity_1} to \ref{sec_upper_multiplicity} we generalise the results in Archbold and an Huef's \cite{Archbold-anHuef2006} to results about principal groupoids. Finally in Section \ref{section_graph_algebra_examples} we will demonstrate these results with Kumjian, Pask, Raeburn and Renault's path groupoid of various directed graphs. These examples are new and interesting since they are not constructed from transformation groups. The contents of Sections \ref{sec_lower_multiplicity_1} to \ref{section_graph_algebra_examples} has been published as \cite{Hazlewood-anHuef2011}. \section{Upper and lower multiplicities}\label{sec_upper_lower_multiplicities} Let $A$ be a $C^*$-algebra. We write $\theta$ for the canonical surjection from the space $P(A)$ of pure states of $A$ to the spectrum $\hat A$ of $A$. We frequently identify an irreducible representation $\pi$ with its equivalence class in $\hat A$ and we write $\mathcal H_\pi$ for the Hilbert space on which $\pi(A)$ acts. Let $\pi\in\hat A$ and let $\braces{\pi_\alpha}$ be a net in $\hat{A}$. \notationindex{MU@$M_\mathrm{U},M_\mathrm{L}$}We now recall the definitions of {\em upper} and {\em lower multiplicity}\index{upper multiplicity}\index{lower multiplicity} $M_\mathrm{U}(\pi)$ and $M_\mathrm{L}(\pi)$ from \cite{Archbold1994}, and {\em relative upper} and {\em relative lower multiplicity}\index{relative upper multiplicity}\index{relative lower multiplicity} $\mathrm{M}_\mathrm{U}(\pi,\braces{\pi_\alpha})$ and $\mathrm{M}_\mathrm{L}(\pi,\braces{\pi_\alpha})$ from \cite{Archbold-Spielberg1996}. Let $\mathcal N$ be the weak$^*$-neighbourhood base at zero in the dual $A^*$ of $A$ consisting of all open sets of the form \[ N=\{\psi\in A^*:\lvert\psi(a_i)\rvert<\epsilon, 1\leqslant i\leqslant n\}, \] where $\epsilon>0$ and $a_1,a_2,\ldots,a_n\in A$. Suppose $\phi$ is a pure state of $A$ associated with $\pi$ and let $N\in\mathcal N$. Define \[ V(\phi,N)=\theta\big((\phi+N)\cap P(A)\big), \] an open neighbourhood of $\pi$ in $\hat{A}$. For $\sigma\in \hat{A}$ let \[ \mathrm{Vec}(\sigma,\phi,N)=\braces{\eta\in \mathcal H_\sigma:\lvert\eta\rvert=1,(\sigma(\cdot)\eta\, |\,\eta)\in \phi+N}. \] Note that $\mathrm{Vec}(\sigma,\phi,N)$ is non-empty if and only if $\sigma\in V(\phi,N)$. For any $\sigma\in V(\phi,N)$ define $d(\sigma,\phi,N)$ to be the supremum in $\mathbb P\cup\braces{\infty}$ of the cardinalities of finite orthonormal subsets of $\mathrm{Vec}(\sigma,\phi,N)$. Write $d(\sigma,\phi,N)=0$ when $\mathrm{Vec}(\sigma,\phi,N)$ is empty. Define \[ \mathrm{M}_\mathrm{U}(\phi,N)=\underset{\sigma\in V(\phi,N)}{\sup} d(\sigma,\phi,N)\in\mathbb P\cup\braces{\infty}. \] Note that if $N'\in \mathcal N$ and $N'\subset N$, then $\mathrm{M}_\mathrm{U}(\phi,N')\leqslant\mathrm{M}_\mathrm{U}(\phi,N)$. Now define \[ \mathrm{M}_\mathrm{U}(\phi)=\underset{N\in\mathcal N}{\inf}\mathrm{M}_\mathrm{U}(\phi,N)\in\mathbb P\cup\braces{\infty}. \] By \cite[Lemma~2.1]{Archbold1994}, the value of $\mathrm{M}_\mathrm{U}(\phi)$ is independent of the pure state $\phi$ associated to $\pi$ and so $\mathrm{M}_\mathrm{U}(\pi):=\mathrm{M}_\mathrm{U}(\phi)$ is well-defined. For lower multiplicity, assume that $\braces{\pi}$ is not open in $\hat{A}$, and using \cite[Lemma~2.1]{Archbold1994} again, define \[ \mathrm{M}_\mathrm{L}(\pi):=\underset{N\in\mathcal N}\inf \Big(\underset{\sigma\rightarrow\pi, \sigma\ne\pi}{\lim\,\inf} d(\sigma,\phi,N)\Big)\in\mathbb P\cup\braces{\infty}. 
\] Now suppose that $\{\pi_\alpha\}_{\alpha\in\Lambda}$ is a net in $\hat{A}$. For $N\in\mathcal N$ let \[ \mathrm{M}_\mathrm{U}\big(\phi,N,\{\pi_\alpha\}\big)=\underset{\alpha\in\Lambda}{\lim\,\sup}\, d(\pi_\alpha,\phi,N)\in\mathbb N\cup\braces{\infty}. \] Note that if $N'\in\mathcal N$ and $N'\subset N$ then $\mathrm{M}_\mathrm{U}\big(\phi,N',\{\pi_\alpha\}\big)\leqslant \mathrm{M}_\mathrm{U}\big(\phi,N,\{\pi_\alpha\}\big)$. Then \[ \mathrm{M}_\mathrm{U}\big(\pi,\{\pi_\alpha\}\big):=\underset{N\in\mathcal N}\inf \mathrm{M}_\mathrm{U}\big(\phi,N,\{\pi_\alpha\}\big)\in\mathbb N\cup\braces{\infty}, \] is well-defined because the right-hand side is independent of the choice of $\phi$ by an argument similar to the proof of \cite[Lemma~2.1]{Archbold1994}. Similarly define \[ \mathrm{M}_\mathrm{L}\big(\phi,N,\{\pi_\alpha\}\big):=\underset{\alpha\in\Lambda}{\lim\,\inf}\, d(\pi_\alpha,\phi,N)\in\mathbb N\cup\braces{\infty}, \] and let \[ \mathrm{M}_\mathrm{L}\big(\pi,\{\pi_\alpha\}\big)=\underset{N\in\mathcal N}\inf \mathrm{M}_\mathrm{L}\big(\phi,N,\{\pi_\alpha\}\big)\in\mathbb N\cup\braces{\infty}. \] It follows that for any irreducible representation $\pi$ and any net $\{\pi_\alpha\}_{\alpha\in\Lambda}$ of irreducible representations, \[ \mathrm{M}_\mathrm{L}\big(\pi,\{\pi_\alpha\}\big)\leqslant\mathrm{M}_\mathrm{U}\big(\pi,\{\pi_\alpha\}\big)\leqslant\mathrm{M}_\mathrm{U}(\pi) \] and, if $\{\pi_\alpha\}$ converges to $\pi$ with $\pi_\alpha\ne\pi$ eventually, $\mathrm{M}_\mathrm{L}(\pi)\leqslant\mathrm{M}_\mathrm{L}\big(\pi,\{\pi_\alpha\}\big)$. Finally, if $\{\pi_\beta\}$ is a subnet of $\{\pi_\alpha\}$, then \[ \mathrm{M}_\mathrm{L}\big(\pi,\{\pi_\alpha\}\big)\leqslant \mathrm{M}_\mathrm{L}\big(\pi,\{\pi_\beta\}\big)\leqslant \mathrm{M}_\mathrm{U}\big(\pi,\{\pi_\beta\}\big)\leqslant \mathrm{M}_\mathrm{U}\big(\pi,\{\pi_\alpha\}\big). \] \begin{example}[{\cite[p. 121]{Archbold1994}}] Let $A$ be the $C^*$-algebra of all continuous functions $f:[0,1]\rightarrow M_4(\mathbb C)$ such that, for each $n\geqslant 1$, \[ f\bigg(\frac{1}{n}\bigg)=\bigg(\begin{array}{cc}\sigma_n(f) & 0 \\0 & \sigma_n(f)\end{array}\bigg)\quad\text{where $\sigma_n(f)\in M_2(\mathbb C)$} \] and $f(0)=\lambda(f)1_{M_4(\mathbb C)}$ for some scalar $\lambda(f)$. We have \[ \widehat{A}=\{\lambda\}\cup\{\sigma_n:n\geqslant 1\}\cup\{\epsilon_t:0<t<1,t^{-1}\notin\mathbb N\} \] where $\epsilon_t(f)=f(t)$ for every $f\in A$. Archbold in \cite[p. 125]{Archbold1994} proves that $\mathrm{M}_\mathrm{U}(\lambda)=4$ and $\mathrm{M}_\mathrm{L}(\lambda)=2$. We now recall Archbold's proof. \begin{proof}[Proof that $\mathrm{M}_\mathrm{U}(\lambda)=4$ and $\mathrm{M}_\mathrm{L}(\lambda)=2$] We begin by showing that $\mathrm{M}_\mathrm{U}(\lambda)=4$. Let $\{e_1,e_2,e_3,e_4\}$ be the usual basis for $\mathbb C^4$. For $0<t<1$ with $t^{-1}\notin\mathbb N$, we define \[ \phi_t^{(i)}=\big(\epsilon_t(\cdot)e_i\,|\,e_i\big)\quad (1\leqslant i\leqslant 4). \] Then, for each $i$, $\phi_t^{(i)}\rightarrow\lambda$ as $t\rightarrow 0$. Let $N\in\mathcal N$. There exists $t_0\in(0,1)$ such that \[ \phi_t^{(i)}\in\lambda+N\quad(0<t<t_0,t^{-1}\notin\mathbb N,1\leqslant i\leqslant 4). \] Hence $d(\epsilon_t,\lambda,N)=4$ for $0<t<t_0$ with $t^{-1}\notin\mathbb N$ and so $\mathrm{M}_\mathrm{U}(\lambda,N)\geqslant 4$. Since $N$ was chosen arbitrarily, $\mathrm{M}_\mathrm{U}(\lambda)\geqslant 4$. The reverse inequality follows from the fact that $\dim(\pi)\leqslant 4$ for all $\pi\in A^\wedge$. 
To show that $\mathrm{M}_\mathrm{L}(\lambda)=2$, let $\{u_1,u_2\}$ be the usual orthonormal basis for $\mathbb C^2$ and define \[ \psi_n^{(i)}=\big(\sigma_n(\cdot)u_i\,|\,u_i\big)\quad\text{for }i=1,2. \] Then $\psi_n^{(i)}\rightarrow\lambda$ for each $i$. Fix $N\in\mathcal N$. There exists $t_1\in(0,1)$ such that if $n^{-1}<t_1$ then $\psi_n^{(i)}\in\lambda+N$ for each $i$ and if $0<t<t_1$ with $t^{-1} \notin\mathbb N$ then $\phi_t^{(i)}\in\lambda+N$ for each $i$. Let \[ V=\{\lambda\}\cup\{\sigma_n:n^{-1}<t_1\}\cup\{\epsilon_t:0<t<t_1, t^{-1}\notin\mathbb N\}, \] an open neighbourhood of $\lambda$ contained in $V(\lambda,N)$. Then \[ \inf_{\rho\in V\backslash\{\lambda\}} d(\rho,\lambda,N)=2 \] and so $\mathrm{M}_\mathrm{L}(\lambda,N)\geqslant 2$. Since $N$ was fixed arbitrarily, $\mathrm{M}_\mathrm{L}(\lambda)\geqslant 2$. The reverse inequality can be obtained by noting that $\sigma_n\rightarrow\lambda$ and $\dim(\sigma_n)=2$ for all $n$. \end{proof} \end{example} \section[Strength of convergence in a transformation group]{Strength of convergence in the orbit space of a transformation group}\label{sec_AaH} Recall that a sequence $\{h_n\}\subset H$ tends to infinity if it admits no convergent subsequence. \begin{definition}[{\cite[p. 92]{Archbold-anHuef2006}}]\label{def_k-times_convergence_transformation_group} Suppose $(H,X)$ is a transformation group and let $k\in\mathbb P$. A sequence $\{x_n\}$ in $X$ is \index{k-times convergence@$k$-times convergence!in the orbit space of a transformation group}{\em $k$-times convergent} in $X/H$ to $z\in X$ if there exist $k$ sequences $\{h_n^{(1)}\},\{h_n^{(2)}\},\ldots,\{h_n^{(k)}\}$ in $H$ such that \begin{enumerate} \item\label{def_k-times_convergence_transformation_group_1} $h_n^{(i)}\cdot x_n\rightarrow z$ as $n\rightarrow\infty$ for $1\leqslant i\leqslant k$; and \item\label{def_k-times_convergence_transformation_group_2} if $1\leqslant i<j\leqslant k$ then $h_n^{(j)}(h_n^{(i)})^{-1}\rightarrow\infty$ as $n\rightarrow\infty$. \end{enumerate} \end{definition} The following example describes a transformation group from Green's \cite{Green1977}. Following the description of the transformation group we will show that its orbit space exhibits $2$-times convergence. \begin{example}\label{example_2-times_convergence_Green}\index{Green's transformation group example} Define $\psi:\mathbb R\rightarrow\mathbb R^3$ by $\psi(a)=(0,a,0)$ and for each $n\in\mathbb P$, define $\phi_n:\mathbb R\rightarrow \mathbb R^3$ by \[ \phi_n(a)=\left\{\begin{tabular}{ll} $(2^{-2n},a,0)$&$a\leqslant n$\\ $\big(2^{-2n}-\frac{a-n}{\pi}2^{-2n-1},n\cos(a-n),n\sin(a-n)\big)$&$n<a<n+\pi$\\ $(2^{-2n-1},a-\pi-2n,0)$&$a\geqslant n+\pi$ \end{tabular} \right. \] For each positive $n$, the image of $\phi_n$ ``consists of two vertical half lines connected by a half circle'' \cite[p. 96]{Green1977}. By projecting onto the first two components of $\mathbb R^3$ we can sketch the images of $\phi_1$, $\phi_2$ and $\phi_3$ in $\mathbb R^2$ as in Figure \ref{figure_sketch_phi012}. In this figure the dashed lines are the projections of the half circles and the image of $\psi$ is the vertical axis.
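It is routine to check that each $\phi_n$ is well defined and continuous at the break points of its defining formula (this verification is ours and is recorded only for completeness): as $a\downarrow n$ the middle formula tends to
\[
\big(2^{-2n}-\tfrac{n-n}{\pi}2^{-2n-1},\,n\cos 0,\,n\sin 0\big)=(2^{-2n},n,0),
\]
which is the value of the first formula at $a=n$, while as $a\uparrow n+\pi$ it tends to
\[
\big(2^{-2n}-\tfrac{\pi}{\pi}2^{-2n-1},\,n\cos\pi,\,n\sin\pi\big)=(2^{-2n-1},-n,0),
\]
which is the value of the third formula at $a=n+\pi$ since $(n+\pi)-\pi-2n=-n$.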
\begin{figure} \begin{center} \begin{tikzpicture}
\node (phi1-1) at (0.25*80 em,1*2 em) {}; \node (phi1-pi1) at (0.125*80 em,-1*2 em) {}; \node (phi2-2) at (0.0625*80 em,2*2 em) {}; \node (phi2-pi2) at (0.0312*80 em,-2*2 em) {}; \node (phi2-pi4) at (phi2-pi2 |- 0em,0em) {}; \node (phi3-3) at (0.0156*80 em,3*2 em) {}; \node (phi3-pi3) at (0.0078*80 em,-3*2 em) {}; \draw[<-,thick,black] (phi1-1.center |- 0,-3.5*2 em) to (phi1-1.center); \draw[->,thick,black] (phi1-pi1.center) to (phi1-pi1.center |- 0,3.5*2 em) node[above] {\scriptsize$\phi_1$}; \draw[thick,dashed,black] (phi1-1.center) to (phi1-pi1.center); \draw[<-,thick,black] (phi2-2.center |- 0,-3.5*2 em) to (phi2-2.center); \draw[->,thick,black] (phi2-pi2.center) to (phi2-pi2.center |- 0,3.5*2 em) node[above] {\scriptsize$\phi_2$}; \draw[thick,dashed,black] (phi2-2.center) to (phi2-pi2.center); \draw[<-,thick,black] (phi3-3.center |- 0,-3.5*2 em) to (phi3-3.center); \draw[->,thick,black] (phi3-pi3.center) to (phi3-pi3.center |- 0,3.5*2 em) node[above] {\scriptsize$\phi_3$}; \draw[thick,dashed,black] (phi3-3.center) to (phi3-pi3.center); \fill[black] (phi1-1) circle (0.15em); \node [above] at (phi1-1) {\scriptsize$\phi_1(1)$}; \fill[black] (phi1-pi1) circle (0.15em); \node [below] at (phi1-pi1) {\scriptsize$\phi_1(1+\pi)$}; \node [right] at (phi2-2) {\scriptsize$\phi_2(2)$}; \fill[black] (phi2-2) circle (0.15em); \fill[black] (phi2-pi2) circle (0.15em); \node [right, rotate=-60] at (phi2-pi2) {\scriptsize$\phi_2(2+\pi)$}; \fill[black] (phi2-pi4) circle (0.15em); \node [right, rotate=60] at (phi2-pi4) {\scriptsize$\phi_2(4+\pi)$}; \node (phi1-0) at (phi1-1|-0,0) {}; \fill[black] (phi1-0) circle (0.15em); \node [above right] at (phi1-0) {\scriptsize$\phi_1(0)$}; \node (phi2-0) at (phi2-2|-0,0) {}; \fill[black] (phi2-0) circle (0.15em); \node [above right] at (phi2-0) {\scriptsize$\phi_2(0)$}; \node (phi1-pi2) at (phi1-pi1|-0,0) {}; \fill[black] (phi1-pi2) circle (0.15em); \node [above right] at (phi1-pi2) {\scriptsize$\phi_1(2+\pi)$}; \draw[->] (-2pt,0) -- (0.31*80 em,0); \draw[<->] (0,-3.7*2 em) -- (0,3.7*2 em); \foreach \x in {0.1,0.2,0.3} \draw[shift={(\x*80 em,0)}] (0pt,2pt) -- (0pt,-2pt) node[below=-0.3em] {\scriptsize$\x$}; \node[left] at (-2pt,0) {\scriptsize$0$}; \foreach \y in {-3,-2,-1,1,2,3} \draw[shift={(0,\y*2 em)}] (2pt,0pt) -- (-2pt,0pt) node[left] {\scriptsize$\y$}; \end{tikzpicture}\end{center} \vspace{-1em} \addtocounter{theorem}{1} \caption{A sketch of a projection of the images of $\phi_1$, $\phi_2$ and $\phi_3$ into $\mathbb R^2$.} \label{figure_sketch_phi012} \end{figure} Let $X=\{\phi_n(a):n\in\mathbb N, a\in\mathbb R\}$ (where we set $\phi_0:=\psi$) and define an action of $\mathbb R$ on $X$ by $b\cdot\phi_n(a)=\phi_n(a+b)$, so that $(\mathbb R,X)$ is a transformation group. Endow $X$ with the usual subspace topology of $\mathbb R^3$ so that the action of $\mathbb R$ on $X$ is continuous and $(\mathbb R,X)$ is a second countable, locally compact, Hausdorff transformation group. The action is free since $b\cdot\phi_n(a)=\phi_n(a)$ only when $b=0$. Representatives of the orbits in $(\mathbb R,X)$ are given by $z:=\psi(0)=(0,0,0)$ and $x_n:=\phi_n(0)=(2^{-2n},0,0)$ for $n\in\mathbb P$. We claim that $\{x_n\}$ converges $2$-times in $X/\mathbb R$ to $z$. To see this claim, let $b_n^{(1)}=0$ and $b_n^{(2)}=2n+\pi$ for each $n\in\mathbb P$.
The sequences $\{b_n^{(1)}\}$ and $\{b_n^{(2)}\}$ can be used as the sequences $\{h_n^{(1)}\}$ and $\{h_n^{(2)}\}$ in Definition \ref{def_k-times_convergence_transformation_group} to establish $2$-times convergence: note that \begin{align*} b_n^{(1)}\cdot x_n&=b_n^{(1)}\cdot\phi_n(0)=\phi_n(b_n^{(1)})=\phi_n(0)=(2^{-2n},0,0)\quad\text{and}\\ b_n^{(2)}\cdot x_n&=b_n^{(2)}\cdot\phi_n(0)=\phi_n(b_n^{(2)})=\phi_n(2n+\pi)=(2^{-2n-1},0,0), \end{align*} so that both $\{b_n^{(1)}\cdot x_n\}$ and $\{b_n^{(2)}\cdot x_n\}$ converge to $z=(0,0,0)$ in $X$ as $n\rightarrow\infty$. We also have $b_n^{(2)}(b_n^{(1)})^{-1}=(2n+\pi)-(0)=2n+\pi$, which tends to infinity as $n\rightarrow\infty$. It follows that $\{x_n\}$ converges $2$-times in $X/\mathbb R$ to $z$. In order for $\{x_n\}$ to converge $3$-times in $X/\mathbb R$ to $z$, there would have to be a third sequence $\{b_n^{(3)}\}$ as in Definition \ref{def_k-times_convergence_transformation_group}. In order to have $b_n^{(3)}\cdot x_n\rightarrow z=(0,0,0)$ in this transformation group, we would eventually require $b_n^{(3)}$ to be close to $b_n^{(1)}$ or $b_n^{(2)}$, so the second condition Definition \ref{def_k-times_convergence_transformation_group} cannot be satisfied for the sequence $\{b_n^{(3)}\}$. Thus $\{x_n\}$ does not converge $3$-times in $X/\mathbb R$ to $z$. \end{example} The next theorem is the first of the two main theorems in Archbold and an Huef's \cite{Archbold-anHuef2006}. After stating this theorem we will apply it to the transformation group from the previous example. A subset $S$ of a topological space $X$ is {\em locally closed}\index{locally closed subset} if there exist an open set $U$ of $X$ and a closed set $V$ of $X$ such that $S=U\cap V$; this is equivalent to $S$ being open in the closure of $S$ with the subspace topology by, for example, \cite[Lemma~1.25]{Williams2007}. \needspace{6\baselineskip} \begin{theorem}[{\cite[Theorem~1.1]{Archbold-anHuef2006}}]\label{AaH_thm1.1} Suppose $(H,X)$ is a free second-countable transformation group. Let $k$ be a positive integer, let $z\in X$ and let $\{x_n\}$ be a sequence in $X$. Assume that $H\cdot z$ is locally closed in $X$. Then the following are equivalent: \begin{enumerate} \item\label{AaH_thm1.1_1} the sequence $\{x_n\}$ converges $k$-times in $X/H$ to $z$; \item\label{AaH_thm1.1_2} $\mathrm{M}_\mathrm{L}(\mathrm{Ind}\,\epsilon_z,\{\mathrm{Ind}\,\epsilon_{x_n}\})\geqslant k$; \item for every open neighbourhood $V$ of $z$ such that $\{h\in H: h\cdot z\in V\}$ is relatively compact we have \[ \underset{n}{\lim\,\inf}\,\nu\big(\{h\in H:h\cdot x_n\in V\}\big)\geqslant k\nu\big(\{h\in H:h\cdot z\in V\}\big); \] \item there exists a real number $R>k-1$ such that for every open neighbourhood $V$ of $z$ with $\{h\in H:h\cdot z\in V\}$ relatively compact we have \[ \underset{n}{\lim\,\inf}\,\nu\big(\{h\in H:h\cdot x_n\in V\}\big)\geqslant R\nu\big(\{h\in H:h\cdot z\in V\}\big);\quad\text{and} \] \item there exists a basic decreasing sequence of compact neighbourhoods $\{W_m\}$ of $z$ such that, for each $m\geqslant 1$, \[ \underset{n}{\lim\,\inf}\,\nu\big(\{h\in H:h\cdot x_n\in W_m\}\big)> (k-1)\nu\big(\{h\in H:h\cdot z\in W_m\}\big). \] \end{enumerate} \end{theorem} In the next example we will use the equivalence of \eqref{AaH_thm1.1_1} and \eqref{AaH_thm1.1_2} to determine the lower multiplicity number of the representation induced by an evaluation map $\epsilon_z$. 
\begin{example}\label{example_Green_lower_multiplicity} In Example \ref{example_2-times_convergence_Green} we showed that in the free transformation group $X/\mathbb R$, the sequence $\{x_n\}$ converges $2$-times, but not $3$-times, in $X/\mathbb R$ to $z$. Since $\mathbb R\cdot z=\mathbb R\times\{0\}\times\{0\}$ is a closed subset of $X$, we can apply Theorem \ref{AaH_thm1.1} to see that $\mathrm{M}_\mathrm{L}(\mathrm{Ind}\,\epsilon_z,\{\mathrm{Ind}\,\epsilon_{x_n}\})=2$. \end{example} The other main theorem from Archbold and an Huef's \cite{Archbold-anHuef2006} follows and is the upper multiplicity analogue of Theorem \ref{AaH_thm1.1}. \needspace{6\baselineskip} \begin{theorem}[{\cite[Theorem~5.3]{Archbold-anHuef2006}}]\label{AaH_thm5.3} Suppose $(H,X)$ is a free second-countable transformation group. Let $k$ be a positive integer, let $z\in X$ and let $\{x_n\}\subset X$ be a sequence such that $\{H\cdot x_n\}$ converges to $H\cdot z$ in $X/H$. Assume that $H\cdot z$ is locally closed in $X$. Then the following are equivalent: \begin{enumerate} \item there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ which converges $k$-times in $X/H$ to $z$; \item $\mathrm{M}_\mathrm{U}(\mathrm{Ind}\,\epsilon_z,\{\mathrm{Ind}\,\epsilon_{x_n}\})\geqslant k$; \item for every open neighbourhood $V$ of $z$ such that $\{h\in H: h\cdot z\in V\}$ is relatively compact we have \[ \underset{n}{\lim\,\sup}\,\nu\big(\{h\in H:h\cdot x_n\in V\}\big)\geqslant k\nu\big(\{h\in H:h\cdot z\in V\}\big); \] \item there exists a real number $R>k-1$ such that for every open neighbourhood $V$ of $z$ with $\{h\in H:h\cdot z\in V\}$ relatively compact we have \[ \underset{n}{\lim\,\sup}\,\nu\big(\{h\in H:h\cdot x_n\in V\}\big)\geqslant R\nu\big(\{h\in H:h\cdot z\in V\}\big);\quad\text{and} \] \item there exists a basic decreasing sequence of compact neighbourhoods $\{W_m\}$ of $z$ such that, for each $m\geqslant 1$, \[ \underset{n}{\lim\,\sup}\,\nu\big(\{h\in H:h\cdot x_n\in W_m\}\big)> (k-1)\nu\big(\{h\in H:h\cdot z\in W_m\}\big). \] \end{enumerate} \end{theorem} \begin{example} In Example \ref{example_2-times_convergence_Green} we showed that in the free transformation group $X/\mathbb R$, the sequence $\{x_n\}$ converges $2$-times, but not $3$-times, in $X/\mathbb R$ to $z$. By taking a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ and considering the arguments in Example \ref{example_2-times_convergence_Green}, we can see that $\{x_{n_i}\}$ converges $2$-times, but not $3$-times, in $X/\mathbb R$ to $z$. In Example \ref{example_Green_lower_multiplicity} we noted that $\mathbb R\cdot z$ is closed so in order to be able to apply Theorem \ref{AaH_thm5.3} it remains to show that $\mathbb R\cdot x_n\rightarrow \mathbb R\cdot z$ in $X/\mathbb R$. But $x_n\rightarrow z$ in $X$ and the quotient map $X\rightarrow X/\mathbb R$ is continuous, so $\mathbb R\cdot x_n\rightarrow \mathbb R\cdot z$. We can now apply Theorem \ref{AaH_thm5.3} to see that $\mathrm{M}_\mathrm{U}(\mathrm{Ind}\,\epsilon_z,\{\mathrm{Ind}\,\epsilon_{x_n}\})=2$. \end{example} While Green's transformation group from Example \ref{example_2-times_convergence_Green} `folds' vertical lines with an arc of a helix, Rieffel in \cite[Example~1.18]{Rieffel2004} modified this to allow any fixed number of helix-arc folds for each orbit. This can be used to construct transformation groups with an irreducible representation that has any desired relative upper and lower multiplicity numbers. 
In particular, Archbold and an Huef in \cite[Example~6.4]{Archbold-anHuef2006} use this to describe a transformation group and an irreducible representation with relative upper multiplicity 3 and lower multiplicity 2. Rather than repeating this example, after generalising Theorems \ref{AaH_thm1.1} and \ref{AaH_thm5.3} to principal groupoid results, in Example \ref{ML2_MU3_example} we use the path groupoid of a directed graph to describe an irreducible representation with relative upper multiplicity 3 and lower multiplicity 2. \section{Lower multiplicity and \texorpdfstring{$k$}{\it k}-times convergence I}\label{sec_lower_multiplicity_1} The key goal of this chapter is to describe the relationship between multiplicities of induced representations and strength of convergence in the orbit space. We start this section by recalling the definition of $k$-times convergence in a groupoid from \cite{Clark-anHuef2008}. We then show that if a sequence converges $k$-times in the orbit space of a principal groupoid, then the lower multiplicity of the associated sequence of representations is at least $k$; the converse will be shown in Section \ref{sec_lower_multiplicity_2}. Recall that a sequence $\{\gamma_n\}\subset G$ tends to infinity if it admits no convergent subsequence. \begin{definition}\label{def_k-times_convergence} Suppose $G$ is a topological groupoid and let $k\in\mathbb P$. A sequence $\{x_n\}$ in $G^{(0)}$ is {\em $k$-times convergent}\index{k-times convergence@$k$-times convergence!in the orbit space of a groupoid} in $G^{(0)}/G$ to $z\in G^{(0)}$ if there exist $k$ sequences $\{\gamma_n^{(1)}\},\{\gamma_n^{(2)}\},\ldots,\{\gamma_n^{(k)}\}\subset G$ such that \begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}\renewcommand{\labelenumi}{(\theenumi)} \item\label{def_k-times_convergence_1} $s(\gamma_n^{(i)})=x_n$ for all $n$ and $1\leqslant i\leqslant k$; \item\label{def_k-times_convergence_2} $r(\gamma_n^{(i)})\rightarrow z$ as $n\rightarrow\infty$ for $1\leqslant i\leqslant k$; and \item\label{def_k-times_convergence_3} if $1\leqslant i<j\leqslant k$ then $\gamma_n^{(j)}(\gamma_n^{(i)})^{-1}\rightarrow\infty$ as $n\rightarrow\infty$. \end{enumerate} \end{definition} For each $x\in G^{(0)}$, let $\mathrm{L}^x$ be the induced representation $\mathrm{Ind}\, \delta_x$ as in Section \ref{section_representations_and_groupoid_algebras}. The proof of the following proposition is based on \cite[Theorem~2.3]{Archbold-Deicke2005} and a part of \cite[Theorem~1.1]{Archbold-anHuef2006}. \begin{prop}\label{AaH_thm_1.1_1_implies_2} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $z\in G^{(0)}$ and suppose that $\{x_n\}$ is a sequence in $G^{(0)}$ that converges $k$-times to $z$ in $G^{(0)}/G$. Then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\geqslant k$. \begin{proof} We will use a contradiction argument. Suppose that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})=r<k$. Fix a real-valued $g\in C_c(G)$ so that $\|g\|_z>0$. Define $\eta\in L^2(G,\lambda_z)$ by $\eta(\alpha)=\|g\|_z^{-1}g(\alpha)$ for all $\alpha\in G$. Then \[ \|\eta\|_z^2=\|g\|_z^{-2}\int g(\alpha)^2\, d\lambda_z(\alpha)=\|g\|_z^{-2}\|g\|_z^2=1, \] so $\eta$ is a unit vector in $L^2(G,\lambda_z)$ and the GNS construction of $\phi:=(\mathrm{L}^z(\cdot)\eta\,|\,\eta)$ is unitarily equivalent to $\mathrm{L}^z$. 
By the definition of lower multiplicity we now have \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})=\inf_{N\in\mathcal N} \mathrm{M}_\mathrm{L}(\phi,N,\{\mathrm{L}^{x_n}\})=r, \] so there exists $N\in\mathcal N$ such that \[ \mathrm{M}_\mathrm{L}(\phi,N,\{\mathrm{L}^{x_n}\})=\underset{\scriptstyle n}{\lim\,\inf}\, d(\mathrm{L}^{x_n},\phi,N)=r, \] and consequently there exists a subsequence $\{y_m\}$ of $\{x_n\}$ such that \begin{equation}\label{observation_to_contradict} d(\mathrm{L}^{y_m},\phi,N)=r\quad\text{for all }m. \end{equation} Since any subsequence of a sequence that is $k$-times convergent is also $k$-times convergent, we know that $\{y_m\}$ converges $k$-times to $z$ in $G^{(0)}/G$. We will now use the $k$-times convergence of $\{y_m\}$ to construct $k$ sequences of unit vectors with sufficient properties to establish our contradiction. By the $k$-times convergence of $\{y_m\}$ there exist $k$ sequences \[ \{\gamma_m^{(1)}\},\{\gamma_m^{(2)}\},\ldots,\{\gamma_m^{(k)}\}\subset G \] satisfying {\allowdisplaybreaks\begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item $s(\gamma_m^{(i)})=y_m$ for all $m$ and $1\leqslant i\leqslant k$; \item $r(\gamma_m^{(i)})\rightarrow z$ as $m\rightarrow\infty$ for $1\leqslant i\leqslant k$; and \item if $1\leqslant i<j\leqslant k$ then $\gamma_m^{(j)}(\gamma_m^{(i)})^{-1}\rightarrow\infty$ as $m\rightarrow\infty$. \end{enumerate}} \noindent For each $1\leqslant i\leqslant k$ and $m\geqslant 1$, define $\eta_m^{(i)}$ by \[ \eta_m^{(i)}(\alpha)=\|g\|_{r(\gamma_m^{(i)})}^{-1}g\big(\alpha(\gamma_m^{(i)})^{-1}\big)\quad\text{for all }\alpha\in G. \] It follows from Haar-system invariance that \begin{align*} \|\eta_m^{(i)}\|_{y_m}^2&=\|g\|_{r(\gamma_m^{(i)})}^{-2}\int g\big(\alpha(\gamma_m^{(i)})^{-1}\big)^2\,d\lambda_{y_m}(\alpha)\\ &=\|g\|_{r(\gamma_m^{(i)})}^{-2}\int g(\alpha)^2\,d\lambda_{r(\gamma_m^{(i)})}(\alpha)\\ &=\|g\|_{r(\gamma_m^{(i)})}^{-2}\|g\|_{r(\gamma_m^{(i)})}^2=1, \end{align*} so the $\eta_m^{(i)}$ are unit vectors in $L^2(G,\lambda_{y_m})$. Now suppose that $1\leqslant i<j\leqslant k$. Then \begin{equation}\label{working_the_etas} (\eta_m^{(i)}\,|\,\eta_m^{(j)})_{y_m}= \|g\|_{r(\gamma_m^{(i)})}^{-1}\|g\|_{r(\gamma_m^{(j)})}^{-1}\int g\big(\alpha(\gamma_m^{(i)})^{-1}\big)g\big(\alpha(\gamma_m^{(j)})^{-1}\big)\, d\lambda_{y_m}(\alpha). \end{equation} Since $\gamma_m^{(j)}(\gamma_m^{(i)})^{-1}\rightarrow\infty$ and inversion is a homeomorphism of $G$, the sequence $\{\gamma_m^{(i)}(\gamma_m^{(j)})^{-1}\}=\big\{\big(\gamma_m^{(j)}(\gamma_m^{(i)})^{-1}\big)^{-1}\big\}$ also tends to infinity, so it is eventually not in the compact set $(\mathrm{supp}\, g)^{-1}(\mathrm{supp}\, g)$, and so there exists $m_0$ such that if $m\geqslant m_0$, then \[ (\mathrm{supp}\, g)\gamma_m^{(i)}\cap(\mathrm{supp}\, g)\gamma_m^{(j)}=\emptyset. \] (To see this claim, note that if $(\mathrm{supp}\, g)\gamma_m^{(i)}\cap (\mathrm{supp}\, g)\gamma_m^{(j)}\ne\emptyset$ then there exist $\alpha,\beta\in \mathrm{supp}\, g$ such that $\alpha\gamma_m^{(i)}=\beta\gamma_m^{(j)}$, and so $\gamma_m^{(i)}(\gamma_m^{(j)})^{-1}=\alpha^{-1}\beta\in (\mathrm{supp}\, g)^{-1}(\mathrm{supp}\, g)$.) For the integrand of \eqref{working_the_etas} to be non-zero, both $\alpha(\gamma_m^{(i)})^{-1}$ and $\alpha(\gamma_m^{(j)})^{-1}$ must be in $\mathrm{supp}\, g$, so $\alpha$ must be in $(\mathrm{supp}\, g)\gamma_m^{(i)}\cap(\mathrm{supp}\, g)\gamma_m^{(j)}$. But this is not possible if $m\geqslant m_0$. Since there are only finitely many pairs $(i,j)$, we may enlarge $m_0$ so that $\eta_m^{(i)}\perp\eta_m^{(j)}$ for all distinct $i,j$ whenever $m\geqslant m_0$. 
For the last main component of this proof we will establish that \[ \big(\mathrm{L}^{y_m}(\cdot)\eta_m^{(i)}\,\big|\,\eta_m^{(i)}\big)\rightarrow \big(\mathrm{L}^z(\cdot)\eta\,\big|\,\eta\big)=\phi\quad\text{as }m\rightarrow\infty \] in the dual of $C^*(G)$ with the weak$^*$ topology for each $i$. Fix $f\in C_c(G)$. We have {\allowdisplaybreaks\begin{align} \big(\mathrm{L}^{z}(f)\eta\,\big|\,\eta\big)&= \int_G\big(\mathrm{L}^z(f)\eta\big)(\alpha)\eta(\alpha)\, d\lambda_z(\alpha)\notag\\ &=\int_G\int_G f(\alpha\beta^{-1})\eta(\beta)\eta(\alpha)\,d\lambda_z(\beta)\,d\lambda_z(\alpha)\notag\\ \label{dealing_with_etas} &=\|g\|_z^{-2}\int_G\int_G f(\alpha\beta^{-1})g(\beta)g(\alpha)\,d\lambda_z(\beta)\,d\lambda_z(\alpha) \end{align} }Now fix $1\leqslant i\leqslant k$. By the invariance of the Haar system we have {\allowdisplaybreaks\begin{align} \big(\mathrm{L}^{y_m}&(f)\eta_m^{(i)}\,\big|\,\eta_m^{(i)}\big)=\int_G\int_G f(\alpha\beta^{-1})\eta_m^{(i)}(\beta)\eta_m^{(i)}(\alpha)\,d\lambda_{y_m}(\beta)\,d\lambda_{y_m}(\alpha)\notag\\ &=\|g\|_{r(\gamma_m^{(i)})}^{-2}\int_G\int_G f(\alpha\beta^{-1})g\big(\alpha(\gamma_m^{(i)})^{-1}\big)g\big(\beta(\gamma_m^{(i)})^{-1}\big)\, d\lambda_{y_m}(\beta)\,d\lambda_{y_m}(\alpha)\notag\\ &=\|g\|_{r(\gamma_m^{(i)})}^{-2}\int_G\int_G f(\alpha\beta^{-1})g(\alpha)g(\beta)\, d\lambda_{r(\gamma_m^{(i)})}(\beta)\, d\lambda_{r(\gamma_m^{(i)})}(\alpha)\notag\\ &=\|g\|_{r(\gamma_m^{(i)})}^{-2}\int_G f\ast g(\alpha) g(\alpha)\, d\lambda_{r(\gamma_m^{(i)})}(\alpha).\label{still_working_etas} \end{align} }We know that $r(\gamma_m^{(i)})\rightarrow z$ as $m\rightarrow\infty$ so, by the continuity of the Haar system, $\|g\|_{r(\gamma_m^{(i)})}\rightarrow \|g\|_z$ as $m\rightarrow\infty$. Since $f\ast g\in C_c(G)$ we can apply the continuity of the Haar system with \eqref{dealing_with_etas} and \eqref{still_working_etas} to see that \begin{align*} \big(\mathrm{L}^{y_m}(f)\eta_m^{(i)}\,\big|\,\eta_m^{(i)}\big)&=\|g\|_{r(\gamma_m^{(i)})}^{-2}\int_G f\ast g(\alpha) g(\alpha)\, d\lambda_{r(\gamma_m^{(i)})}(\alpha)\\ &\rightarrow \|g\|_{z}^{-2}\int_G f\ast g(\alpha) g(\alpha)\, d\lambda_{z}(\alpha)=\big(\mathrm{L}^{z}(f)\eta\,\big|\,\eta\big) \end{align*} as $m\rightarrow\infty$. We have thus shown that, for each $i$, \[ \big(\mathrm{L}^{y_m}(\cdot)\eta_m^{(i)}\,\big|\,\eta_m^{(i)}\big)\rightarrow \big(\mathrm{L}^z(\cdot)\eta\,\big|\,\eta\big)=\phi \] in the dual of $C^*(G)$ equipped with the weak$^*$ topology. Thus there exists $m_1$ such that for any $m\geqslant m_1$ and any $1\leqslant i\leqslant k$, the pure state $\big(\mathrm{L}^{y_m}(\cdot)\eta_m^{(i)}\,\big|\,\eta_m^{(i)}\big)$ is in $\phi+N$. We have now established that every $\eta_m^{(i)}$ with $m\geqslant\max\{m_0,m_1\}$ is in $\mathrm{Vec}(\mathrm{L}^{y_m},\phi,N)$ with $\eta_m^{(i)}\perp\eta_m^{(j)}$ for $i\ne j$, so $d(\mathrm{L}^{y_m},\phi,N)\geqslant k$ for all $m\geqslant\max\{m_0,m_1\}$, contradicting our choice of $\{y_m\}$ that in \eqref{observation_to_contradict} had $d(\mathrm{L}^{y_m},\phi,N)=r<k$ for all $m$. \end{proof} \end{prop} \section{Measure ratios and \texorpdfstring{$k$}{\it k}-times convergence}\label{sec_measure_ratios_k-times_convergence} In this section we show that lower bounds on measure ratios along orbits give strength of convergence in the orbit space. We begin with a lemma before generalising \cite[Proposition~4.1]{Archbold-anHuef2006}. \begin{lemma}\label{the_unbroken_lemma} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with Haar system $\lambda$. 
Let $W$ be a compact neighbourhood of $z\in G^{(0)}$ and let $K$ be a compact subset of $G$. Let $\{x_n\}$ be a sequence in $G^{(0)}$ such that $[x_n]\rightarrow [z]$ uniquely in $G^{(0)}/G$. Then for every $\delta>0$ there exists $n_0$ such that, for every $n\geqslant n_0$ and every $\gamma\in G_{x_n}^W$, \[ \lambda_{x_n}(K\gamma\cap G^W)<\lambda_z(G^W)+\delta. \] \begin{proof} Suppose not. Then, by passing to a subsequence if necessary, there exists $\delta>0$ such that for each $n$ there exists $\gamma_n\in G_{x_n}^W$ with \begin{equation}\label{assumption_in_unbroken_lemma} \lambda_{x_n}(K\gamma_n\cap G^W)\geqslant \lambda_z(G^W)+\delta. \end{equation} Since each $r(\gamma_n)$ is in the compact set $W$, we can pass to a subsequence so that $r(\gamma_n)\rightarrow y$ for some $y\in G^{(0)}$. This implies $[r(\gamma_n)]\rightarrow [y]$, but $[r(\gamma_n)]=[s(\gamma_n)]=[x_n]$ and $[x_n]\rightarrow [z]$ uniquely, so $[y]=[z]$. Choose $\psi\in G$ with $s(\psi)=z$ and $r(\psi)=y$. By Haar-system invariance \[ \lambda_{x_n}(K\gamma_n\cap G^W)=\lambda_{r(\gamma_n)}(K\cap G^W), \] so by applying Lemma \ref{astrids_lim_sup} with the compact space $K\cap G^W$ and $\{r(\gamma_n)\}$ converging to $y$, \begin{align*} \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(K\gamma_n\cap G^W) &=\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{r(\gamma_n)}(K\cap G^W)\\ &\leqslant\lambda_y(K\cap G^W)\quad\text{(by Lemma \ref{astrids_lim_sup})}\\ &=\lambda_z(K\psi\cap G^W)\quad\text{(Haar-system invariance)}\\ &\leqslant\lambda_z(G^W). \end{align*} This contradicts our assertion \eqref{assumption_in_unbroken_lemma}. \end{proof} \end{lemma} \begin{prop}\label{AaH_prop4_1_1} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with Haar system $\lambda$. Let $k\in\mathbb P$ and $z\in G^{(0)}$ with $[z]$ locally closed in $G^{(0)}$. Assume that $\{x_n\}$ is a sequence in $G^{(0)}$ such that $[x_n]\rightarrow [z]$ uniquely in $G^{(0)}/G$. Suppose $\{W_m\}$ is a basic decreasing sequence of compact neighbourhoods of $z$ such that each $m$ satisfies \[ \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(G^{W_m}). \] Then $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$. \begin{proof} Let $\{K_m\}$ be an increasing sequence of compact subsets of $G$ such that $G=\bigcup_{m\geqslant 1}\mathrm{Int}\,K_m$. By the regularity of $\lambda_z$, for each $m\geqslant 1$ there exist $\delta_m>0$ and an open neighbourhood $U_m$ of $G_z^{W_m}$ such that \begin{equation}\label{AaH4.1} \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(U_m)+\delta_m. \end{equation} We will construct, by induction, a strictly increasing sequence of positive integers $\{n_m\}$ such that, for all $n\geqslant n_m$, \begin{align} &\lambda_{x_n}(K_m\alpha\cap G^{W_m})<\lambda_z(U_m)+\delta_m/k\quad\text{for all }\alpha\in G_{x_n}^{W_m},\quad\text{and}\label{AaH4.2}\\ &\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(U_m)+\delta_m.\label{AaH4.3} \end{align} By applying Lemma \ref{the_unbroken_lemma} with $\delta=\lambda_z(U_1)-\lambda_z(G^{W_1})+\delta_1/k$ there exists $n_1$ such that $n\geqslant n_1$ implies \[\lambda_{x_n}(K_1\alpha\cap G^{W_1})<\lambda_z(U_1)+\delta_1/k \quad\text{for all }\alpha\in G_{x_n}^{W_1},\] establishing \eqref{AaH4.2} for $m=1$. If necessary we can increase $n_1$ to ensure \eqref{AaH4.3} holds for $m=1$ by considering \eqref{AaH4.1}. 
Assuming that we have constructed $n_1<n_2<\cdots<n_{m-1}$, we apply Lemma \ref{the_unbroken_lemma} with $\delta=\lambda_z(U_m)-\lambda_z(G^{W_m})+\delta_m/k$ to obtain $n_m>n_{m-1}$ such that \eqref{AaH4.2} holds, and again, if necessary, increase $n_m$ to obtain \eqref{AaH4.3}. If $n_1>1$ then, for each $1\leqslant n<n_1$ and $1\leqslant i\leqslant k$, let $\gamma_n^{(i)}=x_n$. For each $n\geqslant n_1$ there is a unique $m$ such that $n_m\leqslant n<n_{m+1}$. For every such $n$ and $m$ choose $\gamma_n^{(1)}\in G_{x_n}^{W_m}$ (which is always non-empty by \eqref{AaH4.3}). Using \eqref{AaH4.2} and \eqref{AaH4.3} we have \begin{align*} \lambda_{x_n}(G^{W_m}\backslash K_m\gamma_n^{(1)})&=\lambda_{x_n}(G^{W_m})-\lambda_{x_n}(G^{W_m}\cap K_m\gamma_n^{(1)})\\ &>\big((k-1)\lambda_z(U_m)+\delta_m\big)-\big(\lambda_z(U_m)+\delta_m/k\big)\\ &=(k-2)\lambda_z(U_m)+\frac{(k-1)}k\delta_m. \end{align*} So for each $n\geqslant n_1$ and its associated $m$ we can choose $\gamma_n^{(2)}\in G_{x_n}^{W_m}\backslash K_m\gamma_n^{(1)}$. We now have {\allowdisplaybreaks\begin{align*} \lambda_{x_n}&\big(G^{W_m}\backslash (K_m\gamma_n^{(1)}\cup K_m\gamma_n^{(2)})\big)\\ &=\lambda_{x_n}(G^{W_m}\backslash K_m\gamma_n^{(1)})-\lambda_{x_n}\big((G^{W_m}\backslash K_m\gamma_n^{(1)})\cap K_m\gamma_n^{(2)}\big)\\ &\geqslant\lambda_{x_n}(G^{W_m}\backslash K_m\gamma_n^{(1)})-\lambda_{x_n}(G^{W_m}\cap K_m\gamma_n^{(2)})\\ &>\Big((k-2)\lambda_z(U_m)+\frac{(k-1)}k\delta_m\Big)-\Big(\lambda_z(U_m)+\delta_m/k\Big)\\ &=(k-3)\lambda_z(U_m)+\frac{(k-2)}k\delta_m, \end{align*}}enabling us to choose $\gamma_n^{(3)}\in G_{x_n}^{W_m}\backslash (K_m\gamma_n^{(1)}\cup K_m\gamma_n^{(2)})$. By continuing this process, for each $j=3,\ldots,k$ and each $n\geqslant n_1$ we have \[ \lambda_{x_n}\Bigg(G^{W_m}\backslash \bigg(\bigcup_{i=1}^{j-1}K_m\gamma_n^{(i)}\bigg)\Bigg)>(k-j)\lambda_z(U_m)+\frac{(k-j+1)\delta_m}k, \] enabling us to choose \begin{equation}\label{eqn_choosing_gammas} \gamma_n^{(j)}\in G^{W_m}_{x_n}\backslash \bigg(\bigcup_{i=1}^{j-1}K_m\gamma_n^{(i)}\bigg). \end{equation} Note that for $n_m\leqslant n<n_{m+1}$ we have $\gamma_n^{(j)}\notin K_m\gamma_n^{(i)}$ for $1\leqslant i<j\leqslant k$. We will now establish that $\{x_n\}$ converges $k$-times to $z$ in $G^{(0)}/G$ by considering the $\gamma_n^{(i)}$. Note that $s(\gamma_n^{(i)})=x_n$ for all $n$ and $i$ by our choice of the $\gamma_n^{(i)}$. To see that $r(\gamma_n^{(i)})\rightarrow z$ as $n\rightarrow\infty$ for $1\leqslant i\leqslant k$, first fix $i$ and let $V$ be an open neighbourhood of $z$. Since $\{W_m\}$ is a decreasing neighbourhood basis for $z$ there exists $m_0$ such that $m\geqslant m_0$ implies $W_m\subset V$. For each $n\geqslant n_{m_0}$ there exists an $m\geqslant m_0$ such that $n_m\leqslant n <n_{m+1}$, and so $r(\gamma_n^{(i)})\in W_m\subset V$. Finally we claim that $\gamma_n^{(j)}(\gamma_n^{(i)})^{-1}\rightarrow\infty$ as $n\rightarrow\infty$ for $1\leqslant i<j\leqslant k$. Fix $i<j$ and let $K$ be a compact subset of $G$. There exists $m_0$ such that $K\subset K_m$ for all $m\geqslant m_0$. Then for $n\geqslant n_{m_0}$ there exists $m\geqslant m_0$ such that $n_m\leqslant n<n_{m+1}$. 
By \eqref{eqn_choosing_gammas} we know \begin{align*} \gamma_n^{(j)}&\in G_{x_n}^{W_m}\backslash (K_m\gamma_n^{(i)})\\ &=\big(G_{x_n}^{W_m}(\gamma_n^{(i)})^{-1}\gamma_n^{(i)}\big)\backslash (K_m\gamma_n^{(i)})\\ &=\big((G_{x_n}^{W_m}(\gamma_n^{(i)})^{-1})\backslash K_m\big)\gamma_n^{(i)}, \end{align*} and so $\gamma_n^{(j)}(\gamma_n^{(i)})^{-1}\in \big(G_{x_n}^{W_m}(\gamma_n^{(i)})^{-1}\big)\backslash K_m\subset G\backslash K_m\subset G\backslash K$, enabling us to conclude that $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$. \end{proof} \end{prop} In Proposition~\ref{part_of_AaH4.2_generalisation} we prove a generalisation of a part of \cite[Proposition~4.2]{Archbold-anHuef2006}. \begin{prop}\label{part_of_AaH4.2_generalisation} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Suppose that $z\in G^{(0)}$ with $[z]$ locally closed in $G^{(0)}$ and suppose $\{x_n\}$ is a sequence in $G^{(0)}$. Assume that for every open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact, $\lambda_{x_n}(G^V)\rightarrow \infty$ as $n\rightarrow\infty$. Then, for every $k\geqslant 1$, the sequence $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$. \begin{proof} Let $\{ K_m\}$ be an increasing sequence of compact subsets of $G$ such that $G=\cup_{m\geqslant 1}\,\mathrm{Int}\, K_m$. By Lemma \ref{corollary_astrid_lemma}, for each $K_m$ there exists an open neighbourhood $V_m$ of $z$ such that $x\in V_m$ implies $\lambda_x(K_m)<\lambda_z(K_m)+1$. Since $[z]$ is locally closed, by Lemma~4.1(1) in \cite{Clark-anHuef2010-preprint} we can crop $V_1$ if necessary to ensure that $G_z^{V_1}$ is relatively compact. By further cropping each $V_m$ we may assume that $\{V_m\}$ is a decreasing neighbourhood basis of $z$. By our hypothesis, for each $m$ there exists $n_m$ such that \begin{equation}\label{lower_bound} n\geqslant n_m\quad\text{implies}\quad\lambda_{x_n}(G^{V_m})>k\big(\lambda_z(K_m)+1\big). \end{equation} Note that for any $\gamma\in G_{x_n}^{V_m}$ with $n\geqslant n_m$, we have $r(\gamma)\in V_m$, and so $\lambda_{r(\gamma)}(K_m)<\lambda_z(K_m)+1$. By Haar-system invariance we know that $\lambda_{r(\gamma)}(K_m)=\lambda_{x_n}(K_m\gamma)$, which shows us that \begin{equation}\label{upper_bound} \lambda_{x_n}(K_m\gamma)<\lambda_z(K_m)+1. \end{equation} If necessary we can increase the elements of $\{n_m\}$ so that it is a strictly increasing sequence. We now proceed as in the proof of Proposition \ref{AaH_prop4_1_1}. For all $n<n_1$ and $1\leqslant i\leqslant k$ let $\gamma_n^{(i)}=x_n$. For each $n\geqslant n_1$ there exists a unique number $m(n)$ such that $n_{m(n)}\leqslant n<n_{m(n)+1}$. For the remainder of this proof we will write $m$ instead of $m(n)$ because the specific $n$ will be clear from the context. For each $n\geqslant n_1$ choose $\gamma_n^{(1)}\in G_{x_n}^{V_{m}}$. Then by \eqref{lower_bound} and \eqref{upper_bound} we have \begin{align*} \lambda_{x_n}(G^{V_m}\backslash K_m\gamma_n^{(1)})&=\lambda_{x_n}(G^{V_m})-\lambda_{x_n}(G^{V_m}\cap K_m\gamma_n^{(1)})\\ &\geqslant \lambda_{x_n}(G^{V_m})-\lambda_{x_n}(K_m\gamma_n^{(1)})\\ &>k\big(\lambda_z(K_m)+1\big) - \big(\lambda_z(K_m)+1\big)\\ &=(k-1)\big(\lambda_z(K_m)+1\big). \end{align*} We can thus choose $\gamma_n^{(2)}\in G_{x_n}^{V_m}\backslash K_m\gamma_n^{(1)}$ for each $n\geqslant n_1$. 
This now gives us \begin{align*} \lambda_{x_n}&(G^{V_m}\backslash (K_m\gamma_n^{(1)}\cup K_m\gamma_n^{(2)}))\\ &=\lambda_{x_n}(G^{V_m}\backslash K_m\gamma_n^{(1)})-\lambda_{x_n}\big( (G^{V_m}\backslash K_m\gamma_n^{(1)})\cap K_m\gamma_n^{(2)}\big)\\ &\geqslant \lambda_{x_n}(G^{V_m}\backslash K_m\gamma_n^{(1)})-\lambda_{x_n}(K_m\gamma_n^{(2)})\\ &>(k-1)\big(\lambda_z(K_m)+1\big) - \big(\lambda_z(K_m)+1\big)\\ &=(k-2)\big(\lambda_z(K_m)+1\big). \end{align*} Continuing in this manner we can choose \[ \gamma_n^{(j)}\in G^{V_m}_{x_n}\backslash \bigg(\bigcup_{i=1}^{j-1}K_m\gamma_n^{(i)}\bigg) \] for every $n\geqslant n_1$ and $j=3,\ldots,k$. The tail of the proof of Proposition \ref{AaH_prop4_1_1} establishes our desired result. \end{proof} \end{prop} \section{Measure ratios and bounds on lower multiplicity}\label{sec_measure_ratios_bounds_lower_multiplicity} In this section we show that upper bounds on measure ratios\index{measure ratio} along orbits give upper bounds on multiplicities. \begin{lemma}\label{based_on_Ramsay} Let $G$ be a second countable, locally compact, Hausdorff groupoid. Suppose $z\in G^{(0)}$ and $[z]$ is locally closed. Then the restriction of $r$ to $G_z/(G|_{\{z\}})$ is a homeomorphism onto $[z]$. If in addition $G$ is principal, then the restriction of $r$ to $G_z$ is a homeomorphism onto $[z]$. \begin{proof} We consider the transitive groupoid $G|_{[z]}$. Since $[z]$ is locally closed, $G|_{[z]}$ is a second countable, locally compact, Hausdorff groupoid. Thus $G|_{[z]}$ is Polish by, for example, \cite[Lemma~6.5]{Williams2007}. Now \cite[Theorem~2.1]{Ramsay1990} applies to give the result. \end{proof} \end{lemma} Theorem~\ref{M2_thm} is based on \cite[Theorem~3.1]{Archbold-anHuef2006}; it is only an intermediary result which will be used to prove a sharper bound in Theorem \ref{M_thm}. \needspace{6\baselineskip} \begin{theorem}\label{M2_thm} Suppose $G$ is a second countable, locally compact, Hausdorff principal groupoid with Haar system $\lambda$. Let $M\in\mathbb R$ with $M\geqslant 1$, suppose $z\in G^{(0)}$ such that $[z]$ is locally closed and let $\{x_n\}$ be a sequence in $G^{(0)}$. Suppose there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \[ \lambda_{x_n}(G^V)\leqslant M\lambda_z(G^V) \] frequently (in the sense that there is a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ with $\lambda_{x_{n_i}}(G^V)\leq M\lambda_z(G^V)$ for all $i$). Then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\leqslant\lfloor M^2\rfloor$. \begin{proof} Fix $\epsilon>0$ such that $M^2(1+\epsilon)^2<\lfloor M^2\rfloor +1$. We will build a function $D\in C_c(G)$ such that $\mathrm{L}^z(D^\ast\ast D)$ is a rank-one projection and \[ \mathrm{Tr}\big(\mathrm{L}^{x_n}(D^\ast\ast D)\big)<M^2(1+\epsilon)^2<\lfloor M^2\rfloor +1 \] frequently. By the generalised lower semi-continuity result of \cite[Theorem~4.3]{Archbold-Spielberg1996} we will have \begin{align*} \liminf\,\mathrm{Tr}\big(\mathrm{L}^{x_n}(D^\ast\ast D)\big)&\geqslant \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\, \mathrm{Tr} \big(\mathrm{L}^z(D^*\ast D)\big)\\ &=\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\}), \end{align*} and the result will follow. For the next few paragraphs we will be working with $G_z$ equipped with the subspace topology. Note that $\lambda_z$ can be thought of as a Radon measure on $G_z$ with $\lambda_z(S\cap G_z)=\lambda_z(S)$ for any $\lambda_z$-measurable subset $S$ of $G$. 
Fix $\delta>0$ such that \[ \delta<\frac{\epsilon\lambda_z(G^V)}{1+\epsilon}<\lambda_z(G^V). \] Since $G_z$ is a second countable, locally compact, Hausdorff space, the Radon measure $\lambda_z$ is regular (see, for example, \cite[Proposition~6.3.6]{Pedersen1989}). It follows that there exists a $G_z$-compact subset $W$ of $G_z^V$ such that \[ 0<\lambda_z(G_z^V)-\delta<\lambda_z(W). \] Since $W$ is $G_z$-compact there exists a $G_z$-compact neighbourhood $W_1$ of $W$ that is contained in $G_z^V$ and there exists a continuous function $g:G_z\rightarrow [0,1]$ that is identically one on $W$ and zero off the interior of $W_1$. We have \[ \lambda_z(G^V)-\delta=\lambda_z(G_z^V)-\delta<\lambda_z(W)\leqslant\int_{G_z} g(t)^2\, d\lambda_z(t)=\|g\|_z^2, \] and hence \begin{equation}\label{AaH3.1} \frac{\lambda_z(G^V)}{\|g\|_z^2}<1+\frac{\delta}{\|g\|_z^2}<1+\frac{\delta}{\lambda_z(G^V)-\delta}<1+\epsilon. \end{equation} By Lemma \ref{based_on_Ramsay} the restriction $\tilde{r}$ of $r$ to $G_z$ is a homeomorphism onto $[z]$. So there exists a continuous function $g_1:\tilde{r}(W_1)\rightarrow [0,1]$ such that $g_1\big(\tilde{r}(\gamma)\big)=g(\gamma)$ for all $\gamma\in W_1$. Thus $\tilde{r}(W_1)$ is $[z]$-compact, which implies that $\tilde{r}(W_1)$ is $G^{(0)}$-compact. Since we know that $G^{(0)}$ is second countable and Hausdorff, Tietze's Extension Theorem can be applied to extend $g_1$ to a continuous map $g_2:G^{(0)}\rightarrow [0,1]$. Because $\tilde{r}(W_1)$ is a compact subset of the open set $V$, there exist a compact neighbourhood $P$ of $\tilde{r}(W_1)$ contained in $V$ and a continuous function $h:G^{(0)}\rightarrow [0,1]$ that is identically one on $\tilde{r}(W_1)$ and zero off the interior of $P$. Note that $h$ has compact support that is contained in $P$. We set $f(x)=h(x)g_2(x)$. Then $f\in C_c(G^{(0)})$ with $0\leqslant f\leqslant 1$ and \begin{equation}\label{supp_f_contained_in_V} \mathrm{supp}\,f\subset \mathrm{supp}\,h\subset P\subset V. \end{equation} Note that {\allowdisplaybreaks \begin{align} \|f\circ r\|_z^2&=\int_{G_z} f\big(\tilde{r}(\gamma)\big)^2\,d\lambda_z(\gamma)\notag\\ &=\int_{G_z} h\big(\tilde{r}(\gamma)\big)^2g_2\big(\tilde{r}(\gamma)\big)^2\, d\lambda_z(\gamma)\notag\\ &\geqslant\int_{W_1} h\big(\tilde{r}(\gamma)\big)^2 g(\gamma)^2\, d\lambda_z(\gamma)\notag\\ &=\int_{W_1}g(\gamma)^2\,d\lambda_z(\gamma)\notag\\ &=\|g\|_z^2\label{AaH3.2} \end{align}}since $\mathrm{supp}\, g\subset W_1$ and $h$ is identically one on $\tilde{r}(W_1)$. We now define $F\in C_c(G^{(0)})$ by \begin{equation}\label{definition_F} F(x)=\frac{f(x)}{\|f\circ r\|_z}. \end{equation} Then $\|F\circ r\|_z=1$ and \begin{equation}\label{unmotivated_ref} F\circ r(\gamma)\ne 0\implies h\big(r(\gamma)\big)\ne 0 \implies r(\gamma)\in V \implies \gamma\in G^V. \end{equation} Let $N=\mathrm{supp}\, F$ so that $N=\mathrm{supp}\, f\subset V$ by \eqref{supp_f_contained_in_V} and \eqref{definition_F}. Since $G_z^V$ is relatively compact by our hypothesis, the set $\overline{G_z^N}$ is compact. Let $b\in C_c(G)$ be a function that is identically one on $(\overline{G_z^N})(\overline{G_z^N})^{-1}$ and has range contained in $[0,1]$. We can assume that $b$ is self-adjoint by considering $\frac12(b+b^*)$ if necessary. Define $D\in C_c(G)$ by \[ D(\gamma):=F\big(r(\gamma)\big)F\big(s(\gamma)\big)b(\gamma). 
\] For $\xi\in L^2(G,\lambda_u)$ and $\gamma\in G$ we have \begin{align*} \big(\mathrm{L}^{u}(D)\xi\big)(\gamma)&=\int_GD(\gamma\alpha^{-1})\xi(\alpha)\,d\lambda_u(\alpha)\\ &=\int_G F\big(r(\gamma)\big)F\big(s(\alpha^{-1})\big) b(\gamma\alpha^{-1})\xi(\alpha)\, d\lambda_u(\alpha)\\ &=F\big(r(\gamma)\big) \int_G F\big(r(\alpha)\big) b(\gamma\alpha^{-1})\xi(\alpha)\, d\lambda_u(\alpha). \end{align*} In the case where $u=z$, if $\alpha,\gamma\in \mathrm{supp}\, F\circ r\cap s^{-1}(z)$, then $r(\alpha),r(\gamma)\in \mathrm{supp}\, F=N$ and $\gamma,\alpha\in G_z^N$. This implies $b(\gamma\alpha^{-1})=1$, so \begin{align*} \big(\mathrm{L}^z(D)\xi\big)(\gamma)&=F\big(r(\gamma)\big)\int_G F\big(r(\alpha)\big) \xi(\alpha)\, d\lambda_z(\alpha)\\ &=(\xi\,|\,F\circ r)_z F\circ r(\gamma), \end{align*} and $L^z(D)$ is a rank-one projection. By the hypothesis on $V$ there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that \[ \lambda_{x_{n_i}}(G^V)\leqslant M\lambda_z(G^V) \] for all $i\geqslant 1$. If we define $E:=\{\gamma\in G:F\big(r(\gamma)\big)\ne 0\}$ then $E$ is open with \begin{equation}\label{measure_of_E_finite} \lambda_{x_{n_i}}(E)\leqslant\lambda_{x_{n_i}}(G^V)\leqslant M\lambda_z(G^V) \end{equation} by \eqref{unmotivated_ref} and \begin{equation}\label{AaH3.3} \int_G\big(F\circ r(\gamma)\big)^2\, d\lambda_{x_{n_i}}(\gamma)\leqslant \frac{\lambda_{x_{n_i}}(E)}{\|f\circ r\|_z^2} \leqslant \frac{M\lambda_z(G^V)}{\|g\|_z^2}. \end{equation} by \eqref{AaH3.2}. Consider the continuous function \[ T(\alpha,\beta):=F\big(r(\alpha)\big) F\big(r(\beta)\big) b(\alpha\beta^{-1}). \] Note that \begin{align*} \int_G &T(\alpha,\beta)^2\, d(\lambda_{x_{n_i}}\times\lambda_{x_{n_i}})(\alpha,\beta)\\ &=\int_G F\big(r(\alpha)\big)^2F\big(r(\beta)\big)^2b(\alpha\beta^{-1})^2\, d(\lambda_{x_{n_i}}\times\lambda_{x_{n_i}})(\alpha,\beta)\\ &\leqslant\|F\|_\infty^4\int_G \chi_{E\times E}(\alpha,\beta)\,d(\lambda_{x_{n_i}}\times\lambda_{x_{n_i}})(\alpha,\beta)\\ &=\|F\|_\infty^4\lambda_{x_{n_i}}(E)^2, \end{align*} which is finite by \eqref{measure_of_E_finite}. Thus \[T\in L^2(G\times G,\lambda_{x_{n_i}}\times \lambda_{x_{n_i}}),\] and since $T$ is conjugate symmetric, \cite[Proposition~3.4.16]{Pedersen1989} implies that $\mathrm{L}^{x_{n_i}}(D)$ is the self-adjoint Hilbert-Schmidt operator on $L^2(G,\lambda_{x_{n_i}})$ with kernel $T$. It follows that $\mathrm{L}^{x_{n_i}}(D^*\ast D)$ is a trace-class operator, and since we equip the Hilbert-Schmidt operators with the trace norm, we have \[ \mathrm{Tr}\,\mathrm{L}^{x_{n_i}}(D^*\ast D)=\|T\|_{L^2(\lambda_{x_{n_i}}\times\lambda_{x_{n_i}})}^2. \] Applying Fubini's Theorem to $T$ now gives {\allowdisplaybreaks\begin{align} \mathrm{Tr}\,\mathrm{L}^{x_{n_i}}&(D^*\ast D)\notag\\ &=\int_G\int_G F\big(r(\alpha)\big)^2 F\big(r(\beta)\big)^2b(\alpha\beta^{-1})^2\, d\lambda_{x_{n_i}}(\alpha)\, d\lambda_{x_{n_i}}(\beta)\notag\\ &\leqslant\bigg(\int_G F\big(r(\alpha)\big)^2\, d\lambda_{x_{n_i}}(\alpha)\bigg)^2\notag\\ &\leqslant\frac{M^2\lambda_z(G^V)^2}{\|g\|_z^4}\text{\quad (using \eqref{AaH3.3})}\notag\\ &<M^2(1+\epsilon)^2\text{\quad (using \eqref{AaH3.1})}\label{Aah3.4}. \end{align} }Now \begin{align*} \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})&\leqslant \underset{\scriptstyle n}{\lim\,\inf}\, \mathrm{Tr}\big(\mathrm{L}^{x_n}(D^*\ast D)\big)\\ &\leqslant M^2(1+\epsilon)^2\\ &<\lfloor M^2\rfloor+1, \end{align*} and hence $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\leqslant\lfloor M^2\rfloor$, completing the proof. 
\end{proof} \end{theorem} The following proposition is an immediate consequence of Theorem \ref{M2_thm} and Proposition \ref{part_of_AaH4.2_generalisation}. This result will be strengthened later in Corollary \ref{AaH_cor5.6}, where we will show that these three items are in fact equivalent. \begin{prop}\label{AaH_prop_4.2} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $z\in G^{(0)}$ and let $\{x_n\}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed in $G^{(0)}$. Consider the following properties. \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{AaH_prop_4_2_1} $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})=\infty$. \item\label{AaH_prop_4_2_2} For every open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact, $\lambda_{x_n}(G^V)\rightarrow\infty$ as $n\rightarrow\infty$. \item\label{AaH_prop_4_2_3} For each $k\geqslant 1$, the sequence $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$. \end{enumerate} Then {\rm\eqref{AaH_prop_4_2_1}} implies {\rm\eqref{AaH_prop_4_2_2}} and {\rm\eqref{AaH_prop_4_2_2}} implies {\rm\eqref{AaH_prop_4_2_3}}. \end{prop} Our next goal is to sharpen the $\lfloor M^2\rfloor$ bound in Theorem \ref{M2_thm}. This strengthened theorem appears later on as Theorem \ref{M_thm}. We will first establish several results to assist in strengthening this bound. \begin{lemma}\label{lemma_orbits_equal} Suppose $G$ is a second countable groupoid and $x,y\in G^{(0)}$. If $\overline{[x]}=\overline{[y]}$ and $[x]$ is locally closed, then $[x]=[y]$. \begin{proof} We have $x\in \overline{[y]}$, so there exists $\{\gamma_n\}\subset G$ such that $s(\gamma_n)=y$ and $r(\gamma_n)\rightarrow x$. Since $[x]$ is locally closed, there exists an open subset $U$ of $G^{(0)}$ such that $[x]=U\cap\overline{[x]}$. Then $r(\gamma_n)$ is eventually in $U$, so eventually $r(\gamma_n)\in U\cap\overline{[y]}=U\cap\overline{[x]}=[x]$. Thus there exists $\gamma\in G$ with $s(\gamma)=y$ and $r(\gamma)\in [x]$, as required. \end{proof} \end{lemma} The following is a generalisation of \cite[Lemma~3.3]{Archbold-anHuef2006}. \begin{lemma}\label{AaH_Lemma3.3} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with Haar system $\lambda$. Fix $\epsilon>0$, $z\in G^{(0)}$ and let $V$ be an open neighbourhood of $z$ in $G^{(0)}$ such that $\lambda_z(G^V)<\infty$. Then there exists an open relatively-compact neighbourhood $V_1$ of $z$ such that $\overline{V_1}\subset V$ and \[ \lambda_z(G^V)-\epsilon<\lambda_z(G^{V_1})\leqslant\lambda_z(G^{\overline{V_1}})\leqslant\lambda_z(G^V)<\lambda_z(G^{V_1})+\epsilon. \] \begin{proof} We use $G_z$ equipped with the subspace topology to find a compact subset of $G_z^V$ whose $\lambda_z$-measure approximates $\lambda_z(G^V)$. This compact set is then used to obtain the required open set $V_1$. Since $G_z^V$ is $G_z$-open, by the regularity of $\lambda_z$ there exists a compact subset $W$ of $G_z^V$ such that $\lambda_z(W)>\lambda_z(G_z^V)-\epsilon$. Then $r(W)$ is compact and contained in $V$, so there exists an open relatively-compact neighbourhood $V_1$ of $r(W)$ such that $\overline{V_1}\subset V$. Then \begin{align*} \lambda_z(G^V)-\epsilon<\lambda_z(W)&\leqslant \lambda_z(G^{V_1}) \leqslant \lambda_z(G^{\overline{V_1}})\leqslant \lambda_z(G^V)\\&<\lambda_z(W)+\epsilon\leqslant\lambda_z(G^{V_1})+\epsilon, \end{align*} as required. 
\end{proof} \end{lemma} The following lemma is equivalent to the claim in \cite[Proposition~3.6]{Clark2007} that $[x]\mapsto [L^x]$ from $G^{(0)}/G$ to the spectrum of $C^*(G)$ is open. \begin{lemma}\label{lemma_ind_reps_converge_imply_orbits_converge} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. If $\{x_n\}$ is a sequence in $G^{(0)}$ with $\mathrm{L}^{x_n}\rightarrow \mathrm{L}^{z}$, then $[x_n]\rightarrow [z]$. \begin{proof} We prove the contrapositive. Suppose $[x_n]\nrightarrow [z]$. Then there exists an open neighbourhood $U_0$ of $[z]$ in $G^{(0)}/G$ such that $[x_n]$ is frequently not in $U_0$. Let $q:G^{(0)}\rightarrow G^{(0)}/G$ be the quotient map $x\mapsto [x]$. Then $U_1:=q^{-1}(U_0)$ is an open invariant neighbourhood of $z$ and $x_n\notin U_1$ frequently. Note that $C^*(G|_{U_1})$ is isomorphic to a closed two-sided ideal $I$ of $C^*(G)$ (see \cite[Lemma~2.10]{Muhly-Renault-Williams1996}). We now claim that $I\subset\ker\,\mathrm{L}^{x_n}$ whenever $x_n\notin U_1$. Suppose $x_n\notin U_1$ and recall from Remark \ref{measure_induced_epsilon_x} that $\mathrm{L}^{x_n}$ acts on $L^2(G,\lambda_{x_n})$. Fix $f\in C_c(G)$ such that $f(\gamma)=0$ whenever $\gamma\notin G|_{U_1}$ and fix $\xi\in L^2(G,\lambda_{x_n})$. Then by Remark \ref{measure_induced_epsilon_x} we have \[ \|\mathrm{L}^{x_n}(f)\xi\|_{x_n}^2=\int_G\bigg(\int_G f(\gamma\alpha^{-1})\xi(\alpha)\, d\lambda_{x_n}(\alpha)\bigg)^2\, d\lambda_{x_n}(\gamma). \] When evaluating the inner integrand, we have $s(\alpha)=s(\gamma)=x_n$, so $\gamma\alpha^{-1}\in G|_{[x_n]}$. Since $U_1$ is invariant with $x_n\notin U_1$, it follows that $\gamma\alpha^{-1}\notin G|_{U_1}$, and so $f(\gamma\alpha^{-1})=0$. Thus $\|\mathrm{L}^{x_n}(f)\xi\|_{x_n}=0$ and since $\xi$ was fixed arbitrarily, $\mathrm{L}^{x_n}(f)=0$. This implies that $I\subset \ker\,\mathrm{L}^{x_n}$. We now conclude by observing that since $I\subset\ker\,\mathrm{L}^{x_n}$ frequently, $\mathrm{L}^{x_n}\notin \hat{I}$ frequently. But $\hat{I}$ is an open neighbourhood of $\mathrm{L}^z$, so $\mathrm{L}^{x_n}\nrightarrow \mathrm{L}^z$. \end{proof} \end{lemma} We may now proceed to strengthening the $\lfloor M^2\rfloor$ bound in Theorem \ref{M2_thm}. This theorem is a generalisation of \cite[Theorem~3.5]{Archbold-anHuef2006}. \begin{theorem}\label{M_thm} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $M\in\mathbb R$ with $M\geqslant 1$, suppose $z\in G^{(0)}$ such that $[z]$ is locally closed and let $\{x_n\}$ be a sequence in $G^{(0)}$. Suppose there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \[ \lambda_{x_n}(G^V)\leqslant M\lambda_z(G^V) \] frequently. Then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\leqslant\lfloor M\rfloor$. \begin{proof} If $\mathrm{L}^{x_n}$ does not converge to $\mathrm{L}^z$, then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})=0<\lfloor M \rfloor$. So we assume from now on that $\mathrm{L}^{x_n}\rightarrow \mathrm{L}^z$. Lemma \ref{lemma_ind_reps_converge_imply_orbits_converge} now shows that $[x_n]\rightarrow [z]$. Next we claim that we may assume, without loss of generality, that $[z]$ is the unique limit of $\{[x_n]\}$ in $G^{(0)}/G$. To see this, note that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\leqslant\lfloor M^2\rfloor<\infty$ by Theorem \ref{M2_thm}. 
Hence, by \cite[Proposition~3.4]{Archbold-anHuef2006}, $\{\mathrm{L}^z\}$ is open in the set of limits of $\{\mathrm{L}^{x_n}\}$. So there exists an open neighbourhood $U_2$ of $\mathrm{L}^z$ in $C^*(G)^\wedge$ such that $\mathrm{L}^z$ is the unique limit of $\{\mathrm{L}^{x_n}\}$ in $U_2$. By \cite[Proposition~2.5]{Muhly-Williams1990} there is a continuous function $\mathrm{L}:G^{(0)}/G\rightarrow C^*(G)^\wedge$ such that $[x]\mapsto\mathrm{L}^x$ for all $x\in G^{(0)}$. Define $p:G^{(0)}\rightarrow G^{(0)}/G$ by $p(x)=[x]$ for all $x\in G^{(0)}$. Then $p$ is continuous, and $Y:=(\mathrm{L}\circ p)^{-1}(U_2)$ is an open $G$-saturated neighbourhood of $z$ in $G^{(0)}$. Note that $x_n\in Y$ eventually. Now suppose that, for some $y\in Y$, $[x_n]\rightarrow [y]$ in $Y/G$ and hence in $G^{(0)}/G$. Then $\mathrm{L}^{x_n}\rightarrow \mathrm{L}^y$ by \cite[Proposition~2.5]{Muhly-Williams1990}, and $\mathrm{L}^y\in U_2$ since $y\in(\mathrm{L}\circ p)^{-1}(U_2)$. But $\{\mathrm{L}^{x_n}\}$ has the unique limit $\mathrm{L}^z$ in $U_2$, so $\mathrm{L}^z=\mathrm{L}^y$ and hence $\overline{[z]}=\overline{[y]}$. Since $[z]$ is locally closed, Lemma \ref{lemma_orbits_equal} shows that $[z]=[y]$ in $G^{(0)}$ and hence in $Y$. We know $Y$ is an open saturated subset of $G^{(0)}$, so $C^*(G|_Y)$ is isomorphic to a closed two-sided ideal $J$ of $C^*(G)$. We can apply \cite[Proposition~5.3]{Archbold-Somerset-Spielberg1997} with the $C^*$-subalgebra $J$ to see that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})$ is the same whether we compute it in the ideal $J$ or in $C^*(G)$. Since $Y$ is $G$-invariant, $G_z^V=G_z^{V\cap Y}$ and eventually $G_{x_n}^V=G_{x_n}^{V\cap Y}$. We may thus consider $G|_Y$ instead of $G$ and therefore assume that $[z]$ is the unique limit of $[x_n]$ in $G^{(0)}/G$ as claimed. As in \cite{Archbold-anHuef2006}, the idea for the rest of the proof is the same as in Theorem \ref{M2_thm}, although more precise estimates are used. Fix $\epsilon>0$ such that $M(1+\epsilon)^2<\lfloor M\rfloor +1$ and choose $\kappa>0$ such that \begin{equation}\label{choice_of_kappa} \kappa<\frac{\epsilon\lambda_z(G^V)}{1+\epsilon}<\lambda_z(G^V). \end{equation} By Lemma \ref{AaH_Lemma3.3} there exists an open relatively compact neighbourhood $V_1$ of $z$ such that $\overline{V_1}\subset V$ and \[ 0<\lambda_z(G^V)-\kappa<\lambda_z(G^{V_1})\leqslant\lambda_z(G^{\overline{V_1}})\leqslant\lambda_z(G^V)<\lambda_z(G^{V_1})+\kappa. \] Choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that \[ \lambda_{x_{n_i}}(G^V)\leqslant M\lambda_z(G^V) \] for all $i\geqslant 1$. Then \begin{align} \lambda_{x_{n_i}}(G^{V_1})&\leqslant\lambda_{x_{n_i}}(G^V)\notag\\ &\leqslant M\lambda_z(G^V)\notag\\ &<M\big(\lambda_z(G^{V_1})+\kappa\big)\notag\\ &<M\lambda_z(G^{V_1})+M\epsilon\big(\lambda_z(G^V)-\kappa\big)\quad\text{(by \eqref{choice_of_kappa})}\notag\\ &<M\lambda_z(G^{V_1})+M\epsilon\lambda_z(G^{V_1})\notag\\ &=M(1+\epsilon)\lambda_z(G^{V_1})\label{AaH3.8} \end{align} for all $i$. Since \[ \frac{\lambda_z(G^{V_1})\big(\lambda_z(G^{V_1})+\kappa+1/j\big)}{\big(\lambda_z(G^{V_1})-1/j\big)^2} \rightarrow 1+\frac{\kappa}{\lambda_z(G^{V_1})} <1+\epsilon \] as $j\rightarrow\infty$, there exists $\delta>0$ such that $\delta<\lambda_z(G^{V_1})$ and \begin{equation}\label{AaH3.9} \frac{\lambda_z(G^{V_1})\big(\lambda_z(G^{\overline{V_1}})+\delta\big)}{\big(\lambda_z(G^{V_1})-\delta\big)^2} < \frac{\lambda_z(G^{V_1})\big(\lambda_z(G^{V_1})+\kappa+\delta\big)}{\big(\lambda_z(G^{V_1})-\delta\big)^2}<1+\epsilon. 
\end{equation} We will now construct a function $F\in C_c(G^{(0)})$ with support inside $V_1$. Since $\lambda_z$ is inner regular on open sets and $G_z^{V_1}$ is $G_z$-open, there exists a $G_z$-compact subset $W$ of $G_z^{V_1}$ such that \[ 0<\lambda_z(G_z^{V_1})-\delta<\lambda_z(W). \] Since $W$ is $G_z$-compact there exists a $G_z$-compact neighbourhood $W_1$ of $W$ that is contained in $G_z^{V_1}$ and there exists a continuous function $g:G_z\rightarrow [0,1]$ that is identically one on $W$ and zero off the interior of $W_1$. We have \begin{equation}\label{AaH3.10} \lambda_z(G^{V_1})-\delta<\lambda_z(W) \leqslant\int_{G_z} g(t)^2\, d\lambda_z(t) =\|g\|_z^2, \end{equation} By Lemma \ref{based_on_Ramsay} the restriction $\tilde{r}$ of $r$ to $G_z$ is a homeomorphism onto $[z]$. So there exists a continuous function $g_1:\tilde{r}(W_1)\rightarrow [0,1]$ such that $g_1\big(\tilde{r}(\gamma)\big)=g(\gamma)$ for all $\gamma\in W_1$. Thus $\tilde{r}(W_1)$ is $[z]$-compact, which implies that $\tilde{r}(W_1)$ is $G^{(0)}$-compact. Since we know that $G^{(0)}$ is second countable and Hausdorff, Tietze's Extension Theorem can be applied to show that $g_1$ can be extended to a continuous map $g_2:G^{(0)}\rightarrow [0,1]$. Because $\tilde{r}(W_1)$ is a compact subset of the open set $V_1$, there exist a compact neighbourhood $P$ of $\tilde{r}(W_1)$ contained in $V_1$ and a continuous function $h:G^{(0)}\rightarrow [0,1]$ that is identically one on $\tilde{r}(W_1)$ and zero off the interior of $P$. Note that $h$ has compact support that is contained in $P$. We set $f(x)=h(x)g_2(x)$. Then $f\in C_c(G^{(0)})$ with $0\leqslant f\leqslant 1$ and \begin{equation}\label{supp_f_contained_in_V_2} \mathrm{supp} f\subset \mathrm{supp} h\subset P\subset V_1. \end{equation} Note that \begin{align} \|f\circ r\|_z^2&=\int_{G_z} f\big(\tilde{r}(\gamma)\big)^2\,d\lambda_z(\gamma)\notag\\ &=\int_{G_z} h\big(\tilde{r}(\gamma)\big)^2g_2\big(\tilde{r}(\gamma)\big)^2\, d\lambda_z(\gamma)\notag\\ &\geqslant\int_{W_1} h\big(\tilde{r}(\gamma)\big)^2 g(\gamma)^2\, d\lambda_z(\gamma)\notag\\ &=\int_{W_1}g(\gamma)^2\,d\lambda_z(\gamma)\notag\\ &=\|g\|_z^2\label{AaH3.11} \end{align} since $\mathrm{supp}\, g\subset W_1$ and $h$ is identically one on $\tilde{r}(W_1)$. We now define $F\in C_c(G^{(0)})$ by \begin{equation}\label{definition_F_2} F(x)=\frac{f(x)}{\|f\circ r\|_z}. \end{equation} Then $\|F\circ r\|_z=1$ and \begin{equation}\label{unmotivated_ref_2} F\circ r(\gamma)\ne 0\implies h\big(r(\gamma)\big)\ne 0 \implies r(\gamma)\in V_1 \implies \gamma\in G^{V_1}. \end{equation} Let $N=\mathrm{supp}\, F$. Suppose $K$ is an open relatively compact symmetric neighbourhood of $(\overline{G_z^N})(\overline{G_z^N})^{-1}$ in $G$ and choose $b\in C_c(G)$ such that $b$ is identically one on $(\overline{G_z^N})(\overline{G_z^N})^{-1}$ and identically zero off $K$. As in Theorem \ref{M2_thm} we may assume that $b$ is self-adjoint by considering $\frac12(b+b^*)$. Define $D\in C_c(G)$ by $D(\gamma):=F\big(r(\gamma)\big)F\big(s(\gamma)\big)b(\gamma)$. By the same argument as in Theorem \ref{M2_thm}, $\mathrm{L}^z(D)$, and hence $\mathrm{L}^z(D^*\ast D)$, is the rank one projection determined by the unit vector $F\circ r\in L^2(G,\lambda_z)$. From \eqref{Aah3.4} we have \begin{align*} \mathrm{Tr}\big(\mathrm{L}^{x_{n_i}}&(D^*\ast D)\big)\\ &=\int_G F\big(r(\beta)\big)^2\bigg(\int_G F\big(r(\alpha)\big)^2 b(\alpha\beta^{-1})^2\, d\lambda_{x_{n_i}}(\alpha)\bigg)\, d\lambda_{x_{n_i}}(\beta). 
\end{align*} Since $b$ is identically zero off $K$, the inner integrand is zero unless $\alpha\beta^{-1}\in K$. Combining this with \eqref{supp_f_contained_in_V_2} and the fact that $\mathrm{supp}\,\lambda_{x_{n_i}}\subset G_{x_{n_i}}$ enables us to see that this inner integrand is zero unless $\alpha\in G_{x_{n_i}}^{V_1}\cap K\beta$. Thus \begin{align*} \mathrm{Tr}\big(\mathrm{L}^{x_{n_i}}&(D^*\ast D)\big)\\ &\leqslant\int_{\beta\in G_{x_{n_i}}^{V_1}} F\big(r(\beta)\big)^2\bigg(\int_{\alpha\in G_{x_{n_i}}^{V_1}\cap K\beta} F\big(r(\alpha)\big)^2\, d\lambda_{x_{n_i}}(\alpha)\bigg)\, d\lambda_{x_{n_i}}(\beta).\\ &\leqslant\frac{1}{\|f\circ r\|_z^4}\int_{\beta\in G_{x_{n_i}}^{V_1}} 1\bigg(\int_{\alpha\in G_{x_{n_i}}^{V_1}\cap K\beta} 1\, d\lambda_{x_{n_i}}(\alpha)\bigg)\, d\lambda_{x_{n_i}}(\beta). \end{align*} Since $\overline{V_1}$ and $\overline{K}$ are compact, by Lemma \ref{the_unbroken_lemma} there exists $i_0$ such that for every $i\geqslant i_0$ and any $\beta\in G_{x_{n_i}}^{\overline{V_1}}$, \[ \lambda_{x_{n_i}}(K\beta\cap G^{\overline{V_1}})<\lambda_z(G^{\overline{V_1}})+\delta. \] So, provided $i\geqslant i_0$, {\allowdisplaybreaks\begin{align*} \mathrm{Tr}\big(\mathrm{L}^{x_{n_i}}(D^*\ast D)\big)&\leqslant \frac{1}{\|f\circ r\|_z^4}\int_{\beta\in G_{x_{n_i}}^{V_1}}\lambda_{x_{n_i}}(K\beta\cap G_{x_{n_i}}^{V_1})\, d\lambda_{x_{n_i}}(\beta)\\ &\leqslant\frac{1}{\|f\circ r\|_z^4}\int_{\beta\in G_{x_{n_i}}^{V_1}}\big(\lambda_z(G_z^{\overline{V_1}})+\delta\big)\, d\lambda_{x_{n_i}}(\beta)\\ &<\frac{\big(\lambda_z(G^{\overline{V_1}})+\delta\big)\lambda_{x_{n_i}}(G^{V_1})}{\|f\circ r\|_z^4}\\ &<\frac{M(1+\epsilon)\big(\lambda_z(G^{\overline{V_1}})+\delta\big)\lambda_z(G^{V_1})}{\|g\|_z^4}\quad\text{(by \eqref{AaH3.8} and \eqref{AaH3.11})}\\ &<\frac{M(1+\epsilon)\big(\lambda_z(G^{\overline{V_1}})+\delta\big)\lambda_z(G^{V_1})}{(\lambda_z(G^{V_1})-\delta)^2}\quad\text{(by \eqref{AaH3.10})}\\ &<M(1+\epsilon)^2\quad\text{(by \eqref{AaH3.9})}. \end{align*}} We can now make our conclusion as in \cite[Theorem~3.5]{Archbold-anHuef2006}: by generalised lower semi-continuity \cite[Theorem~4.3]{Archbold-Spielberg1996}, {\allowdisplaybreaks\begin{align*} \underset{\scriptstyle n}{\lim\,\inf}\,\mathrm{Tr}\big(\mathrm{L}^{x_n}(D^\ast\ast D)\big)&\geqslant \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\, \mathrm{Tr} \big(\mathrm{L}^z(D^*\ast D)\big)\\ &=\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\}). \end{align*} }We now have {\allowdisplaybreaks\begin{align*} \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})&\leqslant \underset{\scriptstyle n}{\lim\,\inf}\, \mathrm{Tr}\big(\mathrm{L}^{x_n}(D^*\ast D)\big)\\ &\leqslant M(1+\epsilon)^2\\ &<\lfloor M\rfloor+1, \end{align*} }and so $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\leqslant\lfloor M\rfloor$, as required. \end{proof} \end{theorem} \section{Lower multiplicity and \texorpdfstring{$k$}{\it k}-times convergence II}\label{sec_lower_multiplicity_2} We proved in Proposition \ref{AaH_thm_1.1_1_implies_2} that if a sequence converges $k$-times in the orbit space of a principal groupoid, then the lower multiplicity of the associated sequence of representations is at least $k$. In this section we will prove the converse. The first result in this section generalises \cite[Lemma~5.1]{Archbold-anHuef2006}; with the exception of notation changes, the proof is the same as the proof in \cite{Archbold-anHuef2006}. 
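Before turning to these results, it may help to compare the quantities $\lambda_x(G^V)$ appearing below with the quantities appearing in Theorem \ref{AaH_thm5.3}. The following computation is only a sketch, assuming the usual conventions for the transformation-group groupoid of a free transformation group $(H,X)$, namely $G=H\times X$ with $r(h,x)=h\cdot x$, $s(h,x)=x$ and Haar system $\lambda_x=\nu\times\delta_x$, where $\nu$ is a Haar measure on $H$; these conventions are assumptions of this aside rather than anything fixed above. Since $\lambda_x$ is then concentrated on $G_x=H\times\{x\}$, \[ \lambda_x(G^V)=(\nu\times\delta_x)\big(\{(h,y)\in H\times X : h\cdot y\in V\}\big)=\nu\big(\{h\in H : h\cdot x\in V\}\big), \] so, at least formally, the conditions on $\lambda_{x_n}(G^V)$ and $\lambda_z(G^V)$ in the results below play the roles of the conditions on $\nu(\{h\in H:h\cdot x_n\in V\})$ and $\nu(\{h\in H:h\cdot z\in V\})$ in Theorem \ref{AaH_thm5.3}.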
\begin{lemma}\label{AaH_lemma5.1} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $k\in\mathbb P$, $z\in G^{(0)}$, and $\{x_n\}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed in $G^{(0)}$ and that there exists $R>k-1$ such that for every open neighbourhood $U$ of $z$ with $G_z^U$ relatively compact we have \[ \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^U)\geqslant R\lambda_z(G^U). \] Given an open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact, there exists a compact neighbourhood $N$ of $z$ with $N\subset V$ such that \[ \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^N)>(k-1)\lambda_z(G^N). \] \begin{proof} Apply Lemma \ref{AaH_Lemma3.3} to $V$ with $0<\epsilon<\frac{R-k+1}R \lambda_z(G^V)$ to get an open relatively-compact neighbourhood $V_1$ of $z$ with $\overline{V_1}\subset V$ and \[ \lambda_z(G^V)-\epsilon<\lambda_z(G^{V_1})\leqslant\lambda_z(G^{\overline{V_1}})\leqslant\lambda_z(G^V)<\lambda_z(G^{V_1})+\epsilon. \] Since $G_z^{V_1}$ is relatively compact we have \begin{align*} \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{\overline{V_1}})&\geqslant\underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{V_1})\\ &\geqslant R\lambda_z(G^{V_1})&\text{(by hypothesis)}\\ &>R\big(\lambda_z(G^V)-\epsilon\big)\\ &>(k-1)\lambda_z(G^V)&\text{(by our choice of }\epsilon\text{)}\\ &\geqslant (k-1)\lambda_z(G^{\overline{V_1}}). \end{align*} So we may take $N=\overline{V_1}$. \end{proof} \end{lemma} \begin{remark}\label{AaH_lemma5.1_variant} The preceding lemma also holds when $\lim\,\inf$ is replaced by $\lim\,\sup$. No modification of the proof is needed beyond replacing the two occurrences of $\lim\,\inf$ with $\lim\,\sup$. \end{remark} We may now proceed to our main theorem. \needspace{6\baselineskip} \begin{theorem}\label{circle_thm} Suppose $G$ is a second countable, locally compact, Hausdorff principal groupoid that admits a Haar system $\lambda$. Let $k$ be a positive integer, let $z\in G^{(0)}$ and let $\{x_n\}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed in $G^{(0)}$. Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{circle_thm_1} the sequence $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$; \item\label{circle_thm_2} $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\geqslant k$; \item\label{circle_thm_3} for every open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact we have \[ \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^V)\geqslant k\lambda_z(G^V); \] \item\label{circle_thm_4} there exists a real number $R>k-1$ such that for every open neighbourhood $V$ of $z$ in $G^{(0)}$ with $G_z^V$ relatively compact we have \[ \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^V)\geqslant R\lambda_z(G^V);\quad\text{and} \] \item\label{circle_thm_5} there exists a basic decreasing sequence of compact neighbourhoods $\{W_m\}$ of $z$ in $G^{(0)}$ such that, for each $m\geqslant 1$, \[ \underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(G^{W_m}). \] \end{enumerate} \begin{proof} We know that \eqref{circle_thm_1} implies \eqref{circle_thm_2} by Proposition \ref{AaH_thm_1.1_1_implies_2}. Suppose \eqref{circle_thm_2}. If $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\geqslant k$, then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})>\lfloor k-\epsilon\rfloor$ for all $\epsilon>0$. 
By Theorem \ref{M_thm}, for every $G^{(0)}$-open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact, $\lambda_{x_n}(G^V)>(k-\epsilon)\lambda_z(G^V)$ eventually; letting $\epsilon$ decrease to $0$ gives $\underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^V)\geqslant k\lambda_z(G^V)$, and hence \eqref{circle_thm_3} holds. It is immediately true that \eqref{circle_thm_3} implies \eqref{circle_thm_4}. Suppose \eqref{circle_thm_4}. We will construct the sequence $\{W_m\}$ of compact neighbourhoods inductively. Let $\{V_j\}$ be a basic decreasing sequence of open neighbourhoods of $z$ such that $G_z^{V_1}$ is relatively compact (such neighbourhoods exist by \cite[Lemma~4.1(1)]{Clark-anHuef2010-preprint}). By Lemma \ref{AaH_lemma5.1} there exists a compact neighbourhood $W_1$ of $z$ such that $W_1\subset V_1$ and $\underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{W_1})>(k-1)\lambda_z(G^{W_1})$. Now assume there are compact neighbourhoods $W_1,W_2,\ldots,W_m$ of $z$ with $W_1\supset W_2\supset\cdots\supset W_m$ such that \begin{equation}\label{circle_thm_4_imply_5_eqn} W_i\subset V_i\quad\text{and}\quad\underset{\scriptstyle n}{\lim\,\inf}\,\lambda_{x_n}(G^{W_i})>(k-1)\lambda_z(G^{W_i}) \end{equation} for all $1\leqslant i\leqslant m$. Apply Lemma \ref{AaH_lemma5.1} to $(\mathrm{Int}\, W_m)\cap V_{m+1}$ to obtain a compact neighbourhood $W_{m+1}$ of $z$ such that $W_{m+1}\subset(\mathrm{Int}\, W_m)\cap V_{m+1}$ and \eqref{circle_thm_4_imply_5_eqn} holds for $i=m+1$, establishing \eqref{circle_thm_5}. Suppose \eqref{circle_thm_5}. We begin by showing that $[x_n]\rightarrow [z]$ in $G^{(0)}/G$. Let $q:G^{(0)}\rightarrow G^{(0)}/G$ be the quotient map. Let $U$ be a neighbourhood of $[z]$ in $G^{(0)}/G$ and $V=q^{-1}(U)$. There exists $m$ such that $W_m\subset V$. Since $\lim\,\inf_n\,\lambda_{x_n}(G^{W_m})>0$ there exists $n_0$ such that $G_{x_n}^{W_m}\ne\emptyset$ for all $n\geqslant n_0$. Thus, for $n\geqslant n_0$ we may choose $\gamma\in G_{x_n}^{W_m}$, and then $[x_n]=q\big(r(\gamma)\big)\in q(W_m)\subset q(V)=U$, so $[x_n]$ is eventually in every neighbourhood of $[z]$ in $G^{(0)}/G$. Now suppose that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})<\infty$. Then, as in the proof of Theorem \ref{M_thm}, we may localise to an open invariant neighbourhood $Y$ of $z$ such that $[z]$ is the unique limit in $Y/G$ of $[x_n]$. Eventually $W_m\subset Y$, and so the sequence $\{x_n\}$ converges $k$-times in $Y/(G|_Y)=Y/G$ to $z$ by Proposition \ref{AaH_prop4_1_1} applied to the groupoid $G|_Y$. This implies that the sequence $\{x_n\}$ converges $k$-times in $G^{(0)}/G$. Finally, if $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})=\infty$, then $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$ by Proposition \ref{AaH_prop_4.2}, establishing \eqref{circle_thm_1} and completing the proof. \end{proof} \end{theorem} \begin{cor} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$ such that all the orbits are locally closed. Let $k\in\mathbb P$ and let $z\in G^{(0)}$ be such that $[z]$ is not open in $G^{(0)}$. Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{AaH_cor5.5_1} whenever $\braces{x_n}$ is a sequence in $G^{(0)}$ which converges to $z$ with $[x_n]\ne [z]$ eventually, then $\braces{x_n}$ is $k$-times convergent in $G^{(0)}/G$ to $z$; \item\label{AaH_cor5.5_2} $\mathrm{M}_\mathrm{L}(\mathrm{L}^z)\geqslant k$. \end{enumerate} \begin{proof} Assume \eqref{AaH_cor5.5_1}. We must first establish that $\braces{\mathrm{L}^z}$ is not open in $C^*(G)^\wedge$. 
If this is not the case, then $\braces{\mathrm{L}^z}$ is open and we can apply \cite[Proposition~3.6]{Clark2007} to see that $\braces{[z]}$ is open in $G^{(0)}/G$, and so $[z]$ is open in $G^{(0)}$, contradicting our assumption. Since $\braces{\mathrm{L}^z}$ is not open in $C^*(G)^\wedge$, we can apply \cite[Lemma~A.2]{Archbold-anHuef2006} to see that there exists a sequence $\braces{\pi_i}$ of irreducible representations of $C^*(G)$ such that each $\pi_i$ is not unitarily equivalent to $\mathrm{L}^z$, $\pi_i\rightarrow \mathrm{L}^z$ in $C^*(G)^\wedge$, and \begin{equation}\label{AaH_5.3} \mathrm{M}_\mathrm{L}(\mathrm{L}^z)=\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\pi_i})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\pi_i}). \end{equation} Since the orbits are locally closed, the map $G^{(0)}/G\rightarrow C^*(G)^\wedge$ such that $[x]\mapsto \mathrm{L}^x$ is a homeomorphism by \cite[Proposition~5.1]{Clark2007}\footnote{Proposition~5.1 in \cite{Clark2007} states that if a principal groupoid has locally closed orbits, then the map from $G^{(0)}/G$ to $C^*(G)^\wedge$ where $[x]\mapsto \mathrm{L}^x$ is a `homeomorphism from $G^{(0)}/G$ into $C^*(G)^\wedge$'. The proof of \cite[Proposition~5.1]{Clark2007} explicitly shows that this map is a surjection.}. It follows that the mapping $G^{(0)}\rightarrow C^*(G)^\wedge$ such that $x\mapsto \mathrm{L}^x$ is an open surjection, so by \cite[Proposition~1.15]{Williams2007} there is a sequence $\braces{x_n}$ in $G^{(0)}$ such that $x_n \rightarrow z$ and $\braces{\mathrm{L}^{x_n}}$ is unitarily equivalent to a subsequence of $\braces{\pi_i}$. By \eqref{AaH_5.3}, \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z)=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\pi_i})\geqslant \mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}}). \] Since each $\mathrm{L}^{x_n}$ is unitarily equivalent to some $\pi_i$ and no $\pi_i$ is unitarily equivalent to $\mathrm{L}^z$, we have $[x_n]\ne[z]$ for every $n$, so \eqref{AaH_cor5.5_1} applies: $\braces{x_n}$ converges $k$-times to $z$ in $G^{(0)}/G$, and it follows from Theorem \ref{circle_thm} that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z)\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant k$. Assume \eqref{AaH_cor5.5_2}. If $\braces{x_n}$ is a sequence in $G^{(0)}$ which converges to $z$ such that $[x_n]\ne [z]$ eventually, then \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z)\geqslant k. \] By Theorem \ref{circle_thm}, $\braces{x_n}$ is $k$-times convergent to $z$ in $G^{(0)}/G$. \end{proof} \end{cor} The next corollary improves Proposition \ref{AaH_prop_4.2} and is an immediate consequence of Proposition \ref{AaH_prop_4.2} and Theorem \ref{circle_thm}. \begin{cor}\label{AaH_cor5.6} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $z\in G^{(0)}$ and let $\braces{x_n}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed. Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{AaH_cor5.6_1} $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})=\infty$. \item\label{AaH_cor5.6_2} For every open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact, $\lambda_{x_n}(G^V)\rightarrow\infty$ as $n\rightarrow\infty$. \item\label{AaH_cor5.6_3} For each $k\geqslant 1$, the sequence $\{x_n\}$ converges $k$-times in $G^{(0)}/G$ to $z$. 
\end{enumerate} \end{cor} \section{Upper multiplicity and \texorpdfstring{$k$}{\it k}-times convergence}\label{sec_upper_multiplicity} The results in this section are corollaries of Theorems~\ref{M_thm} and~\ref{circle_thm}: they relate $k$-times convergence, measure ratios and upper multiplicity numbers, generalising all the upper-multiplicity results of \cite{Archbold-anHuef2006}. We begin with the upper-multiplicity analogue of Theorem~\ref{M_thm}. \begin{theorem}\label{thm_AaH3.6} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $M\in \mathbb R$ with $M\geqslant 1$, let $z\in G^{(0)}$ and let $\braces{x_n}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed. Suppose that there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \[ \lambda_{x_n}(G^V)\leqslant M\lambda_z(G^V)<\infty \] eventually. Then $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\leqslant \lfloor M\rfloor$. \begin{proof} Since $G$ is second countable, $C^*(G)$ is separable. By \cite[Lemma~A.1]{Archbold-anHuef2006} there exists a sequence $\braces{\mathrm{L}^{x_{n_i}}}$ such that \[ \mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})=\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}}). \] By Theorem \ref{M_thm}, $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\leqslant \lfloor M\rfloor$, so $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\leqslant\lfloor M\rfloor$. \end{proof} \end{theorem} \needspace{6\baselineskip} \begin{cor}\label{AaH_cor3.7} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$ such that all the orbits are locally closed. Let $M\in \mathbb R$ with $M\geqslant 1$ and let $z\in G^{(0)}$. If for every sequence $\braces{x_n}$ in $G^{(0)}$ which converges to $z$ there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \[ \lambda_{x_n}(G^V)\leqslant M\lambda_z(G^V)<\infty \] frequently, then $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)\leqslant\lfloor M\rfloor$. \begin{proof} Since $G$ is second countable, $C^*(G)$ is separable, and so we can apply \cite[Lemma~1.2]{Archbold-Kaniuth1999} to see that there exists a sequence $\braces{\pi_n}$ in $C^*(G)^\wedge$ that converges to $\mathrm{L}^z$ such that \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z). \] Since the orbits are locally closed, the map $G^{(0)}/G\rightarrow C^*(G)^\wedge$ such that $[x]\mapsto \mathrm{L}^x$ is a homeomorphism by \cite[Proposition~5.1]{Clark2007}. In particular, the mapping $G^{(0)}\rightarrow C^*(G)^\wedge$ such that $x\mapsto \mathrm{L}^x$ is an open surjection, so by \cite[Proposition~1.15]{Williams2007} there exists a sequence $\braces{x_i}$ in $G^{(0)}$ converging to $z$ such that $\braces{[\mathrm{L}^{x_i}]}$ is a subsequence of $\braces{[\pi_n]}$. By the hypothesis applied to the sequence $\braces{x_i}$ and Theorem \ref{M_thm}, $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})\leqslant\lfloor M\rfloor$. 
Since \begin{align*} \mathrm{M}_\mathrm{U}(\mathrm{L}^z)&=\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\pi_n})\leqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})\leqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})\\ &\leqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z), \end{align*} we obtain $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)\leqslant\lfloor M\rfloor$, as required. \end{proof} \end{cor} In Proposition~\ref{AaH_prop4_1_1} we generalised the first part of \cite[Proposition~4.1]{Archbold-anHuef2006}. We will now generalise the second part. The argument we use is similar to that used in Proposition \ref{AaH_prop4_1_1}. \begin{prop}\label{AaH_prop4_1_2} Let $G$ be a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $k\in\mathbb P$ and $z\in G^{(0)}$ with $[z]$ locally closed in $G^{(0)}$. Assume that $\{x_n\}$ is a sequence in $G^{(0)}$ such that $[x_n]\rightarrow [z]$ uniquely in $G^{(0)}/G$. Suppose $\{W_m\}$ is a basic decreasing sequence of compact neighbourhoods of $z$ such that, for each $m$, \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(G^{W_m}). \] Then there exists a subsequence of $\{x_n\}$ which converges $k$-times in $G^{(0)}/G$ to $z$. \begin{proof} Let $\{K_m\}$ be an increasing sequence of compact subsets of $G$ such that $G=\bigcup_{m\geqslant 1}\mathrm{Int}\,K_m$. By the regularity of $\lambda_z$, for each $m\geqslant 1$ there exist $\delta_m>0$ and an open neighbourhood $U_m$ of $G_z^{W_m}$ such that \begin{equation}\label{AaH4.4} \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(U_m) + \delta_m. \end{equation} We will construct, by induction, a strictly increasing sequence of positive integers $\{i_m\}$ such that, for all $m$, \begin{align} &\lambda_{x_{i_m}}(K_m\alpha\cap G^{W_m})<\lambda_z(U_m)+\delta_m/k\quad\text{for all }\alpha\in G_{x_{i_m}}^{W_m},\quad\text{and}\label{AaH4.5}\\ &\lambda_{x_{i_m}}(G^{W_m})>(k-1)\lambda_z(U_m)+\delta_m.\label{AaH4.6} \end{align} By Lemma \ref{the_unbroken_lemma} with $\delta=\lambda_z(U_1)-\lambda_z(G^{W_1})+\delta_1/k$, there exists $n_1$ such that $n\geqslant n_1$ implies \[ \lambda_{x_n}(K_1\alpha\cap G^{W_1})<\lambda_z(U_1)+\delta_1/k\quad\text{for all }\alpha\in G_{x_n}^{W_1}. \] By considering \eqref{AaH4.4} with $m=1$ we can choose $i_1\geqslant n_1$ such that \[ \lambda_{x_{i_1}}(G^{W_1})>(k-1)\lambda_z(U_1)+\delta_1. \] Assuming that $i_1<i_2<\cdots<i_{m-1}$ have been chosen, we can apply Lemma \ref{the_unbroken_lemma} with $\delta=\lambda_z(U_m)-\lambda_z(G^{W_m})+\delta_m/k$ to obtain $n_m>i_{m-1}$ such that \[ n\geqslant n_m\quad\text{implies}\quad\lambda_{x_n}(K_m\alpha\cap G^{W_m})<\lambda_z(U_m)+\delta_m/k\quad\text{for all }\alpha\in G_{x_n}^{W_m}, \] and then by \eqref{AaH4.4} we can choose $i_m\geqslant n_m$ such that \[ \lambda_{x_{i_m}}(G^{W_m})>(k-1)\lambda_z(U_m)+\delta_m. \] For each $m\in\mathbb P$ choose $\gamma_{i_m}^{(1)}\in G_{x_{i_m}}^{W_m}$ (which is non-empty by \eqref{AaH4.6}). By \eqref{AaH4.5} and \eqref{AaH4.6} we have \begin{align*} \lambda_{x_{i_m}}(G^{W_m}\backslash K_m\gamma_{i_m}^{(1)})&=\lambda_{x_{i_m}}(G^{W_m})-\lambda_{x_{i_m}}(G^{W_m}\cap K_m\gamma_{i_m}^{(1)})\\ &>(k-1)\lambda_z(U_m)+\delta_m-\big(\lambda_z(U_m)+\delta_m/k\big)\\ &=(k-2)\lambda_z(U_m)+\frac{k-1}k\delta_m. \end{align*} So we can choose $\gamma_{i_m}^{(2)}\in G_{x_{i_m}}^{W_m}\backslash K_m\gamma_{i_m}^{(1)}$.
This implies, as in the proof of Proposition \ref{AaH_prop4_1_1}, that \[ \lambda_{x_{i_m}}\big(G^{W_m}\backslash (K_m\gamma_{i_m}^{(1)}\cup K_m\gamma_{i_m}^{(2)})\big)>(k-3)\lambda_z(U_m)+\frac{(k-2)}k\delta_m, \] enabling us to choose $\gamma_{i_m}^{(3)}\in G_{x_{i_m}}^{W_m}\backslash (K_m\gamma_{i_m}^{(1)}\cup K_m\gamma_{i_m}^{(2)})$. Continuing in this way for $j=3,\ldots,k$, for each $i_m$ we choose \begin{equation}\label{eqn_choosing_gammas-2} \gamma_{i_m}^{(j)}\in G_{x_{i_m}}^{W_m}\backslash\bigg(\bigcup_{l=1}^{j-1}K_m\gamma_{i_m}^{(l)}\bigg). \end{equation} Note that $\gamma_{i_m}^{(j)}\notin K_m\gamma_{i_m}^{(l)}$ for $1\leqslant l<j\leqslant k$. We claim that $r(\gamma_{i_m}^{(l)})\rightarrow z$ as $m\rightarrow\infty$ for $1\leqslant l\leqslant k$. To see this, fix $l$ and let $V$ be an open neighbourhood of $z$. Since $\{W_m\}$ is a decreasing neighbourhood basis for $z$ there exists $m_0$ such that $m\geqslant m_0$ implies $W_m\subset V$, and so $r(\gamma_{i_m}^{(l)})\in W_m\subset V$. Finally we claim that $\gamma_{i_m}^{(j)}(\gamma_{i_m}^{(l)})^{-1}\rightarrow\infty$ as $m\rightarrow\infty$ for $1\leqslant l<j\leqslant k$. Fix $l<j$ and let $K$ be a compact subset of $G$. There exists $m_0$ such that $K\subset K_m$ for all $m\geqslant m_0$. By \eqref{eqn_choosing_gammas-2} we know \begin{align*} \gamma_{i_m}^{(j)}&\in G_{x_{i_m}}^{W_m}\backslash (K_m\gamma_{i_m}^{(l)})\\ &=\big(G_{x_{i_m}}^{W_m}(\gamma_{i_m}^{(l)})^{-1}\gamma_{i_m}^{(l)}\big)\backslash (K_m\gamma_{i_m}^{(l)})\\ &=\big((G_{x_{i_m}}^{W_m}(\gamma_{i_m}^{(l)})^{-1})\backslash K_m\big)\gamma_{i_m}^{(l)}. \end{align*} So provided $m\geqslant m_0$, $\gamma_{i_m}^{(j)}(\gamma_{i_m}^{(l)})^{-1}\in \big(G_{x_{i_m}}^{W_m}(\gamma_{i_m}^{(l)})^{-1}\big)\backslash K_m\subset G\backslash K_m\subset G\backslash K$, enabling us to conclude that $\{x_{i_m}\}$ converges $k$-times in $G^{(0)}/G$ to $z$. \end{proof} \end{prop} \needspace{6\baselineskip} \begin{theorem}\label{2nd_circle_thm} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $k\in\mathbb P$, let $z\in G^{(0)}$, and let $\braces{x_n}$ be a sequence in $G^{(0)}$ such that $[x_n]$ converges to $[z]$ in $G^{(0)}/G$. Assume that $[z]$ is locally closed. Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{2nd_circle_thm_1} there exists a subsequence $\braces{x_{n_i}}$ of $\braces{x_n}$ which converges $k$-times in $G^{(0)}/G$ to $z$; \item\label{2nd_circle_thm_2} $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant k$; \item\label{2nd_circle_thm_3} for every open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact we have \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^V)\geqslant k\lambda_z(G^V); \] \item\label{2nd_circle_thm_4} there exists a real number $R>k-1$ such that for every open neighbourhood $V$ of $z$ in $G^{(0)}$ with $G_z^V$ relatively compact we have \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^V)\geqslant R\lambda_z(G^V);\quad\text{and} \] \item\label{2nd_circle_thm_5} there exists a basic decreasing sequence of compact neighbourhoods $\{W_m\}$ of $z$ in $G^{(0)}$ such that, for each $m\geqslant 1$, \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(G^{W_m}).
\] \end{enumerate} \begin{proof} If \eqref{2nd_circle_thm_1} holds then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\geqslant k$ by Theorem \ref{circle_thm}, and so \[ \mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\geqslant k. \] If \eqref{2nd_circle_thm_2} holds then by \cite[Lemma~A.1]{Archbold-anHuef2006} there is a subsequence $\braces{x_{n_r}}$ such that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_r}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})$ so that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_r}}})\geqslant k$. Let $V$ be any open neighbourhood of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact. Then \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^V)\geqslant\underset{\scriptstyle r}{\lim\,\sup}\,\lambda_{x_{n_r}}(G^V)\geqslant \underset{\scriptstyle r}{\lim\,\inf}\,\lambda_{x_{n_r}}(G^V)\geqslant k\lambda_z(G^V), \] using Theorem \ref{circle_thm} for the last step. That \eqref{2nd_circle_thm_3} implies \eqref{2nd_circle_thm_4} is immediate. That \eqref{2nd_circle_thm_4} implies \eqref{2nd_circle_thm_5} follows by arguing as in the proof that \eqref{circle_thm_4} implies \eqref{circle_thm_5} in Theorem \ref{circle_thm}, citing Remark \ref{AaH_lemma5.1_variant} in place of Lemma \ref{AaH_lemma5.1}. Assume \eqref{2nd_circle_thm_5}. First suppose that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})<\infty$. Since $[x_n]\rightarrow[z]$, we can use an argument found at the beginning of the proof of Theorem \ref{M_thm} to obtain an open $G$-invariant neighbourhood $Y$ of $z$ in $G^{(0)}$ so that if we define $H:=G|_Y$, there exists a subsequence $\braces{x_{n_i}}$ of $\braces{x_n}$ such that $[x_{n_i}]\rightarrow [z]$ uniquely in $H^{(0)}/H$. Proposition \ref{AaH_prop4_1_2} now shows us that there exists a subsequence $\braces{x_{n_{i_j}}}$ of $\braces{x_{n_i}}$ that converges $k$-times in $H^{(0)}/H$ to $z$. It follows that $\braces{x_{n_{i_j}}}$ converges $k$-times in $G^{(0)}/G$ to $z$. When $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})=\infty$, $\braces{x_n}$ converges $k$-times in $G^{(0)}/G$ to $z$ by Corollary \ref{AaH_cor5.6}, establishing \eqref{2nd_circle_thm_1}. \end{proof} \end{theorem} \begin{cor}\label{AaH_cor5.4} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$ such that all the orbits are locally closed. Let $k\in\mathbb P$ and let $z\in G^{(0)}$. Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{AaH_cor5.4_1} there exists a sequence $\braces{x_n}$ in $G^{(0)}$ which is $k$-times convergent in $G^{(0)}/G$ to $z$; \item\label{AaH_cor5.4_2} $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)\geqslant k$. \end{enumerate} \begin{proof} Assume \eqref{AaH_cor5.4_1}. By the definitions of upper and lower multiplicity, \[ \mathrm{M}_\mathrm{U}(\mathrm{L}^z)\geqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}}). \] By Theorem \ref{circle_thm} we know that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant k$, establishing \eqref{AaH_cor5.4_2}. Assume \eqref{AaH_cor5.4_2}.
By \cite[Lemma~1.2]{Archbold-Kaniuth1999} there exists a sequence $\braces{\pi_n}$ converging to $\mathrm{L}^z$ such that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z)$. Since the orbits are locally closed, by \cite[Proposition~5.1]{Clark2007} the mapping $G^{(0)}\rightarrow C^*(G)^\wedge:x\mapsto\mathrm{L}^x$ is a surjection. So there is a sequence $\braces{\mathrm{L}^{x_n}}$ in $C^*(G)^\wedge$ such that $\mathrm{L}^{x_n}$ is unitarily equivalent to $\pi_n$ for each $n$. Then \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z)\geqslant k, \] and it follows from Theorem \ref{circle_thm} that $\braces{x_n}$ is $k$-times convergent in $G^{(0)}/G$ to $z$. \end{proof} \end{cor} \begin{cor}\label{AaH_cor5.7} Suppose that $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$. Let $z\in G^{(0)}$ and let $\braces{x_n}\subset G^{(0)}$ be a sequence converging to $z$. Assume that $[z]$ is locally closed. Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{AaH_cor5.7_1} there exists an open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact and \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^V)<\infty; \] \item\label{AaH_cor5.7_2} $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})<\infty$. \end{enumerate} \begin{proof} Suppose that \eqref{AaH_cor5.7_1} holds. Since $C^*(G)$ is separable, it follows from \cite[Lemma~A.1]{Archbold-anHuef2006} that there exists a subsequence $\braces{x_{n_j}}$ of $\braces{x_n}$ such that \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_j}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_j}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}}). \] By \eqref{AaH_cor5.7_1} and Corollary \ref{AaH_cor5.6} applied to $\braces{x_{n_j}}$, $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_j}}})<\infty$. Hence $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})<\infty$, as required. Suppose that \eqref{AaH_cor5.7_1} fails. Let $\braces{V_i}$ be a basic decreasing sequence of open neighbourhoods of $z$ such that $G_z^{V_1}$ is relatively compact (such neighbourhoods exist by \cite[Lemma~4.1(1)]{Clark-anHuef2010-preprint}). Then \[ \underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{V_i})=\infty\quad\text{for each }i \] and we may choose a subsequence $\braces{x_{n_i}}$ of $\braces{x_n}$ such that $\lambda_{x_{n_i}}(G^{V_i})\rightarrow\infty$ as $i\rightarrow\infty$. Let $V$ be any open neighbourhood of $z$ such that $G_z^V$ is relatively compact. There exists $i_0$ such that $V_i\subset V$ for all $i\geqslant i_0$. Then, for $i\geqslant i_0$, \[ \lambda_{x_{n_i}}(G^{V_i})\leqslant\lambda_{x_{n_i}}(G^V). \] Thus $\lambda_{x_{n_i}}(G^V)\rightarrow\infty$ as $i\rightarrow\infty$. By Corollary \ref{AaH_cor5.6} applied to $\braces{x_{n_i}}$, $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})=\infty$. Hence $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})=\infty$, that is, \eqref{AaH_cor5.7_2} fails. \end{proof} \end{cor} \needspace{6\baselineskip} \begin{cor}\label{AaH_cor5.8} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with Haar system $\lambda$ such that all the orbits are locally closed. Let $z\in G^{(0)}$.
Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{AaH_cor5.8_1} $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)<\infty$; \item\label{AaH_cor5.8_2} there exists an open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact and \[ \sup_{x\in V}\lambda_x(G^V)<\infty. \] \end{enumerate} \begin{proof} If \eqref{AaH_cor5.8_2} holds then \eqref{AaH_cor5.8_1} holds by Corollary \ref{AaH_cor3.7}. Let $\braces{V_i}$ be a basic decreasing sequence of open neighbourhoods of $z$ such that $G_z^{V_1}$ is relatively compact. If \eqref{AaH_cor5.8_2} fails then $\sup_{x\in V_i}\braces{\lambda_x(G^{V_i})}=\infty$ for each $i$ and we may choose a sequence $\braces{x_i}$ such that $x_i\in V_i$ for all $i$ and $\lambda_{x_i}(G^{V_i})\rightarrow\infty$. Since $\braces{V_i}$ is a basic decreasing sequence, $x_i\rightarrow z$. Let $V$ be an open neighbourhood of $z$ such that $G_z^V$ is relatively compact. There exists $i_0$ such that $V_i\subset V$ for all $i\geqslant i_0$. Then, for $i\geqslant i_0$, \[ \lambda_{x_i}(G^{V_i})\leqslant \lambda_{x_i}(G^V). \] Thus $\lambda_{x_i}(G^V)\rightarrow\infty$. By Corollary \ref{AaH_cor5.7}, $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})=\infty$. Hence $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)=\infty$, and so \eqref{AaH_cor5.8_1} fails. \end{proof} \end{cor} \section{Path groupoid examples}\label{section_graph_algebra_examples} In this section we will show how Theorems \ref{circle_thm} and \ref{2nd_circle_thm} can be applied to the path groupoids of directed graphs to reveal the multiplicity structure of their $C^*$-algebras. Recall the definition of a directed graph and the path groupoid from Section \ref{sec_path_groupoid}. \notationindex{VFinfty@$[v,f]^\infty$}Recall from page \pageref{def_vfinfty} that when $v$ is a vertex, $f$ is an edge, and there is exactly one infinite path with range $v$ that includes the edge $f$, then this infinite path is denoted by $[v,f]^\infty$. \notationindex{VFstar@$[v,f]^*$}When there is exactly one finite path $\alpha$ with $r(\alpha)=v$ and $\alpha_{|\alpha|}=f$, we denote $\alpha$ by $[v,f]^*$. 
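For the arguments that follow it is convenient to have the basic open sets of the path groupoid written out explicitly. With the conventions of Section \ref{sec_path_groupoid} (recalled here only for ease of reference), for $\alpha,\beta\in E^*$ with $s(\alpha)=s(\beta)$ the corresponding basic set is
\[
Z(\alpha,\beta)=\braces{(x,|\alpha|-|\beta|,y):x\in\alpha E^\infty,\ y\in\beta E^\infty,\ x_p=y_{p-|\alpha|+|\beta|}\text{ for }p>|\alpha|},
\]
and sets of this form constitute a basis for the topology on the path groupoid; this description is used repeatedly in the proofs below.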
\begin{example}[$2$-times convergence in a path groupoid]\label{2-times_convergence_example} Let $E$ be the graph \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-5em,-5.6em) rectangle (3*\cellwidth em + 4.5em,0.3em); \foreach \x in {1,2,3,4} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {$\scriptstyle v_{\x}$}; \foreach \x in {1,2,3,4} \foreach \y in {1} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {}; \foreach \x in {1,2,3,4} \foreach \y in {2} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-1.5em-1.5*\y em) {}; \foreach \x in {1,2,3,4} \foreach \y in {1,2} \fill[black] (x\x y\y) circle (0.15em); \foreach \x in {1,2,3,4} \draw [<-, bend left] (x\x y0) to node[anchor=west] {$\scriptstyle f_{\x}^{(2)}$} (x\x y1); \foreach \x in {1,2,3,4} \draw [<-, bend right] (x\x y0) to node[anchor=east] {$\scriptstyle f_{\x}^{(1)}$} (x\x y1); \foreach \x / \z in {1/2,2/3,3/4} \draw[black,<-] (x\x y0) to node[anchor=south] {} (x\z y0); \foreach \x in {1,2,3,4} \draw [<-] (x\x y1) -- (x\x y2); \foreach \x in {1,2,3,4} { \node (endtail\x) at (\cellwidth*\x em-\cellwidth em, -6em) {}; \draw [dotted,thick] (x\x y2) -- (endtail\x); } \node(endtailone) at (3*\cellwidth em + 2.5em,0em) {}; \draw[dotted,thick] (x4y0) -- (endtailone); \end{tikzpicture} \end{center} and let $G$ be the path groupoid. For each $n\geqslant 1$ define $x^{(n)}:=[v_1, f_n^{(1)}]^\infty$ and let $z$ be the infinite path with range $v_1$ that passes through each $v_n$. Then $\{x^{(n)}\}$ converges $2$-times in $G^{(0)}/G$ to $z$. \begin{proof} We will describe two sequences in $G$ as in Definition \ref{def_k-times_convergence}. For each $n\geqslant 1$ define $\gamma_n^{(1)}:=(x^{(n)},0,x^{(n)})$ and $\gamma_n^{(2)}:=\big([v_1, f_n^{(2)}]^\infty,0,x^{(n)}\big)$. It follows immediately that $s(\gamma_n^{(1)})=x^{(n)}=s(\gamma_n^{(2)})$ for all $n$ and that both $r(\gamma_n^{(1)})$ and $r(\gamma_n^{(2)})$ converge to $z$ as $n\rightarrow\infty$. It remains to show that $\gamma_n^{(2)}(\gamma_n^{(1)})^{-1}\rightarrow\infty$ as $n\rightarrow\infty$. Let $K$ be a compact subset of $G$. Our goal is to show that $\gamma_n^{(2)}(\gamma_n^{(1)})^{-1}=\gamma_n^{(2)}$ is eventually not in $K$. Since sets of the form $Z(\alpha,\beta)$ for $\alpha,\beta\in E^*$ with $s(\alpha)=s(\beta)$ form a basis for the topology on the path groupoid, for each $\gamma\in K$ there exist $\alpha^{(\gamma)},\beta^{(\gamma)}\in E^*$ with $s(\alpha^{(\gamma)})=s(\beta^{(\gamma)})$ so that $Z(\alpha^{(\gamma)},\beta^{(\gamma)})$ is an open neighbourhood of $\gamma$ in $G$. Thus $\cup_{\gamma\in K}Z(\alpha^{(\gamma)},\beta^{(\gamma)})$ is an open cover of the compact set $K$, and so admits a finite subcover $\cup_{i=1}^I Z(\alpha^{(i)},\beta^{(i)})$. We now claim that for any fixed $n\in\mathbb P$, if there exists $i$ with $1\leqslant i\leqslant I$ such that $\gamma_n^{(2)}\in Z(\alpha^{(i)},\beta^{(i)})$, then $\big|[v_1,f_n^{(2)}]^*\big|\leqslant \lvert\alpha^{(i)}\rvert$. Temporarily fix $n\in\mathbb P$ and suppose there exists $i$ with $1\leqslant i\leqslant I$ such that $\gamma_n^{(2)}\in Z(\alpha^{(i)},\beta^{(i)})$. Suppose the converse: that $\lvert\alpha^{(i)}\rvert<\big|[v_1,f_n^{(2)}]^*\big|$. Since $\gamma_n^{(2)}\in Z(\alpha^{(i)},\beta^{(i)})$, it follows that $r(\gamma_n^{(2)})=[v_1, f_n^{(2)}]^\infty\in \alpha^{(i)}E^\infty$, and so $\alpha_p^{(i)}=[v_1, f_n^{(2)}]^\infty_p$ for every $1\leqslant p\leqslant|\alpha^{(i)}|$. 
By examining the graph we can see that $s\big([v_1, f_n^{(2)}]^\infty_p\big)=v_{p+1}$ for all $1\leqslant p<\big|[v_1,f_n^{(2)}]^*\big|$. Since we also know that $|\alpha^{(i)}|<\big|[v_1,f_n^{(2)}]^*\big|$, we can deduce that $s(\alpha^{(i)})=v_j$ for some $j$. Furthermore since $s(\alpha^{(i)})=s(\beta^{(i)})$, $s(\beta^{(i)})=v_j$. There is only one path with source $v_j$ and range $v_1$, so $\alpha^{(i)}=\beta^{(i)}$. Note that when $k=|\alpha^{(i)}|-|\beta^{(i)}|$, the set $Z(\alpha^{(i)},\beta^{(i)})$ is by definition equal to \[ \{(x,k,y):x\in \alpha^{(i)}E^\infty,y\in \beta^{(i)}E^\infty, x_p=y_{p-k} \text{ for }p>|\alpha^{(i)}|\}, \] so since $\gamma_n^{(2)}\in Z(\alpha^{(i)},\beta^{(i)})$ and $\alpha^{(i)}=\beta^{(i)}$, we can see that $s(\gamma_n^{(2)})_p=r(\gamma_n^{(2)})_p$ for all $p>|\alpha^{(i)}|$. We know $s(\gamma_n^{(2)})=[v_1, f_n^{(1)}]^\infty$ and $r(\gamma_n^{(2)})=[v_1, f_n^{(2)}]^\infty$, so $[v_1, f_n^{(2)}]^\infty_p=[v_1, f_n^{(1)}]^\infty_p$ for all $p>|\alpha^{(i)}|$. In particular, since we assumed that $\big|[v_1,f_n^{(2)}]^*\big|>|\alpha^{(i)}|$, we have \[ [v_1, f_n^{(2)}]^\infty_{\big|[v_1,f_n^{(2)}]^*\big|}=[v_1, f_n^{(1)}]^\infty_{\big|[v_1,f_n^{(2)}]^*\big|},\] so that $f_n^{(2)}=f_n^{(1)}$. But $f_n^{(1)}$ and $f_n^{(2)}$ are distinct, so we have found a contradiction, and we must have $\big|[v_1,f_n^{(2)}]^*\big|\leqslant |\alpha^{(i)}|$. Our next goal is to show that each $Z(\alpha^{(i)},\beta^{(i)})$ contains at most one $\gamma_n^{(2)}$. Fix $n,m\in\mathbb P$ and suppose that both $\gamma_n^{(2)}$ and $\gamma_m^{(2)}$ are in $Z(\alpha^{(i)},\beta^{(i)})$ for some $i$. We will show that $n=m$. Since $\gamma_n^{(2)}\in Z(\alpha^{(i)},\beta^{(i)})$, $r(\gamma_n^{(2)})=[v_1, f_n^{(2)}]^\infty\in \alpha^{(i)}E^\infty$. Thus there exists $x\in E^\infty$ such that $[v_1,f_n^{(2)}]^*x\in \alpha^{(i)}E^\infty$ and, since $\big|[v_1,f_n^{(2)}]^*\big|\leqslant \lvert\alpha^{(i)}\rvert$, we can truncate $x$ to form a finite path $\epsilon\in E^*$ such that $[v_1,f_n^{(2)}]^*\epsilon=\alpha^{(i)}$. Similarly there exists $\delta\in E^*$ such that $[v_1,f_m^{(2)}]^*\delta=\alpha^{(i)}$. Then \[ [v_1,f_n^{(2)}]^*\epsilon=\alpha^{(i)}=[v_1,f_m^{(2)}]^*\delta, \] which we can see by looking at the graph is only possible if $n=m$. We have thus shown that if $\gamma_n^{(2)}$ and $\gamma_m^{(2)}$ are in $Z(\alpha^{(i)},\beta^{(i)})$, then $\gamma_n^{(2)}=\gamma_m^{(2)}$. Let $S=\{n\in\mathbb P:\gamma_n^{(2)}\in K\}$. Since $K\subset \cup_{i=1}^IZ(\alpha^{(i)},\beta^{(i)})$ and since $\gamma_n^{(2)},\gamma_m^{(2)}\in Z(\alpha^{(i)},\beta^{(i)})$ implies $n=m$, $S$ can contain at most $I$ elements. In particular $S$ is finite, so there exists $n_0$ such that $\gamma_n^{(2)}\notin K$ whenever $n>n_0$. Thus $\gamma_n^{(2)}\rightarrow\infty$ as $n\rightarrow\infty$, and we have shown that $\braces{x^{(n)}}$ converges $2$-times to $z$ in $G^{(0)}/G$.
\end{proof} \end{example} \needspace{4\baselineskip} \begin{example}[$k$-times convergence in a path groupoid]\label{k-times_convergence_example} For any fixed positive integer $k$, let $E$ be the graph \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-5em,-5.6em) rectangle (3*\cellwidth em + 4.5em,0.3em); \foreach \x in {1,2,3,4} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {$\scriptstyle v_{\x}$}; \foreach \x in {1,2,3,4} \foreach \y in {1} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {}; \foreach \x in {1,2,3,4} \foreach \y in {2} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-1.5em-1.5*\y em) {}; \foreach \x in {1,2,3,4} \foreach \y in {1,2} \fill[black] (x\x y\y) circle (0.15em); \foreach \x in {1,2,3,4} \draw [<-, bend right=40] (x\x y0) to node[anchor=west] {} (x\x y1); \foreach \x in {1,2,3,4} \draw [<-, bend right=25] (x\x y0) to node[anchor=west] {} (x\x y1); \foreach \x in {1,2,3,4} \draw [<-, bend left] (x\x y0) to node[anchor=west,rotate=-25] {$\scriptstyle f_{\x}^{(1)},\ldots,f_{\x}^{(k)}$} (x\x y1); \foreach \x in {1,2,3,4} \draw [dotted,thick] ($(x\x y0)!{1/(2*cos(10))}!-10:(x\x y1)$) -- ($(x\x y0)!{1/(2*cos(14))}!14:(x\x y1)$); \foreach \x / \z in {1/2,2/3,3/4} \draw[black,<-] (x\x y0) to node[anchor=south] {} (x\z y0); \foreach \x in {1,2,3,4} \draw [<-] (x\x y1) -- (x\x y2); \foreach \x in {1,2,3,4} { \node (endtail\x) at (\cellwidth*\x em-\cellwidth em, -6em) {}; \draw [dotted,thick] (x\x y2) -- (endtail\x); } \node(endtailone) at (3*\cellwidth em + 2.5em,0em) {}; \draw[dotted,thick] (x4y0) -- (endtailone); \end{tikzpicture} \end{center} and let $G$ be the path groupoid. For each $n\geqslant 1$ define $x^{(n)}:=[v_1, f_n^{(1)}]^\infty$ and let $z$ be the infinite path that passes through each $v_n$. Then the sequence $\{x^{(n)}\}$ converges $k$-times in $G^{(0)}/G$ to $z$. \begin{proof} After defining $\gamma_n^{(i)}:=\big([v_1, f_n^{(i)}]^\infty,0,x^{(n)}\big)$ for each $1\leqslant i\leqslant k$, an argument similar to that in Example \ref{2-times_convergence_example} establishes the $k$-times convergence. 
\end{proof} \end{example} \begin{example}[Lower multiplicity 2 and upper multiplicity 3]\label{ML2_MU3_example} Consider the graph $E$ described by \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-2.5em,-7.1em) rectangle (4*\cellwidth em + 2.5em,0.3em); \foreach \x in {1,2,3,4,5} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {$\scriptstyle v_{\x}$}; \foreach \x in {1,2,3,4,5} \foreach \y in {1} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {$\scriptstyle w_{\x}$}; \foreach \x in {1,2,3,4,5} \foreach \y in {2,3} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-1.5em-1.5*\y em) {}; \foreach \x in {1,2,3,4,5} \foreach \y in {2,3} \fill[black] (x\x y\y) circle (0.15em); \foreach \x in {1,3,5} \draw [<-, bend left] (x\x y0) to node[anchor=west] {} (x\x y1); \foreach \x in {1,3,5} \draw [<-, bend right] (x\x y0) to node[anchor=east] {$\scriptstyle f_{\x}^{(1)}$} (x\x y1); \foreach \x in {2,4} \draw [<-, bend left=40] (x\x y0) to node[anchor=west] {} (x\x y1); \foreach \x in {2,4} \draw [<-, bend right=40] (x\x y0) to node[anchor=east] {$\scriptstyle f_{\x}^{(1)}$} (x\x y1); \foreach \x in {2,4} \draw [<-] (x\x y0) to node {} (x\x y1); \foreach \x / \z in {1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y0) to node[anchor=south] {} (x\z y0); \foreach \x in {1,2,3,4,5} \draw [<-] (x\x y1) -- (x\x y2); \foreach \x in {1,2,3,4,5} \draw [<-] (x\x y2) -- (x\x y3); \foreach \x in {1,2,3,4,5} { \node (endtail\x) at (\cellwidth*\x em-\cellwidth em, -7.5em) {}; \draw [dotted,thick] (x\x y3) -- (endtail\x); } \node(endtailone) at (4*\cellwidth em + 2.5em,0em) {}; \draw[dotted,thick] (x5y0) -- (endtailone); \end{tikzpicture} \end{center} where for each odd $n\geqslant 1$ there are exactly two edges $f_n^{(1)},f_n^{(2)}$ with source $w_n$ and range $v_n$, and for each even $n\geqslant 2$ there are exactly three edges $f_n^{(1)},f_n^{(2)},f_n^{(3)}$ with source $w_n$ and range $v_n$. Let $G$ be the path groupoid, define $x^{(n)}:=[v_1, f_n^{(1)}]^\infty$ for every $n\geqslant 1$, and let $z$ be the infinite path that meets every vertex $v_n$ (so $z$ has range $v_1$). Then \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=2\quad\text{and}\quad\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=3. \] \begin{proof} We know that $\braces{x^{(n)}}$ converges $2$-times to $z$ in $G^{(0)}/G$ by the argument in Example \ref{2-times_convergence_example}, so we can apply Theorem \ref{circle_thm} to see that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})\geqslant 2$. We can see that the subsequence $\braces{x^{(2n)}}$ of $\braces{x^{(n)}}$ converges $3$-times to $z$ in $G^{(0)}/G$ by the argument of Example \ref{k-times_convergence_example}. Theorem \ref{2nd_circle_thm} now tells us that $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})\geqslant 3$. Now suppose $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})\geqslant 3$. Then by Theorem \ref{circle_thm}, $\braces{x^{(n)}}$ converges $3$-times to $z$ in $G^{(0)}/G$, so there must exist three sequences $\braces{\gamma_n^{(1)}}$, $\braces{\gamma_n^{(2)}}$, and $\braces{\gamma_n^{(3)}}$ as in the definition of $k$-times convergence (Definition \ref{def_k-times_convergence}). Since $r(\gamma_n^{(i)})\rightarrow z$ and $z$ has range $v_1$, each $r(\gamma_n^{(i)})$ eventually has range $v_1$. For each odd $n$ there are only two elements in $G$ with source $x^{(n)}$ whose range has range $v_1$, namely $([v_1, f_n^{(1)}]^\infty,0,x^{(n)})$ and $([v_1, f_n^{(2)}]^\infty,0,x^{(n)})$, so there must exist $1\leqslant i<j\leqslant 3$ such that $\gamma_n^{(i)}=\gamma_n^{(j)}$ frequently.
Then $\gamma_n^{(j)}(\gamma_n^{(i)})^{-1}=r(\gamma_n^{(i)})$ frequently and, since $r(\gamma_n^{(i)})\rightarrow z$, $\braces{\gamma_n^{(j)}(\gamma_n^{(i)})^{-1}}$ admits a convergent subsequence. Thus $\gamma_n^{(j)}(\gamma_n^{(i)})^{-1}\nrightarrow\infty$, contradicting the definition of $k$-times convergence. If $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})\geqslant 4$, then by Theorem \ref{2nd_circle_thm} there is a subsequence of $\braces{x^{(n)}}$ that converges $4$-times to $z$ in $G^{(0)}/G$. A similar argument to that in the preceding paragraph shows that this is not possible since there are at most 3 edges between any $v_n$ and $w_n$. It follows that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=2$ and $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=3$. \end{proof} \end{example} \begin{lemma}\label{upper_and_lower_multiplicities_lemma} In Example \ref{2-times_convergence_example}, \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=2; \] and in Example \ref{k-times_convergence_example}, \[ \mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=k. \] \begin{proof} The same argument as that found in Example \ref{ML2_MU3_example} can be used to demonstrate this lemma. The explicit proof was given for Example \ref{ML2_MU3_example} since it covers the case where the upper and lower multiplicities are distinct. \end{proof} \end{lemma} In the next example we will add some structure to the graph from Example \ref{2-times_convergence_example} to create a path groupoid $G$ with non-Hausdorff orbit space that continues to exhibit $2$-times convergence. 
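Before turning to that example, we note that the examples so far fit a common pattern, recorded here only as a heuristic (it is neither needed nor proved in what follows): if exactly $k_n$ parallel edges feed into $v_n$ in the $n$-th column of one of the graphs above, then the arguments of Examples \ref{2-times_convergence_example} and \ref{ML2_MU3_example}, combined with Theorems \ref{circle_thm} and \ref{2nd_circle_thm}, suggest that
\[
\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=\liminf_{n}k_n
\qquad\text{and}\qquad
\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x^{(n)}}})=\limsup_{n}k_n.
\]
This is consistent with Lemma \ref{upper_and_lower_multiplicities_lemma}, where $k_n$ is constant, and with Example \ref{ML2_MU3_example}, where $k_n$ alternates between $2$ and $3$.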
\begin{example}\label{example_with_non-Hausdorff_orbit_space} Let $E$ be the directed graph \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-4em,-5.6em) rectangle (3*\cellwidth em + 3em,4.3em); \foreach \x in {1,2,3,4} \foreach \y in {0,1} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {}; \foreach \x in {1,2,3,4} \foreach \y in {2} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-1.5em-1.5*\y em) {}; \foreach \x in {1,2,3,4} \foreach \y in {0,1,2} \fill[black] (x\x y\y) circle (0.15em); \foreach \x in {1,2,3,4} { \node (x\x z1) at (\cellwidth*\x em-1.25*\cellwidth em,4em) {$\scriptstyle v_\x$}; \node (x\x z2) at (\cellwidth*\x em-1.5*\cellwidth em,2em) {$\scriptstyle w_\x$}; } \foreach \x in {1,2,3,4} \draw [<-, bend left] (x\x y0) to node[anchor=west] {$\scriptstyle f_{\x}^{(2)}$} (x\x y1); \foreach \x in {1,2,3,4} \draw [<-, bend right] (x\x y0) to node[anchor=east] {$\scriptstyle f_{\x}^{(1)}$} (x\x y1); \foreach \x / \z in {1/2,2/3,3/4} \draw[black,<-] (x\x z1) to node[anchor=south] {} (x\z z1); \foreach \x / \z in {1/2,2/3,3/4} \draw[black,<-] (x\x z2) to node[anchor=south] {} (x\z z2); \foreach \x in {1,2,3,4} { \draw[white,ultra thick,-] (x\x z1) -- (x\x y0); \draw[<-] (x\x z1) -- (x\x y0); \draw[<-] (x\x z2) -- (x\x y0); } \foreach \x in {1,2,3,4} \draw [<-] (x\x y1) -- (x\x y2); \foreach \x in {1,2,3,4} { \node (endtail\x) at (\cellwidth*\x em-\cellwidth em, -6em) {}; \draw [dotted,thick] (x\x y2) -- (endtail\x); } \node (endtailone) at ($(x4z1) + 0.5*(\cellwidth em, 0 em)$) {}; \node (endtailtwo) at ($(x4z2) + 0.5*(\cellwidth em, 0 em)$) {}; \draw[dotted,thick] (x4z1) -- (endtailone); \draw[dotted,thick] (x4z2) -- (endtailtwo); \end{tikzpicture} \end{center} and let $G$ be the path groupoid. For every $n\geqslant 1$ let $x^{(n)}$ be the infinite path $[v_1, f_n^{(1)}]^\infty$. Let $x$ be the infinite path with range $v_1$ that passes through each $v_n$ and let $y$ be the infinite path with range $w_1$ that passes through each $w_n$. Then the orbit space $G^{(0)}/G$ is not Hausdorff and $\braces{x^{(n)}}$ converges $2$-times in $G^{(0)}/G$ to both $x$ and $y$. \begin{proof} To see that $\braces{x^{(n)}}$ converges $2$-times to $x$ in $G^{(0)}/G$, consider the sequences $\braces{([v_1, f_n^{(2)}]^\infty,0,x^{(n)})}$ and $\braces{(x^{(n)},0,x^{(n)})}$ and follow the argument as in Example \ref{2-times_convergence_example}. To see that $\braces{x^{(n)}}$ converges $2$-times to $y$ in $G^{(0)}/G$, consider the sequences $\braces{([w_1,f_n^{(1)}]^\infty,0,x^{(n)})}$ and $\braces{([w_1,f_n^{(2)}]^\infty,0,x^{(n)})}$. While it is tempting to think that this example exhibits $4$-times convergence (or even $3$-times convergence), this is not the case (see Example~\ref{ML2_MU3_example} for an argument demonstrating this). We know $x^{(n)}$ converges $2$-times to $x$ in $G^{(0)}/G$, so $[x^{(n)}]\rightarrow [x]$ in $G^{(0)}/G$, and similarly $[x^{(n)}]\rightarrow [y]$ in $G^{(0)}/G$. It follows that $G^{(0)}/G$ is not Hausdorff since $[x]\ne [y]$. \end{proof} \end{example} In all of the examples above, the orbits in $G^{(0)}$ are closed and hence $C^*(G)^\wedge$ and $G^{(0)}/G$ are homeomorphic by \cite[Proposition~5.1]{Clark2007}. By combining the features of the graphs in Examples~\ref{ML2_MU3_example} and~\ref{example_with_non-Hausdorff_orbit_space} we obtain a principal groupoid whose $C^*$-algebra has non-Hausdorff spectrum and distinct upper and lower multiplicities among its irreducible representations. 
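Concretely (a sketch only, not used later): start from the graph of Example \ref{example_with_non-Hausdorff_orbit_space} and, as in Example \ref{ML2_MU3_example}, use two parallel edges $f_n^{(1)},f_n^{(2)}$ when $n$ is odd and three parallel edges when $n$ is even. Arguing as in Example \ref{example_with_non-Hausdorff_orbit_space}, the orbit space of the resulting path groupoid is not Hausdorff, while arguing as in Example \ref{ML2_MU3_example} gives
\[
\mathrm{M}_\mathrm{L}(\mathrm{L}^x,\braces{\mathrm{L}^{x^{(n)}}})=2\quad\text{and}\quad\mathrm{M}_\mathrm{U}(\mathrm{L}^x,\braces{\mathrm{L}^{x^{(n)}}})=3.
\]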
\chapter{Classes of \texorpdfstring{$C^*$}{\it C*}-algebras}\label{sec_algebra_classes} This chapter introduces various classes of $C^*$-algebras, beginning with postliminal and liminal $C^*$-algebras in Section \ref{section_liminal_and_postliminal_overview} and proceeding to bounded trace, Fell and continuous trace $C^*$-algebras in Section \ref{bounded-trace_fell_cts-trace_overview}. We will also describe theorems that characterise groupoids with $C^*$-algebras in these classes. \section{Liminal and postliminal \texorpdfstring{$C^*$}{\it C*}-algebras}\label{section_liminal_and_postliminal_overview} \begin{definition} A $C^*$-algebra is {\em liminal}\index{liminal} (or {\em CCR}\index{CCR}) if the image of each irreducible representation $\pi$ is exactly $\mathcal K(\mathcal H_\pi)$. A $C^*$-algebra is {\em postliminal}\index{postliminal} (or {\em GCR}\index{GCR}) if the image of each irreducible representation $\pi$ contains $\mathcal K(\mathcal H_\pi)$. \end{definition} Pedersen in \cite[p. 191]{Pedersen1979} points out that $C^*$-algebras are either extremely well behaved (Type I) or totally misbehaved. Sakai in \cite[Theorem~2]{Sakai1966} showed that these well-behaved Type I algebras are precisely the postliminal $C^*$-algebras. A locally compact group $H$ is said to be {\em liminal} (or respectively {\em postliminal}) if $C^*(H)$ is liminal (or respectively postliminal) \cite[13.9.4]{Dixmier1977}. The following two theorems are due to Clark. \begin{theorem}[{\cite[Theorem~7.1]{Clark2007-2}}]\label{thm_groupoid_algebra_postliminal} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$ in which all of the stability subgroups are amenable. Then $C^*(G,\lambda)$ is postliminal if and only if $G^{(0)}/G$ is $T_0$ and all of the stability subgroups are postliminal. \end{theorem} \begin{theorem}[{\cite[Theorem~6.1]{Clark2007-2}}]\label{thm_groupoid_algebra_liminal} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid with a Haar system $\lambda$ in which all of the stability subgroups are amenable. Then $C^*(G,\lambda)$ is liminal if and only if $G^{(0)}/G$ is $T_1$ and all of the stability subgroups are liminal. \end{theorem} \begin{remark}\label{remark_T1_iff_orbits_closed} Recall that a topological space $X$ is $T_1$ if and only if for each $x\in X$, the singleton set $\{x\}$ is closed. It follows that if $G$ is a groupoid, then the orbit space $G^{(0)}/G$ is $T_1$ if and only if the orbits in $G^{(0)}$ are closed subsets of $G^{(0)}$. \end{remark} Remark \ref{remark_T1_iff_orbits_closed} will be used with Theorem \ref{thm_groupoid_algebra_liminal} for the results in Chapter \ref{chapter_categorising_higher-rank_graphs} characterising the higher-rank graphs with liminal $C^*$-algebras. The rest of this section establishes the analogue of Remark \ref{remark_T1_iff_orbits_closed} for use with Theorem \ref{thm_groupoid_algebra_postliminal}: the orbit space $G^{(0)}/G$ is $T_0$ if and only if the orbits in $G^{(0)}$ are locally closed. A topological space is {\em $\sigma$-compact}\index{sigma-compact@$\sigma$-compact} if it is a countable union of compact sets. The following result is well known. \begin{lemma}\label{lemma_second_countable_locally_compact_is_sigma_compact} Every second countable, locally compact space is $\sigma$-compact. \begin{proof} Let $X$ be a second countable, locally compact topological space. Since $X$ is second countable there is a countable base $\{U_i\}$ for the topology on $X$.
As $X$ is locally compact, each $x\in X$ has a compact neighbourhood $K_x$. Since $\{U_i\}$ is a base for the topology on $X$, for each $x\in X$ there exists $i_x\in\mathbb N$ such that $x\in U_{i_x}\subset K_x$. Let $\{U_{i_j}\}$ be the subsequence of $\{U_i\}$ obtained by removing the elements that are not in $\{U_{i_x}: x\in X\}$. Since each $x$ has $U_{i_x}\subset K_x$, for each $j$ there exists a compact set $K_j$ with $U_{i_j}\subset K_j$. Then $X=\cup_{j} K_j$, so $X$ is $\sigma$-compact. \end{proof} \end{lemma} A completely metrisable, second countable, topological space is called a {\em Polish space}\index{Polish space} \cite[p.~175]{Williams2007}. As with Lemma \ref{lemma_second_countable_locally_compact_is_sigma_compact}, the following result is well known. \begin{lemma}[{\cite[Lemma~6.5]{Williams2007}}]\label{lemma_second_countable_locally_compact_hausdorff_is_polish} Every second countable, locally compact, Hausdorff space is Polish. \end{lemma} A subset of a topological space is {\em $F_\sigma$} if it is a countable union of closed sets. The next result is by Clark. \begin{lemma}[{\cite[p.~258]{Clark2007}}]\label{lemma_clark} If $G$ is a $\sigma$-compact, Hausdorff groupoid then the relation on $G^{(0)}$ induced by $G$ is an $F_\sigma$ subset of $G^{(0)}\times G^{(0)}$. \end{lemma} The following theorem is a cut-down version of Ramsay's \cite[Theorem~2.1]{Ramsay1990}. \begin{theorem}[{\cite[Theorem~2.1]{Ramsay1990}}]\label{theorem_ramsay_thm2.1} Let $G$ be a Polish groupoid and suppose that the relation on $G^{(0)}$ induced by $G$ is an $F_\sigma$ subset of $G^{(0)}\times G^{(0)}$. The following are equivalent: \begin{enumerate} \item $G^{(0)}/G$ is $T_0$; \item each orbit in $G^{(0)}$ is a locally closed subset of $G^{(0)}$; \item $G^{(0)}/G$ is almost Hausdorff; and \item $G^{(0)}/G$ is a standard Borel space. \end{enumerate} \end{theorem} The following corollary is, from the perspective of this thesis, the key immediate consequence of Lemmas \ref{lemma_second_countable_locally_compact_is_sigma_compact}, \ref{lemma_second_countable_locally_compact_hausdorff_is_polish} and \ref{lemma_clark} and of Theorem \ref{theorem_ramsay_thm2.1}. It will be used with Theorem \ref{thm_groupoid_algebra_postliminal}. \begin{cor}\label{cor_ramsay_orbits_locally_closed} Suppose $G$ is a second countable, locally compact, Hausdorff groupoid. The space $G^{(0)}/G$ is $T_0$ if and only if the orbits in $G^{(0)}$ are locally closed. \end{cor} \section{Bounded-trace, Fell and continuous trace \texorpdfstring{$C^*$}{\it C*}-algebras}\label{bounded-trace_fell_cts-trace_overview} Recall that an element $a$ of a $C^*$-algebra $A$ is {\em positive} if $a=b^*b$ for some $b\in A$ and that $A^+$ is the set of all positive elements in $A$. It follows that if $a\in A^+$ and $\pi$ is a representation (which, recall, is a $\ast$-homomorphism) of $A$ on a Hilbert space $\mathcal H_\pi$, then $\pi(a)$ is a positive element of $B(\mathcal H_\pi)$. The trace of $\pi(a)$ is then given by \[ \mathrm{Tr}\big(\pi(a)\big)=\sum_i\big(\pi(a)\xi_i\, |\, \xi_i\big)\in [0,\infty] \] for any orthonormal basis $\{\xi_i\}$ of $\mathcal H_\pi$. If $\pi$ and $\sigma$ are unitarily equivalent representations of $A$, then $\mathrm{Tr}\big(\pi(a)\big)=\mathrm{Tr}\big(\sigma(a)\big)$, so for each $a\in A^+$ there is a map $\hat{a}:\widehat{A}\rightarrow [0,\infty]$ such that $\hat{a}([\pi])=\mathrm{Tr}\big(\pi(a)\big)$ for every irreducible representation $\pi$ of $A$.
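As a simple illustration of the map $\hat a$ (a standard observation, included only for orientation): if $A=C_0(X)$ for a locally compact Hausdorff space $X$, then every irreducible representation of $A$ is one-dimensional and is given by evaluation at a point of $X$, so for $a\in A^+$
\[
\hat{a}\big([\mathrm{ev}_x]\big)=\mathrm{Tr}\big(\mathrm{ev}_x(a)\big)=a(x)\qquad(x\in X).
\]
Under the usual identification of $\widehat{A}$ with $X$, the map $\hat a$ is just the function $a$ itself, so it is automatically bounded and continuous.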
\begin{definition}[{\cite[pp.~10--11]{Milicic1973}}]\index{bounded trace} Suppose $A$ is a $C^*$-algebra. An element $a\in A^+$ has {\em bounded trace} if $\hat{a}$ is bounded. We say that $A$ has {\em bounded trace} if the span of all bounded-trace elements is a dense subset of $A$. \end{definition} Archbold, Somerset and Spielberg in \cite[Theorem~2.6]{Archbold-Somerset-Spielberg1997} show that the bounded trace $C^*$-algebras are precisely the {\em uniformly liminal} $C^*$-algebras from \cite[2.1]{Perdrizet1971} and \cite[4.7.11]{Dixmier1969}. Mili\v{c}i\'c in \cite[Theorem~1]{Milicic1973} showed that every bounded-trace $C^*$-algebra is liminal. \begin{definition}[{\cite[pp.~104--106]{Dixmier1977}}]\index{continuous trace} Suppose $A$ is a $C^*$-algebra. An element $a\in A^+$ has {\em continuous trace} if $\hat{a}$ is bounded and continuous. We say that $A$ has {\em continuous trace} if the span of all continuous-trace elements is a dense subset of $A$. \end{definition} Note that it follows immediately from these definitions that every continuous-trace $C^*$-algebra has bounded trace. \begin{prop}[{\cite[Proposition~4.5.4]{Dixmier1977}}]\label{Dixmier4_5_4} A $C^*$-algebra $A$ has continuous trace if and only if $\widehat{A}$ is Hausdorff and for every $\pi\in\widehat{A}$ there exist $a\in A^+$ and a neighbourhood $V$ of $\pi$ such that $\sigma(a)$ is a rank-one projection for every $\sigma\in V$. \end{prop} Pedersen in \cite[Theorem~6.2.11]{Pedersen1979} shows that postliminal (well behaved) $C^*$-algebras can be thought of as being built out of continuous-trace $C^*$-algebras. Proposition \ref{Dixmier4_5_4} shows that there is a natural generalisation of continuous-trace $C^*$-algebras: \begin{definition}[{\cite[p.~92]{Archbold-Somerset1993}}]\index{Fell algebra} A $C^*$-algebra $A$ is {\em Fell} if for every $\pi\in\widehat{A}$ there exist $a\in A^+$ and a neighbourhood $V$ of $\pi$ such that $\sigma(a)$ is a rank-one projection for every $\sigma\in V$. \end{definition} In \cite[Theorem~3.3]{anHuef-Kumjian-Sims2011} it is shown by an Huef, Kumjian and Sims that the Fell $C^*$-algebras are precisely the Type $\mathrm{I}_0$ $C^*$-algebras from \cite[p.~191]{Pedersen1979}. In Section \ref{sec_upper_lower_multiplicities} we will introduce the upper and lower multiplicity numbers of the irreducible representations of a $C^*$-algebra. Archbold in \cite[Theorem~4.6]{Archbold1994} showed that a $C^*$-algebra is Fell if and only if the upper multiplicity number of every irreducible representation is 1. In \cite[Theorem~2.6]{Archbold-Somerset-Spielberg1997}, Archbold, Somerset and Spielberg showed that a $C^*$-algebra has bounded trace if and only if the upper multiplicity number of every irreducible representation is finite, so it follows that every Fell $C^*$-algebra has bounded trace. The following theorems are by Clark, an Huef, Muhly and Williams. \begin{theorem}[{\cite[Theorem~4.4]{Clark-anHuef2008}}]\label{thm_groupoid_algebra_bounded_trace} Suppose $G$ is a second countable, locally compact, Hausdorff, principal groupoid with a Haar system $\lambda$. Then $G$ is integrable if and only if $C^*(G,\lambda)$ has bounded trace. \end{theorem} \begin{theorem}[{\cite[Theorem~7.9]{Clark2007}}]\label{thm_groupoid_algebra_Fell} Suppose $G$ is a second countable, locally compact, Hausdorff principal groupoid with a Haar system $\lambda$. Then $G$ is Cartan if and only if $C^*(G,\lambda)$ is Fell. 
\end{theorem} \begin{theorem}[{\cite[Theorem~2.3]{Muhly-Williams1990}}]\label{thm_groupoid_algebra_continuous_trace} Suppose $G$ is a second countable, locally compact, Hausdorff principal groupoid with a Haar system $\lambda$. Then $G$ is proper if and only if $C^*(G,\lambda)$ has continuous trace. \end{theorem} \chapter{An introduction to graph algebras}\label{chapter_graph_algebra_intro} In this chapter we will provide a brief overview of the theory of graph algebras, concentrating on the components of the theory relevant to the results in the following chapters. For a broader and more in-depth introduction to graph algebras, see Raeburn's book \cite{Raeburn2005}. Section \ref{section_algebras_of_directed_graphs} provides a brief overview of the theory of the $C^*$-algebras associated to directed graphs. This includes the definition of the Cuntz-Krieger $C^*$-algebra of a row-finite directed graph, the gauge-invariant uniqueness theorem, the Cuntz-Krieger uniqueness theorem and the desourcification of a row-finite directed graph. Section \ref{sec_higher-rank_graphs} provides an overview of the theory of the $C^*$-algebras associated to higher-rank graphs. This includes the definition of a higher-rank graph, the path groupoid of a row-finite higher-rank graph, and the Cuntz-Krieger $C^*$-algebra of a row-finite higher-rank graph without sources. The Cuntz-Krieger $C^*$-algebras of row-finite higher-rank graphs that may have sources are trickier and will be covered in Section \ref{sec_higher-rank_graphs_2}. The desourcification of a higher-rank graph is an important concept in this thesis; however, it is not necessary for readers interested only in groupoids, so the definition of the desourcification of a higher-rank graph will instead be introduced in Section \ref{sec_boundary_paths_and_desourcification} of the following chapter before its use in Section \ref{sec_desourcification}. \section{The Cuntz-Krieger \texorpdfstring{$C^*$}{\it C*}-algebras of directed graphs}\label{section_algebras_of_directed_graphs} Recall from Section \ref{sec_path_groupoid} the definition of a directed graph and the definition and composition of the associated finite and infinite paths. Also recall that a directed graph is row-finite if $r^{-1}(v)$ is finite for every vertex $v$. For $n\in\mathbb N$, let $E^n$ denote the set of all paths of length $n$. For any $A\subset E^*$ and $X\subset E^*\cup E^\infty$, define \[AX:=\braces{\alpha x:\alpha\in A, x\in X, s(\alpha)=r(x)}.\] If in addition $\alpha\in E^*$, define $\alpha X:=\braces{\alpha}X$, noting that this is consistent with the definition of $\alpha E^\infty$ on page \pageref{def_alphaEinfty}. The following definition from Raeburn's book \cite{Raeburn2005} is based on a definition in Kumjian, Pask and Raeburn's \cite{Kumjian-Pask-Raeburn1998}. \needspace{7\baselineskip} \begin{definition}\label{def_directed_graph_algebra} Let $E$ be a row-finite directed graph. A {\em Cuntz-Krieger $E$-family $\braces{S,P}$} consists of a set $\braces{P_v:v\in E^0}$ of mutually orthogonal projections and a set $\braces{S_f:f\in E^1}$ of partial isometries satisfying \begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}} \item\label{def_directed_graph_algebra_1} $S_f^*S_f=P_{s(f)}$ for all $f\in E^1$; and \item\label{def_directed_graph_algebra_2} $P_v=\sum_{f\in vE^1}S_fS_f^*$ for every $v\in E^0$ that is not a source.
\end{enumerate} The {\em Cuntz-Krieger directed-graph $C^*$-algebra}, denoted $C^*(E)$, is defined to be the universal $C^*$-algebra generated by a Cuntz-Krieger $E$-family $\braces{s,p}$. \end{definition} The existence of such a universal $C^*$-algebra is guaranteed by \cite[Proposition~1.21]{Raeburn2005}, which tells us that if $\{T,Q\}$ is a Cuntz-Krieger $E$-family in a $C^*$-algebra $B$, then there is a $\ast$-homomorphism $\pi_{T,Q}:C^*(E)\rightarrow B$ such that $\pi_{T,Q}(s_f)=T_f$ for every $f\in E^1$ and $\pi_{T,Q}(p_v)=Q_v$ for every $v\in E^0$. We will now state the two standard uniqueness theorems for $C^*(E)$ before applying the first of these to the $C^*$-algebras of the path groupoids from Section \ref{sec_path_groupoid}. The first of these uniqueness theorems is Bates, Pask, Raeburn and Szyma{\'n}ski's adaptation \cite[Theorem~2.1]{Bates-Pask-Raeburn-Szymanski2000} to directed graphs of an Huef and Raeburn's \cite[Theorem~2.3]{anHuef-Raeburn1997}. \index{gauge-invariant uniqueness theorem} \begin{theorem}[The gauge-invariant uniqueness theorem]\label{thm_gauge_invariant_uniqueness} Let $E$ be a row-finite directed graph, and suppose that $\{T,Q\}$ is a Cuntz-Krieger $E$-family in a $C^*$-algebra $B$ with each $Q_v\ne 0$. If there is a continuous action $\beta:\mathbb T\rightarrow\mathrm{Aut}\, B$ such that $\beta_z(T_f)=zT_f$ for every $f\in E^1$ and $\beta_z(Q_v)=Q_v$ for every $v\in E^0$, then $\pi_{T,Q}$ is an isomorphism of $C^*(E)$ onto a $C^*$-subalgebra of $B$ \end{theorem} The next theorem is Bates, Pask, Raeburn and Szyma{\'n}ski's refinement \cite[Theorem~3.1]{Bates-Pask-Raeburn-Szymanski2000} of Kumjian, Pask and Raeburn's \cite[Theorem~3.7]{Kumjian-Pask-Raeburn1998} version of the Cuntz-Krieger uniqueness theorem. An {\em entry}\index{entry!to a cycle} to a cycle $\alpha$ is an edge $f$ with $r(f)=r(\alpha_i)$ and $f\ne\alpha_i$ for some $i\leqslant \lvert\alpha\rvert$. \index{Cuntz-Krieger uniqueness theorem} \begin{theorem}[The Cuntz-Krieger uniqueness theorem] Suppose $E$ is a row-finite directed graph in which every cycle has an entry, and $\{T,Q\}$ is a Cuntz-Krieger $E$-family in a $C^*$-algebra $B$ such that $Q_v\ne 0$ for every $v\in E^0$. Then $\pi_{T,Q}$ is an isomorphism of $C^*(E)$ onto a $C^*$-subalgebra of $B$. \end{theorem} A {\em source}\index{source!in a directed graph} in a directed graph $E$ is a vertex $v\in E^0$ with $r^{-1}(v)=\emptyset$. Recall the definition of the path groupoid $G_E$ from Section \ref{sec_path_groupoid}. \begin{prop}\label{prop_algebras_isomorphic} Suppose $E$ is a row-finite directed graph without sources. Then $C^*(E)$ is isomorphic to $C^*(G_E)$. \begin{proof} By \cite[Proposition~4.1]{kprr1997} there is a Cuntz-Krieger $E$-family $\{T,Q\}$ such that $C^*(G_E)$ is generated by $\{T,Q\}$ and for each $f\in E^1$, the partial isometry $T_f$ is the characteristic function $1_{Z(f,s(f))}$ of $Z\big(f,s(f)\big)$. In the proof of \cite[Proposition~4.1]{kprr1997} it is shown that $T_fT_f^*=1_{Z(f,f)}$ for every $f\in E^1$, so each $Q_v$ is non-zero by Definition \ref{def_directed_graph_algebra}\eqref{def_directed_graph_algebra_2} since $E$ has no sources. Since $C^*(G_E)$ is generated by $\{T,Q\}$, the map $\pi_{T,Q}$ maps $C^*(E)$ onto $C^*(G_E)$. By \cite[Corollary~4.8]{kprr1997} there is a continuous action $\beta:\mathbb T\rightarrow\mathrm{Aut}\, C^*(G_E)$ such that $\beta_z(T_e)=zT_e$ for every $e\in E^1$ and $\beta_z(Q_v)=Q_v$ for every $v\in E^0$. 
Then $C^*(E)$ is isomorphic to $C^*(G_E)$ by Theorem \ref{thm_gauge_invariant_uniqueness}. \end{proof} \end{prop} The next example shows that $C^*(E)$ may not be isomorphic to $C^*(G_E)$ when $E$ has a source. \begin{example}\label{example_directed_graph_algebras_not_isomorphic} Suppose $E$ is the following directed graph. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \clip (-6.2em,-1.1em) rectangle (1.2em,1.2em); \node (u) at (-3em, 0em) {}; \node (v) at (1em,0em) {}; \node [anchor=center] at (v) {$v$}; \fill[black] (u) circle (0.15em); \draw [<-, black] (u) to node[above=-0.2em] {$f$} (v); \draw [<-,black] (u) .. controls (-7em,3em) and (-7em,-3em) .. (u); \end{tikzpicture} \end{center} Then $C^*(G_E)$ is liminal while $C^*(E)$ is not. (This is proved in Example \ref{example_directed_graph_algebras_not_isomorphic_proof} on page \pageref{example_directed_graph_algebras_not_isomorphic_proof}.) \end{example} It should be noted that in \cite{Paterson2002}, Paterson shows how to construct a groupoid from an arbitrary directed graph such that the groupoid $C^*$-algebra is isomorphic to the directed graph $C^*$-algebra. While Paterson's groupoids have the benefit of this additional generality, we focus here on path groupoids due to their relative simplicity. The key point about directed-graph $C^*$-algebras is that many properties of the $C^*$-algebra are reflected in the directed graph. Recall that Lemma \ref{prop_principal_iff_no_cycles} says that if $E$ is a row-finite directed graph, then the path groupoid $G_E$ is principal if and only if $E$ has no cycles. Kumjian, Pask and Raeburn showed in \cite{Kumjian-Pask-Raeburn1998} that this `no cycles' condition is equivalent to $C^*(E)$ being approximately finite-dimensional: \begin{theorem}[{\cite[Theorem~2.4]{Kumjian-Pask-Raeburn1998}}] Suppose $E$ is a row-finite directed graph. Then $C^*(E)$ is approximately finite-dimensional (AF) if and only if $E$ has no cycles. \end{theorem} Before providing another example of how a property of a directed-graph Cuntz-Krieger $C^*$-algebra is reflected in the directed graph, we will introduce some standard definitions. For a directed graph $E$, let $E^{\leqslant\infty}$\notationindex{Eleinfty@$E^{\leqslant\infty}$} be the union of $E^\infty$ with the set of all $x\in E^*$ where $s(x)$ is a source. For $x\in E^*\cup E^\infty$ define $x(0):=r(x)$ and for $p\in\mathbb P$ with $p\leqslant |x|$, define \notationindex{XP@$x(p)$}$x(p):=s(x_p)$. A directed graph $E$ is {\em cofinal}\index{cofinal directed graph} if for every $x\in E^{\leqslant\infty}$ and $v\in E^0$ there exists $\alpha\in vE^*$ such that $s(\alpha)$ is in the path $x$ (or, in other words, $s(\alpha)=x(p)$ for some $p\leqslant \lvert x\rvert$). \needspace{4\baselineskip} \begin{example}[{\cite[Example~4.1]{Raeburn2005}}] The directed graph \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \clip (-6.2em,-1.1em) rectangle (1.2em,1.2em); \node (u) at (-3em, 0em) {}; \node (v) at (1em,0em) {}; \fill[black] (u) circle (0.15em); \fill[black] (v) circle (0.15em); \draw [<-, black] (v) to node[above=-0.2em] {} (u); \draw [<-,black] (u) .. controls (-7em,3em) and (-7em,-3em) ..
(u); \end{tikzpicture} \end{center} is cofinal whereas the directed graph in Example \ref{example_directed_graph_algebras_not_isomorphic} is not: if $g$ denotes the cycle in Example \ref{example_directed_graph_algebras_not_isomorphic}, then for the path $ggg\cdots$ in $E^{\leqslant\infty}$, there is no path $\alpha$ with $r(\alpha)=v$ and $s(\alpha)$ in the path $ggg\cdots$. \end{example} Recall that a $C^*$-algebra $A$ is {\em simple}\index{simple $C^*$-algebra} if it has no closed ideals other than $0$ and $A$. The following result by Raeburn generalises Bates, Pask, Raeburn and Szyma{\'n}ski's \cite[Proposition~5.1]{Bates-Pask-Raeburn-Szymanski2000}, which required the directed graph to have no sources. \begin{theorem}[{\cite[Theorem~4.14]{Raeburn2005}}] Suppose $E$ is a row-finite directed graph. Then $C^*(E)$ is simple if and only if every cycle in $E$ has an entry and $E$ is cofinal. \end{theorem} Recall from Proposition \ref{prop_algebras_isomorphic} that the path groupoids can be used as a model for studying row-finite directed graphs without sources. A natural question to ask is: if these groupoids are used to establish a relationship between the row-finite directed graphs without sources and their associated $C^*$-algebras, can this relationship tell us anything about the $C^*$-algebras of directed graphs with sources? We will now describe an answer to this question provided by Bates, Pask, Raeburn and Szyma{\'n}ski's \cite[Lemma~1.2]{Bates-Pask-Raeburn-Szymanski2000}. \begin{definition} \index{00Etilde@$\widetilde{E}$}Suppose $E$ is a row-finite directed graph. The directed graph obtained by adding a head \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-2em,-0.3em) rectangle (4*\cellwidth em + 2em,0.3em); \node (x1y0) at (0 em,0 em) {$v$}; \foreach \x in {2,3,4,5} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {}; \foreach \x in {2,3,4,5} \foreach \y in {0} \fill[black] (x\x y\y) circle (0.15em); \foreach \x / \z in {1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y0) to (x\z y0); \node(endtailone) at (4*\cellwidth em + 2em,0em) {}; \draw[dotted,thick] (x5y0) -- (endtailone); \end{tikzpicture} \end{center} to every source $v$ in $E$ is called the {\em desourcification}\index{desourcification!of a directed graph} of $E$ and is denoted by $\widetilde{E}$. \end{definition} \begin{prop}[{Corollary of \cite[Lemma~1.2]{Bates-Pask-Raeburn-Szymanski2000}}] Suppose $E$ is a row-finite directed graph. Then the desourcification $\widetilde{E}$ of $E$ is a row-finite directed graph without sources and $C^*(\widetilde{E})$ is Morita equivalent to $C^*(E)$. \end{prop} The Morita equivalence that the previous proposition establishes is particularly useful because of the following theorem that combines results from Zettl in \cite{Zettl1982} and an Huef, Raeburn and Williams in \cite{anHuef-Raeburn-Williams2007}. \begin{theorem}[Results from \cite{Zettl1982,anHuef-Raeburn-Williams2007}]\label{thm_Morita_equivalence_results} Suppose $A$ and $B$ are Morita equivalent $C^*$-algebras. Then {\allowdisplaybreaks\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}} \item $A$ has continuous trace if and only if $B$ has continuous trace; \item $A$ is Fell if and only if $B$ is Fell; \item $A$ has bounded trace if and only if $B$ has bounded trace; \item $A$ is liminal if and only if $B$ is liminal; and \item $A$ is postliminal if and only if $B$ is postliminal.
\end{enumerate}} \end{theorem} \section{Higher-rank graphs and their \texorpdfstring{$C^*$}{\it C*}-algebras and groupoids}\label{sec_higher-rank_graphs} In this section we will define the notion of a higher-rank graph and the associated path groupoid before showing the relationship between higher-rank graphs and directed graphs. This section will conclude with the definition of the Cuntz-Krieger $C^*$-algebra of a row-finite higher-rank graph without sources. A {\em countable category}\index{category} $\mathcal C=(\mathcal C^0,\mathcal C^*,r_\mathcal C,s_\mathcal C,\circ,\mathrm{id})$\notationindex{C@$\mathcal C,\mathcal C^0,\mathcal C^*$} consists of two countable sets $\mathcal C^0$ (objects\index{object!in a category}) and $\mathcal C^*$ (morphisms\index{morphism!in a category}), two functions $r_\mathcal C,s_\mathcal C:\mathcal C^*\rightarrow \mathcal C^0$ (range and source maps)\index{range!in a category}\index{source!in a category}\notationindex{RC@$r_\mathcal C$}\notationindex{SC@$s_\mathcal C$}, a partially defined product (composition)\index{composition!in a category} $(f,g)\mapsto f\circ g$ from \[\braces{(f,g)\in \mathcal C^*\times\mathcal C^*:s_\mathcal C(f)=r_\mathcal C(g)}\] to $\mathcal C^*$, and an injective map $\mathrm{id}:\mathcal C^0\rightarrow \mathcal C^*$, such that the following are satisfied for $f,g,h\in\mathcal C^*$ and $v\in\mathcal C^0$: \begin{itemize} \item $r_\mathcal C(f\circ g)=r_\mathcal C(f)$ and $s_\mathcal C(f\circ g)=s_\mathcal C(g)$; \item $(f\circ g)\circ h=f\circ (g\circ h)$ when $s_\mathcal C(f)=r_\mathcal C(g)$ and $s_\mathcal C(g)=r_\mathcal C(h)$; \item $r_\mathcal C\big(\mathrm{id}(v)\big)=v=s_\mathcal C\big(\mathrm{id}(v)\big)$; and \item $\mathrm{id}(v) \circ f=f$, $g\circ\mathrm{id}(v)=g$ when $r_\mathcal C(f)=v$ and $s_\mathcal C(g)=v$. \end{itemize} Given two countable categories $\mathcal C$ and $\mathcal D$, a {\em functor}\index{functor} $F:\mathcal C\rightarrow\mathcal D$ is a pair of maps $F^0:\mathcal C^0\rightarrow \mathcal D^0$ and $F^*:\mathcal C^*\rightarrow\mathcal D^*$ that respect the range and source maps and composition\footnote{In other words, for every $g,h\in\mathcal C^*$ with $s_\mathcal C(g)=r_\mathcal C(h)$ we have $F^0\big(r_\mathcal C(g)\big)=r_\mathcal D\big(F^*(g)\big)$, $F^0\big(s_\mathcal C(g)\big)=s_\mathcal D\big(F^*(g)\big)$ and $F^*(g\circ h)=F^*(g)\circ F^*(h)$.}, and satisfy $\mathrm{id}\big(F^0(v)\big)=F^*\big(\mathrm{id}(v)\big)$ for every $v\in\mathcal C^0$. When the category is clear from the context, we write $r$ for $r_\mathcal C$, $s$ for $s_\mathcal C$ and $fg$ for $f\circ g$. \begin{example} Fix $k\in\mathbb P$. We can construct a countable category from $\mathbb N^k$: define $\mathcal N_k^0:=\braces{v}$ and $\mathcal N_k^*:=\mathbb N^k$. For each $m,n\in\mathbb N^k$ define $r(n):=v$, $s(n):=v$ and $m\circ n:=m+n$. Let $\mathrm{id}(v):=0$. Then $(\mathcal N_k^0,\mathcal N_k^*,r,s,\circ,\mathrm{id})$ is a countable category. With the exception of the morphisms this category has a very basic structure, so the category will be denoted by $\mathbb N^k$. \end{example} \begin{definition}[{\cite[Definitions~1.1]{Kumjian-Pask2000}}]\label{def_degree_map} Suppose $k\in\mathbb P$.
A {\em $k$-graph} (or {\em higher-rank graph})\index{higher-rank graph}\index{$k$-graph} is a countable category $\Lambda=(\Lambda^0,\Lambda^*,r,s,\circ,\mathrm{id})$\index{$\Lambda$} endowed with a functor $d:\Lambda\rightarrow\mathbb N^k$\index{$d:\Lambda\rightarrow\mathbb N^k$} called the {\em degree map}\label{old_def_degree_map}\index{degree map}, which satisfies the {\em factorisation property}\index{factorisation property}: for every $\alpha\in\Lambda^*$ and $m,n\in\mathbb N^k$ with $d(\alpha)=m+n$, there exist unique $\mu,\nu\in\Lambda^*$ such that $\alpha=\mu\nu$, $d(\mu)=m$ and $d(\nu)=n$. \end{definition} \begin{example}\label{example_LambdaE} For a directed graph $E$ there is a natural category $\Lambda_E$\notationindex{LambdaE@$\Lambda_E$} that can be constructed from $E$ with $E^0$ as the objects and $E^*$ as the morphisms. The map $d:E^*\rightarrow\mathbb N$ such that $d(\alpha)=|\alpha|$ for every $\alpha\in E^*$ satisfies the factorisation property, so $\Lambda_E$ equipped with $d$ is a $1$-graph. \end{example} \notationindex{MleN@$m\leqslant n$ ($m,n\in\mathbb N^k$)}Recall that for $k\in\mathbb P$ and $m,n\in\mathbb N^k$, we write $m\leqslant n$ if $m_i\leqslant n_i$ for $i=1,2,\ldots,k$. \begin{example}\label{example_Omega_k_m}\notationindex{Omegakm@$\Omega_{k,m}$} Fix $k\in\mathbb P$ and $m\in(\mathbb N\cup\braces\infty)^k$. Define $\Omega_{k,m}$ to be the category $(\mathrm{Obj}(\Omega_{k,m}),\mathrm{Mor}(\Omega_{k,m}),r,s,\mathrm{id},\circ)$ such that \begin{itemize} \item $\mathrm{Obj}(\Omega_{k,m})=\braces{p\in\mathbb N^k:p\leqslant m}$; \item $\mathrm{Mor}(\Omega_{k,m})=\braces{(p,q)\in \Omega_{k,m}^0\times \Omega_{k,m}^0:p\leqslant q}$; \item $r(p,q)=p$ and $s(p,q)=q$ for every $(p,q)\in\mathrm{Mor}(\Omega_{k,m})$; \item $\mathrm{id}(p)=(p,p)$ for every $p\in\mathrm{Obj}(\Omega_{k,m})$; and \item $(p,q)\circ(q,t)=(p,t)$ for every $(p,q),(q,t)\in\mathrm{Mor}(\Omega_{k,m})$. \end{itemize} We equip the category $\Omega_{k,m}$ with the degree map $d:\mathrm{Mor}(\Omega_{k,m})\rightarrow\mathbb N^k$ where $d(p,q)=q-p$ in order to make $\Omega_{k,m}$ a $k$-graph. If $m=(\infty)^k$, we simplify notation by writing $\Omega_k$ for $\Omega_{k,m}$. \end{example} Suppose $\Lambda$ is a $k$-graph. For any $m\in\mathbb N^k$, we define $\Lambda^m:=\braces{\alpha\in\mathrm{Mor}(\Lambda):d(\alpha)=m}$\index{$\Lambda^m$}. Recall that the map $\mathrm{id}:\mathrm{Obj}(\Lambda)\rightarrow \mathrm{Mor}(\Lambda)$ is injective. By the factorisation property $\mathrm{id}$ is a bijection onto $d^{-1}(0)=\Lambda^0$. We implicitly identify an object $v\in\mathrm{Obj}(\Lambda)$ with the associated morphism $\mathrm{id}(v)\in \Lambda^0$, so we end up considering $\Lambda^0$ to be $\mathrm{Obj}(\Lambda)$. We simplify notation further by writing $\Lambda$ for $\mathrm{Mor}(\Lambda)$. A {\em $k$-coloured graph}\index{coloured graph}\index{k-coloured graph@$k$-coloured graph} $E=(E^0,E^1,r,s,c)$ is a directed graph $(E^0,E^1,r,s)$ with a {\em colour map}\index{colour map} $c:E^1\rightarrow\{c_1,c_2,\ldots,c_k\}$\index{$c:E^1\rightarrow\{c_1,c_2,\ldots,c_k\}$} for some distinct $c_1,c_2,\ldots,c_k$. We denote the usual basis for $\mathbb N^k$ by $\braces{e_i}$\notationindex{EzI@$e_i$}. Suppose $\Lambda$ is a $k$-graph. The {\em skeleton}\index{skeleton} of $\Lambda$ is the $k$-coloured graph $E=(E^0,E^1,r,s,c)$ where $E^0=\Lambda^0$, $E^1=\cup_{i=1}^k \Lambda^{e_i}$, range and source are inherited from $\Lambda$, and for each $i$ ($1\leqslant i\leqslant k$) and $f\in\Lambda^{e_i}$, $c(f):=c_i$.
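Before turning to skeletons, it may help to check the factorisation property directly for the $k$-graphs $\Omega_{k,m}$; the following short computation is ours and is included only as a sanity check on the definitions above. Fix a morphism $(p,q)$ in $\Omega_{k,m}$ and $a,b\in\mathbb N^k$ with $d(p,q)=q-p=a+b$. Any factorisation $(p,q)=(p,t)\circ(t,q)$ with $d(p,t)=a$ forces the middle object to be $t=p+a$, so
\[
(p,q)=(p,p+a)\circ(p+a,q),\qquad d(p,p+a)=a,\quad d(p+a,q)=b,
\]
is the unique factorisation of $(p,q)$ into morphisms of degrees $a$ and $b$, as required by Definition \ref{def_degree_map}.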
\begin{example} The skeletons of the $k$-graphs $\Omega_{k,m}$ are relatively easy to picture. For example, the skeleton of the $2$-graph $\Omega_{2,(3,2)}$ can be illustrated by the following picture. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.9em,-0.4em) rectangle (3*\cellwidth em + 0.9em,6.4em); \foreach \x in {0,1,2,3} \foreach \y in {0,1,2} \node (x\x y\y) at (\cellwidth*\x em,3*\y em) {$\scriptstyle (\x,\y)$}; \foreach \x / \z in {0/1,1/2,2/3} \foreach \y in {0,1,2} \draw[black,<-] (x\x y\y) to (x\z y\y); \foreach \x in {0,1,2,3} \foreach \y / \z in {0/1,1/2} \draw[dashed,black,<-] (x\x y\y) to (x\x y\z); \end{tikzpicture} \end{center} \end{example} A {\em $k$-graph morphism}\index{$k$-graph morphism} is a degree-preserving functor. A consequence of the factorisation property is that if $\Lambda$ is a $k$-graph and $\alpha\in\Lambda^m$ for some $m\in\mathbb N^k$, then there exists a unique $k$-graph morphism $x_\alpha:\Omega_{k,m}\rightarrow\Lambda$ such that $x_\alpha(0,m)=\alpha$. It follows that for any $m\in\mathbb N^k$ we can identify $\Lambda^m$ with \[ \braces{x:\Omega_{k,m}\rightarrow\Lambda:x\text{ is a $k$-graph morphism}}. \] When $\alpha\in\Lambda$ and $p,q\in\mathbb N^k$ with $p\leqslant q\leqslant d(\alpha)$, we write $\alpha(p,q)$ for $x_\alpha(p,q)$\index{$\alpha(p,q)$ where $\alpha\in\Lambda$} and $\alpha(p)$\index{$\alpha(p)$ where $\alpha\in\Lambda$} for $x_\alpha(p)$. Kumjian and Pask in \cite[p. 7]{Kumjian-Pask2000} defined the {\em infinite path space}\index{infinite path space} of a $k$-graph $\Lambda$ to be the set \[ \Lambda^\infty=\braces{x:\Omega_k\rightarrow\Lambda: x\text{ is a $k$-graph morphism}}.\index{$\Lambda^\infty$} \] For each $x\in\Lambda^\infty$ define $r(x):=x(0)$. For each $p\in\mathbb N^k$, define $\sigma^p:\Lambda^\infty\rightarrow\Lambda^\infty$\index{$\sigma^p(x)$} by $\sigma^p(x)(m,n)=x(m+p,n+p)$\label{def_shift_map} for $x\in\Lambda^\infty$ and $(m,n)\in\Omega_k$. In \cite{Kumjian-Pask2000}, Kumjian and Pask have the standing assumption that each higher-rank graph is row-finite and has no sources (these concepts are defined immediately following Proposition \ref{prop_finite_infinite_composition}). These assumptions are not used in the proof of the following result from \cite{Kumjian-Pask2000}, so we do not impose them here. \begin{prop}[{\cite[Proposition~2.3]{Kumjian-Pask2000}}]\label{prop_finite_infinite_composition} Suppose $\Lambda$ is a $k$-graph. For any $\alpha\in\Lambda$ and $x\in\Lambda^\infty$ with $r(x)=s(\alpha)$, there is a unique $y\in\Lambda^\infty$ such that $x=\sigma^{d(\alpha)}(y)$ and $\alpha=y\big(0,d(\alpha)\big)$. \end{prop} For any $\alpha$, $x$ and $y$ as in Proposition \ref{prop_finite_infinite_composition}, we write $\alpha x$\notationindex{ALPHAX@$\alpha x$} for $y$. For $\alpha\in\Lambda$ and $S\subset\Lambda\cup\Lambda^\infty$, define \[\alpha S:=\braces{\alpha x\in\Lambda\cup\Lambda^\infty:x\in S, r(x)=s(\alpha)}\index{$\alpha S$}.\] We say $\Lambda$ is {\em row-finite}\index{row-finite!higher-rank graph} if for each $v\in\Lambda^0$ and $m\in\mathbb N^k$, the set $v\Lambda^m$ is finite; a {\em source}\index{source!in a higher-rank graph} in $\Lambda$ is an element $v\in\Lambda^0$ such that $v\Lambda^{e_i}=\emptyset$ for some $i$ ($1\leqslant i\leqslant k$). It follows that $\Lambda$ has no sources if and only if $v\Lambda^m\ne\emptyset$ for all $v\in\Lambda^0$ and $m\in\mathbb N^k$.
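As a quick illustration of these last two notions (this observation is ours, included only for orientation): the $k$-graph $\Omega_k$ is row-finite and has no sources, since for every object $p\in\Omega_k^0$ and every $n\in\mathbb N^k$ the set
\[
p\,\Omega_k^n=\braces{(p,p+n)}
\]
is a singleton. On the other hand, if some coordinate $m_i$ of $m$ is finite, then $\Omega_{k,m}$ (which is still row-finite) does have sources: any object $p$ with $p_i=m_i$ satisfies $p\,\Omega_{k,m}^{e_i}=\emptyset$ because $p+e_i\not\leqslant m$.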
Note that these notions of `row-finite' and `source' are natural generalisations of `row-finite' and `source' in the context of directed graphs; for a directed graph $E$, the $1$-graph $\Lambda_E$ has no sources if and only if $E$ has no sources and $\Lambda_E$ is row-finite if and only if $E$ is row-finite. After assuming that $\Lambda$ is row-finite, Kumjian and Pask endowed $\Lambda^\infty$ with the topology with basis $\braces{\alpha\Lambda^\infty:\alpha\in\Lambda}$ before showing in \cite[Lemma~2.6]{Kumjian-Pask2000} that $\alpha\Lambda^\infty$ is compact for every $\alpha\in\Lambda$. We now describe the higher-rank graph analogue of the notion of shift equivalence in a directed graph from \cite{kprr1997}. Suppose $\Lambda$ is a $k$-graph. For each $n\in\mathbb Z^k$ define a relation $\sim_n$\index{$\sim_n$} on $\Lambda^\infty$ by $x\sim_n y$ if there exists $p\in\mathbb N^k$ such that $\sigma^p(x)=\sigma^{p-n}(y)$. \begin{lemma} Suppose $\Lambda$ is a $k$-graph. Taken together, the relations $\sim_n$ form an equivalence relation on $\Lambda^\infty$ in that $x\sim_0 x$, $x\sim_n y$ implies $y\sim_{-n} x$, and $x\sim_n y,y\sim_m z$ implies $x\sim_{n+m} z$. \begin{proof} Suppose $x,y,z\in\Lambda^\infty$ and $n,m\in\mathbb Z^k$ satisfy $x\sim_n y$ and $y\sim_m z$. The first claim that $x\sim_0 x$ is trivial. To see $y\sim_{-n} x$, first note that since $x\sim_n y$, there exists $p\in\mathbb N^k$ such that $\sigma^p(x)=\sigma^{p-n}(y)$. Then $\sigma^{p-n}(y)=\sigma^p(x)=\sigma^{(p-n)+n}(x)$, so $y\sim_{-n} x$. To see that $x\sim_{n+m} z$, first note that since $y\sim_m z$, there exists $q\in\mathbb N^k$ such that $\sigma^q(y)=\sigma^{q-m}(z)$. Increase $p$ and $q$ if necessary so that $p-n\geqslant q$. Then $\sigma^p(x)=\sigma^{p-n}(y)=\sigma^{(p-n)-m}(z)=\sigma^{p-(n+m)}(z)$, so $x\sim_{n+m} z$. \end{proof} \end{lemma} If $x\sim_n y$ for some $x,y\in\Lambda^\infty$ and $n\in\mathbb Z^k$, we say $x$ is {\em shift equivalent}\index{shift equivalence!in a higher-rank graph} to $y$ with {\em lag}\index{lag (shift equivalence)} $n$. We write $x\sim y$ when the lag is not important so that $\sim$ is an equivalence relation. For every $x\in\Lambda^\infty$ let $[x]:=\braces{y\in\Lambda^\infty:x\sim y}$\index{$[x]\subset\Lambda^\infty$}. Unfortunately\label{shift_equivalence_different} this notion of shift equivalence is slightly different to Kumjian, Pask, Raeburn and Renault's notion in \cite{kprr1997} of shift equivalent infinite paths in directed graphs: shift equivalence with lag $n$ in the $1$-graph case is the same as shift equivalence with lag $-n$ in their case. The change has been made to simplify the description of the groupoid in \cite{Kumjian-Pask2000}. Kumjian and Pask in \cite{Kumjian-Pask2000} defined the following groupoid given a row-finite $k$-graph $\Lambda$. Let \[ G_{\Lambda}=\braces{(x,n,y)\in\Lambda^\infty\times\mathbb Z^k\times\Lambda^\infty:x\sim_n y}\index{$G_{\Lambda}$}. \] Define range and source maps $r,s:G_{\Lambda}\rightarrow G_{\Lambda}$ by $r(x,n,y)=(x,0,x)$, $s(x,n,y)=(y,0,y)$. For $(x,n,y),(y,l,z)\in G_{\Lambda}$ define $(x,n,y)(y,l,z)=(x,n+l,z)$ and set $(x,n,y)^{-1}=(y,-n,x)$; the set $G_\Lambda$ with these operations is a groupoid called the {\em path groupoid}\index{path groupoid!of a higher-rank graph} of $\Lambda$. \begin{remark}\label{remark_path_groupoid_1} For any $k$-graph $\Lambda$ there is a natural isomorphism between $\Lambda^\infty$ and $G_\Lambda^{(0)}$ given by $x\mapsto (x,0,x)$.
Recall the orbit equivalence relation on the unit space of a groupoid: for $a,b\in G_\Lambda^{(0)}$ we write $a \approx b$ if there exists $\gamma\in G_\Lambda$ such that $r(\gamma)=a$ and $s(\gamma)=b$. It follows immediately from the definition of the path groupoid that for $x,y\in\Lambda^\infty$ we have $x\sim y$ in $\Lambda^\infty$ if and only if $(x,0,x)\approx(y,0,y)$ in $G_\Lambda^{(0)}$. These relations are the same under the isomorphism between $\Lambda^\infty$ and $G_\Lambda^{(0)}$, so both equivalence relations will simply be denoted by $\sim$. Since the equivalence class of $x\in\Lambda^\infty$ under $\sim$ is denoted by $[x]$ we will denote the equivalence class of $(x,0,x)\in G_\Lambda^{(0)}$ under $\sim$ by $[(x,0,x)]$\index{$[(x,0,x)]\subset G_\Lambda^{(0)}$}. Proposition \ref{prop_path_groupoid} describes a topology on $G_\Lambda$ and a homeomorphism between $G_\Lambda^{(0)}$ and $\Lambda^\infty$. We will frequently abuse notation by implicitly using this homeomorphism. In particular the sets $\alpha\Lambda^\infty$ will often be considered to be subsets of $G_\Lambda^{(0)}$. There is only one issue when implicitly using this homeomorphism: the range and source of $x\in\Lambda^\infty$ are elements of $\Lambda^0$ whereas the range and source of $(x,0,x)\in G_\Lambda^{(0)}$ are both $(x,0,x)\in G_\Lambda^{(0)}$. We will thus be careful and use appropriate subscripts with the range and source maps when the context allows for confusion. \end{remark} Let $\Lambda$ be a $k$-graph. For $(\alpha,\beta)\in\Lambda\ast_s\Lambda$ define \[ Z(\alpha,\beta)=\braces{(\alpha x, d(\alpha)-d(\beta),\beta x)\in G_\Lambda:x\in s(\alpha)\Lambda^\infty}.\index{$Z(\alpha,\beta)$} \] \begin{prop}[{extension of \cite[Proposition~2.8]{Kumjian-Pask2000}}]\label{prop_path_groupoid} Let $\Lambda$ be a row-finite $k$-graph. The sets $\braces{Z(\mu,\nu): (\mu,\nu)\in\Lambda\ast_s\Lambda}$ form a basis for a locally compact Hausdorff topology on $G_\Lambda$. With this topology $G_\Lambda$ is a second countable, $r$-discrete groupoid in which each $Z(\mu,\nu)$ is compact and open. The map from $\Lambda^\infty$ onto $G_\Lambda^{(0)}$ given by $x\mapsto (x,0,x)$ is a homeomorphism and the counting measures form a Haar system for $G_\Lambda$. \end{prop} The only part of this result not in \cite[Proposition~2.8]{Kumjian-Pask2000} is that the counting measures form a Haar system for $G_\Lambda$. The proof of this should be essentially the same as in \cite[Proposition~2.6]{kprr1997}, where it is noted that the range and source maps are local homeomorphisms before \cite[Proposition~I.2.7, Proposition~I.2.8]{Renault1980} is used to deduce that the counting measures form a Haar system. There is, however, a problem with \cite[Proposition~I.2.7, Proposition~I.2.8]{Renault1980} (see Remark \ref{remark_problems_Renault_results}). To get around this, we can use Proposition \ref{Renault_I_2_8i_sub} with the observation in \cite[p. 511]{kprr1997} that for each $\alpha,\beta\in\Lambda$ with $s(\alpha)=s(\beta)$, the maps $r|_{Z(\alpha,\beta)}$ and $s|_{Z(\alpha,\beta)}$ are homeomorphisms onto $\alpha\Lambda^\infty$ and $\beta\Lambda^\infty$, respectively. The following is an observation by Kumjian and Pask. \begin{lemma}[{\cite[p.~8]{Kumjian-Pask2000}}]\label{path_groupoids_isomorphic} Suppose $E$ is a row-finite directed graph without sources. Then the path groupoid associated to $E$ is isomorphic to the path groupoid associated to $\Lambda_E$.
\end{lemma} \begin{definition}[{\cite[Definitions 1.5]{Kumjian-Pask2000}}]\label{def_row-finite_no_sources_algebra} Let $\Lambda$ be a row-finite $k$-graph without sources. A {\em Cuntz-Krieger $\Lambda$-family} is a collection $\braces{t_\alpha:\alpha\in\Lambda}$ of partial isometries satisfying: \begin{enumerate}\renewcommand{\theenumi}{CK\arabic{enumi}} \item$\braces{t_v:v\in\Lambda^0}$ is a family of mutually orthogonal projections; \item $t_{\mu\nu}=t_\mu t_\nu$ for all $\mu,\nu\in\Lambda$ such that $s(\mu)=r(\nu)$; \item $t_\mu^*t_\mu=t_{s(\mu)}$ for all $\mu\in\Lambda$; and \item\label{no_sources_CK4} for all $v\in\Lambda^0$ and $n\in\mathbb N^k$ we have $t_v=\sum_{\alpha\in v\Lambda^n}t_\alpha t_\alpha^*$. \end{enumerate} The {\em Cuntz-Krieger higher-rank graph $C^*$-algebra}, denoted $C^*(\Lambda)$, is defined to be the universal $C^*$-algebra generated by a Cuntz-Krieger $\Lambda$-family $\braces{s_\alpha:\alpha\in\Lambda}$. \end{definition} Kumjian and Pask showed that the Cuntz-Krieger $C^*$-algebras of row-finite higher-rank graphs without sources are a generalisation of the Cuntz-Krieger $C^*$-algebras of row-finite directed graphs without sources\footnote{This result is in turn generalised to remove the `no sources' condition in Lemma \ref{lemma_RSY_LambdaE}.}: \begin{lemma}[{\cite[Examples~1.7(i)]{Kumjian-Pask2000}}]\label{lemma_KP_LambdaE} Suppose $E$ is a row-finite directed graph without sources and let $\Lambda_E$ be the $1$-graph from Example \ref{example_LambdaE}. Then $C^*(E)$ is isomorphic to $C^*(\Lambda_E)$. \end{lemma} Kumjian and Pask also showed that the path groupoid $C^*$-algebras are faithful models for the Cuntz-Krieger $C^*$-algebras of row-finite higher-rank graphs without sources: \begin{lemma}[{\cite[Corollary~3.5]{Kumjian-Pask2000}}]\label{lemma_higher_rank_graph_algebras_isomorphic} Suppose $\Lambda$ is a row-finite $k$-graph without sources. Then $C^*(\Lambda)$ is isomorphic to $C^*(G_\Lambda)$. \end{lemma} \section{Row-finite higher-rank graphs that may have sources}\label{sec_higher-rank_graphs_2} In this section we will recall Raeburn, Sims and Yeend's construction in \cite{rsy2004} of the Cuntz-Krieger $C^*$-algebra of a row-finite higher-rank graph that may have sources. This is unnecessary for readers who are exclusively interested in groupoid results; however, results for this more general class of higher-rank graph algebra will be proved in Section \ref{sec_desourcification}. For $k\in\mathbb P$ and $m,n \in \mathbb N^k$, we denote by $m \vee n$\notationindex{MvN@$m\vee n, m\wedge n$} the coordinate-wise maximum of $m$ and $n$. In other words, if $l=m\vee n$, then $l\in\mathbb N^k$ with $l_i=\max\{m_i,n_i\}$ for each $1\leqslant i\leqslant k$. Similarly the coordinate-wise minimum of $m$ and $n$ is denoted by $m\wedge n$. Suppose $\Lambda$ is a $k$-graph. For $\mu,\nu\in\Lambda$, define \[ \Lambda^{\text{min}}(\mu,\nu)=\braces{(\alpha,\beta)\in\Lambda\times\Lambda:\mu\alpha=\nu\beta, d(\mu\alpha)=d(\mu)\vee d(\nu)}. \] The elements of $\Lambda^{\text{min}}(\mu,\nu)$ are called the {\em minimal extensions} of $(\mu,\nu)$. Suppose $v\in\Lambda^0$. A subset $D\subset v\Lambda$ of $\Lambda$ is {\em exhaustive} if for every $\mu\in v\Lambda$ there exists $\nu\in D$ such that $\Lambda^{\text{min}}(\mu,\nu)\ne\emptyset$. The set of all {\em finite exhaustive subsets} of $\Lambda$ is denoted by $\mathcal F\mathcal E(\Lambda)$, and we let $v\mathcal F\mathcal E(\Lambda)$ denote the set $\braces{D\in\mathcal F\mathcal E(\Lambda):D\subset v\Lambda}$.
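To get a feel for these definitions, consider the $1$-graph case; the following computation is ours and is only meant to orient the reader. Suppose $\Lambda$ is a $1$-graph and $\mu,\nu\in\Lambda$ with $d(\mu)\geqslant d(\nu)$. Since $d(\mu)\vee d(\nu)=d(\mu)$, any $(\alpha,\beta)\in\Lambda^{\text{min}}(\mu,\nu)$ has $d(\alpha)=0$, so $\alpha=s(\mu)$ and $\mu=\mu\alpha=\nu\beta$. Hence
\[
\Lambda^{\text{min}}(\mu,\nu)=
\begin{cases}
\braces{\big(s(\mu),\mu(d(\nu),d(\mu))\big)}&\text{if }\nu=\mu\big(0,d(\nu)\big),\\
\emptyset&\text{otherwise.}
\end{cases}
\]
Moreover, if $v\in\Lambda^0$ is not a source then $D=v\Lambda^{e_1}$ is exhaustive: for $\mu\in v\Lambda$ with $d(\mu)\geqslant 1$ the edge $\nu=\mu(0,1)$ belongs to $D$ and $\Lambda^{\text{min}}(\mu,\nu)\ne\emptyset$, while for $\mu=v$ we have $\Lambda^{\text{min}}(v,\nu)=\braces{(\nu,s(\nu))}$ for every $\nu\in D$. If $\Lambda$ is also row-finite then $D$ is finite, so $D\in v\mathcal F\mathcal E(\Lambda)$.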
The $C^*$-algebra of a row-finite higher-rank graph without sources was defined in Definition \ref{def_row-finite_no_sources_algebra}. When $\Lambda$ has sources, condition \eqref{no_sources_CK4} from Definition \ref{def_row-finite_no_sources_algebra} causes problems since $v\Lambda^n$ may be non-empty for some values of $n$ and empty for others. The following defines the Cuntz-Krieger $C^*$-algebras of row-finite higher-rank graphs that may have sources. Conditions \eqref{CK3} and \eqref{CK4} change substantially between these definitions so that the new relations are ``the right relations for generating tractable Cuntz-Krieger algebras for which a homomorphism is injective on the core if and only if it is nonzero at each vertex projection'' \cite[Remark 2.6]{rsy2004}. \begin{definition}[{\cite[Definition~2.5]{rsy2004}}\footnote{This is presented in \cite{rsy2004} for the more general `finitely aligned' class of higher-rank graphs.}]\label{def_finitely-aligned_algebra} Let $\Lambda$ be a row-finite $k$-graph. A {\em Cuntz-Krieger $\Lambda$-family} is a collection $\braces{t_\alpha:\alpha\in\Lambda}$ of partial isometries satisfying: \begin{enumerate}\renewcommand{\theenumi}{CK\arabic{enumi}} \item$\braces{t_v:v\in\Lambda^0}$ is a family of mutually orthogonal projections; \item $t_{\mu\nu}=t_\mu t_\nu$ for all $\mu,\nu\in\Lambda$ such that $s(\mu)=r(\nu)$; \item\label{CK3} $t_\mu^*t_\nu=\sum_{(\alpha,\beta)\in\Lambda^{\mathrm{min}}(\mu,\nu)}t_\alpha t_\beta^*$ for all $\mu,\nu\in\Lambda$; and \item\label{CK4} $\prod_{\mu\in D}(t_v-t_\mu t_\mu^*)=0$ for all $v\in\Lambda^0$ and $D\in v\mathcal F\mathcal E(\Lambda)$. \end{enumerate} The {\em Cuntz-Krieger higher-rank graph $C^*$-algebra}, denoted $C^*(\Lambda)$, is defined to be the universal $C^*$-algebra generated by a Cuntz-Krieger $\Lambda$-family $\braces{s_\alpha:\alpha\in\Lambda}$. \end{definition} Raeburn, Sims and Yeend in \cite[Proposition~B.1]{rsy2004} show that if $\Lambda$ is a row-finite $k$-graph without sources then the Cuntz-Krieger relations from Definitions \ref{def_row-finite_no_sources_algebra} and \ref{def_finitely-aligned_algebra} coincide\footnote{They actually showed that the Cuntz-Krieger relations from Definition \ref{def_finitely-aligned_algebra} coincide with the relations from \cite[Definition~3.3]{rsy2003}; however, these in turn coincide with the relations in Definition \ref{def_row-finite_no_sources_algebra} when $\Lambda$ has no sources.}, so that the definitions of $C^*(\Lambda)$ in Definitions \ref{def_row-finite_no_sources_algebra} and \ref{def_finitely-aligned_algebra} are consistent. The next lemma generalises Lemma \ref{lemma_KP_LambdaE}. \begin{lemma}[{\cite[Proposition~B.1]{rsy2004}}]\label{lemma_RSY_LambdaE} Suppose $E$ is a row-finite directed graph and let $\Lambda_E$ be the $1$-graph from Example \ref{example_LambdaE}. Then $C^*(E)$ is isomorphic to $C^*(\Lambda_E)$. \end{lemma} The following example shows that it is possible for $C^*(G_\Lambda)$ to be substantially different to $C^*(\Lambda)$ when $\Lambda$ has sources. \begin{example}\label{example_groupoid_model_requires_no_sources} Suppose $\Lambda$ is the $1$-graph with the directed graph from Example \ref{example_directed_graph_algebras_not_isomorphic} as the skeleton. Then $C^*(G_\Lambda)$ is liminal while $C^*(\Lambda)$ is not.
\begin{proof} In Example \ref{example_directed_graph_algebras_not_isomorphic_proof} it is shown that if $E$ is the directed graph from Example \ref{example_directed_graph_algebras_not_isomorphic}, then $C^*(G_E)$ is liminal while $C^*(E)$ is not. The result follows since $G_E\cong G_\Lambda$ by Lemma \ref{path_groupoids_isomorphic} and $C^*(E)\cong C^*(\Lambda)$ by Lemma \ref{lemma_RSY_LambdaE}. \end{proof} \end{example} Raeburn, Sims and Yeend in \cite{rsy2004} describe versions of the gauge-invariant uniqueness theorem and the Cuntz-Krieger uniqueness theorem for row-finite higher-rank graphs that may have sources. \chapter{Categorising higher-rank graphs using the path groupoid}\label{chapter_categorising_higher-rank_graphs} \section{Liminal and postliminal higher-rank graph algebras}\label{sec_liminal_postliminal} This section establishes characterisations of the row-finite higher-rank graphs without sources that have liminal and postliminal $C^*$-algebras. In Section \ref{sec_desourcification} we will relax the hypothesis to permit higher-rank graphs with sources. A characterisation of the directed graphs with liminal $C^*$-algebras has been developed by Ephrem in \cite[Theorem~5.5]{Ephrem2004} and characterisations of the directed graphs with postliminal $C^*$-algebras have been developed by Ephrem in \cite[Theorem~7.3]{Ephrem2004} and by Deicke, Hong and Szyma\'{n}ski in \cite[Theorem~2.1]{dhs2003}. To achieve \cite[Theorem~5.5]{Ephrem2004}, Ephrem began by developing a characterisation of the row-finite directed graphs without sources that have liminal $C^*$-algebras and then used the Drinen-Tomforde desingularisation\index{Drinen-Tomforde desingularisation} (see \cite{Drinen-Tomforde2005}) to generalise this result to the arbitrary directed graph case. To characterise the higher-rank graphs with liminal $C^*$-algebras we follow a similar approach: Theorem \ref{k-graph_liminal_thm1} characterises the row-finite $k$-graphs without sources and with liminal $C^*$-algebras and Theorem \ref{k-graph_liminal_thm2} generalises this characterisation to row-finite $k$-graphs that may have sources. The generalisation process uses the Farthing-Webster higher-rank graph desourcification that was developed by Webster in \cite{Webster2011} based on Farthing's \cite{Farthing2008} (see Section \ref{sec_boundary_paths_and_desourcification}). At the time of writing no full desingularisation process exists for higher-rank graphs so our theorems will only apply to row-finite higher-rank graphs. Ephrem's characterisation \cite[Theorem~7.3]{Ephrem2004} of the directed graphs with postliminal $C^*$-algebras followed a similar approach to \cite[Theorem~5.5]{Ephrem2004} by first developing a characterisation for the row-finite directed graphs without sources and then using the Drinen-Tomforde desingularisation to generalise the result to remove the `row-finite' requirement and permit sources. Again we follow a similar approach: Theorem \ref{k-graph_postliminal_thm1} characterises the row-finite $k$-graphs without sources and with postliminal $C^*$-algebras and Theorem \ref{k-graph_postliminal_thm2} uses the Farthing-Webster desourcification to generalise this characterisation to row-finite $k$-graphs that may have sources.
In Section \ref{section_directed_graphs} we will show how, given a row-finite directed graph $E$, we may apply Theorems \ref{k-graph_liminal_thm1}, \ref{k-graph_postliminal_thm1}, \ref{k-graph_liminal_thm2} and \ref{k-graph_postliminal_thm2} to the $1$-graph $\Lambda_E$ to obtain characterisations of the row-finite directed graphs with liminal and postliminal $C^*$-algebras. We will also show how these characterisations coincide with Ephrem's from \cite{Ephrem2004}. \begin{definition}\label{def_frequently_divertable_infinite_paths}\index{frequently divertable!infinite paths in a $k$-graph} Suppose $\Lambda$ is a $k$-graph and $x,y\in\Lambda^\infty$. We say {\em $x$ is frequently divertable to $[y]$} if for every $n\in\mathbb N^k$ there exists $z\in x(n)\Lambda^\infty$ such that $z$ is shift equivalent to $y$. \end{definition} Suppose $\Lambda$ is a $1$-graph, $v\in\Lambda^0$ and $f\in\Lambda^{e_1}$. If there is exactly one path $x$ in $\Lambda^\infty$ with $x(0)=v$ and $x(n,n+1)=f$ for some $n\in\mathbb N$, we denote $x$ by $\uniqueinfinitepath{v,f}$\index{$\uniqueinfinitepath{v,f}\in\Lambda^\infty$}. \begin{example}\label{example_frequently_divertable} Suppose $\Lambda$ is the $1$-graph with skeleton described by the following picture. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.7em,-3.2em) rectangle (4*\cellwidth em + 3.5em,0.2em); \node(endtailone) at (4*\cellwidth em + 3.4em,0em) {}; \node(startendtail) at (4*\cellwidth em + 1.5em,0em) {}; \draw [dotted, thick] (startendtail) -- (endtailone); \node(endtail2) at (4*\cellwidth em + 3.4em,-3em) {}; \node(startendtail2) at (4*\cellwidth em + 1.5em,-3em) {}; \draw [dotted, thick] (startendtail2) -- (endtail2); \clip (-0.7em,-3.2em) rectangle (4*\cellwidth em + 1.5em,0.2em); \foreach \x in {0,1,2,3,4,5,6} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em,-3*\y em) {$\scriptstyle v_\x$}; \foreach \x in {0,1,2,3,4,5,6} \foreach \y in {1} \node (x\x y\y) at (\cellwidth*\x em,-3*\y em) {$\scriptstyle u_\x$}; \foreach \x / \z in {0/1,1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y0) to (x\z y0); \foreach \x / \z in {0/1,1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y1) to (x\z y1); \foreach \x / \z in {0,1,2,3,4} \draw[black,<-] (x\x y0) to node[anchor=west] {$\scriptstyle f_\x$} (x\x y1) ; \end{tikzpicture} \end{center} Let $x,y$ be the paths in $\Lambda^\infty$ such that $x(n)=v_n$ and $y(n)=u_n$ for all $n\in\mathbb N$. Then $x$ is frequently divertable to $[y]$ but $y$ is not frequently divertable to $[x]$. To see that $x$ is frequently divertable to $[y]$, fix $n\in\mathbb N$ and note that $\uniqueinfinitepath{v_n,f_n}\in x(n)\Lambda^\infty\cap [y]$. To see that $y$ is not frequently divertable to $[x]$, observe that the only path in $y(0)\Lambda^\infty=u_0\Lambda^\infty$ is $y$, which is not shift equivalent to $x$. \end{example} \begin{lemma}\label{k-graph_closure_lemma} Suppose $\Lambda$ is a row-finite $k$-graph and that $x,y\in \Lambda^\infty$. Then $x$ is frequently divertable to $[y]$ if and only if $x\in \overline{[y]}$. \begin{proof} Suppose $x\in\overline{[y]}$ and fix $n\in\mathbb N^k$. Then every open neighbourhood $U$ of $x$ intersects $[y]$. Since $x(0,n)\Lambda^\infty$ is an open neighbourhood of $x$, it follows that $x(0,n)\Lambda^\infty$ intersects $[y]$. Suppose $z\in x(0,n)\Lambda^\infty\cap [y]$. Then $\sigma^n(z)\in x(n)\Lambda^\infty$ and $\sigma^n(z)$ is shift equivalent to $y$. 
Since $n$ was fixed arbitrarily, it follows that $x$ is frequently divertable to $[y]$. Now suppose that $x$ is frequently divertable to $[y]$. Let $U$ be an open neighbourhood of $x$ in $\Lambda^\infty$. Since the sets $\alpha\Lambda^\infty$ form a basis of open sets for $\Lambda^\infty$, there exists $m\in\mathbb N^k$ such that $x(0,m)\Lambda^\infty\subset U$. Since $x$ is frequently divertable to $[y]$, there exists $z\in x(m)\Lambda^\infty$ with $z$ shift equivalent to $y$. Thus \[ x(0,m)z\in x(0,m)\Lambda^\infty\subset U, \] and as $z\in [y]$, we know that $x(0,m)z\in [y]$, so $[y]$ intersects $U$. It follows that $x\in\overline{[y]}$ since $U$ is an arbitrarily fixed neighbourhood of $x$. \end{proof} \end{lemma} \begin{theorem}\label{k-graph_liminal_thm1} Suppose $\Lambda$ is a row-finite $k$-graph without sources and let $G$ be the associated path groupoid. Then the following are equivalent: \begin{enumerate} \item\label{k-graph_liminal_thm1_1} every orbit in $G^{(0)}$ is closed; \item\label{k-graph_liminal_thm1_2} for every $x\in\Lambda^\infty$, every path in $\Lambda^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$; and \item\label{k-graph_liminal_thm1_3} $C^*(\Lambda)\cong C^*(G)$ is liminal. \end{enumerate} \begin{proof} \eqref{k-graph_liminal_thm1_1} $\implies$ \eqref{k-graph_liminal_thm1_2}. Suppose \eqref{k-graph_liminal_thm1_1}, fix $x\in\Lambda^\infty$ and suppose $y$ is frequently divertable to $[x]$. Then $y\in\overline{[x]}$ by Lemma~\ref{k-graph_closure_lemma}. Since $[x]$ is closed, $[x]=\overline{[x]}$, so $y\in [x]$, establishing \eqref{k-graph_liminal_thm1_2}. \eqref{k-graph_liminal_thm1_2} $\implies$ \eqref{k-graph_liminal_thm1_1}. Fix $x\in\Lambda^\infty$ and choose $y\in\overline{[x]}$. Then $y$ is frequently divertable to $[x]$ by Lemma~\ref{k-graph_closure_lemma} and it follows from \eqref{k-graph_liminal_thm1_2} that $y\in [x]$. Thus $[x]=\overline{[x]}$ and \eqref{k-graph_liminal_thm1_1} follows by considering the homeomorphism between $\Lambda^\infty$ and $G^{(0)}$ from Proposition \ref{prop_path_groupoid}. \eqref{k-graph_liminal_thm1_1} $\iff$ \eqref{k-graph_liminal_thm1_3}. Note that $C^*(G)$ is isomorphic to $C^*(\Lambda)$ by Lemma \ref{lemma_higher_rank_graph_algebras_isomorphic}. Since the stability subgroups $G|_x$ are abelian, they are amenable and $C^*(G|_x)\cong C_0(\widehat{G|_x})$ is liminal. We can now apply Theorem \ref{thm_groupoid_algebra_liminal} and Remark \ref{remark_T1_iff_orbits_closed}, which together say that $C^*(G)$ is liminal if and only if every orbit in $G^{(0)}$ is closed. \end{proof} \end{theorem} \begin{example}\label{example_not_liminal} Let $\Lambda$ be the $1$-graph from Example \ref{example_frequently_divertable}. Let $x,y$ be the associated paths in $\Lambda^\infty$, noting that $x$ is not shift equivalent to $y$. It was shown in Example \ref{example_frequently_divertable} that $x$ is frequently divertable to $[y]$, so condition \eqref{k-graph_liminal_thm1_2} from Theorem \ref{k-graph_liminal_thm1} does not hold and $C^*(\Lambda)$ is not liminal. \end{example} \begin{lemma}\label{lemma_equivalence_class_frequently_divertable} Suppose $\Lambda$ is a $k$-graph and $x\in\Lambda^\infty$. Suppose there exists $n\in\mathbb N^k$ such that every element of $x(n)\Lambda^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$. Then for every $y\in [x]$, there exists $m\in\mathbb N^k$ such that every element of $y(m)\Lambda^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$.
\begin{proof} Fix $y\in [x]$. Then $x\sim_a y$ for some $a\in\mathbb Z^k$, so there exists $l\in\mathbb N^k$ such that $\sigma^l(x)=\sigma^{l-a}(y)$. We may increase $n$ if necessary to ensure that $n\geqslant l$. We now have \[ x(n)=x\big((n-l)+l\big)=y\big((n-l)+(l-a)\big)=y(n-a) \] and the result follows since $x(n)\Lambda^\infty=y(n-a)\Lambda^\infty$. \end{proof} \end{lemma} \begin{theorem}\label{k-graph_postliminal_thm1} Suppose $\Lambda$ is a row-finite $k$-graph without sources and let $G$ be the associated path groupoid. Then the following are equivalent: \begin{enumerate} \item\label{k-graph_postliminal_thm1_1} every orbit in $G^{(0)}$ is locally closed; \item\label{k-graph_postliminal_thm1_2} for every path $x\in\Lambda^\infty$ there exists $n\in\mathbb N^k$ such that every path in $x(n)\Lambda^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$; and \item\label{k-graph_postliminal_thm1_3} $C^*(\Lambda)\cong C^*(G)$ is postliminal. \end{enumerate} \begin{proof} \eqref{k-graph_postliminal_thm1_1} $\implies$ \eqref{k-graph_postliminal_thm1_2}. Suppose \eqref{k-graph_postliminal_thm1_1} and fix $x\in\Lambda^\infty$. By \eqref{k-graph_postliminal_thm1_1} there is an open $U\subset\Lambda^\infty$ such that $[x]= U\cap\overline{[x]}$. Then $x\in U$ so by the topology on $\Lambda^\infty$ there exists $n\in\mathbb N^k$ such that $x(0,n)\Lambda^\infty\subset U$. Suppose $y$ is a path in $x(n)\Lambda^\infty$ that is frequently divertable to $[x]$. Then $x(0,n)y\in\overline{[x]}$ by Lemma \ref{k-graph_closure_lemma} so $x(0,n)y\in U \cap \overline{[x]}=[x]$ and it follows that $y\in [x]$, establishing \eqref{k-graph_postliminal_thm1_2}. \eqref{k-graph_postliminal_thm1_2} $\implies$ \eqref{k-graph_postliminal_thm1_1}. Fix $x\in\Lambda^\infty$ and suppose there exists $n\in\mathbb N^k$ as in \eqref{k-graph_postliminal_thm1_2}. It follows from Lemma \ref{lemma_equivalence_class_frequently_divertable} that for every $y\in [x]$, there exists $m^{(y)}\in\mathbb N^k$ such that every path in $y(m^{(y)})\Lambda^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$. Let $V=\cup_{y\in [x]}y(0,m^{(y)})\Lambda^\infty$ and note that $[x]\subset V$ and that every path in $V$ that is frequently divertable to $[x]$ is shift equivalent to $x$. We can now use Lemma \ref{k-graph_closure_lemma} to see that $V\cap\overline{[x]}=[x]$. Finally, note that $V$ is open since each $y(0,m^{(y)})\Lambda^\infty$ is, so $[x]$ is locally closed. \eqref{k-graph_postliminal_thm1_1} $\iff$ \eqref{k-graph_postliminal_thm1_3}. The \eqref{k-graph_postliminal_thm1_1} $\iff$ \eqref{k-graph_postliminal_thm1_3} argument from Theorem \ref{k-graph_liminal_thm1} works by using Theorem \ref{thm_groupoid_algebra_postliminal} and Corollary \ref{cor_ramsay_orbits_locally_closed} in place of Theorem \ref{thm_groupoid_algebra_liminal} and Remark \ref{remark_T1_iff_orbits_closed}. \end{proof} \end{theorem} \begin{example}\label{example_postliminal_not_liminal} Let $\Lambda$ be the $1$-graph from Example \ref{example_frequently_divertable}. Then $C^*(\Lambda)$ is postliminal. \begin{proof} Fix $a\in \Lambda^\infty$ and let $x,y$ be the paths in $\Lambda^\infty$ from Example \ref{example_frequently_divertable}. We aim to establish condition \eqref{k-graph_postliminal_thm1_2} from Theorem \ref{k-graph_postliminal_thm1}. That is, for each $a\in\Lambda^\infty$ we need to find $n\in\mathbb N$ such that every path in $a(n)\Lambda^\infty$ that is frequently divertable to $[a]$ is shift equivalent to $a$. 
We will consider three cases. The first case is when $a=\sigma^m(y)$ for some $m\in\mathbb N$. The only path in $a(0)\Lambda^\infty=u_m \Lambda^\infty$ is $a$, so $a(0) \Lambda^\infty\backslash [a]=\emptyset$, and $n=0$ will do. The second case is when $a=\sigma^m(x)$ for some $m\in\mathbb N$. Then \[a(0)\Lambda^\infty\backslash[a]=v_m \Lambda^\infty\backslash [a]=\braces{\uniqueinfinitepath{v_m,f_{m+l}}:l\in\mathbb N}\] and none of these paths are frequently divertable to $[a]$, so $n=0$ will do. The final case is when $a=\uniqueinfinitepath{v_m,f_{m+l}}$ for some $m,l\in\mathbb N$. Then $a(l,l+1)=f_{m+l}$ and by observing the graph it can be seen that $b:=\sigma^{l+1}(a)$ is the only path in $a(l+1)\Lambda^\infty$. Since $b$ is shift equivalent to $a$, $a(l+1)\Lambda^\infty\backslash [a]=\emptyset$, so $n=l+1$ will do. It follows from our consideration of these three cases that condition \eqref{k-graph_postliminal_thm1_2} from Theorem \ref{k-graph_postliminal_thm1} holds and so $C^*(\Lambda)$ is postliminal. \end{proof} \end{example} \begin{example} Suppose $\Lambda$ is the $1$-graph with skeleton described by the following picture. Then $C^*(\Lambda)$ is not postliminal. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.7em,-3.2em) rectangle (4*\cellwidth em + 3.5em,0.2em); \node(endtailone) at (4*\cellwidth em + 3.4em,0em) {}; \node(startendtail) at (4*\cellwidth em + 1.5em,0em) {}; \draw [dotted, thick] (startendtail) -- (endtailone); \node(endtail2) at (4*\cellwidth em + 3.4em,-3em) {}; \node(startendtail2) at (4*\cellwidth em + 1.5em,-3em) {}; \draw[dotted, thick] (startendtail2) -- (endtail2); \clip (-0.7em,-3.2em) rectangle (4*\cellwidth em + 1.5em,0.2em); \foreach \x in {0,1,2,3,4,5} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em,-3*\y em) {$\scriptstyle v_\x$}; \foreach \x in {0,1,2,3,4,5} \foreach \y in {1} \node (x\x y\y) at (\cellwidth*\x em,-3*\y em) {$\scriptstyle u_\x$}; \foreach \x / \z in {0/1,1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y0) to (x\z y0); \foreach \x / \z in {0/1,1/2,2/3,3/4,4/5} \draw[black,<-] (x\x y1) to (x\z y1); \foreach \x / \z in {0,2,4} \draw[black,<-] (x\x y0) to (x\x y1); \foreach \x / \z in {1,3} \draw[black,->] (x\x y0) to (x\x y1); \end{tikzpicture} \end{center} \begin{proof} Let $x$ and $y$ be the paths in $\Lambda^\infty$ such that $x(n)=v_n$ and $y(n)=u_n$ for all $n\in\mathbb N$. We claim that condition \eqref{k-graph_postliminal_thm1_2} from Theorem \ref{k-graph_postliminal_thm1} does not hold. To see this, note that for every $n\in\mathbb N$, there are paths in $x(n)\Lambda^\infty=v_n\Lambda^\infty$ that are shift equivalent to $y$. Each of these paths in $x(n)\Lambda^\infty\cap [y]$ are frequently divertable to $[x]$ but not shift equivalent to $x$. Thus condition \eqref{k-graph_postliminal_thm1_2} from Theorem \ref{k-graph_postliminal_thm1} does not hold and so $C^*(\Lambda)$ is not postliminal. \end{proof} \end{example} \section{Bounded-trace, Fell, and continuous-trace graph algebras}\label{sec_bded-trace_cts-trace_Fell} This section establishes characterisations of row-finite higher-rank graphs without sources that have integrable, Cartan and proper path groupoids. 
Assuming that the path groupoids are principal enables us to use the theorems in Section \ref{bounded-trace_fell_cts-trace_overview} by Clark, an Huef, Muhly and Williams to provide necessary and sufficient conditions for the $C^*$-algebras of these higher-rank graphs to have bounded trace, to be Fell and to have continuous trace. Not much generality is lost when we assume that path groupoids are principal since every integrable path groupoid is principal (see Lemma \ref{lemma_integrable_implies_principal}). A later remark, Remark \ref{remark_directed_graphs_and_cycles}, discusses what the requirement of having a principal path groupoid means in the directed graph context. \begin{lemma}\label{lemma_principal_iff_no_semi-periodic_paths} Suppose $\Lambda$ is a row-finite $k$-graph without sources. The path groupoid associated to $\Lambda$ is principal if and only if $n=0$ whenever a path in $\Lambda^\infty$ is shift equivalent to itself with lag $n$. \begin{proof} The stability subgroups in $G_\Lambda$ are $S_{(x,0,x)}=\braces{(x,n,x): x\sim_n x}$. Since a groupoid is principal if and only if all the stability subgroups are trivial, the result follows. \end{proof} \end{lemma} It follows from this lemma that $G_\Lambda$ is principal if and only if $\Lambda$ has no periodic paths (as defined in \cite{Kumjian-Pask2000}). We avoid this notion of `periodic paths' because it can be misleading when extended to boundary paths as would be required in Section \ref{sec_desourcification}. \begin{example}\label{example_2_graph_not_principal_no_cycles} Let $\Lambda$ be the unique $2$-graph with the following skeleton. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.5em,-1em) rectangle (4*\cellwidth em + 3.5em,1em); \node(endtailone) at (4*\cellwidth em + 3.4em,0em) {}; \node(startendtail) at (4*\cellwidth em + 1.5em,0em) {}; \draw[dotted, thick] (startendtail) -- (endtailone); \clip (-0.5em,-1em) rectangle (4*\cellwidth em + 1.5em,1em); \foreach \x in {1,2,3,4,5,6} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em-\cellwidth em,-3*\y em) {}; \foreach \x in {2,3,4,5,6} \foreach \y in {0} \fill[black] (x\x y\y) circle (0.15em); \node at (x1y0) {$\scriptstyle v$}; \foreach \x / \z in {1/2,2/3,3/4,4/5,5/6} \draw[black,<-, bend left] (x\x y0) to node[anchor=south] {} (x\z y0); \foreach \x / \z in {1/2,2/3,3/4,4/5,5/6} \draw[black,dashed,<-, bend right] (x\x y0) to node[anchor=south] {} (x\z y0); \end{tikzpicture} \end{center} The path groupoid associated to $\Lambda$ is not principal: suppose the morphisms in $\Lambda$ corresponding to the solid edges have degree $(1,0)$ and the morphisms corresponding to the dashed edges have degree $(0,1)$. Let $x$ be the unique path in $\Lambda^\infty$ with range $v$. Then $\sigma^{(1,0)}(x)=\sigma^{(0,1)}(x)$, so $x$ is shift equivalent to itself with lag $(1,-1)$. Lemma \ref{lemma_principal_iff_no_semi-periodic_paths} now applies to show that the path groupoid associated to $\Lambda$ is not principal. \end{example} \begin{remark}\label{remark_no_cycles_principal} A {\em cycle} in a $k$-graph is a path $\alpha\in\Lambda$ with $d(\alpha)\ne 0$ and, for distinct $m,n\leqslant d(\alpha)$, we have $\alpha(m)=\alpha(n)$ if and only if $m,n\in\{0,d(\alpha)\}$. A corollary of Lemma \ref{lemma_principal_iff_no_semi-periodic_paths} is that the path groupoid of a $1$-graph is principal if and only if the $1$-graph has no cycles.
Example \ref{example_2_graph_not_principal_no_cycles} demonstrates that the same is not true for $2$-graphs since no path in that $2$-graph has both non-zero degree and equal range and source. \end{remark} \begin{lemma}\label{lemma_integrable_implies_principal} Suppose $\Lambda$ is a row-finite $k$-graph without sources. If $G_\Lambda$ is integrable, then $G_\Lambda$ is principal. \begin{proof} Suppose $G=G_\Lambda$ is integrable and that $x\in\Lambda^\infty$ satisfies $x\sim_n x$ for some $n\in\mathbb Z^k$. Then there exists $p\in\mathbb N^k$ such that $\sigma^p(x)=\sigma^{p-n}(x)$. Fix $i\in\mathbb N$ and let $q=p\vee (p+n)\vee\cdots\vee (p+in)$. Then $q-jn\geqslant p$ for every $j\leqslant i$ so $\sigma^q(x)=\sigma^{q-n}(x)=\cdots=\sigma^{q-in}(x)$ and $x\sim_{in} x$. We now have $\{(x,jn,x):j\in\mathbb N\}\subset G_x^{r(x)\Lambda^\infty}$, but this set must be finite since $\lambda_x$ is the counting measure on $G_x$ and $G$ integrable implies $\lambda_x(G^{r(x)\Lambda^\infty})<\infty$. Thus $n=0$ and so $G$ is principal by Lemma \ref{lemma_principal_iff_no_semi-periodic_paths}. \end{proof} \end{lemma} \begin{lemma}\label{lemma_description_lambda_x_GV} Suppose $\Lambda$ is a row-finite $k$-graph without sources. If $G_\Lambda$ is principal then for every $x\in\Lambda^\infty$ and $V\subset\Lambda^\infty$, the quantity $\lambda_x(G^V)$ is the number of paths in $V$ that are shift equivalent to $x$. \begin{proof} Suppose $G=G_\Lambda$ is principal and note that $\lambda_x(G^V)$ is the number of elements in $\{(y,n)\in V\times\mathbb Z^k: y\sim_n x\}$. If $y\sim_m x$ and $y\sim_n x$ for some $m,n\in\mathbb Z^k$, then $(y,m,x)$ and $(y,n,x)$ are elements of the principal groupoid $G$ with equal range and source, so $m=n$. Then $\lambda_x(G^V)$ is the number of elements in $\{y\in V: y\sim x\}$, as required. \end{proof} \end{lemma} \begin{theorem}\label{thm_integrable_bounded-trace} Suppose $\Lambda$ is a row-finite $k$-graph without sources. The following are equivalent: \begin{enumerate} \item\label{k-thm_integrable1} $G_\Lambda$ is integrable; \item\label{k-thm_integrable2} $G_\Lambda$ is principal, and for every $v\in\Lambda^0$ there exists $M\in\mathbb N$ such that for any $x\in\Lambda^\infty$ there are at most $M$ paths in $\Lambda^\infty$ with range $v$ that are shift equivalent to $x$; and \item\label{k-thm_integrable3} $G_\Lambda$ is principal, and $C^*(\Lambda)\cong C^*(G_\Lambda)$ has bounded trace. \end{enumerate} \begin{proof} Let $G=G_\Lambda$. \eqref{k-thm_integrable1} $\implies$ \eqref{k-thm_integrable2}. Suppose \eqref{k-thm_integrable1} and note that $G$ is principal by Lemma \ref{lemma_integrable_implies_principal}. Fix $v\in\Lambda^0$. Since $G$ is integrable, $\sup_{x\in v\Lambda^\infty}\lambda_x(G^{v\Lambda^\infty})<\infty$, so there exists $M\in\mathbb N$ such that $\lambda_x(G^{v\Lambda^\infty})\leqslant M$ for every $x\in v\Lambda^\infty$. Lemma \ref{lemma_description_lambda_x_GV} tells us that the number of elements in $v\Lambda^\infty$ that are shift equivalent to $x$ is equal to $\lambda_x(G^{v\Lambda^\infty})$. We just showed this is less than or equal to $M$ when $x\in v\Lambda^\infty$.
When $x\notin v\Lambda^\infty$ the number of elements in $v\Lambda^\infty$ that are shift equivalent to $x$ is still less than or equal to $M$ since shift equivalence is an equivalence relation: if $x$ is shift equivalent to $y\in v\Lambda^\infty$, every $z\in v\Lambda^\infty$ that is shift equivalent to $x$ must also be shift equivalent to $y$, and we have already shown that there are at most $M$ of these paths, establishing \eqref{k-thm_integrable2}. \eqref{k-thm_integrable2} $\implies$ \eqref{k-thm_integrable1}. Suppose \eqref{k-thm_integrable2} and let $N$ be a compact subset of $G^{(0)}$. Since there is a natural homeomorphism between $G^{(0)}$ and $\Lambda^\infty$ given by $(x,0,x)\mapsto x$, we can also consider $N$ to be a compact subset of $\Lambda^\infty$. We will affix subscripts in an attempt to minimise potential confusion between the range map of $\Lambda^\infty$ and the range map of $G$. For every $x\in N$ the cylinder set $r_\Lambda(x)\Lambda^\infty$ is an open neighbourhood of $x$ in $\Lambda^\infty$. Then $N\subset r_\Lambda(N)\Lambda^\infty=\cup_{v\in r_\Lambda(N)}v\Lambda^\infty$ so, since $N$ is compact and each $v\Lambda^\infty$ is open, there exists a finite subset $V=\braces{v_1,\ldots,v_n}$ of $r_\Lambda(N)$ so that $N\subset V\Lambda^\infty$. For every $v\in V$ let $M_v$ be the integer $M$ as in \eqref{k-thm_integrable2}. Fix $x\in N$ and note that Lemma \ref{lemma_description_lambda_x_GV} tells us that $\lambda_x(G^{v_i\Lambda^\infty})$ is the number of elements in $v_i\Lambda^\infty$ that are shift equivalent to $x$. It follows that $\lambda_x(G^{v_i\Lambda^\infty})\leqslant M_{v_i}$ for each $1\leqslant i\leqslant n$. Now \[\lambda_x(G^N)\leqslant \lambda_x(G^{V\Lambda^\infty})=\lambda_x\Big(\bigcup_{i=1}^n G^{v_i\Lambda^\infty}\Big)\leqslant\sum_{i=1}^n\lambda_x (G^{v_i\Lambda^\infty})\leqslant \sum_{i=1}^n M_{v_i}.\] We can see that $G$ is integrable since $\sum_{i=1}^n M_{v_i}$ is independent of our choice of $x\in N$. \eqref{k-thm_integrable1} $\iff$ \eqref{k-thm_integrable3}. Lemma \ref{lemma_integrable_implies_principal} says that every integrable path groupoid must be principal. When $G$ is principal, Theorem \ref{thm_groupoid_algebra_bounded_trace} says that $C^*(G)$ has bounded trace if and only if $G$ is integrable. The result follows since $C^*(G)$ is isomorphic to $C^*(\Lambda)$ by Lemma \ref{lemma_higher_rank_graph_algebras_isomorphic}. \end{proof} \end{theorem} The directed graph in the following example occurred earlier in Example \ref{2-times_convergence_example} and was chosen so that the path groupoid would be principal, integrable and not Cartan. The search for examples of directed graphs with these properties provided the initial motivation for Chapter \ref{chapter_categorising_higher-rank_graphs}. \needspace{6\baselineskip} \begin{example}\label{example_integrable_not_cartan} Suppose $\Lambda$ is the $1$-graph with the following skeleton.
\begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-5em,-5.6em) rectangle (3*\cellwidth em + 4.3em,0.3em); \foreach \x in {0,1,2,3} \foreach \y in {0} \node (x\x y\y) at (\cellwidth*\x em,-3*\y em) {$\scriptstyle v_{\x}$}; \foreach \x in {0,1,2,3} \foreach \y in {1} \node (x\x y\y) at (\cellwidth*\x em,-3*\y em) {}; \foreach \x in {0,1,2,3} \foreach \y in {2} \node (x\x y\y) at (\cellwidth*\x em,-1.5em-1.5*\y em) {}; \foreach \x in {0,1,2,3} \foreach \y in {1,2} \fill[black] (x\x y\y) circle (0.15em); \foreach \x in {0,1,2,3} \draw [<-, bend left] (x\x y0) to node[anchor=west] {$\scriptstyle g_{\x}$} (x\x y1); \foreach \x in {0,1,2,3} \draw [<-, bend right] (x\x y0) to node[anchor=east] {$\scriptstyle f_{\x}$} (x\x y1); \foreach \x / \z in {0/1,1/2,2/3} \draw[black,<-] (x\x y0) to node[anchor=south] {} (x\z y0); \foreach \x in {0,1,2,3} \draw [<-] (x\x y1) -- (x\x y2); \foreach \x in {0,1,2,3} { \node (endtail\x) at (\cellwidth*\x em, -6em) {}; \draw [dotted, thick] (x\x y2) -- (endtail\x); } \node(endtailone) at (3*\cellwidth em + 2.3em,0em) {}; \draw [dotted, thick] (x3y0) -- (endtailone); \end{tikzpicture} \end{center} Recall that this directed graph previously appeared in Example \ref{2-times_convergence_example}, where it was shown that the path groupoid associated to this directed graph exhibits $2$-times convergence. Here we claim that the path groupoid associated to $\Lambda$ is principal and integrable and that $C^*(\Lambda)$ has bounded trace. First observe that since $\Lambda$ has no cycles, the path groupoid is principal by Remark \ref{remark_no_cycles_principal}. To see that the rest of condition \eqref{k-thm_integrable2} from Theorem \ref{thm_integrable_bounded-trace} holds, note that for any $v\in\Lambda^0$, there are at most $2$ paths in $v\Lambda^\infty$ that are shift equivalent to each other. Theorem \ref{thm_integrable_bounded-trace} now shows that the path groupoid is integrable and $C^*(\Lambda)$ has bounded trace. \end{example} We define the {\em monolithic extension} of $(\alpha,\beta)\in\Lambda\ast_s\Lambda$ by $x\in s(\alpha)(\Lambda\cup\Lambda^\infty)$ to be the pair $(\alpha x,\beta x)$ in $(\Lambda\ast_s\Lambda)\cup(\Lambda^\infty\times\Lambda^\infty)$. For $S\subset\Lambda$, define \[S\Lambda^\infty:=\bigcup_{\alpha\in S}\alpha\Lambda^\infty=\braces{\alpha x\in\Lambda^\infty: \alpha\in S, x\in s(\alpha)\Lambda^\infty},\] and note that $S\Lambda^\infty$ is compact whenever $S$ is finite. \begin{lemma}\label{lemma_compactness} Suppose $\Lambda$ is a row-finite $k$-graph without sources and suppose that the path groupoid $G$ is principal. Let $V$ be a subset of $\Lambda^0$. The following are equivalent: \begin{enumerate} \item\label{lemma_compactness_1} $G|_{V\Lambda^\infty}$ is compact; \item\label{lemma_compactness_2} there exists a finite $F\subset\Lambda$ such that, for every $x,y\in V\Lambda^\infty$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$; and \item\label{lemma_compactness_3} there exists a finite $F\subset\Lambda$ such that, for every $(\alpha,\beta)\in\Lambda\ast_s\Lambda$ with $r(\alpha),r(\beta)\in V$ and every $\gamma\in s(\alpha)\Lambda$ such that $d(\gamma)=\vee_{\eta\in F}d(\eta)$, the pair $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of a pair in $F\times F$. \end{enumerate} \begin{proof} \eqref{lemma_compactness_1}$\implies$\eqref{lemma_compactness_2}. Suppose \eqref{lemma_compactness_1}. 
For each $\phi\in G|_{V\Lambda^\infty}$ there exists $(\eta^{(\phi)},\zeta^{(\phi)})\in\Lambda\ast_s\Lambda$ such that $\phi\in Z(\eta^{(\phi)},\zeta^{(\phi)})$. Then $\braces{Z(\eta^{(\phi)},\zeta^{(\phi)}):\phi\in G|_{V\Lambda^\infty}}$ is an open cover of the compact set $G|_{V\Lambda^\infty}$ and so admits a finite subcover \[\braces{Z(\eta^{(1)},\zeta^{(1)}),Z(\eta^{(2)},\zeta^{(2)}),\ldots,Z(\eta^{(p)},\zeta^{(p)})}.\] Define $F:=\braces{\eta^{(1)},\zeta^{(1)},\eta^{(2)},\zeta^{(2)},\ldots,\eta^{(p)},\zeta^{(p)}}$ and fix $x,y\in V\Lambda^\infty$ such that $x\sim_n y$ for some $n\in\mathbb Z^k$. Then $(x,n,y)\in G|_{V\Lambda^\infty}$ so there exists $q$ such that $(x,n,y)\in Z(\eta^{(q)},\zeta^{(q)})$. It follows that $(x,y)$ is a monolithic extension of $(\eta^{(q)},\zeta^{(q)})\in F\times F$, establishing \eqref{lemma_compactness_2}. \eqref{lemma_compactness_2}$\implies$\eqref{lemma_compactness_3}. Suppose $F$ is a finite subset of $\Lambda$ as in \eqref{lemma_compactness_2}. Fix $(\alpha,\beta)\in\Lambda\ast_s\Lambda$ with $r(\alpha),r(\beta)\in V$ and fix $\gamma\in s(\alpha)\Lambda$ such that $d(\gamma)=\vee_{\eta\in F}d(\eta)$. Suppose $z\in s(\gamma)\Lambda^{\infty}$ so that $\alpha\gamma z\sim_{d(\alpha)-d(\beta)}\beta\gamma z$. By \eqref{lemma_compactness_2} there exist $\eta,\zeta \in F$ such that $(\alpha\gamma z,\beta\gamma z)$ is a monolithic extension of $(\eta,\zeta)$, so there exists $y\in s(\eta)\Lambda^\infty$ such that $(\alpha\gamma z,\beta\gamma z)=(\eta y,\zeta y)$. Note that $d(\eta),d(\zeta)\leqslant d(\gamma)$, so we can let $\phi=y\big(0,d(\alpha\gamma)-d(\eta)\big)$ and $\psi=y\big(0,d(\beta\gamma)-d(\zeta)\big)$ and observe that $(\alpha\gamma,\beta\gamma)=(\eta\phi,\zeta\psi)$. We claim that $\phi=\psi$. To see this, note that since $(\alpha\gamma z,\beta\gamma z)=(\eta y,\zeta y)$, we have $\alpha\gamma z\sim_{d(\eta)-d(\zeta)} \beta\gamma z$. We earlier noted that $\alpha\gamma z\sim_{d(\alpha)-d(\beta)} \beta\gamma z$, so both $(\alpha\gamma z, d(\eta)-d(\zeta),\beta\gamma z)$ and $(\alpha\gamma z, d(\alpha)-d(\beta),\beta\gamma z)$ are in $G$. But $G$ is principal so these elements must be equal since they have equal range and source. Then $d(\eta)-d(\zeta)=d(\alpha)-d(\beta)$, which implies $d(\alpha)+d(\gamma)-d(\eta)=d(\beta)+d(\gamma)-d(\zeta)$, so $d(\alpha\gamma)-d(\eta)=d(\beta\gamma)-d(\zeta)$. It now follows that $\phi=\psi$ by their definition. Then $(\alpha\gamma,\beta\gamma)=(\eta\phi,\zeta\phi)$, so $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of $(\eta,\zeta)\in F\times F$, establishing \eqref{lemma_compactness_3}. \eqref{lemma_compactness_3}$\implies$\eqref{lemma_compactness_1}. Suppose $F$ is a finite subset of $\Lambda$ as in \eqref{lemma_compactness_3} and fix $(x,n,y)\in G|_{V\Lambda^\infty}$. Then there exists $m\in\mathbb N^k$ such that $\sigma^m(x)=\sigma^{m-n}(y)$. Let $\alpha=x(0,m)$, $\beta=y(0,m-n)$, $\gamma=x\big(m,m+\vee_{\eta\in F}d(\eta)\big)$ and $z=\sigma^{d(\alpha)+d(\gamma)}(x)$, so that $(x,y)=(\alpha\gamma z,\beta\gamma z)$. By \eqref{lemma_compactness_3} the pair $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of some $(\mu,\nu)\in F\times F$ so there exists $\delta\in\Lambda$ such that $(\alpha\gamma,\beta\gamma)=(\mu\delta,\nu\delta)$. Then \[ (x,n,y)\in Z(\alpha\gamma,\beta\gamma)=Z(\mu\delta,\nu\delta)\subset Z(\mu,\nu)\subset \bigcup_{(\eta,\zeta)\in F\ast_s F} Z(\eta,\zeta), \] so that $G|_{V\Lambda^\infty}\subset \cup_{(\eta,\zeta)\in F\ast_s F} Z(\eta,\zeta)$. 
Since each $Z(\eta,\zeta)$ is compact and $F$ is finite, the set $\cup_{(\eta,\zeta)\in F\ast_s F}Z(\eta,\zeta)$ is compact. Since $V$ is finite, the set $V\Lambda^\infty$ is closed, so $G|_{V\Lambda^\infty}=r^{-1}(V\Lambda^\infty)\cap s^{-1}(V\Lambda^\infty)$ is a closed subset of the compact set $\cup_{(\eta,\zeta)\in F\ast_s F}Z(\eta,\zeta)$, establishing \eqref{lemma_compactness_1}. \end{proof} \end{lemma} \begin{remark}\label{homeomorphism_remark} It was observed in \cite[Remarks 2.5]{Kumjian-Pask2000} that for a fixed $\alpha\in\Lambda$, the map given by $\alpha x\mapsto x$ for $x\in r^{-1}\big(s(\alpha)\big)$ induces a homeomorphism between $\alpha\Lambda^\infty$ and $s(\alpha)\Lambda^\infty$. By similar reasoning it can be seen that the map from $G|_{s(\alpha)\Lambda^\infty}$ to $G|_{\alpha\Lambda^\infty}$ where $(x,n,y)\mapsto (\alpha x,n,\alpha y)$ is a homeomorphism. \end{remark} \begin{theorem}\label{thm_Cartan_Fell} Suppose $\Lambda$ is a row-finite $k$-graph without sources. The following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{thm_Cartan_Fell_1} $G_\Lambda$ is Cartan; \item\label{thm_Cartan_Fell_2} $G_\Lambda$ is principal, and for every $z\in\Lambda^\infty$ there exist $p\in\mathbb N^k$ and a finite $F\subset\Lambda$ such that for every $x,y\in z(p)\Lambda^\infty$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$; \needspace{4\baselineskip}\item\label{thm_Cartan_Fell_3} $G_\Lambda$ is principal, and for every $z\in\Lambda^\infty$ there exist $p\in\mathbb N^k$ and a finite $F\subset\Lambda$ such that for every $(\alpha,\beta)\in \Lambda\ast_s\Lambda$ with $r(\alpha)=r(\beta)=z(p)$ and every $\gamma\in s(\alpha)\Lambda$ with $d(\gamma)=\vee_{\eta\in F}d(\eta)$, the pair $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of a pair in $F\times F$; and \item\label{thm_Cartan_Fell_4} $G_\Lambda$ is principal, and $C^*(\Lambda)\cong C^*(G_\Lambda)$ is Fell. \end{enumerate} \begin{proof}[Proof of Theorem \ref{thm_Cartan_Fell}] Let $G=G_\Lambda$. \eqref{thm_Cartan_Fell_1} $\implies$ \eqref{thm_Cartan_Fell_2}. Suppose \eqref{thm_Cartan_Fell_1}. Then $G$ is integrable and thus principal by \cite[Proposition~3.11]{Clark-anHuef2008}, \cite[Lemma~5.1]{Clark-anHuef2012} and Lemma \ref{lemma_integrable_implies_principal}. Since $G$ is Cartan, for every $z\in G^{(0)}$ there exists a neighbourhood $N$ of $z$ in $G^{(0)}$ such that $G|_N$ is relatively compact. By considering $z$ to be in $\Lambda^\infty$ and $N\subset\Lambda^\infty$, the sets $z(0,n)\Lambda^\infty$ for $n\in\mathbb N^k$ form a neighbourhood basis for $z$ in $\Lambda^\infty$, so there exists $p\in\mathbb N^k$ such that $z\in z(0,p)\Lambda^\infty\subset N$. Since $G|_{z(0,p)\Lambda^\infty}$ is a closed subset of the compact set $\overline{G|_N}$, it follows that $G|_{z(0,p)\Lambda^\infty}$ is compact, and so $G|_{z(p)\Lambda^\infty}$ is compact by Remark \ref{homeomorphism_remark}. We now apply Lemma \ref{lemma_compactness} with $V=\braces{z(p)}$ to establish \eqref{thm_Cartan_Fell_2}. \eqref{thm_Cartan_Fell_2} $\implies$ \eqref{thm_Cartan_Fell_3}. Fix $z\in\Lambda^\infty$ and let $p\in\mathbb N^k$ and $F\subset\Lambda$ be as in \eqref{thm_Cartan_Fell_2}. We can now apply Lemma \ref{lemma_compactness} with $V=\braces{z(p)}$ to establish \eqref{thm_Cartan_Fell_3}. \eqref{thm_Cartan_Fell_3} $\implies$ \eqref{thm_Cartan_Fell_1}. 
Fix $z\in\Lambda^\infty$ and let $p$ be the element of $\mathbb N^k$ as in the statement of \eqref{thm_Cartan_Fell_3}. By Lemma \ref{lemma_compactness} we can see that $G|_{z(p)\Lambda^\infty}$ is compact, so $G|_{z(0,p)\Lambda^\infty}$ is compact by Remark \ref{homeomorphism_remark}, and $z(0,p)\Lambda^\infty$ is a wandering neighbourhood of $z$ in $G^{(0)}$, establishing \eqref{thm_Cartan_Fell_1}. \eqref{thm_Cartan_Fell_1} $\iff$ \eqref{thm_Cartan_Fell_4}. If $G$ is Cartan then $G$ is integrable and thus principal by \cite[Proposition~3.11]{Clark-anHuef2008}, \cite[Lemma~5.1]{Clark-anHuef2012} and Lemma \ref{lemma_integrable_implies_principal}. When $G$ is principal, Theorem \ref{thm_groupoid_algebra_Fell} says that $C^*(G)$ is Fell if and only if $G$ is Cartan. The result follows since $C^*(G)$ is isomorphic to $C^*(\Lambda)$ by Lemma \ref{lemma_higher_rank_graph_algebras_isomorphic}. \end{proof} \end{theorem} \begin{example}\label{example_integrable_not_Cartan} Let $\Lambda$ be the $1$-graph from Example \ref{example_integrable_not_cartan}. In Example \ref{2-times_convergence_example} it is shown that $G_\Lambda$ exhibits a sequence with $2$-times convergence; if a groupoid exhibits such a sequence, then it is not Cartan by \cite[Lemma~5.1]{Clark-anHuef2012}. Since $G_\Lambda$ is principal (Example \ref{example_integrable_not_cartan}), Theorem \ref{thm_Cartan_Fell} can be used to deduce that $C^*(\Lambda)$ is not Fell. A later result, Theorem \ref{thm_directed_graph_Cartan_Fell}, is a simplification of Theorem \ref{thm_Cartan_Fell} for the case of directed graphs that can be used to make the same claims about $G_\Lambda$ and $C^*(\Lambda)$ with a quick observation of the skeleton. \end{example} In the next example we will show how, for the $1$-graph from Example \ref{example_integrable_not_cartan}, conditions \eqref{thm_Cartan_Fell_2} and \eqref{thm_Cartan_Fell_3} from Theorem \ref{thm_Cartan_Fell} can be used to deduce that $C^*(\Lambda)$ is not a Fell algebra. First note that since we are dealing with a $1$-graph, we simplify notation by referring to $\mathbb N$ instead of $\mathbb N^1$; then $e_1$ will just be $1$. \notationindex{VFstar@$[v,f]^*$}As in the directed graph case, if for $v\in \Lambda^0$ and $f\in\Lambda^{1}$ there is a unique $\alpha\in v\Lambda$ with $\alpha\big(d(\alpha)-1,d(\alpha)\big)=f$, we denote $\alpha$ by $[v,f]^*$. \begin{example}\label{example_integrable_not_Cartan_2} Suppose that $\Lambda$ is the $1$-graph from Example \ref{example_integrable_not_cartan}. Then $G_\Lambda$ is not Cartan and $C^*(\Lambda)$ is not Fell. \begin{proof} We proceed by contradiction. Suppose $G_\Lambda$ is Cartan and let $z$ be the path in $\Lambda^\infty$ such that $z(q)=v_q$ for every $q\in\mathbb N$. Then there exist $p\in\mathbb N$ and a finite $F\subset\Lambda$ as in condition \eqref{thm_Cartan_Fell_3} of Theorem \ref{thm_Cartan_Fell}. For each $q\geqslant p$, let $\alpha^{(q)}=[v_p,f_q]^*$ and $\beta^{(q)}=[v_p,g_q]^*$. Then $s(\alpha^{(q)})=s(f_q)=s(g_q)=s(\beta^{(q)})$, so $(\alpha^{(q)},\beta^{(q)})\in\Lambda\ast_s\Lambda$. Note that $d(\alpha^{(q)})=d(\beta^{(q)})$. Let $\gamma^{(q)}$ be the path in $s(\alpha^{(q)})\Lambda$ with $d(\gamma^{(q)})=\vee_{\eta\in F}d(\eta)$. Fix $q\geqslant p$. Then $(\alpha^{(q)}\gamma^{(q)},\beta^{(q)}\gamma^{(q)})$ is a monolithic extension of a pair $(\eta,\zeta)\in F\times F$ so there exists $\mu\in\Lambda$ such that $\alpha^{(q)}\gamma^{(q)}=\eta\mu$ and $\beta^{(q)}\gamma^{(q)}=\zeta\mu$.
Now \[ (\eta\mu)\big(d(\alpha^{(q)})-1,d(\alpha^{(q)})\big)=\alpha^{(q)}\big(d(\alpha^{(q)})-1,d(\alpha^{(q)})\big)=f_q \] and \begin{align*} (\zeta\mu)\big(d(\alpha^{(q)})-1,d(\alpha^{(q)})\big)&=(\zeta\mu)\big(d(\beta^{(q)})-1,d(\beta^{(q)})\big)\\ &=\beta^{(q)}\big(d(\beta^{(q)})-1,d(\beta^{(q)})\big)=g_q, \end{align*} so we must have $d(\eta)\geqslant d(\alpha^{(q)})$ and $d(\zeta)\geqslant d(\beta^{(q)})$. Then $(\eta,\zeta)$ must be a monolithic extension of $(\alpha^{(q)},\beta^{(q)})$. Since $q$ was fixed arbitrarily, $F$ must contain a monolithic extension of $(\alpha^{(q)},\beta^{(q)})$ for every $q\geqslant p$. This contradicts our assumption that $F$ is finite, so it follows that $G_\Lambda$ is not Cartan. Recall from Example \ref{example_integrable_not_cartan} that $G_\Lambda$ is principal. Since in addition $G_\Lambda$ is not Cartan, we can use Theorem \ref{thm_Cartan_Fell} to see that $C^*(\Lambda)$ is not Fell. \end{proof} \end{example} \begin{theorem}\label{thm_proper_cts-trace} Suppose $\Lambda$ is a row-finite $k$-graph without sources. The following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{thm_proper_cts-trace_1} $G_\Lambda$ is proper; \item\label{thm_proper_cts-trace_2} $G_\Lambda$ is principal, and for every finite subset $V$ of $\Lambda^0$, there exists a finite $F\subset\Lambda$ such that, for every $x,y\in V\Lambda^\infty$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$; \item\label{thm_proper_cts-trace_3} $G_\Lambda$ is principal, and for every finite subset $V$ of $\Lambda^0$, there exists a finite $F\subset\Lambda$ such that, for every $(\alpha,\beta)\in\Lambda\ast_s\Lambda$ with $r(\alpha),r(\beta)\in V$, and every $\gamma\in s(\alpha)\Lambda$ with $d(\gamma)=\vee_{\eta\in F} d(\eta)$, the pair $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of a pair in $F\times F$; and \item\label{thm_proper_cts-trace_4} $G_\Lambda$ is principal, and $C^*(\Lambda)\cong C^*(G_\Lambda)$ has continuous trace. \end{enumerate} \begin{proof} Let $G=G_\Lambda$. \eqref{thm_proper_cts-trace_1} $\implies$ \eqref{thm_proper_cts-trace_2}. Suppose \eqref{thm_proper_cts-trace_1}. It follows immediately from the definitions of proper and Cartan that $G$ is Cartan. Then $G$ is integrable and thus principal by \cite[Proposition~3.11]{Clark-anHuef2008}, \cite[Lemma~5.1]{Clark-anHuef2012} and Lemma \ref{lemma_integrable_implies_principal}. Let $V$ be a finite subset of $\Lambda^0$. Then $V\Lambda^\infty$ is compact since $V$ is finite. Since $G$ is proper, it follows that $G|_{V\Lambda^\infty}$ is compact. We can now apply Lemma \ref{lemma_compactness} to establish \eqref{thm_proper_cts-trace_2}. \eqref{thm_proper_cts-trace_2} $\implies$ \eqref{thm_proper_cts-trace_3} follows immediately from Lemma \ref{lemma_compactness}. \eqref{thm_proper_cts-trace_3} $\implies$ \eqref{thm_proper_cts-trace_1}. Suppose \eqref{thm_proper_cts-trace_3} and let $K$ be a compact subset of $\Lambda^\infty$. For every $z\in K$, the set $r(z)\Lambda^\infty$ is an open neighbourhood of $z$ in $\Lambda^\infty$. Thus $\braces{r(z)\Lambda^\infty:z\in K}$ is an open cover of the compact set $K$, so there exists a finite set $V\subset r(K)$ such that $K\subset V\Lambda^\infty$. By \eqref{thm_proper_cts-trace_3} and Lemma \ref{lemma_compactness} we can see that $G|_{V\Lambda^\infty}$ is compact. Since $K$ is compact and $G$ is Hausdorff, $K$ is a closed subset of $V\Lambda^\infty$.
Thus $G|_{K}=s_G^{-1}(K)\cap r_G^{-1}(K)$ is a closed subset of the compact set $G|_{V\Lambda^\infty}$, so $G|_K$ is compact, establishing \eqref{thm_proper_cts-trace_1}. \eqref{thm_proper_cts-trace_1} $\iff$ \eqref{thm_proper_cts-trace_4}. If $G$ is proper then $G$ is principal (see \eqref{thm_proper_cts-trace_1} $\implies$ \eqref{thm_proper_cts-trace_2}). When $G$ is principal, Theorem \ref{thm_groupoid_algebra_continuous_trace} tells us that $C^*(G)$ has continuous trace if and only if $G$ is proper. The result follows since $C^*(G)$ is isomorphic to $C^*(\Lambda)$ by \cite[Corollary~3.5]{Kumjian-Pask2000}. \end{proof} \end{theorem} \section{Boundary paths and the Farthing-Webster desourcification}\label{sec_boundary_paths_and_desourcification} This section begins with Farthing, Muhly and Yeend's definition of a boundary path in a higher-rank graph. Given a locally-convex higher-rank graph $\Lambda$, Farthing in \cite{Farthing2008} showed how to construct a locally-convex higher-rank graph with no sources such that the Cuntz-Krieger $C^*$-algebra is Morita equivalent to $C^*(\Lambda)$. For any row-finite higher-rank graph $\Lambda$, Webster in \cite{Webster2011} modified Farthing's construction to produce a row-finite higher-rank graph $\widetilde\Lambda$ with no sources and with $C^*(\widetilde\Lambda)$ Morita equivalent to $C^*(\Lambda)$. We call the higher-rank graph $\widetilde\Lambda$ the {\em Farthing-Webster desourcification} of $\Lambda$ and we will describe its construction in this section. In Section \ref{sec_desourcification} we will generalise results from Sections \ref{sec_liminal_postliminal} and \ref{sec_bded-trace_cts-trace_Fell} using the Farthing-Webster desourcification and the fact that the liminal, postliminal, Fell, bounded- and continuous-trace properties of $C^*$-algebras are preserved under Morita equivalence (Theorem \ref{thm_Morita_equivalence_results}). For a $k$-graph $\Lambda$, define \[ W_\Lambda:=\braces{x:\Omega_{k,m}\rightarrow\Lambda: m\in (\mathbb N\cup\braces{\infty})^k, x \text{ is a }k\text{-graph morphism}}. \] For every map $x:\Omega_{k,m}\rightarrow\Lambda$ in $W_\Lambda$, define $d(x):=m$. This map $d:W_\Lambda\rightarrow(\mathbb N\cup\{\infty\})^k$ is called the degree map on $W_\Lambda$ since it is a natural generalisation of the degree map $d:\Lambda\rightarrow \mathbb N^{k}$. Recall that on page \pageref{def_shift_map} we defined a shift map $\sigma^p:\Lambda^\infty\rightarrow \Lambda^\infty$ for every $p\in\mathbb N^k$. There is a natural extension: for each $p\in\mathbb N^k$ and $x\in W_\Lambda$ with $d(x)\geqslant p$, define $\sigma^p(x)\in W_\Lambda$ by $\sigma^p(x)(m,n)=x(m+p,n+p)$ for every $(m,n)\in \Omega_{k,d(x)-p}$. \begin{definition}[{\cite[Definition~5.10]{Farthing-Muhly-Yeend2005}}] An element $x\in W_\Lambda$ is a {\em boundary path} if for all $n\in\mathbb N^k$ with $n\leqslant d(x)$ and for all $D\in x(n)\mathcal F\mathcal E(\Lambda)$ there exists $m\in\mathbb N^k$ such that $x(n,n+m)\in D$. The set of all boundary paths is denoted by $\partial\Lambda$ and $v\partial\Lambda$ is defined to be $\braces{x\in\partial\Lambda:r(x)=v}$. \end{definition} In \cite[Examples~5.11]{Farthing-Muhly-Yeend2005}, Farthing, Muhly and Yeend point out that if $\Lambda$ is a row-finite $k$-graph with no sources, then $\partial\Lambda=\Lambda^\infty$\label{observation_partial_Lambda_equals_Lambda_infty}. We can roughly think of the boundary paths as the paths that are as close as possible to being infinite.
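It may help to record the simplest case as a quick illustration (a sketch only, anticipating Remark \ref{remark_every_1-graph_locally_convex} and Proposition \ref{prop_Lambda_leinfty_equals_partial_lambda} below): for a row-finite $1$-graph $\Lambda$ we have
\[
\partial\Lambda=\Lambda^\infty\cup\braces{\alpha\in\Lambda:s(\alpha)\Lambda^{e_1}=\emptyset},
\]
that is, the boundary paths are the infinite paths together with the finite paths whose source receives no edges.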
Examples \ref{example_boundary_paths_in_Omega_2_32}, \ref{omega_2_infty_2} and \ref{example_annoying_k-graph} describe the boundary paths in various $2$-graphs. \begin{example}\label{example_boundary_paths_in_Omega_2_32} Consider the $2$-graph $\Omega_{2,(3,2)}$ from Example \ref{example_Omega_k_m}. Then \[ \partial\Omega_{2,(3,2)}=\braces{\alpha\in\Omega_{2,(3,2)}:s(\alpha)=(3,2)}. \] \begin{proof} Fix $x\in\partial\Omega_{2,(3,2)}$ and let $\gamma$ be the path in $\Omega_{2,(3,2)}$ with range $r(x)$ and source $(3,2)$. Then $\braces{\gamma}\in x(0)\mathcal F\mathcal E(\Omega_{2,(3,2)})$, so since $x$ is a boundary path there exists $m\in\mathbb N^k$ such that $x(0,m)=\gamma$. Since $(3,2)$ does not receive any edges, we must have $d(x)=m$. Then $s(x)=x(m)=s(\gamma)=(3,2)$. Fix $\alpha\in\Omega_{2,(3,2)}$ with $s(\alpha)=(3,2)$. Fix $n\leqslant d(\alpha)$ and $D\in\alpha(n)\mathcal F\mathcal E(\Omega_{2,(3,2)})$. Since $D$ is exhaustive there exists $\beta\in D$ such that $\Lambda^{\min}\big(\sigma^n(\alpha),\beta\big)$ is non-empty. Let $(\mu,\nu)\in\Lambda^{\min}\big(\sigma^n(\alpha),\beta\big)$, so that $\sigma^n(\alpha)\mu=\beta\nu$. We must have $\mu=s(\alpha)$ since $(3,2)$ receives no edges, so $\sigma^n(\alpha)=\beta\nu$. Then $\alpha\big(n,n+d(\beta)\big)=\beta\in D$, and it follows that $\alpha\in\partial\Omega_{2,(3,2)}$. \end{proof} \end{example} \begin{example}\label{omega_2_infty_2} The $2$-graph $\Omega_{2,(\infty,2)}$ has skeleton \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.9em,-0.4em) rectangle (3*\cellwidth em + 4.1em,6.4em); \foreach \x in {0,1,2,3} \foreach \y in {0,1,2} \node (x\x y\y) at (\cellwidth*\x em,3*\y em) {$\scriptstyle (\x,\y)$}; \foreach \y in {0,1,2} \node (x4y\y) at (\cellwidth*3 em+2.5em,3*\y em) {} \foreach \y in {0,1,2} \node (x5y\y) at (\cellwidth*3 em+4.5em,3*\y em) {} \foreach \x / \z in {0/1,1/2,2/3} \foreach \y in {0,1,2} \draw[black,<-] (x\x y\y) to (x\z y\y); \foreach \x in {0,1,2,3} \foreach \y / \z in {0/1,1/2} \draw[dashed,black,<-] (x\x y\y) to (x\x y\z); \foreach \y in {0,1,2} \draw[black,<-] (x3y\y) to (x4y\y); \foreach \y in {0,1,2} \draw[black,dotted, thick] (x4y\y) to (x5y\y); \end{tikzpicture} \end{center} and $\partial\Omega_{2,(\infty,2)}$ is the set of all $2$-graph morphisms $x^{(p)}:\Omega_{2,(\infty,2-p_2)}\rightarrow\Omega_{2,(\infty,2)}$ for $p\in\mathbb N^2$ with $p_2\leqslant 2$ such that $x^{(p)}(m,n)=(m+p,n+p)$ for all $m\leqslant n\in\mathbb N^k$ with $n_2\leqslant 2-p_2$. \begin{proof} Let $S=\braces{x^{(p)}:p\in \mathbb N^2, p_2\leqslant 2}$ and fix $x^{(p)}\in S$. Fix $n\in\mathbb N^2$ with $n\leqslant d(x^{(p)})$ (i.e. with $n_2\leqslant 2-p_2$), fix $D\in x^{(p)}(n)\mathcal F\mathcal E(\Omega_{2,(\infty,2)})$ and fix $\alpha\in D$. By examining the skeleton we can see for any $v\in\Omega_{2,(\infty,2)}^0$ and any $l\in\mathbb N^2$ there is at most one path in $v\Omega_{2,(\infty,2)}$ with degree $l$. It follows that $\alpha=x^{(p)}\big(n,n+d(\alpha)\big)$, so $x^{(p)}\in\partial\Omega_{2,(\infty,2)}$. Now fix $y\in\partial\Omega_{2,(\infty,2)}$ and let $p=y(0)\in\mathbb N^2$. Fix $m,n\in\mathbb N^2$ with $m\leqslant n$ and $n_2\leqslant 2-p_2$. Let $\beta=(m+p,n+p)\in\Omega_{2,(\infty,2)}$. Then $\braces{\beta}\in y(m)\mathcal F\mathcal E(\Omega_{2,(\infty,2)})$, so $y\big(m,m+d(\beta)\big)=y(m,n)=\beta=(m+p,n+p)$, and it follows that $y=x^{(p)}\in S$. \end{proof} \end{example} Suppose $\Lambda$ is a row-finite $k$-graph that may have sources. 
We will now describe the Farthing-Webster desourcification \index{Farthing-Webster desourcification} $\widetilde\Lambda$ of $\Lambda$. Recall from the first paragraph of this section that $\widetilde\Lambda$ was first described by Webster in \cite{Webster2011} based on Farthing's construction in \cite{Farthing2008}. The key property is that $C^*(\widetilde\Lambda)$ is Morita equivalent to $C^*(\Lambda)$. Define a relation $\thickapprox$ on \[V_\Lambda:=\braces{(x;m):x\in\partial\Lambda,m\in\mathbb N^k}\] by: $(x;m)\thickapprox (y;p)$ if and only if \begin{enumerate}\renewcommand{\theenumi}{V\arabic{enumi}} \item\label{V1} $x\big(m\wedge d(x)\big) =y\big( p\wedge d(y)\big)$; and \item\label{V2} $m-m\wedge d(x)=p-p\wedge d(y)$. \end{enumerate} Define a relation $\thicksim$ on $P_\Lambda:=\big\{\big(x;(m,n)\big):x\in\partial\Lambda,m\leqslant n\in\mathbb N^k\big\}$ by: $\big(x;(m,n)\big)\thicksim \big(y;(p,q)\big)$ if and only if \begin{enumerate}\renewcommand{\theenumi}{P\arabic{enumi}} \item\label{P1} $x\big(m\wedge d(x),n\wedge d(x)\big)=y\big(p\wedge d(y),q\wedge d(y)\big)$; \item\label{P2} $m-m\wedge d(x)=p-p\wedge d(y)$; and \item\label{P3} $n-m=q-p$. \end{enumerate} Note that $\thickapprox$ and $\thicksim$ are equivalence relations. Let $\widetilde{V_\Lambda}:=V_{\Lambda}/\thickapprox$ and $\widetilde{P_\Lambda}:=P_{\Lambda}/\thicksim$. The class in $\widetilde{V_\Lambda}$ of $(x;m)\in V_\Lambda$ is denoted by $\desourceclass{x;m}$ and the class in $\widetilde{P_\Lambda}$ of $\big(x;(m,n)\big)\in P_\Lambda$ is denoted by $\desourceclass{x;(m,n)}$. Define $\tilde{r},\tilde{s}:\widetilde{P_\Lambda}\rightarrow \widetilde{V_\Lambda}$ by $\tilde{r}\big(\desourceclass{x;(m,n)}\big)=\desourceclass{x;m}$ and $\tilde{s}\big(\desourceclass{x;(m,n)}\big)=\desourceclass{x;n}$. The following proposition will be used to define the composition of elements of $\widetilde{P_\Lambda}$. \begin{prop}[{\cite[Proposition~4.6]{Webster2011}}]\label{prop_webster_3.6} Suppose that $\Lambda$ is a row-finite $k$-graph and let $\desourceclass{x;(m,n)}$ and $\desourceclass{y;(p,q)}$ be elements of $\widetilde{P_\Lambda}$ satisfying $\desourceclass{x;n}=\desourceclass{y;p}$. Let $z:=x\big(0,n\wedge d(x)\big)\sigma^{p\wedge d(y)}y$. Then \begin{enumerate} \item $z\in\partial\Lambda$; \item $m\wedge d(x)=m\wedge d(z)$ and $n\wedge d(x)=n\wedge d(z)$; and \item $x\big(m\wedge d(x),n\wedge d(x)\big)=z\big(m\wedge d(z),n\wedge d(z)\big)$ and $y\big(p\wedge d(y),q\wedge d(y)\big)=z\big(n\wedge d(z),(n+q-p)\wedge d(z)\big)$. \end{enumerate} \end{prop} For $\desourceclass{x;(m,n)},\desourceclass{y;(p,q)}\in \widetilde{P_\Lambda}$ with $\tilde{s}\big(\desourceclass{x;(m,n)}\big)=\tilde{r}\big(\desourceclass{y;(p,q)}\big)$ and for $z\in\partial\Lambda$ as in Proposition \ref{prop_webster_3.6}, the equation \[ \desourceclass{x;(m,n)}\circ\desourceclass{y;(p,q)}=\desourceclass{z;(m,n+q-p)} \] determines a well-defined composition $\circ$. Define $\mathrm{id}:\widetilde{V_\Lambda}\rightarrow \widetilde{P_\Lambda}$ by $\mathrm{id}\big(\desourceclass{x;m}\big)=\desourceclass{x;(m,m)}$. Then $(\widetilde{P_\Lambda},\widetilde{V_\Lambda},\tilde{r},\tilde{s},\circ,\mathrm{id})$ is a category by \cite[Lemma~2.19]{Farthing2008}. Define a map $d:\widetilde{P_\Lambda}\rightarrow \mathbb N^k$ by $d\big(\desourceclass{x;(m,n)}\big)=n-m$. Then $\widetilde{\Lambda}:=(\widetilde{P_\Lambda},\widetilde{V_\Lambda},\tilde{r},\tilde{s},\circ,\mathrm{id},d)$ is a $k$-graph without sources by \cite[Theorem~2.22]{Farthing2008}.
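As a quick consistency check (a sketch only, not needed in what follows): if $\Lambda$ is row-finite with no sources, then $\partial\Lambda=\Lambda^\infty$, so $m\wedge d(x)=m$ for every $(x;m)\in V_\Lambda$ and $n\wedge d(x)=n$ for every $\big(x;(m,n)\big)\in P_\Lambda$. In this case \eqref{V1} and \eqref{V2} reduce to $x(m)=y(p)$, and \eqref{P1}--\eqref{P3} reduce to $x(m,n)=y(p,q)$, so the maps
\[
\desourceclass{x;m}\mapsto x(m)\quad\text{and}\quad\desourceclass{x;(m,n)}\mapsto x(m,n)
\]
identify $\widetilde{V_\Lambda}$ with $\Lambda^0$ and $\widetilde{P_\Lambda}$ with $\Lambda$; that is, the construction applied to a $k$-graph with no sources returns a copy of $\Lambda$ itself.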
We call $\widetilde\Lambda$ the {\em Farthing-Webster desourcification} (or just the desourcification) of $\Lambda$ and usually simplify notation by writing $r$ for $\tilde{r}$, $s$ for $\tilde{s}$ and omitting the $\circ$ and $\mathrm{id}$ notation. \label{describing_morita_equivalence_results}Webster in \cite[Theorem~6.3]{Webster2011} shows that if $\Lambda$ is row-finite, then $\widetilde\Lambda$ is row-finite and $C^*(\widetilde{\Lambda})$ is Morita equivalent to $C^*(\Lambda)$. The Morita equivalence of $C^*$-algebras will be particularly useful to us since Zettl in \cite{Zettl1982} and an Huef, Raeburn and Williams in \cite{anHuef-Raeburn-Williams2007} show that the liminal, postliminal, continuous- and bounded-trace, and Fell properties are preserved by Morita equivalence. \begin{prop}[{\cite[Proposition~4.13]{Webster2011}}] Suppose that $\Lambda$ is a row-finite $k$-graph, and that $\alpha\in\Lambda$. Then $s(\alpha)\partial\Lambda\ne\emptyset$. If $x,y\in s(\alpha)\partial\Lambda$, then $\alpha x,\alpha y\in\partial\Lambda$ and $\desourceclass{\alpha x;(0,d(\alpha))}=\desourceclass{\alpha y;(0,d(\alpha))}$. There is an injective $k$-graph morphism $\iota:\Lambda\rightarrow\widetilde\Lambda$ such that $\iota(\alpha)=\bigdesourceclass{\alpha x;\big(0,d(\alpha)\big)}$ for any $\alpha\in\Lambda$ and $x\in s(\alpha)\partial\Lambda$. \end{prop} Define $\kappa:\partial\Lambda\rightarrow\iota(\Lambda^0)\widetilde{\Lambda}^\infty$ by $\kappa(x)(m,n)=\desourceclass{x;(m,n)}$ for every $x\in\partial\Lambda$ and $m,n\in\mathbb N^k$ with $m\leqslant n$. Since $\widetilde\Lambda$ has no sources, we know $\partial\widetilde\Lambda=\widetilde\Lambda^\infty$, so $\kappa$ maps the boundary paths in $\Lambda$ into the boundary paths of $\widetilde\Lambda$. The following results show that every path in $\partial\widetilde\Lambda$ is shift equivalent to a path in $\kappa(\partial\Lambda)$ and that $\kappa$ preserves notions of `shift equivalence' and `frequently divertable'. \begin{lemma}\label{lemma_kappa_bijection} Suppose $\Lambda$ is a row-finite $k$-graph. Then $\kappa:\partial\Lambda\rightarrow\iota(\Lambda^0)\widetilde{\Lambda}^\infty$ is a bijection. \begin{proof} Suppose $\kappa(x)=\kappa(y)$ for some $x,y\in\partial\Lambda$ and fix $n\in\mathbb N^k$. Then $\kappa(x)(0,n)=\kappa(y)(0,n)$, and so $\desourceclass{x;(0,n)}=\desourceclass{y;(0,n)}$. By \eqref{P1} we have $x\big(0,n\wedge d(x)\big)=y\big(0,n\wedge d(y)\big)$ and it follows that $x=y$ since $n$ was chosen arbitrarily. Now fix $a\in \iota(\Lambda^0)\widetilde{\Lambda}^\infty$. In \cite[p.~170]{Webster2011}, Webster describes a map $\pi:\iota(\Lambda^0)\widetilde{\Lambda}^\infty\rightarrow\iota(\partial\Lambda)$. Since $\pi$ maps into $\iota(\partial\Lambda)$, there exists $x\in\partial\Lambda$ such that $\pi(a)=\iota(x)$. Now \cite[Lemma~5.3]{Webster2011} shows that $a(0,n)=\desourceclass{x;(0,n)}$ for every $n\in\mathbb N^k$, so $a=\kappa(x)$, establishing the surjectivity of $\kappa$. \end{proof} \end{lemma} \begin{lemma}\label{lemma_boundary_path_associated_to_infinite_path} Suppose $\Lambda$ is a row-finite $k$-graph. If $a\in\widetilde{\Lambda}^\infty$ then there exist $x\in\partial\Lambda$ and $m\in\mathbb N^k$ such that $a=\sigma^m\big(\kappa(x)\big)$. \begin{proof} Fix $a\in\widetilde{\Lambda}^\infty$. By the construction of $\widetilde{\Lambda}$ there exist $y\in\partial\Lambda$ and $m\in\mathbb N^k$ such that $r(a)=\desourceclass{y;m}$.
Then $\desourceclass{y;(0,m)}$ is a path in $\widetilde{\Lambda}$ with source $r(a)$ and range in $\iota(\Lambda^0)$, so $\desourceclass{y;(0,m)}a\in\iota(\Lambda^0)\widetilde{\Lambda}^\infty$. Since $\kappa$ is a bijection onto $\iota(\Lambda^0)\widetilde\Lambda^\infty$ (Lemma \ref{lemma_kappa_bijection}) there exists $x\in\partial\Lambda$ such that $\kappa(x)=\desourceclass{y;(0,m)}a$. We conclude that $\sigma^m\big(\kappa(x)\big)=a$ since $d\big(\desourceclass{y;(0,m)}\big)=m$. \end{proof} \end{lemma} \begin{lemma}\label{lemma_sigma_kappa} Suppose $\Lambda$ is a row-finite $k$-graph, $x\in\partial\Lambda$ and $n\in\mathbb N^k$ satisfies $n\leqslant d(x)$. Then $\sigma^n\big(\kappa(x)\big)=\kappa\big(\sigma^n(x)\big)$. \begin{proof} Fix $m\in\mathbb N^k$. It suffices to show that $\sigma^n\big(\kappa(x)\big)(0,m)=\kappa\big(\sigma^n(x)\big)(0,m)$. Now \[\sigma^n\big(\kappa(x)\big)(0,m)=\kappa(x)(n,n+m)=\desourceclass{x;(n,n+m)}\] and $\kappa\big(\sigma^n(x)\big)(0,m)=\desourceclass{\sigma^n(x);(0,m)}$, so we need to show that $\desourceclass{x;(n,n+m)}=\desourceclass{\sigma^n(x);(0,m)}$. For \eqref{P1}, consider {\allowdisplaybreaks \begin{align*} \sigma^n(x)\Big(0\wedge d\big(\sigma^n(x)\big),m\wedge d\big(\sigma^n(x)\big)\Big) &=\sigma^n(x)\Big(0,m\wedge\big(d(x)-n\big)\Big)\\ &=x\Big(n,n+m\wedge\big(d(x)-n\big)\Big)\\ &=x\big(n\wedge d(x),(n+m)\wedge d(x)\big). \end{align*}} Axiom \eqref{P2} follows from $n\leqslant d(x)$ and axiom \eqref{P3} is immediate, completing the proof. \end{proof} \end{lemma} Suppose $\Lambda$ is a $k$-graph. For $x,y\in \partial\Lambda$ and $n\in\mathbb Z^k$, write $x\sim_n y$\index{$\sim_n$} if there exists $m\in\mathbb N^k$ with $m\leqslant d(x)\wedge \big(d(y)+n\big)$ such that $\sigma^m(x)=\sigma^{m-n}(y)$. \begin{lemma}\label{lemma_shift_equivalence2} Suppose $\Lambda$ is a $k$-graph. The relations $\sim_n$ form an equivalence relation on $\partial\Lambda$ in that $x\sim_0 x$, $x\sim_n y\implies y\sim_{-n} x$ and $x\sim_n y, y\sim_l z\implies x\sim_{n+l} z$ for any $x,y,z\in\partial\Lambda$ and $n,l\in\mathbb Z^k$. \begin{proof} The assertion that $x\sim_0 x$ for any $x\in\partial\Lambda$ is trivial. Suppose $x\sim_n y$ and $y\sim_l z$ for some $x,y,z\in \partial\Lambda$ and $n,l\in\mathbb Z^k$. Then there exists $p\in\mathbb N^k$ with $p\leqslant d(x)\wedge \big(d(y)+n\big)$ such that $\sigma^p(x)=\sigma^{p-n}(y)$. Then $p-n\leqslant \big(d(x)-n\big)\wedge \big(d(y)+n-n\big)=d(y)\wedge\big(d(x)-n\big)$ and $\sigma^{p-n}(y)=\sigma^p(x)=\sigma^{(p-n)+n}(x)$, so $y\sim_{-n} x$. To see that $x\sim_{n+l}z$, first observe that there exists $q\in\mathbb N^k$ with $q\leqslant d(y)\wedge \big(d(z)+l\big)$ such that $\sigma^q(y)=\sigma^{q-l}(z)$. Let $t=p\vee (q+n)$. Since $d(x)=d(y)+n$, $d(y)=d(z)+l$, $p\leqslant d(x)$ and $q\leqslant d(y)$, we have $t\leqslant d(x)\wedge\big(d(z)+n+l\big)$. Now $p\leqslant t\leqslant d(x)$, so $\sigma^t(x)=\sigma^{t-n}(y)$. Similarly $q\leqslant t-n\leqslant d(y)$ gives us $\sigma^{t-n}(y)=\sigma^{t-n-l}(z)$. Thus $\sigma^t(x)=\sigma^{t-(n+l)}(z)$ and it follows that $x\sim_{n+l}z$. \end{proof} \end{lemma} The next lemma shows how shift equivalence passes from a row-finite $k$-graph to its desourcification via the map $\kappa$. \begin{lemma}\label{lemma_desource_shift_equivalence_preserved} Suppose $\Lambda$ is a row-finite $k$-graph, $x,y\in\partial\Lambda$ and $n\in \mathbb Z^k$. Then $\kappa(x)\sim_n\kappa(y)$ if and only if $x\sim_n y$. \begin{proof} Suppose $x\sim_n y$.
Then there exists $m\in\mathbb N^k$ with $m\leqslant d(x)\wedge \big(d(y)+n\big)$ such that $\sigma^m(x)=\sigma^{m-n}(y)$ and so $\kappa\big(\sigma^m(x)\big)=\kappa\big(\sigma^{m-n}(y)\big)$. It follows by Lemma \ref{lemma_sigma_kappa} that $\sigma^m\big(\kappa(x)\big)=\sigma^{m-n}\big(\kappa(y)\big)$, so $\kappa(x)\sim_n\kappa(y)$. We will now prove the converse. Suppose $\kappa(x)\sim_n\kappa(y)$ so that there exists $m\in\mathbb N^k$ such that $\sigma^m\big(\kappa(x)\big)=\sigma^{m-n}\big(\kappa(y)\big)$. Then $\kappa(x)(m)=\kappa(y)(m-n)$ so \eqref{V2} tells us that $m-m\wedge d(x)=(m-n)-(m-n)\wedge d(y)$. Let $p=m-m\wedge d(x)$. We now claim that $\big(x;(m-p,m)\big)\thicksim \big(y;(m-n-p,m-n)\big)$. To show this, we need to show that \eqref{P1}, \eqref{P2} and \eqref{P3} are satisfied. To see \eqref{P1}, note that {\allowdisplaybreaks\begin{align*} x\big((m-p)\wedge d(x),m\wedge d(x)\big)&=x\Big(\big(m\wedge d(x)\big)\wedge d(x),m\wedge d(x)\Big)\\ &=x\big(m\wedge d(x),m\wedge d(x)\big)\\ &=y\big((m-n)\wedge d(y),(m-n)\wedge d(y)\big)\quad\text{(by \eqref{V1}})\\ &=y\Big(\big((m-n)\wedge d(y)\big)\wedge d(y), (m-n)\wedge d(y)\Big)\\ &=y\big((m-n-p)\wedge d(y), (m-n)\wedge d(y)\big). \end{align*}} To see \eqref{P2}, note that \[ (m-p)-(m-p)\wedge d(x)=(m-p)-\big(m\wedge d(x)\big)\wedge d(x)=m-p-m\wedge d(x)=0 \] and similarly $(m-n-p)-(m-n-p)\wedge d(y)=0$. Axiom \eqref{P3} follows immediately. We now know that $\big(x;(m-p,m)\big)\thicksim \big(y;(m-n-p,m-n)\big)$, so $\kappa(x)(m-p,m)=\kappa(y)(m-n-p,m-n)$. Since $\sigma^m\big(\kappa(x)\big)=\sigma^{m-n}\big(\kappa(y)\big)$ and $\kappa(x)(m-p,m)=\kappa(y)(m-n-p,m-n)$, we must have $\sigma^{m-p}\big(\kappa(x)\big)=\sigma^{m-n-p}\big(\kappa(y)\big)$. Since $m-p=m\wedge d(x)\leqslant d(x)$ and $m-n-p=(m-n)\wedge d(y)\leqslant d(y)$, we can apply Lemma \ref{lemma_sigma_kappa} to see that \[ \kappa\big(\sigma^{m-p}(x)\big)=\sigma^{m-p}\big(\kappa(x)\big)=\sigma^{m-n-p}\big(\kappa(y)\big)=\kappa\big(\sigma^{m-n-p}(y)\big). \] Since $\kappa$ is injective by Lemma \ref{lemma_kappa_bijection}, it follows that $\sigma^{m-p}(x)=\sigma^{m-n-p}(y)$, so $x\sim_n y$, as required. \end{proof} \end{lemma} \begin{definition}[{\cite[Definition~3.9]{rsy2003}}]\label{def_locally_convex}\index{locally convex} A $k$-graph $\Lambda$ is {\em locally convex} if, for all $v\in\Lambda^0$, $i,j\in\braces{1,\ldots,k}$ with $i\ne j$, $\alpha\in v\Lambda^{e_i}$ and $\beta\in v\Lambda^{e_j}$, the sets $s(\alpha)\Lambda^{e_j}$ and $s(\beta)\Lambda^{e_i}$ are non-empty. \end{definition} In \cite[p.~159]{Webster2011} Webster points out that $\Lambda$ being locally convex equates to its skeleton not containing any subgraph resembling \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \clip (-0.9em,-0.8em) rectangle (5.3 em,4.2em); \node (salpha) at (5em, 0em) {$\scriptstyle u$}; \node (sbeta) at (0em, 4em) {$\scriptstyle w$}; \node (v) at (0em,0em) {$\scriptstyle v$}; \draw[black,->] (salpha) to node[below] {$\scriptstyle \alpha$} (v); \draw[black,dashed,->] (sbeta) to node[left] {$\scriptstyle \beta$} (v); \end{tikzpicture} \end{center} where $w$ receives no solid edges and $u$ receives no dashed edges. The $2$-graph $\Omega_{2,(3,2)}$ from Example \ref{example_Omega_k_m} is locally convex whereas the $2$-graphs from Examples \ref{example_not_locally_convex_with_same_boundary_paths} and \ref{example_annoying_k-graph} are not locally convex. 
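For instance (a routine verification, recorded only as a sketch), the first of these claims can be checked directly from Definition \ref{def_locally_convex}: for a vertex $(a,b)\in\Omega_{2,(3,2)}^0$ the set $(a,b)\Omega_{2,(3,2)}^{e_1}$ is non-empty if and only if $a<3$, and $(a,b)\Omega_{2,(3,2)}^{e_2}$ is non-empty if and only if $b<2$. So if $\alpha\in(a,b)\Omega_{2,(3,2)}^{e_1}$ and $\beta\in(a,b)\Omega_{2,(3,2)}^{e_2}$ both exist, then $a<3$ and $b<2$, and
\[
s(\alpha)\Omega_{2,(3,2)}^{e_2}=\braces{\big((a+1,b),(a+1,b+1)\big)}\quad\text{and}\quad s(\beta)\Omega_{2,(3,2)}^{e_1}=\braces{\big((a,b+1),(a+1,b+1)\big)}
\]
are both non-empty, so $\Omega_{2,(3,2)}$ is locally convex.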
\begin{remark}\label{remark_every_1-graph_locally_convex} It follows immediately from Definition \ref{def_locally_convex} that every $1$-graph is locally convex. \end{remark} \begin{definition}[{\cite[Definition~2.8]{rsy2004}}] \label{def_Lambda_leinfty}\notationindex{LAMBDALEINFTY@$\Lambda^{\leqslant\infty}$} For a $k$-graph $\Lambda$, define $\Lambda^{\leqslant\infty}$ to be the set of all $x\in W_\Lambda$ for which there exists $m\in\mathbb N^k$ with $m\leqslant d(x)$ such that for every $n\in\mathbb N^k$ with $m\leqslant n\leqslant d(x)$ and every $1\leqslant i\leqslant k$ with $n_i=d(x)_i$, the set $x(n)\Lambda^{e_i}$ is empty. \end{definition} The set $\Lambda^{\leqslant\infty}$ from Definition \ref{def_Lambda_leinfty} is a slight modification of a smaller set with the same notation first defined by Raeburn, Sims and Yeend in \cite[Definition~3.14]{rsy2003}. This smaller set was defined to be the set of all $x\in W_\Lambda$ such that, for every $n\in\mathbb N^k$ with $n\leqslant d(x)$ and every $1\leqslant i\leqslant k$ with $n_i=d(x)_i$, the set $x(n)\Lambda^{e_i}$ is empty. These two definitions of $\Lambda^{\leqslant\infty}$ coincide when $\Lambda$ is locally convex. Given that the main results from \cite{rsy2004} require the assumption that $\Lambda$ is locally convex, Raeburn, Sims and Yeend in \cite{rsy2004} used this new definition of $\Lambda^{\leqslant\infty}$ as an alternative definition to that found in \cite{rsy2003}. It turns out that using the \cite{rsy2004} definition of $\Lambda^{\leqslant\infty}$ strengthens some of the main theorems in this thesis. The two definitions of $\Lambda^{\leqslant\infty}$ do not coincide in the $2$-graph $\Lambda$ from Example \ref{example_not_locally_convex_with_same_boundary_paths}. \begin{example} Consider the $2$-graph $\Omega_{2,(3,2)}$ from Example \ref{example_Omega_k_m}. Then \[ \Omega_{2,(3,2)}^{\leqslant\infty}=\partial\Omega_{2,(3,2)}=\braces{\alpha\in\Omega_{2,(3,2)}:s(\alpha)=(3,2)}. \] \begin{proof} We saw that $\partial\Omega_{2,(3,2)}=\braces{\alpha\in\Omega_{2,(3,2)}:s(\alpha)=(3,2)}$ in Example \ref{example_boundary_paths_in_Omega_2_32}. Fix $x\in\Omega_{2,(3,2)}^{\leqslant\infty}$ and note that $x\big(d(x)\big)\Omega_{2,(3,2)}^{e_i}=s(x)\Omega_{2,(3,2)}^{e_i}$ is empty for $i=1,2$. In other words, $s(x)$ does not receive any edges. Since the only vertex in $\Omega_{2,(3,2)}$ that does not receive any edges is $(3,2)$, it follows that $s(x)=(3,2)$. Now suppose that $\alpha\in\Omega_{2,(3,2)}$ with $s(\alpha)=(3,2)$. Since $(3,2)$ does not receive any edges, \[ (3,2)\Omega_{2,(3,2)}^{e_i}=s(\alpha)\Omega_{2,(3,2)}^{e_i}=\alpha\big(d(\alpha)\big)\Omega_{2,(3,2)}^{e_i} \] is empty for $i=1,2$. We can now take $m=d(\alpha)$ to see that $\alpha\in\Omega_{2,(3,2)}^{\leqslant\infty}$. \end{proof} \end{example} The following proposition by Webster shows that it is not a coincidence that $\partial\Omega_{2,(3,2)}=\Omega_{2,(3,2)}^{\leqslant\infty}$. Indeed, $\partial\Lambda$ was chosen as a kind of generalisation of $\Lambda^{\leqslant\infty}$ and in \cite{rsy2003} Raeburn, Sims and Yeend refer to elements of $\Lambda^{\leqslant\infty}$ as `boundary paths', although here we reserve this term for elements of $\partial\Lambda$. \begin{prop}[{\cite[Proposition~2.12]{Webster2011}}]\label{prop_Lambda_leinfty_equals_partial_lambda} If $\Lambda$ is a row-finite $k$-graph, then $\Lambda^{\leqslant\infty}\subset\partial\Lambda$. If in addition $\Lambda$ is locally convex, then $\Lambda^{\leqslant\infty}=\partial\Lambda$.
\end{prop} It is possible to have $\Lambda^{\leqslant\infty}=\partial\Lambda$ without $\Lambda$ being locally convex: \begin{example}\label{example_not_locally_convex_with_same_boundary_paths} If $\Lambda$ is the $2$-graph with the following skeleton, then $\Lambda$ is not locally convex and $\partial\Lambda=\Lambda^{\leqslant\infty}$. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \clip (-0.9em,-0.8em) rectangle (5.3 em,4.2em); \fill[black] (salpha) circle (0.15em); \fill[black] (sbeta) circle (0.15em); \fill[black] (v) circle (0.15em); \node (salpha) at (5em, 0em) {}; \node (sbeta) at (0em, 4em) {}; \node (v) at (0em,0em) {}; \draw[black,->] (salpha) to (v); \draw[black,dashed,->] (sbeta) to (v); \end{tikzpicture} \end{center} \end{example} The condition $\partial\Lambda=\Lambda^{\leqslant\infty}$ turns out to be very useful even though the stronger (in the row-finite context) locally convex condition is often easier to check. The next example first appeared in Robertson's honours thesis \cite{RobertsonHonours} and is the standard example of a path that is in $\partial\Lambda$ but not in $\Lambda^{\leqslant\infty}$. \begin{example}\label{example_annoying_k-graph} Let $\Lambda$ be the $2$-graph with the following skeleton. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \foreach \x in {0,1,2,3} \node (x\x y1) at (\cellwidth*\x em,3.5em) {}; \foreach \x in {0,1,2,3} \fill[black] (x\x y1) circle (0.15em); \foreach \x in {0,1,2,3} \node (x\x y0) at (\cellwidth*\x em,0em) {$\scriptstyle v_\x$}; \foreach \x / \z in {0/1,1/2,2/3,3/4} { \node (a\x) at (\cellwidth*\x em+0.6*\cellwidth em,-3em) {}; \fill[black] (a\x) circle (0.15em); \draw[black,<-] (x\x y0) to node[below left=-0.3em] {$\scriptstyle f_\z$} (a\x); } \foreach \y in {0,1} \node (x4y\y) at (\cellwidth*3 em+2.5em,3.5*\y em) {} \foreach \y in {0,1} \node (x5y\y) at (\cellwidth*3 em+4.4em,3.5*\y em) {} \foreach \x / \z in {0/1,1/2,2/3} \draw[black,<-] (x\x y0) to node[above] {$\scriptstyle x_\z$} (x\z y0); \foreach \x / \z in {0/1,1/2,2/3} \draw[black,<-] (x\x y1) to (x\z y1); \foreach \x in {0,1,2,3} \foreach \y / \z in {0/1} \draw[dashed,black,<-] (x\x y\y) to (x\x y\z); \foreach \y in {0,1} \draw[black,<-] (x3y\y) to (x4y\y); \foreach \y in {0,1} \draw[black,dotted, thick] (x4y\y) to (x5y\y); \node (a4) at (3.7*\cellwidth em,-3em) {}; \node (a5) at (3.7*\cellwidth em + 1.9em,-3em) {}; \draw[black,dotted, thick] (a4) to (a5); \end{tikzpicture} \end{center} The path $x=x_1x_2x_3\cdots$ is in $\partial\Lambda$ but not in $\Lambda^{\leqslant\infty}$. \begin{proof} First suppose that the solid edges have degree $(1,0)$ and the dashed edges have degree $(0,1)$. Observe that $x\notin\Lambda^{\leqslant\infty}$: for every $m\in\mathbb N^k$ with $m\leqslant d(x)=(\infty,0)$, we have $x(m)\Lambda^{e_2}=v_{m_1}\Lambda^{e_2}$, which is non-empty since $v_{m_1}$ receives an edge of degree $(0,1)$. To see that $x\in\partial\Lambda$, fix $n\in\mathbb N^k$ with $n\leqslant d(x)$ and fix $D\in x(n)\mathcal F\mathcal E(\Lambda)$. Since $n\leqslant d(x)$ and $d(x)=(\infty,0)$, we have $n_2=0$, so $D\in v_{n_1}\mathcal F\mathcal E(\Lambda)$. Suppose there does not exist $m\in\mathbb N^k$ such that $x(n,n+m)\in D$. For each integer $p> n_1$, let $\alpha^{(p)}=x_{n_1}x_{n_1+1}\cdots x_{p-1}f_p$. Since $D$ is exhaustive, for each $\alpha^{(p)}$ there exists $\beta^{(p)}\in D$ such that $\Lambda^{\min}(\alpha^{(p)},\beta^{(p)})$ is non-empty. 
Now let $(\mu,\nu)\in\Lambda^{\min}(\alpha^{(p)},\beta^{(p)})$ for some $p$, so that $\alpha^{(p)}\mu=\beta^{(p)}\nu$. Since $s(\alpha^{(p)})=s(f_p)$ receives no edges, we have $\mu=s(\alpha^{(p)})$, and so $\alpha^{(p)}=\beta^{(p)}\nu$. In order for $\beta^{(p)}$ to not be of the form $x(n,n+m)$ for some $m$, we must also have $\nu=s(\beta^{(p)})$, so that $\alpha^{(p)}=\beta^{(p)}$. It follows that $\alpha^{(p)}$ is in $D$ for every $p$, so $D$ is not finite, contradicting $D\in\mathcal F\mathcal E(\Lambda)$. Thus our assumption is incorrect, so $x(n,n+m)\in D$ for some $m\in\mathbb N^k$, and $x\in\partial\Lambda$. \end{proof} \end{example} The desourcification of the $2$-graph $\Lambda$ from Example \ref{example_annoying_k-graph} will be described in Example \ref{example_annoying_k-graph_is_liminal}. In Example \ref{examples_cts-trace} it will be shown that $C^*(\Lambda)$ has continuous trace. \needspace{4\baselineskip} \begin{lemma}\label{lemma_property_of_old_boundary_path} Suppose $\Lambda$ is a row-finite $k$-graph. If $x\in\Lambda^{\leqslant\infty}$, there exists $n\in\mathbb N^k$ with $n\leqslant d(x)$ such that for any $m\in\mathbb N^k$ with $m\geqslant n$, $x\big(m\wedge d(x)\big)\Lambda^{e_i}=\emptyset$ for every $i$ with $d(x)_i<\infty$. If $y\in x\big(m\wedge d(x)\big)\partial\Lambda$, then \[ \kappa(x)\big(m\wedge d(x), m\big)=\kappa(y)\big(0,m-m\wedge d(x)\big). \] \begin{proof} Since $x\in\Lambda^{\leqslant\infty}$, there exists $n\in\mathbb N^k$ with $n\leqslant d(x)$ such that for any $l\in\mathbb N^k$ and $1\leqslant i\leqslant k$ such that $n\leqslant l\leqslant d(x)$ and $l_i=d(x)_i$, the set $x(l)\Lambda^{e_i}$ is empty. Increase $n$ so that $n_i=d(x)_i$ for every $i$ with $d(x)_i<\infty$. Then for every $m\in\mathbb N^k$ with $m\geqslant n$, the set $x\big(m\wedge d(x)\big)\Lambda^{e_i}$ is empty for every $i$ with $d(x)_i<\infty$. Fix $m\in\mathbb N^k$ such that $m\geqslant n$ and suppose $y\in x\big(m\wedge d(x)\big)\partial\Lambda$. Let $z=x\big(0,m\wedge d(x)\big)y$. In order to show that $\kappa(x)(0,m)=\kappa(z)(0,m)$, it suffices to show that $x\big(0,m\wedge d(x)\big)=z\big(0,m\wedge d(z)\big)$. By the definition of $z$, we know $z\big(0,m\wedge d(x)\big)=x\big(0,m\wedge d(x)\big)$, so it will suffice to show that $m\wedge d(z)=m\wedge d(x)$. Suppose $m\wedge d(z)\ne m\wedge d(x)$. We know by the construction of $z$ that $d(z)\geqslant m\wedge d(x)$, so there exists $i$ such that \begin{equation}\label{frequently_divertable_eqn1} m\wedge d(z)\geqslant m\wedge d(x)+e_i. \end{equation} Since $\big(m\wedge d(x)\big)_i$ equals $m_i$ if $d(x)_i=\infty$ and equals $d(x)_i$ if $d(x)_i<\infty$, equation \eqref{frequently_divertable_eqn1} shows that we must have $d(x)_i<\infty$. Now \[ z\big(m\wedge d(x),m\wedge d(x)+e_i\big)\in x\big(m\wedge d(x)\big)\Lambda^{e_i}, \] but no such path can exist since we know $m\geqslant n$ and $\big(m\wedge d(x)\big)_i=d(x)_i$, which implies $x\big(m\wedge d(x)\big)\Lambda^{e_i}=\emptyset$. This contradiction shows that $m\wedge d(z)=m\wedge d(x)$, so it follows from the construction of $z$ that $z\big(0,m\wedge d(z)\big)=x\big(0,m\wedge d(x)\big)$, and we can use this to see that $\kappa(z)(0,m)=\kappa(x)(0,m)$.
Now $y=\sigma^{m\wedge d(x)}(z)$, so \begin{align*} \kappa(y)\big(0,m-m\wedge d(x)\big)&=\kappa\big(\sigma^{m\wedge d(x)}(z)\big)\big(0,m-m\wedge d(x)\big)\\ &=\sigma^{m\wedge d(x)}\big(\kappa(z)\big)\big(0,m-m\wedge d(x)\big)\quad\text{(by Lemma \ref{lemma_sigma_kappa})}\\ &=\kappa(z)\big(m\wedge d(x),m\big)=\kappa(x)\big(m\wedge d(x),m\big).\qedhere \end{align*} \end{proof} \end{lemma} \begin{definition}\label{def_frequently_divertable_boundary_paths}\index{frequently divertable!boundary paths in a $k$-graph} Suppose $\Lambda$ is a $k$-graph. For $x,y\in \partial\Lambda$, we say $x$ is {\em frequently divertable} to $[y]$ if for every $n\in\mathbb N^k$ with $n\leqslant d(x)$ there is a path in $x(n)\partial\Lambda$ that is shift equivalent to $y$. \end{definition} The frequently divertable property is preserved by $\kappa$: \needspace{4\baselineskip} \begin{lemma}\label{lemma_frequently_divertable_preservation} Suppose $\Lambda$ is a row-finite $k$-graph and $x,y\in\partial\Lambda$. Consider \begin{enumerate} \item\label{lemma_frequently_divertable_preservation1} $x$ is frequently divertable to $[y]$; and \item\label{lemma_frequently_divertable_preservation2} $\kappa(x)$ is frequently divertable to $[\kappa(y)]$. \end{enumerate} Then \eqref{lemma_frequently_divertable_preservation2} implies \eqref{lemma_frequently_divertable_preservation1} and, if in addition $x\in\Lambda^{\leqslant\infty}$, then \eqref{lemma_frequently_divertable_preservation1} implies \eqref{lemma_frequently_divertable_preservation2}. \begin{proof} \eqref{lemma_frequently_divertable_preservation2}$\implies$\eqref{lemma_frequently_divertable_preservation1}. Suppose \eqref{lemma_frequently_divertable_preservation2} and fix $n\leqslant d(x)$. Then there exists $a\in \kappa(x)(n)\widetilde\Lambda^\infty$ such that $a$ is shift equivalent to $\kappa(y)$. Then $r(a)\in\iota(\Lambda^0)$ since $r(a)=\kappa(x)(n)$ and $n\leqslant d(x)$. Since $\kappa$ is a bijection onto $\iota(\Lambda^0)\widetilde\Lambda^\infty$ (Lemma \ref{lemma_kappa_bijection}), there exists $z\in\partial\Lambda$ such that $a=\kappa(z)$. Now $r\big(\kappa(z)\big)=r(a)=\kappa(x)(n)$ implies $\kappa(z)(0)=\kappa(x)(n)$, that is, $\iota\big(z(0)\big)=\iota\big(x(n)\big)$, and since $\iota$ is injective, $r(z)=z(0)=x(n)$. Thus $z\in x(n)\partial\Lambda$. Since $\kappa(z)\in [\kappa(y)]$, $\kappa(z)\sim\kappa(y)$ and so we know $z\sim y$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}. Thus $z\in x(n)\partial\Lambda\cap[y]$ and \eqref{lemma_frequently_divertable_preservation1} follows since $n$ was chosen arbitrarily. \eqref{lemma_frequently_divertable_preservation1}$\implies$\eqref{lemma_frequently_divertable_preservation2} assuming $x\in\Lambda^{\leqslant\infty}$. Suppose \eqref{lemma_frequently_divertable_preservation1}. Since $x\in\Lambda^{\leqslant\infty}$, we can let $n$ be the element of $\mathbb N^k$ as in Lemma \ref{lemma_property_of_old_boundary_path}. Fix $m\in\mathbb N^k$ so that $m\geqslant n$. Since $x$ is frequently divertable to $[y]$, there exists $z\in x\big(m\wedge d(x)\big)\partial\Lambda\cap [y]$. Now Lemma \ref{lemma_property_of_old_boundary_path} gives \[ \kappa(z)\big(0,m-m\wedge d(x)\big)=\kappa(x)\big(m\wedge d(x),m\big), \] so $\sigma^{m-m\wedge d(x)}\big(\kappa(z)\big)\in \kappa(x)(m)\widetilde{\Lambda}^\infty$. Since $z\sim y$, $\sigma^{m-m\wedge d(x)}\big(\kappa(z)\big)$ is shift equivalent to $\kappa(y)$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}. Condition \eqref{lemma_frequently_divertable_preservation2} follows since $m$ was chosen to be an arbitrary element of $\mathbb N^k$ with $m\geqslant n$.
\end{proof} \end{lemma} \begin{remark}\label{remark_annoying_example_nearly_breaks_lemma} Statement \eqref{lemma_frequently_divertable_preservation1} does not imply \eqref{lemma_frequently_divertable_preservation2} in general. To see this, consider Example \ref{example_annoying_k-graph}. Let $y$ be the path of degree $(\infty,1)$ with range $v_0$. Then $x$ and $y$ are both boundary paths and $x$ is frequently divertable to $[y]$. But by examining the desourcification presented in Figure \ref{figure_desourcification} it can be seen that $\kappa(x)$ is not frequently divertable to $[\kappa(y)]$ since there is no path in $\kappa(x)(0,1)\widetilde{\Lambda}^\infty$ which is shift equivalent to $\kappa(y)$. \end{remark} \section{Generalising results with the Farthing-Webster desourcification}\label{sec_desourcification} Recall from the previous section that if $\widetilde\Lambda$ is the Farthing-Webster desourcification of a row-finite $k$-graph $\Lambda$, then $C^*(\widetilde\Lambda)$ is Morita equivalent to $C^*(\Lambda)$. Since the liminal, postliminal, Fell, bounded- and continuous-trace properties of $C^*$-algebras are preserved under Morita equivalence (Theorem \ref{thm_Morita_equivalence_results}), the theorems in Sections \ref{sec_liminal_postliminal} and \ref{sec_bded-trace_cts-trace_Fell}, which require the $k$-graph to have no sources, can be applied to $\widetilde\Lambda$ rather than $\Lambda$ in order to determine which of the liminal, postliminal, Fell, bounded- and continuous-trace properties are satisfied by $C^*(\Lambda)$ (the latter three properties also requiring that the path groupoid associated to $\widetilde\Lambda$ is principal). Rather than having to consider the desourcification of each $k$-graph to apply these theorems, the theorems in this section provide conditions on $\Lambda$ that are necessary and sufficient for the conditions in the theorems in Sections \ref{sec_liminal_postliminal} and \ref{sec_bded-trace_cts-trace_Fell} to hold on $\widetilde\Lambda$, thus providing characterisations for row-finite $k$-graphs that may have sources. \begin{theorem}\label{k-graph_liminal_thm2} Suppose $\Lambda$ is a row-finite $k$-graph and consider the statements \begin{enumerate} \item\label{k-graph_liminal_thm2_1} for every $x\in\partial\Lambda$, every path in $\partial\Lambda$ that is frequently divertable to $[x]$ is shift equivalent to $x$; and \item\label{k-graph_liminal_thm2_2} $C^*(\Lambda)$ is liminal. \end{enumerate} Then \eqref{k-graph_liminal_thm2_1} implies \eqref{k-graph_liminal_thm2_2} and, if in addition $\partial\Lambda=\Lambda^{\leqslant\infty}$, then \eqref{k-graph_liminal_thm2_2} implies \eqref{k-graph_liminal_thm2_1}. \begin{proof} Suppose \eqref{k-graph_liminal_thm2_1}. We begin by showing that $\widetilde{\Lambda}$ satisfies condition \eqref{k-graph_liminal_thm1_2} from Theorem \ref{k-graph_liminal_thm1}. Fix $a\in\widetilde{\Lambda}^\infty$ and suppose that $b\in\widetilde{\Lambda}^\infty$ is frequently divertable to $[a]$. By Lemma \ref{lemma_boundary_path_associated_to_infinite_path} there exist $x,y\in\partial\Lambda$ and $n,m\in\mathbb N^k$ such that $a=\sigma^n\big(\kappa(x)\big)$ and $b=\sigma^m\big(\kappa(y)\big)$. Then $\kappa(y)$ is frequently divertable to $[\kappa(x)]$ and Lemma \ref{lemma_frequently_divertable_preservation} shows that $y$ is frequently divertable to $[x]$. Then $y$ is shift equivalent to $x$ by \eqref{k-graph_liminal_thm2_1}.
Lemma \ref{lemma_desource_shift_equivalence_preserved} now shows us that $\kappa(y)$ is shift equivalent to $\kappa(x)$ and it follows that $b$ is shift equivalent to $a$. We have thus shown that $\widetilde{\Lambda}$ satisfies condition \eqref{k-graph_liminal_thm1_2} from Theorem \ref{k-graph_liminal_thm1}, so $C^*(\widetilde{\Lambda})$ is liminal. Then $C^*(\Lambda)$ is liminal by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~2]{anHuef-Raeburn-Williams2007}). Suppose $\partial\Lambda=\Lambda^{\leqslant\infty}$ and that $C^*(\Lambda)$ is liminal. Fix $x\in\partial\Lambda$ and suppose $y\in\partial\Lambda$ is frequently divertable to $[x]$. Then $\kappa(y)$ is frequently divertable to $[\kappa(x)]$ by Lemma \ref{lemma_frequently_divertable_preservation}. We know that $\widetilde{\Lambda}$ is row-finite and has no sources so, since in addition $C^*(\widetilde{\Lambda})$ is liminal by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~2]{anHuef-Raeburn-Williams2007}), we can apply Theorem \ref{k-graph_liminal_thm1} to see that $\kappa(x)$ is shift equivalent to $\kappa(y)$. Lemma \ref{lemma_desource_shift_equivalence_preserved} now shows that $x$ is shift equivalent to $y$, establishing \eqref{k-graph_liminal_thm2_1}. \end{proof} \end{theorem} \begin{example} The $2$-graph $C^*$-algebra $C^*(\Omega_{2,(3,2)})$ is liminal: in Example \ref{example_boundary_paths_in_Omega_2_32} we saw that $\partial\Omega_{2,(3,2)}=\braces{\alpha\in\Omega_{2,(3,2)}:s(\alpha)=(3,2)}$, so every path in $\partial\Omega_{2,(3,2)}$ is shift equivalent to every other path in $\partial\Omega_{2,(3,2)}$. It follows that $\Omega_{2,(3,2)}$ satisfies condition \eqref{k-graph_liminal_thm2_1} from Theorem \ref{k-graph_liminal_thm2} and so $C^*(\Omega_{2,(3,2)})$ is liminal. \end{example} \begin{example}\label{example_annoying_k-graph_is_liminal} Suppose $\Lambda$ is the $2$-graph from Example \ref{example_annoying_k-graph}. Let $x$ be the path mentioned in Example \ref{example_annoying_k-graph} and, as in Remark \ref{remark_annoying_example_nearly_breaks_lemma}, let $y$ be the unique path in $v_0\partial\Lambda$ with degree $(\infty,1)$. Since $x$ is frequently divertable to $[y]$ with $x$ not shift equivalent to $y$, it follows that $\Lambda$ does not satisfy condition \eqref{k-graph_liminal_thm2_1} of Theorem \ref{k-graph_liminal_thm2}. The key point is that $\partial\Lambda\ne\Lambda^{\leqslant\infty}$, so we cannot use Theorem \ref{k-graph_liminal_thm2} to deduce that $C^*(\Lambda)$ is not liminal. In fact $C^*(\Lambda)$ is liminal. To see this, a careful inspection of $\Lambda$ reveals that the skeleton of the desourcification $\widetilde\Lambda$ can be pictured by Figure \ref{figure_desourcification}.
\begin{figure} \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \def\cellwidth{5.5}; \clip (-0.9em,-13.1em) rectangle (20.5em, 4.6em); \fill [white] (-20em,-20em) rectangle (80em,80em); \foreach \x in {0,1,2,3} \foreach \y in {-2,-1,1} { \node (x\x y\y) at (\cellwidth*\x em,\y*3.5em) {}; \fill[black] (x\x y\y) circle (0.15em); } \foreach \x in {0,1,2,3} \node (x\x y0) at (\cellwidth*\x em,0em) {$\scriptstyle \iota(v_\x)$}; \foreach \x / \z in {0/1,1/2,2/3} \draw[black,<-] (x\x y0) to node[above] {$\scriptstyle \iota(x_\z)$} (x\z y0); \foreach \x / \z in {0/1,1/2,2/3} \foreach \y in {1,-1,-2} \draw[black,<-] (x\x y\y) to (x\z y\y); \foreach \x in {0,1,2,3} \foreach \y / \z in {0/1,0/-1,-1/-2} \draw[dashed,black,<-] (x\x y\y) to (x\x y\z); \foreach \x in {0,1,2,3} \foreach \a / \y in {0/0,1/-1,2/-2} { \node(x\x a\a) at (\cellwidth*\x em+0.3*\cellwidth em,-2.5em-3.5*\a em) {}; \fill[black] (x\x a\a) circle (0.15em); \draw [<-] (x\x y\y) to (x\x a\a); } \foreach \x in {0,1,2,3} \node [above right=-0.3em] at ($ (x\x y0)!.7!(x\x a0) $) {$\scriptstyle \iota(f_\x)$}; \foreach \x in {0,1,2,3} \foreach \b / \y in {0/0,1/-1,2/-2} { \node (x\x b\b) at ($ (x\x y\y)!2!(x\x a\b) $) {}; \fill[black] (x\x b\b) circle (0.15em); \draw [white,ultra thick] (x\x a\b) to (x\x b\b); \draw [<-] (x\x a\b) to (x\x b\b); } \foreach \x in {0,1,2,3} \foreach \c / \y in {0/0,1/-1,2/-2} { \node (x\x c\c) at ($ (x\x y\y)!2.50!(x\x a\c) $) {}; \draw [dotted, thick] (x\x b\c) to (x\x c\c); } \foreach \x in {0,1,2,3} \foreach \a / \d in {0/1,1/2} { \draw [white, ultra thick] (x\x a\a) to (x\x a\d); \draw [<-,dashed] (x\x a\a) to (x\x a\d); \draw [white, ultra thick] (x\x b\a) to (x\x b\d); \draw [<-,dashed] (x\x b\a) to (x\x b\d); } \foreach \x in {0,1,2,3} { \node (x\x aend) at ($ (x\x a0)!2.4!(x\x a1) $) {}; \node (x\x bend) at ($ (x\x b0)!2.4!(x\x b1) $) {}; \node (x\x ypend) at ($ (x\x y0)!1.4!(x\x y1) $) {}; \node (x\x ynend) at ($ (x\x y0)!2.4!(x\x y-1) $) {}; \draw [dotted, thick] (x\x a2) to (x\x aend); \draw [dotted, thick] (x\x b2) to (x\x bend); \draw [dotted, thick] (x\x y1) to (x\x ypend); \draw [dotted, thick] (x\x y-2) to (x\x ynend); } \foreach \y in {1,0,-1,-2} { \node (xendy\y) at ($ (x3y\y.east) + (1em,0em) $) {}; \draw [dotted, thick] (x3y\y) to (xendy\y); } \end{tikzpicture} \end{center} \vspace{-1em} \addtocounter{theorem}{1} \caption{The desourcification of the $2$-graph from Example \ref{example_annoying_k-graph}.} \label{figure_desourcification} \end{figure} We can see that $\widetilde\Lambda$ satisfies condition \eqref{k-graph_liminal_thm2_1} from Theorem \ref{k-graph_liminal_thm2} or condition \eqref{k-graph_liminal_thm1_2} from Theorem \ref{k-graph_liminal_thm1}, so $C^*(\widetilde\Lambda)$ is liminal. Then $C^*(\Lambda)$ is liminal by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~2]{anHuef-Raeburn-Williams2007}). In fact, $C^*(\Lambda)$ has continuous trace, and is thus liminal, by Theorem \ref{thm_cts-trace}.\end{example} \begin{theorem}\label{k-graph_postliminal_thm2} Suppose $\Lambda$ is a row-finite $k$-graph and consider the statements \begin{enumerate} \item\label{k-graph_postliminal_thm2_1} for every $x\in\partial\Lambda$ there exists $n\in\mathbb N^k$ with $n\leqslant d(x)$ such that every path in $x(n)\partial\Lambda$ that is frequently divertable to $[x]$ is shift equivalent to $x$; and \item\label{k-graph_postliminal_thm2_2} $C^*(\Lambda)$ is postliminal. 
\end{enumerate} Then \eqref{k-graph_postliminal_thm2_1} implies \eqref{k-graph_postliminal_thm2_2} and, if in addition $\partial\Lambda=\Lambda^{\leqslant\infty}$, then \eqref{k-graph_postliminal_thm2_2} implies \eqref{k-graph_postliminal_thm2_1}. \begin{proof} Suppose \eqref{k-graph_postliminal_thm2_1}. We will first show that condition \eqref{k-graph_postliminal_thm1_2} from Theorem \ref{k-graph_postliminal_thm1} holds for the desourcification $\widetilde{\Lambda}$ of $\Lambda$. Fix $a\in\widetilde{\Lambda}^\infty$. By Lemma \ref{lemma_boundary_path_associated_to_infinite_path} there exist $x\in\partial\Lambda$ and $m\in\mathbb N^k$ such that $a=\sigma^m\big(\kappa(x)\big)$. By \eqref{k-graph_postliminal_thm2_1} there exists $n\in\mathbb N^k$ with $n\leqslant d(x)$ such that every path in $x(n)\partial\Lambda$ that is frequently divertable to $[x]$ is shift equivalent to $x$. Now suppose $b\in a(n)\widetilde{\Lambda}^\infty$ is frequently divertable to $[a]$. Note that $a(n)=\kappa(x)(m+n)$. Since $n\leqslant d(x)$, $\kappa(x)(n)\in\iota(\Lambda^0)$, and there exists $y\in x(n)\partial\Lambda$ such that $\kappa(y)=\kappa(x)(n,n+m)b$. Note that $\sigma^m\big(\kappa(y)\big)=b$. Since $b$ is frequently divertable to $[a]$, $\sigma^m\big(\kappa(y)\big)$ is frequently divertable to $[\kappa(x)]$ and it follows that $\kappa(y)$ is frequently divertable to $[\kappa(x)]$. Now $y$ is frequently divertable to $[x]$ by Lemma \ref{lemma_frequently_divertable_preservation} and, since in addition $y\in x(n)\partial\Lambda$, $y$ is shift equivalent to $x$. Lemma \ref{lemma_desource_shift_equivalence_preserved} now shows that $\kappa(y)$ is shift equivalent to $\kappa(x)$, so $b$ is shift equivalent to $a$, and condition \eqref{k-graph_postliminal_thm1_2} from Theorem \ref{k-graph_postliminal_thm1} holds for $\widetilde{\Lambda}$. We can now apply Theorem \ref{k-graph_postliminal_thm1} to see that $C^*(\widetilde{\Lambda})$ is postliminal and so $C^*(\Lambda)$ is postliminal by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~2]{anHuef-Raeburn-Williams2007}). Now suppose $\partial\Lambda=\Lambda^{\leqslant\infty}$, suppose that \eqref{k-graph_postliminal_thm2_2} holds, and fix $x\in\partial\Lambda$. Let $n$ be the element of $\mathbb N^k$ as in Lemma \ref{lemma_property_of_old_boundary_path}. We know $C^*(\widetilde{\Lambda})$ is postliminal by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~2]{anHuef-Raeburn-Williams2007}), so we can apply Theorem \ref{k-graph_postliminal_thm1} to see that there exists $m\in\mathbb N^k$ such that every path in $\kappa(x)(m)\widetilde{\Lambda}^\infty$ that is frequently divertable to $[\kappa(x)]$ is shift equivalent to $\kappa(x)$. We may increase $m$ to ensure that $m\geqslant n$ while preserving this defining property. Suppose $y\in x\big(m\wedge d(x)\big)\partial\Lambda$ is frequently divertable to $[x]$. Then $\kappa(y)$, and hence $\sigma^{m-m\wedge d(x)}\big(\kappa(y)\big)$, is frequently divertable to $[\kappa(x)]$ by Lemma \ref{lemma_frequently_divertable_preservation}. Since $\kappa(y)\in \kappa(x)\big(m\wedge d(x)\big)\widetilde{\Lambda}^\infty$, it follows from Lemma \ref{lemma_property_of_old_boundary_path} that \[ \sigma^{m-m\wedge d(x)}\big(\kappa(y)\big)\in\kappa(x)(m)\widetilde{\Lambda}^\infty, \] and so $\sigma^{m-m\wedge d(x)}\big(\kappa(y)\big)$ is shift equivalent to $\kappa(x)$ by our choice of $m$.
Then $\kappa(y)$ is shift equivalent to $\kappa(x)$, so $y$ is shift equivalent to $x$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}, establishing \eqref{k-graph_postliminal_thm2_1}. \end{proof} \end{theorem} We will now use the results from Section \ref{sec_bded-trace_cts-trace_Fell} to describe when row-finite $k$-graphs which may have sources have $C^*$-algebras that have bounded trace, are Fell, and have continuous trace, respectively. The results in Section \ref{sec_bded-trace_cts-trace_Fell} require the path groupoid to be principal, but here we will instead require that the path groupoid associated to the desourcification is principal. The following lemma describes precisely when this is the case. \begin{lemma} Suppose $\Lambda$ is a row-finite $k$-graph. The path groupoid associated to the desourcification $\widetilde{\Lambda}$ of $\Lambda$ is principal if and only if $m=0$ whenever a path in $\partial\Lambda$ is shift equivalent to itself with lag $m$. \begin{proof} Suppose the path groupoid associated to $\widetilde{\Lambda}$ is principal and suppose $x\sim_m x$ for some $x\in\partial\Lambda$ and $m\in\mathbb Z^k$. Then $\kappa(x)\sim_m\kappa(x)$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}, and hence $m=0$. Now suppose that $m=0$ whenever a path in $\partial\Lambda$ is shift equivalent to itself with lag $m$. Suppose $a\in\widetilde{\Lambda}^\infty$ satisfies $a\sim_m a$ for some $m\in\mathbb Z^k$. By Lemma \ref{lemma_boundary_path_associated_to_infinite_path} there exist $l\in\mathbb N^k$ and $x\in\partial\Lambda$ such that $a=\sigma^l\big(\kappa(x)\big)$. Then $\sigma^l\big(\kappa(x)\big)\sim_m \sigma^l\big(\kappa(x)\big)$, so $\kappa(x)\sim_m\kappa(x)$ and $x\sim_m x$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}. Then $m=0$ by our assumption, so the path groupoid associated to $\widetilde{\Lambda}$ is principal by Lemma \ref{lemma_principal_iff_no_semi-periodic_paths}. \end{proof} \end{lemma} \needspace{4\baselineskip} \begin{theorem}\label{thm_bounded_trace} Suppose $\Lambda$ is a row-finite $k$-graph such that $n=0$ whenever a path in $\partial\Lambda$ is shift equivalent to itself with lag $n$. Then $C^*(\Lambda)$ has bounded trace if and only if for every $v\in\Lambda^0$ there exists $M\in\mathbb N$ such that for every $x\in\partial\Lambda$ there are at most $M$ paths in $\partial\Lambda$ with range $v$ that are shift equivalent to $x$. \begin{proof} Suppose $C^*(\Lambda)$ has bounded trace and fix $v\in\Lambda^0$. Then $C^*(\widetilde{\Lambda})$ has bounded trace since this property is preserved by Morita equivalence (see \cite[Proposition~7]{anHuef-Raeburn-Williams2007}). Since in addition $\widetilde\Lambda$ has no sources, by Theorem \ref{thm_integrable_bounded-trace} there exists $M\in\mathbb N$ such that for every $a\in\widetilde{\Lambda}^\infty$ there are at most $M$ paths in $\widetilde{\Lambda}^\infty$ with range $\iota(v)$ that are shift equivalent to $a$. Fix $x\in\partial\Lambda$ and let $S$ be the set of all paths in $\partial\Lambda$ that are shift equivalent to $x$ and have range $v$. Then every path in $\kappa(S)$ is shift equivalent to $\kappa(x)$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}. In addition, for every $y\in S$ we have $r(y)=v$, so $r\big(\kappa(y)\big)=\iota(v)$, and $\kappa(S)$ has at most $M$ elements by our earlier application of Theorem \ref{thm_integrable_bounded-trace}. Since $\kappa$ is injective by Lemma \ref{lemma_kappa_bijection}, we may conclude that $S$ has at most $M$ elements.
We now prove the converse. Suppose that for every $v\in\Lambda^0$ there exists $M_v\in\mathbb N$ as in the statement of the theorem. Fix $u\in\widetilde{\Lambda}^0$. By the construction of $\widetilde\Lambda$, there exist $y\in\partial\Lambda$ and $n\in\mathbb N^k$ such that $u=\kappa(y)(n)$. Fix $a\in\widetilde{\Lambda}^\infty$. By Lemma \ref{lemma_boundary_path_associated_to_infinite_path} we may assume without loss of generality that $a=\kappa(x)$ for some $x\in\partial\Lambda$. Let $S$ be the set of all paths in $\partial\Lambda$ with range $r(y)$ that are shift equivalent to $x$ and let $T$ be the set of all paths in $\widetilde{\Lambda}^\infty$ with range $u$ that are shift equivalent to $\kappa(x)$. Fix $b\in T$. Then $\kappa(y)(0,n)b\in\iota(\Lambda^0)\widetilde{\Lambda}^\infty$ so, since $\kappa$ is a bijection onto $\iota(\Lambda^0)\widetilde\Lambda^\infty$ (Lemma \ref{lemma_kappa_bijection}), there exists $z\in\partial\Lambda$ such that $\kappa(y)(0,n)b=\kappa(z)$. Furthermore, since $b\in T$, we have $b\sim\kappa(x)$, so both $\kappa(y)(0,n)b$ and $\kappa(z)$ are shift equivalent to $\kappa(x)$. Now $r\big(\kappa(y)(0,n)b\big)=\iota\big(r(y)\big)$, so $\braces{\kappa(y)(0,n)b:b\in T}\subset \kappa(S)$. Since $T$ and $\braces{\kappa(y)(0,n)b:b\in T}$ have the same number of elements, and since $\kappa$ is injective (Lemma \ref{lemma_kappa_bijection}), we have $\#T\leqslant\#\kappa(S)=\# S\leqslant M_{r(y)}$. We arbitrarily fixed $u\in\widetilde{\Lambda}^0$, so condition \eqref{k-thm_integrable2} of Theorem \ref{thm_integrable_bounded-trace} holds by taking $M=M_{r(y)}$. Since $\widetilde\Lambda$ has no sources, Theorem \ref{thm_integrable_bounded-trace} now implies that $C^*(\widetilde{\Lambda})$ has bounded trace, and it follows that $C^*(\Lambda)$ has bounded trace by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~7]{anHuef-Raeburn-Williams2007}). \end{proof} \end{theorem} \begin{lemma}\label{lemma_cts-trace_Fell_desourcification} Suppose $\Lambda$ is a row-finite $k$-graph and $W\subset\widetilde\Lambda^0$ is finite. For each $w\in W$, suppose $z^{(w)}\in\partial\Lambda$ and $n^{(w)}\in\mathbb N^k$ satisfy $w=\kappa(z^{(w)})(n^{(w)})$. Let $V=\braces{r(z^{(w)}):w\in W}$. If there exists a finite $F\subset\Lambda$ such that for every $x,y\in V\partial\Lambda$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$, then there exists a finite $E\subset\widetilde\Lambda$ such that for every $a,b\in W\widetilde\Lambda^\infty$ with $a\sim b$, the pair $(a,b)$ is a monolithic extension of a pair in $E\times E$. \begin{proof} Suppose $F$ is a finite subset of $\Lambda$ as in the statement of the lemma. Let $N=\vee_{w\in W}n^{(w)}$ and define \[ E=\Big\{\big(\iota(\eta)\delta\big)\Big(n^{(w)},d\big(\iota(\eta)\delta\big)\Big):\eta\in F, w\in W, \delta\in s\big(\iota(\eta)\big)\widetilde\Lambda^N\Big\}, \] noting that $E$ is finite since $F$ and $W$ are finite and $\widetilde\Lambda$ is row-finite. Fix $a,b\in W\widetilde\Lambda^\infty$ with $a\sim b$, and let \[\phi=\kappa(z^{(r(a))})(0,n^{(r(a))})\quad\text{and}\quad\psi=\kappa(z^{(r(b))})(0,n^{(r(b))}).\] Then $\phi a,\psi b\in\iota(V)\widetilde\Lambda^\infty$ so, since $\kappa$ is a bijection onto $\iota(\Lambda^0)\widetilde\Lambda^\infty$ (Lemma \ref{lemma_kappa_bijection}), there exist $x,y\in V\partial\Lambda$ such that $\kappa(x)=\phi a$ and $\kappa(y)=\psi b$.
Since $a\sim b$, we have $x\sim y$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}. By our assumption there exist $\eta,\zeta\in F$ such that $(x,y)$ is a monolithic extension of $(\eta,\zeta)$. Then $\big(\kappa(x),\kappa(y)\big)$ is a monolithic extension of $\big(\iota(\eta),\iota(\zeta)\big)$. Let $\delta=\kappa(x)\big(d(\eta),d(\eta)+N\big)$, so that $\big(\kappa(x),\kappa(y)\big)=(\phi a,\psi b)$ is a monolithic extension of $\big(\iota(\eta)\delta,\iota(\zeta)\delta\big)$ with $d\big(\iota(\eta)\delta\big)\geqslant n^{(r(a))}=d(\phi)$ and $d\big(\iota(\zeta)\delta\big)\geqslant n^{(r(b))}=d(\psi)$. Then $(a,b)$ is a monolithic extension of \[ \bigg(\big(\iota(\eta)\delta\big)\Big(n^{(r(a))},d\big(\iota(\eta)\delta\big)\Big),\big(\iota(\zeta)\delta\big)\Big(n^{(r(b))},d\big(\iota(\zeta)\delta\big)\Big)\bigg)\in E\times E.\qedhere \] \end{proof} \end{lemma} \begin{lemma}\label{lemma_transfer_ME_from_desourcification} Let $\Lambda$ be a row-finite $k$-graph and suppose that $\big(\kappa(x),\kappa(y)\big)$ is a monolithic extension of $\big(\kappa(f)(0,m),\kappa(g)(0,n)\big)$ for some $x,y,f,g\in\partial\Lambda$ and $m,n\in\mathbb N^k$. Then $(x,y)$ is a monolithic extension of $\big(f(0,m\wedge d(f)),g(0,n\wedge d(g))\big)$. \begin{proof} We have $\sigma^m\big(\kappa(x)\big)=\sigma^n\big(\kappa(y)\big)$, so $\kappa(x)(m)=\kappa(y)(n)$ and by \eqref{V1} and \eqref{V2} we can see that $x\big(m\wedge d(x)\big)=y\big(n\wedge d(y)\big)$ and $m-m\wedge d(x)=n-n\wedge d(y)$. This is enough to show that $\kappa(x)\big(m\wedge d(x),m\big)=\kappa(y)\big(n\wedge d(y),n\big)$ by considering \eqref{P1}, \eqref{P2} and \eqref{P3}. Thus $\sigma^{m\wedge d(x)}\big(\kappa(x)\big)=\sigma^{n\wedge d(y)}\big(\kappa(y)\big)$, and so $\kappa\big(\sigma^{m\wedge d(x)}(x)\big)=\kappa\big(\sigma^{n\wedge d(y)}(y)\big)$ by Lemma \ref{lemma_sigma_kappa}. For every $p\in\mathbb N^k$ we now have $\kappa\big(\sigma^{m\wedge d(x)}(x)\big)(0,p)=\kappa\big(\sigma^{n\wedge d(y)}(y)\big)(0,p)$, so \[ \sigma^{m\wedge d(x)}(x)\Big(0,p\wedge d\big(\sigma^{m\wedge d(x)}(x)\big)\Big)=\sigma^{n\wedge d(y)}(y)\Big(0,p\wedge d\big(\sigma^{n\wedge d(y)}(y)\big)\Big), \] and it follows that $\sigma^{m\wedge d(x)}(x)=\sigma^{n\wedge d(y)}(y)$. Since $\kappa(x)(0,m)=\kappa(f)(0,m)$, we have $x\big(0,m\wedge d(x)\big)=f\big(0,m\wedge d(f)\big)$ by \eqref{P1} and similarly $y\big(0,n\wedge d(y)\big)=g\big(0,n\wedge d(g)\big)$. The result follows. \end{proof} \end{lemma} \begin{theorem}\label{thm_Fell} Suppose $\Lambda$ is a row-finite $k$-graph such that $m=0$ whenever a path in $\partial\Lambda$ is shift equivalent to itself with lag $m$. Then $C^*(\Lambda)$ is Fell if for every $z\in\partial\Lambda$ there exist $p\in\mathbb N^k$ with $p\leqslant d(z)$ and a finite $F\subset\Lambda$ such that for any $x,y\in z(p)\partial\Lambda$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. The converse is true if $\partial\Lambda=\Lambda^{\leqslant\infty}$. \begin{proof} We will first show that $C^*(\widetilde\Lambda)$ is Fell by showing that $\widetilde\Lambda$ satisfies condition \eqref{thm_Cartan_Fell_2} of Theorem \ref{thm_Cartan_Fell}. Fix $a\in\widetilde\Lambda^\infty$. By Lemma \ref{lemma_boundary_path_associated_to_infinite_path} there exist $z\in\partial\Lambda$ and $m\in\mathbb N^k$ such that $a=\sigma^m\big(\kappa(z)\big)$. Let $p\in\mathbb N^k$ and $F\subset\Lambda$ be as in the hypothesis when applied to $z$. Let $W=\big\{\kappa(z)(m\vee p)\big\}$ and $V=\braces{z(p)}$. 
By noting that $W=\big\{\kappa\big(\sigma^p(z)\big)(m\vee p -p)\big\}$ by Lemma \ref{lemma_sigma_kappa} and that $V=\big\{r\big(\sigma^p(z)\big)\big\}$, we can apply Lemma \ref{lemma_cts-trace_Fell_desourcification} to see that there exists a finite $E\subset\widetilde\Lambda$ such that for any $b,c\in W\widetilde\Lambda^\infty$ with $b\sim c$, the pair $(b,c)$ is a monolithic extension of a pair in $E\times E$. Since Lemma \ref{lemma_sigma_kappa} gives us \[ \kappa\big(\sigma^p(z)\big)(m\vee p-p)=\kappa(z)(m\vee p)=\sigma^m\big(\kappa(z)\big)(m\vee p-m)=a(m\vee p-m), \] we can see that $W=\braces{a(m\vee p-m)}$ and thus that $\widetilde\Lambda$ satisfies condition \eqref{thm_Cartan_Fell_2} of Theorem \ref{thm_Cartan_Fell}. As we also know that $\widetilde\Lambda$ has no sources, $C^*(\widetilde\Lambda)$ is Fell by Theorem \ref{thm_Cartan_Fell}. Then $C^*(\Lambda)$ is Fell by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Corollary~14]{anHuef-Raeburn-Williams2007}). We now prove the converse. Suppose $\partial\Lambda=\Lambda^{\leqslant\infty}$, $C^*(\Lambda)$ is Fell and fix $z\in\partial\Lambda$. Since $\partial\Lambda=\Lambda^{\leqslant\infty}$, by Lemma \ref{lemma_property_of_old_boundary_path} there exists $p\in\mathbb N^k$ with $p\leqslant d(z)$ such that for any $q\in\mathbb N^k$ with $q\geqslant p$ and $x\in z\big(q\wedge d(z)\big)\partial\Lambda$, \begin{equation}\label{Fell_eqn1} \kappa(z)\big(q\wedge d(z),q\big)=\kappa(x)\big(0,q-q\wedge d(z)\big). \end{equation} We know $C^*(\widetilde{\Lambda})$ is Fell by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Corollary~14]{anHuef-Raeburn-Williams2007}). Since in addition $\widetilde\Lambda$ has no sources, Theorem \ref{thm_Cartan_Fell} can be used to see that for the path $\sigma^{p}\big(\kappa(z)\big)\in\widetilde\Lambda^\infty$ there exist $l\in\mathbb N^k$ and a finite $E\subset\widetilde\Lambda$ that satisfy condition \eqref{thm_Cartan_Fell_2} from Theorem \ref{thm_Cartan_Fell}. Let $m=p+l$. Then for every $a,b\in \kappa(z)(m)\widetilde\Lambda^\infty$ with $a\sim b$, the pair $(a,b)$ is a monolithic extension of a pair in $E\times E$. Let $n=m\wedge d(z)$ and define $A=\braces{\kappa(z)(n,m)\eta:\eta\in E}$, noting that $A\subset\iota(\Lambda^0)\widetilde\Lambda$. By the construction of $\widetilde\Lambda$, for every $\gamma\in\iota(\Lambda^0)\widetilde\Lambda$ we can choose $t^{(\gamma)}\in\partial\Lambda$ such that $\gamma=\kappa(t^{(\gamma)})\big(0,d(\gamma)\big)$. Let \[ F=\Big\{t^{(\phi)}\big(0,d(\phi)\wedge d(t^{(\phi)})\big):\phi\in A\Big\} \] Fix $x,y\in z(n)\partial\Lambda$ such that $x\sim y$. Then $\kappa(x),\kappa(y)\in\kappa(z)(n)\widetilde\Lambda^\infty$ with $\kappa(x)\sim \kappa(y)$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}, so $\sigma^{m-n}\big(\kappa(x)\big),\sigma^{m-n}\big(\kappa(y)\big)\in \kappa(z)(m)\widetilde\Lambda^\infty$ with $\sigma^{m-n}\big(\kappa(x)\big)\sim\sigma^{m-n}\big(\kappa(y)\big)$. By the earlier use of Theorem \ref{thm_Cartan_Fell}, there exist $\eta,\zeta\in E$ such that $\Big(\sigma^{m-n}\big(\kappa(x)\big),\sigma^{m-n}\big(\kappa(y)\big)\Big)$ is a monolithic extension of $(\eta,\zeta)$. Let $\gamma=\kappa(z)(n,m)\eta$ and $\delta=\kappa(z)(n,m)\zeta$, so that $\gamma,\delta\in A$. Since $n=m\wedge d(z)$, by \eqref{Fell_eqn1} we have \[\kappa(x)(0,m-n)=\kappa(y)(0,m-n)=\kappa(z)(n,m),\] so $\big(\kappa(x),\kappa(y)\big)$ is a monolithic extension of $(\gamma,\delta)$.
Recall that $\gamma=\kappa(t^{(\gamma)})\big(0,d(\gamma)\big)$ and $\delta=\kappa(t^{(\delta)})\big(0,d(\delta)\big)$. Lemma \ref{lemma_transfer_ME_from_desourcification} can now be applied to see that $(x,y)$ is a monolithic extension of \[ \Big(t^{(\gamma)}\big(0,d(\gamma)\wedge d(t^{(\gamma)})\big),t^{(\delta)}\big(0,d(\delta)\wedge d(t^{(\delta)})\big)\Big)\in F\times F.\qedhere \] \end{proof} \end{theorem} \begin{theorem}\label{thm_cts-trace} Suppose $\Lambda$ is a row-finite $k$-graph such that $m=0$ whenever a path in $\partial\Lambda$ is shift equivalent to itself with lag $m$. Then $C^*(\Lambda)$ has continuous trace if and only if for every finite subset $V$ of $\Lambda^0$ there exists a finite $F\subset\Lambda$ such that for every $x,y\in V\partial\Lambda$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. \begin{proof} Suppose that for every finite $V\subset\Lambda^0$ there exists a finite $F\subset\Lambda$ as in the statement of the theorem. Let $W$ be a finite subset of $\widetilde{\Lambda}^0$. For every $w\in W$ there exist $y^{(w)}\in\partial\Lambda$ and $m^{(w)}\in\mathbb N^k$ such that $w=\kappa(y^{(w)})(m^{(w)})$. Let $V=\braces{r(y^{(w)}):w\in W}$. By our assumption there exists a finite $F\subset\Lambda$ such that for every $x,y\in V\partial\Lambda$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. Lemma \ref{lemma_cts-trace_Fell_desourcification} can now be applied to see that there exists a finite $E\subset\widetilde\Lambda$ such that for every $a,b\in W\widetilde\Lambda^\infty$ with $a\sim b$, the pair $(a,b)$ is a monolithic extension of a pair in $E\times E$. Then condition \eqref{thm_proper_cts-trace_2} of Theorem \ref{thm_proper_cts-trace} holds so, since in addition $\widetilde\Lambda$ has no sources, $C^*(\widetilde\Lambda)$ has continuous trace by Theorem \ref{thm_proper_cts-trace}. It follows that $C^*(\Lambda)$ has continuous trace by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~7]{anHuef-Raeburn-Williams2007}). Now suppose $C^*(\Lambda)$ has continuous trace and choose a finite $V\subset\Lambda^0$. Then $C^*(\widetilde{\Lambda})$ has continuous trace by Morita equivalence (p. \pageref{describing_morita_equivalence_results}, \cite[Theorem~6.3]{Webster2011} and \cite[Proposition~7]{anHuef-Raeburn-Williams2007}). We also know $\widetilde\Lambda$ has no sources, so by Theorem \ref{thm_proper_cts-trace} there exists a finite $E\subset\widetilde\Lambda$ such that for any $a,b\in \iota(V)\widetilde\Lambda^\infty$ with $a\sim b$, the pair $(a,b)$ is a monolithic extension of a pair in $E\times E$. For every $\gamma\in \iota(\Lambda^0)\widetilde\Lambda$ we choose $z^{(\gamma)}\in\partial\Lambda$ such that $\gamma=\kappa(z^{(\gamma)})\big(0,d(\gamma)\big)$. Let \[ F=\big\{z^{(\mu)}\big(0,d(\mu)\wedge d(z^{(\mu)})\big):\mu\in E\big\} \] Fix $x,y\in V\partial\Lambda$ with $x\sim y$. Then $\kappa(x),\kappa(y)\in \iota(V)\widetilde\Lambda^\infty$ with $\kappa(x)\sim\kappa(y)$ by Lemma \ref{lemma_desource_shift_equivalence_preserved}, so $\big(\kappa(x),\kappa(y)\big)$ is a monolithic extension of some $(\mu,\nu)\in E\times E$.
Since $\mu=\kappa(z^{(\mu)})\big(0,d(\mu)\big)$ and $\nu=\kappa(z^{(\nu)})\big(0,d(\nu)\big)$, we can apply Lemma \ref{lemma_transfer_ME_from_desourcification} to see that $(x,y)$ is a monolithic extension of \[\Big(z^{(\mu)}\big(0,d(\mu)\wedge d(z^{(\mu)})\big),z^{(\nu)}\big(0,d(\nu)\wedge d(z^{(\nu)})\big)\Big)\in F\times F.\qedhere\] \end{proof} \end{theorem} \begin{example}\label{examples_cts-trace} The Cuntz-Krieger $C^*$-algebras generated by the $2$-graphs from Examples \ref{omega_2_infty_2}, \ref{example_not_locally_convex_with_same_boundary_paths} and \ref{example_annoying_k-graph} have continuous trace by Theorem \ref{thm_cts-trace}. To see this, in each of these cases the condition in the Theorem can be satisfied by taking $F=V$ for every finite subset $V$ of $\Lambda^0$. \end{example} \section{Special case: directed graphs}\label{section_directed_graphs} In this section we will state the main results from Sections \ref{sec_liminal_postliminal}, \ref{sec_bded-trace_cts-trace_Fell} and \ref{sec_desourcification} for the special case of directed graphs. We will also state alternative conditions that characterise the row-finite directed graphs that are postliminal, liminal, Fell, and have continuous trace. A characterisation of liminal directed graph $C^*$-algebras has previously been developed by Ephrem in \cite[Theorem~5.5]{Ephrem2004}. Similarly, characterisations of postliminal directed graph $C^*$-algebras have been developed by Ephrem in \cite[Theorem~7.3]{Ephrem2004} and by Deicke, Hong and Szyma{\'n}ski in \cite[Theorem~2.1]{dhs2003}. We reconcile our results with Ephrem's in Remark \ref{remark_Ephrems_liminal_condition_coincides} and Remark \ref{remark_Ephrems_postliminal_condition_coincides} below. Unlike the results presented here, both Ephrem's and Deicke, Hong and Szyma{\'n}ski's results may be applied to directed graphs that are not row-finite. This shortcoming will be largely overcome by showing that the conditions presented in this section are equivalent to Ephrem's. In developing his characterisation of directed graphs with liminal $C^*$-algebras, Ephrem started by developing a characterisation \cite[Theorem~3.10]{Ephrem2004} of the row-finite directed graphs without sources and with liminal $C^*$-algebras before using the Drinen-Tomforde desingularisation to remove the row-finite and no sources hypothesis in \cite[Theorem~5.5]{Ephrem2004}. Our approach is similar through the use of the higher-rank graph Farthing-Webster desourcification in Section \ref{sec_desourcification}. The characterisations of directed graphs with postliminal $C^*$-algebras in Theorem \ref{thm_directed_graph_postliminal} and \cite[Theorem~7.3]{Ephrem2004} follow similar approaches. Recall the definition of a directed graph and the associated path groupoid from Section \ref{sec_path_groupoid} and the overview of the Cuntz-Krieger $C^*$-algebras of row-finite directed graphs from Section \ref{section_algebras_of_directed_graphs}. For a directed graph $E$ we can consider $E^\infty=\Lambda_E^\infty$. Recall the definition of the boundary paths $\partial\Lambda$ of a $k$-graph $\Lambda$ from Section \ref{sec_boundary_paths_and_desourcification}. Since every $1$-graph is locally convex (Remark \ref{remark_every_1-graph_locally_convex}), after assuming that $E$ is row-finite we have $\partial\Lambda_E=\Lambda_E^{\leqslant\infty}$ by Proposition \ref{prop_Lambda_leinfty_equals_partial_lambda}. 
By the definition of $\Lambda^{\leqslant\infty}$ (Definition \ref{def_Lambda_leinfty}) we can consider $E^{\leqslant\infty}=\Lambda_E^{\leqslant\infty}$, so that $\partial\Lambda_E=E^{\leqslant\infty}$. Recall that Lemmas \ref{path_groupoids_isomorphic} and \ref{lemma_KP_LambdaE} tell us that $G_E\cong G_{\Lambda_E}$ and $C^*(E)\cong C^*(\Lambda_E)$, respectively. Suppose $\eta$ is a path in a directed graph with $r(\eta)=s(\eta)$. For each $p\in\mathbb P$ define \notationindex{ETAPINFTY@$\eta^p,\eta^\infty$ where $\eta\in E^*, p\in\mathbb P$} \[ \eta^p:=\underbrace{\eta\eta\cdots\eta}_{p\mathrm{-times}} \] and define $\eta^\infty$ to be the infinite path $\eta\eta\eta\cdots$. For any path $x\in E^*\cup E^\infty$, define\notationindex{FPX@$\mathcal{FP}(x)$} \[ \mathcal{FP}(x):=\{\gamma\in E^*: \gamma=x(m,n)\text{ for some }m,n\in\mathbb N\}, \] so that $\mathcal{FP}(x)$ can be thought of as the set of all finite paths that appear in $x$. We will frequently abuse notation by talking about paths as if they are sets; when we refer to elements of $x$, we really mean elements of $\mathcal{FP}(x)$. For example, by the statement ``there are no cycles in $x$'', we really mean ``there are no cycles in $\mathcal{FP}(x)$''. \begin{definition} For a directed graph $E$ and for $x,y\in E^{\leqslant\infty}$, we say $x$ is {\em frequently divertable}\index{frequently divertable!path in a directed graph} to $[y]$ if for every $n\in\mathbb N$ with $n\leqslant |x|$ there is a path in $x(n)E^{\leqslant\infty}$ that is shift equivalent to $y$. \end{definition} From the above observations that $E^\infty=\Lambda_E^\infty$ and $E^{\leqslant\infty}=\partial\Lambda_E$, we can see that this notion of a frequently divertable path in $E^{\leqslant\infty}$ is consistent with the notions of frequently divertable infinite paths and boundary paths in higher-rank graphs as introduced in Definitions \ref{def_frequently_divertable_infinite_paths} and \ref{def_frequently_divertable_boundary_paths}. \begin{theorem}\label{thm_directed_graph_liminal} Suppose $E$ is a row-finite directed graph. The following are equivalent: \begin{enumerate} \item\label{thm_directed_graph_liminal_1} for every $x\in E^{\leqslant\infty}$, every path in $E^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$; \item\label{thm_directed_graph_liminal_2} for every $x\in E^{\leqslant\infty}$ there are finitely many paths in $r(x)E^{\leqslant\infty}$ that are shift equivalent to $x$; and \item\label{thm_directed_graph_liminal_3} $C^*(E)$ is liminal. \end{enumerate} If in addition $E$ has no sources, the following are also equivalent to the preceding: \begin{enumerate}\setcounter{enumi}{3} \item\label{thm_directed_graph_liminal_4} every orbit in $G_E^{(0)}$ is closed; and \item\label{thm_directed_graph_liminal_5} $C^*(G_E)$ is liminal. \end{enumerate} \end{theorem} \begin{remark}\label{remark_Ephrems_liminal_condition_coincides} Condition \eqref{thm_directed_graph_liminal_2} is very similar to Ephrem's condition from \cite[Theorem~5.5]{Ephrem2004}. It is a straightforward task to show that these conditions are equivalent. \end{remark} The following two lemmas are used in the proof of Theorem \ref{thm_directed_graph_liminal} to establish the equivalence of \eqref{thm_directed_graph_liminal_1} and \eqref{thm_directed_graph_liminal_2}. 
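Before turning to these lemmas, here is a small illustration of the notation just introduced; the two-vertex directed graph used below is hypothetical and is not one of the examples considered earlier. Suppose $E$ has vertices $v$ and $w$, a loop $e$ with $r(e)=s(e)=v$, and an edge $f$ with $r(f)=w$ and $s(f)=v$. Then
\[
e^3=eee,\qquad e^\infty=eee\cdots\in vE^\infty,\qquad fe^\infty\in wE^\infty,
\]
and
\[
\mathcal{FP}(fe^\infty)=\{w,v\}\cup\{f\}\cup\{fe^p:p\in\mathbb P\}\cup\{e^p:p\in\mathbb P\}.
\]
Since $\sigma^1(fe^\infty)=e^\infty$, the infinite paths $fe^\infty$ and $e^\infty$ are shift equivalent, and each is frequently divertable to the class of the other.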
\begin{lemma}\label{lemma_FD_not_SE} Suppose $E$ is a directed graph and that $x\in E^\infty$ is a path that does not contain a cycle and such that there are at least two elements in $x(n)E^\infty\cap [x]$ for each $n\in\mathbb N$. Then for each $m\in\mathbb N$ there is a path in $x(m)E^\infty$ that is frequently divertable to $[x]$ but not shift equivalent to $x$. \begin{proof} Fix $m\in\mathbb N$ and let $p_0=m$. Choose $z^{(1)}\in x(m)E^\infty\cap[x]$ with $z^{(1)}\ne \sigma^m(x)$. Since $z^{(1)}\sim x$ there exist $p_1,q_1\in\mathbb N$ with $p_1>p_0$ such that $\sigma^{p_1}(x)=\sigma^{q_1}(z^{(1)})$. Let $\gamma^{(1)}=z^{(1)}(0,q_1)$. Since $z^{(1)}\ne \sigma^m(x)$ we must have $\gamma^{(1)}\ne x(m,p_1)$. Now choose $z^{(2)}\in x(p_1)E^\infty\cap[x]$ with $z^{(2)}\ne\sigma^{p_1}(x)$. Since $z^{(2)}\sim\sigma^{p_1}(x)$ there exist $p_2,q_2\in\mathbb N$ with $p_2>p_1$ such that $\sigma^{p_2}(x)=\sigma^{q_2}(z^{(2)})$. Let $\gamma^{(2)}=z^{(2)}(0,q_2)$. Since $z^{(2)}\ne\sigma^{p_1}(x)$ we must have $\gamma^{(2)}\ne x(p_1,p_2)$. By continuing in this manner we can see that there is a strictly increasing sequence $p_0,p_1,p_2,\ldots$ in $\mathbb N$ and a sequence $\gamma^{(1)},\gamma^{(2)},\ldots$ in $E^*$ such that $r(\gamma^{(i)})=x(p_{i-1})$, $s(\gamma^{(i)})=x(p_i)$ and $\gamma^{(i)}\ne x(p_{i-1},p_i)$ for $i=1,2,\ldots$. Let $y$ be the path $\gamma^{(1)}\gamma^{(2)}\cdots$ in $x(m)E^\infty$. To show that $y\nsim x$, first fix $a,b\in\mathbb N$. We need to show that $\sigma^a(x)\ne \sigma^b(y)$. Since $y=\gamma^{(1)}\gamma^{(2)}\cdots$, we may increase $a,b$ if necessary to ensure that $\sigma^b(y)=\gamma^{(j)}\gamma^{(j+1)}\cdots$ for some $j\in\mathbb P$. If $x(a)\ne y(b)$ it follows immediately that $\sigma^a(x)\ne\sigma^b(y)$ so we may assume that $x(a)=y(b)$. Then $x(a)=y(b)=r(\gamma^{(j)})=x(p_{j-1})$, so $a=p_{j-1}$ since $x$ does not contain a cycle. If $p_j-p_{j-1}=|\gamma^{(j)}|$, then \begin{align*} \sigma^a(x)(0,p_j-&p_{j-1})=\sigma^{p_{j-1}}(x)(0,p_j-p_{j-1})=x(p_{j-1},p_j)\\ &\ne\gamma^{(j)}=\sigma^b(y)(0,|\gamma^{(j)}|)=\sigma^b(y)(0,p_j-p_{j-1}), \end{align*} so $\sigma^b(y)\ne\sigma^a(x)$. If $p_j-p_{j-1}\ne|\gamma^{(j)}|$ and $\sigma^b(y)=\sigma^a(x)$, we must have \[x(p_{j-1}+|\gamma^{(j)}|)=x(a+|\gamma^{(j)}|)=y(b+|\gamma^{(j)}|)=s(\gamma^{(j)})=x(p_j).\] This is not possible when $p_j-p_{j-1}\ne|\gamma^{(j)}|$ since $x$ does not contain a cycle. We have thus shown that $y$ is not shift equivalent to $x$. That $y$ is frequently divertable to $[x]$ follows immediately from $r(\gamma^{(i)})$ being in $x$ for each $i$. \end{proof} \end{lemma} \begin{lemma}\label{lemma_cycle_with_entry_implies_not_liminal} Let $E$ be a directed graph. Suppose that for every $x\in E^{\leqslant\infty}$, every path in $E^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$. Then no cycle in $E$ has an entry. \begin{proof} Suppose $\alpha$ is a cycle in $E$ with an entry $f\in E^1$. We may assume that $r(\alpha)=s(\alpha)=r(f)$. If there is a path $\gamma\in E^*$ with $r(\gamma)=s(f)$ and $s(\gamma)=r(f)$ then let $x=(\gamma f)^\infty$; otherwise let $x$ be any path in $s(f)E^{\leqslant\infty}$. The path $\alpha^\infty$ in $E^\infty$ is then frequently divertable to $[x]$ but not shift equivalent to $x$. \end{proof} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm_directed_graph_liminal}] \eqref{thm_directed_graph_liminal_1}$\implies$\eqref{thm_directed_graph_liminal_2}. Suppose \eqref{thm_directed_graph_liminal_2} does not hold; we aim to show that \eqref{thm_directed_graph_liminal_1} does not hold.
If $E$ contains a cycle with an entry then \eqref{thm_directed_graph_liminal_1} does not hold by Lemma \ref{lemma_cycle_with_entry_implies_not_liminal}; from here on we may assume that no cycle in $E$ has an entry. Since \eqref{thm_directed_graph_liminal_2} does not hold there is a path $x\in E^{\leqslant\infty}$ such that $r(x)E^{\leqslant\infty}\cap[x]$ is infinite. Since $E$ is row-finite, there are only finitely many edges with range $r(x)$ and so there exists $y_1\in r(x)E^1$ such that $s(y_1)E^{\leqslant\infty}\cap [x]$ is infinite. There then exists $y_2\in s(y_1)E^1$ such that $s(y_2)E^{\leqslant\infty}\cap[x]$ is infinite. By repeating this observation we can see that there is a path $y=y_1y_2y_3\cdots$ in $E^\infty$ such that $y(n)E^{\leqslant\infty}\cap [x]$ is infinite for each $n\in\mathbb N$. Note that $y$ is frequently divertable to $[x]$ so that if $y$ is not shift equivalent to $x$ then \eqref{thm_directed_graph_liminal_1} does not hold and we are done. Suppose $y$ is shift equivalent to $x$. Then $y(n)E^{\leqslant\infty}\cap [x]=y(n)E^{\leqslant\infty}\cap [y]$ is infinite for all $n\in\mathbb N$. We know $y\in E^\infty$, so $E^{\leqslant\infty}\cap [y] = E^\infty\cap [y]$ and thus $y(n)E^{\leqslant\infty}\cap [y] = y(n)E^\infty\cap [y]$ for each $n\in\mathbb N$. If $y$ contains a cycle $\alpha$ then there exists $m\in\mathbb N$ with $y(m,m+|\alpha|)=\alpha$ and since no cycle in $E$ has an entry, it follows that $y=y(0,m)\alpha^\infty$. Then the only path in $y(m)E^\infty$ is $\alpha^\infty$, so $y(m)E^\infty\cap[y]$ is finite. It follows from this contradiction that $y$ does not contain a cycle. By applying Lemma \ref{lemma_FD_not_SE} to $y$ we can see that there is a path $z$ in $y(0)E^\infty$ that is frequently divertable to $[y]$ but not shift equivalent to $[y]$. Since $x$ is shift equivalent to $y$, it follows that $z$ is frequently divertable to $[x]$ but not shift equivalent to $x$, so \eqref{thm_directed_graph_liminal_1} does not hold. \eqref{thm_directed_graph_liminal_2}$\implies$\eqref{thm_directed_graph_liminal_1}. Suppose \eqref{thm_directed_graph_liminal_2}, fix $x\in E^{\leqslant\infty}$ and suppose $y\in E^\infty$ is frequently divertable to $[x]$. Then for every $n\in\mathbb N$ there exists $z^{(n)}\in y(n)E^{\leqslant\infty}$ with $z^{(n)}\sim x$. By \eqref{thm_directed_graph_liminal_2} there are finitely many paths in $r(z^{(0)})E^{\leqslant\infty}=y(0)E^{\leqslant\infty}$ that are shift equivalent to $z^{(0)}$ (and thus shift equivalent to $x$). Then $\{y(0,n)z^{(n)}:n\in\mathbb N\}$ is finite so there must exist a strictly increasing sequence $n_1,n_2,\ldots$ such that $y(0,n_1)z^{(n_1)}=y(0,n_2)z^{(n_2)}=\ldots$. But then each of these paths $y(0,n_i)z^{(n_i)}$ must equal $y$ and so $y$ is shift equivalent to $x$, establishing \eqref{thm_directed_graph_liminal_1}. \eqref{thm_directed_graph_liminal_1}$\implies$\eqref{thm_directed_graph_liminal_3}. Suppose \eqref{thm_directed_graph_liminal_1} and recall that $E^{\leqslant\infty}=\Lambda_E^{\leqslant\infty}=\partial\Lambda_E$. Applying Theorem \ref{k-graph_liminal_thm2} to the $1$-graph $\Lambda_E$ shows that $C^*(E)$ is liminal if \begin{statement}\label{condition_directed_graph_liminal_thm2} for every $x\in E^{\leqslant\infty}$, every path in $E^{\leqslant\infty}$ that is frequently divertable to $[x]$ is shift equivalent to $x$. 
\end{statement} To show that \ref{condition_directed_graph_liminal_thm2} holds, first fix $x\in E^{\leqslant\infty}$ and suppose that $y\in E^{\leqslant\infty}$ is frequently divertable to $[x]$. If $y\in E^\infty$ then it follows immediately from \eqref{thm_directed_graph_liminal_1} that $y\sim x$. Suppose $y\in E^{\leqslant\infty}\backslash E^\infty$. Since $y$ is frequently divertable to $[x]$, the only path in $y(|y|)E^{\leqslant\infty}$, namely $y(|y|)$, must be shift equivalent to $x$. Then $y$ is shift equivalent to $x$, so \eqref{condition_directed_graph_liminal_thm2} holds and $C^*(E)$ is liminal by Theorem \ref{k-graph_liminal_thm2}. \eqref{thm_directed_graph_liminal_3}$\implies$\eqref{thm_directed_graph_liminal_1}. Suppose \eqref{thm_directed_graph_liminal_3}. Since $C^*(E)$ is liminal, Theorem \ref{k-graph_liminal_thm2} tell us that \eqref{condition_directed_graph_liminal_thm2} holds and \eqref{thm_directed_graph_liminal_1} follows immediately. Now suppose that $E$ has no sources. Then \eqref{thm_directed_graph_liminal_3} and \eqref{thm_directed_graph_liminal_5} are equivalent since $C^*(E)$ is isomorphic to $C^*(G_E)$ by Proposition \ref{prop_algebras_isomorphic}. Since $G_E$ is isomorphic to $G_{\Lambda_E}$ by Lemma \ref{path_groupoids_isomorphic}, we can apply Theorem \ref{k-graph_liminal_thm1} to the $1$-graph $\Lambda_E$ to establish the equivalence of \eqref{thm_directed_graph_liminal_4} and \eqref{thm_directed_graph_liminal_5}. \end{proof} The following is an immediate corollary of Theorem \ref{thm_directed_graph_liminal} and Lemma \ref{lemma_cycle_with_entry_implies_not_liminal}. \begin{cor}\label{cor_liminal_implies_no_entries} Suppose $E$ is a row-finite directed graph. If $C^*(E)$ is liminal then no cycle in $E$ has an entry. \end{cor} \begin{example}\label{example_directed_graph_algebras_not_isomorphic_proof} Recall that in Example \ref{example_directed_graph_algebras_not_isomorphic} it was claimed that if $E$ is the following directed graph, then $C^*(G_E)$ is liminal while $C^*(E)$ is not. \begin{center} \begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \clip (-6.2em,-1.1em) rectangle (1.2em,1.2em); \node (u) at (-3em, 0em) {}; \node (v) at (1em,0em) {}; \node [anchor=center] at (v) {$v$}; \fill[black] (u) circle (0.15em); \draw [<-, black] (u) to node[above=-0.2em] {$f$} (v); \draw [<-,black] (u) .. controls (-7em,3em) and (-7em,-3em) .. (u); \end{tikzpicture} \end{center} To see this, first note that $C^*(E)$ is not liminal by Corollary \ref{cor_liminal_implies_no_entries}. Let $F$ be the directed graph obtained by removing $f$ and $v$ from $E$. Then $G_E=G_F$ and, since $F$ has no sources, $C^*(G_F)=C^*(G_E)$ is liminal by Theorem \ref{thm_directed_graph_liminal}. \end{example} \begin{theorem}\label{thm_directed_graph_postliminal} Suppose $E$ is a row-finite directed graph. The following are equivalent: \begin{enumerate} \item\label{thm_directed_graph_postliminal_1} for every path $x\in E^{\infty}$ there exists $n\in\mathbb N$ such that every path in $x(n) E^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$; \item\label{thm_directed_graph_postliminal_2} for every $x\in E^\infty$ there exists $n\in\mathbb N$ such that the only path in $x(n)E^\infty$ that is shift equivalent to $x$ is $\sigma^n(x)$; and \item\label{thm_directed_graph_postliminal_3} $C^*(E)$ is postliminal. 
\end{enumerate} If in addition $E$ has no sources, the following are also equivalent to the preceding: \begin{enumerate}\setcounter{enumi}{3} \item\label{thm_directed_graph_postliminal_4} every orbit in $G_E^{(0)}$ is locally closed; and \item\label{thm_directed_graph_postliminal_5} $C^*(G_E)$ is postliminal. \end{enumerate} \end{theorem} Condition \eqref{thm_directed_graph_postliminal_2} is similar to the condition in Ephrem's \cite[Theorem~7.3]{Ephrem2004}. We will show that these conditions are equivalent in Remark \ref{remark_Ephrems_postliminal_condition_coincides}. The proof of Theorem \ref{thm_directed_graph_postliminal} uses the following three lemmas to establish the equivalence of \eqref{thm_directed_graph_postliminal_1} and \eqref{thm_directed_graph_postliminal_2}. \begin{lemma}\label{lemma_entries_not_part_of_cycles} Let $E$ be a directed graph. Suppose that for every $x\in E^\infty$ there exists $n\in\mathbb N$ such that every path in $x(n)E^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$. Then no entry to a cycle is in a cycle. \begin{proof} Suppose $f\in E^1$ is an entry to a cycle $\alpha$ and that $f$ is in a cycle $\beta$; we may assume that $\beta_1=f$. Let $x=\alpha^\infty$ and fix $n\in\mathbb N$. Since $r(f)$ is a vertex in $\alpha$, there is a path $\eta\in E^*$ with $r(\eta)=x(n)$ and $s(\eta)=r(f)$. Let $y$ be the path $\eta\beta^\infty$ in $x(n)E^\infty$. For every $k\in\mathbb N$ there is a path $\gamma$ with $r(\gamma)=y(k)$ and $s(\gamma)=r(f)$, so $\gamma\alpha^\infty\in y(k)E^{\leqslant\infty}$ is shift equivalent to $x$; thus $y$ is frequently divertable to $[x]$. On the other hand, the edge $f$ lies on every tail of $y$ but on no tail of $x$, so $y$ is not shift equivalent to $x$. Since $n$ was arbitrary, this contradicts the hypothesis and the result follows. \end{proof} \end{lemma} \begin{lemma}\label{lemma_cycle_fun} Suppose $E$ is a directed graph such that no entry to a cycle is in a cycle. If $\eta\in E^*$ is of non-zero length and has $r(\eta)=s(\eta)$, then there is a cycle $\alpha$ such that $\eta=\alpha^p$ for some $p\in\mathbb P$. \begin{proof} Let $\beta$ be a shortest path in $\eta$ with $r(\beta)=s(\beta)$. Suppose $s(\eta)$ is not in $\beta$. Then there exists $q\in\mathbb P$ such that $\eta_q$ is an entry to the cycle $\beta$. But then $\eta(q,|\eta|)\eta(0,q-1)$ is a path in $E^*$ with range $s(\eta_q)$ and source $r(\eta_q)$, so $\eta_q$ must be in a cycle. This contradicts the hypothesis, so no such entry $\eta_q$ exists and $s(\eta)$ must be in $\beta$. Since $s(\eta)$ is in $\beta$ there exists $n\in\mathbb N$ such that $\beta(n)=s(\eta)$. Let $\alpha$ be the cycle $\beta(n,|\beta|)\beta(0,n)$, noting that $r(\alpha)=s(\alpha)=s(\eta)=r(\eta)$. Now choose $p\in\mathbb P$ with $p\geqslant |\eta|/|\alpha|$. We now claim that $\alpha^p(0,|\eta|)=\eta$. Suppose not. Then there is a smallest $q\in\mathbb P$ with $\alpha^p_q\ne\eta_q$. Then $\eta_q$ is an entry to the cycle $\alpha$, which we earlier showed is not possible by the hypothesis since $\eta_q$ is in a cycle. Thus $\alpha^p(0,|\eta|)=\eta$, establishing the claim. Since $s(\alpha)=s(\eta)=\alpha^p(|\eta|)$ and $\alpha(n)=s(\alpha)$ only when $n=|\alpha|$ or $n=0$, we can reduce $p$ if necessary to conclude that $\eta=\alpha^p$. \end{proof} \end{lemma} \begin{lemma}\label{lemma_postliminal_removing_cycles_from_path} Suppose $E$ is a directed graph such that no entry to a cycle is in a cycle. Suppose further that there is a path $x\in E^\infty$ such that there is more than one element in $x(n)E^\infty\cap [x]$ for each $n\in\mathbb N$. Then there is a path $y\in E^\infty$ that does not contain a cycle and has more than one element in $y(n)E^\infty\cap [y]$ for each $n\in\mathbb N$.
\begin{proof} Suppose there exists $v\in E^0$ such that $v=x(n)$ for infinitely many $n\in\mathbb N$. If $m$ is an integer with $x(m)=v$, then by Lemma \ref{lemma_cycle_fun} there is a cycle $\alpha$ in $E$ such that $x=x(0,m)\alpha^\infty$. Then $x(m)E^\infty\cap [x]$ can only have one element since otherwise there would need to be an entry to $\alpha$ that is on a path that is shift equivalent to $\alpha^\infty$, which is not possible by the hypothesis. It follows that, for every $v\in E^0$, there are only finitely many $n\in\mathbb N$ with $x(n)=v$. If there exists $n\in\mathbb N$ such that $\sigma^n(x)$ contains no cycles then we can take $y=\sigma^n(x)$. For the remainder of this proof we may assume that no such $n$ exists. We will construct a path $y\in E^\infty$ as in the statement of this lemma by removing the cycles from $x$. We begin by defining two sequences $\{m_i\},\{n_i\}$ in $\mathbb N$. Let $m_1$ be the smallest integer such that $x(m_1)=x(n)$ for some $n>m_1$. Then let $n_1$ be the largest integer such that $x(n_1)=x(m_1)$. Define $m_2,n_2$ similarly: let $m_2$ be the smallest integer greater than $n_1$ such that $x(m_2)=x(n)$ for some $n>m_2$, and let $n_2$ be the greatest integer with $x(n_2)=x(m_2)$. In this manner we may continue defining $m_i$ and $n_i$ for $i=3,4,\ldots$ so that each $m_i$ is the smallest integer greater than $n_{i-1}$ with $x(m_i)=x(n)$ for some $n>m_i$, and each $n_i$ is the largest integer with $x(n_i)=x(m_i)$. Let $y=x(n_1,m_2)x(n_2,m_3)x(n_3,m_4)\cdots$. We claim that $y$ contains no cycles. Fix $p\in\mathbb N$. It suffices to show that $y(p)\ne y(q)$ for each $q>p$. By the definition of $y$ there must exist $l\in\mathbb N$ and $i\in\mathbb P$ such that $n_{i-1}\leqslant l<m_i$ and $x(l)=y(p)$. We consider two cases: when $n_{i-1}<l<m_i$ and when $n_{i-1}=l$. First suppose $n_{i-1}<l<m_i$. Since $m_i$ is the smallest integer greater than $n_{i-1}$ with $x(m_i)=x(n)$ for some $n>m_i$, we must have $y(p)=x(l)\ne x(n)$ for every $n>l$, and so $y(p)\ne y(q)$ for all $q>p$. For the second case suppose $n_{i-1}=l$. Since $n_{i-1}$ is the largest integer with $x(n_{i-1})=x(m_{i-1})$, it follows that $x(n)\ne x(n_{i-1})$ for each $n>n_{i-1}=l$. Thus $y(p)\ne y(q)$ for each $q>p$, establishing the claim that $y$ contains no cycles. Recall that $p$ was fixed arbitrarily in $\mathbb N$. It remains to show that $y(p)E^\infty\cap[y]$ has more than one element. Choose $l\in\mathbb N$ such that $x(l)=y(p)$ and choose distinct $z^{(1)},z^{(2)}\in x(l)E^\infty\cap [x]$. Since these paths are shift equivalent to $x$ there exist $a,b\in\mathbb N$ with $\sigma^a(z^{(1)})=\sigma^b(z^{(2)})$ and we may increase $a,b$ if necessary to ensure that $z^{(1)}(a)=z^{(2)}(b)=x(m_j)$ for some $j\in\mathbb P$. Then $z^{(1)}(0,a)\ne z^{(2)}(0,b)$ and $x(m_j)=y(q)$ for some $q\in\mathbb N$, so the paths $z^{(1)}(0,a)\sigma^q(y)$ and $z^{(2)}(0,b)\sigma^q(y)$ are distinct elements of $y(p)E^\infty\cap [y]$, as required. \end{proof} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm_directed_graph_postliminal}] \eqref{thm_directed_graph_postliminal_1}$\implies$\eqref{thm_directed_graph_postliminal_2}. Suppose \eqref{thm_directed_graph_postliminal_2} does not hold. We may assume that no entry to a cycle is in a cycle as otherwise it follows immediately from Lemma \ref{lemma_entries_not_part_of_cycles} that \eqref{thm_directed_graph_postliminal_1} does not hold.
Since \eqref{thm_directed_graph_postliminal_2} does not hold there is a path $x\in E^\infty$ such that $x(n)E^\infty\cap [x]$ has at least two elements for each $n\in\mathbb N$. By Lemma \ref{lemma_postliminal_removing_cycles_from_path} there is a path $y\in E^\infty$ that does not contain cycles such that $y(m)E^\infty\cap[y]$ has at least two elements for each $m\in\mathbb N$. That \eqref{thm_directed_graph_postliminal_1} does not hold now follows by applying Lemma \ref{lemma_FD_not_SE} to $y$. \eqref{thm_directed_graph_postliminal_2}$\implies$\eqref{thm_directed_graph_postliminal_1}. Suppose \eqref{thm_directed_graph_postliminal_2} and fix $x\in E^\infty$. Then there exists $n\in\mathbb N$ such that $\sigma^n(x)$ is the only path in $x(n)E^\infty\cap[x]$. Suppose $y\in x(n)E^\infty$ is frequently divertable to $[x]$. It follows that for every $m\in\mathbb N$ there is a path $z^{(m)}\in y(m)E^\infty\cap [x]$. Then, for each $m\in\mathbb N$, the path $y(0,m)z^{(m)}$ is shift equivalent to $x$, so $y(0,m)z^{(m)}$ is in $x(n)E^\infty\cap [x]$. By our choice of $n$ it follows that $y(0,m)z^{(m)}=\sigma^n(x)$ for each $m\in\mathbb N$. Then $y=\sigma^n(x)$, so $y$ is shift equivalent to $x$, establishing \eqref{thm_directed_graph_postliminal_1}. \eqref{thm_directed_graph_postliminal_1}$\implies$\eqref{thm_directed_graph_postliminal_3}. Suppose \eqref{thm_directed_graph_postliminal_1}. Recall from above that $E^{\leqslant\infty}=\Lambda_E^{\leqslant\infty}=\partial\Lambda_E$. Applying Theorem \ref{k-graph_postliminal_thm2} to the $1$-graph $\Lambda_E$ shows that $C^*(E)$ is postliminal if \begin{statement}\label{condition_directed_graph_postliminal_thm2} for every $x\in E^{\leqslant\infty}$ there exists $n\in\mathbb N$ with $n\leqslant |x|$ such that every path in $x(n)E^{\leqslant\infty}$ that is frequently divertable to $[x]$ is shift equivalent to $x$. \end{statement} To see that \eqref{condition_directed_graph_postliminal_thm2} holds, first fix $x\in E^{\leqslant\infty}$. When $x$ is finite, the only path in $x(|x|)E^{\leqslant\infty}=s(x)E^{\leqslant\infty}$ is $s(x)$, which is frequently divertable to $[x]$ and shift equivalent to $x$. Suppose $x$ is infinite. By \eqref{thm_directed_graph_postliminal_1} there exists $n\in\mathbb N$ such that every path in $x(n)E^\infty$ that is frequently divertable to $[x]$ is shift equivalent to $x$. If $y\in x(n)E^{\leqslant\infty}\cap E^*$ is frequently divertable to $[x]$, then there must be a path in $s(y)E^\infty$ that is shift equivalent to $x$. This is not possible since $s(y)$ is a source and $x\in E^\infty$, so no path in $x(n)E^{\leqslant\infty}\cap E^*$ is frequently divertable to $[x]$. It follows that condition \eqref{condition_directed_graph_postliminal_thm2} holds and so $C^*(E)$ is postliminal by applying Theorem \ref{k-graph_postliminal_thm2} to the $1$-graph $\Lambda_E$. \eqref{thm_directed_graph_postliminal_3}$\implies$\eqref{thm_directed_graph_postliminal_1}. Suppose \eqref{thm_directed_graph_postliminal_3}. Since $C^*(E)$ is postliminal, Theorem \ref{k-graph_postliminal_thm2} tell us that \eqref{condition_directed_graph_postliminal_thm2} holds and \eqref{thm_directed_graph_postliminal_1} follows immediately. Now suppose that $E$ has no sources. Then \eqref{thm_directed_graph_postliminal_3} and \eqref{thm_directed_graph_postliminal_5} are equivalent since $C^*(E)$ is isomorphic to $C^*(G_E)$ by Proposition \ref{prop_algebras_isomorphic}. 
Since $G_E$ is isomorphic to $G_{\Lambda_E}$ by Lemma \ref{path_groupoids_isomorphic}, we can apply Theorem \ref{k-graph_postliminal_thm1} to the $1$-graph $\Lambda_E$ to establish the equivalence of \eqref{thm_directed_graph_postliminal_4} and \eqref{thm_directed_graph_postliminal_5}. \end{proof} \begin{remark}\label{remark_Ephrems_postliminal_condition_coincides} For a directed graph $E$, Ephrem in \cite[Theorem~7.3]{Ephrem2004} showed that $C^*(E)$ is postliminal if and only if \begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}} \item\label{Ephrem_postliminal_1} no entry to a cycle is in a cycle, and \item\label{Ephrem_postliminal_2} for any $x\in E^\infty$, there are finitely many vertices in $x$ that receive multiple edges that are in paths that are shift equivalent to $x$. \end{enumerate} To see that this is equivalent to condition \eqref{thm_directed_graph_postliminal_2} from Theorem \ref{thm_directed_graph_postliminal}, first suppose \eqref{Ephrem_postliminal_1} and \eqref{Ephrem_postliminal_2} hold and fix $x\in E^\infty$. By \eqref{Ephrem_postliminal_2} we can let $v_1,v_2,\ldots,v_p$ be the vertices in $x$ that receive multiple edges that are in paths that are shift equivalent to $x$. Suppose there exists an integer $q$ ($1\leqslant q\leqslant p$) such that $x(m)=v_q$ for infinitely many $m\in\mathbb N$. Let $n$ be any integer such that $x(n)=v_q$. By \eqref{Ephrem_postliminal_1} and Lemma \ref{lemma_cycle_fun} there is a cycle $\alpha$ such that $x=x(0,n)\alpha^\infty$. Since no entry to a cycle is in a cycle, the only path in $x(n)E^\infty\cap[x]$ is $\sigma^n(x)=\alpha^\infty$, as required. Now suppose that for each $v_i$ there are finitely many $m\in\mathbb N$ with $x(m)=v_i$. Then there exists $n\in\mathbb N$ such that, for every $m\geqslant n$, $x(m)$ does not equal any $v_i$. Then the only path in $x(n)E^\infty\cap[x]$ is $\sigma^n(x)$, as required. To prove the converse, suppose $E$ satisfies condition \eqref{thm_directed_graph_postliminal_2} from Theorem \ref{thm_directed_graph_postliminal} and note that \eqref{Ephrem_postliminal_1} follows from Lemma \ref{lemma_entries_not_part_of_cycles} and the \eqref{thm_directed_graph_postliminal_2}$\implies$\eqref{thm_directed_graph_postliminal_1} component of Theorem \ref{thm_directed_graph_postliminal} (the proof of which does not require $E$ to be row-finite). Fix $x\in E^\infty$. Then there exists $n\in\mathbb N$ such that the only path in $x(n)E^\infty\cap[x]$ is $\sigma^n(x)$. It follows that the only vertices in $x$ that may receive multiple edges that are in paths that are shift equivalent to $x$ are the vertices of the form $x(m)$ for $m<n$. Condition \eqref{Ephrem_postliminal_2} follows and we can conclude that Ephrem's conditions \eqref{Ephrem_postliminal_1} and \eqref{Ephrem_postliminal_2} are equivalent to condition \eqref{thm_directed_graph_postliminal_2} from Theorem \ref{thm_directed_graph_postliminal}. \end{remark} The following is an immediate corollary of Theorem \ref{thm_directed_graph_postliminal} and Lemma \ref{lemma_entries_not_part_of_cycles}. \begin{cor} Suppose that $E$ is a row-finite directed graph. If $C^*(E)$ is postliminal then no entry to a cycle in $E$ is in a cycle. \end{cor} We will now move on to the directed graph case of the theorems from Section \ref{sec_bded-trace_cts-trace_Fell}.
\begin{remark}\label{remark_directed_graphs_and_cycles} Recall that the theorems characterising the higher-rank graphs with $C^*$-algebras that have bounded trace, are Fell and have continuous trace (Theorems \ref{thm_bounded_trace}, \ref{thm_Fell} and \ref{thm_cts-trace}) only apply to higher-rank graphs with principal path groupoids. By Proposition \ref{prop_principal_iff_no_cycles} this means that these theorems can only be applied to directed graphs that do not contain cycles. Corollary \ref{cor_liminal_implies_no_entries} tells us that if a directed graph $E$ contains a cycle with an entry, then $C^*(E)$ is not liminal (and so does not have bounded or continuous trace and is not Fell). It follows that the only directed graphs where our theorems cannot be directly applied are those that have cycles, none of which have an entry. Clark and an Huef in \cite{Clark-anHuef2012} showed that if a directed graph $E$ satisfies a property concerning the stability subgroups of the path groupoid $G_E$, then $E$ can be modified to remove cycles while preserving the bounded trace and Fell properties of $C^*(E)$. In \cite[Example~7.1]{Clark-anHuef2012} Clark and an Huef describe a directed graph that contains cycles, none of which has an entry. They then demonstrate the removal of the cycles from that directed graph before using the next theorem, Theorem \ref{directed_graph_integrable_thm}, to deduce that the associated $C^*$-algebra has bounded trace. \end{remark} The following result is obtained by applying Theorem \ref{thm_integrable_bounded-trace} to the $1$-graph $\Lambda_E$. \begin{theorem}\label{directed_graph_integrable_thm} Suppose $E$ is a row-finite directed graph without sources. The following are equivalent: \begin{enumerate} \item\label{directed_graph_integrable_thm_1} $G_E$ is integrable; \item\label{directed_graph_integrable_thm_2} $E$ has no cycles, and for every $v\in E^0$ there exists $M\in\mathbb N$ such that for any $x\in E^\infty$ there are at most $M$ paths in $E^\infty$ with range $v$ that are shift equivalent to $x$; and \item\label{directed_graph_integrable_thm_3} $E$ has no cycles, and $C^*(E)\cong C^*(G_E)$ has bounded trace. \end{enumerate} \end{theorem} The next result is obtained by applying Theorem \ref{thm_bounded_trace} to the $1$-graph $\Lambda_E$. \begin{theorem} Suppose $E$ is a row-finite directed graph without cycles. Then $C^*(E)$ has bounded trace if and only if for every $v\in E^0$ there exists $M\in\mathbb N$ such that for any $x\in E^{\leqslant\infty}$ there are at most $M$ paths in $E^{\leqslant\infty}$ with range $v$ that are shift equivalent to $x$. \end{theorem} A pair of edges $\braces{f,g}$ in a directed graph is {\em splitting}\index{splitting!pair of edges} if $s(f)=s(g)$ and $f\ne g$. Suppose $S\subset E^*\cup E^\infty$. We define $E|_S$ to be the subgraph of $E$ such that \[ E|_S^1=\braces{e\in E:e=x_p\text{ for some }x\in S, p\in\mathbb P} \] and $E_S^0=r(E_S^1)\cup s(E_S^1)$. \begin{lemma}\label{lemma_finite_connections} Let $E$ be a directed graph that contains no cycles and has finitely many splitting pairs of edges. For any vertices $v$ and $w$ in $E$ there are at most finitely many finite paths with source $w$ and range $v$. \begin{proof} We prove by contradiction. Let $A$ be an infinite set of finite paths with source $w$ and range $v$. Let $B$ be a finite subset of $A$. Then $E|_B$ has finitely many splitting pairs of edges, say, $M$. 
We claim that we can add only finitely many paths from $A$ to $B$ before increasing the number of splitting pairs in $E|_B$ to at least $M+1$. Let $B^+$ be the set of all finite paths in $E|_B$ with source $w$ and range $v$. Since $E$ has no cycles, $B^+$ is finite and $B\subset B^+\subsetneq A$. Now suppose $\alpha\in A\backslash B^+$. Then there must be a greatest $p\in\mathbb P$ such that $\alpha_p\notin E|_B^1$. Note that $s(\alpha_p)\in E|_B^0$ and that since $r(\alpha)=v$ and $E$ contains no cycles, we cannot have $s(\alpha_p)=v$. Then there exist $\beta\in B$ and $q\in\mathbb P$ such that $s(\alpha_p)=s(\beta_q)$. It follows that $\{\alpha_p,\beta_q\}$ is a splitting pair in $E|_{B^+\cup\{\alpha\}}$ but not in $E|_B$, so $E|_{B^+\cup\{\alpha\}}$ must contain at least $M+1$ splitting pairs. Thus given the finite set $B$ with $M$ splitting pairs of edges we can only add finitely many paths from $A$ to $B$ (forming $B^+$) before adding an additional path will increase the number of splitting pairs of edges in $E|_B$ to at least $M+1$. Then $A$ must contain infinitely many splitting pairs of edges, contradicting that $E$ has finitely many splitting pairs of edges. \end{proof} \end{lemma} \begin{lemma}\label{lemma_directed_graph_simplification} Suppose $E$ is a row-finite directed graph without cycles and $V$ is a finite subset of $E^0$. Then the following are equivalent: \begin{enumerate} \item\label{lemma_directed_graph_simplification_1} the directed graph $E|_{VE^{\leqslant\infty}}$ has finitely many splitting pairs of edges; and \item\label{lemma_directed_graph_simplification_2} there exists a finite $F\subset E^*$ such that such that for any $x,y\in VE^{\leqslant\infty}$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. \end{enumerate} \begin{proof} Suppose \eqref{lemma_directed_graph_simplification_1}. Let $A$ be the set of all edges that occur in a splitting pair of edges in $E|_{VE^{\leqslant\infty}}$ and let $F=VE^*V\cup VE^*A$. Then $F$ is finite by Lemma \ref{lemma_finite_connections} since $A$ and $V$ are finite. Suppose $x,y\in VE^{\leqslant\infty}$ satisfy $x\sim_n y$. Then there exists a smallest $m\in\mathbb N$ such that $\sigma^m(x)=\sigma^{m-n}(y)$. If $m=0$ or $m-n=0$, then $r\big(\sigma^m(x)\big)$ and $r\big(\sigma^{m-n}(y)\big)$ are in $V$ so $(x,y)$ is a monolithic extension of a pair in $F\times F$. Suppose $m\ne 0$ and $m-n\ne 0$. Then $\braces{x_m,y_{m-n}}$ is a splitting pair of edges in $E|_{VE^{\leqslant\infty}}$ so there exists $\alpha,\beta\in VE^*$ such that $(x,y)$ is a monolithic extension of $(\alpha x_m,\beta y_{m-n})\in F\times F$, establishing \eqref{lemma_directed_graph_simplification_2}. Let $V$ be a finite subset of $E^0$. Suppose there exists a finite $F\subset E^*$ as in \eqref{lemma_directed_graph_simplification_2}. Let $A=\braces{\alpha_m\in E^1:\alpha\in F, m\leqslant |\alpha|}$, noting that $A$ is finite since $F$ is. Suppose $\braces{f,g}$ is a splitting pair of edges in $E|_{VE^{\leqslant\infty}}$. We claim that $f,g\in A$. To see this, first note that since $\braces{f,g}$ is a splitting pair in $E|_{VE^{\leqslant\infty}}$, there exist $\alpha,\beta\in VE^*$ such that $s(\alpha)=r(f)$ and $s(\beta)=r(g)$. Suppose $x\in s(f)E^{\leqslant\infty}$. Then $\alpha f x\sim\beta gx$ so there exists $(\eta,\zeta)\in F\times F$ such that $(\alpha fx,\beta gx)$ is a monolithic extension of $(\eta,\zeta)$. Let $y\in E^{\leqslant\infty}$ be the path such that $(\eta y, \zeta y) = (\alpha fx, \beta gx)$. 
Then $\eta y$ is shift equivalent to $\zeta y$ with lags $|\beta|-|\alpha|$ and $|\zeta|-|\eta|$. It follows that there exists $l\in\mathbb N$ such that $\sigma^l(\eta y)=\sigma^{l-|\beta|+|\alpha|}(\zeta y) = \sigma ^{l-|\zeta|+|\eta|}(\zeta y)$. Since $E$ has no cycles we must have $|\beta|-|\alpha|=|\zeta|-|\eta|$. Now $f=(\alpha fx)_{|\alpha|+1}=(\eta y)_{|\alpha|+1}$ and $g=(\beta gx)_{|\beta|+1}=(\zeta y)_{|\beta|+1}$. If $|\alpha f|>|\eta|$, by noting that $|\alpha f|=|\alpha|+1$ and $|\alpha|-|\eta|=|\beta|-|\zeta|$, we have \[ f=(\eta y)_{|\alpha|+1} = y_{|\alpha|-|\eta|+1} = y_{|\beta|-|\zeta|+1}=(\zeta y)_{|\beta|+1}=g,\] which is not possible since $f\ne g$ as $\{f,g\}$ is a splitting pair. Thus $|\alpha f|\leqslant |\eta|$, so $f=\eta_{|\alpha f|}\in A$ and $g=\zeta_{|\beta g|}\in A$. Since $A$ is finite and $\braces{f,g}$ is an arbitrary splitting pair of edges in $E|_{VE^{\leqslant\infty}}$, \eqref{lemma_directed_graph_simplification_1} follows. \end{proof} \end{lemma} We can now proceed to the improved versions of Theorems \ref{thm_Cartan_Fell}, \ref{thm_proper_cts-trace}, \ref{thm_Fell} and \ref{thm_cts-trace} for the directed graph case. The first of these follows immediately from Lemma \ref{lemma_directed_graph_simplification} and Theorem \ref{thm_Cartan_Fell}. \begin{theorem}\label{thm_directed_graph_Cartan_Fell} Suppose $E$ is a row-finite directed graph without sources. The following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item\label{thm_directed_graph_Cartan_Fell_1} $G_E$ is Cartan; \item $E$ has no cycles, and for every $z\in E^\infty$ there exists $n\in\mathbb N$ such that $E|_{z(n)E^{\infty}}$ has finitely many splitting pairs of edges; \item\label{thm_directed_graph_Cartan_Fell_2} $E$ has no cycles, and for every $z\in E^\infty$ there exist $p\in\mathbb N$ and a finite $F\subset E^*$ such that for every $x,y\in z(p)E^\infty$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$; \item\label{thm_directed_graph_Cartan_Fell_3} $E$ has no cycles, and for every $z\in E^\infty$ there exist $p\in\mathbb N$ and a finite $F\subset E^*$ such that for every $(\alpha,\beta)\in E^*\ast_s E^*$ with $r(\alpha)=r(\beta)=z(p)$ and every $\gamma\in s(\alpha)E^*$ with $|\gamma|=\max\{|\eta|:\eta\in F\}$, the pair $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of a pair in $F\times F$; and \item\label{thm_directed_graph_Cartan_Fell_4} $E$ has no cycles, and $C^*(E)\cong C^*(G_E)$ is Fell. \end{enumerate} \end{theorem} \needspace{4\baselineskip} \begin{theorem}\label{thm_directed_graph_Fell} Suppose $E$ is a row-finite directed graph without cycles. The following are equivalent: \begin{enumerate} \item\label{thm_directed_graph_Fell_1} $C^*(E)$ is Fell; \item\label{thm_directed_graph_Fell_2} for every $z\in E^\infty$ there exists $n\in\mathbb N$ such that $E|_{z(n)E^{\leqslant\infty}}$ has finitely many splitting pairs of edges; and \item\label{thm_directed_graph_Fell_3} for every $z\in E^\infty$ there exist $n\in\mathbb N$ and a finite $F\subset E^*$ such that for any $x,y\in z(n)E^{\leqslant\infty}$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. \end{enumerate} \begin{proof} First note that \eqref{thm_directed_graph_Fell_2} and \eqref{thm_directed_graph_Fell_3} are equivalent by Lemma \ref{lemma_directed_graph_simplification}. 
By applying Theorem \ref{thm_Fell} to the $1$-graph $\Lambda_E$, we can see that \eqref{thm_directed_graph_Fell_1} is equivalent to \begin{statement}\label{statement_directed_graph_Fell} for every $z\in E^{\leqslant\infty}$ there exist $n\in\mathbb N$ with $n\leqslant |z|$ and a finite $F\subset E^*$ such that for any $x,y\in z(n)E^{\leqslant\infty}$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. \end{statement} If \eqref{statement_directed_graph_Fell} holds, then \eqref{thm_directed_graph_Fell_3} must hold. Conversely, suppose $z\in E^{\leqslant\infty}$ is finite and let $F=\{s(z)\}$. The only path in $z(|z|)E^{\leqslant\infty}$ is $z(|z|)$ (which equals $s(z)$) so, if $x,y\in z(|z|)E^{\leqslant\infty}$, then we must have $(x,y)=\big(s(z),s(z)\big)$, which is trivially a monolithic extension of $\big(s(z),s(z)\big)\in F\times F$. Thus \eqref{thm_directed_graph_Fell_3} implies \eqref{statement_directed_graph_Fell}, and so \eqref{thm_directed_graph_Fell_3} is equivalent to \eqref{statement_directed_graph_Fell}. \end{proof} \end{theorem} The next theorem follows immediately from Lemma \ref{lemma_directed_graph_simplification} and Theorem \ref{thm_proper_cts-trace}. \needspace{6\baselineskip} \begin{theorem}\label{thm_directed_graph_proper_cts-trace} Suppose $E$ is a row-finite directed graph without sources. The following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item $G_E$ is proper; \item $E$ has no cycles, and for every finite $V\subset E^0$ the directed graph $E|_{VE^\infty}$ has finitely many splitting pairs of edges; \item $E$ has no cycles, and for every finite subset $V$ of $E^0$, there exists a finite $F\subset E^*$ such that, for every $x,y\in VE^\infty$ with $x\sim_n y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$; \item $E$ has no cycles, and for every finite subset $V$ of $E^0$, there exists a finite $F\subset E^*$ such that, for every $(\alpha,\beta)\in E^*\ast_s E^*$ with $r(\alpha),r(\beta)\in V$, and every $\gamma\in s(\alpha)E^*$ with $|\gamma|=\max\{|\eta|:\eta\in F\}$, the pair $(\alpha\gamma,\beta\gamma)$ is a monolithic extension of a pair in $F\times F$; and \item $E$ has no cycles, and $C^*(E)\cong C^*(G_E)$ has continuous trace. \end{enumerate} \end{theorem} Our final theorem follows immediately from Lemma \ref{lemma_directed_graph_simplification} and Theorem \ref{thm_cts-trace}. \needspace{6\baselineskip} \begin{theorem} Suppose $E$ is a row-finite directed graph without cycles. The following are equivalent: \begin{enumerate} \item $C^*(E)$ has continuous trace; \item for every finite $V\subset E^0$ the directed graph $E|_{VE^{\leqslant\infty}}$ has finitely many splitting pairs of edges; and \item for every finite $V\subset E^0$ there exists a finite $F\subset E^*$ such that for any $x,y\in VE^{\leqslant\infty}$ with $x\sim y$, the pair $(x,y)$ is a monolithic extension of a pair in $F\times F$. \end{enumerate} \end{theorem} \backmatter \linespread{1}
\section{\label{sec:level1}INTRODUCTION} Based on the principles of quantum mechanics, quantum key distribution (QKD) can share unconditionally secure keys between two communication parties over insecure channels \cite{r1,r2}. According to the modulation methods adopted in quantum state exchanges, QKD protocols are usually divided into two categories: discrete-variable quantum key distribution (DV-QKD) and continuous-variable quantum key distribution (CV-QKD). CV-QKD has attracted much attention and has made great breakthroughs in recent years \cite{r3,r4,r5} by utilizing the existing optical communication infrastructure. The CV-QKD protocol is comprised of two main procedures: quantum state transmission over a quantum channel and classical postprocessing over an authenticated classical public channel. After quantum state transmission, the communication parties share non-identical correlated data. Then, postprocessing is implemented to transform the correlated data into identical secure keys. In recent years, the secure key rate (SKR) of CV-QKD has been greatly improved by continually increasing the repetition rate of modulation and detection of coherent states \cite{r6,r7,r8,r9,r10}. For the two main CV-QKD schemes, based on Gaussian-modulated coherent states \cite{r11,r12} and discrete-modulated coherent states \cite{r13,r14}, the repetition rates of the transceivers have been increased to above 100 MHz, which can support the distribution of SKRs of tens of Mbps. In particular, in the latest work, a four-state discrete-modulated CV-QKD with a 5 GBaud repetition rate has been experimentally demonstrated, further increasing the asymptotic SKR to the sub-Gbps level \cite{r15}. However, most of the reported SKRs are obtained only by theoretical calculation or off-line postprocessing, which is not real-time CV-QKD. The major difficulty is that the throughput of the postprocessing cannot support real-time SKR generation in high-repetition-rate CV-QKD. Therefore, the development of high-throughput postprocessing is of great importance for the practical application of high-rate and real-time CV-QKD. The common postprocessing mainly includes four steps: data sifting, parameter estimation, information reconciliation and privacy amplification. For a practical CV-QKD system, information reconciliation has an important influence on the secret key rate. In information reconciliation for CV-QKD, a specific error-correcting code is adopted to correct errors in the sifted keys and obtain identical corrected keys. Since a CV-QKD system usually works in a Gaussian channel with very low signal-to-noise ratios (SNRs), typically below 0 dB, commercial error-correcting codes are not suitable for CV-QKD. Several error-correcting codes have been studied for CV-QKD, such as LDPC codes \cite{r16,r17}, polar codes \cite{r18,r19}, and raptor codes \cite{r20,r21,r22}. Among them, a special class of LDPC codes, i.e., multi-edge-type low-density parity-check (MET-LDPC) codes, is considered the most promising error-correcting code for CV-QKD, because its performance is very close to the Shannon capacity limit and the decoding algorithm can be accelerated by devices with parallel processing capabilities. MET-LDPC codes with low code rate and long code length have been designed for CV-QKD \cite{r23}, and a high-speed implementation of the decoding algorithm for MET-LDPC codes based on a graphics processing unit (GPU) is shown in \cite{r17} with a throughput of 9.17 Mbps. 
Decoding multiple codewords in parallel is proposed to improve the throughput to 30.39 Mbps \cite{r24}, and a layered decoder for quasi-cyclic MET-LDPC (QC MET-LDPC) codes is proposed to further improve the decoding throughput to 64.11 Mbps \cite{r25}. Although a high decoding speed of MET-LDPC codes can be achieved by using a GPU, the power consumption, volume, and cost of GPUs limit their application in practice. An FPGA is a kind of large-scale programmable integrated device and has a series of advantages, such as parallelism, high processing speed, and low power consumption. Many FPGA-based LDPC decoders have been developed over the past few decades \cite{r26,r27,r28}. Recently, FPGA-based MET-LDPC decoders for CV-QKD have also been studied \cite{r29,r30}, where fixed-point numbers of width 19 are adopted in designing the decoder. However, to our knowledge, a successful high-speed FPGA-based LDPC decoder for a CV-QKD system under a typical transmission distance, such as 50 km, has not been reported in the literature. The difficulty of a high-speed FPGA-based decoder for CV-QKD comes from two factors. On the one hand, the on-chip hardware resources of an FPGA are limited, so a finite width (precision) of the fixed-point numbers is required for a high-speed decoder. On the other hand, the SNR for CV-QKD under a typical transmission distance is much less than 1, so the decoding performance is strongly associated with the decoding accuracy. The limited precision decreases the decoding accuracy, leading to residual error-bits after decoding and a high frame error rate (FER), which decreases the secret key rate drastically. In this paper, an FPGA-based decoder for CV-QKD with limited precision is studied. The characteristics of the residual error-bits are analyzed, and it is found that a self-defined reliability value can be used to distinguish the error bits from the correct bits \cite{r31}. Therefore, a residual bit error correction algorithm based on reliability values is used to improve the decoding performance. The improved decoding algorithm is implemented on an FPGA, and throughputs of 360.92 Mbps and 194.65 Mbps are achieved for code rates 0.2 and 0.1 respectively, which can support 17.97 Mbps and 2.48 Mbps real-time generation of secret key rates under transmission distances of 25 km and 50 km, respectively. The proposed method can support Mbps real-time secret key rate generation for CV-QKD, which paves the way for a large-scale deployment of high-rate real-time CV-QKD systems in secure metropolitan area networks. The rest of this paper is organized as follows. Section~\ref{sec:level2} gives a brief introduction to QC-LDPC codes and the iterative decoding algorithm. The proposed architecture of the decoder and its residue error-bits erase module are introduced in Section~\ref{sec:level3}. Implementation results are discussed in Section~\ref{sec:level4}, and we analyze the corresponding secret key rate of the decoder for a CV-QKD system in Section~\ref{sec:level5}. Finally, Section~\ref{sec:level6} draws a conclusion. \section{\label{sec:level2} ERROR CORRECTING SCHEME} \subsection{\label{sub:level1}QC-LDPC} QC-LDPC codes \cite{r32} are a class of structured LDPC codes. When implementing the decoding algorithm on an FPGA, the QC structure reduces the implementation complexity. The parity-check matrix \emph{\textbf{H}} of a QC-LDPC code can be generated by using cyclic permutation matrices (CPMs) to replace the elements of a base matrix \emph{\textbf{H}}$_{b}$. If an element of \emph{\textbf{H}}$_{b}$ is `$-1$', it is replaced by the $Z\times{Z}$ zero matrix. A non-negative integer $\alpha$ in \emph{\textbf{H}}$_{b}$ is replaced by the matrix obtained by cyclically shifting the $Z\times{Z}$ identity matrix to the right by $\alpha$ bits. An example of generating \emph{\textbf{H}} from \emph{\textbf{H}}$_{b}$ with $Z=4$ is shown in Fig.~\ref{fig1}. \begin{figure}[h] \centerline{\includegraphics[width = 8cm]{./Figure/fig1}} \caption{\label{fig1}Generating \emph{\textbf{H}} from \emph{\textbf{H}}$_{b}$. The size of \emph{\textbf{H}}$_{b}$ is $2\times3$, the size of each CPM is $4\times4$, and the size of \emph{\textbf{H}} is $8\times12$. The numbers 1, 2, \dots, 12 correspond to the column indices of \emph{\textbf{H}}.} \end{figure}
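The expansion just described can be summarized by a short software sketch. The following Python snippet is added here only as an illustration of the construction; the base matrix used in it is a hypothetical toy example and is neither the matrix of Fig.~\ref{fig1} nor one of the codes used later in this paper.
\begin{verbatim}
import numpy as np

def expand_base_matrix(Hb, Z):
    """Expand a QC-LDPC base matrix Hb into the binary parity-check
    matrix H: an entry of -1 becomes the Z x Z zero matrix, a
    non-negative integer a becomes the Z x Z identity cyclically
    shifted to the right by a positions."""
    rows, cols = Hb.shape
    H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            a = Hb[i][j]
            if a >= 0:
                # cyclic right shift of the identity by a positions
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, a, axis=1)
    return H

# toy base matrix (values are illustrative only), Z = 4
Hb = np.array([[0, 1, -1],
               [2, -1, 3]])
H = expand_base_matrix(Hb, Z=4)
print(H.shape)   # (8, 12): a 2 x 3 base matrix with Z = 4
\end{verbatim}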
\subsection{\label{sub:level1}Decoding algorithm} In the proposed architecture, the layered BP decoding algorithm for QC-LDPC codes \cite{r20} is employed, which achieves almost the same performance as the flooding BP algorithm with fewer decoding iterations. We denote by \emph{\textbf{u}} a codeword over the BIAWGN channel and by \emph{\textbf{R}} the received vector. The layered BP decoding algorithm is described as follows.\\ (a) Initialize the LLRs of the variable nodes. \begin{equation}\label{eq1} \begin{aligned} L_{q_{n} }^{(0,0)} =\frac{2R_{n} }{\sigma ^{2} }, \end{aligned} \end{equation} where $n$ is the variable node index, and $\sigma ^{2}$ denotes the variance of the noise in the BIAWGN channel.\\ (b) Update the messages transmitted from variable nodes to check nodes. \begin{equation}\label{eq2} \begin{aligned} L_{q_{nm} }^{(t,l)}=L_{q_{n} }^{(t,l-1)}-L_{r_{mn} }^{(t-1,l)}, \end{aligned} \end{equation} where $m$ is the check node index, $t$ is the iteration index and $l$ is the layer index; when $t=1$, $L_{r_{mn} }^{(0,l)}=0$.\\ (c) Update the messages transmitted from check nodes to variable nodes. \begin{equation}\label{eq3} \begin{aligned} L_{r_{mn} }^{(t,l)}=(1-2s_{m})\times \prod_{n^{'}\in N(m)\setminus n }^{} sgn(L_{q_{n^{'}m } }^{(t,l)})\\\times \Phi^{-1} (\sum_{n^{'}\in N(m)\setminus n}^{}\Phi(|L_{q_{n^{'}m } }^{(t,l)}|)), \end{aligned} \end{equation} where $\Phi(x)=\Phi^{-1}(x)=-\ln(\tanh(x/2))$, $sgn(x)$ is the sign function, and $N(m)$ is the set of variable nodes connected to the check node $m$. Moreover, \emph{\textbf{s}} is the syndrome generated by \emph{\textbf{s}}=\emph{\textbf{uH}}$^{T}$.\\ (d) Update the total messages of the variable nodes. \begin{equation}\label{eq4} \begin{aligned} L_{q_{n} }^{(t,l)}=L_{q_{nm} }^{(t,l)}+L_{r_{mn} }^{(t,l)}. \end{aligned} \end{equation} Steps (b), (c) and (d) are repeated layer by layer until all sub-matrices have been updated; when all layers have finished updating their messages, set $t = t + 1$, and continue until the maximum number of iterations is reached. \\ (e) Decide the codeword $\hat{ \emph{\textbf{u}}}$ according to the total messages of the variable nodes. \begin{equation}\label{eq5} \begin{aligned} {\hat{u}}_{n} =\left\{\begin{matrix} 1, & L_{q_{n} }^{(t,l)} <0 \\ 0, & L_{q_{n} }^{(t,l)} \ge 0 \end{matrix}\right. . \end{aligned} \end{equation}
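As a cross-check of the update rules above, the following Python sketch implements Eqs.~(\ref{eq1})--(\ref{eq5}) in floating point, treating each check row as its own layer. It is only a reference model written for this presentation: it does not reproduce the fixed-point arithmetic or any other detail of the FPGA design described below, and the function and argument names are our own.
\begin{verbatim}
import numpy as np

def phi(x):
    # Phi(x) = Phi^{-1}(x) = -ln(tanh(x/2)); clip to avoid over/underflow
    x = np.clip(x, 1e-12, 30.0)
    return -np.log(np.tanh(x / 2.0))

def layered_bp_decode(H, R, sigma2, s=None, t_max=13):
    """Floating-point layered BP reference model of Eqs. (1)-(5).
    H: binary parity-check matrix, R: received vector, sigma2: noise
    variance, s: target syndrome (all zeros if None)."""
    M, N = H.shape
    s = np.zeros(M, dtype=int) if s is None else np.asarray(s, dtype=int)
    Lq = 2.0 * np.asarray(R, dtype=float) / sigma2   # Eq. (1)
    Lr = np.zeros((M, N))                            # check-to-variable messages
    rows = [np.nonzero(H[m])[0] for m in range(M)]   # N(m) for every check m
    for t in range(t_max):
        for m in range(M):                           # one layer per check row
            idx = rows[m]
            q = Lq[idx] - Lr[m, idx]                 # Eq. (2)
            sgn = np.where(q >= 0, 1.0, -1.0)
            mag = phi(np.abs(q))
            for k, n in enumerate(idx):              # Eq. (3)
                others = np.delete(np.arange(len(idx)), k)
                sign_prod = (1 - 2 * s[m]) * np.prod(sgn[others])
                Lr[m, n] = sign_prod * phi(np.sum(mag[others]))
            Lq[idx] = q + Lr[m, idx]                 # Eq. (4)
        u_hat = (Lq < 0).astype(int)                 # Eq. (5)
        if np.array_equal((u_hat @ H.T.astype(int)) % 2, s):
            break                                    # target syndrome matched
    return u_hat, Lq
\end{verbatim}
A matrix produced by \texttt{expand\_base\_matrix} above can be passed as \texttt{H}; note that in the hardware design a layer corresponds to a block row of sub-matrices rather than a single check row as in this toy model.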
\section{\label{sec:level3} PROPOSED ARCHITECTURE} As shown in Fig.~\ref{fig2}, the proposed architecture of the decoder contains a global controller unit (GCU), a receiving unit, an Addr$\_$gen unit, a decode decision unit, two FIFO buffers, four processing units, three types of RAMs and a residue error-bits erase module. The global controller unit deals with the whole procedure of decoding. The receiving unit receives the message that needs to be decoded and stores it into the variable message storage RAM in the right order. The variable message storage RAM is used to store the value of $L_{q_{n} }$ in Eqs.~(\ref{eq1}) and (\ref{eq4}). The check node message storage RAM is used to store the value of $L_{r_{mn} }$ in Eq.~(\ref{eq3}). The Syn$\_$RAM is used to store \emph{\textbf{s}} in Eq.~(\ref{eq3}). In the decode decision unit, Eq.~(\ref{eq5}) is executed, and we obtain the vector $\hat{ \emph{\textbf{u}}}$ and its syndrome $\hat{ \emph{\textbf{s}}}$ by computing $\hat{ \emph{\textbf{s}}}$ = $\hat{ \emph{\textbf{u}}}$\emph{\textbf{H}}$^{T}$. Then, we read \emph{\textbf{s}} from the Syn$\_$RAM. If \emph{\textbf{s}} is equal to $\hat{ \emph{\textbf{s}}}$, the residue error-bits erase module just outputs $\hat{ \emph{\textbf{u}}}$. Otherwise, the residue error-bits erase module starts erasing the residual error-bits in $\hat{ \emph{\textbf{u}}}$ and then outputs $\hat{ \emph{\textbf{u}}}$. \begin{figure*}[t] \includegraphics[width = 15cm]{./Figure/fig2} \caption{\label{fig2}The proposed architecture of the FPGA-based LDPC decoder. \emph{\textbf{R}} is the data received from the channel. The global controller unit deals with the whole procedure of decoding, and $\hat{ \emph{\textbf{u}}}$ is the decoding result.} \end{figure*} \subsection{\label{sub:level1}Datapath} The receiving unit output $finish$\_$storing$ indicates that it has finished storing data into the variable message storage RAM. Then, the global controller unit uses the signal $start$ to make the Addr$\_$gen unit generate the read addresses of the variable message storage RAM, and the iterative decoding starts. Data $L_{q_{n}}$ are read from the variable message storage RAM. By using the shift-right unit, we obtain the data needed in the current clock period. Then, the VNU is used to execute Eq.~(\ref{eq2}). The output of the VNU is used as the input of the CNU and is stored in the FIFO buffer$\_$2 at the same time. The function of Eq.~(\ref{eq3}) is realized in the CNU, and the output of this unit is used as the input of the total message processing unit and is written back to the check message storage RAM. The total message processing unit executes Eq.~(\ref{eq4}), and its output is processed by the shift$\_$left unit to restore the right order of $L_{q_{n}}$. Next, $L_{q_{n}}$ is written back to the variable message storage RAM. After the write finishes, an iteration is completed, and the value of $Iter$\_$num$ is incremented by one. When the value of $Iter$\_$num$ is equal to the maximum number of iterations, the decoding stops. \subsection{\label{sub:level2}Fixed-point number} When implementing the decoder on the FPGA, we adopt fixed-point numbers to reduce the implementation complexity. The width of the fixed-point number is \emph{w} = 1 + \emph{I} + \emph{F}: one bit for the sign, \emph{I} bits for the integer part and \emph{F} bits for the fraction part. In the design of the decoder, we can reduce the consumption of hardware resources by reducing the width of the fixed-point number.
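To make the roles of \emph{w}, \emph{I} and \emph{F} concrete, the following small Python sketch shows one possible saturating quantizer for this format; it is an illustration we added, and the actual rounding and saturation rules of the hardware are not specified here. For \emph{w} = 8 with \emph{I} = 4 and \emph{F} = 3, the integer codes lie in $[-127, 127]$, matching the range quoted for $L_{q_{n}}$ later in the text.
\begin{verbatim}
def quantize(x, I=4, F=3):
    """Quantize a real LLR to a signed fixed-point value with 1 sign bit,
    I integer bits and F fraction bits (w = 1 + I + F), with saturation.
    For I = 4, F = 3 the representable range is [-15.875, 15.875] and the
    integer codes lie in [-127, 127]."""
    scale = 1 << F                      # 2**F quantization steps per unit
    max_code = (1 << (I + F)) - 1       # largest magnitude code (127 for w = 8)
    code = int(round(x * scale))
    code = max(-max_code, min(max_code, code))
    return code / scale                 # value actually used by the decoder

print(quantize(3.14159))    # 3.125
print(quantize(-27.0))      # -15.875  (saturated)
\end{verbatim}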
\subsection{\label{sub:level3}Partial parallel and pipeline} Parallelism is useful for improving the decoding throughput. However, the achievable parallelism is restricted by the resources of the FPGA. An MET-LDPC check matrix with a proper quasi-cyclic structure is chosen to make full use of the parallelism. A pipeline structure is often adopted together with the parallelism to improve the throughput. We can minimize the number of clock cycles required in the overall decoding process by using this structure. Because there is a delay between reading data from the variable message storage RAM and writing them back, a suitable parity-check matrix is needed to avoid read/write conflicts in the pipeline structure. \subsection{\label{sub:level4}Residue error-bits erase module} As mentioned above, we can reduce the consumption of hardware resources by reducing the width of the fixed-point number. However, reducing the width of the fixed-point number also decreases its accuracy, which degrades the decoding performance. A rate-0.2 QC-LDPC code is taken as an example. The width of the fixed-point number is set as \emph{w} = 8 (one sign bit, 4 integer bits and 3 fraction bits), and the maximum number of decoding iterations $t_{max}$ = 13 is adopted to design the decoder. As shown in Fig.~\ref{fig3}, the red curve shows the performance of this decoder, with 100 frames decoded at each SNR. We can see that the FER is extremely high, even in the high-SNR region. The green curve shows the decoding performance of this code when floating-point numbers are adopted in the iterative decoding, and we can see the FER decreases obviously. It should be mentioned that the high FER is not limited by the number of decoding iterations, so the FER cannot be reduced just by increasing the iterations. In order to improve the decoding performance of this design, we first analyze the decoding results. The reliability value is defined as the absolute value of $L_{q_{n}}$ in Eq.~(\ref{eq5}). When SNR = 0.37, four failed codewords are analyzed, and the corresponding reliability values of their decoded symbols are shown in Fig.~\ref{fig4}. The erroneous symbols are marked with red asterisks and the correct ones with blue asterisks. It can be found that the reliability values of erroneous symbols are noticeably smaller than those of the correct ones. To further reveal the features of these decoded symbols, histograms of the correct symbols and erroneous symbols are drawn in Fig.~\ref{fig5}(a) and Fig.~\ref{fig5}(b). Hence, for a failed codeword, we can set a threshold $\Delta$: if the reliability value of a symbol is smaller than $\Delta$, it is added to the set of suspicious symbols, since it is an erroneous symbol with high probability. For the rate-0.2 QC-LDPC code, we set the threshold $\Delta$ to be 40 (the value of $L_{q_{n}}$ is in the range $[-127, 127]$ when $w = 8$). For each SNR in Fig.~\ref{fig3}, 100 codewords are decoded. The parameter $N$\_$err$ is defined as the number of failed codewords whose maximum reliability value over the erroneous symbols is smaller than $\Delta$; it is shown in Table~\ref{tab:tab1}. It can be found that most failed codewords are included, especially for relatively large SNRs.
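The selection rule based on $\Delta$ can be illustrated with the following Python sketch. The first helper only flags the suspicious positions; the second tries a naive flip-and-check search against the target syndrome. This brute-force search is purely illustrative and is \emph{not} the algorithm of \cite{r31} implemented in the residue error-bits erase module; it only shows how the threshold $\Delta$ narrows the search to a few suspicious positions.
\begin{verbatim}
import numpy as np
from itertools import combinations

def suspicious_positions(Lq, delta=40):
    """Indices of decoded symbols whose reliability |Lq| falls below delta
    (delta = 40 corresponds to the w = 8, rate-0.2 setting above)."""
    return np.nonzero(np.abs(np.asarray(Lq)) < delta)[0]

def toy_erase(u_hat, Lq, H, s, delta=40, max_flips=2):
    """Toy flip-and-check illustration (not the hardware algorithm):
    flip small subsets of the suspicious positions until the syndrome
    constraint u H^T = s is satisfied."""
    H = np.asarray(H, dtype=int)
    s = np.asarray(s, dtype=int)
    cand = suspicious_positions(Lq, delta)
    for k in range(1, max_flips + 1):
        for subset in combinations(cand, k):
            trial = np.array(u_hat, dtype=int)
            trial[list(subset)] ^= 1
            if np.array_equal((trial @ H.T) % 2, s):
                return trial           # residual error bits erased
    return np.asarray(u_hat)           # give up, keep the original estimate
\end{verbatim}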
\begin{figure}[htb] \centerline{\includegraphics[width = 9cm]{./Figure/fig3}} \caption{\label{fig3} The FER vs SNR curves of the decoder for a rate-0.2 QC-LDPC code. The code length is 80000. The range of SNR in this simulation is [0.355, 0.389] and the interval is 0.001. For each SNR, the decoding results of 100 codewords are collected under $t_{max}$ = 13, and the red/blue curve shows the decoding result of this decoder without/with the residue error-bits erase module. The green curve shows the decoding result of this code when floating-point numbers are adopted in the iterative decoding.} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[width = 9cm]{./Figure/fig4}} \caption{\label{fig4} The reliability values of the decoded LDPC codewords. The code rate is 0.2 and the code length is 80000. Four failed codewords are collected at SNR = 0.37. The erroneous symbols are marked with red asterisks and the correct ones with blue asterisks.} \end{figure} \begin{figure}[htb] {\label{fig5a}{\includegraphics[width = 9cm]{./Figure/fig5a}}} {\label{fig5b}{\includegraphics[width = 9cm]{./Figure/fig5b}}} \caption{\label{fig5}Histograms of decoded symbols. (a): correct symbols; (b): erroneous symbols. Four failed codewords are collected at SNR = 0.37.} \end{figure} \begin{table*}[htb] \caption{\label{tab:tab1}The number of failed codewords and the value of $N$\_$err$. For each SNR, the decoding results of 100 codewords are collected with $t_{max}$ = 13 and $\Delta$ = 40.} \begin{ruledtabular} \begin{tabular}{ccccccccc} SNR & $0.368$ & $0.371$ & $0.374$ & $0.377$&$0.380$ &$0.383$& $0.386$& $0.389$\\ \hline Number of failed codewords&$97$&$93$&$91$&$88$&$84$ &$82$&$79$&$79$ \\ $N$\_$err$&$44$&$51$&$59$&$63$&$64$ &$79$&$68$&$70$ \\ \end{tabular} \end{ruledtabular} \end{table*} The distribution of reliability values is similar to that described in \cite{r31}, where it is caused by trapping sets. In \cite{r31}, trapping sets influence the error floor of LDPC codes at high SNR. In our situation, the high FER occurs not only in the error-floor region but also in the waterfall region; it is not caused by trapping sets but by the limited precision of the decoder. Nevertheless, we can use the method of \cite{r31} to erase the residual error-bits, and we implement this method in the residue error-bits erase module. When the LDPC decoder with the residue error-bits erase module fails decoding, this module starts running to erase the residual error-bits. The blue curve in Fig.~\ref{fig3} shows the final decoding performance when $\Delta$ is set to 40 in the residue error-bits erase module. We can observe that the decoding performance is improved greatly. In fact, a different $\Delta$ could be chosen for each SNR; however, extensive simulations show that the performance improvement due to such modifications of $\Delta$ is not obvious. Therefore, in this paper, a fixed $\Delta$ is adopted for all SNRs for simplicity. In a similar way, for a rate-0.1 QC-LDPC code, we set $w$ = 10 (one sign bit, 4 integer bits and 5 fraction bits) and $\Delta$ = 180 (the value of $L_{q_{n}}$ is in the range $[-511, 511]$ when $w$ = 10). The decoding performance (the FER vs SNR curves) of this decoder with/without the residue error-bits erase module is studied. As shown in Fig.~\ref{fig6}, it can be seen that the residue error-bits erase module also improves the decoding performance dramatically for this code. \begin{figure}[htb] \centerline{\includegraphics[width = 9cm]{./Figure/fig6}} \caption{\label{fig6} The FER vs SNR curves of the decoder for a rate-0.1 QC-LDPC code. The code length is 96000. The range of SNR in this simulation is [0.155, 0.180] and the interval is 0.001. For each SNR, the decoding results of 100 codewords are collected under $t_{max}$ = 25, and the red/blue curve shows the decoding result of this decoder without/with the residue error-bits erase module. 
The green curve shows the decoding result of this code when floating-point numbers are adopted in the iterative decoding.} \end{figure} \section{\label{sec:level4}IMPLEMENTATION RESULTS} In this section, the implementation results of the FPGA-based LDPC decoder are shown. A Xilinx VC709 evaluation board with a Virtex-7 XC7VX690T FPGA is chosen to implement the decoder. \subsection{\label{sub:level1}Throughput} The throughput of the decoder is a bottleneck for the real-time performance of a CV-QKD system. As a result, a higher throughput is the aim when designing an FPGA-based decoder. The decoding throughput can be estimated by \begin{equation}\label{eq6} \begin{aligned} T=\frac{f\cdot N}{K\cdot t_{max} }, \end{aligned} \end{equation} where $f$ is the clock frequency of the FPGA, $N$ is the code length, $K$ is the number of clock cycles used to finish one decoding iteration and $t_{max}$ is the maximum number of decoding iterations. Because we adopt a pipeline structure in the decoder, the parameter $K$ is roughly equal to the number of clock cycles used to read data from the variable message storage RAM. We use $N_{total}$ to represent the number of nonzero elements in \emph{\textbf{H}} and $N_{avr}$ to represent the average row weight of \emph{\textbf{H}}. Then, the decoding throughput can be described as \cite{r30} \begin{equation}\label{eq7} \begin{aligned} T\approx \frac{f\cdot N}{(N_{total}/p )t_{max} } \approx \frac{f\cdot N\cdot p}{N\cdot (1-R)\cdot N_{avr}\cdot t_{max} } \\ \approx \frac{f\cdot p}{(1-R)\cdot N_{avr}\cdot t_{max}}, \end{aligned} \end{equation} where $p$ is the parallelism and $R$ is the code rate of the LDPC code. From Eq.~(\ref{eq7}), we can see that, for a given LDPC code, the throughput can be improved by using a higher clock frequency, enlarging the parallelism parameter $p$ and reducing the maximum number of decoding iterations. To decrease the FER of the proposed decoder, we add a module to erase the error bits after the iterative decoding. $D_e$ is used to represent the clock cycle delay of the residue error-bits erase module, and Eq.~(\ref{eq7}) can be modified as \begin{equation}\label{eq8} \begin{aligned} T\approx \frac{f\cdot p}{(1-R)\cdot N_{avr}\cdot t_{max}+D_e}. \end{aligned} \end{equation}
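A direct transcription of Eq.~(\ref{eq8}) into Python is given below as a convenience for quick estimates. The values in the example call are chosen only to illustrate the units: the clock frequency, parallelism and iteration count follow the rate-0.2 configuration reported in Table~\ref{tab:tab2}, while the average row weight and $D_e$ are placeholders and are not the values of the codes used in this work.
\begin{verbatim}
def decoder_throughput(f_hz, p, R, N_avr, t_max, D_e=0):
    """Estimated decoder throughput from Eq. (8):
    T ~ f * p / ((1 - R) * N_avr * t_max + D_e).
    f_hz: clock frequency, p: parallelism, R: code rate,
    N_avr: average row weight of H, t_max: max iterations,
    D_e: clock-cycle overhead of the residue error-bits erase module."""
    return f_hz * p / ((1.0 - R) * N_avr * t_max + D_e)

# hypothetical numbers for illustration only (f = 100 MHz, p = 100,
# rate 0.2, t_max = 13); N_avr and D_e are placeholders
T = decoder_throughput(f_hz=100e6, p=100, R=0.2, N_avr=4.0, t_max=13, D_e=0)
print(f"{T/1e6:.1f} Mbps")
\end{verbatim}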
\subsection{\label{sub:level2}Decoding performance} In the following, we investigate the performance of two QC-LDPC codes with code lengths of 96000 and 80000 and code rates of 0.1 and 0.2 by using the decoding algorithm described in Section~\ref{sec:level2}. Table~\ref{tab:tab2} presents the parameters of the FPGA-based LDPC decoder for the two QC-LDPC codes and compares them with previous work. \begin{table*}[htb] \caption{\label{tab:tab2}Performance of the proposed LDPC decoder and its comparison with previous work.} \begin{ruledtabular} \begin{tabular}{ccccc} Decoder & \multicolumn{2}{c}{Yang et al. \cite{r30}}&\multicolumn{2}{c}{This work} \\ \hline FPGA device &\multicolumn{2}{c}{Virtex-7 XC7VX690} &\multicolumn{2}{c}{Virtex-7 XC7VX690} \\ Algorithm &\multicolumn{2}{c}{Layered BP decoding} &\multicolumn{2}{c}{Layered BP decoding} \\ Code rate &0.43 &0.115 &0.1 &0.2\\ Parallel parameter $p$ &64 &64 &100 &100\\ Width of fixed-point number (bits) &19 &19 &10 &8\\ BRAMs of one decoder (kb) &31680 (59.9$\%$) &30636 (57.9$\%$) &22863 (43.21$\%$) &18144 (34.29$\%$)\\ Residue error-bits erase module &No &No &Yes &Yes\\ Number of decoders &1 &1 &2 &2\\ Clock frequency (MHz) &100 &100 &100 &100\\ Throughput (Mbps) &108.64 &70.32 &194.65 &360.92\\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*}[htb] \caption{\label{tab:tab3}The optimization results of the secret key rate. The parameter $G_K$ represents the improvement of $K_{opt}$ obtained by using the residue error-bits erase module.} \begin{ruledtabular} \begin{tabular}{ccccccc} $R$ & $d$ (km)&Residue error-bits erase module &$V_A$ &SNR&$K_{opt}$ (bits/pulse)&$G_K$\\ \hline \multirow{2}{*}{0.2} &\multirow{2}{*}{25} &Yes &6.3692 &0.3707 &0.0498 &\multirow{2}{*}{425.64$\%$}\\ & &No &2.1001 &0.3862 &0.0117 &\\ \multirow{2}{*}{0.1} &\multirow{2}{*}{50} &Yes &2.9221 &0.1701 &0.0128 &\multirow{2}{*}{139.13$\%$}\\ & &No &2.9537 &0.1719 &0.0092 &\\ \end{tabular} \end{ruledtabular} \end{table*} The width of the fixed-point number in this paper is smaller, so fewer on-chip storage resources (BRAM, under 50$\%$) are consumed in our scheme even though the parallelism parameter $p$ is higher. Hence, we can instantiate two decoders on the FPGA. To improve the decoding performance, we add a residue error-bits erase module to decrease the FER. \subsection{\label{sub:level3}Computation complexity and security} For the residue error-bits erase module, the resource consumption mainly comes from computing syndromes and the row weights of some matrices, all of which can be implemented with shift operations and XORs. Therefore, the overall computational complexity of this module is low. For the rate-0.2 LDPC code, the clock-cycle consumption of the residue error-bits erase module is approximately equal to 2.4 decoding iterations. For the rate-0.1 LDPC code, the consumption is approximately equal to 3 decoding iterations. Compared with the BP decoding itself, the residue error-bits erase module therefore has only a small influence on the throughput of the proposed decoder. Consequently, the residue error-bits erase module is effective and efficient in reducing the FER. Besides, the residue error-bits erase module operates only on the decoding results and requires no extra information exchange. Hence, this module brings no extra security problems to the CV-QKD system. \section{\label{sec:level5}INFLUENCE ON THE SECRET KEY RATE} The secret key rate $K_t$ is the key performance indicator for QKD systems. The secret key rate per pulse of a CV-QKD system is given by \cite{r33,r34,r35} \begin{equation}\label{eq9} \begin{aligned} K_t= (1-FER)(\beta{I_{AB}}-\chi{_{BE}}), \end{aligned} \end{equation} where $\beta$ is the reconciliation efficiency, $I_{AB}$ is the mutual information between the two participants Alice and Bob, and $\chi_{BE}$ is the mutual information between Bob and Eve. As explained in \cite{r36}, $I_{AB}$ and $\chi_{BE}$ can be described as functions of the modulation variance $V_{A}$, that is, ${I_{AB}} =f_{I_{AB}}(V_{A})$ and $\chi_{BE} =f_{\chi_{BE}}(V_{A})$. Besides, given the error-correcting code and the information reconciliation algorithm, the reconciliation efficiency can also be described as a function of $V_{A}$: \begin{equation}\label{eq10} \begin{aligned} \beta=\frac{R}{0.5\log_{2}{(1+SNR)} }=f_{\beta}(V_A). \end{aligned} \end{equation} The blue curve in Fig.~\ref{fig3} shows the relationship between the FER and the SNR for the rate-0.2 QC-LDPC code. Because the SNR is a function of $V_{A}$, we can obtain the fitting function $f_{FER}(V_A)$ as in \cite{r36}. Thus, the secret key rate $K_t$ can be written as \begin{equation}\label{eq11} \begin{aligned} K_t=(1-f_{FER}(V_A))(f_{\beta}(V_A)f_{I_{AB}}(V_A)-f_{\chi_{BE} }(V_A)). \end{aligned} \end{equation} From Eq.~(\ref{eq11}), we can get the maximum secret key rate $K_{opt}$ by optimizing the parameter $V_{A}$.
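The optimization of $V_A$ in Eq.~(\ref{eq11}) can be sketched by a simple grid search, as in the following Python snippet. The callables passed in stand for the system model and the FER fit discussed above; they are placeholders for this illustration and are not specified in this paper.
\begin{verbatim}
import numpy as np

def optimal_skr(f_iab, f_chibe, f_fer, f_snr, R, va_grid):
    """Grid-search sketch of Eq. (11):
    K_t(V_A) = (1 - FER)(beta * I_AB - chi_BE),
    with beta = R / (0.5*log2(1 + SNR)) from Eq. (10).
    f_iab, f_chibe, f_fer, f_snr are placeholder callables."""
    best_va, best_k = None, -np.inf
    for va in va_grid:
        snr = f_snr(va)
        beta = R / (0.5 * np.log2(1.0 + snr))                      # Eq. (10)
        k = (1.0 - f_fer(va)) * (beta * f_iab(va) - f_chibe(va))   # Eq. (11)
        if k > best_k:
            best_va, best_k = va, k
    return best_va, best_k

# usage sketch (the arguments below are purely illustrative stand-ins):
# va_opt, k_opt = optimal_skr(f_iab, f_chibe, f_fer, f_snr,
#                             R=0.2, va_grid=np.linspace(1.0, 10.0, 500))
\end{verbatim}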
In a similar way, according to Fig.~\ref{fig6}, we obtain the fitting function $f_{FER}(V_A)$ for the rate-0.1 QC-LDPC code and its optimal secret key rate. Table~\ref{tab:tab3} shows the optimization results of the secret key rate obtained with the method of \cite{r36} for the decoder with/without the residue error-bits erase module. We observe that, with the residue error-bits erase module, the optimal secret key rate is improved to 425.64$\%$ and 139.13$\%$ of the value without the module, respectively. Since the real-time throughput of CV-QKD is determined by the error-correction throughput, the supported repetition rates are 360.92 Mpulse/s and 194.65 Mpulse/s, correspondingly. As a consequence, the proposed method can support 17.97 Mbps and 2.48 Mbps real-time generation of secret keys under transmission distances of 25 km and 50 km (obtained by multiplying the optimal secret key rate by the repetition rate). \section{\label{sec:level6}CONCLUSION} In this paper, an FPGA-based LDPC decoder with limited precision for CV-QKD is proposed. Fixed-point numbers with fewer bits are adopted to reduce the consumption of hardware resources. A residue error-bits erase module is used to reduce the FER caused by the finite precision of the decoder. For the codes of rates 0.2 and 0.1, the secret key rate reaches 425.64$\%$ and 139.13$\%$ of that obtained without the residue error-bits erase module, and the corresponding decoder throughputs are 360.92 Mbps and 194.65 Mbps, which can support 17.97 Mbps and 2.48 Mbps real-time generation of secret keys for CV-QKD systems under distances of 25 km and 50 km. The method proposed in this paper provides an effective and efficient way to support high-speed real-time CV-QKD, and paves the way for high-rate real-time CV-QKD deployment in secure metropolitan area networks. \begin{acknowledgments} This work was supported in part by the National Natural Science Foundation of China (Grant Nos 61901425, U19A2076, 62171418 and 62101516), the National Key Research and Development Program of China (Grant No. 2020YFA030970X), the Technology Innovation and Development Foundation of China Cyber Security (Grant No. JSCX2021JC001), the Chengdu Major Science and Technology Innovation Program (Grant No. 2021-YF08-00040-GX), the Chengdu Key Research and Development Support Program (Grants Nos 2021-YF05-02430-GX and 2021-YF09-00116-GX), the Sichuan Science and Technology Program (Grants Nos 2022YFG0330 and 2021YJ0313) and the Foundation and Advanced Research Projects of Chongqing Municipal Science and Technology Commission (China) under Grant cstc2019jcyjmsxmX0110. \end{acknowledgments}
\section{Analysis of the hyperbolicity in the closed curves} In this section we prove that an open and dense subset of the vector fields having a robustly chain transitive chain recurrence class $C$ has a dominated splitting over $B(C)$. By \cite{BdL}, it is enough to obtain a dominated splitting over $\widetilde{\La(C)}$, and since for an isolating neighborhood $U$ of $C$ we have that $\widetilde{\La(C)}=\widetilde{\La}(X,U)$, we just need to find a dominated splitting over $\widetilde{\La}(X,U)$, which is a rather straightforward consequence of already existing results. \subsection{The set of periodic orbits}\label{s.per} In a series of papers such as \cite{DPU}, \cite{BDP}, \cite{BB} and \cite{BGV} it is shown that if a set of periodic orbits, containing periodic orbits of arbitrarily long periods, does not have a dominated splitting, then there is a perturbation of the flow having an infinite number of sinks and sources. We state two of these results below: \begin{lemm}[\cite{BGV} Theorem 2.2]\label{l.perdom} Let $\cA$ be a bounded linear cocycle over $\pi:E\to\Sigma$, where $\Sigma$ is a set of periodic orbits containing periodic orbits of arbitrarily long periods. If $\cA$ does not admit any dominated splitting, then there exists a perturbation $\cB$ of $\cA$ such that $E$ is contracted or expanded by $\cB$ along the orbit. \end{lemm} \begin{lemm}[\cite{BDP} lemma 6.1]\label{l.percontr} Let $\cA$ be a bounded linear cocycle over $\pi:E\to\Sigma$, where $\Sigma$ is a set of periodic orbits containing periodic orbits of arbitrarily long periods. Let $T(x)$ denote the period of $x\in\Sigma$. Suppose that \begin{itemize} \item $\cA$ admits a dominated splitting $E=F_1\oplus_{\prec}F_2\,,$ \item $\cA$ does not admit a dominated splitting of $F_1$, \item there exists a point $p$ such that $\det (A^{T(p)}\mid_{F_1}(p))>1$. \end{itemize} Then there exist a perturbation $\cB$ of $\cA$ and $q\in\Sigma$ such that all eigenvalues of $A^{T(q)}\mid_{F_1}(q)$ are positive. \end{lemm} \begin{rema}\label{r.percontr} Both of these results hold if we consider the linear Poincar\'{e} flow as the cocycle and the time-$t$ map of the flow as the diffeomorphism. Therefore the set of periodic orbits in a robustly chain transitive set $C$ has a dominated splitting, and the finest of these dominated splittings is such that the extremal bundles contract and expand volume. \end{rema} \begin{lemm}\label{l.chinesdom} There is an open and dense subset of the vector fields $X\in\cX^1(M)$ with a robustly chain transitive set $C$ in a filtrating neighborhood $U\subset M$ such that the set $\widetilde{\La}(X,U)$ admits a finest dominated splitting of the normal bundle for the extended linear Poincar\'e flow. \end{lemm} \proof Take $X$ a vector field as in the hypothesis of Theorem \ref{ConConLem}. Then we can find a sequence of vector fields $Y_n$ converging to $X$ such that the set $C$ is the Hausdorff limit of the periodic orbits of the $Y_n$. Since $C$ is robustly chain transitive, these periodic orbits must be related, so for $Y_n$, $C$ is the closure of $U\cap Per(Y_n)$. From Lemma \ref{l.perdom} the set $\set{<Y_n(x)>\in\PP M \text{ such that } x\in \La }$ has a uniform finest dominated splitting for the extended linear Poincar\'{e} flow (note that the projection to $M$ here is one to one). Therefore, the extended linear Poincar\'{e} flow has a dominated splitting over the set $$\overline{\set{<Y_n(x)>\in\PP M \text{ such that } x\in C}}\,,$$ since a uniform dominated splitting extends to the closure. 
Using Theorem \ref{ConConLem} again, the sequence $Y_n$ can be taken so that the Hausdorff limit of $$\overline{\set{<Y_n(x)>\in\PP M \text{ such that } x\in C }}\,,$$ is exactly the set $\widetilde{\La(X,U)}$, since the upper limit coincides with the Hausdorff limit whenever the latter exists. Therefore $\widetilde{\La(X,U)}$ has a dominated splitting of the same dimension for the extended linear Poincar\'{e} flow of $X$. \endproof \section{Analyzing the singularities}\label{s.sings} The main problem we need to deal with now is the distortion of the contraction and expansion rates that occurs when the periodic orbits approach the singularities. For this we will use again the reparametrized linear Poincar\'{e} flow. With this tool, the main work in this section will be to find out which singularities need to be reparametrized. After this, and following mainly \cite{BDP}, we show that all closed orbits of an open set of vector fields have the desired structure. Then, in the next section, we follow the classical strategy: we use the ergodic closing lemma to argue that if the robustly chain transitive set did not have the desired structure, then there would be a critical element of a perturbed vector field that does not have that structure either. This contradiction gives us our main Theorem \ref{t.BDP}. \subsection{Center space of the singularities and the dominated splitting on $\widetilde{\La}(X,U)$} Let us consider a singularity $\sigma\in C$ and the following splitting of its tangent space: $$E^{ss}\oplus E^c\oplus E^{uu}\,,$$ where the factors denote the escaping stable, the center and the escaping unstable spaces, respectively. We suppose the singularities to be hyperbolic. Recall that from Lemma \ref{l.chinesdom} there is a finest dominated splitting for the extended linear Poincar\'{e} flow over $\widetilde{\La}(X,U)$. We write it as follows: $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u\,,$$ where $L$ is a direction in $\widetilde{\La}(X,U)$. We denote by $\pi_L:T_xM\to \cN_L$, where $L\in\mathbb{P}_xM$, the projection onto the normal space of a given direction $L$. The next lemma from \cite{BdL} tells us that there is a relation between the splitting at the singularities into escaping and center spaces, and the dominated splitting of the class for the linear Poincar\'e flow. \begin{lemm}[BdL]\label{l.central} Let us consider the set of vector fields $\cV$ such that every vector field $X\in\cV$ has a singular chain class $C_{\sigma}$ with the following properties, where we denote $S= Sing(X)\cap C_{\sigma}$: \begin{itemize} \item every $\sigma\in S$ is hyperbolic, \item the dimension of the central space of $\sigma\in S$ is locally constant, \item the extended linear Poincar\'{e} flow over $\widetilde{\La}(C(\sigma))$ has a dominated splitting, $$\cN_L=\cN^E\oplus\cN^F\,,$$ where $L$ is any direction in $\widetilde{\La}(C(\sigma))$. \end{itemize} Let $L$ be a direction in $\widetilde{\La}\cap\PP_{\sigma}M$. Then there is an open and dense subset of $\cV$, denoted $\cU\subset\cV$, such that for every $X\in\cU$, $$\pi_L ( E_{\sigma}^c)\subset\cN_L^E\,,$$ or $$\pi_L (E_{\sigma}^c)\subset\cN_L^F\,.$$ \end{lemm} The next corollary is a direct consequence of the previous lemma. 
\begin{coro} Let us consider the set of vector fields $\cV$ such that every vector field $X\in\cV$ has a singular chain class $C_{\sigma}$ with the following properties, where we denote $S= Sing(X)\cap C_{\sigma}$: \begin{itemize} \item every $\sigma\in S$ is hyperbolic, \item the dimension of the central space of $\sigma\in S$ is locally constant, \item the extended linear Poincar\'{e} flow over $\widetilde{\La}(C(\sigma))$ has a finest dominated splitting $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u\,.$$ \end{itemize} Let $L$ be a direction in $\widetilde{\La}(X,U)\cap\PP_{\sigma}M$ with $L=<u>$. Then there is an open and dense subset of $\cV$, denoted $\cU\subset\cV$, such that for every $X\in\cU$, $$\pi_L ( E_{\sigma}^c)\cap\cN_L^s=\emptyset\,,$$ or $$\pi_L ( E_{s1}\oplus\dots\oplus E_{sk}\oplus E_{u1})\subset\cN_L^s\,.$$ \end{coro} Now we can give a more precise definition of the sets in Definition \ref{defisvh}. \begin{defi} Let $X$ be a $C^1$ vector field such that there is an open set $U$ for which the maximal invariant set in $U$ is a robustly chain transitive chain recurrence class. Let us consider the finest dominated splitting for the set $\widetilde{\La}(X,U)$, $$\cN_L=\cN^s_L\oplus\dots\oplus \cN^u_L\,.$$ We define $S_{Ec}$ as the set of singularities $$S_{Ec}=\set{\sigma\in Sing(X)\cap U \text{ such that } \dim(\cN^s_L)>\dim(E^{ss}_{\sigma})}\,.$$ Similarly, we define $S_{Fc}$ as the set of singularities $$S_{Fc}=\set{\sigma\in Sing(X)\cap U \text{ such that } \dim(\cN^u_L)>\dim(E^{uu}_{\sigma})}\,.$$ \end{defi} \begin{rema} This definition makes the sets in Definition \ref{defisvh} uniquely defined and disjoint. \end{rema} \subsection{Volume contraction at the singularities} Recall that for a hyperbolic singularity we write $$T_{\sigma}M=E^{ss}\oplus E^c\oplus E^{uu}\,,$$ where the factors denote the escaping stable, the center and the escaping unstable spaces, and we write the center space as $$E_{\sigma}^c= E_{s1}\oplus\dots\oplus E_{sk}\oplus E_{u1}\oplus\dots\oplus E_{ul}.$$ \begin{lemm}\label{l.repcont} Let $\sigma$ be a singularity of $C$, a robustly transitive chain recurrence class with an isolating filtrating neighborhood $U$, and let $\Gamma =Orb(x)$ be a homoclinic orbit associated to $\sigma$. Assume as well that: \begin{itemize} \item there exists a sequence of vector fields $X_n$ converging to $X$ in the $C^1$ topology; \item there exists a sequence of periodic orbits $\gamma_n$ of $X_n$ such that $\gamma_n$ converges to $\Gamma$ in the Hausdorff topology; \item there is a finest dominated splitting over $\widetilde{\La}(X,U)$, $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u,$$ where $L\in\widetilde{\La}(X,U)$; we denote $\dim(\cN^u)=h$ and $\dim(\cN^s)=n$. \end{itemize} Then there is a space $E\subset T_{\sigma}M$ of dimension $n+1$ that contracts volume, and a space $F\subset T_{\sigma}M$ of dimension $h+1$ that expands volume. \end{lemm} \proof Let us recall that, from Remark \ref{r.percontr}, the splitting $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u$$ over the set of periodic orbits of $C$ is volume partial hyperbolic. This means that $\cN^s$ contracts volume and $\cN^u$ expands volume. Since, for any $x_n$ in $\gamma_n$, $X_n(x_n)$ does not contract or expand at the period, the space $\cN^s_L\oplus<X_n(x_n)>$ contracts volume and $<X_n(x_n)>\oplus\cN^u_L$ expands volume uniformly at the period. Since $\gamma_n$ tends to the homoclinic loop $\Gamma$, their periods must tend to infinity with $n$. 
For $n$ large enough, the uniform volume contraction gives constants $\nu>0$ and $T>0$ such that $$\prod^{\lfloor T_{\gamma_n}/T\rfloor-1}_{i=0}\det(D\phi_n^{T}\mid_{D\phi_n^{iT}(\cN^s_L\oplus<X_n(x_n)>)})\leq e^{-\nu T_{\gamma_n}}\,,$$ where $T_{\gamma_n}$ is the period of $\gamma_n$. Then for any $\gamma_n$, taking $T=1$, the Pliss Lemma (Remark \ref{plisspoint}) gives some point $p_n \in \gamma_n$ satisfying $$\frac{1}{k}\sum^{k}_{i=0}\log(\det(D\phi_n^{1}\mid_{D\phi_n^{i}(\cN^s_L\oplus<X_n(p_n)>)}))\leq -\nu \,.$$ Assume $p_n$ tends to $y\in\Gamma\cup\{\sigma\}$. One can assume $\cN^s_L\oplus<X_n(p_n)>\to E(y)$, and since $y$ is accumulated by Pliss points, again we have that $$\frac{1}{k}\sum^{k}_{i=0}\log(\det(D\phi^{1}\mid_{D\phi^{i}(E(y))}))\leq -\nu \,.$$ Now the Pliss Lemma again allows us to find $n_j\to\infty$ such that $$\frac{1}{k}\sum^{k}_{i=0}\log(\det(D\phi^{1}\mid_{D\phi^{i+n_j}(E(y))}))\leq -\nu \,.$$ Since $\phi^{n_j}(y)$ tends to $\sigma$ as $n_j\to\infty$, we derive a subspace $E\subset T_{\sigma}M$ with $\dim (E)=n+1$ such that $$\frac{1}{k}\sum^{k}_{i=0}\log(\det(D\phi^{1}\mid_{D\phi^{i}(E)}))\leq -\nu \,.$$ This shows that $E$ contracts volume, and the proof is analogous for $F$. \endproof The following corollary is a consequence of Lemma \ref{l.repcont}. \begin{coro}\label{c.repcont} Let $\cV$ be the set of $\cC^1$ vector fields $X$ with a robustly chain transitive class $C$ such that: \begin{itemize} \item every singularity in $C$ is hyperbolic and $$T_{\sigma}M=E^{ss}\oplus E^c\oplus E^{uu}\,,$$ denotes the splitting into the escaping stable, center and escaping unstable spaces; \item there is a finest dominated splitting over $\widetilde{\La}(X,U)$, $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u\,,$$ where $U$ is an isolating neighborhood of $C$, and $n=\dim( \cN^s)>\dim(E^{ss})$. \end{itemize} Then there is an open and dense subset $\cU\subset \cV$ such that for every $X\in \cU$ and every singularity $\sigma$ in $C$ there is an $(n+1)$-dimensional space $E\subset T_{\sigma}M$ that contracts volume. Moreover, $E^{ss}\oplus E^c\subset E$. \end{coro} \proof Let $X$ be a vector field such that \begin{itemize} \item every singularity in $C$ is hyperbolic and $$T_{\sigma}M=E^{ss}\oplus E^c\oplus E^{uu}\,,$$ denotes the splitting into the escaping stable, center and escaping unstable spaces; \item there is a finest dominated splitting over $\widetilde{\La}(X,U)$, $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u\,,$$ where $U$ is an isolating neighborhood of $C$, and $n=\dim( \cN^s)>\dim(E^{ss})$; \item all periodic orbits in $C$ are hyperbolic. \end{itemize} Note that such vector fields form a dense subset of the vector fields $X$ with a robustly chain transitive class $C$. By the Connecting Lemma \ref{l.contecting} we can find a vector field $Y$ that is $\epsilon$-$C^1$-close to $X$ and equal to $X$ in a neighborhood of $\sigma$, such that there is a homoclinic orbit $\Gamma =Orb(x)$ associated to $\sigma$. Now Theorem \ref{ConConLem} allows us to find a sequence of vector fields such that: \begin{itemize} \item there exists a sequence of star vector fields $Y_n$ converging to $Y$ in the $C^1$ topology; \item there exists a sequence of periodic orbits $\gamma_n$ of $Y_n$ such that $\gamma_n$ converges to $\Gamma$ in the Hausdorff topology; \item there is a sequence of points $q_n\in\gamma_n$ such that $Y_n(q_n)\to u$ with $u\in E^{u1}\oplus\dots\oplus E^{ul}$; 
\item we call $L_u=<u>$ and we have $\dim(\cN^u_{L_u})=h$; \item there is a sequence of points $p_n\in\gamma_n$ such that $<Y_n(p_n)>\to <v>$ with $v\in E^{s1}\oplus\dots\oplus E^{sn}$; \item we call $L_v=<v>$ and we have $\dim(\cN^s_{L_v})=n$. \end{itemize} Since $Y$ now satisfies the conditions of Lemma \ref{l.repcont}, there is an invariant space $E$ of dimension $n+1$ that contracts volume. Since, from Lemma \ref{l.central}, $\dim(E^{ss}\oplus E^c)\leq n+1$, we get $E^{ss}\oplus E^c\subset E$. Since $X$ and $Y$ coincide in a neighborhood of the singularity, and since the volume contraction of a subspace of $T_{\sigma}M$ is an open property, we get our result. \endproof As a direct consequence we have: \begin{coro} Let $\sigma$ be a singularity of $C$, a robustly chain transitive class with all singularities hyperbolic, with a finest dominated splitting over $\widetilde{\La}$ $$\cN_L=\cN^s\oplus\cN^1\oplus\dots\oplus\cN^{k}\oplus\cN^u\,.$$ Suppose that $n=\dim( \cN^s)>\dim(E^{ss})$. Then $E^{cs}=E^{ss}\oplus E_{\sigma}^c$ contracts volume. \end{coro} The following corollary summarizes the situation. \begin{coro} Let $\sigma$ be a singularity in our chain recurrence class and define $E^{cs}=E^{ss}\oplus E_{\sigma}^c$. We have two possibilities: \begin{itemize} \item either $\pi_L ( E^{c}_{\sigma})\cap\cN_L^s=\emptyset\,,$ and then there is a space $\cN^{ss}$ over the singularity that contracts uniformly, or \item $E^{cs}\subset \cN^{ss}\oplus L\subset E$ and $E$ contracts volume, for $L\in E^c_{\sigma}$. \end{itemize} \end{coro} \section{Introduction} \subsection{General setting and historical presentation} While studying nature, the parameters of the systems in question are determined by measurements that, being made by humans, inevitably introduce some error. It is natural to ask how resistant our conclusions are to this inevitable constraint. A \emph{robust property} is a property that cannot be destroyed by small perturbations of a system; in other words, a dynamical property is robust if it holds on a (non-empty) $\cC^1$ open set of diffeomorphisms or flows. If the goal is to study systems that present robust properties, we need to ask which robust properties we are interested in. In this regard we might not be interested in all the orbits of our manifold, but rather in a subset that can be shown to contain the relevant dynamical information: the chain recurrence classes. For this we consider pseudo orbits, which generalize orbits: at every iterate we allow an $\epsilon$ mistake or jump, and then follow this new orbit (a precise definition is presented in the preliminaries section). In this sense, the chain recurrence set is the set of recurrent pseudo orbits, and the chain recurrence classes are compact invariant sets of points that can be connected by pseudo orbits for any $\epsilon.$ It is shown by Conley in \cite{Co} that these chain classes play the role of fundamental pieces of the dynamics, and the rest of the orbits simply go from one of these pieces to another. Recall that a maximal invariant set is the intersection of the iterates of an open set. A chain recurrence class $C$ is said to be robustly transitive if there is a neighborhood $U\subset M$ and a neighborhood $\cU$ of $f$ such that, for any $g\in\cU$, the maximal invariant set in $U$ is a unique chain class $C$ and has a dense orbit. 
The definition of a robustly chain transitive set is a generalization of this notion, since one only asks that there is a neighborhood $U\subset M$ and a neighborhood $\cU$ of $f$ such that, for any $g\in\cU$, the maximal invariant set in $U$ is a unique chain class $C$. Reasoning as at the beginning of this introduction, it is also natural to ask: what is the situation when there are no robust properties? In a series of papers, first by Ma\~{n}\'{e} \cite{Ma} for surfaces, then \cite{DPU} for 3-manifolds, and culminating with \cite{BDP} for the more general result, a dichotomy is presented between some hyperbolic-like property and the appearance, by perturbation, of infinitely many sinks and sources. This shows that a chain recurrence class might split under perturbation into infinitely many classes, and no topological dynamical property is preserved in this process. The weak hyperbolic structure that forbids the appearance by perturbation of infinitely many sinks and sources was introduced by Ma\~ n\'e and Liao and is called \emph{dominated splitting}: \begin{defi} Let $f\colon M\to M$ be a diffeomorphism of a Riemannian manifold $M$ and $K\subset M$ a compact invariant set of $f$, that is, $f(K)=K$. A splitting $T_x M=E(x)\oplus F(x)$, for $x\in K$, is called dominated if \begin{itemize} \item $dim(E(x))$ is independent of $x\in K$, and this dimension is called the $s$-index of the splitting; \item it is $Df$-invariant: $E(f(x))=Df(E(x))$ and $F(f(x))=Df(F(x))$ for every $x\in K$; \item there is $n>0$ so that for every $x$ in $K$ and every unit vectors $u\in E(x)$ and $v\in F(x)$ one has $$\|Df^n(u)\|\leq \frac 12\|Df^n(v)\|.$$ \end{itemize} One denotes $TM|_K=E\oplus_{_<}F$ the dominated splitting. \end{defi} The results in \cite{BDP} are as follows. \begin{theo}(Theorem 1 in \cite{BDP})\label{BDP1} Let $P$ be a hyperbolic saddle of a diffeomorphism $f$ defined on a compact manifold M. Then \begin{itemize} \item either the homoclinic class $H(P, f)$ of $P$ admits a dominated splitting, \item or given any neighborhood $U$ of $H(P, f)$ and any $k \in \NN$ there is $g$ arbitrarily $\cC^1$-close to $f$ having $k$ sources or sinks, whose orbits are included in $U$. \end{itemize} \end{theo} \begin{theo}(Theorem 2 in \cite{BDP})\label{BDP2} Every $C^1$-robustly transitive set (or diffeomorphism) admits a dominated splitting. \end{theo} The conclusion also holds for an open and dense subset of vector fields and robustly chain transitive sets. This shows us that asking that the chain recurrence classes be preserved by perturbation induces a constraint on the dynamics of the vector field, that is, there must be a hyperbolic-like property on the tangent space. A generalization of this result, implying the version for flows, was given in \cite{BGV}. Also in \cite{BDP} it is shown that a robustly chain transitive set must have a weak hyperbolic structure, that is: \begin{theo}(Theorem 4 in \cite{BDP})\label{BDP3} Let $\La_f (U)$ be a $C^1$-robustly transitive set and $E^1\oplus\dots \oplus E^k$ be its finest dominated splitting. Then there exists $n \in \NN$ such that $Df^n$ contracts uniformly the volume in $E^1$ and expands uniformly the volume in $E^k$. \end{theo} A set $K$ is \emph{volume partial hyperbolic} if there is a dominated splitting $E^{cs}\oplus E^c\oplus E^{cu}$ so that the volume in $E^{cs}$ is uniformly contracted and the volume in $E^{cu}$ is uniformly expanded. The aim of this paper is to generalize these results to flows with singularities. 
However, the fact that in fifteen years this has not been done suggests that there are additional difficulties to deal with in this scenario. Firstly, a direct generalization of Theorem \ref{BDP1} is not possible even for nonsingular flows: there are transitive sets that do not have a dominated splitting of the tangent space (for instance, take the suspension of the example in \cite{BV}). Many hyperbolic structures for flows are not expressed in terms of the differential of the flow, but in terms of its transverse structure (the so-called \emph{linear Poincar\'e flow}). For nonsingular flows it has been shown that the results of Theorems \ref{BDP1} and \ref{BDP2} can be generalized (see \cite{V}, \cite{BGV} and \cite{D}). However, the linear Poincar\'e flow is only defined away from the singularities, and therefore it cannot be used directly for understanding our problem; and there are in fact open sets of vector fields having robustly chain transitive singular chain recurrence classes. \begin{itemize} \item There are many examples of robustly chain transitive singular chain recurrence classes with the extra property of having all periodic orbits robustly hyperbolic, which we call star flows (for instance the Lorenz attractor, among many others). In these cases much is known about their hyperbolic structures; see for instance \cite{MPP}, \cite{GLW}, \cite{GSW} and \cite{BdL}. \item In \cite{BLY} the authors present an example of a vector field with a chain transitive attractor in dimension 4. This attractor has periodic orbits of different stable indices and singularities. This example is not a star flow. \item The example just mentioned can be multiplied by a strong expansion so that it is no longer an attractor (the example would then live in a manifold of dimension 5). \end{itemize} For the last two items, the kind of weak hyperbolicity they carry (or fail to carry) was unknown. However, it is also evident that there are not that many examples of nontrivial singular chain recurrence classes outside the star flow setting. In \cite{GLW}, the authors define the notion of the \emph{extended linear Poincar\'e flow}, defined on some sort of blow-up of the singularities. The extension of the chain recurrence class $\La$ in $U$ to the blow-up will be denoted $\widetilde{\La}$. The definition of this blow-up of the singularities relies on information about the perturbations of the vector field, so that this set varies upper semi-continuously with the vector field in the $\cC^1$ topology. But in \cite{BdL} we propose a bigger set that (as opposed to the first one) does not depend on knowing any information from the neighboring vector fields. This set will be called the \emph{extended maximal invariant set} and denoted $B(X,U)$. We believe a dominated splitting should be a property that one can check without information on the surrounding vector fields. Our notion of \emph{singular volume hyperbolicity} will be expressed as the volume hyperbolicity of a well-chosen reparametrization of this extended linear Poincar\'e flow over $B(X,U)$. But it is enough to verify it over $\widetilde{\La}$, since in \cite{BdL} it is shown that they both carry the same hyperbolic properties. \subsection{The singular volume partial hyperbolicity} In this section we take a closer look at weaker forms of hyperbolicity and their relation with the persistence of the dynamical properties. 
Following the proofs in \cite{BDP} we show the following. \begin{prop}\label{t.periodic} Let $\cU\subset\cX^1(M)$ be a $C^1$-open set such that, for every $X\in \cU$, there is an open set $U$ of $M$ such that the maximal invariant set in $U$ is an isolated chain recurrence class $C$. Then $\widetilde{\La}$ has a uniform finest dominated splitting for the linear Poincar\'{e} flow: $$\cN_L=\cN_L^1\oplus\dots\oplus \cN_L^n\,.$$ Moreover, each of the periodic orbits involved in the definition of $\widetilde{\La}$ is volume partial hyperbolic for the tangent space. \end{prop} We divide the singular set $Sing(X)\cap U$ into subsets: \begin{itemize}\item the set $S_{Ec}$ of singular points whose escaping stable space has dimension smaller than that of $\cN_L^1$, \item the set $S_{E}$ of singular points whose escaping stable space has dimension greater than or equal to that of $\cN_L^1$, \item the set $S_{Fc}$ of singular points whose escaping unstable space has dimension smaller than that of $\cN_L^n$, \item the set $S_{F}$ of singular points whose escaping unstable space has dimension greater than or equal to that of $\cN_L^n$. \end{itemize} We want: \begin{itemize} \item to reparametrize the cocycle $\psi^t_{\cN}$ in restriction to $\cN_L^1$ by the expansion in the direction $L$ if and only if the line $L$ is based at a point close to $S_{Ec}$; \item to reparametrize the cocycle $\psi^t_{\cN}$ in restriction to $\cN_L^n$ by the expansion in the direction $L$ if and only if the line $L$ is based at a point close to $S_{Fc}$. \end{itemize} For this we again use a \emph{center-stable cocycle} $\{h^t_{Ec}\}_{t\in\RR}$, so that: \begin{itemize} \item $h_{Ec}^t(L)$ and $\frac 1{h_{Ec}^t(L)}$ are uniformly bounded (independently of $t$), if $L$ is based at a point $x$ so that $x$ and $\phi^t(x)$ are outside a small neighborhood of $S_{Ec}$, where $\phi^t$ denotes the flow of $X$; \item $h_{Ec}^t(L)$ is in a bounded ratio with the expansion of $\phi^t$ in the direction $L$, if $L$ is based at a point $x$ so that $x$ and $\phi^t(x)$ are outside a small neighborhood of $S_E$; \item $h_{Ec}^t$ depends continuously on $X$. \end{itemize} Analogously we get the notion of \emph{center-unstable cocycles} $\{h_{Fc}^t\}$ by exchanging the roles of $S_{Ec}$ and $S_{Fc}$ in the properties above. Now, similarly to the multisingular hyperbolicity case, we define singular volume partial hyperbolicity. \begin{defi} Let $X$ be a $C^1$-vector field on a closed manifold $M$ and let $U$ be a compact set. We say that $X$ is \emph{singular volume partial hyperbolic in $U$} if: \begin{itemize} \item the extended linear Poincar\'e flow admits a finest dominated splitting $\cN_L=\cN_L^1\oplus\dots\oplus \cN_L^n$ over the set $\widetilde{\La}$; \item the set of singular points in $U$ is the union $$S_{Ec}\cup S_{Fc}\cup S_E\cup S_F\,,$$ of the sets defined above; \item the reparametrized linear Poincar\'e flow $h_{Ec}^t \psi^t_{\cN}$ contracts volume on $\cN_L^1$, where $h_{Ec}^t$ is a center-stable cocycle; \item the reparametrized linear Poincar\'e flow $h_{Fc}^t \psi^t_{\cN}$ expands volume on $\cN_L^n$, where $h_{Fc}^t$ is a center-unstable cocycle. \end{itemize} \end{defi} \subsection{Robustly chain transitive singular sets} We now state the main result. \begin{theo}\label{t.BDP} Let $\cU\subset\cX^1(M)$ be a $C^1$-open set such that, for every $X\in \cU$, there is an open set $U$ of $M$ such that the chain recurrence class $C$ of $X$ in $U$ is robustly chain transitive. Then $X$ is singular volume partial hyperbolic in $U$.
\end{theo} \section{Singular volume partial hyperbolicity} \subsection{Strong stable, strong unstable and center spaces associated to a hyperbolic singularity} Let $X$ be a vector field and $\sigma\in Sing(X)$ be a hyperbolic singular point of $X$. Let $\lambda^s_k<\dots<\lambda^s_2<\lambda^s_1<0<\lambda^u_1<\lambda^u_2<\dots< \lambda^u_l$ be the Lyapunov exponents of $\phi^t$ at $\sigma$ and let $E^s_k\oplus_{_<}\cdots\oplus_{_<} E^s_2\oplus_{_<}E^s_1\oplus_{_<}E^u_1\oplus_{_<}E^u_2\oplus_{_<}\cdots \oplus_{_<}E^u_l$ be the corresponding (finest) dominated splitting over $\sigma$. A subspace $F$ of $T_\sigma M$ is called a \emph{center subspace} if it is of one of the possible forms below: \begin{itemize} \item Either $F=E^s_i\oplus_{_<}\cdots\oplus_{_<} E^s_2\oplus_{_<}E^s_1$, \item Or $F=E^u_1\oplus_{_<}E^u_2\oplus_{_<}\cdots \oplus_{_<}E^u_j$, \item Or else $F=E^s_i\oplus_{_<}\cdots\oplus_{_<} E^s_2\oplus_{_<}E^s_1\oplus_{_<}E^u_1\oplus_{_<}E^u_2\oplus_{_<}\cdots \oplus_{_<}E^u_j$. \end{itemize} A subspace of $T_\sigma M$ is called a \emph{strong stable space}, and we denote it $E^{ss}_{i}(\sigma)$, if there is $i\in\{1,\dots, k\}$ such that: $$E^{ss}_{i}(\sigma)=E^s_k\oplus_{_<}\cdots\oplus_{_<} E^s_{i+1}\oplus_{_<}E^s_i\,.$$ A classical result from hyperbolic dynamics asserts that for any $i$ there is a unique injectively immersed manifold $W^{ss}_i(\sigma)$, called a \emph{strong stable manifold}, tangent to $E^{ss}_i(\sigma)$ at $\sigma$ and invariant by the flow of $X$. We define analogously the \emph{strong unstable spaces} $E^{uu}_j(\sigma)$ and the \emph{strong unstable manifolds} $W^{uu}_j(\sigma)$ for $j=1,\dots ,l$. \subsection{The lifted maximal invariant set and the singular points}\label{ss.centerspace} Let $\La$ be the maximal invariant set of a vector field $X$ in a compact region $U$. We define the \emph{lifted maximal invariant set} as $$\La_{\PP,U}(X)=\overline{\set{<X(x)>\in\PP M \text{ such that } x\in\La}}\,.$$ The lifted maximal invariant set does not vary upper semi-continuously with $X$. Let $\sigma\in Sing(X)\cap U$ be a hyperbolic singularity of $X$ contained in $U$. We are interested in $\La_{\PP,U}(X)\cap \PP_\sigma$. As in \cite{BdL}, the aim of this section is to add to the lifted maximal invariant set $\La_{\PP,U}$ some set over the singular points in order to recover upper semi-continuity properties. \subsubsection{Upper semi-continuity of the lift} We can now consider the smallest set varying upper semi-continuously with the vector field, which was introduced in \cite{GLW}. The set is defined as follows: \begin{defi}\label{d.chinese} Let $U$ be a compact region and $X$ a $C^1$ vector field. Let $\cU$ be a neighborhood of $X$. Then we define $$\widetilde{\La}=\overline{\set{<Y(x)>\in\PP M \text{ such that } x\in U\cap Per(Y) \,\,\text{ and } \,\,Y\in \cU}}\,.$$ \end{defi} When the hypothesis of our problem gives us information about an open set of vector fields, for instance when we are dealing with robustly transitive sets, this lifted set of directions proves very easy to work with. However, when we intend to define a dominated splitting or a partially hyperbolic structure over it, we want to do it over a set that one can detect by looking at only one vector field. Therefore in \cite{BdL} a bigger set is introduced that does not rely on information about the perturbations of the flow and varies upper semi-continuously with the vector field.
\subsection{The extended maximal invariant set} We define the \emph{escaping stable space} $E^{ss}_{\sigma,U}$ as the biggest strong stable space $E^{ss}_j(\sigma)$ such that the corresponding strong stable manifold $W^{ss}_j(\sigma)$ is \emph{escaping}, that is: $$\Lambda_{X,U}\cap W^{ss}_j(\sigma)=\{\sigma\}.$$ We define the \emph{escaping unstable space} analogously. We define the \emph{central space $E^c_{\sigma,U}$ of $\sigma$ in $U$} as the center space such that $$T_\sigma M=E^{ss}_{\sigma,U}\oplus E^c_{\sigma,U}\oplus E^{uu}_{\sigma,U}\,.$$ We denote by $\PP^i_{\sigma,U}$ the projective space of $E^i_{\sigma,U}$, where $i\in\set{ss,uu,c}$. The proofs of all lemmas, propositions and theorems in this subsection can be found in \cite{BdL}. \begin{lemm} The central space $E^c_{\sigma,U}$ is the smallest center space containing $\La_{\PP,U}\cap \PP_\sigma$. \end{lemm} \begin{lemm}\label{l.lower} Let $U$ be a compact region. Let $\sigma$ be a hyperbolic singular point in $U$ that has a continuation $\sigma_Y$ for vector fields $Y$ in a $C^1$-neighborhood of $X$. Then both the escaping strong stable and unstable spaces $E^{ss}_{\sigma_Y,U}$ and $E^{uu}_{\sigma_Y,U}$ depend lower semi-continuously on $Y$. As a consequence, the central space $E^c_{\sigma_Y,U}$ of $\sigma_Y$ in $U$ for $Y$ depends upper semi-continuously on $Y$, and the same happens for its projective space $\PP^c_{\sigma_Y,U}$. \end{lemm} \begin{defi}Let $U$ be a compact region and $X$ a vector field whose singular points are hyperbolic. Then the set $$B(X,U)=\La_{\PP,U}\cup\bigcup_{\sigma\in Sing(X)\cap U} \PP^c_{\sigma,U} \subset \PP M$$ is called the \emph{extended maximal invariant set of $X$ in $U$}. \end{defi} \begin{prop}\label{p.extended} Let $U$ be a compact region and $X$ a vector field whose singular points are hyperbolic. Then the extended maximal invariant set $B(X,U)$ of $X$ in $U$ is a compact subset of $ \PP M$, invariant under the flow $\phi_\PP^t$. Furthermore, the map $X\mapsto B(X,U)$ depends upper semi-continuously on $X$. \end{prop} \subsection{Extending chain recurrence classes} Conley theory asserts that any chain recurrence class $C$ admits a basis of neighborhoods which are nested filtrating neighborhoods $U_{n+1}\subset U_n$ with $C=\bigcap_n U_n$. We define $$\widetilde{\La(C)}=\bigcap_n\widetilde{\La(X,U_n)} \mbox{ and }B(C)=\bigcap_n B(X,U_n).$$ These two sets are independent of the choice of the sequence $U_n$ and $\widetilde{\La(C)}\subset B(C)$. \begin{defi} We say that a chain recurrence class $C$ has a given singular hyperbolic structure if $\widetilde{\La(C)}$ carries this singular hyperbolic structure. \end{defi} If $\sigma\in Sing(X)$ is a hyperbolic singular point we define $E^c_\sigma=\bigcap_n E^c_{\sigma,U_n}$ and we call it the center space of $\sigma$. We denote by $\PP^c_\sigma =\PP E^c_\sigma$ its projective space. \begin{rema}Consider the open and dense set of vector fields whose singular points are all hyperbolic. In this open set the singularities depend continuously on the field.
Then for every singular point $\sigma$, the projective center space $\PP^c_\sigma$ varies upper semi-continuously, and in particular the dimension $\dim E^c_\sigma$ varies upper semi-continuously. As it is a non-negative integer, it is locally minimal and locally constant on an open and dense subset. We will say that such a singular point has a \emph{locally minimal center space}. \end{rema} \subsection{Reparametrizations} Let $\cA=\{A^t(x)\}$ and $\cB=\{B^t(x)\}$ be two linear cocycles on the same linear bundle $\pi\colon \cE\to \La$ and over the same flow $\phi^t$ on a compact invariant set $\La$ of a manifold $M$. We say that $\cB$ is a \emph{reparametrization} of $\cA$ if there is a continuous map $h=\{h^t\}\colon \La\times \RR\to (0,+\infty)$ so that for every $x\in\La$ and $t\in\RR$ one has $$B^t(x)=h^t(x) A^t(x).$$ The reparametrizing map $h^t$ satisfies the cocycle relation $$h^{r+s}(x)=h^r(x)h^s(\phi^r(x)),$$ and is called a \emph{cocycle}. One easily checks the following lemma: \begin{lemm} Let $\cA$ be a linear cocycle and $\cB$ be a reparametrization of $\cA$. Then any dominated splitting for $\cA$ is a dominated splitting for $\cB$. \end{lemm} \begin{rema} \begin{itemize}\item If $h^t$ is a cocycle, then for any $\alpha\in\RR$ the power $(h^t)^\alpha: x\mapsto (h^t(x))^\alpha$ is a cocycle. \item If $f^t$ and $g^t$ are cocycles then $h^t=f^t\cdot g^t$ is a cocycle. \end{itemize} \end{rema} A cocycle $h^t$ is called a \emph{coboundary} if there is a continuous function $h\colon \La\to (0,+\infty)$ so that $$h^t(x)= \frac{h(\phi^t(x))}{h(x)}.$$ A coboundary cocycle is uniformly bounded. Two cocycles $g^t$, $h^t$ are called \emph{cohomological} if $\frac{g^t}{h^t}$ is a coboundary. \begin{rema}\label{r.cohomological} The cohomological relation is an equivalence relation among cocycles and is compatible with the product: if $g^t_1$ and $g^t_2$ are cohomological and $h^t_1$ and $h^t_2$ are cohomological then $g^t_1h^t_1$ and $g^t_2 h^t_2$ are cohomological. \end{rema} \begin{lemm} If $g$ and $h$ are cohomological, then $g\cdot \cA$ is hyperbolic if and only if $h\cdot \cA$ is hyperbolic. \end{lemm} \subsubsection{Reparametrizing cocycle associated to a singular point}\label{ss.reparametrization} Let $X$ be a $C^1$ vector field, $\phi^t$ its flow, and $\sigma$ be a hyperbolic singularity of $X$. We denote by $\Lambda_X\subset\PP M$ the union $$\Lambda_X = \overline{\{\RR X(x), x\notin Sing(X)\}}\cup\bigcup_{x\in Sing(X)} \PP T_x M.$$ It can be shown easily that this set varies upper semi-continuously with $X$, as in the case of $B(X,U)$ (see Proposition \ref{p.extended}). \begin{lemm} $\La_X$ is a compact subset of $\PP M$ invariant under the flow $\phi_\PP^t$, and the map $X\mapsto \La_X$ is upper semi-continuous. Finally, if the singularities of $X$ are hyperbolic then, for any compact region $U$, one has $B(X,U)\subset \La_X$. \end{lemm} Let $U_\sigma$ be a compact neighborhood of $\sigma$ in which $\sigma$ is the maximal invariant set. Let $V_\sigma$ be a compact neighborhood of $Sing(X)\setminus\{\sigma\}$ so that $V_\sigma\cap U_\sigma=\emptyset$.
We fix a ($C^1$) Riemannian metric $\|.\|$ on $M$ so that $$\|X(x)\|=1 \mbox{ for all } x\in M\setminus (U_\sigma\cup V_\sigma).$$ Consider the map $h\colon \Lambda_X\times \RR\to (0,+\infty)$, $h(L,t)=h^t(L)$, defined as follows: \begin{itemize} \item if $L\in \PP T_xM$ with $x\notin U_\sigma$ and $\phi^t(x)\notin U_\sigma$, then $h^t(L)=1$; \item if $L\in \PP T_xM$ with $x\in U_\sigma$ and $\phi^t(x)\notin U_\sigma$ then $L= \RR X(x)$ and $h^t(L)= \frac 1{\|X(x)\|}$; \item if $L\in \PP T_xM$ with $x\notin U_\sigma$ and $\phi^t(x)\in U_\sigma$ then $L= \RR X(x)$ and $h^t(L)= \|X(\phi^t(x))\|$; \item if $L\in \PP T_xM$ with $x\in U_\sigma$ and $\phi^t(x)\in U_\sigma$ but $x\neq \sigma$ then $L= \RR X(x)$ and $h^t(L)= \frac {\|X(\phi^t(x))\|}{\|X(x)\|}$; \item if $L\in \PP T_\sigma M$ then $h^t(L)= \frac {\|D\phi^t(u)\|}{\|u\|}$ where $u$ is a non-zero vector in $L$. \end{itemize} Note that in the case in which $x$ is not the singularity and $x\in U_\sigma$, the map $h$ can be written as in the last item, taking $u=X(x)$. As was proven in \cite{BdL}, the map we just defined is a (continuous) cocycle on $\La_X$ and its cohomology class is independent of the choice of the metric $\|.\|$ and of the neighborhoods. Also from \cite{BdL} we have: \begin{lemm}\label{l.representativecocycle} Consider a vector field $X$ and a hyperbolic singularity $\sigma$ of $X$. Then there is a $C^1$-neighborhood $\cU$ of $X$ so that $\sigma$ has a well-defined hyperbolic continuation $\sigma_Y$ for $Y$ in $\cU$ and for any $Y\in\cU$ there is a map $h_Y\colon \La_Y\times \RR\to (0,+\infty)$ so that \begin{itemize} \item for any $Y$, $h_Y$ is a cocycle belonging to the cohomology class $[h(Y,\sigma_Y)]$; \item $h_Y$ depends continuously on $Y$: if $Y_n\in \cU$ converge to $Z\in \cU$ for the $C^1$-topology and if $L_n\in \La_{Y_n}$ converge to $L\in \La_Z$ then $h^t_{Y_n}(L_n)$ tends to $h^t_Z(L)$ for every $t\in \RR$; furthermore this convergence is uniform in $t\in[-1,1]$. \end{itemize} \end{lemm} \section{Definition of singular volume partial hyperbolicity} Let $X$ be a vector field with a dominated splitting of the linear Poincar\'e flow $$\cN_L=\cN^s\oplus\dots\oplus \cN^u\,,$$ where $\cN^s_L$ and $\cN^u_L$ denote the extremal bundles, over a chain recurrence class $C$. Suppose as well that the set $S$ of the singularities in $C$ contains only hyperbolic singularities. We subdivide $S$ as follows: \begin{itemize}\item the set $S_{Ec}$ of singular points whose escaping stable space has dimension smaller than that of $\cN_L^s$, \item the set $S_{E}$ of singular points whose escaping stable space has dimension greater than or equal to that of $\cN_L^s$, \item the set $S_{Fc}$ of singular points whose escaping unstable space has dimension smaller than that of $\cN_L^u$, \item the set $S_{F}$ of singular points whose escaping unstable space has dimension greater than or equal to that of $\cN_L^u$. \end{itemize} \begin{defi}\label{defisvh}Let $X$ be a $C^1$-vector field on a compact manifold and let $C$ be a chain recurrence class. We denote $S=Sing(X)\cap C$. One says that $X$ is \emph{singular volume partial hyperbolic} on $C$ if \begin{enumerate} \item The restriction of the extended linear Poincar\'e flow $\{\psi_\cN^t\}$ to the extended maximal invariant set $B(C)$ admits a finest dominated splitting $$\cN_L=\cN^s\oplus\dots\oplus \cN^u\,,$$ where $\cN^s_L$ and $\cN^u_L$ denote the extremal bundles, and $n_s$ and $n_u$ their respective dimensions.
\item There is a subset $S_{Ec}\subset S$ so that the reparametrized cocycle $h_{Ec}^t\psi_\cN^t$ contracts volume in restriction to the bundle $\cN^s_L$ over $B(C)$, where $h_{Ec}$ denotes $$h_{Ec}=\Pi_{\sigma\in S_{Ec}} h_\sigma^{n_s}.$$ \item There is a subset $S_{Fc}\subset S$ so that the reparametrized cocycle $h_{Fc}^t\psi_\cN^t$ expands volume in restriction to the bundle $\cN^u_L$ over $B(C)$, where $h_{Fc}$ denotes $$h_{Fc}=\Pi_{\sigma\in S_{Fc}} h_\sigma^{n_u}.$$ \end{enumerate} \end{defi} \begin{rema} If $C$ is a chain recurrence class which is singular volume partial hyperbolic, then $X$ is singular volume partial hyperbolic on a small filtrating neighborhood of $C$. \end{rema} \begin{defi}\label{defieq}Let $X$ be a $C^1$-vector field on a compact manifold and let $\La$ be the maximal invariant set in $U$. We denote $S=Sing(X)\cap U$. One says that $X$ is \emph{singular volume partial hyperbolic over $\widetilde{\La}(X,U)$} if \begin{enumerate} \item The restriction of the extended linear Poincar\'e flow $\{\psi_\cN^t\}$ to the set $\widetilde{\La}(X,U)$ admits a finest dominated splitting $\cN_L=\cN^s\oplus\dots\oplus \cN^u$, where $\cN^s_L$ and $\cN^u_L$ denote the extremal bundles. \item There is a subset $S_{Ec}\subset S$ so that the reparametrized cocycle $h_{Ec}^t\psi_\cN^t$ contracts volume in restriction to the bundle $\cN^s_L$ over $\widetilde{\La}(X,U)$, where $h_{Ec}$ denotes $$h_{Ec}=\Pi_{\sigma\in S_{Ec}} h_\sigma^{n_s}.$$ \item There is a subset $S_{Fc}\subset S$ so that the reparametrized cocycle $h_{Fc}^t\psi_\cN^t$ expands volume in restriction to the bundle $\cN^u_L$ over $\widetilde{\La}(X,U)$, where $h_{Fc}$ denotes $$h_{Fc}=\Pi_{\sigma\in S_{Fc}} h_\sigma^{n_u}.$$ \end{enumerate} \end{defi} \begin{rema}\label{remaeq} If $C$ is a robustly chain transitive class, it follows that there is a neighborhood $U$ of $C$ such that $\widetilde{\La}(X,U)=\widetilde{\La(C)}$. \end{rema} In \cite{BdL} it is shown that the hyperbolic structures of $\widetilde{\La(C)}$ extend to $B(C)$. This is fundamental for our purpose, since in our scenario dealing with $\widetilde{\La(C)}$ is much more convenient. \begin{theo}[BdL]\label{t.nioki} Let $X$ be a vector field on a closed manifold, whose singular points are all hyperbolic and with locally minimal center spaces. Then for every $\sigma\in Sing(X)$, a singular volume hyperbolic structure on $\widetilde{\La(C)}$ extends to $B(C)$. \end{theo} \begin{rema}\label{r.defieq} From Remark \ref{remaeq} and Theorem \ref{t.nioki} we get that if $C$ is robustly chain transitive, then for $X$ under the hypotheses of the theorem, Definitions \ref{defisvh} and \ref{defieq} are in fact equivalent. \end{rema} \section{Proof of the main theorem} We aim now to prove Theorem \ref{t.BDP}. The proof is very similar to the proof in \cite{Ma2}. In fact it is an adaptation to flows of the proof of Theorem 4 in \cite{BDP}. The idea is to argue by contradiction and show that if there is no uniform volume expansion in the extremal bundle, then there is a closed orbit of a sufficiently close vector field that contracts volume in the extremal bundle. This closed orbit could be a periodic orbit or a singularity, but Sections \ref{ss.centerspace} and \ref{s.per} show us that this is not possible. The following lemma is the analogue of Lemma 6.5 from \cite{BDP}, and the proof is analogous.
\begin{lemm}\label{l.med} Let $X\in\cX^1(M)$ be a vector field and $C$ a maximal invariant set in a filtrating neighborhood $U\subset M$. Suppose there is a dominated splitting $E\oplus_{\prec} F$ over $\widetilde{\La}(X,U)$ for the reparametrized linear Poincar\'{e} flow $h^T_{Ec}.\psi^T$. If the Jacobian of $h^T_{Ec}.\psi^T$ restricted to $E$ is not bounded from above by one, then for every $T$ there is an $h^T_{Ec}.\psi^T$-invariant measure $\nu$ such that $$\int \log\abs{J(h^T_{Ec}.\psi^T ,E)}\,d\nu\,\geq0\,.$$ \end{lemm} Now we want to show that if $h^T_{Ec}.\psi^T$ does not contract volume on the most dominated bundle of the finest dominated splitting over $\widetilde{\La}(X,U)$, then the measure $\nu$ from the previous lemma is not supported on the directions that lie over the singularities. The following lemma is a consequence of Corollary \ref{c.repcont}. \begin{lemm}\label{l.sing} Let $\cV\subset\cX^1(M)$ be the set of vector fields $X$ such that \begin{itemize} \item $X$ has a robustly chain transitive class $C$ that is maximal invariant in a filtrating neighborhood $U\subset M$ with a singularity $\sigma$; \item there is a finest dominated splitting $$\cN_L=\cN^s_L\oplus_{\prec}\dots\oplus_{\prec}\cN^u_L$$ over $\widetilde{\La}(X,U)$ for the reparametrized linear Poincar\'{e} flow $h^T_{Ec}.\psi^T$. \end{itemize} Then there is an open and dense subset $\cU\subset\cV$ such that for any $X\in\cU$, any $h^T_{Ec}.\psi^T$-invariant measure $\nu$ supported in $\PP^c_{\sigma}\cap\widetilde{\La}(X,U)$ satisfies $$\int \log\abs{J(h^T_{Ec}.\psi^T,\cN^s_L)}\,d\nu\,<0\,.$$ \end{lemm} \proof Let us start by supposing that $\sigma\in S_E$ and $\dim(\cN^s)\leq \dim E^{ss}$. In this case, we can include $\cN^s\subset T_\sigma M$ as a subspace of $E^{ss}$. Then $\cN^s$ contracts uniformly for the tangent flow and for the extended linear Poincar\'{e} flow. Note that in this case the reparametrized linear Poincar\'{e} flow and the extended linear Poincar\'{e} flow are equal in restriction to $\cN^s$ at the directions over $\sigma$. Suppose now that $\sigma\in S_{Ec}$ and $n_s=\dim(\cN^s)\geq \dim E^{ss}$. Then, given $L\in \PP^c_{\sigma}\cap\widetilde{\La}(X,U)$, there exists a subspace $E=\cN^s\oplus L$ of $T_{\sigma}M$ that contracts volume. The reparametrized linear Poincar\'{e} flow at the directions over $\sigma$ is $h^T_{Ec}.\psi^T (L)$, where $$h^T_{Ec}=\left(\frac{\norm{d\phi^T(u)}}{\norm{u}}\right)^{\frac{1}{n_s}}$$ for a non-vanishing vector $u$ in the direction of $L$. Then $$\abs{J(\psi^T,\cN^s_L)}\left(\frac{\norm{d\phi^T(u)}}{\norm{u}}\right)=\abs{J(d\phi^T,E)}\,.$$ In any case Lemma \ref{l.med} allows us to conclude. \endproof The following lemma is the only missing piece for Theorem \ref{t.BDP} over $\widetilde{\La}(X,U)$. Until now we have from Lemma \ref{l.med} that if there is no volume contraction of $\cN_L^s$, then there is a measure witnessing this lack of contraction. From Lemma \ref{l.sing} we also know that this measure cannot be supported over a singularity. Finally, the next lemma uses the ergodic closing lemma to prove that if a measure were witnessing the lack of contraction, then it would be supported on a singularity, contradicting the previous lemma. So, by contradiction, the following lemma implies the volume contraction of the extremal bundle $\cN^s_L$. For the volume expansion the proof is analogous. Later we extend this structure over $\widetilde{\La}(X,U)$ to $B(C)$ and conclude with the proof of Theorem \ref{t.BDP}.
\begin{lemm}\label{ultimo} Let $X\in\cX^1(M)$ be a vector field, $\La$ a maximal invariant set in a filtrating neighborhood $U\subset M$, and consider the set $\widetilde{\La}(X,U)$. Suppose there is a finest dominated splitting $$\cN_L=\cN^s_L\oplus_{\prec}\dots\oplus_{\prec}\cN^u_L$$ over $\widetilde{\La}(X,U)$ for the reparametrized linear Poincar\'{e} flow $h^T_{Ec}.\psi^T$. If there is an $h^T_{Ec}.\psi^T$-invariant measure $\nu$ such that $$\int \log\abs{J(h^T_{Ec}.\psi^T,\cN^s_L)}\,d\nu\,\geq 0\,,$$ then the measure must be supported on $\bigcup_{\sigma\in Sing(X)\cap U}\PP^c_{\sigma}\cap\widetilde{\La}(X,U)$. \end{lemm} \proof \begin{clai*} Let $\nu_n$ be a measure supported on a periodic orbit $\gamma_n$ with period $\pi(\gamma_n)$ bigger than $T$. Then $\int\log h^{T}_{Ec}\, d\nu_n(x)=0$. \end{clai*} \begin{proof} By the definition of $h_{Ec}$ we have $$\log h^{T}_{Ec}=\log\Pi_{\sigma_i\in S_{Ec}}\left(h^{T}_{\sigma_i}\right)^{n_s}=n_s\sum_{\sigma_i\in S_{Ec}}\log h^{T}_{\sigma_i}\,,$$ so it suffices to prove the claim for a given $h^T_{\sigma_i}$. For every $x$ in $\gamma_n$, since $h^T_{\sigma_i}$ is a multiplicative cocycle we have that: \begin{eqnarray*} \Pi^{(m\pi(\gamma_n)/T)-1}_{j=0}&&h^T_{\sigma_i}(\phi^Y_{jT}(x))=h^{m\pi(\gamma_n)}_{\sigma_i}(x)\,. \end{eqnarray*} The norm of the vector field restricted to $\gamma_n$ is bounded away from zero and infinity, and therefore $h^{m\pi(\gamma_n)}_{\sigma_i}(x)$ is bounded for $m\in\NN$ going to infinity. Then this is also true for $h_{Ec}$. Since $\nu_n$ is an ergodic measure, we have that \begin{eqnarray*} \int\log h^{T}_{\sigma_i}(x)\, d\nu_n(x)&=&\lim_{m\to\infty}\frac{T}{m\pi(\gamma_n)}\sum^{(m\pi(\gamma_n)/T)-1}_{j=0}\log \left(h^{T}_{\sigma_i}(\phi^Y_{jT}(x))\right)\\ &=&\lim_{m\to\infty}\frac{T}{m\pi(\gamma_n)}\log \left(\Pi^{(m\pi(\gamma_n)/T)-1}_{j=0} h^T_{\sigma_i}(\phi^Y_{jT}(x))\right)\\ &=&\lim_{m\to\infty}\frac{T}{m\pi(\gamma_n)}\log\left(h^{m\pi(\gamma_n)}_{\sigma_i}(x)\right)\\ &=&0 \end{eqnarray*} \end{proof} Suppose now that the measure of the statement, which we denote $\mu$, gives weight $0$ to $$\bigcup_{\sigma_i\in Sing(X)}\PP^c_{\sigma_i}\cap\widetilde{\La}(X,U)\,.$$ Then $\mu$ projects on $M$ to an ergodic measure $\nu$ supported on the class $C$ that gives weight $0$ to the singularities, and for which $$\int \log\abs{J(h_{Ec}.\psi^T_{\cN},\cN^s)} d\nu(x)\geq 0 .$$ Recall that $\psi^T $ is the linear Poincar\'{e} flow, and $h^T_{Ec}$ can be defined as a function of $x\in M$ instead of as a function of $L\in\PP M$ outside of an arbitrarily small neighborhood of the singularities. However, the ergodic closing lemma implies that $\nu$ is the weak$*$-limit of measures $\nu_n$ supported on closed orbits $\gamma_n$ which converge for the Hausdorff distance to the support of $\nu$. Therefore, for $n$ large enough, the $\gamma_n$ are contained in $C$ and from Remark \ref{r.percontr} and our previous claim we know that \begin{eqnarray*} \int\log\abs{J(h^T_{Ec}.\psi^T_{\cN},\cN^s)} d\nu_n(x) &=&\int\log\abs{J(\psi^T_{\cN},\cN^s)} d\nu_n(x) \\ \int\log\abs{J(\psi^T_{\cN},\cN^s)} d\nu_n(x) &\leq & -\eta. \end{eqnarray*} Then $$\int\log\abs{J(h^T_{Ec}.\psi^T_{\cN},\cN^s)} d\nu_n(x) \leq -\eta\,.$$ The map $\log\abs{J(h_{Ec}.\psi^T_{\cN},\cN^s)}$ is not continuous. Nevertheless, it is uniformly bounded and the only discontinuity points are the singularities of $X$. These singularities have (by assumption) weight $0$ for $\nu$ and thus admit neighborhoods with arbitrarily small weight. Outside such a neighborhood the map is continuous.
One deduces that $$\int\log\abs{J(h_{Ec}.\psi^T_{\cN},\cN^s)} d\nu(x)=\lim_{n\to\infty}\int\log\abs{J(h_{Ec}.\psi^T_{\cN},\cN^s)} d\nu_n(x)\,,$$ and therefore this integral is strictly negative, contradicting the assumption. \endproof Note that all of this is also valid for the flow in reverse time and for $\cN^u_L$. Lemmas \ref{l.sing} and \ref{ultimo} and their versions for the reverse time imply Theorem \ref{t.BDP} over $\widetilde{\La}(X,U)$. We restate it as the following corollary. \begin{coro}\label{coro1} Let $\cV\subset\cX^1(M)$ be the set of vector fields $X$ having a robustly chain transitive class $C$ that is maximal invariant in a filtrating neighborhood $U\subset M$ with a singularity $\sigma$. Then there is an open and dense subset $\cU\subset\cV$ such that any $X\in\cU$ is singular volume partial hyperbolic over $\widetilde{\La}(X,U)$. \end{coro} Theorem 4 from \cite{BdL} immediately gives the following, which is equivalent to Theorem \ref{t.BDP}. \begin{coro}\label{coro2} Let $\cV\subset\cX^1(M)$ be the set of vector fields $X$ having a robustly chain transitive class $C$ that is maximal invariant in a filtrating neighborhood $U\subset M$ with a singularity $\sigma$. Then there is an open and dense subset $\cU\subset\cV$ such that any $X\in\cU$ is singular volume partial hyperbolic over $B(C)$. \end{coro} \begin{coro} Let $X$ be a vector field with a robustly chain transitive attractor $C$ in an attracting region $U$ such that all singularities in $C$ are hyperbolic. Then $C$ is singular volume partial hyperbolic. Moreover, $C$ has a dominated splitting over $\widetilde{\La}(X,U)$ of the form $\cN_L=E\oplus F$, where $E$ is contracting and $F$ is volume expanding for the reparametrized linear Poincar\'e flow. \end{coro} \proof Since $C$ is an attractor, $C$ has a dominated splitting over $\widetilde{\La}(X,U)$ of the form $\cN_L=E\oplus F$ where $E$ is contracting. Given any singularity $\sigma$ in $C$, the center space $E^c$ projects to $F$ for all directions of $\widetilde{\La}(X,U)$ over $\sigma$. Therefore, by Lemma \ref{l.central}, the finest dominated splitting of $C$ has $F$ as the dominating extremal bundle. Therefore, by Corollary \ref{coro2}, $F$ has to be volume expanding for the reparametrized linear Poincar\'e flow. \endproof \begin{rema} Note that since the example in \cite{BLY} satisfies the conditions of Corollary \ref{coro2}, it is singular volume partial hyperbolic. \end{rema} \section{Basic definitions and preliminaries} \subsection{Chain recurrence classes and filtrating neighborhoods} The following notions and theorems are due to Conley \cite{Co} and they can be found in several other references (for example \cite{AN}). \begin{itemize} \item We say that a pair of sequences $\set{x_i}_{0\leq i\leq k}$ and $\set{t_i}_{0\leq i\leq k-1}$, $k\geq 1$, is an \emph{$\varepsilon$-pseudo orbit from $x_0$ to $x_k$} for a flow $\phi$, if for every $0\leq i \leq k-1$ one has $$ t_i\geq 1 \mbox{ and } d(x_{i+1},\phi^{t_i}(x_i))<\varepsilon.$$ \item A compact invariant set $\Lambda$ is called \emph{chain transitive} if for any $\varepsilon > 0$ and for any $x, y \in\Lambda$ there is an $\varepsilon$-pseudo orbit from $x$ to $y$. \item We say that $x, y \in M$ are chain related if, for every $\varepsilon>0$, there are $\varepsilon$-pseudo orbits from $x$ to $y$ and from $y$ to $x$. This is an equivalence relation. \item We say that $x\in M$ is \emph{chain recurrent} if for every $\varepsilon>0$, there is an $\varepsilon$-pseudo orbit from $x$ to $x$.
We call the set of chain recurrent points the \emph{chain recurrent set} and we denote it $\mathfrak{R}(M)$. The equivalence classes of this equivalence relation are called \emph{chain recurrence classes}. \end{itemize} \begin{defi} \begin{itemize} \item An \emph{attracting region} (also called \emph{trapping region} by some authors) is a compact set $U$ so that $\phi^t(U)$ is contained in the interior of $U$ for every $t>0$. The maximal invariant set in an attracting region is called an \emph{attracting set}. A repelling region is an attracting region for $-X$, and the maximal invariant set is called a repeller. \item A \emph{filtrating region} is the intersection of an attracting region with a repelling region. \item Let $C$ be a chain recurrence class of $M$ for the flow $\phi$. A \emph{filtrating neighborhood} of $C$ is a (compact) neighborhood which is a filtrating region. \end{itemize} \end{defi} The following is a corollary of the fundamental theorem of dynamical systems \cite{Co}. \begin{coro}\cite{Co} Let $X$ be a $C^1$-vector field on a compact manifold $M$. Every chain recurrence class $C$ of $X$ admits a basis of filtrating neighborhoods, that is, every neighborhood of $C$ contains a filtrating neighborhood of $C$. \end{coro} \begin{defi} Let $C$ be a chain recurrence class of $M$ for the vector field $X$, and suppose there is a filtrating neighborhood $U$ such that $C$ is the maximal invariant set in $U$. We say that $C$ is \emph{robustly chain transitive} if there is a $C^1$ neighborhood $\mathcal{U}$ of $X$ such that for every $Y\in\mathcal{U}$, the maximal invariant set $C_Y$ of $Y$ in $U$ is a unique chain class. \end{defi} \begin{defi} Let $C$ be a robustly chain transitive class of $M$ for the vector field $X$. We say that $C$ is \emph{robustly transitive} if there is a $C^1$ neighborhood $\mathcal{U}$ of $X$ such that for every $Y\in\mathcal{U}$ there is an orbit of $Y$ which is dense in $C_Y$. \end{defi} \subsection{Linear cocycle} Let $\phi=\{\phi^t\}_{t\in\RR}$ be a topological flow on a compact metric space $K$. A \emph{linear cocycle over $(K,\phi)$} is a continuous map $A\colon E\times \RR\to E$ defined by $$A^t(x, v) = (\phi^t(x), A_t(x)v)\,,$$ where \begin{itemize} \item $\pi\colon E\to K$ is a $d$-dimensional linear bundle over $K$; \item $A_t(x)\in GL(E_x,E_{\phi^t(x)})$ depends continuously on $(x,t)\in K\times \RR$ and satisfies the \emph{cocycle relation}: $$A_{t+s}(x)=A_t(\phi^s(x))A_s(x),\quad \mbox{ for any } x\in K \mbox{ and } t,s\in\RR\,. $$ \end{itemize} Note that $\cA=\{A^t\}_{t\in\RR}$ is a flow on the space $E$ which projects on $\phi^t$. $$\begin{array}[c]{ccc} E &\stackrel{A^t}{\longrightarrow}&E\\ \downarrow&&\downarrow\\ K&\stackrel{\phi^t}{\longrightarrow}&K \end{array}$$ If $\Lambda\subset K$ is a $\phi$-invariant subset, then $\pi^{-1}(\Lambda)\subset E$ is $\cA$-invariant, and we call \emph{the restriction of $\cA$ to $\Lambda$} the restriction of $\{A^t\}$ to $\pi^{-1}(\Lambda)$. \subsection{Hyperbolic structures and dominated splitting on linear cocycles} In this section we give a rough presentation of some of the hyperbolic structures over cocycles. \begin{defi} Let $\phi$ be a topological flow on a compact metric space $\La$. We consider a vector bundle $\pi\colon E\to \La$ and a linear cocycle $\cA=\{ A^t\}$ over $(\La,\phi)$.
We say that $\cA$ admits a \emph{dominated splitting over $\Lambda$} if \begin{itemize}\item there exists a splitting $E=E^1\oplus\dots\oplus E^k$ over $\Lambda$ into $k$ sub-bundles, \item the dimension of the sub-bundles is constant, i.e. $\dim(E^i_x)=\dim(E^i_y)$ for all $x,y\in\Lambda$ and $i\in \set{1\dots k}$, \item the splitting is invariant, i.e. $A^t(x)(E^i_x)=E^i_{\phi^t(x)}$ for all $i\in \{1\dots k\}$, \item there exists a $t>0$ such that for every $x\in \Lambda$ and any pair of non-vanishing vectors $u\in E^i_x$ and $v\in E^j_x$, $i<j$, one has \begin{equation}\label{e.dom} \frac{\norm{A^t(u)}}{\norm u}\leq \frac{1}{2}\frac{\norm{ A^t(v)}}{\norm v} \end{equation} We denote the splitting by $E^1\oplus_{_\prec}\dots\oplus_{_\prec} E^k$ and say that it is \emph{$t$-dominated}. \end{itemize} \end{defi} A classical result (see for instance \cite[Appendix B]{BDV}) asserts that the bundles of a dominated splitting vary continuously with the vector field in the $\cC^1$ topology. A given cocycle may admit several dominated splittings. However, the dominated splitting is unique if one prescribes the dimensions $\dim(E^i)$. We can also consider, in a metric space $K$, an invariant subspace $\La$ of $K$ that is not compact. In this case we require the norm of $\cA$ to be bounded. Note that the dominated splitting defined as above is uniform with respect to the point. This is particularly important when we consider a dominated splitting over a set that is not compact. One says that one of the bundles $E^i$ is \emph{volume contracting} (resp. \emph{volume expanding}) if there is $t>0$ so that one has $$\abs{\det(A^t\mid_{E^i})}<\frac 12$$ (resp. $\abs{\det(A^{-t}\mid_{E^i})}<\frac 12$). \begin{defi} We say that the linear cocycle $\cA$ is \emph{volume partial hyperbolic over $\Lambda$} if the finest dominated splitting $E=E^1\oplus_{_\prec} \dots \oplus_{_\prec} E^l$ over $\Lambda$ is such that the extremal bundles $E^1$ and $E^l$ are, respectively, volume contracting and volume expanding. \end{defi} \subsection{Linear Poincar\'e flow} Let $X$ be a $C^1$ vector field on a compact manifold $M$. We denote by $\phi^t$ the flow of $X$. \begin{defi} The \emph{normal bundle} of $X$ is the vector bundle $N_X $ over $M\setminus Sing(X)$ defined as follows: the fiber $N_X(x)$ of $x\in M\setminus Sing(X)$ is the quotient space of $T_xM$ by the vector line $\RR.X(x)$. \end{defi} Note that, if $M$ is endowed with a Riemannian metric, then $N_X(x)$ is canonically identified with the orthogonal space of $X(x)$: $$N_X=\{(x,v)\in TM, v\perp X(x)\} $$ Consider $x\in M\setminus Sing(X)$ and $t\in \RR$. Thus $D\phi^t(x):T_xM\to T_{\phi^t(x)}M$ is a linear isomorphism mapping $X(x)$ onto $X(\phi^t(x))$. Therefore $D\phi^t(x)$ passes to the quotient as a linear isomorphism $\psi^t(x)\colon N_X(x)\to N_X(\phi^t(x))$: $$\begin{array}[c]{ccc} T_xM&\stackrel{D\phi^t}{\longrightarrow}&T_{\phi^t(x)}M\\ \downarrow&&\downarrow\\ N_X(x)&\stackrel{\psi^t}{\longrightarrow}&N_X(\phi^t(x)) \end{array}$$ where the vertical arrows are the canonical projections of the tangent space onto the normal space parallel to $X$. \begin{prop}\label{p.lpf} Let $X$ be a $C^1$ vector field on a manifold and $\La$ be a compact invariant set of $X$. Assume that $\La$ does not contain any singularity of $X$. Then $\La$ is hyperbolic if and only if the linear Poincar\'e flow over $\La$ is hyperbolic.
\end{prop} Notice that the notion of dominated splitting for non-singular flows is sometimes better expressed in terms of the linear Poincar\'e flow: for instance, the linear Poincar\'e flow of a robustly transitive vector field always admits a dominated splitting, while the flow itself may not admit any dominated splitting (see for instance \cite{V} and \cite{D}). \subsection{Extended linear Poincar\'e flow}\label{ss.Poincare} We are dealing with singular flows, and the linear Poincar\'e flow is not defined at the singularities of the vector field $X$. However, we can include the linear Poincar\'e flow in a flow defined on a larger set, called the \emph{extended linear Poincar\'e flow}, introduced in \cite{GLW}. This flow is a linear cocycle defined on a linear bundle over a manifold, which we define now. \begin{defi}Let $M$ be a manifold of dimension $d$. \begin{itemize} \item We call \emph{the projective tangent bundle of $M$}, and denote by $\Pi_\PP\colon \PP M\to M$, the fiber bundle whose fiber $\PP_x$ is the projective space of the tangent space $T_xM$: in other words, a point $L_x\in \PP_x$ is a $1$-dimensional vector subspace of $T_xM$. \item We call \emph{the tautological bundle of $\PP M$}, and we denote by $\Pi_\cT\colon \cT M\to \PP M$, the $1$-dimensional vector bundle over $\PP M$ whose fiber $\cT_{L}$, $L\in \PP M$, is the line $L$ itself. \item We call the \emph{normal bundle of $\PP M$}, and we denote by $\Pi_\cN\colon \cN M\to \PP M$, the $(d-1)$-dimensional vector bundle over $\PP M$ whose fiber $\cN_{L}$ over $L\in \PP_x$ is the quotient space $T_x M/L$. If we endow $M$ with a Riemannian metric, then $\cN_L$ is identified with the orthogonal hyperplane of $L$ in $T_xM$. \end{itemize} \end{defi} Let $X$ be a $C^r$ vector field on a compact manifold $M$, and $\phi^t$ its flow. The natural actions of the derivative of $\phi^t$ on $\PP M$ and $\cN M$ define $C^{r-1}$ flows on these manifolds. More precisely, for any $t\in \RR$, \begin{itemize} \item We denote by $\phi_{\PP}^t\colon\PP M\to \PP M$ the $C^{r-1}$ diffeomorphism defined by $$\phi_{\PP}^t(L_x)= D\phi^t(L_x)\in \PP_{\phi^t(x)}.$$ \item We denote by $\psi_{\cN}^t\colon\cN M\to \cN M$ the $C^{r-1}$ diffeomorphism whose restriction to a fiber $\cN_L$, $L\in \PP_x$, is the linear isomorphism onto $\cN_{\phi^t_{\PP}(L)}$ defined as follows: $D\phi^t(x)$ is a linear isomorphism from $T_xM$ to $T_{\phi^t(x)}M$, which maps the line $\cT_L\subset T_xM$ onto the line $\cT_{\phi^t_{\PP}(L)}$. Therefore it passes to the quotient as the announced linear isomorphism. $$\begin{array}[c]{ccc} T_xM &\stackrel{D\phi^t}{\longrightarrow}&T_{\phi^t(x)}M\\ \downarrow&&\downarrow\\ \cN_L&\stackrel{\psi^t_{\cN}}{\longrightarrow}&\cN_{\phi^t_{\PP}(L)} \end{array}$$ \end{itemize} Note that $\phi^t_\PP$, $t\in \RR$, defines a flow on $\PP M$ which is a cocycle over $\phi^t$ whose action on the fibers is by projective maps. The one-parameter family $\psi^t_\cN$ defines a flow on $\cN M$, which is a linear cocycle over $\phi^t_\PP$. We call $\psi^t_\cN$ the \emph{extended linear Poincar\'e flow}.
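
For concreteness, the fiberwise action of $\psi^t_{\cN}$ is simply the tangent map followed by projection parallel to the image line; identifying $\cN_L$ with $L^{\perp}$, the construction can be sketched in a few lines of Python. The sketch below is purely illustrative (the function name, the matrix and the vectors are our own choices, not taken from any implementation in the references):
\begin{verbatim}
import numpy as np

def extended_lpf(Dphi, L, w):
    """Image of a normal vector w in N_L under the extended linear Poincare flow.

    Dphi : matrix of the tangent map D(phi^t) at the base point x
    L    : non-zero vector spanning the line L in P_x
    w    : representative of a class in N_L = T_x M / L, taken in L-orthogonal form

    Returns the representative of psi^t_N(w), orthogonal to the image line D(phi^t)L.
    """
    Lt = Dphi @ L                    # the image line phi_P^t(L)
    wt = Dphi @ w                    # push w forward by the tangent map
    # quotient by the line Lt, realized as projection onto its orthogonal complement
    return wt - (np.dot(wt, Lt) / np.dot(Lt, Lt)) * Lt

# toy example: a linear time-one map and a line which is not an eigendirection
Dphi = np.array([[2.0, 1.0], [0.0, 0.5]])
L = np.array([1.0, 1.0])
w = np.array([1.0, -1.0])            # orthogonal to L, so it represents a class in N_L
print(extended_lpf(Dphi, L, w))
\end{verbatim}
At a non-singular point and for $L=\RR X(x)$ this reduces to the usual linear Poincar\'e flow, which is the content of the remark below.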
We can summarize by the following diagrams: $$\begin{array}[c]{ccc} \cN M&\stackrel{\psi^t_{\cN}}{\longrightarrow}&\cN M\\ \downarrow&&\downarrow\\ \PP M &\stackrel{\phi_{\PP}^t}{\longrightarrow}&\PP M\\ \downarrow&&\downarrow\\ M&\stackrel{\phi^t}{\longrightarrow}&M \end{array}$$ \begin{rema} The extended linear Poincar\'e flow is really an extension of the linear Poincar\'e flow defined in the previous section; more precisely, let $S_X\colon M\setminus Sing(X)\to \PP M$ be the section of the projective bundle defined by letting $S_X(x)$ be the line $\langle X(x)\rangle\in \PP_x$ generated by $X(x)$. Then $N_X(x)= \cN_{S_X(x)}$, and the linear maps $\psi^t\colon N_X(x)\to N_X(\phi^t(x))$ and $\psi^t_\cN\colon \cN_{S_X(x)}\to \cN_{S_X(\phi^t(x))}$ coincide. \end{rema} \subsection{Some classical theorems} We now present some results that allow us a better control of the size of the invariant manifolds near singularities. For this we need the definition of $(\eta,T,E)^*$ contracting orbit arcs. \begin{defi} Let $\phi^t$ be the flow induced by $X\in \cX^1(M)$, $\La$ a compact invariant set of $\phi^t$, and $E$ an invariant bundle of the linear Poincar\'e flow $\psi^t$ over $\La\setminus Sing(X)$. For $\eta >0$ and $T>0$, $x\in \La\setminus Sing(X)$ is called \emph{$(\eta,T,E)^*$ contracting} if for any $n\in \NN$, \[ \prod_{i = 0}^{n-1} \left\|\psi^T|_{E(\phi^{iT}(x))}\right\|\leq e^{-n\eta}.\] Similarly, $x\in \La\setminus Sing(X)$ is called $(\eta,T,F)^*$ expanding if it is $(\eta,T,F)^*$ contracting for $-X$. \end{defi} To find the $(\eta,T,E)^*$ contracting orbit arcs, one needs the classical result due to V. Pliss: \begin{lemm}\label{Pliss}\cite{P2}(Pliss lemma) Given a number $A$, let $\{a_1,\cdots,a_n\}$ be a sequence of numbers which are bounded from above by $A$. Assume that there exists a number $\xi<A$ such that $\sum_{i=1}^n a_i\geq n\cdot\xi$. Then for any $\xi^{\prime}<\xi$ there exist $l$ integers $1\leq t_1<\cdots< t_l\leq n$ such that $$\frac{1}{t_j-k}\sum_{i=k+1}^{t_j}a_i\geq \xi^{\prime}\,,\quad \textrm{for any $j=1, \cdots,l$ and any integer $k=0,\cdots, t_j-1$.}$$ Moreover, one has the estimate $\frac l n\geq \frac{\xi-\xi^{\prime}}{A-\xi^{\prime}}.$ \end{lemm} \begin{rema}\label{plisspoint} From \ref{2.7} and Lemma \ref{Pliss} we have that if the periodic orbits of a flow are volume contracting in some bundle $E$ over the period, we can always find $(\eta,T,E)^*$ volume contracting points. \end{rema}
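
To make the combinatorial content of the Pliss lemma concrete, the following minimal Python sketch (purely illustrative; the random sequence and thresholds are our own arbitrary choices) locates the times $t_j$ at which every backward average exceeds $\xi'$, and compares their density with the lower bound of the lemma:
\begin{verbatim}
import numpy as np

def pliss_times(a, xi_prime):
    """1-based indices t such that the average of a[k+1..t] is >= xi_prime
    for every k = 0, ..., t-1, exactly as in the Pliss lemma."""
    times = []
    for t in range(1, len(a) + 1):
        # check all backward averages ending at position t
        if all(np.mean(a[k:t]) >= xi_prime for k in range(t)):
            times.append(t)
    return times

rng = np.random.default_rng(0)
a = rng.uniform(-1.0, 1.0, size=200)   # sequence bounded above by A = 1
xi = a.mean()                          # its average plays the role of xi (xi < A)
xi_prime = xi - 0.2                    # any xi' < xi works
t = pliss_times(a, xi_prime)
# observed density of Pliss times versus the guaranteed lower bound (xi-xi')/(A-xi')
print(len(t) / len(a), (xi - xi_prime) / (1.0 - xi_prime))
\end{verbatim}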
\begin{lemm}[Connecting lemma]\label{l.contecting} \cite{BC} Let $\phi^t$ be the flow induced by a vector field $X\in \mathcal{X}^1(M)$ such that all periodic orbits of $X$ are hyperbolic. For any $C^1$ neighborhood $\cU$ of $X$ and $x,y\in M$, if $y$ is chain attainable from $x$, then there exist $Y\in \cU$ and $t>0$ such that $\phi^Y_t(x)= y$. Moreover, the following holds: for any $k\geq 1$, let $\{x_{i,k},t_{i,k}\}_{i=0}^{n_k}$ be a $(1/k,T)$-pseudo orbit from $x$ to $y$ and denote by $$\Delta_k = \bigcup_{i=0}^{n_k-1} \phi_{[0,t_{i,k}]}(x_{i,k}).$$ Let $\Delta$ be the upper Hausdorff limit of $\Delta_k$. Then for any neighborhood $U$ of $\Delta$, there exist $Y\in \cU$ with $Y = X$ on $M\setminus U$ and $t>0$ such that $\phi^Y_t(x) = y$. \end{lemm} For a generic vector field $X\in \mathcal{X}^1(M)$ we have: \begin{theo}\label{ConConLem} \cite{C} There exists a generic set $G_{approx}\subset \mathcal{X}^1(M)$ such that for every $X\in G_{approx}$ and every chain recurrence class $C$ there exists a sequence of periodic orbits $\gamma_n$ which converges to $C$ in the Hausdorff topology. \end{theo} The following theorem by Ma\~{n}\'{e} was first introduced in \cite{Ma} and it was used in \cite{Ma2} to prove the stability conjecture. The idea behind this theorem is that the lack of hyperbolicity in a set can be detected by the lack of hyperbolicity in a periodic orbit of a $C^1$-close system. \begin{defi} Let $f$ be a diffeomorphism of a compact manifold $M$ with a Riemannian metric $d$. A point $x$ is \emph{well closable} if for every $\epsilon>0$ there are a diffeomorphism $g$ that is $\epsilon$-$C^1$-close to $f$ and a periodic point $y$ for $g$ with period $T_y$, such that $$d(f^i(x),g^i(y))<\epsilon \text{ for all }\,0\leq i\leq T_y\,.$$ We denote the set of well closable points of $f$ by $\cW(f)$. \end{defi} \begin{theo*}[\emph{Ergodic closing lemma}] Let $f$ be a diffeomorphism and $\mu$ an $f$-invariant probability measure. Then $\mu$-almost every point is well closable, that is, $$\mu(\cW(f))=1\,.$$ \end{theo*} The version of this theorem for flows is almost the same, with one exception: the well closable points might be closed by a singularity. \begin{defi} Let $X$ be a vector field on a compact manifold $M$ with a Riemannian metric $d$, and $\phi$ its associated flow. A point $x$ is \emph{well closable} if for every $\epsilon>0$ there are a vector field $Y$ that is $\epsilon$-$C^1$-close to $X$ and a point $y$ on a critical element (a periodic orbit or a singularity) of $Y$ with period $T_y$, such that $$d(\phi^X_t(x),\phi^Y_t(y))<\epsilon \text{ for all }\,0\leq t\leq T_y\,.$$ We denote the set of well closable points of $X$ by $\cW(X)$. \end{defi} \begin{theo*}[\emph{Ergodic closing lemma for flows}]\label{l.ergodic} Let $X$ be a vector field on a compact manifold $M$ with a Riemannian metric $d$. For every $T>0$ and every $\phi_T$-invariant probability measure $\mu$, $\mu$-almost every point is well closable. That is, $$\mu(\cW(X))=1\,.$$ \end{theo*}
\section{Introduction} Magnetic fields play an important role in determining the evolution of the matter in many astrophysical objects. In highly conducting plasma, the magnetic field can be amplified by gas contraction or shear motion. Even when the magnetic field is weak initially, the magnetic field can grow rapidly and influence the gas dynamics of the system. This is particularly important for the high-energy astrophysical phenomena related to strongly magnetized relativistic plasmas associated with objects such as active galactic nuclei (AGNs) (e.g., Urry \& Padovani 1995), relativistic jets (e.g., Mirabel \& Rodr\'{i}guez 1999; Blandford 2002), pulsar winds (e.g., Gaensler \& Slane 2006; Kirk et al. 2009), gamma-ray bursts (Zhang \& M\'{e}sz\'{a}ros 2004; Piran 2005; M\'{e}sz\'{a}ros 2006), and magnetars (e.g., Woods \& Thompson 2006; Mereghetti 2008). The ideal magnetohydrodynamic (MHD) approximation is a good description of the global properties and dynamics of such systems well into their nonlinear regimes. In this limit the electrical resistivity $\eta=1/\sigma$ vanishes (infinite electrical conductivity). In this framework, many multidimensional ideal relativistic MHD (RMHD) codes have been developed to investigate relativistic astrophysical phenomena including fully non-linear regimes (e.g., Komissarov 1999; Koide et al. 1999; Komissarov 2001; Koldoba et al. 2002; Del Zanna et al. 2003; Leismann et al. 2005; Gammie et al. 2003; De Villiers \& Hawley 2003; Anninos et al. 2005; Duez et al. 2005; Shibata \& Sekiguchi 2005; Ant\'{o}n et al. 2006; Mignone \& Bodo 2006; Mizuno et al. 2006; Neilson et al. 2006; Del Zanna et al. 2007; Giacomazzo \& Rezzolla 2007; Farris et al. 2008; Mignone et al. 2009; Beckwith \& Stone 2011; Inoue et al. 2011). The ideal MHD limit provides a convenient form for solving the equations of RMHD and is also an excellent approximation for many relativistic astrophysical phenomena. However, in extreme cases such as binary mergers (the merger of two neutron stars or of a neutron star with a black hole) (e.g. Shibata \& Taniguchi 2011; Faber \& Rasio 2012) or the central engine of long GRBs (collapsar) (e.g., MacFadyen \& Woosley 1999) the electrical conductivity can be small, and regions of high resistivity may appear. Quite often numerical simulations using ideal RMHD exhibit violent magnetic reconnection. The magnetic reconnection observed in ideal RMHD simulations is due to purely numerical resistivity, occurs as a result of truncation errors, and hence fully depends on details of the numerical scheme and resolution. Magnetic reconnection is one of the most important phenomena in astrophysics. It is highly dynamic, and it converts magnetic energy into fluid energy. The magnetic reconnection process has been invoked to explain flaring events (e.g., Lyutikov 2006; Giannios et al. 2009) and magnetic annihilation (Coroniti 1990; Lyubarsky \& Kirk 2001) in relativistic plasmas. Therefore, numerical codes that solve the resistive RMHD (RRMHD) equations and allow control of magnetic reconnection according to a physical model of resistivity are highly desirable. Numerical simulation using the ideal RMHD equations is considerably easier than using the RRMHD equations because the latter form a mixed hyperbolic system with stiff relaxation terms.
The pioneering work on resistive RMHD by Komissarov (2007) computed the numerical fluxes with the Harten-Lax-van Leer (HLL) approximate Riemann solver and treated the stiff relaxation terms with Strang's splitting technique. More recently, Palenzuela et al. (2009) have proposed a numerical method that solves the stiff relaxation terms in the equations by an implicit-explicit (IMEX) Runge-Kutta method, and Dionysopoulou et al. (2012) have extended the work of Palenzuela et al. (2009) to 3D. A different approach has been taken by Dumbser \& Zanotti (2009), who have applied the high order $P_{N}P_{M}$ scheme to solving the resistive RMHD equations, and also by Takamoto \& Inoue (2011), who have used the method of characteristics to solve the Maxwell equations accurately. Even more recently, a 3+1 resistive general relativistic MHD (GRMHD) code using mean-field dynamo closure has been developed by Bucciantini \& Del Zanna (2012). Plasma in the relativistic regime can have three major characteristics: (1) relativistic fluid velocity (kinetic energy much greater than rest-mass energy), (2) relativistic temperature (internal energy much greater than rest-mass energy), or (3) relativistic Alfv\'{e}n speed (magnetic energy much greater than rest-mass energy). The second characteristic, relativistic temperature, brings us to the issue of the equation of state (EoS) of the plasma. The EoS most commonly used in RMHD simulations is designed for plasmas with constant specific heat ratio (the so-called ideal EoS). However, this ideal EoS is valid only for plasmas with either ultra-relativistic temperature or non-relativistic temperature. The theory of relativistic perfect gases (Synge 1957) has shown that the specific heat ratio cannot be constant if consistency with kinetic theory is required. However, the exact EoS involves modified Bessel functions and is too complicated to be efficiently implemented in numerical codes. To get around this problem Mignone et al. (2005) introduced an approximate EoS given by a simple analytical formulation in the context of relativistic non-magnetized flows. This approximate EoS was applied in the context of relativistic MHD by Mignone \& McKinney (2007). A different EoS approximation than that proposed by Mignone et al. (2005) has been proposed by Ryu et al. (2006). Clearly, determining the effect of the choice of approximate EoS is important for further advances in RRMHD. In this paper, we present the development of a new resistive RMHD simulation code including different realistic EoS approximations such as those proposed by Mignone et al. (2005) or by Ryu et al. (2006). This new RRMHD code is based on the ideal RMHD code RAISHIN (Mizuno et al. 2006; 2011) which uses a Godunov-type scheme to solve the conservation equations of ideal RMHD. In particular, we apply this new code to study the role of the EoS in the resistive RMHD regime. We describe the basic equations of resistive RMHD in \S 2, three different equations of state are investigated in \S 3, and the numerical methods are described in \S 4. The various numerical tests in one-dimension and multi-dimensions are presented in \S 5. In \S 6 we conclude.
\section{Basic Equations of Resistive Relativistic MHD} We consider $n^{\alpha}$ to be the time-like translational Killing vector field in a flat (Minkowski) space-time, so $n^{\alpha}=(-1,0,0,0)$, where we use Greek letters that take values from 0 to 3 for the indices of 4D space-time tensors, while Roman letters take values from 1 to 3 for the indices of 3D spatial tensors. We use the speed of light $c=1$ and Lorentz-Heaviside notation for electromagnetic quantities, so that all $\sqrt{4 \pi}$ factors disappear. The total energy-momentum tensor $T^{\alpha \beta}$ describing a perfect fluid coupled to an electromagnetic field is defined as \begin{equation} T^{\alpha \beta}= T^{\alpha \beta}_{fluid} + T^{\alpha \beta}_{EM}. \end{equation} The first term is due to matter: \begin{equation} T^{\alpha \beta}_{fluid}= \rho hu^{\alpha}u^{\beta}+pg^{\alpha \beta}, \end{equation} where $u^{\alpha}$ is the fluid four-velocity, while $h$ $(=1+ \epsilon + p/\rho)$, $\rho$, $p$ and $\epsilon$ are the enthalpy, the proper rest mass density, the gas pressure, and the specific internal energy as measured in the fluid rest frame. The second term comes from the electromagnetic field: \begin{equation} T^{\alpha \beta}_{EM} = F^{\alpha \mu} F_{\mu}^{\beta} - {1 \over 4} (F^{\mu \nu}F_{\mu \nu}) g^{\alpha \beta}, \end{equation} where $F^{\alpha \beta}$ and its dual $^*F^{\alpha \beta}$ are the Maxwell and Faraday tensors of the electromagnetic field given by \begin{eqnarray} && F^{\alpha \beta} = n^{\alpha} E^{\beta} - n^{\beta} E^{\alpha} + n_{\nu} e^{\nu \alpha \beta \mu} B_{\mu}, \\ && ^*F^{\alpha \beta} = n^{\alpha} B^{\beta} - n^{\beta} B^{\alpha} + n_{\nu} e^{\nu \alpha \beta \mu} E_{\mu}. \end{eqnarray} $E^{\alpha}$ and $B^{\alpha}$ are the electric and magnetic fields as measured by an observer moving along any time-like vector $n^{\alpha}$, while $e^{\alpha \beta \mu \nu} = \sqrt{-g} \epsilon_{\alpha \beta \mu \nu}$ is the Levi-Civita alternating tensor of space-time and $\epsilon_{\alpha \beta \mu \nu}$ is the four-dimensional Levi-Civita symbol. In a global inertial frame with a time-independent coordinate grid, the full system of Euler and Maxwell equations is \begin{eqnarray} && \partial_{t} D + \nabla \cdot D\vec{v} =0, \\ && \partial_{t} \vec{m} + \nabla \cdot \vec{\Pi} = 0, \\ && \partial_{t} \tau + \nabla \cdot \vec{Y} =0, \\ && \partial_{t} \vec{E} - \nabla \times \vec{B} + \nabla \Psi = -\vec{j}, \\ && \partial_{t} \vec{B} + \nabla \times \vec{E} + \nabla \Phi = \vec{0}, \\ && \partial_{t} \Psi + \nabla \cdot \vec{E} = q - \kappa \Psi, \\ && \partial_{t} \Phi + \nabla \cdot \vec{B} = -\kappa \Phi, \\ && \partial_{t} q + \nabla \cdot \vec{j} = 0, \end{eqnarray} where $\vec{j}$ is the spatial current vector, $q$ is the charge density, $\kappa$ is the damping rate parameter, and the conserved variables \begin{eqnarray} D &=& \rho \gamma, \\ \vec{m} &=& \rho h \gamma^{2} \vec{v} + \vec{E} \times \vec{B}, \\ \tau &=& \rho h \gamma^{2} - p + {1 \over 2}(E^{2} + B^{2}) \end{eqnarray} express the relativistic mass density, the momentum density, and the total energy density. Here, $\vec{v}$ is the velocity measured by an inertial observer and $\gamma \equiv 1/\sqrt{1-v^{2}}$ is the Lorentz factor.
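
As an illustration of Eqs. (14)-(16), the map from the primitive variables $(\rho, \vec{v}, p, \vec{E}, \vec{B})$ to the conserved variables $(D, \vec{m}, \tau)$ can be written in a few lines. The following Python sketch is purely schematic (the function names are our own, and the specific enthalpy $h(\rho,p)$ must be supplied by one of the EoSs of Section 3); it is not part of the RAISHIN-based code described in this paper:
\begin{verbatim}
import numpy as np

def prim_to_cons(rho, v, p, E, B, enthalpy):
    """Conserved variables (D, m, tau) of Eqs. (14)-(16) from primitive variables.

    rho, p   : rest-mass density and gas pressure (fluid frame)
    v, E, B  : 3-vectors of velocity, electric and magnetic field (lab frame), c = 1
    enthalpy : callable h(rho, p) provided by the chosen equation of state
    """
    v, E, B = map(np.asarray, (v, E, B))
    lorentz = 1.0 / np.sqrt(1.0 - np.dot(v, v))        # gamma = 1/sqrt(1 - v^2)
    w = rho * enthalpy(rho, p) * lorentz**2            # rho h gamma^2
    D = rho * lorentz                                  # Eq. (14)
    m = w * v + np.cross(E, B)                         # Eq. (15)
    tau = w - p + 0.5 * (np.dot(E, E) + np.dot(B, B))  # Eq. (16)
    return D, m, tau

# example with the constant Gamma-law EoS, h = 1 + Gamma/(Gamma - 1) * p/rho
ideal_h = lambda rho, p, Gamma=4.0/3.0: 1.0 + Gamma / (Gamma - 1.0) * p / rho
print(prim_to_cons(1.0, [0.3, 0.0, 0.0], 0.1,
                   [0.0, 0.0, 0.0], [0.0, 1.0, 0.0], ideal_h))
\end{verbatim}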
The energy flux density and the momentum flux density can then be given by \begin{eqnarray} \vec{Y} &=& \rho h \gamma^{2} \vec{v} + \vec{E} \times \vec{B}, \\ \vec{\Pi} &=& - \vec{E}\vec{E} - \vec{B}\vec{B} + \rho h \gamma^{2} \vec{v}\vec{v} + \left[ {1 \over 2} (E^{2}+B^{2}) + p \right] \vec{g} . \end{eqnarray} An equation of state (EoS) is needed to close the system; we have adopted a variable EoS (e.g., Mignone et al. 2005; Mignone \& McKinney 2007; Ryu et al. 2006). The details of the variable EoS are explained in the next section. Eqs. (9)-(12) evolve the augmented Maxwell's equations, which contain two additional scalar fields $\Psi$ and $\Phi$ used to control the divergence constraints. In this approach, the two scalar fields $\Psi$ and $\Phi$ indicate deviations of the divergence of the electric and magnetic fields from the values prescribed by Maxwell's equations, propagate at the speed of light, and decay exponentially over a time-scale $\sim 1/\kappa$ when the damping rate parameter $\kappa > 0$. Following previous studies (Komissarov 2007; Palenzuela et al. 2009; Dumbser \& Zanotti 2011), we have adopted the so-called hyperbolic divergence-cleaning approach used in the context of ideal MHD (Dedner et al. 2002). The system of Eqs. (6)-(13) is closed by means of Ohm's law. Ohm's law for relativistic plasmas can be very complicated (e.g., Lichnerowicz 1967; Ardavan 1984; Blackman \& Field 1993; Gedalin 1996; Melatos \& Melrose 1996; Punsley 2001; Meier 2004). In this paper, we consider only the simplest kind of relativistic Ohm's law that assumes an isotropic plasma resistivity (e.g., Komissarov 2007; Palenzuela et al. 2009; Zenitani et al. 2010; Takamoto \& Inoue 2011; Takahashi et al. 2011). In covariant form, the four-vector of the electric current is obtained from \begin{equation} I^{\alpha} = \sigma F^{\alpha \beta} u_{\beta} + q_{0} u^{\alpha}, \end{equation} where $\sigma=1/\eta$ is the conductivity, $\eta$ is the resistivity, and $q_{0} = - I_{\alpha}u^{\alpha}$ is the electric charge density as measured in the fluid frame (Lichnerowicz 1967; Blackman \& Field 1993). In a special relativistic inertial frame, we find \begin{equation} \vec{j} = \sigma \gamma [ \vec{E} + \vec{v} \times \vec{B} - (\vec{E} \cdot \vec{v})\vec{v}] + q \vec{v}. \end{equation} In the fluid rest frame, this equation becomes \begin{equation} \vec{j} = \sigma \vec{E}. \end{equation} The ideal MHD limit of Ohm's law is given by the limit of infinite conductivity ($\sigma \to \infty$). In this limit, the current remains finite only if the term in square brackets in Eq. (20) vanishes, which gives \begin{equation} \vec{E}+\vec{v} \times \vec{B} - (\vec{E} \cdot \vec{v}) \vec{v} = 0. \end{equation} Splitting the stiff relaxation part of the electric field evolution equation into components parallel and perpendicular to the velocity vector gives \begin{eqnarray} && \partial_{t} \vec{E}_{\parallel} + \sigma \gamma [\vec{E}_{\parallel} - (\vec{E} \cdot \vec{v}) \vec{v}] =0, \\ && \partial_{t} \vec{E}_{\perp} + \sigma \gamma [\vec{E}_{\perp} + \vec{v} \times \vec{B}] = 0. \end{eqnarray} From these equations, one obtains the well-known ideal MHD condition \begin{equation} \vec{E} = - \vec{v} \times \vec{B}. \end{equation} In this limit the electric field is orthogonal to both magnetic and velocity fields.
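
For reference, Eq. (20) and its ideal-MHD limit can be made concrete with a short Python sketch. The sketch is illustrative only (variable and function names are our own; it is not part of the code presented here):
\begin{verbatim}
import numpy as np

def ohm_current(E, B, v, q, sigma):
    """Spatial current of Eq. (20): j = sigma*gamma*[E + v x B - (E.v)v] + q*v."""
    E, B, v = map(np.asarray, (E, B, v))
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v))
    return sigma * gamma * (E + np.cross(v, B) - np.dot(E, v) * v) + q * v

# in the ideal-MHD limit (sigma -> infinity) the bracket must vanish, so
# E = -v x B (Eq. (25)) gives a finite current j = q*v for any sigma
v = np.array([0.5, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
E_ideal = -np.cross(v, B)
print(ohm_current(E_ideal, B, v, q=0.2, sigma=1e8))   # -> [0.1, 0, 0] = q*v
\end{verbatim}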
In general, an EoS is written as \begin{equation} h \equiv h(p, \rho), \end{equation} and general forms for the polytropic index $n$ and the sound speed $c_{s}$ are given by \begin{equation} n=\rho {\partial h \over \partial p} -1, \ c_{s}^{2} = - {\rho \over nh} {\partial h \over \partial \rho}. \end{equation} The most commonly used EoS, a constant $\Gamma$-law (ideal) EoS, is given by \begin{equation} h= 1+ {\Gamma \over \Gamma -1} \Theta, \end{equation} where $\Gamma$ is the constant specific heat ratio and $\Theta =p/\rho$ is the temperature. The sound speed is calculated from \begin{equation} c_{s}^{2} = {\Gamma \Theta \over h}. \end{equation} The constant $\Gamma$-law EoS may be applied correctly to a plasma with non-relativistic temperature where $\Gamma=5/3$ or to a plasma with an ultra-relativistic temperature where $\Gamma=4/3$. However, in the high-temperature limit, i.e., $\Theta \to \infty$ with $\Gamma > 4/3$, the sound speed exceeds the relativistic limit ($c_{s} >1/\sqrt{3}$). Moreover, a constant $\Gamma$-law EoS is not consistent with relativistic kinetic theory, which requires the specific enthalpy to satisfy the so-called Taub fundamental inequality \begin{equation} (h-\Theta)(h-4\Theta) \ge 1. \end{equation} This rules out a constant $\Gamma$-law EoS with $\Gamma > 4/3$, if applied to $0 < \Theta < \infty$. The theory of single-component perfect gases in the relativistic regime shows that the specific enthalpy is a function of the temperature $\Theta =p/\rho$ only, and has the form (Synge 1957) \begin{equation} h = {K_{3} (1/\Theta) \over K_{2} (1/\Theta)}, \end{equation} where $K_{2}$ and $K_{3}$ are the $2^{nd}$ and $3^{rd}$ order modified Bessel functions of the second kind, respectively. Defining an equivalent specific heat ratio $\Gamma_{eq} = (h-1)/( h-1-\Theta)$, the non-relativistic temperature limit ($\Theta \to 0$) yields $\Gamma_{eq} \to 5/3$, and the ultra-relativistic temperature limit ($\Theta \to \infty$) yields $\Gamma_{eq} \to 4/3 $ (see Fig. 1a). However, this EoS requires extra computational costs because the thermodynamics of the fluid is expressed in terms of the modified Bessel functions (Falle \& Komissarov 1996). Recently, Mignone et al. (2005) proposed an EoS, the so-called TM EoS, that follows eq. (31) well. The TM EoS, which was first introduced by Mathews (1971), is given by \begin{equation} p={\rho \epsilon (\rho \epsilon + 2 \rho) \over 3 (\rho \epsilon + \rho)} \mbox{ or } h={5 \over 2} \Theta + \sqrt{{9 \over 4} \Theta^{2} +1}, \end{equation} and the sound speed is calculated from \begin{equation} c_{s}^{2} = {\Theta \over 3h} {5h - 8 \Theta \over h-\Theta}. \end{equation} The TM EoS corresponds to the lower bound of Taub's fundamental inequality, i.e., $(h-\Theta)(h-4\Theta)=1$, and produces the correct asymptotic values for $\Gamma_{eq}$. Ryu et al. (2006) proposed an EoS which is a simpler algebraic function of $\Theta$, hereafter referred to as the RC EoS, that satisfies Taub's inequality for all $\Theta$. The RC EoS is given by \begin{equation} {p \over \rho \epsilon} = {3p + 2 \rho \over 9 p + 3 \rho} \mbox{ or } h=2{6 \Theta^{2} + 4 \Theta + 1 \over 3 \Theta +2}, \end{equation} and the sound speed is calculated from \begin{equation} c_{s}^{2} = {\Theta (3\Theta +2)(18 \Theta^{2} + 24 \Theta + 5) \over 3 (6 \Theta^{2} + 4 \Theta +1)(9 \Theta^{2}+ 12 \Theta +2)}. \end{equation} \begin{figure}[h!] 
\epsscale{1.1} \plotone{f1.eps} \caption{Equivalent $\Gamma$ (left), specific enthalpy (middle), and sound speed (right) as functions of the temperature $\Theta=p/\rho$. Different lines correspond to: constant $\Gamma$-law EoS with $\Gamma=5/3$ (dotted lines), constant $\Gamma$-law EoS with $\Gamma=4/3$ (dashed lines), TM EoS (dash-dotted lines) and RC EoS (dash-two dotted lines). For comparison Synge's EoS has been plotted as the solid lines. \label{f1}} \end{figure} Figure 1 shows the equivalent $\Gamma$, the specific enthalpy and the sound speed as a function of $\Theta$ for TM EoS, RC EoS, Synge's EoS as well as a constant $\Gamma$-law EoS with $\Gamma=5/3$ and $4/3$. The specific enthalpy and the sound speed computed using the TM EoS and the RC EoS are well matched to Synge's EoS and cannot be distinguished on the plots. The approximations to Synge's EoS such as the TM EoS and RC EoS are hereafter referred to as approximate EoSs. \section{Numerical Method} A well-known and challenging feature of the system of equations (6)-(13) is that they have source terms for the evolution of the electric field that become stiff in the high conductivity (low resistivity) limit. Following Komissarov (2007), we will use the Strang-splitting technique (Strang 1968). The system of equations (6)-(13) can be written as a single phase vector equation \begin{equation} {\partial U(P) \over \partial t } + {\partial F^{m} (P) \over \partial x^{m}} = S(P), \end{equation} where \vspace{0.1cm} \[ U = \left( \begin{array}{c} D \\ m^{i} \\ \tau \\ E^{i} \\ B^{i} \\ \Psi \\ \Phi \\ q \end{array} \right), \ P=\left( \begin{array}{c} \rho \\ v^{i} \\ p \\ E^{i} \\ B^{i} \\ \Psi \\ \Phi \\ q \end{array} \right), \ S=\left( \begin{array}{c} 0 \\ 0^{i} \\ 0 \\ -j^{i} \\ 0^{i} \\ q-\kappa \Psi \\ -\kappa \Phi \\ 0 \end{array} \right) \] are the vectors of conserved variables, primitive variables and sources, respectively, and \vspace{0.1cm} \[ F^{m} = \left( \begin{array}{c} \rho \gamma v^{m} \\ \Pi^{im} \\ Y^{m} \\ -e^{imk} B_{k} + \Psi g^{im} \\ e^{imk} E_{k} + \Phi g^{im} \\ E^{m} \\ B^{m} \\ j^{m} \end{array} \right) \] \vspace{0.1cm} is the vector of numerical fluxes, where $e^{ijk}$ is the Levi-Civita alternating tensor of space. The source term can be split into two parts \[ S_{a}(P)=\left( \begin{array}{c} 0 \\ 0^{i} \\ 0 \\ -qv^{i} \\ 0^{i} \\ q \\ 0 \\ 0 \end{array} \right) \mbox{and} \ S_{b}(P)=\left( \begin{array}{c} 0 \\ 0^{i} \\ 0 \\ -j_{c}^{i} \\ 0^{i} \\ -\kappa \Psi \\ -\kappa \Phi \\ 0 \end{array} \right), \] where \begin{equation} \vec{j}_{c} = \sigma \gamma [\vec{E} + \vec{v} \times \vec{B} - (\vec{E} \cdot \vec{v}) \vec{v}] \end{equation} is the conductivity current. The source term $S_{b}$ is a stiff relaxation term that requires special care to capture the dynamics in a stable and accurate manner. In the Strang time-step splitting technique, firstly the solution is advanced using the stiff-part equations \begin{equation} {\partial U(P) \over \partial t} = S_{b}(P) \end{equation} over the half time-step, $\Delta t/2$. Secondly, advance of the non-stiff part of the equations is made via second-order accurate numerical integration of \begin{equation} {\partial U(P) \over \partial t} + {\partial F^{m} (P) \over \partial x^{m}}= S_{a}(P) \end{equation} over the full time step. Thirdly, again the solution is advanced by the stiff-part equations over the half-time step. 
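Schematically, one Strang-split cycle as described above can be organized as follows. This is only an illustrative sketch (the callables \texttt{stiff\_step} and \texttt{hyperbolic\_step} stand for the analytic stiff update of Eq. (38) and the finite-volume update of the non-stiff part, Eq. (39); their names and interfaces are ours, not those of the actual code).
\begin{verbatim}
def strang_step(U, dt, stiff_step, hyperbolic_step):
    """One Strang-split cycle, second-order accurate in time:
    stiff half-step (Eq. 38), full non-stiff step (Eq. 39),
    and another stiff half-step."""
    U = stiff_step(U, 0.5 * dt)      # dU/dt = S_b          over dt/2
    U = hyperbolic_step(U, dt)       # dU/dt + dF/dx = S_a  over dt
    U = stiff_step(U, 0.5 * dt)      # dU/dt = S_b          over dt/2
    return U
\end{verbatim}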
Time advance of the non-stiff part of the equations is given by \begin{equation} U_{n+1} = U_{n} + \Delta t \sum^{n_d}_{m=1} {F_{m-1/2, n+1/2} - F_{m+1/2, n+1/2} \over \Delta x^{m}} + \Delta t S_{a, n+1/2}, \end{equation} where $U_{n}$ represents the cell-centered conserved variables at $t=t_{n}$, $U_{n+1}$ represents the cell-centered conserved variables at $t=t_{n}+\Delta t$, $S_{a, n+1/2}$ represents the cell-centered non-stiff source term at $t=t_{n}+\Delta t/2$, $F_{m+1/2, n+1/2}$ is the numerical flux through the right-hand side cell interface and $F_{m-1/2, n+1/2}$ is the numerical flux through the left-hand side cell interface in the direction of $x^{m}$ at time $t=t_{n}+\Delta t/2$, $\Delta x^{m}$ is the cell size in this direction, and $n_{d}$ is the number of spatial dimensions. The non-stiff sources and numerical fluxes at the half-time step are calculated from \begin{equation} U_{n+1/2} = U_{n} + {\Delta t \over 2} \sum^{n_d}_{m=1} {F_{m-1/2, n} - F_{m+1/2, n} \over \Delta x^{m}} + {\Delta t \over 2} S_{a, n}. \end{equation} The numerical fluxes at the cell-interface $F_{m+1/2, n}$ are calculated using the simplified Harten-Lax-van Leer (HLL) approximate Riemann solver (Harten et al. 1983, Komissarov 2007) where the maximum characteristic speed of the system in each direction equals the speed of light. The left-hand and right-hand states of each cell interface using the HLL approximate Riemann solver are computed from various reconstruction schemes as in our ideal RMHD code (Mizuno et al. 2006; 2011). In this paper, we use a piecewise linear method (PLM) reconstruction such as the minmod slope-limited linear interpolation scheme or the Monotonized Central (MC) slope-limited linear interpolation scheme as these are the simplest reconstruction schemes that capture a shock sharply. The resulting scheme is second-order accurate in time and space. Following the work of Komissarov (2007), the split evolution equations of the electric field, eqs. (23) \& (24), can be solved analytically \begin{eqnarray} \vec{E}_{\parallel} &=& \vec{E}_{\parallel}^{0} \exp \left[ - {\sigma \over \gamma} t \right], \\ \vec{E}_{\perp} &=& \vec{E}_{\perp}^{*} + (\vec{E}_{\perp}^{0} - \vec{E}_{\perp}^{*}) \exp [-\sigma \gamma t], \end{eqnarray} where $\vec{E}_{\perp}^{*} = - \vec{v} \times \vec{B}$ and the suffix 0 indicates the initial value. The stiff-part equations related to the two scalar fields $\Psi$ and $\Phi$ are also solved analytically with solution \begin{eqnarray} \Psi &=& \Psi_{0} \exp [-\kappa t], \\ \Phi &=& \Phi_{0} \exp [-\kappa t], \end{eqnarray} where $\Psi_{0}$ and $\Phi_{0}$ are the initial values of $\Psi$ and $\Phi$. In order to evolve this system of equations, the numerical fluxes $F^{m}$ must be computed at each time-step. These fluxes depend on the primitive variables $P$, which must be recovered from the evolved conserved variables $U$. Among the conserved variables, $E$ and $B$ are obtained directly by evolving Maxwell's equations. However, it is more stable to evolve the stiff part equations (42) - (43) during the primitive recovery process when $\sigma$ becomes large (Palenzuela et al. 2009). The primitive recovery procedure adapted to an approximate EoS such as the TM EoS and the RC EoS follows that used by Palenzuela et al. (2009). \section{Numerical Tests} In this section, one-dimensional and two-dimensional tests are presented. Three one-dimensional tests have been used to validate our new resistive relativistic MHD code in different regimes. 
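Before turning to the individual tests, the analytic stiff updates above and the approximate-EoS relations entering the primitive recovery can be summarized in a short sketch. This is again our own illustration under the assumptions stated in the comments (in particular, the parallel/perpendicular split assumes a nonvanishing velocity) and is not the authors' code.
\begin{verbatim}
import numpy as np

def stiff_update_E(E, v, B, sigma, dt):
    """Analytic relaxation of the electric field over dt, Eqs. (42)-(43).
    Assumes |v| > 0 so that the parallel/perpendicular split is defined."""
    E, v, B = (np.asarray(a, dtype=float) for a in (E, v, B))
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v))
    vhat = v / np.linalg.norm(v)
    E_par = np.dot(E, vhat) * vhat
    E_perp = E - E_par
    E_star = -np.cross(v, B)                        # ideal-MHD value of E_perp
    E_par = E_par * np.exp(-sigma * dt / gamma)                        # Eq. (42)
    E_perp = E_star + (E_perp - E_star) * np.exp(-sigma * gamma * dt)  # Eq. (43)
    return E_par + E_perp

def stiff_update_cleaning(Psi, Phi, kappa, dt):
    """Exponential damping of the divergence-cleaning scalars, Eqs. (44)-(45)."""
    return Psi * np.exp(-kappa * dt), Phi * np.exp(-kappa * dt)

def h_TM(theta):
    """Specific enthalpy of the TM EoS, Eq. (32)."""
    return 2.5 * theta + np.sqrt(2.25 * theta**2 + 1.0)

def cs2_TM(theta):
    """Squared sound speed of the TM EoS, Eq. (33)."""
    h = h_TM(theta)
    return theta / (3.0 * h) * (5.0 * h - 8.0 * theta) / (h - theta)

def h_RC(theta):
    """Specific enthalpy of the RC EoS, Eq. (34)."""
    return 2.0 * (6.0 * theta**2 + 4.0 * theta + 1.0) / (3.0 * theta + 2.0)
\end{verbatim}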
A one-dimensional shock-tube test and three two-dimensional tests have been used to investigate the effect of different EoSs. The damping coefficient of the hyperbolic divergence cleaning is set to $\kappa=1$. The magnetic field is divergence-free and charge is preserved at the truncation error level. \subsection{One-dimensional tests} \subsubsection{Large amplitude CP Alfv\'{e}n wave test} This test consists of the propagation of a large amplitude circularly-polarized Alfv\'{e}n wave along a uniform background magnetic field $B_{0}$ in a domain with periodic boundary conditions. The exact solution is given by Del Zanna et al. (2007) in the ideal MHD limit, and was used as an ideal-MHD limit test problem by Palenzuela et al. (2009) and Takamoto \& Inoue (2011). Here we use conditions similar to previous studies with \begin{eqnarray} (B_{y}, B_{z}) &=& \zeta_{A} B_{0} (\cos [k(x-v_{A}t)], \sin [k(x-v_{A}t)]), \\ (v_{y}, v_{z}) &=& - {v_{A} \over B_{0}} (B_{y}, B_{z}), \end{eqnarray} where $B_{x}=B_{0}$, $v_{x}=0 $, $k$ is the wave number and $\zeta_{A}$ is the amplitude of the wave. The special relativistic Alfv\'{e}n speed $v_{A}$ is given by \begin{equation} v_{A}^{2} = {2B_{0}^{2} \over h+ B_{0}^{2}(1+ \zeta_{A}^{2})} \left( 1+ \sqrt{ 1- \left( {2 \zeta_{A} B_{0}^{2} \over h + B_{0}^{2} (1 + \zeta_{A}^{2})}\right)^{2} } \right)^{-1}. \end{equation} For this test we use initial parameters $\rho=p=1$ and $B_{0}=0.46188$, the Alfv\'{e}n velocity $v_{A}=0.25c$, and we adopt a constant gamma-law EoS with $\Gamma=2$. Following Palenzuela et al. (2009), we use a high uniform conductivity $\sigma=10^{5}$ with three different resolutions of 50, 100 and 200 cells covering the computational domain $ x \in [-0.5, 0.5]$. Figure 2 shows the numerical results at $t=4$ (one Alfv\'{e}n crossing time) for the three different resolutions. This result shows that the new resistive RMHD code reproduces ideal relativistic MHD solutions when the conductivity $\sigma$ is high. \begin{figure}[h!] \epsscale{0.7} \plotone{f2.eps} \caption{Magnetic field component $B_{y}$ in a large-amplitude CP Alfv\'{e}n wave test using three different resolutions $N=50$ (dotted), $100$ (dashed) and $200$ (dash-dotted) at $t=4$. The solid line shows the exact solution. The numerical results are in good agreement with the analytical one (the agreement at the highest resolution is excellent). \label{f2}} \end{figure} The $L_{1}$ norm errors of the magnetic field component $B_{y}$ in this test are shown in Figure 3. The numerical result is slightly shallower than $2^{nd}$ order convergence. This is likely caused by the periodic boundary. \begin{figure}[h!] \epsscale{0.7} \plotone{f3.eps} \caption{$L_{1}$ norm errors of the magnetic field component $B_{y}$ in a large-amplitude CP Alfv\'{e}n wave test using the three different resolutions $N=50$, $100$ and $200$. \label{f3}} \end{figure} \subsubsection{1D self-similar current sheet test} This test problem has been used for moderate resistivity cases (e.g., Komissarov 2007; Palenzuela et al. 2009; Takamoto \& Inoue 2011). In this test problem the magnetic pressure is much smaller than the gas pressure everywhere. The magnetic field configuration is given by $B=[0, B_{y}(x,t), 0]$, where $B_{y}(x,t)$ changes sign within a thin current layer of thickness $\Delta l$. An initial solution is provided in equilibrium with $p= $ constant. 
The evolution of this thin current layer is a slow diffusive expansion due to the resistivity and described by the diffusion equation \begin{equation} \partial_{t} B_{y} - {1 \over \sigma } \partial^{2}_{x} B_{y} =0. \end{equation} As the thickness of the layer becomes much larger than $\Delta l$ the expansion becomes self-similar with \begin{equation} B_{y}(x,t) = B_{0}\ \mbox{erf} \left( {1 \over 2} \sqrt{\sigma \over \chi} \right), \end{equation} where $\chi = t/x^{2}$ and erf is the error function. This analytic result can be used for testing the moderate resistivity regime. In the test problem, we have chosen an initial solution at $t=1$ with $p=50$, $\rho=1$, $\vec{E} = \vec{v}=0$ and $\sigma=100$ ($\eta=1/\sigma=0.01$). A constant gamma-law EoS with $\Gamma=2$ is used. The computational domain is uniform with 200 cells in $[-1.5, 1.5]$. \begin{figure}[h!] \epsscale{0.7} \plotone{f4.eps} \caption{Magnetic field component $B_{y}$ in a self-similar current sheet test. The dotted and dashed lines indicate the analytical solution at $t=1$ and $t=10$. The solid line shows the numerical solution at $t=10$. The numerical solution is in excellent agreement with the analytical one. \label{f4}} \end{figure} The numerical simulation is evolved up to $t=10$ and then the numerical solution is compared in Figure 4 with the analytical solution. The numerical and analytical solutions cannot be distinguished on the plot. This indicates that the moderate resistivity regime is well described by the code. \subsubsection{1D shock-tube tests} As a first shock-tube test in the resistive relativistic MHD regime, we consider a simple MHD version of the Brio and Wu test as in Palenzuela et al. (2009) and Takamoto \& Inoue (2011). The initial left and right states are separated at $x=0.5$ and are given by \begin{eqnarray} (\rho^{L}, p^{L}, B^{L}_{y}) &=& (1.0, 1.0, 0.5) \\ (\rho^{R}, p^{R}, B^{R}_{y}) &=& (0.125, 0.1, -0.5). \end{eqnarray} All other fields are set to $0$. We use a constant $\Gamma$-law EoS with $\Gamma=2$. The computational domain covers the region $x \in [-0.5,0.5]$ with $200$ cells. Figure 5 shows the numerical results at $t=0.4$ for conductivities $\sigma=0$, $10$, $10^2$, $10^3$, $10^5$. The exact solution to the ideal RMHD Riemann problem was found by Giacomazzo \& Rezzolla (2006). When $B_{x}=0$, the solution contains only two fast waves, a left-moving rarefaction wave and a right-moving shock with a tangential discontinuity between them. \begin{figure}[h!] \epsscale{0.6} \plotone{f5.eps} \caption{(a) Density and (b) magnetic field component $B_{y}$ in the simplified Brio \& Wu shock-tube test. Different lines indicate different conductivities: $\sigma=0$ (orange solid), $10$ (green dash-two-dotted), $10^2$ (red dash-dotted), $10^3$ (purple dashed), $10^5$ (blue dotted). The black solid line shows the exact solution in the ideal RMHD case. \label{f5}} \end{figure} The results show that the solution smoothly changes from a wave-like solution for $\sigma=0$ to the ideal-MHD solution for high conductivity $\sigma=10^5$. Note that for $\sigma=0$ the solution describes a discontinuity propagating at the speed of light corresponding to Maxwell's equations in vacuum. This result is nearly the same as test results for other codes (Palenzuela et al. 2009; Takamoto \& Inoue 2011). Palenzuela et al. (2009) reported that Strang's splitting technique became unstable for moderately high conductivity in this shock tube test, and they suggested using an implicit method. 
However, Takamoto \& Inoue (2011) found that this instability is not related to Strang's splitting technique but instead to the calculation of the electric field during the primitive recovery procedure. In this new resistive RMHD code, the shock tube test is solved stably using Strang's splitting technique even when $\sigma \simeq 10^{6}$. Balsara Test 2 (Balsara 2001) is used as a second shock tube test to investigate the effect of different EoSs in the resistive relativistic MHD regime. In ideal RMHD, Mignone \& McKinney (2007) have already performed this test to check the effect of different EoSs. In this test the initial left and right states are separated at $x=0.5$ and are given by \begin{eqnarray} (\rho^{L}, p^{L}, B^{L}_{x}, B^{L}_{y}, B^{L}_{z}) &=& (1.0, 30.0, 5.0, 6.0, 6.0) \\ (\rho^{R}, p^{R}, B^{R}_{x}, B^{R}_{y}, B^{R}_{z}) &=& (1.0, 1.0, 5.0, 0.7, 0.7). \end{eqnarray} The computational domain covers the region $x \in [-0.5,0.5]$ with $800$ cells. This test shows that a mildly relativistic blast wave propagates to the right with a maximum Lorentz factor of $1.3 \le \gamma \le 1.4$. The numerical results at $t=0.4$ for the constant $\Gamma$-law EoS with $\Gamma=5/3$, the TM EoS and the RC EoS using conductivities $\sigma=0$, $10$, $10^2$, $10^3$ are shown in Figures 6, 7 and 8, respectively. \begin{figure}[h!] \epsscale{1.0} \plotone{f6.eps} \caption{(a) Density, (b) gas pressure, (c) velocity component $v_{x}$, (d) velocity component $v_{y}$, (e) magnetic field component $B_{y}$ and (f) Lorentz factor in the Balsara Test 2 (mildly relativistic blast wave) at $t=0.4$ using an ideal EoS with $\Gamma=5/3$. Different lines indicate different conductivity: $\sigma=0$ (purple dash-two-dotted), $10$ (green dash-dotted), $10^2$ (red dashed), $10^3$ (blue dotted). The black solid line shows the exact solution in the ideal RMHD case. \label{f6}} \end{figure} \begin{figure}[h!] \epsscale{1.0} \plotone{f7.eps} \caption{ The same as in Fig. 6 but using the TM EoS. \label{f7}} \end{figure} \begin{figure}[h!] \epsscale{1.0} \plotone{f8.eps} \caption{ The same as in Fig. 6 but using the RC EoS. \label{f8}} \end{figure} The solutions show fast and slow rarefaction waves, a contact discontinuity, and slow and fast shocks from left to right. In these cases, no rotational discontinuity is seen. The results obtained from the TM EoS and RC EoS cases are considerably different from the constant $\Gamma$-law EoS with $\Gamma=5/3$. In the approximate EoS cases, the rarefaction waves and shocks propagate with smaller velocities. This is predicted from the lower sound speed in the approximate EoS cases relative to the overestimated sound speed in the ideal EoS case with $\Gamma=5/3$. Behind the slow shock, the approximate EoS cases have a higher peak density, which follows from the previous considerations. These properties are consistent with those in the ideal RMHD case (Mignone \& McKinney 2007). On the other hand, the results obtained from TM EoS and RC EoS cases are very similar at our numerical resolution. This similarity reflects the similarity in the distributions of specific enthalpy (see Fig. 1). Again the results show a smooth change from a wave-like solution for $\sigma=0$ towards an ideal-MHD solution for the highest conductivity, $\sigma=10^3$, for all the different EoS cases. The differences between the approximate EoS cases and the constant $\Gamma$-law case are larger at higher $\sigma$ where the approximate EoS cases approach the ideal MHD case more slowly. 
Even for low conductivity, i.e., $\sigma=10$, we clearly see a difference between the ideal EoS case with $\Gamma=5/3$ and the approximate EoS cases. \subsection{Two-dimensional tests} \subsubsection{The Cylindrical Explosion} We now consider tests involving shocks in multi-dimensions. Firstly we choose a test involving a cylindrical blast wave expanding into an initially uniform magnetic field. This is a standard test for ideal relativistic MHD codes even though there is no exact solution, because this test will reveal subtle bugs and potential weaknesses in the numerical implementation. For this test, we use a Cartesian computational domain $(x,y) \in [-6,6]$ with 200 uniform cells in each direction. The explosion is initialized by setting the gas pressure and density to $p=1$ and $\rho=0.01$ within a cylinder of radius $r < 0.8$ centered on the origin. In an intermediate region $0.8 < r < 1.0$, the pressure and density decrease exponentially to that of the ambient gas which has $p=\rho=0.001$. The initial magnetic field is uniform in the $x$-direction with $\vec{B}=(0.05, 0, 0)$. The other quantities are set to zero (i.e., $\vec{v}=\vec{E}=q=0$). Figure 9 shows the magnetic field components $B_{x}$ and $B_{y}$ at $t=4$ using the ideal EoS with $\Gamma=4/3$ and the TM EoS. \begin{figure}[h!] \epsscale{1.0} \plotone{f9.eps} \caption{ Magnetic field components $B_{x}$ (left panels) and $B_{y}$ (right panels) for the cylindrical explosion test at $t=4$ using an ideal EoS with $\Gamma=4/3$ (upper panels) and the TM EoS (lower panels). \label{f9}} \end{figure} The ideal-MHD simulation is performed using a high conductivity of $\sigma=10^5$. The results are qualitatively similar to those reported in previous studies in ideal RMHD (Komissarov 1999; Leismann et al. 2005; Neilsen et al. 2006; Del Zanna et al. 2007; Mizuno et al. 2011) and RRMHD (Komissarov 2007; Palenzuela et al. 2009). The results obtained from the ideal EoS with $\Gamma=4/3$ and the TM EoS are qualitatively very similar. This means that an ideal EoS with $\Gamma=4/3$ satisfactorily captures all the shock properties. Figure 10 shows one-dimensional profiles of the gas and magnetic pressure along the $y$-axis for \begin{figure}[h!] \epsscale{0.9} \plotone{f10.eps} \caption{Gas pressure $P_{gas}$ (left panels) and magnetic pressure $P_{mag}$ (right panels) for the cylindrical explosion test at $t=4$ using an ideal EoS with $\Gamma=4/3$ (upper panels) and a TM EoS (lower panels). The different lines show conductivity cases: $\sigma=0$ (purple dash-two-dotted), $10$ (green dash-dotted), $10^2$ (red dashed), $10^3$ (blue dotted), and $10^5$ (black solid). \label{f10}} \end{figure} conductivities of $\sigma=0$, $10$, $10^2$, $10^3$, $10^5$ using the ideal EoS with $\Gamma=4/3$ and the TM EoS. At high conductivity $\sigma \ge 10^3$, we do not see any significant difference. This means that high conductivity recovers the ideal-MHD solution. As the conductivity decreases, the maximum gas and magnetic pressure decrease. Of course there is no magnetic pressure increase for $\sigma=0$. \subsubsection{Kelvin-Helmholtz instability test} We present calculations of the linear and nonlinear growth of the two-dimensional Kelvin-Helmholtz instability (KHI) to investigate the effect of conductivity and the EoS on the development of turbulence in the resistive relativistic MHD regime. Initial conditions for this test are taken from a combination of previous studies in ideal RMHD (Bucciantini \& Del Zanna 2006; Mignone et al. 
2009; Beckwith \& Stone 2011). The shear velocity profile is given by \begin{equation} v_{x}=\left\{ \begin{array}{cl} v_{sh} \tanh \left( {y-0.5 \over a} \right) & \mbox{if $ y > 0.0$} \\ -v_{sh} \tanh \left( {y+0.5 \over a} \right) & \mbox{if $ y \le 0.0$} \end{array} \right. \end{equation} Here, $a=0.01$ is the characteristic thickness of the shear layer, and $v_{sh}=0.5$ corresponds to a relative Lorentz factor of $2.29$. The initial uniform pressure is $p=1.0$. The density is initialized using the shear velocity profile, with $\rho=1.0$ in regions with $v_{sh}=0.5$ and $\rho=10^{-2}$ in regions with $v_{sh}=-0.5$. The magnetic field components are given in terms of the poloidal and toroidal magnetization parameters $\mu_{p}$ and $\mu_{t}$ as \begin{equation} (B_{x}, B_{y}, B_{z})=(\sqrt{2 \mu_{p} p}, 0, \sqrt{2 \mu_{t} p}), \end{equation} with $\mu_{p}=0.01$ and $\mu_{t}=1.0$. The instability is seeded by a single mode perturbation of the form \begin{equation} v_{y}=\left\{ \begin{array}{cl} A_{0} v_{sh} \sin(2 \pi x) \exp \left[ -\left( {y-0.5 \over \alpha} \right)^{2} \right] & \mbox{if $ y > 0.0$} \\ - A_{0} v_{sh} \sin(2 \pi x) \exp \left[ -\left( {y+0.5 \over \alpha} \right)^{2} \right] & \mbox{if $ y \le 0.0$} \end{array} \right. \end{equation} Here, $A_{0}=0.1$ is the perturbation amplitude and $\alpha=0.1$ is the characteristic length scale over which the perturbation amplitude decreases exponentially. The computational domain covers $x \in [-0.5, 0.5]$, $y \in [-1,1]$ with $256 \times 512$ cells. Figure 11 shows the perturbation amplitude ($\Delta v_{y} \equiv (v_{y, max} - v_{y, min})/2$) and volume-averaged poloidal magnetic field ($B_{pol}$ $=$ $\sqrt{B_{x}^{2}+B_{y}^{2}}$) as a function of time for conductivities of $\sigma=0$, $10$, $10^{2}$, $10^{3}$, $10^{5}$ using an ideal EoS with $\Gamma=4/3$ and using the TM EoS. All cases show an initial linear growth phase. Except for $\sigma=0$, the ideal EoS and the TM EoS have almost the same growth rate and the maximum amplitude is reached at $t\sim 2$. The maximum amplitude indicates the transition from the linear to the nonlinear phase. The $\sigma=0$ cases exhibit a lower growth rate and later transition to the nonlinear phase than the higher conductivity cases. Thus, differences in the conductivity and EoS do not affect the growth of KHI, except for $\sigma=0$. \begin{figure}[h!] \epsscale{0.9} \plotone{f11.eps} \caption{ Evolution of the amplitude of the perturbation (upper panels) and volume-averaged poloidal field ($B_{pol}$) (lower panels) as a function of time for the Kelvin-Helmholtz instability test using an ideal EoS with $\Gamma=4/3$ (left panels) and the TM EoS (right panels). The different lines indicate different conductivity cases: $\sigma=0$ (purple dash-two-dotted), $10$ (green dash-dotted), $10^2$ (red dashed), $10^3$ (blue dotted), $10^5$ (black solid). \label{f11}} \end{figure} Poloidal field amplification via stretching due to the main vortex developed by KHI follows the growth of KHI (see Fig. 12). In high conductivity cases, poloidal field amplification is very large, an increase of almost one order of magnitude. Saturation in the poloidal field amplitude occurs later than the transition from the linear to the non-linear KHI growth phase. This means that magnetic field amplification via stretching continues even after KHI is fully developed. Poloidal field amplification is weaker and saturation occurs earlier when the conductivity is low. 
Larger poloidal field amplification occurs for the TM EoS case than for the ideal EoS case. Therefore we find that magnetic field amplification via stretching due to the main vortex developed by KHI is strongly affected by the conductivity and the EoS. Figures 12 and 13 show the time evolution of the density and the poloidal to toroidal field ratio ($B_{pol}/B_{tor}=\sqrt{B^{2}_{x} + B^{2}_{y}}/B_{z}$) for high conductivity, $\sigma=10^{5}$, using the ideal EoS with $\Gamma=4/3$ (Fig.\ 12) and using the TM EoS (Fig.\ 13). \begin{figure}[h!] \epsscale{0.9} \plotone{f12.eps} \caption{Density (upper panels) and the poloidal to toroidal field ratio ($B_{pol}/B_{tor}$; lower panels) for the Kelvin-Helmholtz instability test at $t=3$, $7$, \& $10$ for $\sigma=10^{5}$ using the ideal EoS with $\Gamma=4/3$. White lines indicate magnetic field lines and the arrows show velocity vectors.\label{f12}} \end{figure} Both cases show formation of a main vortex by growth of KHI in the linear growth phase. In the ideal EoS case, a secondary vortex appears, although not fully developed. However, development of a secondary vortex is not found in the TM EoS case. Beckwith \& Stone (2011) found that a secondary vortex did not appear even in very high resolution simulations using the HLL approximate Riemann solver in ideal RMHD. In our simulations, we also used the HLL approximate Riemann solver to calculate the numerical flux, but do find a secondary vortex in the ideal EoS case. The difference is likely the result of the different reconstruction schemes used here and by Beckwith \& Stone. In the nonlinear phase the main vortex is distorted and stretched. The magnetic field is strongly amplified by shear motion in the vortex in the linear phase and by stretching in the nonlinear phase. As the mixing layer grows the field lines are bunched into a filamentary, stretched structure. In the TM EoS case, the vortex becomes strongly elongated along the flow direction. The structure created in the nonlinear phase is very different in the ideal EoS and the TM EoS cases. \begin{figure}[h!] \epsscale{0.9} \plotone{f13.eps} \caption{ The same as Fig. 12 but using the TM EoS. \label{f13}} \end{figure} The field amplification structure for different conductivities from $\sigma=0$ to $10^{3}$ is shown in Figure 14. \begin{figure}[h!] \epsscale{0.7} \plotone{f14.eps} \caption{ The poloidal to toroidal field ratio ($B_{pol}/B_{tor}$) for the Kelvin-Helmholtz instability test at $t=3$ for (a) $\sigma=0$, (b) $\sigma=10$, (c) $\sigma=10^{2}$, and (d) $\sigma=10^3$ using the ideal EoS with $\Gamma=4/3$. \label{f14}} \end{figure} As seen in the time evolution of the averaged poloidal field shown in Fig.\ 11, the magnetic field amplification is weaker when the conductivity is low. Field amplification is a result of fluid motion in the vortex. In the high conductivity case, the magnetic field follows the fluid motion, like ideal MHD, and is strongly twisted. When the conductivity declines, the magnetic field is no longer strongly coupled to the fluid motion. Therefore the magnetic field is not strongly twisted. In the case using the TM EoS, we see the same trend for different conductivities and do not show the result here. \subsubsection{Relativistic Magnetic Reconnection test} The final test involves relativistic magnetic reconnection. 
Pioneering work on relativistic magnetic reconnection using a resistive relativistic MHD code and 2D simulations was performed by Watanabe \& Yokoyama (2006) who considered Petschek-type reconnection. Zenitani et al. (2010) also have studied details of Petschek-type reconnection via 2D resistive relativistic MHD simulations. Zanotti \& Dumbser (2011) have investigated Petschek-type relativistic magnetic reconnection by performing 2D and 3D simulations over a broad range of conductivities and magnetizations. Takahashi et al. (2011) have studied Sweet-Parker type relativistic magnetic reconnection using 2D resistive relativistic MHD simulations. In this test, we present simulations of Petschek-type reconnection and investigate the effect of the EoS. We use initial conditions similar to that used in previous work (Watanabe \& Yokoyama 2006; Zenitani et al. 2010; Zanotti \& Dumbser 2011). The density and gas pressure are given by \begin{eqnarray} \rho &=& \rho_{b} + \mu_{m} \cosh^{-2} (y) \\ p &=& p_{b} + \mu_{m} \cosh^{-2} (y), \end{eqnarray} where $\rho_{b}=p_{b}=0.1$ are the uniform density and gas pressure outside the current sheet, and $\mu_{m}=B_{0}^{2}/(2 \gamma_{0}^{2})=1.0$ is the magnetization parameter. The velocity field is initially zero, hence $\gamma_{0}=1$. The magnetic field changes orientation across the current sheet according to \begin{equation} B_{x} = B_{0} \tanh (y), \end{equation} where $B_{0}$ is calculated from the magnetization parameter. The current distribution is given by \begin{equation} j_{z} = B_{0} \cosh^{-2} (y). \end{equation} Over the whole computational domain there is a small background uniform resistivity $\eta_{b}=1/\sigma_{b}=10^{-3}$, except within a circle of radius $r_{\eta}=0.8$ which defines a region of anomalous resistivity with amplitude $\eta_{0}=1.0$. The resistivity can be written as \begin{equation} \eta = \left\{ \begin{array}{cl} \eta_{b} + \eta_{0}[2 (r/r_{\eta})^{3} - 3(r/r_{\eta})^{2} + 1] & \mbox{for $r \le r_{\eta}$,} \\ \eta_{b} & \mbox{for $r > r_{\eta}$,} \end{array} \right. \end{equation} where $r = \sqrt{x^{2} + y^{2}}$. The electric field is calculated from the resistivity distribution as \begin{equation} E_{z} = \eta j_{z}. \end{equation} The computational domain is $x \in [-50, 50]$, $y \in [-20, 20]$ with $2000 \times 800$ cells, and outflow boundary conditions are used in both directions. Figure 15 shows the density, the $x$-component of the 4-velocity, $\gamma v_{x}$, and the out-of-plane current $j_{z}$ at $t=100$ using an ideal EoS with $\Gamma=4/3$ and the TM EoS. In this figure we confirm the essential features of Petschek-type relativistic reconnection reported in previous work (Watanabe \& Yokoyama 2006; Zenitani et al. 2010; Zanotti \& Dumbser 2011). In both the ideal and TM EoS cases, we see similar time evolution and morphology. After an initial adjustment stage of $t \le 10$, the reconnection process starts around the point at $(x,y)=(0,0)$ triggered by anomalous resistivity. The magnetic field shows a typical X-type topology. As a result of reconnection, the magnetic energy is converted into both thermal and kinetic energy. Two magnetic islands (so-called plasmoids) which correspond to the high-density region in Fig. 15 move in opposite directions (the figure shows only half of the simulation region and only one magnetic island) and are accelerated along the direction of the magnetic field. \begin{figure}[h!] 
\epsscale{1.0} \plotone{f15.eps} \caption{Density (upper panels), the $x$-component of the four-velocity $\gamma v_{x}$ (middle panels), and the out-of-plane current $j_{z}$ (lower panels) in the relativistic magnetic reconnection test at $t=100$ using the ideal EoS with $\Gamma=4/3$ (left panels) and the TM EoS (right panels). White lines indicate magnetic field lines and the arrows show velocity vectors. \label{f15}} \end{figure} A fast reconnection jet is formed inside a narrow nozzle within a pair of slow shocks (Petschek slow shock). The reconnection jet collides with a plasmoid in front of the current sheet further downstream. The plasmoid is surrounded by strong currents (see Fig. 15e,f), which also correspond to the slow shocks (Ugai 1995). These slow shocks surround the plasmoid and are connected to the Petschek slow shocks. In the TM EoS case, the plasmoid has a faster speed than in the ideal EoS case with $\Gamma=4/3$ and propagates further. The time evolution of the maximum outflow velocity ($v_{x, max}$), the volume-averaged magnetic energy ($B^{2}$) and the normalized reconnection rate is shown in Figure 16. \begin{figure}[h!] \epsscale{0.7} \plotone{f16.eps} \caption{ Evolution of (a) the maximum of the $x$-component of the velocity ($v_{x}$), (b) the volume-averaged magnetic energy ($B^{2}$), and (c) the reconnection rate as a function of time for the 2D relativistic magnetic reconnection test using an ideal EoS with $\Gamma=4/3$ (solid lines) and a TM EoS (dashed lines). \label{f16}} \end{figure} The outflow gradually accelerates and nearly saturates by $t \sim 60$ with $v_{x} \sim 0.8c$. The figure clearly shows that the outflow speed in the TM EoS case is slightly faster than in the ideal EoS case. The magnetic energy gradually decreases with time. Dissipated magnetic energy results in an increase in both the thermal and kinetic energy. The increase in the kinetic energy accompanies the acceleration of the outflow (plasmoid). The normalized reconnection rate is defined as $\mathcal{R}=E_{z}^{*}/v_{A, in} B_{in} \sim v_{in}/v_{out}$, where $E_{z}^{*}$ is the electric field at the reconnection point and the upstream properties with subscript $in$ are evaluated at $(x,y)=(0,3)$ (Zenitani et al. 2010)\footnote{Zanotti \& Dumbser (2011) and Takahashi et al. (2011) have used a different definition of the reconnection rate.}. The reconnection rate saturates at about $t = 50$ in both cases. However, the TM EoS case has a larger reconnection rate than the ideal EoS case ($\mathcal{R} \sim 0.16$ in the ideal EoS case with $\Gamma=4/3$ and $\mathcal{R} \sim 0.17$ in the TM EoS case). Therefore the different EoSs lead to a quantitative difference in relativistic magnetic reconnection. \section{Summary and Conclusions} The role of the EoS in resistive relativistic MHD has been investigated using a newly developed resistive relativistic MHD code. A number of numerical tests in 1D and multi-dimensions have been performed to check the robustness and accuracy of the new code. All of the tests show the effectiveness of the new code in situations involving both small and large uniform conductivities. The 1D tests of the propagation of a large amplitude circularly-polarized Alfv\'{e}n wave show that the new resistive relativistic MHD code reproduces ideal relativistic MHD solutions when the conductivity $\sigma$ is high and that the code has $2^{nd}$ order accuracy. 
The 1D self-similar current sheet tests indicate that the analytical solution in the moderate conductivity regime is well described by the new code. In a simple MHD version of the Brio and Wu 1D shock-tube test, the code is stable using Strang's splitting technique even when the conductivity is high ($\sigma \sim 10^{6}$). In the 2D cylindrical explosion tests, at high conductivity $\sigma \ge 10^{3}$, the results recover the solution from ideal RMHD. The results of the Kelvin-Helmholtz instability test show that the growth rate of KHI is independent of the conductivity, except for very low conductivity ($\sigma \simeq 0$). However, magnetic field amplification via stretching of the main vortex developed by KHI strongly depends on the conductivity. The effect of conductivity on magnetic field amplification via KHI in 3D is a topic for future study. EoSs proposed by Mignone et al. (2005) and by Ryu et al. (2006), which closely approximate Synge's single-component perfect relativistic gas EoS, have been incorporated in the code. In the limit of non-relativistic and ultra-relativistic temperatures, the equivalent specific heat ratio associated with the EoSs that approximate Synge's EoS appropriately changes from the $5/3$ to the $4/3$ limits. The numerical tests studied the effect of the EoS on shocks, blast waves, the Kelvin-Helmholtz instability, and relativistic magnetic reconnection. The results provide a useful guide for future more specific studies of each topic. The tests confirm the general result that large temperature gradients cannot be properly described by an ideal EoS with a constant specific heat ratio. The results using a more realistic EoS, which we have studied here, show considerable dynamical differences. The 1D shock tube tests (Balsara Test 2) show that the results obtained from the TM EoS and RC EoS cases are considerably different from the constant $\Gamma$-law EoS case with $\Gamma = 5/3$. In the 2D Kelvin-Helmholtz instability tests, the non-linear behavior depended on the EoS, even though the growth rate of the KHI was almost the same. In reconnection tests, the approximate EoS cases resulted in a faster reconnection outflow speed and a larger reconnection rate than the ideal EoS case. We conclude that any studies of shocks, instabilities, and relativistic magnetic reconnection should use a realistic approximation to Synge's EoS. \acknowledgments Y.M. would like to thank H. Takahashi, S. Zenitani, K.-I. Nishikawa and P. E. Hardee for useful comments and discussion. This work has been supported by NSF awards AST-0908010 and AST-0908040, NASA award NNX09AD16G and the Taiwan National Science Council under the grant NSC 100-2112-M-007-022-MY3. Y.M. also acknowledges partial support from the NAOJ Visiting Scholar Program (Short-term). The simulations were performed on the Columbia Supercomputer at the NAS Division of the NASA Ames Research Center, the SR16000 at YITP in Kyoto University, and Nautilus and Kraken at the National Institute for Computational Sciences in the XSEDE project supported by National Science Foundation.
\section{\bf Introduction:} Two-dimensional string theory can be viewed as one-dimensional matter (time) coupled to two-dimensional gravity [\DNW,\POLONE,\BL]. Since the latter has a nonperturbative formulation in terms of a one-dimensional matrix model [\BKZ] (nonrelativistic fermions) we have an opportunity to study nonperturbative aspects of two-dimensional string theory. In particular, two-dimensional string theory has a black hole solution [\MSW,\SW,\SH] and one can begin to explore nonperturbative aspects of black holes, in particular the important question of the fate of the classical singularity. In any attempt to understand the emergence of a non-trivial spacetime in the matrix model one has to contend with the fact that the non-relativistic fermions are formulated in a flat spacetime [\SW,\GK,\MOORE]. However, the ``spacetime'' of the perturbative collective excitation is a half-plane, the boundary of which is associated with the classical turning point of the fermions [\SW,\GK,\DJ,\POLTWO,\MOORE]. On the other hand it is well-known that the graviton-dilaton system ($G_{\mu\nu}, \Phi$) for the two-dimensional black hole is equivalent to a metric $\widetilde G_{\mu\nu} = G_{\mu\nu} \exp(-2\Phi)$ which corresponds to a spacetime that is flat but has a boundary determined by the condition $\exp(-2\Phi) \ge 0$. It is this reasoning that makes it plausible that one may be able to describe a black hole spacetime in the semiclassical limit of the matrix model. Recently [\DMWBH] we have discussed a scalar (tachyon) field theory which in the semi-classical limit reduces to a scalar field coupled to the two-dimensional dilaton-black hole. Other works in this direction include [\MS,\SRD,\RUSSO,\YONEYA,\MV]. References [\MS] and [\MV] deal with the continuum formulation and relate the coset black holes to `Liouville' theory. References [\SRD], [\RUSSO] and [\YONEYA] deal directly with the matrix model. The field theory discussed in [\DMWBH] is obtained from the nonperturbative formulation of the $c=1$ matrix model in terms of the phase-space distribution operator $$\,{\widehat {\cal U}}(p,q,t) = \int_{-\infty}^\infty dx\, \psi^\dagger(q- {x\over 2}) e^{-ipx} \psi(q+ {x\over 2}).$$ Since the correlations of this operator are exactly calculable we are able to define an exact nonperturbative quantization of the scalar field. The main result of [\DMWBH] was that singularities at the position of the black hole singularity ($uv = \mu/2$ in Kruskal coordinates) that occur in the computation of the tachyon ground state do not survive quantization: {\sl the singularities are only a malady of the semiclassical expansion}. In the present paper we generalize the last result to show that, even for generic field configurations, singularities at the centre of the black hole appear only in semiclassical expansions and, like in [\DMWBH], the exact results are non-singular. We also explore in greater detail the semiclassical expansion applied to the tachyon field to show that perturbatively it satisfies a closed non-linear differential equation (the linear part being the equation of motion for a free field coupled to the dilaton-black hole). The picture that emerges is the following. 
In some regions of space-time where the semiclassical expansion is valid, the classical physics is described by the above equation of motion for the tachyon; in those regions where the semiclassical expansion develops singularities, clearly the differential equation as well as the attendant notion of spacetime backgrounds are to be discarded in favour of the exact quantum theory that is defined in terms of the fermion theory. We describe the working rules of the quantum theory and explain some unusual features of the theory associated with non-trivial commutation rules of the ``tachyon field''. We also demystify the ``hyperbolic transform'' by exhibiting some of its physically interesting properties and showing that it is the unique transform which satisfies these properties. Finally we discuss in brief the Euclidean black hole obtained from an analytically continued fermion theory. \section{\bf Quantized Tachyon in Black Hole} Let us first briefly review some salient points of [\DMWBH]. The basic construct is the fermion bilinear ${\widehat \phi}(p,q,t)$, which is defined by a ``hyperbolic transform'' of the quantum phase space density $\,{\widehat {\cal U}}(p,q,t)$: $$ {\widehat \phi}(p,q,t) = \int dp'dq'\, K(p,q| p',q') \,{\widehat {\cal U}}(p',q',t) \eqn\twoone$$ where $$ K(p,q| p',q') \equiv | (p-p')^2 - (q-q')^2 |^{-1/2} \eqn\twotwo$$ Equation \twoone\ can be regarded as a relation between Heisenberg operators, where $${\widehat \phi}(p,q,t)= e^{iHt} {\widehat \phi}(p,q,0) e^{-iHt},\; \,{\widehat {\cal U}}(p,q,t) = e^{iHt} \,{\widehat {\cal U}}(p,q,0) e^{-iHt} \eqn\twothree$$ where $$H = \int {dp dq\over 2\pi} h(p,q) \,{\widehat {\cal U}}(p,q),\qquad h(p,q) = {1\over 2}(p^2-q^2) \eqn\twofour$$ Clearly equation \twoone\ is also valid for expectation values $$ \langle \psi|{\widehat \phi}(p,q,t)|\psi \rangle = \int dp'dq'\, K(p,q| p',q') \langle \psi|\,{\widehat {\cal U}}(p',q',t) | \psi \rangle \eqn\twofive$$ where $| \psi \rangle $ is any state in the Fermi theory. For the Heisenberg operator $\,{\widehat {\cal U}}(p,q,t)$ we have the equation of motion $$ (\partial_t + p\partial_q + q\partial_p)\,{\widehat {\cal U}}(p,q,t) = 0 \eqn\twosix $$ Used in the definition \twoone, together with \twotwo, this leads to $$ (\partial_t + p\partial_q + q\partial_p){\widehat \phi}(p,q,t) = 0 \eqn\twoseven $$ The last equation implies that if we define the variables $$ u = e^{-t} (p +q)/2,\; v= e^t (p-q)/2 \eqn\twoeight $$ or equivalently $$ p = ue^t + ve^{-t},\; q= ue^t - ve^{-t} \eqn\twonine $$ then $$ \partial_t {\widehat \phi}(ue^t + ve^{-t}, ue^t - ve^{-t},t) = 0 \eqn\twoten$$ This means that we can define $$\eqalign{ {\widehat T}(u,v) \equiv & {\widehat \phi}(ue^t + ve^{-t}, ue^t - ve^{-t},t) \cr =&\int du' dv' {\widetilde K}(u,v | u',v') \,{\widehat {\cal U}}(u'e^t+v'e^{-t},u'e^t-v'e^{-t},t), \cr} \eqn\twooneone $$ which is actually independent of $t$. 
Here $${\widetilde K}(u,v| u',v') \equiv\, |(u - u') ( v - v')|^{-1/2} \eqn\twooneonea $$ In [\DMWBH] we observed that if one considers states $|\psi \rangle $ in the fermion theory such that $ \langle \psi | \,{\widehat {\cal U}}(p,q,t) | \psi \rangle $ differs from $ \langle \psi_0 | \,{\widehat {\cal U}}(p,q,t) | \psi_0 \rangle $ at most in a small neighbourhood of the fermi surface $p^2 - q^2 = 2\mu$, then $$\eqalign{ \delta T(u,v) \equiv & \langle \psi | \delta {\widehat T}(u,v) | \psi \rangle = \langle \psi |{\widehat T}(u,v) | \psi \rangle - T_0(u,v) \cr \delta {\widehat T}(u,v) \equiv & {\widehat T}(u,v) - T_0(u,v) \cr} \eqn\twoonetwo$$ satisfies $$ [4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1] \delta T(u,v) = 0 + o({\delta E \over \mu }) \eqn\twoonethree$$ In the above, $|\psi_0 \rangle$ refers to the fermion ground state, $T_0(u,v)= \langle \psi_0 | {\widehat T}(u,v)|\psi_0 \rangle$ and $\delta E$ is the maximum spread of energy relative to the fermi surface in the support of $\langle \psi | \,{\widehat {\cal U}}(p,q,t) | \psi \rangle $. To this order, therefore, the construct \twoonetwo\ provides us with classical solutions for a free scalar field coupled to the two-dimensional black hole. We shall see in the next section that the higher order terms in $(\delta E / \mu)$ correspond to non-linear terms in $\delta T(u,v)$. Now, as we emphasized in [\DMWBH], expression \twoonetwo\ does more than just provide perturbative solutions to \twoonethree. $\delta T(u,v)$ is in fact exactly computable in the fermion theory (in principle for any state $| \psi \rangle$). We will see that typically the exact expression has a bad perturbative expansion in $(\delta E/ \mu)$ and the divergences in solutions of \twoonethree\ at the ``black hole singularity'' $uv = \mu/2$ are a result of such bad expansions. Indeed the exactly computed $\delta T(u,v)$ is free of singularities. In [\DMWBH] we have already explicitly computed one such example. We will thus take the attitude that the fermion theory, through equations such as \twoonetwo, \underbar{defines} for us the quantization of a scalar field coupled to the two-dimensional black hole. Such a definition is of course automatically nonperturbative since the fermion theory is so. In the next few sections we will explore in detail the semiclassical physics by considering in the fermion theory small fluctuations near the fermi surface and show how to use the nonperturbative formalism of the fermion theory to extract the nonperturbative behaviour. \section{\bf Properties of the Hyperbolic Transform} Before launching into properties of the ``tachyon field'' \twooneone\ it is useful to understand its definition in some more detail. In this section we shall mention some remarkable properties of the kernel \twotwo\ that defines for us the ``hyperbolic transform'' \twoone. The basic properties of \twotwo\ (equivalently, of \twooneonea) are: \hfill\break \noindent (i) Lorentz covariance: $K(P, Q | P', Q') = K(p,q| p',q')$, where $P = p\cosh \theta + q\sinh \theta, Q = p\sinh \theta + q \cosh \theta$ and similarly for $P',Q'$. 
\hfill\break \noindent (ii) Translational invariance: $K(p,q| p',q') = f(p-p', q-q')$ \hfill\break \noindent (iii) Differential equation: $$\eqalign{ \,&[4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1] \widetilde K(u,v | u',v') =o[u'v'- \mu/2] \cr \,&\widetilde K(u,v | u',v') = K({u+v\over 2},{u-v\over 2}|{u'+v'\over 2}, {u'-v'\over 2}) \cr} \eqn\aone$$ The precise form of the last equation is $$\eqalign{ [4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1] & \widetilde K(u,v | u'v') \cr \,&= (u'v'-\mu/2) \partial_u\partial_v K(u,v|u',v')\cr} \eqn\atwo$$ As we shall see, it is equation \aone\ (or \atwo) that is responsible for the black hole interpretation of the low energy physics of the tachyon. This is because if one considers a state $|\psi \rangle$ such that the support of $\langle \psi|\,{\widehat {\cal U}}(p',q',t) | \psi \rangle$ is near the fermi surface, then $\langle \psi |{\widehat T}(u,v) | \psi \rangle$ vanishes to leading order according to \twooneone\ and \aone. Clearly, property (iii) is very desirable from the point of view of black hole physics. Property (i) implies that the equations of motion are the same for $\phi(p,q,t)$ and $\,{\cal U}(p,q,t)$; this implies that the reduced variables $u,v$ used in both cases have the same physical interpretation. Property (ii) is directly related to the fact that the hyperbolic transform becomes local in Fourier space; in other words, the (double) Fourier transforms of the $\phi(p,q,t)$'s (the $\widetilde \phi(\alpha, \beta, t)$'s) are basically rescalings of the $W(\alpha,\beta,t)$'s [\DMW, \DMWBH] which ensure that the $\widetilde \phi$'s have an algebra that has a classical limit. The latter property implies that the classical action written in terms of $\phi(p,q,t)$ has an $\hbar\to 0$ limit which, once again, is rather crucial. We now prove that \twotwo\ is the {\bf unique kernel} satisfying all three properties mentioned above. \noindent Proof: \hfill\break \noindent Properties (i) and (ii) imply that $K$ is some function of only the combination $(p-p')^2 - (q-q')^2$: $$ K(p,q | p', q') = g((p-p')^2- (q-q')^2) \longleftrightarrow \widetilde K(u,v | u',v') = f(x),\; x= (u-u')(v-v') \eqn\athree $$ Property (iii) states that if we choose $u' = q_0 e^\theta/2, v'= -q_0 e^{-\theta}/2, q_0\equiv \sqrt{-2\mu}$ so that $u'v' = \mu/2$ (note that $\mu$, the fermi energy, is negative in our convention), then $$[4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1] \widetilde K(u,v | q_0e^\theta/2, -q_0e^{-\theta}/2) = 0 \eqn\afour$$ For this choice of $u',v'$ we have $x = uv +\mu/2 + [q_0/2] (ue^{-\theta} - ve^\theta)$. Using \athree\ for $\widetilde K$, and introducing the notation $y = uv - \mu/2$ , we get from \afour $$ 2xf'(x) + f(x) + y[ 6f'(x) + 4xf''(x)]= 0 \eqn\afive$$ This equation must be identically satisfied for all $y$, which implies that both the $y$-independent term and the coefficient of $y$ must vanish in \afive. Curiously, the first condition implies the second, so we need not consider it separately. 
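To make the last remark explicit (a one-line check which we spell out for completeness): the $y$-independent condition gives $f'(x)=-f(x)/(2x)$ and hence $f''(x)=3f(x)/(4x^2)$, so that $$ 6f'(x)+4xf''(x) = -{3f(x)\over x} + {3f(x)\over x} = 0, $$ i.e. the coefficient of $y$ in \afive\ vanishes automatically. 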
Thus we get $$ 2xf'(x) + f(x) = 0 \eqn\asix$$ The above equation is solved by $$ f(x) = \hbox{constant}\; |x|^{-1/2} \eqn\aseven$$ which proves that (modulo an overall constant) $$ K(p,q|p',q') = |(p-p')^2 -(q-q')^2|^{-1/2} \longleftrightarrow \widetilde K(u,v| u',v') = |(u-u')(v-v')|^{-1/2}$$ \section{\bf Non-linear Differential Equation for the Tachyon} In this section we discuss the semiclassical physics of the tachyon ${\widehat T}(u,v)$ in detail and derive a closed non-linear differential equation for $\delta T(u,v) = \langle \psi | {\widehat T}(u,v) | \psi \rangle - T_0(u,v)$ in a semiclassical expansion. Let us consider states $| \psi \rangle $ which satisfy $$ \langle \psi | \,{\widehat {\cal U}}(p,q,t) | \psi \rangle = \vartheta ([ p_+(q,t)- p] [p - p_-(q,t)]) + o(\hbar), \eqn\threeone $$ where $\vartheta(x) = 1$ if $x>0$ and $=0$ otherwise. This corresponds to the ``quadratic profile'' ansatz [\POLTWO,\DMWClass]. We recall that in the $\hbar \to 0$ limit, the classical $\,{\cal U}(p,q,t)$ (expectation value in a state) satisfies $\,{\cal U}^2(p,q,t) = \,{\cal U}(p,q,t)$ which implies that $\,{\cal U}(p,q,t)$ is the characteristic function for some region; the quadratic profile ansatz assumes that there is one connected region with boundary given by $ (p- p_+(q,t) ) ( p - p_- (q,t)) = 0. $ As is well-known, in the ground state $| \psi_0 \rangle$, the boundary is the fermi surface itself: $ p^2 - q^2 - 2\mu=0,$ which corresponds to $$ p_\pm^0 (q,t) = p_\pm^0 (q) = \pm \sqrt{q^2 + 2\mu}\;\vartheta (q^2 + 2\mu). \eqn\threetwo $$ We shall use the notation $$ p_\pm(q,t ) \equiv p_\pm^0 (q) + \eta_\pm(q,t) \eqn\threethree$$ Let us compute $\delta T(u,v)$ in the state \threeone (ignoring the $o(\hbar)$ terms for the moment), using \twoonetwo. We get $$\eqalign{ \delta T(u,v)=& \int_{-\infty}^\infty dq\int^{\eta_+(q,t)}_0 { dp' \over |[2ue^t-(p'+ p^0_+(q)+q)][2ve^{-t}- (p'+p^0_+(q)-q)]|^{1/2} } \cr -& \int_{-\infty}^\infty dq\int^{\eta_-(q,t)}_0 { dp' \over |[2ue^t-(p'+ p^0_-(q)+q)][2ve^{-t}- (p'+p^0_-(q)-q)]|^{1/2} } \cr } \eqn\threefour$$ Let us now make some further assumptions about the state $|\psi\rangle$, namely that the fluctuations $\eta_\pm(q,t)$ are non-zero only for $q < -q_0 \equiv - \sqrt{-2\mu}$ and that in this region $|\eta_\pm(q,t)| << |p^0_\pm(q)|$. In other words, we are considering ``small'' fluctuations near the left branch of the fermi surface in the classically allowed region. Under these assumptions we can expand the integrand in a Taylor series in $p'$ around $p'=0$. The $p'$-integrals now are easy to do, giving powers of $\eta_\pm(q)$. 
The $q$-integrals that remain are now effectively between $-\infty$ and $-q_0$ [$q_0= \sqrt{-2\mu}$], so one can use a different integration variable $\tau$, the ``time of flight'', given by $$ q = -q_0 \cosh \tau, \; p^0_\pm (q) = \pm q_0 \sinh \tau \eqn\threefoura $$ If we also define the rescaled functions ${\bar \eta}_\pm(\tau, t) = | p_\pm^0 (q) | \eta_\pm (q,t) $, we ultimately get $$\eqalign{ \delta T&(u,v) ={1\over 2} \int_0^\infty d\tau \Big[k_+(u,v|\tau,t) {\bar \eta}_+(\tau,t) - k_-(u,v | \tau, t){\bar \eta}_-(\tau,t)\Big] \cr - {1\over 8}& \int_0^\infty {d\tau\over p^0_+(q)}\Big[(e^{-t}\partial_u +e^t \partial_v)k_+(u,v|\tau,t) {\bar \eta}_+(\tau,t)^2 - (e^{-t}\partial_u+e^t \partial_v)k_-(u,v | \tau, t){\bar \eta}_-(\tau,t)^2 \Big] +o({\bar \eta}_\pm^3)\cr} \eqn\threefive$$ where $$k_\pm(u,v| \tau,t) \equiv |(ue^t + {q_0\over 2} e^{\mp \tau})(ve^{-t}- {q_0\over 2}e^{\pm \tau })|^{-1/2}, \quad q_0 \equiv \sqrt{-2\mu} \eqn\threefivea$$ {\bf Relations between ${\bar \eta}_\pm$ and $\delta T(u,v)$:} Equation \threefive\ is important in that it builds a correspondence between the semiclassical quantities of the fermion theory and those of the $\delta T(u,v)$-theory. To understand it better, let us first choose a different coordinatization for the $u,v$-space. Let us define\foot{ We use boldface letters so as to distinguish ${\bf t}$ from the time $t$ of the fermion theory.} $$ {\bf x} = {1\over 2} \ln |{uv\over \mu/2}|, \quad {\bf t} = {1\over 2}\ln |v/u| \eqn\threefiveb$$ This is not a one-to-one map. Let us consider for the moment the quadrant of the $u,v$-space where $u>0, v>0$. In that case we can write down the inverse maps as follows: $$ u = {q_0\over 2}e^{{\bf x} - {\bf t}},\quad v={q_0\over 2}e^{{\bf x} +{\bf t}} \eqn\threefivec$$ Now recall that by definition of $\delta T(u,v)$ ({\it cf.} the remark about the $t$-independence of the right hand side of \twooneone), if ${\bar \eta}_\pm(\tau, t)$ satisfy their equations of motion, which can be derived by tracing their definition back to $\,{\cal U}(p,q,t)$ and which read as $$ (\partial_t - \partial_\tau){\bar \eta}_\pm= {1\over 2} \partial_\tau \big({{\bar \eta}_\pm^2 \over [p^0_+]^2} \big) \eqn\threefived$$ then the right hand side of \threefive\ is actually $t$-independent. This means that we can choose $t$ to be anything we like. It is most useful to choose $$ t = {\bf t} \eqn\threefivee$$ on the right hand side of \threefive. This equation now reads, to leading order, as $$ \delta T({\bf x},{\bf t}) = {1\over 2} \int_0^\infty d\tau [\widetilde k_+({\bf x},\tau) {\bar \eta}_+(\tau, {\bf t}) -\widetilde k_-({\bf x},\tau) {\bar \eta}_-(\tau, {\bf t})] + o({\bar \eta}_\pm^2) \eqn\threefivef $$ where $$ \widetilde k_\pm ({\bf x},\tau) \equiv {2\over q_0} |(e^{\bf x} + e^{\mp\tau}) ( e^{\bf x} - e^{\pm\tau})|^{-1/2} \eqn\threefiveg $$ Note that for large ${\bf x}$, $${q_0\over 2}\widetilde k_-({\bf x},\tau)\exp[{\bf x}] = |1 + \exp(\tau - {\bf x})|^{-1/2}+ o(e^{-{\bf x}})$$ The first term is similar to a low-temperature Fermi-Dirac distribution (the fact that the power is $-1/2$ instead of $-1$ does not materially affect the arguments). In fact, one can show that for very large ${\bf x}$ it behaves like $ \vartheta({\bf x}-\tau)$ and corrections to it are like increasing powers of $\partial_{\bf x}$ on the $\vartheta$-function. Similar expansions are also available for $k_+({\bf x},\tau)$. 
The precise statements for these $\partial_{\bf x}$-expansions are the following: $${\cal T}({\bf x},{\bf t}) = {1\over 2} \{{\cal D}_+{\bar \eta}_+ - {\cal D}_-{\bar \eta}_- \} + o(e^{-{\bf x}} {\bar \eta}_\pm) + o({\bar \eta}_\pm^2) \eqn\threefiveh$$ where $$ {\cal T}({\bf x},{\bf t}) \equiv |uv|^{1/2} \delta T(u,v) \eqn\threefiveha$$ $$ {\cal D}_\pm \equiv I_\pm (\partial_{\bf x})+ I_\pm (1/2 - \partial_{\bf x}) \eqn\threefivehaa$$ $$ I_\pm (\alpha) = \alpha^{-1} \;_2F_1({1\over 2}, \alpha; \alpha +1; \mp 1) $$ where $\;_2F_1$ is the standard Hypergeometric function [\GRADSHTEYN]. Using its properties one can write down an expansion for ${\cal D}_\pm$ in $\partial_{\bf x}$. The expansion begins with $\partial_{\bf x}^{-1}$. Defining $$ \eta_\pm = \pi_\eta \pm \partial_\tau \eta, \eqn\threefiveha$$ where $\eta$ is the ``tachyon'' field that is associated with the standard $c=1$ matrix model and $\pi_\eta$ is its conjugate momentum, we can write down a derivative expansion for ${\cal T}({\bf x},{\bf t})$: $$ {\cal T}({\bf x},{\bf t}) = \eta({\bf x},{\bf t}) + o(\partial_{\bf x}\eta, \pi_\eta). \eqn\threefivehb$$ The identification of the ``black hole tachyon'' field with the standard $c=1$ tachyon field in the asymptotic region (${\bf x} \to \infty$) is rather remarkable. As a result of this, $n$-point functions of ${\cal T}({\bf x},{\bf t})$ are the same as those of the $c=1$ tachyon at extremely low energy. Finally, relation \threefiveh\ can be inverted to give ${\bar \eta}_\pm$ in terms of ${\cal T}$: $$ {\bar \eta}_\pm = ({\cal D}_\pm \partial_{\bf x})^{-1} \partial_\pm {\cal T}({\bf x},{\bf t})+ o(e^{-{\bf x}} {\cal T}) + o({\cal T}^2), \quad \partial_\pm = \partial_{\bf t} \pm \partial_{\bf x} \eqn\threefivei$$ We will use this relation below to obtain a nonlinear differential equation for ${\cal T}({\bf x},{\bf t})$. We wish to emphasize that the above analysis can be repeated in other coordinates which are valid all through the Kruskal diagram (for instance in the light cone coordinates themselves). However, the formulae look more complicated. {\bf Differential Equation:} Let us now go back to the other consequences of Eq. \threefive. Eq. \afour\ of Sec. 3 ensures that $$ [4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1] k_\pm (u,v | \tau, t) =0 $$ Using this, and applying the above differential operator to \threefive\ we get $$\eqalign {[4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1]& \delta T(u,v) \cr ={1\over 2} &\partial_u\partial_v \int_0^\infty d\tau[k_+ {\bar \eta}_+^2 + k_-{\bar \eta}_-^2] + o({\bar \eta}_\pm^3) \cr} \eqn\threesix$$ Note that the linear term dropped out because of the special properties of the kernel. 
Using the ${\bf x},{\bf t}$-coordinatization, \threesix\ can be written as $$ D_{{\bf x}, {\bf t}} {\cal T}({\bf x}, {\bf t}) = - {e^{-2{\bf x}}\over |\mu|} [ e^{ {\bf x}/2} \partial_{\bf x}\{ e^{-{\bf x}/2}({\cal D}_+{\bar \eta}_+^2 + {\cal D}_-{\bar \eta}_-^2 )\}] + o(e^{-3{\bf x}}{\bar \eta}_\pm^2) + o({\bar \eta}_\pm^3) \eqn\threeseven$$ where $$\eqalign{ D_{{\bf x},{\bf t}} =& e^{\bf x} [4(uv -\mu/2) \del_u \del_v + 2 (u\del_u + v\del_v) + 1] e^{-{\bf x}} \cr =& (1+ e^{-2{\bf x}})(\partial_{\bf x}^2 - \partial_{\bf t}^2)+ e^{-2{\bf x}}(2\partial_{\bf x}+1)\cr} \eqn\threeeight $$ Finally, by using \threefivei\ to convert the ${\bar \eta}_\pm$ back into ${\cal T}$, we get the following closed differential equation in ${\cal T}$ up to quadratic order: $$\eqalign{D_{{\bf x}, {\bf t}} {\cal T}({\bf x}, {\bf t}) = - {e^{-2{\bf x}}\over |\mu|} [ e^{ {\bf x}/2} \partial_{\bf x}\{ e^{-{\bf x}/2}({\cal D}_+[({\cal D}_+\partial_{\bf x})^{-1}\partial_+{\cal T} ]^2 +& {\cal D}_-[({\cal D}_-\partial_{\bf x})^{-1}\partial_-{\cal T} ]^2 )\}]\cr \;&+ o({\cal T}^3) + o(e^{-3{\bf x}}{\cal T}^2)\cr} \eqn\threenine$$ \section{\bf Exact Quantum Theory Does Not See the Black Hole Singularity} In this section we will analyze the nature of singularities that occur in $\delta T(u,v)$ at $uv = \mu/2$ (which is the position of the curvature singularity of the black hole metric $ds^2= dudv/( uv - \mu/2)$). We will see that these singularities occur as a result of making badly divergent semiclassical expansions and they are not present in $ \delta T(u,v) $ when calculated exactly. Let us consider a state $| \psi \rangle $ in the fermion theory which, like in the previous section, represents fluctuations in the neighbourhood of the left branch of the fermi surface $p^2 - q^2 = 2\mu$ (generalizations are obvious). In this region the fermion phase space can be coordinatized by $(p,q) = (R \sinh \theta, -R\cosh \theta), R>0.$ As explained earlier, the support of $\langle \psi | \,{\widehat {\cal U}}(p,q,t) | \psi \rangle $ in the limit $\hbar \to 0$ defines a region ${\cal R}(t)$ occupied by the fermi fluid at time $t$. For the ground state $| \psi_0 \rangle$ this region is given by ${\cal R}_0 = \{ \infty > R \ge R_0\equiv\sqrt{-2\mu}, \infty > \theta > -\infty \}$ (plus its mirror image on the right half of the phase plane). For simplicity of calculation, let us choose for the moment a state $| \psi \rangle $ so that the region ${\cal R}(t)$ has a particularly simple geometry. To be specific, we take ${\cal R}(t=0)$ to be obtained from ${\cal R}_0$ by adding a region $\delta {\cal R} = \{ R_0 \ge R \ge R_1, \theta_1 \ge \theta \ge \theta_2 \}$ and subtracting a region $\widetilde{\delta{\cal R}} = \{ \widetilde R_1 \ge R \ge R_0, \widetilde \theta_1 \ge \theta \ge \widetilde \theta_2 \}$. We will call $\delta {\cal R}$ the ``blip'' and $\widetilde {\delta {\cal R}}$ the ``antiblip''; basically the state $| \psi \rangle $ is created from the ground state $| \psi_0 \rangle $ by removing fermions from the region $\widetilde {\delta {\cal R}} \subset {\cal R}_0$ and placing them in the region $\delta {\cal R}$ just outside the filled fermi sea. 
Fermion number conservation is achieved by choosing the areas of $\delta {\cal R}$ and $\widetilde {\delta {\cal R}}$ to be the same, which is equivalent to the condition that $$ (\theta_1 - \theta_2) (R_1^2 - R_0^2) = (\widetilde{\theta_1} - \widetilde {\theta_2}) (R_0^2 - \widetilde R_1^2) $$ The region ${\cal R}(t)$ at non-zero times $t$ is simply obtained by shifting the $\theta$-boundaries of both the blip and the antiblip by $t$. Using this, one can easily write down the expression for $\delta T(u,v)$ for this state: $$ \delta T(u,v) = \delta T_b(u,v) + {\widetilde\delta T}_b (u,v) + o(\hbar) \eqn\fourone $$ where $$ \delta T_b (u,v) = {1\over 2} \int^{R_0}_{R_1} RdR \int^{\theta_2}_{\theta_1} d\theta | ( u + Re^{-\theta } ) (v - Re^\theta )|^{-1/2} \eqn\fourtwo $$ represents the contribution of the `blip'. The contribution of the `antiblip', $\widetilde{\delta T}_b(u,v),$ is a similar expression involving the tilde variables. It is important to note that there are $\hbar$-corrections to \fourone, as $\,{\cal U}(p,q,t)$ is actually a characteristic function plus $o(\hbar)$ terms. A remark is in order about two seemingly different expansions that we are making in this paper. One is in $\hbar$, and the other is in $|\delta E/\mu|$. Ultimately, as it turns out, both are expansions in ${g_{\rm str}}$. For the moment, in \fourone\ we have made an explicit $\hbar$-expansion, and no $|\delta E/\mu|$ expansion yet. What we will show in the present example is that it is this latter expansion that is badly divergent and results in increasingly singular behaviour at $uv = \mu/2$, and if one treats expressions like \fourtwo\ without a $|\delta E/\mu|$ expansion, then $\delta T(u,v)$ does not have any singularities at $uv = \mu/2$. More generally, we will argue that singularities are invariably absent whenever one performs the $|\delta E/\mu|$ resummation; the $\hbar$-corrections like those present in \fourone\ do not affect the conclusions {\it vis-a-vis} singularities. Let us analyze the singularities of \fourtwo\ (treatment of $\widetilde{\delta T}_b (u,v)$ is similar). Clearly singularities can arise only when the expression inside the square root vanishes. It is also clear that a linear zero inside the square root is not a singularity (recall that $\int dx\, x^{-1/2}$ is not singular); for a singularity we must have a quadratic zero. In other words, both factors must vanish. This can happen only if $u<0, v>0$. Let us choose the parametrization $u = -r e^{-\chi}/2, v= re^\chi/2$. We get $$\delta T_b(u,v) ={1\over 2} \int^{R_0}_{R_1} RdR \int^{\theta_2}_{\theta_1} d\theta | (R-r)^2 - 4Rr\sinh^2({\theta - \chi \over 2})|^{-1/2} \eqn\fourthree$$ If we look at any one of the integrals separately, over $\theta$ or $R$, we see a logarithmic singularity at $R=r,\theta= \chi$ provided this point is included in the range of integration. Let us do the $\theta$ integral first. If $r$ is outside the range $[R_1, R_0]$ we do not have any singularities. If $r\in (R_1, R_0)$, and $\chi\in (\phi_1, \phi_2)$, it is easy to see that for $R \approx r$ the $\theta$ integral behaves as $\ln | R - r|$. Let us assume that the range $[R_1, R_0]$ is small (compared to $R_0$, say, which means that the blip consists of small energy fluctuations compared to the fermi energy) so that $R\approx r$ throughout the range of integration (this is only a simplifying assumption and the conclusions do not depend on it). 
We get (for $\phi_2 > 1/2\ln(-v/u) > \phi_1$) $$\eqalign{ \delta T_b(u,v) \sim& \int_{R_1}^{R_0} RdR \ln |R -r| \cr =& (R_0 - r)\ln |R_0 - r| - (R_1 - r) \ln |R_1 - r| - (R_0-R_1)\cr \approx & (-\mu/2)^{-1/2}[ (uv - \mu/2)\ln| uv - \mu/2| - (uv - \mu/2 - \Delta/2) \ln | uv - \mu/2 - \Delta/2| ]\cr} \eqn\fourfour$$ In the last line, we have put in the values $R_0 = \sqrt{-2\mu}, r = \sqrt{ -4uv}$ and defined $R_1 \equiv \sqrt{2(-\mu - \Delta)}, \Delta> 0$ ($\Delta $ thus measures the maximum energy fluctuation of the blip from the fermi surface). Since $R_0 \approx r \approx R_1$ we have used $R_{0,1} - r \approx (R_{0,1}^2 - r^2)/(2R_0)$ and also $R_0 - R_1 \approx (R_0^2 - R_1^2)/(2R_0)$. It is easy to see that $ \delta T_b(u,v)$ has no singularities\foot{ This statement is true all over $u,v$-space including the horizon. }. However, it is also easy to see that it develops singularities as soon as one attempts a semiclassical expansion in $\Delta/\mu$; to be precise one gets $$\delta T_b(u,v) \sim |\Delta/\mu| \ln| uv - \mu/2| + |\Delta/\mu|^2 (uv - \mu/2)^{-1} + \cdots \eqn\fourfive$$ where once again $1/2 \ln (-v/u) \in (\phi_1, \phi_2)$ (for $1/2 \ln (-v/u)$ outside this range there are no singularities). Thus, we see that the tachyon solution \fourtwo\ develops a singularity at $uv= \mu/2$ at the level of a $\Delta/\mu$ expansion, though the full solution does not. What does the above example teach us about the general scenario? In general, the tachyon solution would be given by expressions like $$ \delta T(u,v) ={1\over 2}\int RdR \int d\theta f(R,\theta) | 4 ( u + Re^{-\theta}) ( v - R e^\theta)|^{-1/2} \eqn\foursix$$ where $f(R,\theta) \equiv \delta\,{\cal U}(R\sinh\theta, -R\cosh\theta)$ and the only thing that we have assumed is that the support of $\delta\,{\cal U} \equiv \langle \psi |\,{\widehat {\cal U}} | \psi \rangle - \langle \psi_0| \,{\widehat {\cal U}} | \psi_0 \rangle $ is confined to the region $p^2 < q^2$ (the generalization to the other region is straightforward). Note that the expression \fourtwo\ can be recovered by putting $$\delta\,{\cal U} = \vartheta [(R - R_0)(R_1 - R)] \vartheta [(\theta - \theta_1) ( \theta_2 - \theta)] \eqn\fourseven $$ where $\vartheta(x)= 1$ if $x>0$ and $0$ otherwise. The integral \foursix\ consists of contributions from $R>0$ (left half of the phase plane) and from $R<0$. Let us look at the $R>0$ part first. Once again the singularities can only come from the $u<0, v>0$ region of the $u,v$ space. Using the same parametrization as in \fourthree\ we get $$\delta T(u,v) ={1\over 2} \int RdR \int d\theta f(R,\theta) | (R-r)^2 - 4Rr\sinh^2({\theta - \chi\over 2})|^{-1/2} \eqn\foureight$$ The basic lesson of the previous example is that even though each one-dimensional integral, taken separately over $R$ or $\theta$, has a logarithmic singularity if the point $R=r, \theta=\chi$ is included in the support of $f(R, \theta)$, the second integral smooths out that singularity. (In the previous example we did the $\theta$ integral first to find a logarithmic singularity and the $R$-integration smoothed that out; it could as easily have been done the other way around.) In fact, the issue is that of a {\bf two-dimensional} integration of the sort $\int dxdy\, f(x,y)| x^2 - y^2 |^{-1/2}.$ For a smooth function $f(x,y)$ the possible singularity at $x=y=0$ (a linear zero in the denominator) is washed away by a stronger (quadratic) zero in the integration measure. A singularity can be sustained only if $f(x,y)$ has a pole or a stronger singularity at $x=y=0$. 
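A quick way to see this (our check, in light-cone-type variables): setting $x_\pm = x \pm y$, so that $dx\, dy = {1\over 2} dx_+ dx_-$ and $|x^2 - y^2|^{-1/2} = |x_+ x_-|^{-1/2}$, the contribution of a small square $|x_\pm| \le \epsilon$ around the origin is, for bounded $f$, at most $$ {1\over 2}\, \sup|f|\, \Big( \int_{-\epsilon}^{\epsilon} ds\, |s|^{-1/2} \Big)^2 = 8\,\epsilon\, \sup|f|, $$ which vanishes as $\epsilon \to 0$; only a divergence of $f$ itself at the origin can therefore produce a singularity. 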
However, in our case the quantum phase space density $\,{\cal U}(p,q,t)$ cannot have such singularities. The reason is that the fermion field theory states that we are concerned with are $W_\infty$-rotations of the ground state, and therefore the corresponding $\,{\cal U}(p,q,t)$ is also a $W_\infty$-rotation of the ground state density $\,{\cal U}_0(p,q)$. Since the latter is a smooth distribution and a unitary rotation cannot induce singularities, we see that $\,{\cal U}(p,q,t)$ must be non-singular. The tachyon field configurations constructed by integrals such as \foureight\ are therefore non-singular too. Another way to think of this is to use a two-step argument: the $\hbar\to 0$ limit of $\,{\cal U}(p,q,t)$ is obtainable by a classical area-preserving diffeomorphism (element of $w_\infty$) and is certainly nonsingular, being given by the characteristic function of a region equal to the $w_\infty$-transformed Fermi sea. This implies that the tachyon field constructed from it is already non-singular; incorporation of $\hbar$-effects further smooths these distributions. We conclude therefore that the exact quantum theory does not permit any singularities in the tachyon field configuration. \section{\bf Some Novel Features of the Quantum Field Theory of ${\widehat T}(u,v)$} So far we have discussed the semiclassical physics of $\delta T(u,v)$ including its low energy differential equation, and in the last section we have seen how to compute the expectation values of $\delta {\widehat T}(u,v)$ beyond the semiclassical expansion using the fermion theory. In this section we discuss in more detail some novel features of the two-dimensional quantum field theory of $\delta {\widehat T}(u,v)$ defined by the underlying fermion theory. Let us first discuss how $\delta {\widehat T}(u,v) \equiv {\widehat T}(u,v) - T_0(u,v)$, which is {\it a priori} defined in terms of an on-shell three-dimensional field $\,{\widehat {\cal U}}(p,q,t)$, defines a Heisenberg operator in a {\sl two-dimensional} field theory. Basically we use the fact that the right hand side of \twooneone\ is actually independent of $t$ to put $t$ equal to the ``time'' of the $u,v$-space. To fix ideas, let us consider the coordinatization \threefiveb-\threefivec\ of the $u,v$-space where ${\bf x},{\bf t}$ correspond to space and time. This of course limits the discussion to the quadrant $\{u>0, v>0\}$ but the discussion holds just as well for more global choices of ``time'' coordinates also. Now, in these coordinates, writing $\delta {\widehat T}(u({\bf x},{\bf t}), v({\bf x},{\bf t}))$ as $\delta {\widehat T}({\bf x},{\bf t})$ by an abuse of notation, we have $$\eqalign{ \delta {\widehat T}({\bf x},{\bf t})& \; \cr = \int dudv&\; | (e^{\bf x}-ue^{\bf t})(e^{\bf x}-ve^{-{\bf t}})|^{-1/2}\delta \,{\widehat {\cal U}}(ue^t + ve^{-t}, ue^t - ve^{-t}, t) \cr = \int dudv&\; | (e^{\bf x}-u)(e^{\bf x}-v)|^{-1/2}\delta \,{\widehat {\cal U}}(ue^{t-{\bf t}}+ ve^{{\bf t}-t}, ue^{t-{\bf t}}-ve^{{\bf t}-t}, t) \cr} \eqn\sixone$$ Here $$ \delta \,{\widehat {\cal U}}(p,q,t) \equiv \,{\widehat {\cal U}}(p,q,t) - \langle \psi_0 | \,{\widehat {\cal U}}(p,q,t) | \psi_0 \rangle $$ Since these expressions are actually independent of $t$, we can choose the ``gauge'' $$ t = {\bf t} \eqn\sixtwo$$ which gives $$\delta {\widehat T}({\bf x},{\bf t}) = \int dudv\; |(e^{\bf x} - u)(e^{\bf x} -v)|^{-1/2} \delta \,{\widehat {\cal U}}(u+v,u-v, {\bf t}) \eqn\sixthree$$ Note that the fields on both sides of \sixthree\ are evaluated at the same time ${\bf t}$. 
More explicitly, we see that $$\delta {\widehat T}({\bf x},{\bf t}) = e^{iH{\bf t}} \delta {\widehat T}({\bf x},0) e^{-iH{\bf t}} \eqn\sixfour$$ where $$\delta {\widehat T}({\bf x},0) = \int dudv\; |(e^{\bf x} - u)(e^{\bf x} -v)|^{-1/2}\delta \,{\widehat {\cal U}}(u+v,u-v, 0) \eqn\sixfive$$ The hamiltonian is the same as in \twofour. Eq. \sixfour\ tells us that $\delta {\widehat T}({\bf x},{\bf t})$ is a Heisenberg operator in a two-dimensional field theory. Eq. \sixthree\ allows us to write down time-ordered products of $\delta {\widehat T}({\bf x},{\bf t})$'s in terms of time-ordered products of the $\delta \,{\widehat {\cal U}}(p,q,t)$'s: thus $$\eqalign{ \langle &\psi| {\cal T}(\delta {\widehat T}({\bf x}_1,{\bf t}_1)\cdots \delta {\widehat T}({\bf x}_n,{\bf t}_n)) | \psi \rangle \cr =&\int du_1dv_1\cdots du_ndv_n |(e^{{\bf x}_1}-u_1)(e^{{\bf x}_1}-v_1)|^{-1/2} \cdots |(e^{{\bf x}_n}- u_n)(e^{{\bf x}_n} - v_n)|^{-1/2}\times \cr \qquad &{\cal T}( \delta \,{\widehat {\cal U}}(u_1+v_1, u_1-v_1,{\bf t}_1)\cdots \delta \,{\widehat {\cal U}}(u_n + v_n, u_n - v_n, {\bf t}_n)) \cr} \eqn\sixsix$$ In this way the expectation values of time-ordered products of ${\widehat T}(u,v)$ can be computed from the fermion theory. As we have stressed earlier, in principle these contain answers to all dynamical questions in the theory. However, these correlation functions are not related to usual particle-scattering amplitudes in the standard fashion, primarily because: $\bullet$ $[\delta {\widehat T}({\bf x}_1,{\bf t}), \delta {\widehat T}({\bf x}_2, {\bf t})] \ne 0$, {\it i.e.}, the field $\delta {\widehat T}({\bf x}, {\bf t})$ does not commute with itself at equal times. In fact, the non-trivial commutation relation is a direct consequence of the $W_{\infty}$ algebra. As we should expect, the field ${\widehat T}(u,v)$ bears a close resemblance to the spin operator in a magnetic field. In both cases the symplectic structures (ETCR in the quantum theory) are non-trivial. We know that in the case of the spin, dynamical questions are better formulated in terms of coherent states $| {\bf n}\rangle$ satisfying $\langle {\bf n} |\widehat{\bf S} | {\bf n} \rangle = {\bf n}$. Questions such as how $| {\bf n} \rangle$ evolves in time are equivalent to calculating the dynamical trajectory $\langle \widehat{\bf S}(t) \rangle $. In the present case $\delta T(u,v) \equiv \langle \delta {\widehat T}(u,v) \rangle$ plays a role exactly similar to this object. $\bullet$ Two-point function $\ne $ ``Propagator''. We have seen in Sec. 4 that $\delta T(u,v)$, or equivalently ${\cal T}({\bf x},{\bf t})$, satisfies the `black hole' differential equation \threenine. In an ordinary scalar field theory such a thing would imply $$ D_{{\bf x},{\bf t}} G_2({\bf x},{\bf t}| {\bf x}',{\bf t}') = (1+ e^{-2{\bf x}}) \delta({\bf x}-{\bf x}') \delta ({\bf t}-{\bf t}') \eqn\sixseven $$ where $$ G_2({\bf x},{\bf t}|{\bf x}',{\bf t}') \equiv \langle \psi_0 | T( {\widehat{\cal T}}({\bf x},{\bf t}) {\widehat{\cal T}} ({\bf x}',{\bf t}')) | \psi_0 \rangle \eqn\sixeight$$ The prefactor $1 + \exp(-2{\bf x})$ is equal to $1/\sqrt {\hbox{det}\, g}$ in $({\bf x},{\bf t})$ coordinates, required to make the $\delta$-function covariant. The operator ${\widehat{\cal T}}$ is defined as $$\widehat{\cal T}({\bf x}, {\bf t}) \equiv | uv |^{1/2} \delta {\widehat T}(u,v). \eqn\sixeighta$$ The symbol $T$ in \sixeight\ denotes time-ordering in the time ${\bf t}$. In our case, \sixseven\ is not true because of the non-standard commutation relations of the $\widehat{ \cal T}$ field. 
It is easy to derive that $$ D_{{\bf x}, {\bf t}}G_2({\bf x}, {\bf t}| {\bf x}', {\bf t}') = (1 + e^{-2{\bf x}}) \big( \delta ({\bf t} - {\bf t}') [ \partial_{\bf t}{\widehat {\cal T}} ({\bf x}, {\bf t}), {\widehat {\cal T}} ({\bf x}', {\bf t})] + \partial_{\bf t} \delta({\bf t} - {\bf t}') [ {\widehat {\cal T}} ({\bf x}, {\bf t}), {\widehat {\cal T}} ({\bf x}', {\bf t}) ] \big) \eqn\sixnine$$ The commutation relation of the ${\widehat {\cal T}}$ fields can be derived from the definition \sixeighta\ and the $\,{\cal U}(p,q,t)$ commutation relations. Neglecting corrections of order $e^{-{\bf x}}$ we have $$\eqalign{ [{\widehat {\cal T}} ({\bf x}, {\bf t}), \partial_{\bf t} {\widehat {\cal T}}({\bf x}' ,{\bf t}) ] &\propto [{\cal D}_+' {\cal D}_+ + {\cal D}_-' {\cal D}_-] \partial_{\bf x}^2 \delta ({\bf x} - {\bf x}') \cr [{\widehat {\cal T}} ({\bf x}, {\bf t}), {\widehat {\cal T}}({\bf x}' ,{\bf t}) ] &\propto [{\cal D}_+' {\cal D}_+ - {\cal D}_-' {\cal D}_-] \partial_{\bf x} \delta ({\bf x} - {\bf x}') \cr} \eqn\sixten $$ where ${\cal D}_\pm$ are as defined in \threefivehaa\ (the primes refer to ${\bf x}'$). In the limit of extremely large ${\bf x}$, the operators ${\cal D}_\pm$ go as $(\partial_{\bf x})^{-1}$ and we recover canonical commutation relations. This is essentially because in this limit the field ${\widehat {\cal T}}({\bf x},{\bf t})$ becomes the same as the ``tachyon'' $\eta$ of the standard $c=1$ matrix model (see Eq. \threefivehb). The non-identification of the two-point function with the propagator is basically related to the fact that $\delta {\widehat T}(u,v)$ or ${\widehat {\cal T}}({\bf x},{\bf t})$ cannot create particle states because they do not commute at equal times. $\bullet$ How does one address the issue of propagation then? As we have stressed, ${\widehat T}({\bf x}, {\bf t})$ can be regarded as a Heisenberg operator in a two-dimensional field theory. If $H$ is a functional of ${\widehat T}({\bf x},0)$ and $\partial_{\bf t} {\widehat T}({\bf x},0)$, then given $\langle \psi | {\widehat T}({\bf x},0) | \psi \rangle$ and $\langle \psi | \partial_{\bf t}{\widehat T}({\bf x},0) | \psi \rangle$, one can determine in principle $\langle \psi | {\widehat T}({\bf x}, {\bf t}) | \psi \rangle $. In general, it is a difficult question whether $H$ is a functional of only ${\widehat T}({\bf x},0), \partial_{\bf t} {\widehat T}({\bf x},0)$. However, even if it is not clear how much initial data is required to get a unique dynamical trajectory (unique answer for, let's say, $\langle \psi | {\widehat T}({\bf x}, {\bf t}) | \psi \rangle $), the fermionic construction {\bf does} list {\bf all} dynamical trajectories. Also, we do know from the analysis of small fluctuations (for instance using the language of ${\bar \eta}_\pm$) that initial data amounting to two real functions (e.g. ${\bar \eta}_\pm(\tau, t=0)$) are enough to determine the future evolution of the fermionic state, except perhaps some discrete data corresponding to ``discrete states''. Incidentally, it is also clear from the one-to-one correspondence between ${\bar \eta}_\pm(\tau)$ and ${\widehat T}({\bf x},0), \partial_{\bf t} {\widehat T}({\bf x},0)$ that we can choose {\bf any} kind of initial data on any given spacelike (or lightlike) surface ${\bf t} = {\bf t}_0$ (this list includes the white hole horizon) by simply choosing the appropriate fermionic state, or equivalently the appropriate values for ${\bar \eta}_\pm(\tau,{\bf t}_0)$. 
\section {\bf Analytically Continued Fermion Theory and the Euclidean Black Hole} In this section we would like to study the implication for the tachyon theory of the analytic continuation of the fermion field theory discussed earlier in [\DDMW], namely\foot {This is equivalent to the analytic continuation \REF\GROSSMILJK{D.J. Gross and N. Miljkovich, Ref. 4.} [\GROSSMILJK] of the harmonic oscillator frequency $w \to iw$ in the fermion potential.} $$ t \to it, \; p \to -ip \eqn\sevenone$$ In this analytic continuation, the ``hyperbolic transform'' becomes an ``elliptic transform'', $$ \phi(p,q,t) = \int { dp' dq' \over [(p - p')^2 + (q - q')^2 ]^{1/2} } \,{\cal U} (p',q',t), \eqn \seventwo $$ Then, by the analytically continued equation of motion, $$ (\partial_t + p \partial_q - q \partial_p) \,{\cal U}(p,q,t) = 0 \eqn\seventhree $$ we can show that $\phi (p,q,t)$ is of the form $$ \phi (p,q,t) = T(u, \bar u) \eqn\sevenfour $$ where $$ u= { q - ip \over 2} \exp( -it),\; \bar u = {q + ip \over 2} \exp (it) \eqn\sevenfive $$ As in Sec. 2, one can show that in the low energy approximation $T(u, \bar u)$ satisfies the equation of motion of a massless field in the Euclidean black hole: $$ [\barDuv ] T(u, \bar u) = 0 + o(T^2) \eqn\sevensix $$ This corresponds to a dilaton-metric background that describes the Euclidean black hole (the ``cigar''). It is remarkable that the analytic continuation \sevenone\ of the matrix model defines the usual analytic continuation of the black hole physics! This makes it tempting to believe that the thermal Green's functions of the latter may have a direct significance in terms of the matrix model in the forbidden region. \section{\bf Concluding Remarks} In this concluding section we would like to comment on some issues that need deeper understanding. Firstly, there is the question of the $S$-matrix. The definition of the $S$-matrix requires the specification of the `in' and `out' states. These states can be inferred by analyzing the two-point correlation function of a complete commuting (equal time) set of field operators. A natural set is the density operator $\psi^\dagger (x, t)\psi (x,t) \equiv \partial_x \varphi$, because perturbatively we know that $\varphi$ creates `in' and `out' massless particle states. Once this is done we can evolve the `in' states to the `out' states in the standard fashion and define the $S$-matrix. This has been previously calculated [\MSWTWO,\POLTWO,\MOORE,\DJR]. The black hole interpretation of this theory, and in particular the non-linear differential equation \threenine, seem to strongly suggest that the kernel that propagates `in' states to the `out' states has a representation in the $T(u,v)$ theory. It would be very interesting to see this explicitly. One of the limitations of the $c=1$ matrix model as a model of black holes is that this black hole is eternal and one cannot envisage any process ({\it e.g.} formation and evaporation) that involves changing the mass of the black hole. The simple reason for this is that the mass of the black hole is equal to the fermi level, which cannot be changed if the number of fermions is held fixed. It is an interesting question whether there is a way of circumventing this difficulty within the context of string theory. {\bf Note Added:} While this paper was being written up, we received the article by S.R. Das, ``Matrix models and nonperturbative string propagation in two-dimensional black hole backgrounds'', Enrico Fermi Institute preprint EFI-93-16. 
In this paper the author has raised the issue of the identification of the ``correct tachyon operator'' which gives rise to ``physical scattering processes'' from a black hole. We would like to point out here that none of the ``tachyon'' operators defined in Eq. (5) of that paper can create particle states, because they do not commute with each other at spacelike distances, since they are built out of the density operator at different `matrix model times'. The reason is analogous to the reason why the operator ${\widehat T}(u,v)$ defined in [\DMWBH] and in the present paper cannot create particle states, as has been explained in great detail in Sec. 6. For this reason, one cannot interpret the object defined in Eq. (12) of that paper as the wavefunction sought by the author. We would also like to point out here that, given the definitions in Eq. (5) of that paper, it is not clear to us how time-ordered (in some definition of time in the $u,v$-space) correlators of these operators are related to those of the collective field theory. For this reason, the l.h.s. of Eq. (12) of that paper cannot be obtained simply by inserting the result (19) in the r.h.s. of (12). This needs some understanding of the connection between (some definition of) time in the $u,v$-space and the matrix model time. Our definition of the tachyon operator has afforded us an immediate connection between these two ``times''. In fact, as explained in this paper, we can use the $t$-independence of the right hand side of our Eq. \twooneone\ so that the matrix model time and the ``time'' of the $u,v$-space can be identified with each other and thus relate the time-ordered correlators of $\delta {\widehat T}(u,v)$ to those of the matrix model. \refout \end
\section*{Appendix} \subsection{More Details about Data Collection} The dance videos are collected from YouTube by crowd workers and are all solo dance videos at 30 FPS. We trim several seconds from the beginning and end of each collected video to remove the silent parts, and split the videos into one-minute clips. Then, we extract 2D pose data from these video clips by OpenPose \citep{cao2017realtime} with a 15 FPS setting and collect 790 one-minute clips of pose data at 15 FPS, totaling about 13 hours. Finally, we extract the corresponding audio data from the video clips by FFmpeg\footnote{\small \url{https://ffmpeg.org/}}. The code is implemented based on the PyTorch framework. The MIT License will be used for the released code and data. \subsection{Extracted Music Features} \label{sec:music_features} In this section, we detail the extracted features of music that are fed into our model. Specifically, we leverage the public Librosa library \citep{mcfee2015librosa} to extract the music features, including: 20-dim \textit{MFCC}, 20-dim \textit{MFCC delta}, 12-dim \textit{chroma}, 384-dim \textit{tempogram}, 1-dim \textit{onset strength} (i.e., \textit{envelope}) and 1-dim \textit{one-hot beat}, as shown in Table \ref{tab:feature}. \begin{table}[h] \centering \caption{Acoustic features of music.} \vspace{2mm} \label{tab:feature} \begin{adjustbox}{width=0.55\textwidth} \begin{tabular}{l c c } \toprule Feature & Characteristic & Dimension \\ \cmidrule(lr){2-3} MFCC & Pitch & 20 \\ MFCC delta & Pitch & 20 \\ Constant-Q chromagram & Pitch & 12 \\ Tempogram & Strength & 384 \\ Onset strength & Strength & 1 \\ Beat one-hot & Beat & 1 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \subsection{Downstream Applications} The downstream applications of our proposed audio-conditioned dance generation method include: (1) Our model can be used to help professionals choreograph new dances for a given song and teach humans how to dance to this song; (2) With the help of 3D human pose estimation \citep{ci2019optimizing} and 3D animation driving techniques, our model can be used to drive various 3D character models, such as the 3D model of Hatsune Miku (a very popular virtual character in Japan). This technique has great potential for virtual advertisement video generation, which could be used for promotion events on social media platforms such as TikTok and Twitter. \subsection{Musical Beat Detection under Low Sampling Rates} In this section, we conduct an additional experiment to evaluate the performance of the Librosa beat tracker under a 15,400Hz sampling rate. Specifically, we first utilize it to extract the onset beats from audio data at 15,400Hz and 22,050Hz respectively, and then compare the two groups of onset beats on the same time scale. Since 22,050Hz is a common sampling rate used in music information retrieval, we define the beat alignment ratio as $B_2/B_1$, where $B_1$ is the number of beats under 15,400Hz and $B_2$ is the number of beats under 15,400Hz that are aligned with the beats under 22,050Hz. A beat is counted as aligned when $| t_1 - t_2 | \leq \Delta t$, where $t_1$ denotes the timestamp of a beat under 15,400Hz, $t_2$ refers to the timestamp of a beat under 22,050Hz, and $\Delta t$ is the time offset threshold. In the experiment, we set $\Delta t=1/15$s (the FPS is 15) and calculate the beat alignment ratio for the audio data of the three styles respectively (a minimal sketch of this check is given below). 
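A minimal sketch of this check with the Librosa beat tracker (the helper functions are ours, not part of Librosa):
\begin{verbatim}
import numpy as np
import librosa

def beat_times(path, sr, hop_length=1024):
    # Track beats on the onset-strength envelope and return them in seconds.
    y, _ = librosa.load(path, sr=sr)
    _, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop_length)
    return librosa.frames_to_time(beats, sr=sr, hop_length=hop_length)

def beat_alignment_ratio(path, dt=1.0 / 15):
    # B1: beats detected at 15,400Hz; B2: those lying within dt of some
    # beat detected at 22,050Hz.  Returns B2 / B1.
    t1 = beat_times(path, sr=15400)
    t2 = beat_times(path, sr=22050)
    if len(t1) == 0 or len(t2) == 0:
        return 0.0
    aligned = sum(np.min(np.abs(t2 - t)) <= dt for t in t1)
    return aligned / len(t1)
\end{verbatim}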
We randomly sample 10 audio clips from each style to calculate the beat alignment ratio and repeat the sampling 10 times to take the average. As shown in Table~\ref{tab:beat_comparison_sampling_rate}, most of the beats extracted at 15,400Hz are aligned with those extracted at 22,050Hz. Besides, we randomly sample an audio clip and visualize the beat tracking curves of its first 20 seconds under the sampling rates of 15,400Hz and 22,050Hz. As we can see in Figure~\ref{fig:musical_beat_detection}, most of the beats from 15,400Hz and 22,050Hz are aligned within the time offset threshold $\Delta t=1/15$s. \begin{table}[h] \centering \caption{The beat alignment ratio $B_2/B_1$.} \vspace{2mm} \label{tab:beat_comparison_sampling_rate} \begin{tabular}{l c} \toprule Category & Beat Alignment Ratio (\%) \\ \cmidrule(lr){1-2} Ballet & 88.7 \\ Hiphop & 93.2 \\ Japanese Pop & 91.6 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{figures/musical_beat_detection.pdf} \vspace{-1.5em} \caption{Beat tracking curves for the first 20 seconds of a music clip randomly sampled from the audio data under the sampling rates of 15,400Hz and 22,050Hz. Most of the beats are aligned within the time offset threshold $\Delta t=1/15$s.} \label{fig:musical_beat_detection} \end{figure} \section{Conclusion} In this work, we propose a novel seq2seq architecture for long-term dance generation with music, e.g., about one minute in length. To efficiently process long sequences of music features, we introduce a local self-attention mechanism in the transformer encoder structure to reduce the quadratic memory requirement to linear in the sequence length. Besides, we propose a dynamic auto-condition training method as a novel curriculum learning strategy to alleviate the error accumulation of autoregressive models in long motion sequence generation, and thus facilitate long-term dance generation. Extensive experiments have demonstrated the superior performance of our proposed approach on automatic metrics and human evaluation. Future work includes the exploration of end-to-end methods that work with raw audio data instead of preprocessed audio features. \section{Experimental Setup} \subsection{Dataset Collection and Preprocessing} \label{sec:dataset} Although the dataset of \cite{lee2019dancing} contains about 70 hours of video clips, the average length of each clip is only about 6 seconds. Moreover, their dataset cannot be used by methods other than their specially designed decomposition-to-composition framework to generate long-term dance. To facilitate the task of long-term dance generation with music, we collect a high-quality dataset consisting of 790 one-minute clips of pose data at 15 FPS, totaling about 13 hours. There are three styles in the dataset: ``Ballet'', ``Hiphop'' and ``Japanese Pop''. Table~\ref{tab:dataset} shows the statistics of our dataset. In the experiment, we randomly split the dataset into a 90\% training set and a 10\% test set. 
\begin{table} \centering \caption{Statistics of the dataset.} \vspace{2mm} \label{tab:dataset} \begin{adjustbox}{width=0.75\textwidth} \begin{tabular}{l c c c c } \toprule Category & Num of Clips & Clip Length (min) & FPS & Resolution \\ \cmidrule(lr){2-5} Ballet & 136 & 1 & 15 & 720P \\ Hiphop & 298 & 1 & 15 & 720P \\ Japanese Pop & 356 & 1 & 15 & 720P \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \textbf{Pose Preprocess.} For the human pose estimation, we leverage OpenPose \citep{cao2017realtime} to extract 2D body keyjoints from videos. Each pose consists of 25 keyjoints\footnote{\small\url{https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md\#pose-output-format-body_25}} and is represented by a 50-dimensional vector in the continuous space. In practice, we develop a linear interpolation algorithm to fill in the missing keyjoints from nearby frames, which reduces the noise in the extracted pose data. \textbf{Audio Preprocess.} Librosa \citep{mcfee2015librosa} is a well-known audio and music analysis library in music information retrieval, which provides flexible ways to extract the spectral and rhythm features of audio data. Specifically, we extract the following features: \textit{mel frequency cepstral coefficients (MFCC)}, \textit{MFCC delta}, \textit{constant-Q chromagram}, \textit{tempogram} and \textit{onset strength}. To better capture the beat information of music, we convert the \textit{onset strength} into a one-hot vector as an additional feature to explicitly represent the beat sequence of music, i.e., \textit{beat one-hot}. Finally, we concatenate these audio features as the representation of a music frame, as shown in Table \ref{tab:feature} in Section \ref{sec:music_features}. When extracting the audio features, the sampling rate is 15,400Hz and the hop size is 1024. Hence, we have about 15 audio feature frames per second, which is aligned with the 15 FPS of the pose data. \subsection{Implementation Details} The music encoder consists of a stack of $N=2$ identical layers. Each layer has two sublayers: a local self-attention sublayer with $l=8$ heads and a position-wise fully connected feed-forward sublayer with 1024 hidden units. Each head contains a scaled dot-product attention layer with $d_k=d_v=64$, and its receptive field is restricted by setting $k=100$. Then we set the sequence length $n=900$, the dimension of the acoustic feature vector $d_x=438$, and the dimension of the hidden vector $d_z=256$, respectively. The dance decoder is a 3-layer LSTM with $d_s=1024$, and the dimension of the pose vector is $d_y=50$. We set $\lambda=0.01$ and $q=10$ for the proposed learning approach and train the model using the Adam optimizer with the learning rate 1e-4 on 2 NVIDIA V100 GPUs. \subsection{Baselines} Music-conditioned dance generation is an emerging task and there are few methods proposed to solve this problem. In our experiment, we compare our proposed approach to the following baselines: (1) \textbf{Dancing2Music.} We use Dancing2Music \citep{lee2019dancing} as the primary baseline, which is the previous state-of-the-art; (2) \textbf{Aud-MoCoGAN.} Aud-MoCoGAN is an auxiliary baseline used in Dancing2Music, which originates from MoCoGAN \citep{tulyakov2018mocogan}; (3) \textbf{LSTM.} \cite{shlizerman2018audio} propose an LSTM network to predict body dynamics from the audio signal. We modify their open-source code to take audio features of music as inputs and produce human pose sequences. 
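For concreteness, here is a minimal sketch of the audio preprocessing described above (our illustration, not the released code); the per-frame dimensions add up to the 438-dimensional acoustic feature vector ($d_x=438$) used in the implementation details:
\begin{verbatim}
import numpy as np
import librosa

SR, HOP = 15400, 1024  # ~15 feature frames per second, matching the 15 FPS poses

def music_features(path):
    y, _ = librosa.load(path, sr=SR)
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=20, hop_length=HOP)    # (20, T)
    mfcc_delta = librosa.feature.delta(mfcc)                              # (20, T)
    chroma = librosa.feature.chroma_cqt(y=y, sr=SR, hop_length=HOP)       # (12, T)
    onset = librosa.onset.onset_strength(y=y, sr=SR, hop_length=HOP)      # (T,)
    tempogram = librosa.feature.tempogram(onset_envelope=onset, sr=SR,
                                          hop_length=HOP)                 # (384, T)
    _, beat_frames = librosa.beat.beat_track(onset_envelope=onset, sr=SR,
                                             hop_length=HOP)
    beat_onehot = np.zeros_like(onset)                                    # (T,)
    beat_onehot[beat_frames] = 1.0
    feats = [mfcc, mfcc_delta, chroma, tempogram,
             onset[None, :], beat_onehot[None, :]]
    T = min(f.shape[1] for f in feats)        # guard against off-by-one frame counts
    return np.concatenate([f[:, :T] for f in feats], axis=0).T            # (T, 438)
\end{verbatim}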
\section{Experimental Results} In this section, we conduct extensive experiments to evaluate our approach and compare it to the aforementioned baselines. Note that we do not include a comparison experiment on the dataset collected by \cite{lee2019dancing} due to the issue discussed in Section \ref{sec:dataset}. \subsection{Automatic Metrics and Human Evaluation} We evaluate the different methods by automatic metrics and conduct a human evaluation on the motion realism and smoothness of the generated dance movements, as well as the style consistency. Specifically, we randomly sample 60 music clips from the test set to generate dances, then divide the dances generated by the three methods and the real dances into four pairs: (LSTM, ours), (Dancing2Music, ours), (real, ours) and (real, real). We invite 10 amateur dancers as annotators to answer 3 questions for each pair, where the method identities are blinded (each pair is presented as (A, B)): (1) \textit{Which dance is more realistic regardless of music?} (2) \textit{Which dance is more smooth regardless of music?} (3) \textit{Which dance matches the music better with respect to style?} Note that we do not include Aud-MoCoGAN in the human evaluation, since both it and Dancing2Music are GAN-based methods and the latter is the state-of-the-art. \textbf{Motion Realism, Style Consistency and Smoothness.} We evaluate the visual quality and realism of the generated dances by the Fréchet Inception Distance (FID) \citep{heusel2017FID}, which is used to measure how close the distribution of the generated dances is to that of the real ones. Similar to \cite{lee2019dancing}, we train a style classifier on dance movement sequences of the three styles and use it to extract features for the given dances. Then we calculate the FID between the synthesized dances and the real ones. As shown in Table~\ref{tab:automatic_metrics}, our FID score is significantly lower than those of the baselines and much closer to that of the real dances, which means our generated dances are more motion-realistic and closer to the real dances. Besides, we use the same classifier to measure the style accuracy of the generated dances with respect to the music; our approach achieves 77.6\% accuracy and significantly outperforms Dancing2Music by 17.2\%. \begin{table} \centering \caption{\textbf{Automatic metrics of different methods}. \textit{FID} (lower is better) evaluates the quality and realism of dances by measuring the distance between the distributions of the real dances and the generated dances. \textit{ACC} evaluates the style consistency between the generated dances and the music. \textit{Beat Coverage} measures the ratio of total kinematic beats to total musical beats. The higher the beat coverage is, the stronger the rhythm of the dance is. \textit{Beat Hit Rate} measures the ratio of kinematic beats aligned with musical beats to total kinematic beats. \textit{Diversity} denotes the variations among a set of generated dances while \textit{Multimodality} refers to the variations of generated dances for the same music. 
} \label{tab:automatic_metrics} \vspace{1em} \begin{adjustbox}{width=\textwidth} \begin{tabular}{l c c c c c c} \toprule Method & FID & ACC (\%) & Beat Coverage (\%) & Beat Hit Rate (\%) & Diversity & Multimodality \\ \cmidrule(lr){2-7} Real Dances & 2.6 & 99.5 & 56.4 & 63.9 & 40.2 &- \\ \midrule LSTM & 51.9 & 12.1 & 4.3 & 9.7 & 16.8 &- \\ Aud-MoCoGAN & 48.5 & 33.8 & 10.9 & 28.5 & 30.7 & 16.4 \\ Dancing2Music & 22.7 & 60.4 & 15.7 & 65.7 & 30.8 & \textbf{18.9} \\ Ours & \textbf{6.5} & \textbf{77.6} & \textbf{21.8} & \textbf{68.4} & \textbf{36.9} & 15.3 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \begin{figure}[!t] \centering \includegraphics[width=1.0\textwidth]{figures/human_eval.pdf} \vspace{-1.5em} \caption{\textbf{Human evaluation results on motion realism, style consistency and smoothness.} We conduct a human evaluation to ask annotators to select the dances that are \textit{more realistic and more smooth regardless of music, more style-consistent with music} through pairwise comparison.} \vspace{-1em} \label{fig:human_eval} \end{figure} The human evaluation in Figure~\ref{fig:human_eval} consistently shows the superior performance of our approach compared to the baselines on motion realism, style consistency and smoothness. We observe that the dances generated by LSTM have many floating movements and tend to freeze quickly within several seconds due to error accumulation, which results in lower preferences. While Dancing2Music can generate smooth dance units, the synthesized dances still have noticeable jumps where the dance units are connected, due to the inherent gap between them, which harms its overall performance in practice. This is also reflected in the preference comparisons to our approach, in which only 35\% of annotators prefer Dancing2Music on motion realism and 21.7\% prefer it on smoothness. Besides, the result on style consistency shows that our approach can generate dances that are more style-consistent with the music. This is because our approach models the fine-grained correspondence between music and dance at the methodological level and also introduces more specific music features, whereas Dancing2Music only considers a global style feature extracted by a pre-trained classifier in the dance generation. In the comparison to real dances, 41.2\% of annotators prefer our method on motion realism while 30.3\% prefer it on style consistency. Additionally, we find that 57.9\% of annotators prefer our approach on smoothness even compared to real dances. This is because the imperfect OpenPose \citep{cao2017realtime} introduces minor noise into the pose data extracted from dance videos, whereas in the preprocessing we develop an interpolation algorithm to fill in missing keyjoints and reduce the noise of the training pose data. This also indicates that the chain-structure-based decoder can well capture the spatial-temporal dependency among human motion dynamics and produce smooth movements. \textbf{Beat Coverage and Hit Rate.} We also evaluate the beat coverage and hit rate introduced in \cite{lee2019dancing}, which defines the beat coverage as $B_k/B_m$ and the beat hit rate as $B_a/B_k$, where $B_k$ is the number of kinematic beats, $B_m$ is the number of musical beats and $B_a$ is the number of kinematic beats that are aligned with the musical beats. An early study \citep{ho2013extraction} reveals that kinematic beat frames occur when the direction of the movement changes. 
Therefore, similar to \cite{yalta2019weakly}, we use the standard deviation (SD) of movement to detect the kinematic beats in our experiment. The onset strength \citep{ellis2007beat} is a common way to detect musical beats. Figure~\ref{fig:beat} shows two short clips of motion SD curves and the aligned musical beats. We observe that the kinematic beats occur where the musical beats occur, which is consistent with the common sense that dancers step on musical beats while dancing but do not step on every musical beat. As we can see in Table~\ref{tab:automatic_metrics}, LSTM and Aud-MoCoGAN generate dances with few kinematic beats, and most of them do not match the musical beats. Our method outperforms Dancing2Music by 6.1\% on beat coverage and 2.7\% on hit rate, which indicates that introducing features about the musical beat into the model is beneficial to the beat alignment between the generated dance and the input music. \begin{figure}[hb] \centering \includegraphics[width=0.9\textwidth]{figures/beat.png} \caption{Beat tracking curves for music and dance by onset strength and motion standard deviation respectively. The left is a short clip of ballet while the right is a short clip of hip-hop. The red circles are kinematic beats and the dashed lines denote the musical beats.} \vspace{1em} \label{fig:beat} \end{figure} \textbf{Diversity and Multimodality.} \cite{lee2019dancing} introduce the diversity metric and the multimodality metric. The diversity metric evaluates the variations among generated dances corresponding to various music, which reflects the generalization ability of the model and its dependency on the input music. Similarly, we use the average feature distance as the measurement, where the features are extracted by the same classifier used in measuring FID. We randomly sample 60 music clips from the test set to generate dances and randomly pick 500 combinations to compute the average feature distance. The results in Table~\ref{tab:automatic_metrics} show that the performance of our approach is superior to Dancing2Music on diversity. The reason is that Dancing2Music only considers a global music style feature in generation, while different music clips of the same style have almost the same style feature. Our approach models the fine-grained correspondence between music and dance at the methodological level and simultaneously takes more specific music features into account. In other words, our approach is more dependent on the input music and thus can generate different samples for different music clips of the same style. The multimodality metric evaluates the variations of generated dances for the same music clip. Specifically, we generate 5 dances for each of 60 randomly sampled music clips and compute the average feature distance. As we can see, our approach slightly underperforms Dancing2Music and Aud-MoCoGAN since it has more dependency on the music inputs, as aforementioned, which makes the generated dances have limited variation patterns given the same music clip. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/long_term.pdf} \vspace{-1em} \caption{FID curves of different methods over time.} \vspace{-1em} \label{fig:long_term} \end{figure} \subsection{Long-Term Dance Generation} To further investigate the performance of different methods on long-term dance generation, we evaluate the FID of the dances generated by different methods over time. Specifically, we first split each generated one-minute dance into 15 four-second clips and measure the FID of each clip (a sketch of this evaluation is given below). 
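The sketch below is our illustration; it assumes a feature extractor \texttt{style\_feats} given by the style classifier mentioned earlier, and uses the standard Fréchet distance formula of \cite{heusel2017FID}:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real, feat_gen):
    # Standard FID between two feature sets of shape (num_samples, dim).
    mu_r, mu_g = feat_real.mean(0), feat_gen.mean(0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r + cov_g - 2.0 * covmean))

def fid_over_time(real_dances, gen_dances, style_feats, fps=15, n_clips=15):
    # Split each one-minute dance (900 poses) into 15 four-second clips and
    # report one FID score per temporal slot.
    frames_per_clip = 4 * fps
    scores = []
    for c in range(n_clips):
        sl = slice(c * frames_per_clip, (c + 1) * frames_per_clip)
        f_real = np.stack([style_feats(d[sl]) for d in real_dances])
        f_gen = np.stack([style_feats(d[sl]) for d in gen_dances])
        scores.append(frechet_distance(f_real, f_gen))
    return scores
\end{verbatim}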
Figure~\ref{fig:long_term} shows that the FID scores of LSTM and Aud-MoCoGAN grow rapidly in the early stage due to error accumulation and converge to a high value, since the generated dances quickly become frozen. Dancing2Music maintains relatively stable FID scores over time, which benefits from its decomposition-to-composition method, while its curve still has subtle fluctuations since it synthesizes whole dances by composing the generated dance units. Compared to the baselines, the FID scores of our method are much lower and close to those of real dances, which validates the good performance of our method on long-term generation. \subsection{Ablation Study} \textbf{Comparison on Encoder Structures.} We compare the different encoder structures in our model with the same LSTM decoder and the same curriculum learning strategy, as shown in Table~\ref{tab:discussion}. The transformer encoders outperform LSTM and the encoder in ConvS2S \citep{gehring2017convolutional} on both FID and style consistency, due to their superior performance on sequence modeling. Although the transformer encoder with our proposed local self-attention slightly underperforms the global self-attention, with a 0.7 higher FID and a 1.5\% lower ACC, it consumes much less GPU memory in our settings. This confirms the effectiveness of the local self-attention on long sequence modeling. \textbf{Comparison on Learning Strategies.} To investigate the performance of our proposed learning approach, we conduct an ablation study to compare the performance of our model using different learning strategies. As we can see on the right of Table~\ref{tab:discussion}, all curriculum learning strategies significantly outperform the static auto-condition strategy \citep{li2017auto}. Among them, the linear growth function $f(t)=\lfloor \lambda t \rfloor$ performs best, which might be because increasing the difficulty of the curriculum too fast results in model degradation. Besides, the original teacher-forcing strategy has the highest FID and the lowest style accuracy due to the severe error accumulation problem, which indicates that training the model with our proposed curriculum learning strategy can effectively alleviate this issue. \begin{table*}[t!] \caption{Left: Automatic metrics of our model with different encoder structures. Right: Automatic metrics of our model trained by different learning strategies.} \vspace{1em} \begin{tabular}{cccc} & \begin{adjustbox}{width=0.4\textwidth, height=0.084\textwidth} \centering \begin{tabular}{l c c} \toprule Encoder Structure & FID & ACC (\%) \\ \cmidrule(lr){2-3} LSTM & 7.9 & 45 \\ ConvS2S encoder & 13.3 & 54 \\ Global self-attention &3.6 &80 \\ Local self-attention & 4.3 & 78.5 \\ \bottomrule \end{tabular} \end{adjustbox} && \begin{adjustbox}{width=0.42\textwidth, height=0.095\textwidth} \begin{tabular}{l c c} \toprule Learning Approach & FID & ACC (\%) \\ \cmidrule(lr){2-3} Teacher-forcing & 61.2 & 5.3 \\ Auto-condition ($const$) & 15.7 & 35 \\ Ours ($\lfloor \lambda e^t \rfloor$) & 9.8 & 69 \\ Ours ($\lfloor \lambda t^2 \rfloor$) & 6.4 & 73 \\ Ours ($\lfloor \lambda t \rfloor$) & 5.1 & 77.6 \\ \bottomrule \end{tabular} \end{adjustbox} \end{tabular} \label{tab:discussion} \end{table*} \section{Introduction} Arguably, dancing to music is one of the innate abilities of human beings, as we can spontaneously sway along with the tempo of the music we hear. Research in neuropsychology indicates that our brain is hardwired to make us move and synchronize with music regardless of our intention \citep{chen2008listening}. 
Another study in archaeology also suggests that dance was a social communication skill among early humans, connected to the ability to survive long ago \citep{adshead2006dance}. Nowadays, dance (to music) has become a means of cultural promotion, a method of emotional expression, a tool for socialization and an art form that brings aesthetic enjoyment. The neurological mechanism behind dancing behavior and the unique value of dance to society motivate us to explore a computational approach to dance creation from a piece of music in artificial intelligence research. Such work is potentially beneficial to a wide range of applications such as dance creation assistance in art and sports, character motion generation for audio games and research on cross-modal behavior. In the literature, music-conditioned dance generation is a relatively new task that has attracted increasing research interest recently. Early works \citep{fan2011example,lee2013music} synthesize dance sequences from music by retrieval-based methods, which show limited creativity in practice. Recently, \cite{lee2019dancing} formulate the task from the generative perspective, and further propose a decomposition-to-composition framework. Their model first generates basic dance units from the music clips and then composes them by using the last pose of the current unit to initialize the first pose of the next unit. Although this approach shows better performance than the retrieval-based methods, several challenges still remain. First, existing generative methods synthesize new human motion sequences through autoregressive models like RNNs, which tend to result in short sequences. In other words, the generated sequences often quickly freeze within a few seconds due to an accumulation of prediction errors that are fed back into the neural network. This problem becomes even more severe in long motion sequence generation. For instance, composing a dance for a 1-minute music clip at 15 Frames Per Second (FPS) means generating 900 poses at one time. In practice, we need novel methods that can effectively generate long motion sequences. Besides, how to enhance the harmony between the synthesized dance movements and the given music is a largely unexplored challenge. Intuitively, the movements need to be consistent with the music in terms of style, rhythm and beat. However, achieving this goal is non-trivial, as it requires the generation model to capture the fine-grained correspondence between music and dance. In this paper, we formalize music-conditioned dance generation as a sequence-to-sequence learning problem where the fine-grained correspondence between music and dance is represented through sequence modeling and their alignment is established via a mapping from a sequence of acoustic features of the music to a sequence of movements of the dance. The model consists of a music encoder and a dance decoder. The encoder transforms low-level acoustic features of an input music clip into high-level latent representations via self-attention with a receptive field restricted to the $k$-nearest neighbors of an element. Thus, the encoder can efficiently process long sequences of music features, e.g., a sequence with more than 1000 elements, and model local characteristics of the music such as chord and rhythm patterns. The decoder exploits a recurrent structure to predict the dance movement frame by frame, conditioned on the corresponding element in the latent representation of the music feature sequence. 
Furthermore, we propose a curriculum learning \citep{bengio2009curriculum} strategy to alleviate the error accumulation \citep{li2017auto} of autoregressive models in long motion sequence generation. Specifically, it gently changes the training process from a fully guided teacher-forcing scheme using the previous ground-truth movements, towards a less guided autoregressive scheme which mostly utilizes the generated movements instead. This strategy bridges the gap between training and inference of autoregressive models, and thus alleviates error accumulation at inference. The length of each video clip in the dataset released by \cite{lee2019dancing} is only about 6 seconds, so it cannot be used by methods other than their specially designed decomposition-to-composition framework to generate long-term dance. To facilitate the task of long-term dance generation with music, we collect a high-quality dataset consisting of 1-minute video clips, totaling about 13 hours. There are three representative styles in our dataset: ``Ballet'', ``Hiphop'' and ``Japanese Pop''. Our contributions in this work are four-fold: (1) We formalize music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture for long-term dance generation with music. (2) We propose a novel curriculum learning strategy to alleviate the error accumulation of autoregressive models in long motion sequence generation. (3) To facilitate long-term dance generation with music, we collect a high-quality dataset that is available with our code\footnote{\small\url{https://github.com/stonyhu/DanceRevolution}}. (4) Extensive experiments show that our approach significantly outperforms the existing state-of-the-art methods on both automatic metrics and human judgment. The demo video in the supplementary material also shows that our approach can generate diverse minute-length dances that are smooth, natural-looking, style-consistent and beat-matching with the music from the test set. \section{Approach} In this section, we present our approach to music-conditioned dance generation. After the formalization of the problem in question, we elaborate on the model architecture and the dynamic auto-condition learning approach that facilitates long-term dance generation according to the given music. \begin{figure}[hb] \centering \includegraphics[width=0.9\textwidth]{figures/model.pdf} \caption{The overview of our method. The transformer-based encoder is a stack of $N$ identical layers with the local self-attention mechanism, which aims to efficiently process the long feature sequences extracted from music in a GPU-memory-economical way. The decoder exploits a recurrent structure to predict the dance movement $y_i$ frame by frame, conditioned on the hidden state $h_i$ of the decoder and the latent vector $z_i$. Note that $y_0$ is a predefined Begin Of Pose (BOP). During the training phase, the proposed curriculum learning strategy gently increases the number of steps of the autoregressive scheme according to the number of training epochs. This strategy bridges the gap between training and inference of autoregressive models, and thus alleviates error accumulation at inference.} \label{fig:overview} \end{figure} \subsection{Problem Formalization} Suppose that there is a dataset $\mathcal{D} = \{(X_i, Y_i)\}_{i=1}^{N}$, where $X = \{x_t\}_{t=1}^{n}$ is a music clip with $x_t$ being a vector of acoustic features at time-step $t$, and $Y = \{y_t\}_{t=1}^{n}$ is a sequence of dance movements with $y_t$ aligned to $x_t$. 
The goal is to estimate a generation model $g(\cdot)$ from $\mathcal{D}$ such that, given a new music input $X$, the model can synthesize a dance $Y$ to the music $X$ based on $g(X)$. We first present the seq2seq architecture chosen for music-conditioned dance generation in the following section. Later in the experiments, we empirically justify this choice by comparing the architecture with other alternatives.

\subsection{Model Architecture}
In the architecture of $g(\cdot)$, a music encoder first transforms $X = (x_1,..., x_n)$ ($x_i \in \mathbb{R}^{d_x}$) into a hidden sequence $Z = (z_1,..., z_n)$ ($z_i \in \mathbb{R}^{d_z}$) using a local self-attention mechanism to reduce the memory requirement for long sequence modeling, and then a dance decoder exploits a recurrent structure to autoregressively predict movements $Y = (y_1,..., y_n)$ conditioned on $Z$.

\paragraph{Music Encoder.} Encouraged by the compelling performance on music generation \citep{huang2018music}, we define the music encoder with a transformer encoder structure. While the self-attention mechanism \citep{vaswani2017attention} in the transformer can effectively represent the multi-scale structure of music, its quadratic memory complexity $\mathcal{O}(n^2)$ in the sequence length $n$ impedes its application to long sequence modeling due to huge GPU memory consumption \citep{child2019generating}. To keep the effectiveness of the representation and control the cost, we introduce a local self-attention mechanism that modifies the receptive field of self-attention by restricting the element connections to the $k$-nearest neighbors. Thus, the memory complexity is reduced to $\mathcal{O}(nk)$. $k$ can be small in our scenario since we only pursue an effective representation for a given music clip. Therefore, the local patterns of the music are encoded in $z_t$, which is sufficient for the generation of movement $y_t$ at time-step $t$; this is aligned with the intuition that the dance movement at a certain time-step is highly influenced by the nearby clips of music. In this way, we can handle long sequences of acoustic features, e.g., more than 1000 elements, in an efficient and memory-economic way. Specifically, we first embed $X = (x_1,..., x_n)$ into $U = (u_1,..., u_n)$ with a linear layer parameterized by $W^E \in \mathbb{R}^{d_x \times d_z}$. Then $\forall i \in \{1,...,n\}$, $z_i$ can be formulated as: \begin{align} z_i = \mathcal{F}(a_i),\quad a_i = \sum_{j=i-\lfloor k/2 \rfloor}^{i+ \lfloor k/2 \rfloor} \alpha_{ij}(u_j W^V_l),\quad U = X W^E, \end{align} where $\mathcal{F(\cdot)}: \mathbb{R}^{d_v} \rightarrow \mathbb{R}^{d_z}$ is a feed-forward neural network. Each $u_j$ is only allowed to attend to its $k$-nearest neighbors including itself, where $k$ is a hyper-parameter referring to the sliding window size of the local self-attention. The attention weight $\alpha_{ij}$ is calculated using a softmax function as: \begin{align} \alpha_{ij} = \frac{\mathrm{exp}\ e_{ij}}{\sum_{t=i- \lfloor k/2 \rfloor}^{i+ \lfloor k/2 \rfloor} \mathrm{exp}\ e_{it}},\quad e_{ij} = \frac{(u_i W^Q_l) (u_j W^K_l)^{\top}}{\sqrt{d_{k}}}, \end{align} where for the $l$-th head, $W^Q_l$, $W^K_l \in \mathbb{R}^{d_z \times d_{k}}$ and $W^V_l \in \mathbb{R}^{d_z \times d_{v}}$ are parameters that transform $U$ into a \textit{query}, a \textit{key}, and a \textit{value} respectively. $d_z$ is the dimension of hidden state $z_i$ while $d_k$ is the dimension of the \textit{query} and \textit{key} and $d_v$ is the dimension of the \textit{value}.
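To make the restricted receptive field concrete, the following minimal NumPy sketch (an illustration of ours, not the implementation used in the experiments; the feed-forward map $\mathcal{F}(\cdot)$, multiple heads and padding conventions are omitted, and all sizes are placeholders) computes the attention of each position over a window of its $k$-nearest neighbors, clipped at the sequence borders:
\begin{verbatim}
import numpy as np

def local_self_attention(U, Wq, Wk, Wv, k):
    # U: (n, d_z) embedded music features; position i attends only to a window
    # of ~k neighbors centered on i, so memory grows as O(n*k), not O(n^2).
    n, d_k = U.shape[0], Wq.shape[1]
    Q, K, V = U @ Wq, U @ Wk, U @ Wv
    Z = np.zeros((n, Wv.shape[1]))
    half = k // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)  # clip window at borders
        e = Q[i] @ K[lo:hi].T / np.sqrt(d_k)             # scaled dot-product scores
        a = np.exp(e - e.max()); a /= a.sum()            # softmax over the window only
        Z[i] = a @ V[lo:hi]
    return Z

rng = np.random.default_rng(0)
n, d_z, d_k, d_v, k = 1200, 64, 32, 32, 9                # e.g. >1000 music frames
U = rng.standard_normal((n, d_z))
Wq, Wk, Wv = (0.1 * rng.standard_normal((d_z, d)) for d in (d_k, d_k, d_v))
print(local_self_attention(U, Wq, Wk, Wv, k).shape)      # (1200, 32)
\end{verbatim}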
\paragraph{Dance Decoder.} We choose a recurrent neural network as the dance decoder in consideration of two factors: (1) the chain structure can well capture the spatial-temporal dependencies of human movement dynamics, which has proven to be highly effective in the state-of-the-art methods for human motion prediction \citep{li2018convolutional,mao2019learning}; (2) our proposed learning strategy is tailored for autoregressive models like RNNs, as will be described in the next section. Specifically, with $Z = (z_1,..., z_n)$, the dance movements $\hat{Y} = (\hat{y}_1, ..., \hat{y}_n)$ are synthesized by: \begin{align} \hat{y}_i &= [h_i;z_i]W^S + b,\\ \label{eq:decoder} h_i &= \mathrm{RNN}(h_{i-1},\hat{y}_{i-1}), \end{align} where $h_i$ is the $i$-th hidden state of the decoder and $h_0$ is initialized by sampling from the standard normal distribution to enhance the variation of the generated sequences. $[\cdot;\cdot]$ denotes the concatenation operation. $W^S \in \mathbb{R}^{(d_s+d_z) \times d_y}$ and $b \in \mathbb{R}^{d_y}$ are parameters where $d_s$ and $d_y$ are the dimensions of $h_i$ and $\hat{y}_i$, respectively. At the $i$-th time-step, the decoder predicts the movement $\hat{y}_i$ conditioned on $h_i$ as well as the latent feature representation $z_i$, and thus can capture the fine-grained correspondence between the music feature sequence and the dance movement sequence.

\subsection{Dynamic Auto-Condition Learning Approach}
Exposure bias \citep{ranzato2015sequence,lamb2016professor,zhang2019bridging} is a notorious issue in natural language generation (NLG), which refers to the train-test discrepancy of autoregressive generative models. Such models are never exposed to their own prediction errors during training due to the conventional teacher-forcing training scheme. However, compared to NLG, motion sequence generation suffers from a more severe exposure bias problem. In NLG, the bias of the predicted probability distribution over the vocabulary can be corrected by the sampling strategy. For instance, we can still generate the target token with index 2 by sampling from the probability distribution [0.3, 0.3, 0.4] whose ground truth is [0, 0, 1]. In contrast, any small bias of the predicted motion at each time-step will be accumulated and propagated to the future, since each generated motion is a real-valued vector (rather than a discrete token) representing 2D or 3D body keyjoints in a continuous space. As a result, the generated motion sequences tend to quickly freeze within a few seconds. This problem becomes even worse in the generation of long motion sequences, e.g., more than 1000 movements. Scheduled sampling \citep{bengio2015scheduled} was proposed to alleviate exposure bias in NLG, but it does not work well in long motion sequence generation, since its sampling-based strategy would feed long, highly biased predicted sub-sequences into the model at an early stage, which causes vanishing gradients at training due to the error accumulation of these high-biased sub-sequences. Motivated by this, we propose a dynamic auto-condition training method as a novel curriculum learning strategy to alleviate the error accumulation of autoregressive models in long motion sequence generation. Specifically, our learning strategy gently changes the training process from a fully guided teacher-forcing scheme using the previous ground-truth movements, towards a less guided autoregressive scheme mostly using the generated movements instead, which bridges the gap between training and inference.
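A minimal PyTorch sketch of this idea is given below (our own illustration with placeholder names and sizes, not the implementation used in the experiments): the recurrent decoder of the previous section is rolled out so that, within each block, the first steps are fed the decoder's own predictions and the remaining steps are fed the ground-truth poses; the schedule that controls these two block lengths during training is specified in the next paragraph.
\begin{verbatim}
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    def __init__(self, d_y=51, d_z=64, d_s=128):
        super().__init__()
        self.cell = nn.GRUCell(d_y, d_s)             # h_i = RNN(h_{i-1}, y_{i-1})
        self.out = nn.Linear(d_s + d_z, d_y)         # y_i = [h_i ; z_i] W^S + b

    def forward(self, z, y_true, y0, pred_len, gt_len):
        h = torch.randn(y_true.size(0), self.cell.hidden_size)  # random h_0
        prev, outs = y0, []
        for i in range(y_true.size(1)):
            h = self.cell(prev, h)
            y_hat = self.out(torch.cat([h, z[:, i]], dim=-1))
            outs.append(y_hat)
            in_pred_block = i % (pred_len + gt_len) < pred_len
            prev = y_hat if in_pred_block else y_true[:, i]      # auto-condition
        return torch.stack(outs, dim=1)

dec = TinyDecoder()
z, y = torch.randn(2, 30, 64), torch.randn(2, 30, 51)    # (batch, frames, features)
y0 = torch.zeros(2, 51)                                  # stand-in for the BOP
loss = (dec(z, y, y0, pred_len=4, gt_len=2) - y).abs().mean()    # l1 objective
\end{verbatim}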
As shown in Figure \ref{fig:overview}, the decoder predicts $Y_{tgt} = \{y_1,...,y_{p},y_{p+1},...,y_{p+q},y_{p+q+1},... \}$ from $Y_{in}=\{y_0,\hat{y}_1,...,\hat{y}_p,y_{p+1},...,y_{p+q},\hat{y}_{p+q+1},... \}$, which alternates the predicted sub-sequences of length $p$ (e.g., $\hat{y}_{1},...,\hat{y}_{p}$) and the ground-truth sub-sequences of length $q$ (e.g., $y_{p+1},...,y_{p+q}$). $y_0$ is a predefined begin of pose (BOP). During the training phase, we fix $q$ and gradually increase $p$ according to a growth function $f(t) \in \{const, \lfloor \lambda t \rfloor, \lfloor \lambda t^2 \rfloor, \lfloor \lambda e^t \rfloor\}$, where $t$ is the number of training epochs and $\lambda<1$ controls how frequently $p$ is updated. We choose $f(t)=\lfloor \lambda t \rfloor$ in practice through empirical comparison. Note that our learning strategy degenerates to the method proposed in \cite{li2017auto} when $f(t)=const$. The advantage of a dynamic strategy over a static one is that the former introduces the spirit of curriculum learning to dynamically increase the difficulty of the curriculum (e.g., a larger $p$) as the model capacity improves during training, which can further improve the model capacity and alleviate the error accumulation of autoregressive predictions. Finally, we estimate the parameters of $g(\cdot)$ by minimizing the $\ell_1$ loss on $\mathcal{D}$: \begin{equation} \ell_1=\frac{1}{N} \sum_{i=1}^{N} {|| g(X_i) - Y_{tgt}^{(i)} ||}_1 \end{equation}

\section{Related Work}
\paragraph{Cross-Modal Learning.} Most existing works focus on the modeling between vision and text, such as image captioning \citep{lu2017knowing,xu2015show} and text-to-image generation \citep{reed2016generative,zhang2017stackgan}. There are some other works that study the translation between audio and text, such as Automatic Speech Recognition (ASR) \citep{hinton2012deep} and Text-To-Speech (TTS) \citep{oord2016wavenet}. In contrast, the modeling between audio and vision is largely unexplored, and music-conditioned dance generation is a typical cross-modal learning problem from audio to vision.

\paragraph{Human Motion Prediction.} Prediction of human motion dynamics has been a challenging problem in computer vision, which suffers from high spatial-temporal complexity. Existing works \citep{chan2019everybody, wang2018video} represent the human pose as 2D or 3D body keyjoints \citep{cao2017realtime} and address the problem via sequence modeling. Early methods, such as hidden Markov models \citep{lehrmann2014efficient}, Gaussian processes \citep{wang2006gaussian} and restricted Boltzmann machines \citep{taylor2007modeling}, have to balance model capacity and inference complexity due to complicated training procedures. Recently, neural networks have come to dominate human motion modeling. For instance, \cite{fragkiadaki2015recurrent} present LSTM-3LR and Encoder-Recurrent-Decoder (ERD) as two recurrent architectures for the task; \cite{jain2016structural} propose a structural-RNN to model human-object interactions in a spatio-temporal graph; and \cite{ghosh2017learning} equip LSTM-3LR with a dropout autoencoder to enhance long-term prediction. Besides, convolutional neural networks (CNNs) have also been utilized for human motion prediction \citep{li2018convolutional}.
\paragraph{Audio-Conditioned Dance Generation.} In the research on audio-conditioned dance generation, most existing works study 2D dance motion generation with music, since the training data of paired 2D poses and music can be extracted from the huge amount of dance videos available online. Various methods have been proposed to handle this task, such as adversarial-learning-based methods \citep{lee2019dancing,sun2020deepdance,ferreira2021learning}, autoencoder methods \citep{tang2018dance} and sequence-to-sequence methods \citep{lee2018listen,ren2019music,yalta2019weakly,ye2020choreonet}. However, these works mainly focus on exploring different neural architectures and overlook the freezing-motion issue in dance motion synthesis. In this work, we first propose a novel seq2seq architecture to model the fine-grained correspondence between music and dance, and then introduce a novel curriculum learning strategy to address the freezing-motion issue caused by error accumulation \citep{li2017auto} in long-term dance motion generation.
\section{Symmetries and band structure for different $t_2/t$}
The tight-binding part of our Hamiltonian on a honeycomb lattice has the following form \begin{equation} \label{Seq:tb} H_{0} = -t\sum_{\langle ij \rangle l \sigma}\left(c^{\dagger}_{il\sigma}c_{jl\sigma}+h.c. \right)-t_2\sum_{\langle ij \rangle' l\sigma} \left( i^{2l-1}c^{\dagger}_{il\sigma}c_{jl\sigma} + h.c. \right), \end{equation} where $t$ is the hopping amplitude between the nearest-neighbor sites $\langle \cdots \rangle$, $t_2$ is the hopping amplitude between the fifth-neighbor sites $\langle \cdots \rangle '$, and $i^{2l-1}$ sets the correct phase factor for orbital $l$, according to Ref.~\cite{koshino2018}. As there are two orbitals $l=1,2$ and two sites in one unit cell, there are four bands (not counting spin) in the Brillouin zone (BZ).

\begin{figure}[h!] \includegraphics[width=0.4\columnwidth]{modelonelayer.pdf} \caption{ A two-orbital extended Hubbard model on a honeycomb lattice. $t$ is the nearest-neighbor hopping, $t_2$ is the fifth-neighbor hopping, and $U$ is the cluster charge interaction.} \label{Sfig:model} \end{figure}

For generic $t_2/t$, $H_0$ has the symmetry group $G=D_3 \times U(1) \times SU(2) \times T \times \mathcal{Z}_2$, where $U(1)$ stems from the charge conservation of each orbital, $SU(2)$ is the spin rotational symmetry, $T$ is time-reversal symmetry, and $\mathcal{Z}_2$ is a combination of a twofold rotation in real space and a chirality flip in orbital space. In the limit $t_2/t=0$, $H_0$ acquires an even higher symmetry $D_3 \times SU(4) \times T \times \mathcal{Z}_2$.

The band structure of $H_0$ is shown in Fig.~\ref{Sfig:bandstructure}. When $t_2/t = 0$, the orbital components are degenerate, so the band structure in Fig.~\ref{Sfig:bandstructure}(a) is the same as that of the nearest-neighbor-hopping honeycomb lattice, i.e., graphene, with Dirac cones at momenta $\mathbf{K}$ and $\mathbf{K'}$ in the BZ. Increasing $t_2/t$ breaks the orbital degeneracy of the upper and lower bands within $\Gamma$-M, and four bands appear in this segment of the high-symmetry path, with the two inner ones moving towards each other and eventually touching at $t_2/t\approx 0.31$, as shown in Fig.~\ref{Sfig:bandstructure}(b). Upon further increasing $t_2/t$, these two bands cross, as shown in Fig.~\ref{Sfig:bandstructure}(c), resulting in an uncommon Fermi surface (FS), namely the blue stretched circles close to the M points of two neighboring BZs, as shown in Fig.~\ref{Sfig:bandstructure}(e). The parallel segments of these FS-s contribute a high density of states to the tight-binding model. Under the cluster charge interaction $U$, as shown in Fig.~\ref{Sfig:bandstructure}(d), the crossing points along $\Gamma$-M are gapped out first; consequently, these FS-s disappear, resulting in a jump in the kinetic energy of the system, with the corresponding $U$ values highlighted as the dashed line in the $t_2/t$--$U/t$ phase diagram in Fig.~1 of the main text. After the jump, the FS only contains the Dirac cones at $\mathbf{K}$ and $\mathbf{K'}$, and only upon further increasing $U/t$ are the Dirac cones gapped out as well, through the $N=4$ Gross-Neveu chiral XY transition at $U_c$, as discussed in the main text.

\begin{figure}[h!] \includegraphics[width=0.9\columnwidth]{combine.pdf} \caption{(a)-(c) The band structure of the tight-binding model in Eq.~\ref{Seq:tb} for different $t_2/t$. Dirac points are at $\mathbf{K}$ and $\mathbf{K'}$; the crossing of bands within $\Gamma$--$M$ happens at $t_2/t=0.31$.
(d) The single-particle gap, $\Delta_{sp}$, along the high-symmetry path of the BZ obtained from PQMC at $t_2/t = 0.5$ for the $L=18$ system at different $U/t$. At $U=0$, due to the crossing within $\Gamma$-$M$, as shown in (c), $\Delta_{sp}$ is zero at $\mathbf{K}$, $\mathbf{K'}$ and the two crossing points. As $U/t$ increases, the crossing points are first gapped out at the $U$ value highlighted by the dashed line in Fig.~1 of the main text, and a further increase of $U$ eventually gaps out the Dirac cones at $\mathbf{K}$ and $\mathbf{K'}$ at $U=U_c$. (e) The FS of the non-interacting system at $t_2/t=0.5$, with the red hexagon marking the first BZ. The pockets close to the $\mathbf{M}$ points come from the encircling of the dispersion within $\Gamma$-M.} \label{Sfig:bandstructure} \end{figure}

\section{Projection QMC method and absence of sign-problem}
Since we are interested in the ground-state properties of the system, the projection QMC is the method of choice~\cite{Assaad2008}. In PQMC, one can obtain the ground-state wave function $\vert \Psi_0 \rangle$ by projecting a trial wave function $\vert \Psi_T \rangle$ along the imaginary-time axis, $\vert \Psi_0 \rangle = \lim\limits_{\Theta \to \infty} e^{-\frac{\Theta}{2} \mathbf{H}} \vert \Psi_T \rangle$; observables can then be calculated as \begin{equation} \label{eq:observablepqmc} \langle \hat{O} \rangle = \frac{\langle \Psi_0 \vert \hat{O} \vert \Psi_0 \rangle}{\langle \Psi_0 \vert \Psi_0 \rangle} = \lim\limits_{\Theta \to \infty} \frac{\langle \Psi_T \vert e^{-\frac{\Theta}{2} \mathbf{H}} \hat{O} e^{-\frac{\Theta}{2} \mathbf{H}} \vert \Psi_T \rangle}{\langle \Psi_T \vert e^{-\Theta \mathbf{H}} \vert \Psi_T \rangle} . \end{equation} To evaluate the overlaps in the above equation, we perform a Trotter decomposition in which $\Theta$ is discretized into $L_\tau$ slices ($\Theta=L_\tau \Delta\tau$). Each slice $\Delta\tau$ is small and the systematic error is $\mathcal{O}(\Delta\tau^2)$. After the Trotter decomposition, we have \begin{equation} \langle\Psi_{T}|e^{-\Theta H}|\Psi_{T}\rangle=\langle\Psi_{T}|\left(e^{-\Delta\tau H_{U}}e^{-\Delta\tau H_{0}}\right)^{L_\tau}|\Psi_{T}\rangle+\mathcal{O}(\Delta{\tau}^{2}) \end{equation} where the non-interacting and interacting parts of the Hamiltonian are separated. To treat the interacting part, one usually employs a Hubbard-Stratonovich (HS) transformation to decouple the quartic fermion interaction term into fermion bilinears coupled to auxiliary fields. For the cluster charge interaction in the TBG model, we make use of a fourth-order $SU(2)$-symmetric decoupling~\cite{Assaad2008,xu2018kekule} \begin{equation} e^{-\Delta\tau U(Q_{\varhexagon}-4)^{2}}=\frac{1}{4}\sum_{\{s_{\varhexagon}\}}\gamma(s_{\varhexagon})e^{\alpha\eta(s_{\varhexagon})\left(Q_{\varhexagon}-4\right)} \label{eq:decompo} \end{equation} with $\alpha=\sqrt{-\Delta\tau U}$, $\gamma(\pm1)=1+\sqrt{6}/3$, $\gamma(\pm2)=1-\sqrt{6}/3$, $\eta(\pm1)=\pm\sqrt{2(3-\sqrt{6})}$, $\eta(\pm2)=\pm\sqrt{2(3+\sqrt{6})}$, and the sum is taken over the auxiliary field $s_{\varhexagon}$ on each hexagon, which can take the four values $\pm1$ and $\pm2$.
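As a quick stand-alone consistency check of this discrete decoupling (added here purely for illustration; it is not part of the simulation code), one can verify numerically that, for small $\Delta\tau U$, the weighted four-field sum reproduces $e^{-\Delta\tau U x^{2}}$ for real test values $x$ playing the role of $Q_{\varhexagon}-4$:
\begin{verbatim}
import numpy as np

dtau, U = 0.05, 1.0
alpha = np.sqrt(complex(-dtau * U))        # purely imaginary for repulsive U
gamma = {1: 1 + np.sqrt(6) / 3, 2: 1 - np.sqrt(6) / 3}
eta = {1: np.sqrt(2 * (3 - np.sqrt(6))), 2: np.sqrt(2 * (3 + np.sqrt(6)))}

for x in (-4, -2, 0, 1, 3):                # test values of Q_hex - 4
    lhs = np.exp(-dtau * U * x ** 2)
    rhs = 0.25 * sum(gamma[abs(s)] * np.exp(alpha * np.sign(s) * eta[abs(s)] * x)
                     for s in (-2, -1, 1, 2))
    print(x, round(lhs, 6), round(rhs.real, 6))   # imaginary parts cancel pairwise
\end{verbatim}
The two columns agree to high accuracy for small $\Delta\tau U$, with deviations of higher order in $\Delta\tau U$.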
After tracing out the free-fermion degrees of freedom, we obtain the following formula, with a constant factor omitted, \begin{equation} \langle\Psi_{T}|e^{-\Theta H}|\Psi_{T}\rangle=\sum_{\{s_{\varhexagon,\tau}\}}\left[\left(\prod_{\tau}\prod_{\varhexagon}\gamma(s_{\varhexagon,\tau})e^{-4\alpha\eta(s_{\varhexagon,\tau})}\right)\det\left[P^{\dagger}B(\Theta,0)P\right]\right] \label{eq:mcweight} \end{equation} where $P$ is the coefficient matrix of the trial wave function $|\Psi_T\rangle$. In the simulation, we choose the ground-state wavefunction of the half-filled non-interacting system (described by $H_0$) as the trial wave function. In the above formula, the $B$ matrix is defined as \begin{equation} B(\tau+1,\tau)=e^{V[\{s_{\varhexagon,\tau}\}]}e^{-\Delta\tau K} \end{equation} and has the property $B(\tau_3,\tau_1)=B(\tau_3,\tau_2)B(\tau_2,\tau_1)$, where we have written the coefficient matrix of the interaction part as $V[\{s_{\varhexagon,\tau}\}]$ and $K$ is the hopping matrix from $H_0$. The Monte Carlo sampling of the auxiliary fields is then performed based on the weight defined in the sum of Eq.~\eqref{eq:mcweight}. The measurements are performed near $\tau=\Theta/2$. Single-particle observables are measured from the Green's function directly, and many-body correlation functions are measured from products of single-particle Green's functions according to their corresponding form after Wick decomposition. The equal-time Green's function is calculated as \begin{equation} G(\tau,\tau)=1-R(\tau)\left(L(\tau)R(\tau)\right)^{-1}L(\tau) \end{equation} with $R(\tau)=B(\tau,0)P$ and $L(\tau)=P^{\dagger}B(\Theta,\tau)$. For more technical details of the PQMC method, please refer to Refs.~\cite{blankenbecler1981monte,hirsch1985two,Assaad2008,xu2018kekule}.

At half-filling, the model is sign-problem-free, as can be seen from the following analysis. Since the model is particle-hole symmetric at half-filling, one can perform a particle-hole transformation only for the orbital $l=2$; such a transformation changes the cluster charge operator $Q_{\varhexagon} \equiv Q_{\varhexagon}^1 + Q_{\varhexagon}^2$ to $Q_{\varhexagon}' \equiv Q_{\varhexagon}^1 - Q_{\varhexagon}^2$ in the Hamiltonian. Here $Q_{\varhexagon}^{1}$/$Q_{\varhexagon}^{2}$ is the cluster charge operator for orbital 1/orbital 2. Due to the specific form of Eq.~\eqref{eq:decompo}, the fermion bilinears after the HS transformation are invariant under the antiunitary transformation \begin{equation} \mathcal{U} = i\sigma_0 \tau_2 \mathcal{K}, \end{equation} where $\sigma_0$ is the identity in spin space, $\tau_2$ is the second Pauli matrix in orbital space, and $\mathcal{K}$ denotes complex conjugation. As all the fermion bilinears obey the above antiunitary symmetry and $\mathcal{U}^2=-1$, these properties of our model and of the HS decoupling give rise to a sufficient condition for sign-problem-free Monte Carlo simulations, as proved in Ref.~\cite{wu2005suff}.

\begin{figure}[h!] \includegraphics[width=0.9\columnwidth]{vbsmf.pdf} \caption{(a) Bond modulation with $(1+\delta)t$ for red bonds and $(1-\delta)t$ for blue bonds. We define $m\equiv 2\delta$. The Brillouin zone is folded to a smaller one (blue dashed line) when $m \ne 0$. (b)-(d) Band structure with $m=-0.2$, $m=0$, and $m=0.2$.} \label{Sfig:vbsmf} \end{figure}

\section{$k \cdot p$ Hamiltonian for VBS phase}
In the VBS phase, we consider a mean-field description with bond charge order, ignoring the spin and orbital degrees of freedom. The static VBS order can then be defined as a modulation of the nearest-neighbor hopping.
As shown in the inset of Fig.~1 of the main text, there are two kinds of bonds, with hopping magnitudes $(1+\delta)t$ and $(1-\delta)t$. As such a bond modulation enlarges the unit cell to $\sqrt{3}\times\sqrt{3}$, the tight-binding Hamiltonian in the enlarged unit cell reads \begin{equation} h(k)=-\tilde{t}\left(\begin{array}{cccccc} 0 & (1+m)e^{-ik_{y}} & 0 & e^{-i\frac{\sqrt{3}}{2}k_{x}+\frac{i}{2}k_{y}} & 0 & e^{i\frac{\sqrt{3}}{2}k_{x}+\frac{i}{2}k_{y}}\\ (1+m)e^{ik_{y}} & 0 & e^{i\frac{\sqrt{3}}{2}k_{x}-\frac{i}{2}k_{y}} & 0 & e^{-i\frac{\sqrt{3}}{2}k_{x}-\frac{i}{2}k_{y}} & 0\\ 0 & e^{-i\frac{\sqrt{3}}{2}k_{x}+\frac{i}{2}k_{y}} & 0 & (1+m)e^{i\frac{\sqrt{3}}{2}k_{x}+\frac{i}{2}k_{y}} & 0 & e^{-ik_{y}}\\ e^{i\frac{\sqrt{3}}{2}k_{x}-\frac{i}{2}k_{y}} & 0 & (1+m)e^{-i\frac{\sqrt{3}}{2}k_{x}-\frac{i}{2}k_{y}} & 0 & e^{ik_{y}} & 0\\ 0 & e^{i\frac{\sqrt{3}}{2}k_{x}+\frac{i}{2}k_{y}} & 0 & e^{-ik_{y}} & 0 & (1+m)e^{-i\frac{\sqrt{3}}{2}k_{x}+\frac{i}{2}k_{y}}\\ e^{-i\frac{\sqrt{3}}{2}k_{x}-\frac{i}{2}k_{y}} & 0 & e^{ik_{y}} & 0 & (1+m)e^{i\frac{\sqrt{3}}{2}k_{x}-\frac{i}{2}k_{y}} & 0 \end{array}\right) \end{equation} where $m\equiv 2\delta$ and $\tilde{t}=(1-\delta)t$. Fig.~\ref{Sfig:vbsmf} shows the band structure for different values of the mass term $m$. We can see that the bond modulation folds the Dirac points K and K' of the original BZ to the $\Gamma$ point of the VBS BZ (small BZ) and opens a gap when $m\ne 0$; the gap is exactly $m$. This can be seen more explicitly if we expand the above tight-binding Hamiltonian around the $\Gamma$ point and perform a further down-folding from six bands to four bands with the unitary transformation \begin{equation} U=\left(\begin{array}{ccc} -1 & -1 & 1\\ 0 & 1 & 1\\ 1 & 0 & 1 \end{array}\right)\otimes \tau^{0}. \end{equation} Then we get a $k \cdot p$ Hamiltonian $H_{\text{eff}}(\vec{k}) = -t\left( \tilde{\vec{k}}\cdot \tilde{ \vec{s}}\tau^2 + ms^0 \tau^1 \right)$, where only the terms linear in $k$ and $m$ are retained. We change the basis slightly ($\tau^1 \rightarrow \tau^3$ and $\tau^2 \rightarrow \tau^2$) and obtain the final form shown in Eq.~(4) of the main text. For convenience, we also show it here: \begin{equation} \label{Seq:kp} H_{\text{eff}}(\vec{k}) = -t\left( \tilde{\vec{k}}\cdot \tilde{ \vec{s}}\tau^2 + ms^0 \tau^3 \right), \end{equation} where the momentum $\tilde{\vec{k}}\equiv (\frac 3 2 k_y, \frac {\sqrt{3}} {2} ik_x, \sqrt{3}k_x)$ and the vector $\tilde{\vec{s}}=(s^1, s^2, s^3)$. Here $s^i$ and $\tau^i$ are Pauli matrices acting in two different spaces. From the $k\cdot p$ Hamiltonian, it is clear that the bond modulation plays the role of a mass term, and there is a sign change in it across the pVBS-cVBS transition.

\section{Critical exponents for the 3D $N=4$ Gross-Neveu chiral XY universality class}
We compare the critical exponents for the 3D $N=4$ Gross-Neveu chiral XY universality class calculated by different methods. As shown in Table~\ref{Stable:exponents}, the critical exponents we obtain are comparable with existing results. We remark that there are still differences between the critical exponents obtained from different methods. Much more effort is required in the development of advanced analytical techniques for higher-order calculations, as well as of technically sound numerical methods that can reach much larger system sizes.
\begin{table}[ht] \caption{Comparison of critical exponents for the 3D $N=4$ Gross-Neveu chiral XY universality class calculated by different methods.} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{c c c } \hline\hline $N=4$ & $\eta$ & $\nu$ \\ [0.0ex] \hline Monte Carlo, \it this work & \ \ 0.80(2) & \ \ 1.01(3) \\ [0.0ex] \hline $4-\epsilon$, four loop, $P_{[2/2]}$~\cite{zerf2017four} & \ \ 0.929 & \ \ 1.130 \\ [0.0ex] \hline $4-\epsilon$, four loop, $P_{[3/1]}$~\cite{zerf2017four} & \ \ 0.911 & \ \ 1.130 \\ [0.0ex] \hline functional RG (LPA')~\cite{classen2017fluctuation} & \ \ 0.946 & \ \ 1.082 \\ [0.0ex] \hline $4-\epsilon$, two loop~\cite{rosenstein1993critical} & \ \ 0.82 & \ \ 0.97 \\ [0.0ex] \hline Monte Carlo~\cite{li2017fermion} & \ \ 0.80(4) & \ \ 1.11(3) \\ [0.0ex] \hline large-$N$~\cite{hands1993} & \ \ 0.932 & \ \ 1.135 \\ [0.0ex] \hline \end{tabular} \label{Stable:exponents} \end{table} \end{document}
\section{Five Zones of Debris Dust} Great advances are being made in the observations of debris disks using all available platforms, revealing enduring patterns in the behavior of debris disk structure and its evolution that point toward general aspects of planetary system formation and development that are not accessible by any other means. Although each individual resolved disk appears to be different from the others, by combining multi-wavelength observations from many different ground- and space-based facilities, debris disk structures can be broadly characterized by five different zones at roughly four different temperatures, with thermal emission peaking at four different wavelengths (see Fig.\ \ref{fig1}). A majority of disks possess dense, cold ($\sim$50 K), Kuiper-belt-analog dust that emits prominently in the far-infrared near 70 $\mu$m. Closer to the star, $\sim$20\% of the debris systems have warm ($\sim$150 K), asteroid-belt-analog dust near the water-ice line (Ballering et al.\ 2013). Even closer, dust in the terrestrial zone has a temperature of $\sim$300 K with emission peaked around 10 $\mu$m. Some debris systems even have $\sim$1500 K, very hot excess emission that has been detected through ground-based near-infrared interferometric techniques. Finally, the fifth zone is a large halo, an extended structure surrounding the aforementioned four zones that only contains small grains. Not all debris disks have these five zones. Furthermore, except for the one at the water-ice line, the temperatures of the zones and hence their spectral energy distributions (SEDs) depend on the exact locations of the leftover planetesimal belts and the shepherding planets. Therefore, the variation in these zones around different stars is probably related to the various paths by which a system forms and evolves.

Since the majority of debris systems are not spatially resolved, it is not trivial to convert the observed properties, like measured dust temperatures, to physical parameters, like the location of planetesimal belts. The dust temperatures can be well constrained from well-sampled (on both the Wien and Rayleigh-Jeans sides of the emission) disk SEDs from the mid- to far-infrared. Under the additional assumption that grains in a debris disk are generated in collisional cascades, which generally result in a steep size distribution (Gaspar et al.\ 2012), the observed (optically thin) disk emission is sensitive to the opacity of the disk, which is dominated by small grains. Ideally, we can infer the location of the dust from the observed dust temperature, the average grain size in the disk, and its optical properties. Using {\it Spitzer} IRS spectra and broad-band 70/100 $\mu$m photometry from {\it Spitzer} and {\it Herschel}, roughly a quarter of debris systems require two (warm and cold) temperatures to fit the disk emission. Although not all these two-temperature debris systems have two distinct dust belts, the ones that have resolved disk images at multiple wavelengths strongly suggest two separate planetesimal belts, as we discuss below.
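The correspondence between these characteristic dust temperatures and the wavelengths at which their thermal emission peaks can be illustrated with a simple Wien's-law estimate (an illustrative blackbody calculation added here, not taken from the observations; real debris grains emit inefficiently at long wavelengths, which shifts the true peaks somewhat):
\begin{verbatim}
# Blackbody estimate via Wien's displacement law: lambda_peak = b / T,
# with b = 2898 micron*K, for the four characteristic dust temperatures.
b = 2898.0  # micron * K
for T in (1500, 300, 150, 50):   # very hot, terrestrial, warm, cold dust
    print(f"T = {T:4d} K  ->  lambda_peak ~ {b / T:5.1f} micron")
# ~1.9, ~9.7, ~19 and ~58 micron: near-infrared, ~10 micron, mid-infrared
# and far-infrared (near 70 micron), as described above.
\end{verbatim}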
\begin{figure}[hbt] \vspace*{-0.5 cm} \begin{center} \includegraphics[width=4.72in]{su_fig1.eps} \vspace*{-0.8 cm} \caption{Illustration of the five zones of debris dust.} \label{fig1} \end{center} \end{figure} \vspace{-0.3cm} \section{Nearby Resolved Debris Disk Structures} {\underline{\bf The HR 8799 System}} Among these two-temperature resolved disks, the most famous one is the HR 8799 system, which has been under the spotlight since the discovery of its four massive planets by direct imaging (Marois et al.\ 2010). Besides the massive planets, the system also has a lot of dust generated from leftover planetesimals, predominantly at two characteristic dust temperatures: $\sim$150 K and $\sim$45 K (Su et al.\ 2009). Figure \ref{fig2} shows the disk images at 24 and 70 $\mu$m along with the disk SED. At 24 $\mu$m, the emission is unresolved and dominated by the material closer to the star. At 70 $\mu$m, the disk is resolved as an elliptical ring with a position angle of 60\arcdeg, indicating the system is slightly inclined by 10\arcdeg--25\arcdeg\ from face-on. From these images and the dust temperatures measured from the SED, we can estimate the radial distances of these two belts. The four massive planets lie between the two dust belts, as expected if planetary perturbations maintain their structures and create the dust-free zone between them.

\begin{figure}[thb] \begin{center} \includegraphics[width=\linewidth]{su_fig2.eps} \vspace*{-0.3 cm} \caption{Multi-wavelength images of the HR 8799 system along with the disk SED.} \label{fig2} \end{center} \end{figure}

{\underline{\bf The Fomalhaut System}} Fomalhaut has a prominent cold ($\sim$50 K) excess, dominated by dust generated in a narrow planetesimal belt at $\sim$140 AU, well resolved at multiple wavelengths (e.g.\ Kalas et al.\ 2005; Acke et al.\ 2012; Boley et al.\ 2012). The unresolved warm excess near the star was first discovered in the {\it Spitzer} 24 $\mu$m image by Stapelfeldt et al.\ (2004), and later confirmed by {\it Herschel} at 70 $\mu$m (Acke et al.\ 2012). Re-analysis of the {\it Spitzer} IRS spectrum centered at the star confirms that the unresolved excess emission arises from dust emission at a temperature of $\sim$170 K, an asteroid-belt analog near the water-ice line (Su et al.\ 2013). Using interferometric observations, the Fomalhaut system was found to possess very hot K-band and hot N-band excesses at the levels of 0.88$\pm$0.12\% and 0.35$\pm$0.10\% above the photosphere (Absil et al.\ 2009; Mennesson et al.\ 2013). Recently, Lebreton et al.\ (2013) proposed that these interferometric excesses are due to two populations of grains located at $\sim$0.1--0.3 AU ($\sim$2000 K) and $\sim$2 AU ($\sim$400 K). Although their model can explain the N-band null depths, the model SED significantly underestimates the amount of the K-band excess (by 25\%), and overestimates the excess in the IRS spectrum (i.e., by $\gtrsim$3 $\sigma$ for wavelengths of 15--30 $\mu$m).

{\underline{\bf The Vega System}} Vega and Fomalhaut are often referred to as debris disk twins since they are both A-type stars, located less than 8 pc away, halfway through their main-sequence lifetimes, and, most importantly, they both have far-infrared excesses first discovered by {\it IRAS} 30 years ago, arising from dust particles in an enhanced Kuiper-belt analog.
Su et al.\ (2013) present the {\it Spitzer} IRS spectrum taken at the star position and a re-analysis of the {\it Herschel} 70 $\mu$m image, and suggest the presence of an unresolved warm excess in the vicinity of the star with a dust temperature of $\sim$170 K, roughly the temperature at which water ice sublimates. With other ancillary data, they confirm that Vega, just like Fomalhaut, also has an asteroid-belt analog.

{\underline{\bf Other Solar-like Systems}} $\epsilon$ Eri, like Fomalhaut and Vega, was found to possess a prominent far-infrared excess revealed by {\it IRAS}. {\it Spitzer} imaging and spectroscopic data combined with a detailed SED model further suggest a complex debris structure, with multiple zones in both warm ($\sim$150 K) and cold ($\sim$50 K) components (Backman et al.\ 2009). Using the same technique (SED and resolved cold component), the warm and cold configuration is also seen in HD 107146 (Hughes et al.\ 2011) and HD 61005 (Hughes et al., this volume).

\vspace{-0.2cm} \section{Implications} In our Solar System, the minor bodies that failed to form planets are arranged and sculpted by the planets over the course of 4.5 Gyr of evolution. From the resonance structures observed in the Kuiper belt, we know that the outer giant planets have migrated in the past (e.g., Malhotra 1993), i.e., they did not form in situ. The inner edge of the Kuiper belt's dusty disk is believed to be maintained by Neptune (Liou \& Zook 1999), whereas the outer edge of the asteroid belt is maintained by the 2:1 orbital resonance with Jupiter (Kirkwood 1867; Dermott \& Murray 1983). Any debris system with similar structures is possibly signaling the presence of planets (e.g.\ the HR 8799 system). In the resolved systems discussed above, the orbital ratios ($\sim$10, consistent with the estimated dust temperatures) between the inner warm and outer cold belts suggest a large gap in the minor-body population. This large, mostly dust-free zone may be maintained by one or multiple planet-mass objects through dynamical interaction, just as in the HR 8799 system. For a single planet, the width of the chaotic zone depends on the location, eccentricity and mass of the planet. Therefore, one can use the boundaries of the warm and cold dust belts to estimate the mass of the possible perturber. We use the Vega system as an example, where the warm belt is estimated to lie at $\sim$11--14 AU (Su et al.\ 2013) and the cold belt at $\sim$90--120 AU (Su et al.\ 2005, excluding the disk halo). For a single object on a circular orbit, the mass of the object has to be greater than 100 $M_J$ (a brown dwarf), which should have been found in the intensive direct imaging searches (planets with masses $\gtrsim$3--4 $M_J$ can be ruled out in the 20--70 AU range, Marois et al.\ 2006). For a single object on an eccentric orbit, the boundaries of the belts give an expected eccentricity of 0.8 (i.e., with the warm belt at pericenter and the cold belt at apocenter). An object with such a high eccentricity inside the cold belt would have significant dynamical effects on the system, forcing the leftover planetesimals onto highly eccentric orbits and creating an offset eccentric ring and/or resonance structures, which is not what is seen in the resolved disk images (Su et al.\ 2013). Giant planets are expected to form more efficiently outside the water-ice line, and these giant planets, once formed, likely experience inward or outward migration.
The discovery of hot Jupiters in the early successes of exoplanet searches is the best evidence for inward migration. Furthermore, if a giant planet did migrate inward, it likely destroyed other bodies during the course of its migration, consistent with the low incidence of warm excesses in such systems (Morales et al.\ 2012). Therefore, the existence of the terrestrial planets and the asteroid belt in our Solar System is probably due to the fact that it does not have a hot Jupiter. The same logic applies to the two-belt systems like Vega, Fomalhaut, $\epsilon$ Eri and HR 8799: having an asteroid belt might indicate a greater chance of harboring terrestrial planets compared to systems that do not have one. \vspace{-0.3cm}
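As a simple numerical illustration of the eccentric-perturber estimate quoted above for Vega (a back-of-the-envelope check added here, using representative belt radii rather than values from a fit):
\begin{verbatim}
# A single body with pericenter at the warm belt and apocenter at the cold
# belt has eccentricity e = (Q - q) / (Q + q).
q, Q = 12.5, 105.0   # representative radii in AU (~11-14 AU and ~90-120 AU)
e = (Q - q) / (Q + q)
print(f"required eccentricity ~ {e:.2f}")   # ~0.79, i.e. the ~0.8 quoted above
\end{verbatim}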
\section{Introduction} Quantum computation is computation based on the principles of quantum mechanics. Compared with classical computation, it provides an advantage in calculation speed {[}1{]}. For example, a quantum computer can simulate quantum-mechanical systems, which is very difficult for a classical computer to do {[}2{]}. Shor\textquoteright s algorithm {[}3{]} offers an exponential speedup over the best-known classical solution for factorizing big integers and solving discrete logarithm problems, and Grover\textquoteright s algorithm {[}4{]} is also much faster than the best-known classical search algorithms. However, the realization of a quantum computer is still an enormous challenge. Although quantum computers appear to be very promising, they still have a long way to go before they become popular. Consequently, in the near future only a few expensive quantum servers will be accessible to clients who have to perform quantum computation but possess only a limited quantum ability. Hence, a client may have to delegate his/her problem to a quantum server without revealing his/her information, including the input, the output and the intended computation. Blind quantum computation (BQC) protocols are particularly suitable to satisfy this requirement.

Based on the quantum circuit model, Childs {[}5{]} proposed the first BQC protocol, where the client needs quantum memory and the ability to perform the SWAP gate. Arrighi et al. {[}6{]} also proposed a BQC protocol, in which the client needs to prepare entangled states and measure them. However, these protocols are not universal, in the sense that they only work on certain classical functions, and the server can even learn partial information about the client. Broadbent et al. {[}7{]} then presented the first universal BQC protocol, where the client does not need any quantum computational ability or quantum memory except for generating rotated single photons, and the private information of the client is unconditionally secure. After that, many BQC protocols have been proposed with the intention of making the client\textquoteright s requirements more classical. Li et al. {[}8{]} proposed a triple-server BQC protocol using entangled states, and Xu et al. {[}9{]} proposed a single-server BQC protocol based on Li et al.\textquoteright s protocol. Both Li et al. and Xu et al. claimed that in their protocols the client only needs a quantum channel to receive and resend photons. However, these protocols have to assume the existence of a trusted center, which is not practical in reality. Besides, Hung et al. {[}10{]} pointed out that both Li et al.\textquoteright s and Xu et al.\textquoteright s protocols are not secure, because the server can obtain information about the client. This paper intends to design two secure BQC protocols without a trusted center. The clients in the new BQC protocols only have to either perform rotation operations or reorder particles.

The rest of this article is organized as follows. Section 2 reviews Broadbent et al.\textquoteright s BQC protocol. Section 3 proposes two BQC protocols. Section 4 analyzes the security of the two proposed protocols. Finally, a concluding remark is given in Section 5.

\section{Review of Broadbent et al.\textquoteright s BQC protocol} Before presenting our protocols, let us briefly review Broadbent et al.\textquoteright s BQC protocol.
Suppose that a client Alice with limited quantum capability wants to delegate a quantum problem to a quantum server Bob on the $m$-qubit graph state corresponding to a graph $G$, without revealing any information about the input, the output or the intended computation. Broadbent et al.\textquoteright s protocol can be briefly described as follows. \begin{description} \item [{Step$\,$1.}] Alice prepares $m$ qubits and sends them to Bob. The state of each qubit is $\left|\theta_{i}\right\rangle =\left|0\right\rangle +e^{i\theta_{i}}\left|1\right\rangle \left(i=1,2,...,m\right)$, where $\theta_{i}$ is selected randomly from the set $S=\left\{ k\pi/4|k=0,1,...,7\right\} $. \item [{Step$\,$2.}] Alice asks Bob to generate a brickwork state according to the graph $G$ specified by her. \item [{Step$\,$3.}] According to the graph $G$, Bob produces a brickwork state $\left|G\left(\theta\right)\right\rangle $ by applying CTRL-Z gates between the qubits sent from Alice. \item [{Step$\,$4.}] Alice sends $\delta_{i}=\theta_{i}+\phi_{i}'+r_{i}\pi$ to Bob, where $r_{i}\in\left\{ 0,1\right\} $ is randomly selected by Alice and $\phi_{i}'$ is a modification of $\phi_{i}$ that depends on the previous measurement outcomes. Then Bob can measure the $i$th qubit $\left(i=1,2,...,m\right)$ of $\left|G\left(\theta\right)\right\rangle $. \item [{Step$\,$5.}] Bob performs a measurement on the $i$th qubit $\left(i=1,2,...,m\right)$ in the basis $\left\{ \left|\pm\delta_{i}\right\rangle \right\} $ and sends Alice the measurement result. \item [{Step$\,$6.}] Alice can get the computation output from the measurement results. \end{description} In Broadbent et al.\textquoteright s protocol, Alice needs the ability to generate rotated single photons. However, generating rotated single photons is still considered to be very difficult for a client.

\section{Proposed BQC protocols} This section proposes two BQC protocols. Each reduces the client\textquoteright s quantum ability to only performing rotation operations on the particles or to only reordering the particles. The details of these two protocols are described in Sec. 3.1 and Sec. 3.2, respectively.

\subsection{The first proposed protocol} \begin{description} \item [{Step$\,$1.}] Bob prepares $m$ qubits and sends them one by one to Alice. The state of each qubit is $\left|+\right\rangle $. \item [{Step$\,$2.}] Alice performs the rotation operation $Z_{\theta_{i}}$ on each qubit received from Bob, where $\theta_{i}$ is selected randomly from the set $S$. \item [{Step$\,$3.}] Alice sends the qubit back to Bob. \item [{Step$\,$4.}] Since Bob now holds the $m$ qubits $\otimes_{i=1}^{m}\left|\theta_{i}\right\rangle \left(i=1,2,...,m\right)$ and only Alice knows the values of $\theta_{i}$, Alice can run Broadbent et al.\textquoteright s single-server BQC protocol from Step 2 (Bob\textquoteright s preparation) onward to delegate the quantum problem to Bob. \end{description}

\subsection{The second proposed protocol} \begin{description} \item [{Step$\,$1.}] Bob generates $m$ Bell pairs $\left|\psi_{0,0}\left(B_{k},A_{k}\right)\right\rangle =\frac{1}{\sqrt{2}}\left(\left|00\right\rangle +\left|11\right\rangle \right)\left(k=1,2,...,m\right)$ and sends the particle $A_{k}$ of each Bell state to Alice. \item [{Step$\,$2.}] After Alice receives all the particles sent from Bob, she reorders them and sends them back to Bob.
\item [{Step$\,$3.}] Alice sends $m$ classical messages $\left\{ \theta_{i}\right\} _{i=1}^{m}$ to Bob, where $\theta_{i}$ is selected randomly from the set $S$. \item [{Step$\,$4.}] Bob measures his $m$ particles $B_{k}$ in the bases $\left\{ \pm\theta_{k}\right\} _{k=1}^{m}$ and sends the measurement results $\left\{ b_{k}\right\} _{k=1}^{m}$ to Alice. \item [{Step$\,$5.}] Upon receiving $\left\{ b_{k}\right\} _{k=1}^{m}$ from Bob, Alice knows the state of each particle $A_{k}$ held by Bob from the measurement results $\left\{ b_{k}\right\} _{k=1}^{m}$ and the reordering information. \item [{Step$\,$6.}] Since Bob has the qubits $\otimes_{i=1}^{m}\left|\theta_{i}+b_{i}\pi\right\rangle \left(i=1,2,...,m\right)$ and only Alice knows the values of $\theta_{i}$ and $b_{i}$, Alice can run Broadbent et al.\textquoteright s single-server BQC protocol from Step 2 (Bob\textquoteright s preparation) onward to delegate the quantum problem to Bob. \end{description}

\section{Security Analysis and Comparison} \subsection{Security Analysis} In this section, we analyze the security of both proposed protocols. Since the security of Broadbent et al.\textquoteright s BQC protocol has been proved, we only focus on the privacy of $\left\{ \theta_{i}\right\} _{i=1}^{m}$; if Bob obtained the $\theta$ of each particle, then he could calculate the input, the output and the intended computation. In the first proposed protocol, since Alice performs the rotation operation $Z_{\theta}$ on the particles in private and then sends them back to Bob, only Alice knows the $\theta$ of each particle. Hence, this protocol is as secure as Broadbent et al.\textquoteright s protocol. In the second proposed protocol, because Bob knows the measurement basis $\pm\theta$ and the measurement result for each particle $B_{k}$, he can calculate the state of the corresponding $A_{k}$. However, since only Alice knows the new order of the particles $\left\{ A_{k}\right\} _{k=1}^{m}$ sent from Alice to Bob, Bob cannot tell which two particles are entangled with each other. Hence, Bob cannot know the state of each $A_{k}$, and this protocol is as secure as Broadbent et al.\textquoteright s protocol.

\subsection{Comparison} In this sub-section, we compare Broadbent et al.\textquoteright s BQC protocol and the two proposed BQC protocols (see also Table 1). In Broadbent et al.\textquoteright s protocol, the client needs the ability to generate rotated single qubits; in the first proposed protocol, the client\textquoteright s ability is reduced to performing rotation operations; in the second proposed protocol, the client only needs to reorder the particles. However, the second proposed protocol pays a cost to achieve this reduction of the client\textquoteright s ability: it needs additional devices to prevent Trojan horse attacks, and its qubit efficiency is lower than that of Broadbent et al.\textquoteright s protocol and the first proposed protocol.
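As a small numerical illustration of the privacy of $\theta$ discussed in Sec. 4.1 (a stand-alone check added here, not part of the protocols themselves), one can verify that the rotated states are indistinguishable to Bob without knowledge of $\theta$: averaged over the eight equally likely values in $S$, the state he holds is maximally mixed.
\begin{verbatim}
import numpy as np

# Applying Z_theta = diag(1, e^{i*theta}) to |+> yields |theta>; averaging
# |theta><theta| over the eight values of theta in S gives I/2, so the
# quantum state alone reveals nothing about Alice's choice of theta.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_avg = np.zeros((2, 2), dtype=complex)
for k in range(8):
    theta = k * np.pi / 4
    psi = np.diag([1.0, np.exp(1j * theta)]) @ plus   # the state |theta>
    rho_avg += np.outer(psi, psi.conj()) / 8
print(np.round(rho_avg, 12))                          # [[0.5, 0], [0, 0.5]]
\end{verbatim}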
\begin{table} \caption{Comparison of the three protocols} \begin{centering} \begin{tabular}{|c|c|c|c|} \hline & $\begin{array}{c} \textrm{Broadbent\text{\textquoteright}s}\\ \textrm{protocol} \end{array}$ & $\begin{array}{c} \textrm{The first }\\ \textrm{proposed protocol} \end{array}$ & $\begin{array}{c} \textrm{The second }\\ \textrm{proposed protocol} \end{array}$ \tabularnewline \hline \hline Client ability & generate qubits & rotation operation & reorder\tabularnewline \hline Qubit efficiency & 1/1 & 1/1 & 1/2\tabularnewline \hline Trojan horse attack & $\begin{array}{c} \textrm{Automatically}\\ \textrm{prevented} \end{array}$ & $\begin{array}{c} \textrm{Automatically}\\ \textrm{prevented} \end{array}$ & $\begin{array}{c} \textrm{Needs a device}\\ \textrm{to prevent} \end{array}$ \tabularnewline \hline \end{tabular} \par\end{centering} \end{table}

\section{Conclusions} This paper has proposed two BQC protocols for a client to delegate a quantum computation to a remote quantum server without revealing the input, the output or the intended computation. In both proposed protocols, the client needs less quantum ability than in Broadbent et al.\textquoteright s protocol. Whereas the client in the first protocol needs to perform rotation operations on the particles, the client in the second protocol only needs the ability to reorder the particles. We have shown that both proposed protocols are as secure as Broadbent et al.\textquoteright s protocol. How to further reduce a client\textquoteright s quantum ability in BQC without revealing any of the client\textquoteright s information remains a promising direction for future work.

\section*{Acknowledgment} We would like to thank the Ministry of Science and Technology of Republic of China for financial support of this research under Contract No. MOST 104-2221-E-006-102 -.
\section{Introduction} The recent discovery of superconductivity in iron-pnictides \cite{kamihara} has generated intense research activity, in part because these materials provide an unexpected and very interesting scenario in which to study the interplay between magnetism and superconductivity.\cite{reviews} The interest in these compounds is enhanced by the fact that they share many properties with the high-$T_c$ cuprates (HTSC), such as the layered structure, a similar evolution of the superconducting parameters with doping, and the proximity to a magnetic transition.\cite{reviews} As is still the case for the HTSC, the pairing mechanism in superconducting iron pnictides is not yet known, making the phenomenological description of their superconducting transition a central issue of present research on these compounds.

As first established in the pioneering experiments by M. Tinkham and coworkers in low-$T_c$ metallic superconductors,\cite{gollub} a powerful tool to probe at a phenomenological level the very nature of a superconducting transition is the diamagnetism induced above $T_c$ by thermally activated Cooper pairs.\cite{tinkham} The fluctuation diamagnetism above $T_c$ (sometimes called \textit{precursor diamagnetism}) was also used early on to characterize at a phenomenological level the superconducting transition in both optimally-doped \cite{johnston,reviewHTSC} and underdoped \cite{carballeira} high-$T_c$ cuprates. However, in spite of its interest, the precursor diamagnetism has received little attention, due both to the relative smallness of the superconducting fluctuation effects above $T_c$ in these compounds and, mainly, to the need for high-quality single crystals with a relatively sharp superconducting transition.\cite{flucpnictides}

In this paper we report measurements in superconducting iron-pnictides of the fluctuation-induced diamagnetism above $T_c$, which were made possible by applying high-resolution magnetometry to a Ba$_{1-x}$K$_x$Fe$_2$As$_2$ single crystal of excellent chemical and structural quality. These data allow an experimental demonstration that the superconducting transition of these materials may be explained at a phenomenological level in terms of the Gaussian Ginzburg-Landau approach for three-dimensional anisotropic superconductors (3D-AGL). This conclusion excludes phase-incoherent superconductivity above $T_c$ in iron-pnictides, a long-standing but still debated issue in HTSC physics.\cite{citalarga,kondo} Also, it provides a direct check of the fluctuation dimensionality: while bulk low-$T_c$ superconductors (LTSC) behave as three-dimensional (3D) and most HTSC as two-dimensional (2D),\cite{tinkham,reviewHTSC,carballeira} it has been recently proposed that iron-pnictides are in an intermediate regime as a consequence of their moderate anisotropy.\cite{tesanovic} In relation to this, another interesting aspect of our work is that the fluctuation diamagnetism is probed with magnetic fields applied along the two main crystallographic directions.
This allows direct access to the superconducting anisotropy factor, but also the study of an interesting issue recently observed in another moderately anisotropic superconductor (NbSe$_2$), namely the anisotropy of non-local electrodynamic effects.\cite{NbSe2}

\section{Experimental details and results} \subsection{Sample fabrication and characterization} \begin{figure}[b] \includegraphics[scale=.6]{fig1.EPS} \caption{(Color online) Temperature dependence of the FC and ZFC magnetic susceptibility (already corrected for demagnetizing effects) obtained with a 0.5 mT magnetic field perpendicular to the $ab$ layers. $T_c$ and its uncertainty were estimated from the maximum of $d\chi^{\rm ZFC}(T)/dT$ (solid line), and from the high-temperature half-width at half-maximum (this avoids the possible widening associated with the transit through the mixed state).} \label{figchi} \end{figure} The Ba$_{1-x}$K$_x$Fe$_2$As$_2$ sample (with $x=0.28$) used in this work is a $\sim1.1\times0.7\times 0.1$ mm$^3$ (430~$\mu$g) single crystal. Details of its growth procedure and characterization may be seen in Ref.~\onlinecite{growth}. We have chosen such a small crystal because it presents a well-defined transition temperature, which is essential to study critical phenomena. The measurements were performed with a magnetometer based on superconducting quantum interference (Quantum Design, model MPMS-XL). To position the crystal with the magnetic field, $H$, applied parallel to the crystal $ab$ layers, we used a quartz sample holder (0.3 cm in diameter, 22 cm in length) to which the crystal was glued with GE varnish. Two plastic rods at the sample holder ends ($\sim0.3$ mm smaller than the sample space diameter) ensured that its alignment was better than $0.1^\circ$. For the measurements with $H\perp ab$ we made a groove ($\sim0.3$ mm wide) in the sample holder into which the crystal was also glued with GE varnish. The crystal alignment was checked by optical microscopy to be better than $5^\circ$. This allowed us to determine the anisotropy factor from the anisotropy of the precursor diamagnetism with a $\sim0.5$\% uncertainty.\cite{errorgamma}

As a first magnetic characterization, Fig.~1 presents the temperature dependence of the zero-field-cooled (ZFC) and field-cooled (FC) magnetic susceptibilities, measured with a magnetic field of 0.5 mT perpendicular to the $ab$ planes. These measurements were corrected for demagnetizing effects by using a demagnetizing factor $D=0.78$, obtained by approximating the crystal shape by the inscribed ellipsoid. A proof of the adequacy of this procedure is that the ZFC magnetic susceptibility in the Meissner region is close to the ideal value ($-1$) well below $T_c$. From these curves we estimated $T_c\approx33.2\pm0.2$~K, which attests to the excellent quality of the sample and allows us to study the fluctuation-induced magnetization in almost the whole temperature range above $T_c$. Such a small transition width is in good agreement with the one determined from the resistive transition of crystals with the same composition and grown following the same procedure.\cite{growth}

\subsection{Measurements of the fluctuation diamagnetism around $T_c$ and background subtraction} To measure the effect of superconducting fluctuations above $T_c$ (which, according to the 3D-AGL approach described below, is about $10^{-6}$--$10^{-7}$ emu), we used the \textit{reciprocating sample option}.
It produces sinusoidal oscillations of the sample about the center of the detection system and improves the resolution by about two orders of magnitude with respect to the conventional DC option. Data were taken by waiting $\sim 3$ min for complete stabilization after the temperature was within 0.5\% of the target temperature. For each temperature we averaged six measurements, each consisting of 15 cycles at a 1 Hz frequency. The final resolution in the magnetic moment, $m$, was in the $\sim10^{-8}$ emu range.

\begin{figure}[t] \includegraphics[width=\columnwidth]{fig2.EPS} \caption{(Color online) (a) Example of the temperature dependence up to 50 K of the as-measured magnetic moment for a magnetic field applied in the two main crystallographic directions. The normal-state backgrounds (lines) were determined by fitting Eq.~(\ref{back}) in the region indicated. The shaded areas represent the precursor diamagnetism. (b) and (c) Detail around the superconducting transition. Solid (open) symbols were obtained under ZFC (FC) conditions. The diamagnetism above $T_c$ is unobservable on this scale.} \label{backs} \end{figure}

An example of the as-measured $m(T)$ data above $T_c$ is presented in Fig.~2(a). As may already be clearly seen in this figure, the $m(T)$ curves present a {\it rounding} just above $T_c$ (a factor $\sim 3$ larger in amplitude when $H \perp ab$), which extends a few degrees above $T_c$. In view of the sharp low-field diamagnetic transition (Fig.~1), such a rounding cannot be attributed to $T_c$ inhomogeneities, and is a first evidence of the presence of an anisotropic precursor diamagnetism in these materials. In the detail around $T_c$ presented in Figs.~2(b) and (c), it is clearly seen that the reversible region extends a few degrees below $T_c$, allowing us to study the \textit{critical} fluctuation regime, where $m(T)$ presents a similar anisotropy.

The fluctuation-induced contribution was determined from the as-measured $m(T)$ by subtracting the \textit{background} magnetic moment, $m_B(T)$, coming from the crystal normal state and, to a much lesser extent, from the sample holder (this last contribution was estimated to be in the 10$^{-6}$ emu range). Such a background contribution was determined by fitting the function \begin{equation} m_B(T)=a+bT+\frac{c}{T} \label{back} \end{equation} to the raw data in the temperature interval between 37 and 50 K ($a$, $b$ and $c$ are free parameters). The lower bound corresponds to a temperature above which the fluctuation-induced magnetic moment is expected to be smaller than the experimental uncertainty (see below). It is worth noting that the use of other plausible functions for $m_B(T)$, for instance $m_B(T)=a+b/(T-c)$, leads to imperceptible changes on the scale of Fig.~2(a).

\section{Data analysis} \subsection{Theoretical background} In view of the moderate anisotropy observed, the results of Fig.~\ref{backs} were analyzed in the framework of the 3D-AGL approach.
This theory predicts that the fluctuation-induced magnetization for magnetic fields applied perpendicular or parallel to the crystal $ab$ layers ($M_\perp$ or $M_\parallel$, respectively) may be obtained from the result for 3D isotropic materials, $M_{\rm iso}$, through\cite{Klemm80,Hao92,Blatter92} \begin{equation} M_\perp(H)=\gamma M_{\rm iso}(H), \label{transperp} \end{equation} and \begin{equation} M_\parallel(H)=M_{\rm iso}\left(\frac{H}{\gamma}\right)=\frac{1}{\gamma}M_\perp\left(\frac{H}{\gamma}\right), \label{transpara} \end{equation} where $\gamma$ is the superconducting anisotropy factor. This transformation was introduced by Klemm and Clem \cite{Klemm80} and generalized by Blatter \cite{Blatter92} and by Hao and Clem \cite{Hao92} to different observables and different regions in the $H-T$ phase diagram. In particular, it was shown to be also valid for the fluctuation region above $T_c$.\cite{Blatter92} For isotropic materials, the fluctuation magnetization above $T_c$ was calculated in Ref.~\onlinecite{PbIn} in the framework of a Gaussian GL approach including a \textit{total-energy} cutoff in the fluctuation spectrum.\cite{EPL_uncertainty} By combining that result with Eqs.~(\ref{transperp}) and (\ref{transpara}) it follows that \begin{eqnarray} &&M_\perp=-\frac{k_BT\mu_0H\gamma\xi_{ab}(0)}{3 \phi_{0}^2} \left[\frac{\arctan\sqrt{(c-\varepsilon)/\varepsilon}}{\sqrt{\varepsilon}}\right.\nonumber\\ &&\left.-\frac{\arctan\sqrt{(c-\varepsilon)/c}}{\sqrt{c}} \right], \label{Mperp} \end{eqnarray} and \begin{equation} M_\parallel=\frac{M_{\perp}}{\gamma^2}, \label{Mpara} \end{equation} where $\varepsilon=\ln(T/T_c)$ is the reduced temperature, $\xi_{ab}(0)$ the \textit{in-plane} coherence length amplitude, $c\approx0.55$ the cutoff constant,\cite{EPL_uncertainty} $k_B$ the Boltzmann constant, $\mu_0$ the vacuum magnetic permeability, and $\phi_0$ the magnetic flux quantum. These expressions are valid in the low magnetic field limit, i.e., for $H\ll\varepsilon H_{c2}^\perp(0)$ and, respectively, $H\ll\varepsilon H_{c2}^\parallel(0)=\varepsilon\gamma H_{c2}^\perp(0)$, where $H_{c2}^\perp(0)=\phi_0/2\pi\mu_0\xi_{ab}^2(0)$ is the upper critical field for $H\perp ab$ extrapolated linearly to $T=0$~K. In the absence of any cutoff (i.e., when $c\to\infty$) Eq.~(\ref{Mperp}) simplifies to \begin{equation} M_\perp=-\frac{\pi k_BT\mu_0H\gamma\xi_{ab}(0)}{6\phi_{0}^2}\varepsilon^{-1/2}, \label{schmidt} \end{equation} which corresponds to the well-known Schmidt result.\cite{Schmidts} Equations (\ref{Mperp}) and (\ref{Mpara}) predict the vanishing of the precursor diamagnetism at $\varepsilon=c$ (i.e., $T\approx1.7T_c\approx56$~K in the present case). This has been experimentally confirmed in a number of HTSC and LTSC,\cite{EPL_uncertainty} but in the present case the small crystal size does not allow a quantitative check of this onset temperature.\cite{note} The difference between Eq.~(\ref{Mperp}) and the conventional Schmidt approach, Eq.~(\ref{schmidt}), is dramatic at high reduced temperatures (typically above $\varepsilon=0.1$). However, it is still as high as 15\% for $\varepsilon$ as low as 0.01, which justifies the use of the total-energy approach also at low reduced temperatures. \begin{figure}[b] \includegraphics[width=\columnwidth]{fig3.EPS} \caption{(Color online) Detail around $T_c$ of the temperature dependence of the fluctuation magnetization (over $H$) for several magnetic fields applied in the two main crystallographic directions.
The lines correspond to the Gaussian 3D-AGL approach in the low magnetic field limit (solid blue for $H\perp ab$ and dashed red for $H \parallel ab$). For a better comparison, both lines are presented in (a) and (b).} \label{excess} \end{figure} \subsection{Fluctuation diamagnetism in the Gaussian region above $T_c$} A detail of the fluctuation magnetization above $T_c$ (already corrected for the normal-state contribution and normalized by the applied field) is presented in Fig.~3 for both $H\perp ab$ and $H\parallel ab$. In this representation we have only included magnetic fields up to 2 T, which are expected to be in the low magnetic field regime except in a narrow temperature interval close to $T_c$. As expected, both $M_\perp/H$ and $M_\parallel/H$ are independent of the applied magnetic field except very close to $T_c$, where finite-field effects are already observable.\cite{tinkham,PbIn,breakdown} The best fit of Eqs.~(\ref{Mperp}) and (\ref{Mpara}) to the data in the low-field region reproduces well the observed temperature dependences and amplitudes. The values obtained for the two free parameters, $\xi_{ab}(0)=1.5$~nm (which leads to $\mu_0dH_{c2}^\perp/dT=-4.2$~T/K near $T_c$) and $\gamma=1.8$, are close to the ones in the literature.\cite{parametros} These values lead to a \textit{transverse} coherence length $\xi_c(0)=\xi_{ab}(0)/\gamma=0.83$~nm, larger than the periodicity length of the Fe layers ($\sim$0.66~nm),\cite{growth} which justifies the applicability of the 3D-AGL approach. The $\xi_{ab}(0)$ value is well within the range found in HTSC, and $\gamma$ is just a few times smaller than in YBa$_2$Cu$_3$O$_{7-\delta}$. These similarities contrast with the differences between their precursor diamagnetism (allegedly unconventional in the HTSC).\cite{citalarga} Finally, $\xi_{ab}(0)$ is much shorter than in the moderately anisotropic 2H-NbSe$_2$, which could explain the non-observation of non-local electrodynamic effects when $H\perp ab$ in the present case.\cite{NbSe2} \subsection{Fluctuation diamagnetism in the critical region around $T_c$} \begin{figure}[h] \includegraphics[width=\columnwidth]{fig4.EPS} \caption{(Color online) 3D-GL scaling of the magnetization in the critical region for $H\perp ab$ (a) and $H\parallel ab$ (b). The solid blue line is the \textit{experimental} scaling function for $H\perp ab$, while the dashed red one corresponds to $H\parallel ab$ and was calculated from $f_\perp$ through Eq.~(\ref{scalingfuncion}). (c) and (d) $H$-$T$ phase diagrams for $H\perp ab$ and, respectively, $H\parallel ab$. The corresponding $H_{c2}(T)$ lines were obtained from the superconducting parameters resulting from the analysis.
The symbols are the lower bound of the reversible region, and the dashed lines the limits of the critical region according to Eqs.~(\ref{criterioperp}) and (\ref{criteriopara}).} \label{scaling} \end{figure} As a check of consistency of the above analysis we studied the data in the critical region, closer to the $H$-dependent critical temperature, where the fluctuation amplitude is so large that the Gaussian approximation breaks down.\cite{tinkham} This region is bounded by the so-called \textit{Ginzburg criterion},\cite{Ikeda} which according to the transformation to anisotropic materials may be written as \begin{equation} T\approx T_c^\perp(H)\pm T_c\left[\frac{4\pi k_B\mu_0H}{\Delta c\xi_c(0)\phi_0}\right]^{2/3} \label{criterioperp} \end{equation} for $H\perp ab$, and \begin{equation} T\approx T_c^\parallel(H)\pm T_c\left[\frac{4\pi k_B\mu_0H}{\Delta c\xi_c(0)\gamma\phi_0}\right]^{2/3} \label{criteriopara} \end{equation} for $H\parallel ab$, where $\Delta c$ is the specific-heat jump at $T_c$. In this region, the 3D-GL approach in the lowest-Landau-level approximation predicts that $M_\perp(T,H)$ follows a scaling behavior, $m_\perp=f_\perp(t_\perp)$, the scaling variables being \cite{Ullah} \begin{equation} m_\perp\equiv \frac{M_\perp}{(HT)^{2/3}} \label{perpvarm} \end{equation} and \begin{equation} t_\perp\equiv \frac{T-T_c^\perp(H)}{(HT)^{2/3}}, \label{perpvart} \end{equation} where $T_c^\perp(H)=T_c[1-H/H_{c2}^\perp(0)]$. According to Eq.~(\ref{transpara}), $M_\parallel(T,H)$ should scale as \begin{equation} m_\parallel=f_\parallel(t_\parallel)=\frac{f_\perp(t_\parallel\gamma^{2/3})}{\gamma^{5/3}}, \label{scalingfuncion} \end{equation} the scaling variables now being \begin{equation} m_\parallel\equiv \frac{M_\parallel}{(HT)^{2/3}} \label{paravarm} \end{equation} and \begin{equation} t_\parallel\equiv\frac{T-T_c^\parallel(H)}{(HT)^{2/3}}, \label{paravart} \end{equation} where $T_c^\parallel(H)=T_c[1-H/H_{c2}^\parallel(0)]$, and $H_{c2}^\parallel(0)=\gamma H_{c2}^\perp(0)$. Figs.~\ref{scaling}(a) and (b) present the scaling of the $M_\perp(T,H)$ and $M_\parallel(T,H)$ data according to Eqs.~(\ref{perpvarm}), (\ref{perpvart}), (\ref{paravarm}) and (\ref{paravart}). For that, $t_\perp$ and $t_\parallel$ were evaluated by using the $H_{c2}^\perp(0)$ and $\gamma$ values obtained in the analysis of the precursor diamagnetism in the Gaussian region, and $T_c=33.1$~K (in good agreement with the $T_c$ value resulting from Fig.~1). As may be clearly seen, both scalings are excellent and the relation between the observed scaling functions is in good agreement with Eq.~(\ref{scalingfuncion}). The applicability of the above scalings extends down to the lower bound of the reversible region. This is in agreement with Eqs.~(\ref{criterioperp}) and (\ref{criteriopara}), which, when evaluated by using $\Delta c/T_c=0.1$ J/molK$^2$ (see Ref.~\onlinecite{zaanen}) and the superconducting parameters resulting from the previous analysis, are in unexpectedly good agreement with the observed lower bound of the reversible region for both $H$ directions [see Figs.~4(c) and (d)]. The 3D scaling was previously found to be adequate when $H\perp ab$,\cite{flucpnictides} but here its validity is extended to the case in which $H\parallel ab$.
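Before turning to the conclusions, it may be worth noting that the Gaussian expressions used above are straightforward to evaluate numerically. The following minimal sketch (SI units; the parameter values are the ones quoted in the analysis, and the script is only an illustration rather than the code used for the fits) reproduces, for instance, the $\sim15\%$ difference between Eq.~(\ref{Mperp}) and the Schmidt limit at $\varepsilon=0.01$ mentioned above:

\begin{verbatim}
import numpy as np

phi0 = 2.067834e-15   # magnetic flux quantum (Wb)
kB   = 1.380649e-23   # Boltzmann constant (J/K)
mu0  = 4e-7 * np.pi   # vacuum permeability (T m/A)

Tc, xi_ab, gamma, c = 33.2, 1.5e-9, 1.8, 0.55   # parameters from the fits

def M_perp(T, H):
    # low-field fluctuation magnetization for H perpendicular to ab,
    # with total-energy cutoff (valid for 0 < eps < c)
    eps = np.log(T / Tc)
    pref = -kB * T * mu0 * H * gamma * xi_ab / (3.0 * phi0**2)
    return pref * (np.arctan(np.sqrt((c - eps) / eps)) / np.sqrt(eps)
                   - np.arctan(np.sqrt((c - eps) / c)) / np.sqrt(c))

def M_schmidt(T, H):
    # cutoff-free (Schmidt) limit
    eps = np.log(T / Tc)
    return -np.pi * kB * T * mu0 * H * gamma * xi_ab / (6.0 * phi0**2 * np.sqrt(eps))

T = Tc * np.exp(0.01)                        # reduced temperature eps = 0.01
print(M_perp(T, 1e4) / M_schmidt(T, 1e4))    # ~0.85, i.e. a ~15% difference
\end{verbatim}

The same parameter values fix the $H_{c2}^\perp(T)$ line entering the scaling variables of Eqs.~(\ref{perpvarm})--(\ref{paravart}).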
\section{Conclusions} The present experimental results and analysis represent solid evidence that the precursor diamagnetism and the magnetization in the critical region around $T_c$ in Ba$_{1-x}$K$_x$Fe$_2$As$_2$ may be consistently explained at a phenomenological level in terms of a conventional GL approach for 3D anisotropic superconductors. This provides a strong constraint for any microscopic theory for the superconductivity in iron pnictides, and may have implications for systems like the HTSC, with similar superconducting parameters and possibly a similar mechanism for their superconductivity. It would be interesting to extend these measurements to other iron pnictides with other types of substitution and doping levels, and also to larger samples to confirm whether the onset of the precursor diamagnetism is close to $1.7T_c$, as proposed by the total-energy cutoff scenario. \section*{Acknowledgments} This work was supported by the Spanish MICINN and ERDF \mbox{(grant no. FIS2010-19807)}, and by the Xunta de Galicia (grants no. 2010/XA043 and 10TMT206012PR). SSS and ADA acknowledge support from the CNPq and FAPERJ.
\section{Introduction} The train and test distributions are often not identically distributed. Such train-test mismatches occur because evaluation datasets rarely characterize the entire distribution~\cite{Torralba2011UnbiasedLA}, and the test distribution typically drifts over time~\cite{QuioneroCandela2009DatasetSI}. Chasing an evolving data distribution is costly, and even if the training data does not become stale, models will still encounter unexpected situations at test time. Accordingly, models must \emph{generalize} to out-of-distribution (OOD) examples whenever possible, and when OOD examples do not belong to any known class, models must \emph{detect} them in order to abstain or trigger a conservative fallback policy~\cite{Emmott2015AMO}. Most evaluation in natural language processing (NLP) assumes the train and test examples are independent and identically distributed (IID). In the IID setting, large pretrained Transformer models can attain near human-level performance on numerous tasks \cite{Wang2018GLUEA}. However, high IID accuracy does not necessarily translate to OOD robustness for image classifiers~\cite{Hendrycks2019BenchmarkingNN}, and pretrained Transformers may embody this same fragility. Moreover, pretrained Transformers can rely heavily on spurious cues and annotation artifacts \cite{Cai2017PayAT,gururangan2018annotation} which out-of-distribution examples are less likely to include, so their OOD robustness remains uncertain. In this work, we systematically study the OOD robustness of various NLP models, such as word embedding averages, LSTMs, pretrained Transformers, and more. We decompose OOD robustness into a model's ability to (1) generalize and to (2) detect OOD examples~\cite{Card2018DeepWA}. To measure OOD generalization, we create a new evaluation benchmark that tests robustness to shifts in writing style, topic, and vocabulary, and spans the tasks of sentiment analysis, textual entailment, question answering, and semantic similarity. We create OOD test sets by splitting datasets with their metadata or by pairing similar datasets together (Section~\ref{sec:setup}). Using our OOD generalization benchmark, we show that pretrained Transformers are considerably more robust to OOD examples than traditional NLP models (Section~\ref{sec:accuracy}). We show that the performance of an LSTM semantic similarity model declines by over $35\%$ on OOD examples, while a RoBERTa model's performance slightly \textit{increases}. Moreover, we demonstrate that while pretraining larger models does not seem to improve OOD generalization, pretraining models on diverse data does improve OOD generalization. To measure OOD detection performance, we turn classifiers into anomaly detectors by using their prediction confidences as anomaly scores \cite{Hendrycks2016ABF}. We show that many non-pretrained NLP models are near or even \emph{worse than random chance} at OOD detection. In contrast, pretrained Transformers are far more capable at OOD detection. Overall, our results highlight that while there is room for future robustness improvements, pretrained Transformers are already moderately robust.\looseness=-1 \section{How We Test Robustness}\label{sec:setup} \subsection{Train and Test Datasets} We evaluate OOD generalization with \emph{seven} carefully selected datasets. Each dataset either (1) contains metadata which allows us to naturally split the samples or (2) can be paired with a similar dataset from a distinct data generating process.
By splitting or grouping our chosen datasets, we can induce a distribution shift and measure OOD generalization. \noindent We utilize four sentiment analysis datasets: \begin{itemize}[nosep,leftmargin=4mm] \item We use \textbf{SST-2}, which contains pithy expert movie reviews~\cite{socher2013recursive}, and \textbf{IMDb}~\cite{maas2011learning}, which contains full-length lay movie reviews. We train on one dataset and evaluate on the other dataset, and vice versa. Models predict a movie review's binary sentiment, and we report accuracy. \item The \textbf{Yelp Review Dataset} contains restaurant reviews with detailed metadata (e.g., user ID, restaurant name). We carve out four groups from the dataset based on food type: \emph{American, Chinese, Italian,} and \emph{Japanese}. Models predict a restaurant review's binary sentiment, and we report accuracy. \item The \textbf{Amazon Review Dataset} contains product reviews from Amazon~\cite{DBLP:journals/corr/McAuleyTSH15,DBLP:journals/corr/HeM16}. We split the data into five categories of clothing (Clothes, Women Clothing, Men Clothing, Baby Clothing, Shoes) and two categories of entertainment products (Music, Movies). We sample 50,000 reviews for each category. Models predict a review's 1 to 5 star rating, and we report accuracy. \end{itemize} \noindent We also utilize these datasets for semantic similarity, reading comprehension, and textual entailment: \begin{itemize}[nosep,leftmargin=4mm] \item \textbf{STS-B} requires predicting the semantic similarity between pairs of sentences~\cite{cer2017semeval}. The dataset contains text of different genres and sources; we use four sources from two genres: MSRpar (news), Headlines (news); MSRvid (captions), Images (captions). The evaluation metric is Pearson's correlation coefficient. \item \textbf{ReCoRD} is a reading comprehension dataset using paragraphs from CNN and Daily Mail news articles and automatically generated questions~\cite{DBLP:journals/corr/abs-1810-12885}. We bifurcate the dataset into CNN and Daily Mail splits and evaluate using exact match. \item \textbf{MNLI} is a textual entailment dataset using sentence pairs drawn from different genres of text~\cite{williams2017broad}. We select examples from two genres of transcribed text (Telephone and Face-to-Face) and one genre of written text (Letters), and we report classification accuracy. \end{itemize} \subsection{Embedding and Model Types} We evaluate NLP models with different input representations and encoders. We investigate three model categories with a total of thirteen models. \paragraph{Bag-of-words (BoW) Model.} We use a bag-of-words model~\cite{harris1954distributional}, which is high-bias but low-variance, so it may exhibit performance stability. The BoW model is only used for sentiment analysis and STS-B due to its low performance on the other tasks. For STS-B, we use the cosine similarity of the BoW representations from the two input sentences. \paragraph{Word Embedding Models.} We use word2vec~\cite{mikolov2013distributed} and GloVe~\cite{pennington2014glove} word embeddings. These embeddings are encoded with one of three models: word averages~\cite{Wieting2015TowardsUP}, LSTMs~\cite{hochreiter1997lstm}, and Convolutional Neural Networks (ConvNets). For classification tasks, the representation from the encoder is fed into an MLP. For STS-B and MNLI, we use the cosine similarity of the encoded representations from the two input sentences. 
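As an aside to the similarity-based setup just described, a minimal sketch of the word-average encoder and its cosine-similarity score (assuming the GloVe vectors have been loaded into a plain dictionary; all names below are illustrative) is:

\begin{verbatim}
import numpy as np

def sentence_vector(tokens, emb, dim=300):
    # word-average encoder: mean of the in-vocabulary word vectors
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# emb: dict mapping word -> vector (e.g., 300-dimensional GloVe embeddings)
# score = cosine_similarity(sentence_vector(s1_tokens, emb),
#                           sentence_vector(s2_tokens, emb))
\end{verbatim}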
For reading comprehension, we use the DocQA model~\cite{clark2017simple} with GloVe embeddings. We implement our models in AllenNLP~\cite{Gardner2017AllenNLP} and tune the hyperparameters to maximize validation performance on the IID task. \paragraph{Pretrained Transformers.} We investigate BERT-based models~\cite{Devlin2019BERTPO}, which are pretrained bidirectional Transformers~\cite{vaswani2017attention} with GELU~\cite{hendrycks2016gelu} activations. In addition to using BERT Base and BERT Large, we also use the large version of RoBERTa~\cite{liu2019roberta}, which is pretrained on a larger dataset than BERT. We use ALBERT~\cite{Lan2020ALBERT} and also a distilled version of BERT, DistilBERT~\cite{sanh2019distilbert}. We follow the standard BERT fine-tuning procedure~\cite{Devlin2019BERTPO} and lightly tune the hyperparameters for our tasks. We perform our experiments using the HuggingFace Transformers library~\cite{Wolf2019HuggingFacesTS}. \section{Out-of-Distribution Generalization}\label{sec:accuracy} \begin{figure}[t] \centering \vspace{-0.35cm} \includegraphics[width=0.5\textwidth]{figures/sts-b.pdf} \vspace{-15pt} \caption{Pretrained Transformers often have smaller IID/OOD generalization gaps than previous models.} \label{fig:sts} \vspace{-10pt} \end{figure} In this section, we evaluate OOD generalization of numerous NLP models on seven datasets and summarize the main upshots. A subset of the results is shown in Figures~\ref{fig:sts} and \ref{fig:ood_accuracy}. Full results are in Appendix~\ref{appendix:additional}. \paragraph{Pretrained Transformers are More Robust.} In our experiments, pretrained Transformers often have smaller generalization gaps from IID data to OOD data than traditional NLP models. For instance, Figure~\ref{fig:sts} shows that the LSTM model's performance declines by over 35\%, while RoBERTa's generalization performance in fact increases. For Amazon, MNLI, and Yelp, we find that pretrained Transformers' accuracy only slightly fluctuates on OOD examples. Partial MNLI results are in Table~\ref{tab:multinli}. We present the full results for these three tasks in Appendix~\ref{appendix:minor}. In short, pretrained Transformers can generalize across a variety of distribution shifts.\looseness=-1 \begin{table}[h] \begin{center} \begin{tabular}{l|ccc} Model & \multicolumn{1}{p{1.5cm}}{\centering Telephone (IID)} & \multicolumn{1}{p{1.5cm}}{\centering Letters (OOD)} & \multicolumn{1}{p{2cm}}{\centering Face-to-Face (OOD)}\\ \hline BERT & 81.4\% & 82.3\% & 80.8\%\\ \bottomrule \end{tabular} \end{center} \caption{Accuracy of a BERT Base MNLI model trained on Telephone data and tested on three different distributions. Accuracy only slightly fluctuates.} \label{tab:multinli} \vspace{-5pt} \end{table} \begin{figure}[t] \centering \begin{subfigure}{0.5\textwidth} \vspace{-0.35cm} \includegraphics[width=\textwidth]{figures/imdb.pdf} \label{fig:imdb} \end{subfigure} \begin{subfigure}{0.5\textwidth} \vspace{-0.15cm} \includegraphics[width=\textwidth]{figures/record.pdf} \label{fig:record} \end{subfigure} \vspace{-15pt} \caption{Generalization results for sentiment analysis and reading comprehension. While IID accuracy does not vary much for IMDb sentiment analysis, OOD accuracy does. Here pretrained Transformers do best.} \label{fig:ood_accuracy} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.48\textwidth]{figures/size.pdf} \vspace{-15pt} \caption{The IID/OOD generalization gap is not improved with larger models, unlike in computer vision.} \label{fig:size} \vspace{-5pt} \end{figure} \paragraph{Bigger Models Are Not Always Better.} While larger models reduce the IID/OOD generalization gap in computer vision \cite{Hendrycks2019BenchmarkingNN,Xie2019IntriguingPO,Hendrycks2019NaturalAE}, we find the same does \emph{not} hold in NLP. Figure~\ref{fig:size} shows that larger BERT and ALBERT models do not reduce the generalization gap. However, in keeping with results from vision \cite{Hendrycks2019BenchmarkingNN}, we find that model distillation can reduce robustness, as evident in our DistilBERT results in Figure~\ref{fig:ood_accuracy}. This highlights that testing model compression methods for BERT~\cite{shen2019q,ganesh2020compressing,li2020train} on only in-distribution examples gives a limited account of model generalization, and such narrow evaluation may mask downstream costs. \paragraph{More Diverse Data Improves Generalization.} Similar to computer vision~\cite{Orhan2019RobustnessPO, Xie2019SelftrainingWN,hendrycks2019pretraining}, pretraining on larger and more diverse datasets can improve robustness. RoBERTa exhibits greater robustness than BERT Large, where one of the largest differences between these two models is that RoBERTa pretrains on more data. See Figure~\ref{fig:ood_accuracy} for these results. \section{Out-of-Distribution Detection}\label{sec:ood_detection} Since OOD robustness requires evaluating both OOD generalization and OOD detection, we now turn to the latter. Without access to an outlier dataset~\cite{Hendrycks2018DeepAD}, the state-of-the-art OOD detection technique is to use the model's prediction confidence to separate in- and out-of-distribution examples \cite{Hendrycks2016ABF}. Specifically, we assign an example $x$ the anomaly score $-\max_y p(y\mid x)$, the negative prediction confidence, to perform OOD detection. We train models on SST-2, record the model's confidence values on SST-2 test examples, and then record the model's confidence values on OOD examples from five other datasets. For our OOD examples, we use validation examples from 20 Newsgroups (20 NG)~\cite{Lang95}, the English source side of English-German WMT16 and English-German Multi30K~\cite{elliott2016multi30k}, and concatenations of the premise and hypothesis for RTE~\cite{dagan2005pascal} and SNLI~\cite{Bowman2015ALA}. These examples are only used during OOD evaluation, not training. For evaluation, we follow past work~\cite{Hendrycks2018DeepAD} and report the False Alarm Rate at $95\%$ Recall (FAR95). The FAR95 is the probability that an in-distribution example raises a false alarm, assuming that 95\% of all out-of-distribution examples are detected. Hence a lower FAR95 is better. Partial results are in Figure~\ref{fig:fpr}, and full results are in Appendix~\ref{appendix:ooddetection}. \paragraph{Previous Models Struggle at OOD Detection.} Models without pretraining (e.g., BoW, LSTM word2vec) are often unable to reliably detect OOD examples. In particular, these models' FAR95 scores are sometimes \emph{worse} than chance because the models often assign a higher probability to out-of-distribution examples than in-distribution examples. The models particularly struggle on 20 Newsgroups (which contains text on diverse topics including computer hardware, motorcycles, and space), as their false alarm rates are approximately $100\%$.
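Both FAR95 and the AUROC reported in the appendix are simple to compute from the maximum softmax probabilities; a minimal sketch (array names are illustrative) is:

\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(probs):
    # maximum softmax probability baseline: score = -max_y p(y|x)
    return -np.max(probs, axis=1)

def far95(in_scores, out_scores):
    # threshold at which 95% of the OOD examples are detected
    thr = np.percentile(out_scores, 5)
    # fraction of in-distribution examples that raise a false alarm
    return float(np.mean(in_scores >= thr))

def auroc(in_scores, out_scores):
    # probability that an OOD example receives the higher anomaly score
    labels = np.concatenate([np.zeros(len(in_scores)), np.ones(len(out_scores))])
    scores = np.concatenate([in_scores, out_scores])
    return roc_auc_score(labels, scores)

# in_probs, out_probs: softmax outputs of an SST-2 classifier on the
# in-distribution test set and on an OOD dataset, shape [n_examples, n_classes]
# far95(anomaly_scores(in_probs), anomaly_scores(out_probs))
\end{verbatim}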
\paragraph{Pretrained Transformers Are Better Detectors.} In contrast, pretrained Transformer models are better OOD detectors. Their FAR95 scores are always better than chance. Their superior detection performance is not solely because the underlying model is a language model, as prior work~\cite{Hendrycks2018DeepAD} shows that language models are not necessarily adept at OOD detection. Also note that in OOD detection for computer vision, higher accuracy does not reliably improve OOD detection~\cite{kimin}, so pretrained Transformers' strong OOD detection performance could not have been anticipated from their accuracy alone. Despite their relatively low FAR95 scores, pretrained Transformers still do not cleanly separate in- and out-of-distribution examples (Figure~\ref{fig:roberta_ood}). OOD detection using pretrained Transformers is still far from perfect, and future work can aim towards creating better methods for OOD detection. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/sst-ood-histogram.pdf} \caption{The confidence distribution for a RoBERTa SST-2 classifier on examples from the SST-2 test set and the English side of WMT16 English-German. The WMT16 histogram is translucent and overlays the SST histogram. The minimum prediction confidence is $0.5$. Although RoBERTa is better than previous models at OOD detection, there is clearly room for future work.} \label{fig:roberta_ood} \end{figure} \section{Discussion and Related Work} \paragraph{Why Are Pretrained Models More Robust?} An interesting area for future work is to analyze \textit{why} pretrained Transformers are more robust. A flawed explanation is that pretrained models are simply more accurate. However, this work and past work show that increases in accuracy do not directly translate to reduced IID/OOD generalization gaps~\cite{Hendrycks2019BenchmarkingNN,fried2019cross}. One partial explanation is that Transformer models are pretrained on \textit{diverse} data, and in computer vision, dataset diversity can improve OOD generalization~\cite{hendrycks2020augmix} and OOD detection~\cite{Hendrycks2018DeepAD}. Similarly, Transformer models are pretrained with large \textit{amounts} of data, which may also aid robustness~\cite{Orhan2019RobustnessPO, Xie2019SelftrainingWN,hendrycks2019pretraining}. However, this is not a complete explanation as BERT is pretrained on roughly 3 billion tokens, while GloVe is trained on roughly 840 billion tokens. Another partial explanation may lie in self-supervised training itself. \citet{hendrycks2019self} show that computer vision models trained with self-supervised objectives exhibit better OOD generalization and far better OOD detection performance. Future work could propose new self-supervised objectives that enhance model robustness.\looseness=-1 \paragraph{Domain Adaptation.} Other research on robustness considers the separate problem of domain adaptation~\cite{Blitzer2007BiographiesBB,daume2009frustratingly}, where models must learn representations of a source and target distribution. We focus on testing generalization \textit{without} adaptation in order to benchmark robustness to unforeseen distribution shifts. Unlike \citet{fisch2019proceedings,Yogatama2019LearningAE}, we measure OOD generalization by considering simple and natural distribution shifts, and we also evaluate more than question answering.
\paragraph{Adversarial Examples.} Adversarial examples can be created for NLP models by inserting phrases~\cite{Jia2017AdversarialEF,Wallace2019UniversalAT}, paraphrasing questions~\cite{ribeiro2018semantically}, and reducing inputs~\cite{feng2018pathologies}. However, adversarial examples are often disconnected from real-world performance concerns~\cite{Gilmer2018MotivatingTR}. Thus, we focus on an experimental setting that is more realistic. While previous works show that, for all NLP models, there exist adversarial examples, we show that not all models are equally fragile. Rather, pretrained Transformers are overall far more robust than previous models. \paragraph{Counteracting Annotation Artifacts.} Annotators can accidentally leave unintended shortcuts in datasets that allow models to achieve high accuracy by effectively ``cheating''~\cite{Cai2017PayAT,gururangan2018annotation,min2019compositional}. These \textit{annotation artifacts} are one reason for OOD brittleness: OOD examples are unlikely to contain the same spurious patterns as in-distribution examples. OOD robustness benchmarks like ours can \textit{stress test} a model's dependence on artifacts~\cite{liu2019inoculation,feng2019misleading,naik2018stress}. \section{Conclusion} We created an expansive benchmark across several NLP tasks to evaluate out-of-distribution robustness. To accomplish this, we carefully restructured and matched previous datasets to induce numerous realistic distribution shifts. We first showed that pretrained Transformers \emph{generalize} to OOD examples far better than previous models, so that the IID/OOD generalization gap is often markedly reduced. We then showed that pretrained Transformers \emph{detect} OOD examples surprisingly well. Overall, our extensive evaluation shows that while pretrained Transformers are moderately robust, there remains room for future research on robustness.\looseness=-1 \section{Additional Experimental Results}\label{appendix:additional} \subsection{Significant OOD Accuracy Drops} For STS-B, ReCoRD, and SST-2/IMDb, there is a noticeable drop in accuracy when testing on OOD examples. We show the STS-B results in Table~\ref{tab:STS-B}, the ReCoRD results in Table~\ref{tab:record}, and the SST-2/IMDb results in Table~\ref{tab:sst2}. \subsection{Minor OOD Accuracy Drops}\label{appendix:minor} We observe smaller performance declines for the Amazon, MNLI, and Yelp datasets. Figure~\ref{fig:amazon} shows the Amazon results for BERT Base, Table~\ref{tab:mnli} shows the MNLI results, and Table~\ref{tab:yelp} shows the Yelp results. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{figures/amazon.pdf} \caption{We finetune BERT Base on one category of Amazon reviews and then evaluate it on other categories. Models predict the review's star rating with 5-way classification. We use five clothing categories: Clothes (C), Women's Clothing (WC), Men's Clothing (MC), Baby Clothing (BC), and Shoes (S); and two entertainment categories: Music (MS), Movies (MV). BERT is robust for closely related categories such as men's, women's, and baby clothing. However, BERT struggles when there is an extreme distribution shift such as Baby Clothing to Music (dark blue region). Note this shift is closer to a domain adaptation setting.} \label{fig:amazon} \end{figure} \subsection{OOD Detection}\label{appendix:ooddetection} Full FAR95 values are in Table~\ref{tab:oodfpr}. We also report the Area Under the Receiver Operating Characteristic (AUROC)~\cite{Hendrycks2016ABF}.
The AUROC is the probability that an OOD example receives a higher anomaly score than an in-distribution example, viz.,\[\mathrm{P}(-\max_y p(y\mid x_\text{out}) > -\max_y p(y\mid x_\text{in})).\] A flawless AUROC is $100\%$ while $50\%$ is random chance. These results are in Figure~\ref{fig:auroc} and Table~\ref{tab:oodauroc} \begin{table*}[h] \scriptsize \setlength\tabcolsep{3pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.3\hsize}X} *{1}{>{\hsize=0.9cm}X } | *{10}{>{\hsize=0.4\hsize}L}} Train & Test & BoW & Average word2vec & LSTM word2vec & ConvNet word2vec & Average GloVe & LSTM GloVe & ConvNet GloVe & BERT ~~Base & BERT Large & RoBERTa \\ \Xhline{2\arrayrulewidth} \parbox[t]{20mm}{\multirow{2}{*}{Images}} & Images & {39.7} & {61.4} & {75.7} & {81.8} & {61.2} & {79.8} & {81.8} & {91.8} & {92.8} & {94.2} \\ & MSRvid & {4.4} (-35.4) & {11.3} (-50.1) & {38.3} (-37.4) & {62.0} (-19.8) & {6.1} (-55.2) & {43.1} (-36.7) & {57.8} (-24.1) & {89.5} (-2.3) & {90.5} (-2.3) & {94.3} (0.1) \\ \Xhline{2\arrayrulewidth} \parbox[t]{20mm}{\multirow{2}{*}{MSRvid}} & MSRvid & {60.7} & {68.7} & {85.9} & {85.0} & {66.8} & {85.6} & {87.4} & {92.4} & {93.9} & {94.9} \\ & Images & {19.3} (-41.4) & {23.7} (-44.9) & {45.6} (-40.2) & {54.3} (-30.7) & {11.1} (-55.7) & {49.0} (-36.6) & {51.9} (-35.4) & {85.8} (-6.6) & {86.8} (-7.1) & {90.4} (-4.6) \\ \Xhline{2\arrayrulewidth} \parbox[t]{20mm}{\multirow{2}{*}{Headlines}} & Headlines & {26.8} & {58.9} & {66.2} & {67.4} & {53.4} & {69.9} & {69.6} & {87.0} & {88.3} & {91.3} \\ & MSRpar & {10.1} (-16.7) & {19.1} (-39.7) & {-1.9} (-68.1) & {9.8} (-57.6) & {25.9} (-27.5) & {25.4} (-44.5) & {10.9} (-58.7) & {69.9} (-17.1) & {63.6} (-24.7) & {75.5} (-15.8) \\ \Xhline{2\arrayrulewidth} \parbox[t]{20mm}{\multirow{2}{*}{MSRpar}} & MSRpar & {47.0} & {27.0} & {46.7} & {49.8} & {50.9} & {46.7} & {46.2} & {78.8} & {81.6} & {86.8} \\ & Headlines & {-9.7} (-56.7) & {12.7} (-14.4) & {10.3} (-36.5) & {23.7} (-26.1) & {7.0} (-43.9) & {15.6} (-31.1) & {30.6} (-15.6) & {73.0} (-5.8) & {71.7} (-9.9) & {83.9} (-2.9) \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{We train and test models on different STS-B distributions (Images, MSR videos, Headlines, and MSR paraphrase). The severe drop in the Pearson correlation coefficient shows the consequence of a distribution shift. Models such as Average GloVe lose nearly all performance when out-of-distribution. RoBERTa does especially well in comparison to other models.} \label{tab:STS-B} \end{table*} \begin{table*}[h] \small \setlength\tabcolsep{3pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.2\hsize}X} *{1}{>{\hsize=1.4cm}X } | *{5}{>{\hsize=0.4\hsize}L}} Train & Test & Document QA & DistilBERT & BERT Base & BERT Large & RoBERTa\\ \Xhline{2\arrayrulewidth} \parbox[t]{50mm}{\multirow{2}{*}{CNN}} & CNN & 39.0 & 45.0 & 53.2 & 67.2 & 71.5 \\ & DailyMail & 29.7 (-9.3) & 34.8 (-10.2) & 46.7 (-6.6) & 59.8 (-7.4) & 72.2 (0.7) \\ \Xhline{2\arrayrulewidth} \parbox[t]{50mm}{\multirow{2}{*}{DailyMail}} & DailyMail & 30.8 & 36.7 & 48.2 & 61.2 & 73.0\\ & CNN & 36.9 (6.2) & 43.9 (7.2) & 51.8 (3.6) & 65.5 (4.3) & 73.0 (0.0) \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{For ReCoRD, the exact match performance is closely tethered to the test dataset, which suggests a difference in the difficulty of the two test sets. 
This gap can be bridged by larger Transformer models pretrained on more data.} \label{tab:record} \end{table*} \begin{table*}[t] \scriptsize \setlength\tabcolsep{3pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.3\hsize}X} *{1}{>{\hsize=0.7cm}X } | *{14}{>{\hsize=0.4\hsize}L}} Train & Test & BoW & Average word2vec & LSTM word2vec & ConvNet word2vec & Average GloVe & LSTM GloVe & ConvNet GloVe & {BERT ~~~Base} & BERT Large & RoBERTa\\ \Xhline{2\arrayrulewidth} \parbox[t]{20mm}{\multirow{2}{*}{SST}} & SST & 80.6 & 81.4 & 87.5 & 85.3 & 80.3 & 87.4 & 84.8 & 91.9 & 93.6 & 95.6 \\ & IMDb & 73.9 (-6.8) & 76.4 (-5.0) & 78.0 (-9.5) & 81.0 (-4.4) & 74.5 (-5.8) & 82.1 (-5.3) & 81.0 (-3.8) & 87.5 (-4.4) & 88.3 (-5.3) & 92.8 (-2.8) \\ \Xhline{2\arrayrulewidth} \parbox[t]{20mm}{\multirow{2}{*}{IMDb}} & IMDb & {85.9} & {84.8} & {89.9} & {91.0} & {83.5} & {91.3} & {91.0} & {91.8} & {92.9} & {94.3} \\ & SST & 78.3 (-7.6) & 68.5 (-16.3) & 63.7 (-26.3) & 83.0 (-8.0) & 77.5 (-6.1) & 79.9 (-11.4) & 80.0 (-10.9) & 87.6 (-4.3) & 88.6 (-4.3) & 91.0 (-3.4) \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{We train and test models on SST-2 and IMDB. Notice IID accuracy is not perfectly predictive of OOD accuracy, so increasing IID benchmark performance does not necessarily yield superior OOD generalization.} \label{tab:sst2} \end{table*} \begin{table*}[h!] \small \setlength\tabcolsep{3pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.3\hsize}X} *{1}{>{\hsize=2.2cm}X } | *{4}{>{\hsize=0.4\hsize}L}} Train & Test & DistilBERT & BERT Base & BERT Large & RoBERTa\\ \Xhline{3\arrayrulewidth} \parbox[t]{50mm}{\multirow{3}{*}{Telephone}} & Telephone & {77.5} & {81.4} & {84.0} & {89.6} \\ & Letters & {75.6} (-1.9) & {82.3} (0.9) & {85.1} (1.0) & {90.0} (0.4) \\ & Face-to-face & {76.0} (-1.4) & {80.8} (-0.7) & {83.2} (-0.8) & {89.4} (-0.2) \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{We train models on the MNLI Telephone dataset and test on the Telephone, Letters, and Face-to-face datasets. The difference in accuracies are quite small (and sometimes even positive) for all four models. This demonstrates that pretrained Transformers can withstand various types of shifts in the data distribution.} \label{tab:mnli} \end{table*} \begin{table*}[h!] 
\scriptsize \setlength\tabcolsep{3pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.2\hsize}X} *{1}{>{\hsize=0.6cm}X } | *{14}{>{\hsize=0.4\hsize}L}} Train & Test & BoW & Average word2vec & LSTM word2vec & ConvNet word2vec & Average GloVe & LSTM GloVe & ConvNet GloVe & DistilBERT & BERT Base & BERT Large & RoBERTa\\ \Xhline{2\arrayrulewidth} \parbox[t]{50mm}{\multirow{4}{*}{AM}} & AM & 87.2 & 85.6 & 88.0 & 89.6 & 85.0 & 88.0 & 91.2 & 90.0 & 90.8 & 91.0 & 93.0 \\ & CH & 82.4 (-4.8) & 80.4 (-5.2) & 87.2 (-0.8) & 88.6 (-1.0) & 75.1 (-9.9) & 88.4 (0.4) & 89.6 (-1.6) & 91.8 (1.8) & 91.0 (0.2) & 90.6 (-0.4) & 90.8 (-2.2) \\ & IT & 81.8 (-5.4) & 82.6 (-3.0) & 86.4 (-1.6) & 89.4 (-0.2) & 82.0 (-3.0) & 89.2 (1.2) & 89.6 (-1.6) & 92.6 (2.6) & 91.6 (0.8) & 91.2 (0.2) & 91.8 (-1.2) \\ & JA & 84.2 (-3.0) & 86.0 (0.4) & 89.6 (1.6) & 89.4 (-0.2) & 79.2 (-5.8) & 87.8 (-0.2) & 89.2 (-2.0) & 92.0 (2.0) & 92.0 (1.2) & 92.2 (1.2) & 93.4 (0.4) \\ \Xhline{2\arrayrulewidth} \parbox[t]{50mm}{\multirow{4}{*}{CH}} & CH & {82.2} & {84.4} & {87.6} & {88.8} & {84.4} & {89.2} & {89.0} & {90.2} & {90.4} & {90.8} & {92.4} \\ & AM & 82.2 (0.0) & 85.4 (1.0) & 88.0 (0.4) & 89.2 (0.4) & 83.0 (-1.4) & 85.6 (-3.6) & 90.2 (1.2) & 90.6 (0.4) & 88.8 (-1.6) & 91.8 (1.0) & 92.4 (0.0) \\ & IT & 84.6 (2.4) & 82.0 (-2.4) & 88.0 (0.4) & 89.6 (0.8) & 84.6 (0.2) & 88.6 (-0.6) & 90.4 (1.4) & 91.4 (1.2) & 89.0 (-1.4) & 90.2 (-0.6) & 92.6 (0.2) \\ & JA & 83.8 (1.6) & 85.8 (1.4) & 88.6 (1.0) & 89.0 (0.2) & 86.8 (2.4) & 88.8 (-0.4) & 89.6 (0.6) & 91.6 (1.4) & 89.4 (-1.0) & 91.6 (0.8) & 92.2 (-0.2) \\ \Xhline{2\arrayrulewidth} \parbox[t]{50mm}{\multirow{4}{*}{IT}} & IT & {87.2} & {86.8} & {89.6} & {90.8} & {86.2} & {89.6} & {90.8} & {92.4} & {91.6} & {91.8} & {94.2} \\ & AM & 85.4 (-1.8) & 83.8 (-3.0) & 89.0 (-0.6) & 90.2 (-0.6) & 85.6 (-0.6) & 89.0 (-0.6) & 90.2 (-0.6) & 90.4 (-2.0) & 90.6 (-1.0) & 89.4 (-2.4) & 92.0 (-2.2) \\ & CH & 79.6 (-7.6) & 81.6 (-5.2) & 83.8 (-5.8) & 88.4 (-2.4) & 78.0 (-8.2) & 83.2 (-6.4) & 85.8 (-5.0) & 90.4 (-2.0) & 89.6 (-2.0) & 90.0 (-1.8) & 92.4 (-1.8) \\ & JA & 82.0 (-5.2) & {84.6} (-2.2) & {87.4} (-2.2) & {88.6} (-2.2) & {85.0} (-1.2) & {86.8} (-2.8) & {89.4} (-1.4) & {91.8} (-0.6) & {91.4} (-0.2) & {91.2} (-0.6) & {92.2} (-2.0) \\ \Xhline{2\arrayrulewidth} \parbox[t]{50mm}{\multirow{4}{*}{JA}} & JA & {85.0} & {87.6} & {89.0} & {90.4} & {88.0} & {89.0} & {89.6} & {91.6} & {92.2} & {93.4} & {92.6} \\ & AM & {83.4} (-1.6) & {85.0} (-2.6) & {87.8} (-1.2) & {87.8} (-2.6) & {80.4} (-7.6) & {88.6} (-0.4) & {89.4} (-0.2) & {91.2} (-0.4) & {90.4} (-1.8) & {90.6} (-2.8) & {91.0} (-1.6) \\ & CH & {81.6} (-3.4) & {83.6} (-4.0) & {89.0} (0.0) & {89.0} (-1.4) & {80.6} (-7.4) & {87.4} (-1.6) & {89.2} (-0.4) & {92.8} (1.2) & {91.4} (-0.8) & {90.8} (-2.6) & {92.4} (-0.2) \\ & IT & {84.0} (-1.0) & {83.6} (-4.0) & {88.2} (-0.8) & {89.4} (-1.0) & {83.6} (-4.4) & {88.0} (-1.0) & {90.6} (1.0) & {92.6} (1.0) & {90.2} (-2.0) & {91.0} (-2.4) & {92.6} (0.0) \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{We train and test models on American (AM), Chinese (CH), Italian (IT), and Japanese (JA) restaurant reviews. 
The accuracy drop is smaller compared to SST-2/IMDb for most models and pretrained transformers are typically the most robust.} \label{tab:yelp} \end{table*} \begin{table*} \vspace{-5pt} \small \setlength\tabcolsep{1.5pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.25\hsize}X} *{1}{>{\hsize=1.4cm}X } | *{11}{>{\hsize=0.4\hsize}Y}} $\mathcal{D}_\text{in}$ & \multicolumn{1}{l|}{$\mathcal{D}_\text{out}^\text{test}$} & \multicolumn{1}{p{1cm}}{\centering BoW} & \multicolumn{1}{p{1cm}}{\centering Avg w2v} & \multicolumn{1}{p{1cm}}{\centering Avg GloVe} & \multicolumn{1}{p{1cm}}{\centering LSTM w2v} & \multicolumn{1}{p{1cm}}{\centering LSTM GloVe} & \multicolumn{1}{p{1cm}}{\centering ConvNet w2v} & \multicolumn{1}{p{1cm}}{\centering ConvNet GloVe} & \multicolumn{1}{p{1.35cm}}{\centering DistilBERT} & \multicolumn{1}{p{1cm}}{\centering BERT Base} & \multicolumn{1}{p{1cm}}{\centering BERT Large} & \multicolumn{1}{p{1cm}}{\centering RoBERTa} \\ \hline \parbox[t]{50mm}{\multirow{5}{*}{SST}} & 20 NG & 100 & 100 & 100 & 94 & 90 & 61 & 71 & 39 & 35 & 29 & 22 \\ & Multi30K & 61 & 57 & 52 & 92 & 85 & 65 & 63 & 37 & 22 & 23 & 61 \\ & RTE & 100 & 100 & 84 & 93 & 88 & 75 & 56 & 43 & 32 & 29 & 36 \\ & SNLI & 81 & 83 & 72 & 92 & 82 & 63 & 63 & 38 & 28 & 28 & 29 \\ & WMT16 & 100 & 91 & 77 & 90 & 82 & 70 & 63 & 56 & 48 & 44 & 65 \\ \Xhline{0.5\arrayrulewidth} \multicolumn{2}{c|}{Mean FAR95} & {88.4} & {86.2} & {76.9} & {92.2} & {85.4} & {66.9} & {63.1} & {42.5} & {33.0} & {\textbf{30.5}} & {43.0} \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{Out-of-distribution detection FAR95 scores for various NLP models using the maximum softmax probability anomaly score. Observe that while pretrained Transformers are consistently best, there remains room for improvement.} \label{tab:oodfpr} \end{table*} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figures/auroc.pdf} \caption{We feed in OOD examples from out-of-distribution datasets (20 Newsgroups, Multi30K, etc.) to SST-2 sentiment classifiers and report the AUROC detection performance. A 50\% AUROC is the random chance level. 
} \label{fig:auroc} \end{figure*} \begin{table*} \vspace{-5pt} \small \setlength\tabcolsep{1.5pt} \centering \begin{tabularx}{\textwidth}{*{1}{>{\hsize=0.25\hsize}X} *{1}{>{\hsize=1.4cm}X } | *{11}{>{\hsize=0.4\hsize}Y}} $\mathcal{D}_\text{in}$ & \multicolumn{1}{l|}{$\mathcal{D}_\text{out}^\text{test}$} & \multicolumn{1}{p{1cm}}{\centering BoW} & \multicolumn{1}{p{1cm}}{\centering Avg w2v} & \multicolumn{1}{p{1cm}}{\centering Avg GloVe} & \multicolumn{1}{p{1cm}}{\centering LSTM w2v} & \multicolumn{1}{p{1cm}}{\centering LSTM GloVe} & \multicolumn{1}{p{1cm}}{\centering ConvNet w2v} & \multicolumn{1}{p{1cm}}{\centering ConvNet GloVe} & \multicolumn{1}{p{1.35cm}}{\centering DistilBERT} & \multicolumn{1}{p{1cm}}{\centering BERT Base} & \multicolumn{1}{p{1cm}}{\centering BERT Large} & \multicolumn{1}{p{1cm}}{\centering RoBERTa} \\ \hline \parbox[t]{50mm}{\multirow{5}{*}{SST}} & 20 NG & 17 & 19 & 30 & 44 & 59 & 74 & 64 & 82 & 83 & 87 & 90 \\ & Multi30K & 77 & 75 & 80 & 55 & 62 & 71 & 73 & 86 & 93 & 91 & 89 \\ & RTE & 63 & 47 & 72 & 36 & 54 & 61 & 77 & 83 & 89 & 89 & 90 \\ & SNLI & 56 & 58 & 71 & 53 & 64 & 72 & 74 & 86 & 92 & 90 & 92 \\ & WMT16 & 58 & 60 & 69 & 58 & 63 & 69 & 74 & 80 & 85 & 85 & 83 \\ \Xhline{0.5\arrayrulewidth} \multicolumn{2}{c|}{Mean AUROC} & {54.2} & {51.8} & {64.5} & {49.3} & {60.4} & {69.5} & {72.5} & {83.1} & {88.1} & {88.4} & {\textbf{88.7}} \\ \Xhline{3\arrayrulewidth} \end{tabularx} \caption{Out-of-distribution detection AUROC scores for various NLP models using the maximum softmax probability anomaly score. An AUROC score of $50\%$ is random chance, while $100\%$ is perfect.} \label{tab:oodauroc} \end{table*} \section*{Acknowledgements} We thank the members of Berkeley NLP, Sona Jeswani, Suchin Gururangan, Nelson Liu, Shi Feng, the anonymous reviewers, and especially Jon Cai. This material is in part based upon work supported by the National Science Foundation Frontier Award 1804794. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\section{Introduction} Non-linear stochastic flows are at the heart of thermally driven processes in systems whose potential energy surfaces are characterized by multiple local energy minima. Pioneered by the seminal work of Kramers \cite{kramers_brownian_1940}, the concept of thermally activated barrier-crossing has ever since been applied to diverse phenomena, including chemical reactions \cite{Nitzan,Schlogl_1972,Matheson}, tunnel-diodes \cite{Landauer}, laser-pumping \cite{Risken_1965}, magnetic resonance \cite{Kubo}, conformational dynamics and folding of proteins \cite{lannon_force-clamp_2013, chung_protein_2018, yu_energy_2012, manuel_reconstructing_2015, truex_testing_2015, chung_single-molecule_2012, de_sancho_molecular_2014, best_diffusive_2006,Netz_2018} and nucleic acids \cite{neupane_transition-path_2015, neupane_direct_2016} and receptor-ligand binding \cite{rico_heterogeneous_2019}, to name but a few. From a theoretical point of view, the most detailed and precise results were obtained in the context of relaxation phenomena \cite{van_Kampen_1977,Caroli_1979,Saito,Bunde,Hanggi_bistable,perico_positional_1993, perico_torsional_1994} and first passage time statistics \cite{hanggi_reaction-rate_1990, felderhof_escape_2008, berdichevsky_one-dimensional_1996, jun_one-dimensional_2002, fleming_activated_1986, berezhkovskii_communication_2018, hartich_duality_2018,hartich_interlacing} in Markovian (i.e. memory-less) systems. However, physical observables typically correspond to lower-dimensional projections and the observed dynamics is Markovian only under quite restrictive conditions on the nature of the projection \cite{lapolla_manifestations_2019}. Quoting van Kampen: ``Non-Markov is the rule, Markov is the exception'' \cite{van_kampen_remarks_1998}. Over the years non-Markovian barrier crossing has therefore received special attention. Most approaches considered a generalized Langevin equation in the underdamped regime with diverse phenomenological memory kernels for the velocity in the high \cite{hanggi_thermally_1982, okuyama_generalized_1986, okuyama_nonmarkovian_1986} and low \cite{carmeli_non-markoffian_1982, carmeli_nonmarkovian_1983} viscosity limit. In the case of diffusion in double-well potentials unified solutions have been obtained \cite{carmeli_nonmarkovian_1984}. Seminal results on non-Markovian effects in the crossing of high energy barriers have been obtained by Mel’nikov and Meshkov \cite{melnikov_theory_1986}, and were later extended to low barriers by Kalmykov, Coffey and Titov \cite{kalmykov_thermally_2006}. Important results on non-Markovian barrier-crossing have been obtained in the context of condensed-phase dynamics \cite{grote_stable_1980, chandler_statistical_1978,Chakrabarti}. More recent studies of memory effects in bi-stable potentials have been carried out in the context of conformational dynamics of macromolecules \cite{Makarov,Makarov2,Netz_2018,kappler_non-markovian_2019} and the role of hydrodynamic memory in surmounting energy barriers \cite{Seyler_2020}, while recent applications involve the interpretation of experiments on the folding of a DNA hairpin \cite{pyo_memory_2019}. Quite detailed analytical results have also been obtained for overdamped non-Markovian stochastic flows in bi-stable potentials, in particular for exponentially correlated noise \cite{Hanggi_NM1,Hanggi_NM2,Fox,Doering,Sancho}. Characteristic of these studies is that the memory is introduced phenomenologically and/or the systems typically possess slow and fast degrees of freedom.
Thereby, integrating out the fast degrees of freedom leads to memory, and time-scales similar to, or longer than, the correlation time are of interest. Here, we are interested in the scenario where the background degrees of freedom (i.e. those that become integrated out) relax on exactly the same time scale as the observable. In particular, we are interested in the relaxation dynamics of a tagged-particle in a single-file of Brownian particles confined to a bi-stable potential, and investigate the r\^ole of the height of the potential barrier. Projecting out particles' positions introduces memory and strongly breaks Markovianity \cite{lapolla_manifestations_2019}. The more particles' coordinates are integrated out, the more strongly Markovianity is broken \cite{lapolla_manifestations_2019}. A distinguishing characteristic of our approach with respect to the existing literature is, therefore, that we do not introduce memory phenomenologically via a generalized Langevin equation. Instead, the memory arises explicitly as a result of projecting out degrees of freedom in an exactly solvable Markovian many-body system. This is important because any external potential in general also affects the memory in the tagged-particle's dynamics \cite{ZwanzigLN,Zwanzig_Book,lapolla_manifestations_2019}. One therefore may \emph{not} employ ad hoc memory-kernels that are independent of the external potential, except when the potential \textcolor{black}{does not act on the background degrees of freedom} and the \textcolor{black}{interaction between the} background degrees of freedom \textcolor{black}{and} the tagged-particle is \textcolor{black}{harmonic or} negligibly weak. Single-file models are generically used to describe strongly correlated, effectively one-dimensional, systems and processes, e.g. biological channels \cite{hummer_water_2001}, transport in zeolites \cite{chou_entropy-driven_1999}, crowding effects in gene regulation \cite{gene,li_effects_2009}, superionic conductors \cite{richards_theory_1977}, and strongly correlated one-dimensional soft matter systems in general \cite{taloni_single_2017,Lutz,Lin,Locatelli}. Over the past years diverse theoretical studies have yielded deep insight into the anomalous tagged-particle diffusion \cite{harris_diffusion_1965, jepsen_dynamics_1965, lizana_single-file_2008, lizana_diffusion_2009,barkai_theory_2009, barkai_diffusion_2010, leibovich_everlasting_2013,flomenbom_single-file_2008,metzler_ageing_2014} and the emergence and meaning of memory \cite{lizana_foundation_2010,lapolla_unfolding_2018,lapolla_manifestations_2019}. Single-file diffusion in potential landscapes has been studied by computer simulations \cite{Goldt_2014}. It is well known that a tagged-particle's diffusion in any homogeneous overdamped system of identically interacting particles with excluded mutual passage is asymptotically subdiffusive, i.e. the tagged-particle's mean squared displacement asymptotically scales as $\langle x^2\rangle \sim t^{1/2}$ (see e.g. Ref.~\onlinecite{Kollman}). However, the manner in which crowding/steric obstruction and particle correlations affect memory in barrier-crossing, and in particular in the relaxation towards equilibrium, \textcolor{black}{and how such memory can be inferred and quantified from measurable physical observables}, has so far remained elusive.
\textcolor{black}{Our results are relevant in the context of search processes of proteins on DNA in the presence of macromolecular crowding involved in transcription regulation, and on a conceptual level for transport in ion-channels. More generally, the methodological framework presented here does not require an analytical solution of the problem to be known, and can thus also be applied in the analysis of experiments or computer simulations.} In this work we provide in Sec.~\ref{Theory} an analytical solution to the problem using the coordinate Bethe ansatz. In Sec.~\ref{linear} we analyze \textcolor{black}{the equilibrium correlation functions and underlying linear memory kernel} as a function of the barrier height and number of particles in the single-file. Sec.~\ref{fixed} addresses the relaxation to equilibrium from a fixed, non-equilibrium initial condition of the tagged-particle in the `confining' and `pushing' scenarios, respectively. We conclude with a brief discussion including potential applications and extensions of our results. \section{Theory}\label{Theory} We consider a single-file of $N$ point-particles confined to a box of length $L=2\pi$. In the center of the box there is a square-top energy barrier of width $\pi$ and height $U_b$ (see Fig.~\ref{fig:meanbarrier}a). More precisely, each particle experiences the potential \cite{morsch_one-dimensional_1979, risken_fokker-planck_1996} \begin{equation} U(x)=\begin{cases} 0, & \pi> |x| > \pi/2\\ U_b, & |x| \leq \pi/2\\ \infty, &\text{otherwise}. \end{cases} \label{pot} \end{equation} The particles move according to overdamped Brownian dynamics \textcolor{black}{but are not allowed to cross}. For simplicity and without loss of generality we set $D=1$, which is equivalent to expressing time in units of $4\pi^2/D$, and express $U$ in units of thermal energy $k_{\rm B}T$, i.e. $U\to U/k_{\rm B}T$. The probability density of the set of positions $\{x_i\}=\mathbf{x}$ of the $N$ particles evolves according to the many-body Fokker-Planck equation \begin{equation} \left(\partial_t-\sum_{i=1}^N\left[ \partial_{x_i}^2+\partial_{x_i}\{\partial_{x_i}U(x_i)\}\right]\right)G(\mathbf{x},t|\mathbf{x}_0)=0, \label{FPE} \end{equation} with initial condition $G(\mathbf{x},0|\mathbf{x}_0)=\prod_{i=1}^N\delta(x_i-x_{0i})$ and where the operator in curly brackets $\{\}$ acts only within the bracket. Eq.~(\ref{FPE}) is equipped with the set of external and internal boundary conditions \begin{eqnarray} \partial_{x_1}G(\mathbf{x},t|\mathbf{x}_0)|_{x_1=-\pi}=\partial_{x_N}G(\mathbf{x},t|\mathbf{x}_0)|_{x_N=\pi}&=&0\nonumber\\ \left(\partial_{x_{i+1}}-\partial_{x_i}\right)G(\mathbf{x},t|\mathbf{x}_0)|_{x_{i+1}=x_i}&=&0, \label{BC} \end{eqnarray} and is solved exactly using the coordinate Bethe ansatz (for technical details refer to Refs.~\onlinecite{lapolla_unfolding_2018,lapolla_manifestations_2019,lapolla_bethesf_2020}). \textcolor{black}{In a nutshell, the Bethe ansatz solution exploits the intuitive fact that a trajectory of $N$ identical non-crossing Brownian particles is \emph{identical} to that of an ideal Brownian gas if we re-label the particle indices such that they are ordered at all times. As a result, one can construct the probability density of the set of particles' positions $\mathbf{x}$ by a suitable permutation of the products of probability densities of individual, non-interacting particles.
In turn, an eigenfunction expansion of the many-body Fokker-Planck operator can be obtained by permuting products of single-particle eigenspectra, which is what we exploit in the present article.} The resulting \textcolor{black}{Bethe} many-body Green's function reads \begin{equation} G(\mathbf{x},t|\mathbf{x}_0)=\sum_{\mathbf{k}} \Psi_\mathbf{k}^R(\mathbf{x})\Psi_\mathbf{k}^L(\mathbf{x}_0)\mathrm{e}^{-\Lambda_\mathbf{k}t} \label{Greens} \end{equation} where $\Psi_\mathbf{k}^L(\mathbf{x})$ and $\Psi_\mathbf{k}^R(\mathbf{x})$ are the so-called left and right Bethe eigenfunctions, respectively, defined as \begin{equation} \Psi_\mathbf{k}^{L,R}(\mathbf{x})\equiv\mathcal{N}^{1/2}\hat{O}_{\mathbf{x}}\sum_{\{\mathbf{k}\}} \prod_{i=1}^N \psi_{k_i}^{L,R}(x_i), \label{eigf} \end{equation} where $\psi_{n}^{L,R}(x)$ are the orthonormal eigenfunctions of the single-particle problem (given in Appendix), the sum over $\{\mathbf{k}\}$ refers to the sum over all permutations of the multiset $\mathbf{k}$, and $\mathcal{N}$ is the number of such permutations. $\Lambda_\mathbf{k}=\sum_{i=1}^N \lambda_{k_i}$ refers to the Bethe eigenvalue with multi-index $\mathbf{k}=\{k_i\},i\in [1,N]$, and $\hat{O}_\mathbf{x}$ is the particle-ordering operator, which ensures that $x_1\le \cdots\le x_i\le \ldots\le x_N$. Moreover, $\lambda_{n}$ denote the eigenvalues of the respective one-body problem given by \cite{morsch_one-dimensional_1979,risken_fokker-planck_1996} \begin{equation} \lambda_n= \begin{dcases} \frac{n^2}{4},& \mathrm{mod}(n,4)=0,\\ \left(\frac{n-1}{2}+\nu\right)^2,& \mathrm{mod}(n,4)=1,\\ \frac{n^2}{4},& \mathrm{mod}(n,4)=2,\\ \left(\frac{n+1}{2}-\nu\right)^2,& \mathrm{mod}(n,4)=3 \end{dcases} \label{evs} \end{equation} where $\nu=2\arctan(\mathrm{e}^{-U_b/2})/\pi$ and $\mathrm{mod}(k,l)$ stands for the remainder of the division $k/l$. We are interested in the non-Markovian probability density of $x_i$, the position of the $i$-th tagged-particle under the condition that the initial positions of the remaining particles are drawn from those equilibrium configurations that contain particle $i$ at $x_{0i}$, which reads (for a derivation see Refs.~\onlinecite{lapolla_unfolding_2018,lapolla_manifestations_2019,lapolla_bethesf_2020}) \begin{equation} \mathcal{G}(x_i,t|x_{0i})=V_{\mathbf{0}\mathbf{0}}^{-1}(x_{0i})\sum_\mathbf{k} V_{\mathbf{0}\mathbf{k}}(x_i)V_{\mathbf{k}\mathbf{0}}(x_{0i})\mathrm{e}^{-\Lambda_\mathbf{k}t}, \label{NMGreen} \end{equation} where the `overlap elements' $V_{\mathbf{k}\mathbf{l}}(x_i)$ are defined as \cite{lapolla_bethesf_2020} \begin{equation} V_{\mathbf{k}\mathbf{l}}(x_i)=\frac{m_{\mathbf{l}}}{N_L!N_R!}\sum_{\{\mathbf{k}\}}\sum_{\{\mathbf{l}\}}\psi_{k_i}^R(x_i)\psi_{l_i}^L(x_i) \prod_{n=1}^{i-1}L_n(x_i)\!\!\!\!\!\prod_{m=i+1}^{N}\!\!\!\!R_m(x_i) \label{overlap} \end{equation} with $m_{\mathbf{l}}$ being the multiplicity of the multiset $\mathbf{l}$, and $N_L=i-1$ and $N_R=N-i$ are, respectively, the number of particles to the left and right of the tagged-particle. In Eq.~(\ref{overlap}) we introduced the auxiliary functions \begin{equation} L_n(x)=\int_{-\pi}^x dz \psi_{l_n}^L(z)\psi_{k_n}^R(z),\quad R_n(x)=\int_{x}^\pi dz\psi_{l_n}^L(z)\psi_{k_n}^R(z). \label{aux} \end{equation} Note that the equilibrium probability density of the tagged-particle's position is given by (see Eq.~(\ref{NMGreen})) $\mathcal{P}_{\rm eq}(x_i)\equiv\lim_{t\to\infty}\mathcal{G}(x_i,t|x_{0i})=V_{\mathbf{0}\mathbf{0}}(x_i)$ and is depicted for various values of $U_b$ in Fig.~\ref{fig:meanbarrier}b-d.
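The single-particle spectrum given above and the composite Bethe eigenvalues $\Lambda_\mathbf{k}$ are straightforward to evaluate numerically; a minimal sketch (purely illustrative, and not the optimized implementation of Ref.~\onlinecite{lapolla_bethesf_2020}) reads:

\begin{verbatim}
import numpy as np

def lam(n, Ub):
    # single-particle eigenvalues lambda_n for barrier height Ub
    nu = 2.0 * np.arctan(np.exp(-Ub / 2.0)) / np.pi
    r = n % 4
    if r in (0, 2):
        return n**2 / 4.0
    if r == 1:
        return ((n - 1) / 2.0 + nu)**2
    return ((n + 1) / 2.0 - nu)**2          # r == 3

def Lambda(k, Ub):
    # Bethe eigenvalue Lambda_k = sum_i lambda_{k_i} for a multiset k
    return sum(lam(ki, Ub) for ki in k)

# slowest nonzero mode of a single-file with N = 5 particles:
print(Lambda((1, 0, 0, 0, 0), Ub=4.0))      # equals lambda_1 = nu**2
\end{verbatim}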
Intuitively, as $U_b$ increases particles become expelled from the barrier region. In Ref.~\onlinecite{lapolla_bethesf_2020} we have developed an algorithm designed to efficiently cope with the combinatorial complexity of the implementation of the analytical solution in Eq.~(\ref{NMGreen}). Due to the piece-wise constant nature of the potential $U(x)$ in Eq.~(\ref{pot}) all integrals (\ref{aux}) can be computed analytically. As the resulting expressions are lengthy we do not show them here. Instead, they are readily implemented in an extension of the code published in Ref.~\onlinecite{lapolla_bethesf_2020} (see Supplementary Material). \begin{figure} \centering \includegraphics[width=0.48\textwidth]{meanbarrier.pdf} \caption{a) Schematic of the potential $U(x)$ defined in Eq.~(\ref{pot}); Single-file with $N=5$ particles. Throughout this work we either tag the first (magenta stands for $i=1$) or the last (orange stands for $i=N$) particle. b), c) and d) depict, respectively, the equilibrium probability distribution $\mathcal{P}_{\rm eq}(x_i)$ for $i=1$ (b), $i=3$ (c) and $i=5$ (d) in a single-file with $N=5$ for different barrier heights $U_b$.} \label{fig:meanbarrier} \end{figure} \section{Linear correlations at equilibrium} \label{linear} First we consider linear correlations at equilibrium and limit the discussion in the remainder of the paper to tagging the first or last particle (i.e. throughout we set $i=1$ or $i=N$). That is, we are interested in the normalized positional autocorrelation function of a tagged-particle defined as \begin{equation} C_i(t)=\frac{\langle x_i(t)x_i(0)\rangle -\langle x_i\rangle^2}{\langle x_i^2\rangle -\langle x_i\rangle^2}, \label{autocor} \end{equation} where the covariance of the position is defined as \begin{equation} \langle x_i(t)x_i(0)\rangle\equiv\int_{-\pi}^\pi dx_i \int_{-\pi}^\pi dx_{0i}\; x_i x_{0i} \mathcal{G}(x_i,t|x_{0i}) \mathcal{P}_{\rm eq}(x_{0i}), \end{equation} and $\langle x_i^n \rangle =\int_{-\pi}^\pi dx_i\; x_i^n \mathcal{P}_{\rm eq}(x_i)$. The above integrals have been performed numerically by means of Gauss-Kronrod quadrature \cite{boost_quadrature}. Note that Eq.~(\ref{autocor}) alongside Eqs.~(\ref{eigf}-\ref{aux}) necessarily implies the structure $C_i(t)=\sum_{\mathbf{k}\ne\mathbf{0}}a_{\mathbf{k}}\mathrm{e}^{-\Lambda_{\mathbf{k}}t}$ with $\sum_{\mathbf{k}\ne\mathbf{0}}a_{\mathbf{k}}=1$ and where all $a_{\mathbf{k}}\ge 0$ \footnote{Note that Eqs.~(\ref{eigf}-\ref{aux}) imply that $a_{\mathbf{k}}$ is a positive constant times the square of a real number.}. The results for $C_1(t)$ as a function of the barrier height $U_b$ are depicted in Fig.~\ref{fig:autocorr}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{autocorr.pdf} \caption{Position autocorrelation function $C_1(t)$ of an isolated particle (a) and the leftmost tagged-particle in a single-file with five particles (b) as a function of the barrier height $U_b$. Symbols denote $C_1(\Lambda_{\mathbf{1}}^{-1})$.} \label{fig:autocorr} \end{figure} Since $U(x)$ is symmetric the autocorrelation functions of the first and last particle coincide, i.e. $C_1(t)=C_N(t)$. The autocorrelation of an isolated particle (i.e. $N=1$) in Fig.~\ref{fig:autocorr}a displays, for a given value of $U_b$, to a good approximation an exponential decay with rate $\Lambda_{\mathbf{1}}=\lambda_{1}$ given by Eq.~(\ref{evs}). This reflects that positional correlations decay predominantly due to barrier-crossing.
Conversely, as the number of particles increases $C_1(t)$ decays on multiple time-scales (see Fig.~\ref{fig:autocorr}b) and develops an ``anomalous'' shoulder on shorter time-scales\cite{lizana_foundation_2010}, whose span increases with the barrier height $U_b$. A comparison of $C_1(\Lambda_1^{-1})$ reveals that the relative decay of correlations from the relaxation time $\tau_{\rm rel}\equiv\Lambda_1^{-1}$ onward is substantially reduced, by about a factor of $2$, compared to the isolated particle case. $\tau_{\rm rel}$ denotes the time-scale on which the system reaches equilibrium from any initial condition. Note that (i) $C_1(t)$ measures relative correlations and (ii) according to Eq.~(\ref{NMGreen}) (terminal) relaxation roughly corresponds to the particles individually crossing the barrier several times. It is also important to note that the natural time-scale of a tagged-particle is set by the average collision time\cite{lapolla_unfolding_2018,lapolla_manifestations_2019} $\tau_{\rm col}=1/N^2$, which decreases with increasing $N$. That is, in units of the average number of collisions $t\to t/\tau_{\rm col}$ correlations decay more slowly for larger $N$. A common means to quantify the extent of correlations found in the literature is the so-called \emph{correlation time} $T_c$ \cite{lipari_model-free_1982, perico_positional_1993, perico_torsional_1994, kalmykov_thermally_2006}, which should be compared with the actual relaxation time $\tau_{\rm rel}$:\footnote{Note that $\tau_{\rm rel}$ does not depend on $N$.} \begin{equation} T_c=\int_0^\infty dt C_i(t), \,\, \tau_{\rm rel}\equiv\Lambda_1^{-1}=\left(\frac{2}{\pi}\arctan(\mathrm{e}^{-U_b/2})\right)^{-2}, \label{ctime} \end{equation} where we note that for high barriers, i.e. $U_b\gg 1$, the relaxation time follows the expected Arrhenius scaling $\tau_{\rm rel}\simeq \pi^2\mathrm{e}^{U_b}/4$. In Fig.~\ref{fig:corrtime}a we depict the correlation time for the leftmost particle in units of $\tau_{\rm col}$ as a function of the barrier height $U_b$ for different $N$. For an isolated particle $T_c=T_c^{\rm isolated}$ agrees very well with $\tau_{\rm rel}$ for all values of $U_b$, confirming the idea that $C_1(t)$ decays to a very good approximation as a single exponential. Note that, for systems obeying detailed balance, the mathematical structure of $C_i(t)$ trivially implies a shorter correlation time as soon as $C_i(t)$ decays on multiple time-scales if the longest time-scale $\Lambda_{\mathbf{1}}^{-1}$ is the same. This is particularly true when comparing $C_i(t)$ of a tagged-particle in a single-file with an isolated particle. Namely, \begin{equation} T_c=\sum_{\mathbf{k}\ne\mathbf{0}}a_\mathbf{k}/\Lambda_{\mathbf{k}}\le\sum_{\mathbf{k}\ne\mathbf{0}}a_\mathbf{k}/\Lambda_{\mathbf{1}}= \Lambda_{\mathbf{1}}^{-1}\approx T_c^{\rm isolated}. \label{ineq} \end{equation} Therefore, the interpretation of $T_c$ should always be made cautiously; in the particular case of tagged-particle diffusion in a single-file, $T_c$ is not meaningful on an absolute scale. However, it becomes somewhat more meaningful on the natural time-scale, i.e. when time is expressed in terms of the average number of inter-particle collisions (see also Ref.~\onlinecite{lapolla_manifestations_2019}). Inspecting $C_1(t)$ on this natural time scale we find in Fig.~\ref{fig:corrtime}a that the tagged-particle on average undergoes more collisions before it decorrelates for larger values of $N$, and this number increases with increasing $U_b$.
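The relation between $T_c$ and $\tau_{\rm rel}$ in Eqs.~(\ref{ctime}) and (\ref{ineq}) is easily checked numerically; the following sketch uses a small set of hypothetical spectral weights $(a_\mathbf{k},\Lambda_\mathbf{k})$ in place of the overlap elements of Eq.~(\ref{overlap}) and is meant purely as an illustration.
\begin{verbatim}
// Illustration of Eq. (ctime) and of T_c = sum_k a_k/Lambda_k from Eq. (ineq);
// the spectral weights below are hypothetical placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const double pi = std::acos(-1.0);
  const double Ub = 6.0;

  // exact relaxation time and its high-barrier (Arrhenius) estimate
  const double tau_rel =
      std::pow((2.0 / pi) * std::atan(std::exp(-Ub / 2.0)), -2.0);
  const double tau_arr = pi * pi * std::exp(Ub) / 4.0;
  std::printf("tau_rel = %.2f, Arrhenius estimate = %.2f\n", tau_rel, tau_arr);

  // hypothetical normalized weights (sum a_k = 1) and Bethe eigenvalues
  const std::vector<double> a = {0.55, 0.30, 0.15};
  const std::vector<double> L = {1.0 / tau_rel, 5.0 / tau_rel, 20.0 / tau_rel};

  double Tc = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) Tc += a[i] / L[i];
  std::printf("T_c = %.2f <= tau_rel = %.2f\n", Tc, tau_rel);
}
\end{verbatim}
As soon as more than one mode carries weight, $T_c$ drops below $\tau_{\rm rel}$, which is precisely the effect discussed above.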
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{corrtime.pdf} \caption{a) $T_c/\tau_{\rm col}$ for the first particle (i.e. $i=1$) as a function of the barrier-height $U_b$ for various values of $N$; The full line depicts $\tau_{\rm rel}\equiv\Lambda_1^{-1}$. b) Ratio $T_c/\tau_{\rm rel}$ as a function of the barrier-height $U_b$ for various values of $N$.} \label{fig:corrtime} \end{figure} Moreover, as $N$ increases the space explored by a tagged-particle becomes progressively more confined \cite{lapolla_manifestations_2019} rendering the correlation time $T_c$ on an absolute time-scale also intuitively shorter. Indeed, in Fig.~\ref{fig:corrtime}b we depict the ratio $T_c/\Lambda_1^{-1}$ which decreases with increasing $N$ for any barrier height $U_b$. Note that $\Lambda_1^{-1}$ is independent of $N$ and the breaking of Markovianity (reflected, e.g. in the violation of the Chapman-Kolmogorov semi-group property \cite{lapolla_manifestations_2019}) is encoded entirely in the overlap elements $V_{\mathbf{0k}},V_{\mathbf{k0}}$. For systems with microscopically reversible dynamics $T_c/\Lambda_1^{-1}<1$ quite generally implies that relaxation evolves on multiple time-scales. Thus, the results in Fig.~\ref{fig:corrtime}b suggest, in agreement with intuition, that more and more time-scales are involved in the relaxation of a tagged-particle's position in equilibrium as we increase $N$. In other words, on the level of linear correlations signatures of memory of the initial conditions of 'latent'/background particles are reflected in the multi-scale relaxation of $C_1(t)$. As shown by Zwanzig from first principles \cite{ZwanzigLN,Zwanzig_Book} one can also analyze the memory encoded in $C_i(t)$ defined in Eq.~(\ref{autocor}) in terms of a \emph{memory function} $K_U(t)$ defined through \begin{equation} \frac{d}{dt}C_i(t)=-\int_0^tK_U(s)C_i(t-s)ds, \label{kernel} \end{equation} where the subscript $U$ is included to stress that the memory kernel depends on the external potential. \textcolor{black}{The kinetic equation} (\ref{kernel}) is obeyed \emph{exactly} \cite{ZwanzigLN,Zwanzig_Book}. \textcolor{black}{Note that the memory kernel $K_U(t)$ in the \emph{linear} kinetic equation (\ref{kernel}) is \emph{not} equivalent to the memory kernel entering a \emph{non-linear} generalized Langevin equation for a tagged-particle motion in a potential of mean force \cite{Zwanzig_Book,Makarov,Makarov2}. If, however, one were to compute $C_i(t)$ from such a non-linear generalized Langevin equation this would yield Eq.~(\ref{kernel}). Here, we aim to connect quantitatively the different signatures of memory encoded in $C_i(t)$ and the correlation time $T_c$ solely by means of the information encoded in $C_i(t)$. Note that this approach is simple and model-free, and can therefore directly be used in the analysis of experimental and simulation data. Alternatively one may equally well use a non-linear framework (for an excellent recent example see Ref.~\onlinecite{Sollich}) that, however, requires more effort as well as a more detailed input.} We determine $K_U(s)$ from the Laplace transform of Eq.~(\ref{kernel}) \begin{equation} \tilde{K}_U(u)=\frac{1}{\sum_{\mathbf{k}\ne\mathbf{0}}a_{\mathbf{k}}/(\Lambda_{\mathbf{k}}+u)}-u \label{Lap_kernel} \end{equation} where $\tilde f(u)\equiv \int_0^\infty\mathrm{e}^{-ut}f(t)dt$. The inverse Laplace transform is in turn determined by numerically inverting Eq.~(\ref{Lap_kernel}) using the \emph{fixed Talbot method} \cite{Abate}. 
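A compact realization of this inversion is sketched below; the contour parameters follow the common fixed Talbot choice $r=2M/(5t)$, and the truncated spectrum entering Eq.~(\ref{Lap_kernel}) is replaced by a handful of placeholder pairs $(a_\mathbf{k},\Lambda_\mathbf{k})$, so the fragment is illustrative rather than the production code.
\begin{verbatim}
// Sketch of the numerical inversion of Eq. (Lap_kernel) with the fixed Talbot
// method (r = 2M/(5t)); the spectral pairs below are placeholders.  The
// delta-like contribution to K_U(t) at t -> 0 is not resolved by this scheme.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

typedef std::complex<double> cplx;

cplx K_tilde(cplx u, const std::vector<double>& a, const std::vector<double>& L) {
  cplx denom = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) denom += a[i] / (L[i] + u);
  return 1.0 / denom - u;                         // Eq. (Lap_kernel)
}

double talbot(const std::vector<double>& a, const std::vector<double>& L,
              double t, int M = 64) {
  const double pi = std::acos(-1.0);
  const double r  = 2.0 * M / (5.0 * t);
  double sum = 0.5 * std::real(K_tilde(r, a, L)) * std::exp(r * t);
  for (int k = 1; k < M; ++k) {
    const double th  = k * pi / M;
    const double cot = std::cos(th) / std::sin(th);
    const cplx   s   = r * th * cplx(cot, 1.0);
    const double sig = th + (th * cot - 1.0) * cot;
    sum += std::real(std::exp(s * t) * K_tilde(s, a, L) * cplx(1.0, sig));
  }
  return (r / M) * sum;
}

int main() {
  const std::vector<double> a = {0.6, 0.3, 0.1};    // placeholder weights
  const std::vector<double> L = {0.01, 1.0, 4.0};   // placeholder eigenvalues
  for (double t : {0.1, 1.0, 10.0, 100.0})
    std::printf("K_U(t = %6.1f) ~ %+.5f\n", t, talbot(a, L, t));
}
\end{verbatim}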
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{memory_kernel.pdf} \caption{Memory kernel $-K_U(t)$ of an isolated particle (a) and the leftmost tagged-particle in a single-file with five particles (b) as a function of the barrier height $U_b$ shown on a double logarithmic scale. Inset: the same results shown on a linear-logarithmic scale. We used 900 pairs $a_{\mathbf{k}},\Lambda_{\mathbf{k}}$ to determine $\tilde{K}_U(u)$ in Eq.~(\ref{Lap_kernel}). Note that the range of $t$ in (a) and (b) is different.} \label{memKer} \end{figure}\\ \indent By construction $\int_0^tK_U(s)C_i(t-s)ds\ge 0$ and therefore $K_U(t)$ must have a positive contribution at least for $t\to 0$. For example, if $C_i(t)$ decays \emph{exactly} as a single exponential with rate $\Lambda$ we have $\tilde{K}_U(u)=\Lambda$ and hence $K_U(t)=\Lambda \delta(t)$. In fact, one can show by means of Mori's projection-operator formalism that $K_U(t)$ has the generic structure $K_U(t)=a\delta(t)+\zeta(t)$ where $a>0$ and $\zeta(t)$ is a smooth function of $t$ (see e.g. Eq.~(8.61) in Ref.~\onlinecite{Zwanzig_Book}).\\ \indent More generally, when memory is short-lived, that is, when $K(t)$ decays rapidly compared to the time-scale on which $C_i(t)$ changes appreciably, we may approximate Eq.~(\ref{kernel}) as $\frac{d}{dt}C_i(t)\approx-(\int_0^\infty K(s)ds) C_i(t)\equiv-\Lambda C_i(t)$ and hence the auto-correlation decays approximately as a single exponential $C_i(t)\approx \mathrm{e}^{-\Lambda t}$ -- the system is therefore said to be effectively memoryless (Markovian) \cite{ZwanzigLN,Zwanzig_Book}. \\ \indent The memory kernel of an isolated particle and that of the leftmost tagged-particle in a single-file is depicted in Fig.~\ref{memKer} (note that we depict $-K_U(t)>0$ implying an anti-persistent motion). The unavoidable truncation of the spectral solution (\ref{NMGreen}) does not allow us to determine $K_U(t)$ in the limit $t\to 0$. In the case of an isolated particle (Fig.~\ref{memKer}a) the memory is short-lived and decays on a time-scale $t\ll \Lambda_1^{-1}$ much shorter than the relaxation time and decreases with the barrier height $U_b$. This agrees with the essentially single-exponential decay of $C_1(t)$ found in Fig.~\ref{fig:autocorr} and implies that the dynamics is essentially memoryless \cite{ZwanzigLN,Zwanzig_Book}.\\ \indent Conversely, in the case of a tagged-particle (see Fig.~\ref{memKer}b) $K_U(t)$ displays a strikingly different behavior. First, the range where $K_U(t)$ considerably differs from zero is orders of magnitude longer and increases with increasing barrier height $U_b$. Second, the anti-persistence of $K_U(t)$ is much larger than for an isolated particle, and third for high barriers $K_U(t)$ develops a shoulder indicating a transiently stalled decay of memory, presumably due to ``jamming'' in front of the barrier prior to crossing. The latter acts as a transient entropic trap.\\ \indent Importantly, $U_b$ strongly and non-trivially affects the memory in the tagged-particle's motion (compare with the trivial effect of $U_b$ on $K(t)$ for an isolated particle in Fig.~\ref{memKer}a). As a result, any microscopically consistent memory kernel $K_U(t)$ must depend on the external potential $U(x)$. The reason is twofold: (i) the external potential also acts on the background degrees of freedom and (ii) the coupling of the background degrees of freedom and the tagged-particle's motion is strong. 
In fact, whenever (i) and/or (ii) hold, the external potential generally alters the memory. Conclusive evidence that the potential affects the memory function has been found e.g. by atomistic computer simulations of molecular solutes in water \cite{Netz_PRX}.\\ \indent In contrast to $C_1(t)$ depicted in Fig.~\ref{fig:autocorr}, which on an absolute time-scale decays to zero faster for larger $N$ as a result of being normalized and having the same relaxation time $\Lambda_1^{-1}$, the memory kernel $K_U(t)$ clearly displays long-time memory effects that become more pronounced as $N$ increases. This can be understood by noticing that $K_U(t)$ in Eq.~(\ref{kernel}) is unaffected by the normalization. The memory-kernel is thus more informative than $C_1(t)$ and less ambiguous than correlation times $T_c$. Moreover, it is not required that $C_i(t)$ is known analytically in order to apply the analysis. \section{Relaxation from a pinned configuration}\label{fixed} We now focus on the 'complete' (i.e. including correlations to all orders) relaxation to equilibrium from a \emph{pinned} configuration. That is, we are interested in those initial configurations where either the first ($i=1$) or the last ($i=N$) particle is pinned at $x_0$, while the initial conditions of the remaining particles are drawn from the corresponding pinned equilibria (i.e. those equilibrium many-body configurations where the first/last particle is located at $x_0$). \textcolor{black}{In this non-stationary setting the analysis of memory kernels seems less sensible, since these would depend explicitly on time as well as $x_0$.} We quantify the relaxation dynamics by means of $\mathcal{D}(t,x_{0i})$, the Kullback-Leibler divergence \cite{s._kullback_information_1951} between the non-Markovian probability density of the tagged-particle's position at time $t$, $\mathcal{G}(x_i,t|x_{0i})$ in Eq.~(\ref{NMGreen}), and the respective equilibrium density $\mathcal{P}_{\rm eq}(x_i)\equiv\lim_{t\to\infty}\mathcal{G}(x_i,t|x_{0i})$: \begin{equation} \mathcal{D}(t,x_{0i})\equiv \int_{-\pi}^{\pi} dx \mathcal{G}(x,t|x_{0i})\ln\left(\frac{\mathcal{G}(x,t|x_{0i})}{\mathcal{P}_{\rm eq}(x)}\right). \label{kldiv} \end{equation} In physical terms $\mathcal{D}(t,x_{0i})$ represents the displacement from equilibrium in the sense of an \emph{excess instantaneous free energy}, i.e. $k_{\rm B}T\mathcal{D}(t,x_{0i})=F(t)-F$ \cite{Mackey_1989,Qian_2013,Lapolla_PRL}. Since the integral in Eq.~(\ref{kldiv}) cannot be performed analytically we evaluate it numerically. We always pin the initial position of the tagged-particle at $x_0=-2$. According to the effect of the pinning on the relaxation of the tagged-particle, the scenario in which we tag the first particle is referred to as \emph{'confining'} (since background particles obstruct the relaxation of the tagged-particle) and the one in which we tag the last particle as \emph{'pushing'} (since background particles exert an entropic force pushing the tagged-particle over the barrier). $\mathcal{D}(t,x_{0i})$ as a function of the barrier height $U_b$ for $N=5$ and $N=9$ is shown in Fig.~\ref{fig:klrelax}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{klrelax.pdf} \caption{Time evolution of $\mathcal{D}(t,x_{0i})$ for various barrier-heights $U_b$ for $N=5$ (a -- confining; b -- pushing) and $N=9$ (c -- confining; d -- pushing), respectively.
The symbols denote $\mathcal{D}(\Lambda_{\mathbf{1}}^{-1},x_{0i})$.} \label{fig:klrelax} \end{figure} Note that $\lim_{t\to 0}\mathcal{D}(t,x_{0i})=\infty$ irrespective of $N$ and $U_b$ since we are comparing a delta distribution with a smooth probability density. Conversely, in an arbitrarily small time interval $\tau_\epsilon>0$ the non-Markovian tagged-particle density $\mathcal{G}(x,t|x_{0i})$ evolves to a smooth, well-behaved probability density such that $\mathcal{D}(t>0,x_{0i})$ is always finite and the 'pathology' at $t=0$ is mathematical and not physical. With this in mind we observe in Fig.~\ref{fig:klrelax} a striking difference between the 'confining' and 'pushing' scenario. In the 'confining' setting $\mathcal{D}(t,x_{01})$ at a fixed time $t$ is a monotonically increasing function of $U_b$ and as a function of time decays on a time-scale that seems to be rather independent of $U_b$. In the 'confining' scenario an increase of $U_b$ displaces the system at $t=0^+$ further from equilibrium. This is intuitive because $\mathcal{P}_{\rm eq}(x_1)$ becomes more strongly confined to the boundary and hence away from $x_0$. To a dominant extent relaxation occurs already on time-scales $t\gtrsim 1\ll \Lambda_{\mathbf{1}}^{-1}$. The reason may be found in the fact that $\Lambda_{\mathbf{1}}^{-1}$ corresponds to the mixing/ergodic time-scale on which the full single-file (and thus the tagged-particle) explores the entire system. In the 'confining' scenario the background particles are drawn from a distribution that resembles closely the unconstrained equilibrium and, in addition, the tagged-particle is nominally unlikely to be found in the right well in equilibrium. Therefore, the fraction of paths that cross the barrier in the ensemble of relaxation paths is small, rendering $V_{\mathbf{0k}}V_{\mathbf{k0}}$ for low-lying $\mathbf{k}$ essentially negligible (see Eq.~(\ref{NMGreen})). Nevertheless, a second, slower relaxation stage is still discernible at $t\gtrsim 1$. Conversely, in the 'pushing' scenario depicted in Fig.~\ref{fig:klrelax}b and \ref{fig:klrelax}d we find (i) the dependence of $\mathcal{D}(0^+,x_{01})$ on $U_b$ to be inverted, and (ii) for given $N$ and $U_b$ relaxation extends to much longer time-scales compared to the 'confining' scenario. In order to rationalize (i) we consider a pair of barriers $U_{b_1},U_{b_2}$ and take the limit \begin{equation} \lim_{t\to 0}\left(\mathcal{D}^{b_1}(t,x_0)-\mathcal{D}^{b_2}(t,x_0)\right)=\ln\left(\mathcal{P}^{b_2}_{\rm eq}(x_0)/\mathcal{P}^{b_1}_{\rm eq}(x_0)\right), \label{limit} \end{equation} which is finite and well defined despite the fact that $\lim_{t\to 0}\mathcal{D}^{b_1,b_2}(t,x_0)$ are infinite. Eq.~(\ref{limit}) explains that the dependence of $\mathcal{D}(0^+,x_0)$ on $U_b$ is not unique and depends on the pinning point $x_0$ which determines whether or not $\mathcal{P}^{b_2}_{\rm eq}(x_0)/\mathcal{P}^{b_1}_{\rm eq}(x_0)$ is greater or smaller than $1$ (see Fig. ~\ref{fig:meanbarrier}b and \ref{fig:meanbarrier}d). (ii) can be understood by an extension of the argument put forward in the discussion of the 'confining' scenario, i.e. as a result of the pinning the initial configurations of the background particles are displaced much further away from equilibrium, rendering $V_{\mathbf{0k}}V_{\mathbf{k0}}$ for low-lying $\mathbf{k}$ substantial (see Eq.~(\ref{NMGreen})). Therefore, a pronounced second relaxation stage is visible at longer times $t\gtrsim 1$. 
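A minimal sketch of the kind of quadrature needed for Eq.~(\ref{kldiv}) is given below; for illustration it compares the isolated-particle ($N=1$) equilibrium densities for two barrier heights, whereas in the actual calculations the non-Markovian density of Eq.~(\ref{NMGreen}) enters as the first density.
\begin{verbatim}
// Minimal sketch: trapezoidal evaluation of a Kullback-Leibler divergence of
// the form of Eq. (kldiv).  Both densities here are equilibrium densities of an
// isolated particle in the piece-wise constant barrier potential U(x); in
// practice the time-dependent density of Eq. (NMGreen) takes the place of p.
#include <cmath>
#include <cstdio>
#include <functional>

static const double PI = std::acos(-1.0);

double U(double x, double Ub)   { return std::fabs(x) <= PI / 2.0 ? Ub : 0.0; }
double Peq(double x, double Ub) {
  return std::exp(-U(x, Ub)) / (PI * (1.0 + std::exp(-Ub)));
}

double kl(const std::function<double(double)>& p,
          const std::function<double(double)>& q, int n = 4000) {
  const double h = 2.0 * PI / n;
  double sum = 0.0;
  for (int i = 0; i <= n; ++i) {
    const double x = -PI + i * h;
    const double w = (i == 0 || i == n) ? 0.5 : 1.0;  // trapezoidal weights
    if (p(x) > 0.0) sum += w * p(x) * std::log(p(x) / q(x));
  }
  return h * sum;
}

int main() {
  auto p4 = [](double x) { return Peq(x, 4.0); };
  auto p1 = [](double x) { return Peq(x, 1.0); };
  std::printf("D(Peq_4||Peq_1) = %.5f\n", kl(p4, p1));
  std::printf("D(Peq_4||Peq_4) = %.5f\n", kl(p4, p4));  // vanishes, as it must
}
\end{verbatim}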
Based on Fig.~\ref{fig:klrelax} alone we are not able to deduce whether these observations are a trivial consequence of the pinning in the sense that they have nothing to do with memory (note that a Markov process 'remembers' the initial condition up to $\sim \tau_{\rm rel}$) or whether they are in fact a signature of memory in the dynamics. Additional insight is gained by inspecting the relaxation of the full, Markovian single-file evolving from the same initial condition, i.e. \begin{equation} \mathcal{D}_M(t,x_0)\equiv \left[\prod_{i=1}^N\int_{-\pi}^{\pi} dx_i\right] G(\mathbf{x},t,P_0)\ln\left(\frac{G(\mathbf{x},t,P_0)}{P_{\rm eq}(\mathbf{x})}\right), \label{kldivM} \end{equation} where we have introduced the joint Markovian two-point probability density $G(\mathbf{x},t,P_{0})\equiv \int d\mathbf{y}_0G(\mathbf{x},t|\mathbf{y}_{0})P_0(\mathbf{y}_0)$ whereby, for $-\pi<x_0<-\pi/2$, $P_0(\mathbf{x})$ is defined as \begin{equation} P_0(\mathbf{x})=N!(\pi+x_0)^{-N_L}\hat{O}_{\mathbf{x}}\delta(x_{i}-x_0)\frac{\mathrm{e}^{-U_b\sum_{j=i+1}^N\theta(\pi/2-|x_j|)}}{(\pi\mathrm{e}^{-U_b}-x_0)^{N_R}}, \label{init} \end{equation} where $\theta(x)$ is the Heaviside step function, and $P_{\rm eq}(\mathbf{x})=\lim_{t\to\infty}G(\mathbf{x},t|\mathbf{x}_{0})$. The integration in Eq.~(\ref{kldivM}) can be performed analytically (for details please see Ref.~\onlinecite{Lapolla_PRL}). Introducing the two-point joint density of the single-particle problem $\Gamma_t(x,a,b)\equiv\sum_{k}\psi^R_k(x)[\int_{a}^{b}dy\psi^L_k(y)P_{0}(y)]\mathrm{e}^{-\lambda_kt}$ with $P_{0}(y)\equiv \theta(-\pi/2-y)/(\pi+x_0)+\theta(y+\pi/2)\mathrm{e}^{-U(y)}/(\pi\mathrm{e}^{-U_b}-x_0)$ as well as the auxiliary function \begin{equation} \Xi_t(a,b)=\int_a^bdx\Gamma_t(x,a,b)\ln(\Gamma_t(x,a,b)/P_{\rm eq}(x)), \label{aux2} \end{equation} where $P_{\rm eq}(x)=\mathrm{e}^{-U(x)}/[\pi(1+\mathrm{e}^{-U_b})]$, the result reads \begin{equation} \mathcal{D}_M(t,x_{0})=\Xi_t(-\pi,\pi)\Xi_t(-\pi,x_0)^{N_L}\Xi_t(x_0,\pi)^{N_R}. \label{KLM} \end{equation} An explicit solution is obtained with the aid of \textsf{Mathematica} \cite{Mathematica}. As it is bulky but straightforward we do not show it here. The Markovian result in Eq.~(\ref{KLM}) for the same set of parameters as in Fig.~\ref{fig:klrelax}a and \ref{fig:klrelax}b is depicted in Fig.~\ref{FMarkov}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{markoviankldiv} \caption{Time evolution of $\mathcal{D}_M(t,x_{0i})$ for various barrier-heights $U_b$ for $N=5$ (a -- confining, $i=1$; b -- pushing, $i=N$). The symbols denote $\mathcal{D}_M(\Lambda_{\mathbf{1}}^{-1},x_{0i})$.} \label{FMarkov} \end{figure} A comparison of Figs.~\ref{fig:klrelax} and \ref{FMarkov} reveals that the second, long-time relaxation stage observed in the 'pushing' scenario in Fig.~\ref{fig:klrelax} is absent in the Markovian setting (compare Figs.~\ref{fig:klrelax} and \ref{FMarkov} and note that the relaxation time $\Lambda_{\mathbf{1}}^{-1}$ is identical in both settings). This in turn implies that the pronounced second relaxation stage in the non-Markovian, tagged-particle scenario at times $t\gtrsim 1$ is indeed a signature of memory. \section{Discussion} We identified pronounced signatures of memory in the overdamped relaxation of a tagged-particle in a single-file confined to a bi-stable potential.
On the level of linear correlations in equilibrium, memory is visible in the form of a multi-scale relaxation of the autocorrelation function (see Fig.~\ref{fig:autocorr}), \textcolor{black}{a substantial and long-ranged (linear) memory-kernel (see Fig.~\ref{memKer}b)}, and a seemingly paradoxical shortening of the so-called correlation time $T_c$ (see Fig.~\ref{fig:corrtime}). The latter was shown to be an artifact of the definition of $T_c$. When including the complete correlation-structure as encoded in the so-called excess instantaneous free energy (see Eq.~(\ref{kldiv})) distinctive signatures of memory emerge in the form of a second, late-time relaxation regime. The memory originates from the fact that the entire single-file relaxes to equilibrium in the form of linearly independent many-body eigenmodes, which become projected on the motion of a tagged particle \cite{lapolla_unfolding_2018,lapolla_manifestations_2019}. The projection couples distinct modes, thus breaking Markovianity and giving rise to memory \cite{lapolla_manifestations_2019}. It turns out to be very important which particle is tagged. Here, we were only interested in the 'confining' (all background particles in front of the tagged particle) and 'pushing' (all background particles behind the tagged particle) scenarios and found qualitatively different relaxation behavior. A systematic analysis would be required to understand the intricate details of how the number of particles on each side affects the relaxation dynamics, which is beyond the scope of \textcolor{black}{the present work}. \textcolor{black}{We have shown that the memory non-trivially depends on the external potential. That is, a tagged particle was shown to experience the external potential directly (i.e. time-locally) as well as indirectly through the effect it exerts on the memory kernel. This effect is general -- it occurs whenever the potential is sufficiently strong and either also acts on the latent degrees of freedom (i.e. those that are integrated out) or the interaction between the tagged-particle and the latent degrees of freedom is \textcolor{black}{not harmonic or, more generally,} non-negligible. Direct evidence for the effect has been found e.g. in all-atom computer simulations of the hydration of molecular solutes \cite{Netz_PRX}. It is important to keep this in mind when applying generalized Langevin equations (GLEs) with phenomenological memory kernels or microscopically consistent GLEs ``decorated'' with an external potential, as these may lead to erroneous conclusions or misinterpretations.} \textcolor{black}{The application of the methodology for extracting and analyzing memory in tagged-particle dynamics put forward in Eqs.~(\ref{autocor}), (\ref{kernel}), and~(\ref{kldiv}) does \emph{not} require $C_i(t)$ or $\mathcal{G}(x,t|x_{0i})$ to be known analytically. The quantities can equally well be determined from experiments or computer simulations. The analysis is expected to provide insight into memory effects as long as either the ``crowding'' (i.e. the concentration of background particles) or the barrier-height can be controlled. The qualitative features of the signatures of memory are expected to be preserved in most effectively one-dimensional systems with obstructed tagged-particle dynamics.
The proposed analysis of memory effects can be viewed as complementary to the analysis of anomalies in tagged-particle diffusion \cite{toolbox,Schwarzl,Metzler_2014}.} Our results can readily be tested by existing experiments probing colloidal particle systems (see e.g. Ref.~\onlinecite{Wei625,Hanes_2012,Thorneywork_2020}), and may furthermore be relevant for a theoretical description of transport in ion-channels \cite{roux_theoretical_2004, pohorille_validity_2017, kopec_direct_2018, epsztein_towards_2020}. Our results can be extended in diverse ways, most immediately by including other types of inter-particle interactions \cite{Kollman} and time-dependent energy barriers \cite{subrt_diffusion_2006}. \section{Appendix} In this Appendix we give explicit expressions for the single-particle eigenfunctions that are required in the diagonalization of the many-body Fokker-Planck operator using the so-called coordinate Bethe ansatz. For details of the solution method please see \cite{lapolla_unfolding_2018,lapolla_manifestations_2019,lapolla_bethesf_2020}. The eigenfunctions of the corresponding single-particle eigenvalue problem \begin{eqnarray} (\partial_{x}^2+\partial_{x}\{\partial_{x}U(x)\})\psi_{k}^{R}(x)&=&-\lambda_k\psi_{k}^{R}(x)\nonumber\\ (\partial_{x}^2-\{\partial_{x}U(x)\}\partial_{x})\psi_{k}^{L}(x)&=&-\lambda_k\psi_{k}^{L}(x), \end{eqnarray} where $\psi_{k}^{L,R}(x): [-\pi,\pi]\to\mathbb{R}$, allow for a spectral decomposition of the single-particle Green's function \begin{equation} \Gamma(x,t|x_0)=\sum_k\psi_{k}^{R}(x)\psi_{k}^{L}(x_0)\mathrm{e}^{-\lambda_kt}. \end{equation} $\psi_{k}^{L,R}(x)$ enter Eq.~(\ref{eigf}) and are here defined via their 'Hermitianized' counterpart $\psi_k(x):[-\pi,\pi]\to\mathbb{R}$ as $\psi_k^R(x)=\ee{-U(x)/2}\psi_k(x)$, $\psi_k^L(x)=\ee{U(x)/2}\psi_k(x)$ where\footnote{We here correct typos found in the original publications.} \begin{align} &\psi_k(x)=\sqrt{\frac{2}{1+\delta_{k,0}}}\frac{\ee{-U(x)/2}}{\sqrt{\pi(1+\ee{-U_b})}}\cos(\sqrt{\lambda_k}x), \quad \mathrm{mod}(k,4)=0\\ &\psi_k(x)=\begin{dcases} -\frac{\cos(\sqrt{\lambda_k}(x+\pi))}{\sqrt{\pi}},& x<-\pi/2 \\ \frac{\sin(\sqrt{\lambda_k}x)}{\sqrt{\pi}},& |x|\leq \pi/2 \\ \frac{\cos(\sqrt{\lambda_k}(x-\pi))}{\sqrt{\pi}},& x>\pi/2 \end{dcases}, \quad \mathrm{mod}(k,4)=1\\ &\psi_k(x)=\sqrt{2}\frac{\ee{U(x)/2}}{\sqrt{\pi(1+\ee{U_b})}}\cos(\sqrt{\lambda_k}x), \quad \mathrm{mod}(k,4)=2\\ &\psi_k(x)=\begin{dcases} \frac{\cos(\sqrt{\lambda_k}(x+\pi))}{\sqrt{\pi}},& x<-\pi/2 \\ \frac{\sin(\sqrt{\lambda_k}x)}{\sqrt{\pi}},& |x|\leq \pi/2 \\ -\frac{\cos(\sqrt{\lambda_k}(x-\pi))}{\sqrt{\pi}},& x>\pi/2 \end{dcases}, \quad \mathrm{mod}(k,4)=3, \end{align} where $\delta_{k,0}$ is the Kronecker delta. \section*{Supplementary Material} The Supplementary Material contains an extension of the code published in Ref.~\onlinecite{lapolla_bethesf_2020} that implements the analytical results presented in this manuscript. \begin{acknowledgments} The financial support from the German Research Foundation (DFG) through the Emmy Noether Program GO 2762/1-1 (to AG) is gratefully acknowledged. We thank Kristian Blom for fruitful discussions. \end{acknowledgments} \section*{DATA AVAILABILITY} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{References} \bibliographystyle{unsrt}
\section{Introduction} \IEEEPARstart{G}{eant4} \cite{g4nim,g4tns} is an object oriented toolkit for the simulation of particle interactions with matter. It provides advanced functionality for all the domains typical of detector simulation: geometry and material modelling, description of particle properties, physics processes, tracking, event and run management, user interface and visualisation. Geant4 is nowadays a mature Monte Carlo system and is used in many, multi-disciplinary experimental applications; its rich collection of physics processes and models, extending over a wide energy range, has played a key role in satisfying the needs of a large variety of experimental developments. Nevertheless, new experimental requirements have emerged in the recent years, which challenge the conventional scope of major Monte Carlo transport codes like Geant4. Research in nanodosimetry, nanotechnology-based detectors, radiation effects on components in space and at high luminosity colliders, nuclear power, plasma physics etc. have shown the need of new methodological approaches to radiation transport simulation along with new physics functionality in Geant4. A common requirement in all such research domains is the ability to change the scale at which the problem is treated in a complex simulation environment. Significant technological developments both in software and computing hardware have also occurred since the RD44 \cite{rd44} phase, which defined Geant4 design. New software techniques are available nowadays, that were not yet established at the time when Geant4 was designed. A R\&D project, named NANO5, has been recently launched \cite{mc2009,nano5_nss2009} to address fundamental methods in radiation transport simulation; it explores possible solutions to cope with the new experimental requirements and evaluates whether and how they can be supported by Geant4 kernel design. The main focus of the project lies in the simulation at different scales in the same experimental environment: this objective is associated with the research of transport methods across the current boundaries of condensed-random-walk and discrete transport schemes. This study requires electromagnetic physics processes, and related physics objects, to be lightweight and easily configurable: one of the main issues to be addressed in the project is indeed the capability of objects to adapt dynamically to the environment. For this purpose a pilot project has been set up to evaluate the current design of Geant4 electromagnetic package in view of the foreseen extension of capabilities: it investigates design techniques suitable to better support fine-grained physics customization and mutability in response to the environment. The project adopts a software process model based on the Unified Process \cite{up} framework. The software developments are motivated by concrete experimental applications, and significant effort is invested in the software design: these features of the project are well served by the Unified Process, which is use case driven and architecture-centric. The adopted software process framework involves an iterative and incremental life-cycle. \section{Generic programming techniques in physics simulation design} Metaprogramming has emerged in the last few years as a powerful design technique. In C++ the template mechanism provides naturally a rich facility for metaprogramming; Boost libraries \cite{boost} are nowadays easily available to support generic programming development. 
Metaprogramming presents several interesting advantages, which make it a worthy candidate for physics simulation design. This technique has not been exploited in the Geant4 core yet; the partial support of templates available in C++ compilers at the time of the RD44 phase was a limiting factor in the exploitation of templates in Geant4 architectural design at that stage. A preliminary investigation of generic programming techniques in a multi-platform simulation context has been carried out through the application of a policy-based class design \cite{dna}; this prototype was limited to a small physics sub-domain. An advantage of generic programming techniques over conventional object oriented programming is the potential for performance improvement. Physics modelling specialization would profit from the shift from dynamic to static polymorphism, which binds it at compile time rather than runtime, thus resulting in intrinsically faster programs. Design techniques intrinsically capable of performance gains are relevant to computationally intensive simulation domains, like calorimetry and microdosimetry; in general, the large scale simulation productions required by HEP experiments would profit from opportunities for improved physics performance. A side product of the use of generic programming techniques in Geant4 design is the improved transparency of physics models: the technology intrinsically achieves their exposure at a fine-grained level. This feature greatly facilitates the validation of the code at the microscopic level and the flexible configuration of physics processes in multiple combinations. Customization and extensibility through the provision of user-specific (or experiment-specific) functionality in the simulation are also facilitated by this technique: in fact, metaprogramming allows the user to write more expressive code that more closely corresponds to the mental model of the problem domain, like the configuration of physics modeling options in experimental applications. As a side benefit, a design based on this technique would naturally overcome all the current issues about duplicated or competing functionality in different Geant4 physics packages. It is worth reminding the reader that, since dynamic and static polymorphism coexist in C++, the adoption of generic programming techniques would not force Geant4 developers and users to replace object oriented methods entirely: a clever design can exploit generic and object oriented programming techniques in the same software environment according to the characteristics of the problem domain. Generic programming appears to be a promising candidate technique to support the design of the discrete simulation sector in an efficient, transparent and easily customizable way; the agile design achievable with such techniques would greatly facilitate the kernel evolution to accommodate both condensed-random-walk and discrete schemes. \section{Prototype design} The R\&D project currently elaborates a conceptual scheme for condensed and discrete simulation approaches to co-work in the same environment, and a software design capable of supporting it. This requirement implies the introduction of a new concept in the simulation: mutable physics entities (process, model or other physics-aware object), whose state and behavior depend on the environment and may evolve as an effect of it.
Such a new concept requires rethinking how the Geant4 kernel handles the interaction between tracking and processes, and represents a design challenge in a Monte Carlo software system. The introduction of the concept of mutability in physics-related objects requires the identification of their stable and mutable states and behaviour, and their fine-grained decomposition into parts capable of evolving, or remaining unchanged. The first step along this path involves the re-design of the current Geant4 electromagnetic processes \cite{lowe_chep,lowe_nss}, \cite{standard}. Processes are decomposed down to fine granularity, and objects responsible for well-identified functionality are created. The fine-grained decomposition of processes is propaedeutic to handling their stable and mutable components independently. The application of a policy-based class design \cite{alexandrescu} is currently investigated as a means to achieve the objective of granular decomposition of processes. This design technique offers various advantages in terms of flexibility of configuration and computational performance; however, its suitability for large scale physics simulation and its capability to model the evolution associated with mutable physics entities have not been fully demonstrated yet. For this purpose, a pilot project is currently in progress in the domain of photon interactions (Compton and Rayleigh scattering, photoelectric effect and photon conversion): the current Geant4 physics models are re-implemented in terms of the new design, thus allowing performance measurements as well as first-hand evaluations of the capabilities and drawbacks of the policy-based design. The design prototype has adopted a minimalist approach. A generic process acts as a host class, which is deprived of intrinsic physics functionality. Physics behavior is acquired through policy classes, respectively responsible for cross section and final state generation. A UML (Unified Modelling Language) \cite{uml} class diagram illustrates the main features of the design in Figure 1. \begin{figure*} \begin{center} \includegraphics[angle=0,width=16cm]{em_typedef.jpg} \caption{Main features of the policy-based design prototype, illustrated for Compton scattering.} \end{center} \label{fig_em_typedef} \end{figure*} The design of the policy classes reimplementing the existing physics functionality is currently focused on the adoption of well-established object oriented programming practices. Basic software requirements, like encapsulation and sharply identified object responsibilities, are enforced throughout the design. The exploration of more sophisticated software solutions, as well as possible improvements or extensions to the current physics functionality, are not a priority in the first development cycle now in progress, but greater attention could be devoted to these aspects in future development cycles, once the basic design issues have been addressed in a working prototype. \section{Preliminary results} Fully functional processes for photon interactions can be configured at the present stage in the new design by assembling fine-grained policy classes into a generic host class. A few metrics have been collected to evaluate quantitatively the possible benefits or drawbacks of the candidate software technology.
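To convey the flavor of this configuration mechanism, a schematic and deliberately simplified C++ fragment is shown below; the class and policy names are illustrative only and do not correspond to the classes of the actual prototype, whose structure is summarized in Figure 1.
\begin{verbatim}
// Schematic illustration of a policy-based process (not the actual prototype
// code): a host class without intrinsic physics functionality acquires its
// behaviour at compile time from a cross section and a final state policy.
#include <cstdio>
#include <string>

struct Track { double energy; };   // drastically simplified track

template <class CrossSectionPolicy, class FinalStatePolicy>
class GenericProcess : public CrossSectionPolicy, public FinalStatePolicy {
 public:
  explicit GenericProcess(const std::string& name) : name_(name) {}
  double MeanFreePath(const Track& t, double atomDensity) const {
    return 1.0 / (atomDensity * CrossSectionPolicy::CrossSection(t.energy));
  }
  void PostStepDoIt(Track& t) const { FinalStatePolicy::GenerateFinalState(t); }
  const std::string& Name() const { return name_; }
 private:
  std::string name_;
};

// toy policies; real policies would wrap tabulated or analytical models
struct ToyComptonCrossSection {
  static double CrossSection(double energy) { return 0.2 / (1.0 + energy); }
};
struct ToyComptonFinalState {
  static void GenerateFinalState(Track& t) { t.energy *= 0.7; }
};

// compile-time configuration, in the spirit of the typedefs of Figure 1
typedef GenericProcess<ToyComptonCrossSection, ToyComptonFinalState> ComptonProcess;

int main() {
  ComptonProcess compton("compton");
  Track gamma = {1.0};
  std::printf("%s: mean free path = %.3f\n", compton.Name().c_str(),
              compton.MeanFreePath(gamma, 2.0));
  compton.PostStepDoIt(gamma);
  std::printf("photon energy after interaction = %.3f\n", gamma.energy);
}
\end{verbatim}
Exchanging one policy for another (e.g. a different cross section model) amounts to a different typedef, with the binding resolved at compile time.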
Preliminary performance measurements in a few simple physics test cases concerning photon interactions indicate a gain on the order of 30\% with respect to equivalent physics implementations in the current Geant4 design scheme; however, it should be stressed that no effort has been invested yet into optimizing the new design prototype, nor the code implementation. The testing of basic physics components of the simulation is also greatly facilitated with respect to the current Geant4 version: since in the new design scheme they are associated with low level objects like policy classes, they can be verified and validated independently. This agility represents an improvement over the current design scheme, where a full-scale Geant4-based application is necessary to study even low-level physics entities of the simulation, like atomic cross sections or features of the final state models. A quantitative appraisal of this improvement was performed by comparing the effort needed to compare Geant4 cross sections of photon interactions against NIST reference data: this comparison, together with similar tests concerning electron stopping powers and ranges, documented in \cite{g4nist}, requires a Geant4-based application consisting of approximately 4000 lines of code in a conventional scheme of Geant4 electromagnetic physics design, whereas the test code required for the comparison of photon cross sections in the proposed policy-based class design amounts to less than 100 lines. Similarly, the simulation production associated with the results described in \cite{g4nist} required a significant investment of resources (a dedicated test application developer, a simulation production responsible and the computing resources of a PC farm), whereas the tests associated with the policy-based design can be executed in terms of minutes of human time on a laptop computer. The inter-comparisons of Geant4 physics models is also greatly facilitated; an example is shown in Figures 2 to 4, concerning Compton scattering. 
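The kind of standalone test this enables is illustrated by the following schematic fragment, in which a cross section policy is exercised directly against a handful of reference values; the policy and the numbers are placeholders and are not taken from the NIST comparison.
\begin{verbatim}
// Schematic unit test of a cross section policy in isolation (placeholder
// policy and placeholder reference values, not NIST data): no run, event or
// tracking machinery is needed.
#include <cmath>
#include <cstdio>

struct ToyCrossSectionPolicy {
  static double CrossSection(double energy) { return 0.2 / (1.0 + energy); }
};

int main() {
  const double energies[]  = {0.1, 1.0, 10.0};          // MeV
  const double reference[] = {0.1818, 0.1000, 0.0182};  // placeholder reference
  const double tolerance   = 0.05;                      // 5% agreement required

  int failures = 0;
  for (int i = 0; i < 3; ++i) {
    const double xs  = ToyCrossSectionPolicy::CrossSection(energies[i]);
    const double dev = std::fabs(xs - reference[i]) / reference[i];
    std::printf("E = %5.2f MeV: xs = %.4f, ref = %.4f, deviation = %.1f%%\n",
                energies[i], xs, reference[i], 100.0 * dev);
    if (dev > tolerance) ++failures;
  }
  std::printf("%s\n", failures == 0 ? "PASSED" : "FAILED");
  return failures == 0 ? 0 : 1;
}
\end{verbatim}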
\begin{figure} \includegraphics[angle=0,width=9cm]{crossdiff_libpen.jpg} \caption{Percent difference between Library-based \cite{lowe_chep,lowe_nss} and Penelope-like \cite{penelope} Compton scattering cross sections over the energy range 1 keV to 100 GeV; the cross sections are calculated through policy-based classes implementing the same functionality as the physics processes and models released in Geant4 9.1.} \label{fig_cross_libpen} \end{figure} \begin{figure} \includegraphics[angle=0,width=9cm]{crossdiff_libstd.jpg} \caption{Percent difference between Library-based \cite{lowe_chep,lowe_nss} and Standard \cite{standard} Compton scattering cross sections over the energy range 1 keV to 100 GeV; the cross sections are calculated through policy-based classes implementing the same functionality as the physics processes and models released in Geant4 9.1.} \label{fig_cross_libstd} \end{figure} \begin{figure} \includegraphics[angle=0,width=9cm]{doppler_14_40kev.jpg} \caption{Differential cross section of Compton scattering as a function of the ratio of the scattered photon's energy over the incident one's, for 40 keV photons impinging onto a silicon target; the distributions are calculated by policy-based classes implementing the same functionality as Library-based \cite{lowe_chep,lowe_nss} (blue), Penelope-like (red) \cite{penelope} and Standard (black) \cite{standard} physics models released in Geant4 9.1.} \label{fig_doppler} \end{figure} \section{Conclusion and outlook} An R\&D project is in progress to address the capability of handling multi-scale use cases in the same simulation environment associated with Geant4: this requirement involves the capability of handling physics processes according to different transport schemes. A propaedeutic investigation is in progress to evaluate design techniques, like generic programming, capable of supporting the main design goals of the project. A pilot project concerns the re-design of Geant4 photon interactions, to evaluate conceptual methods and design techniques suitable for larger scale application. Preliminary results on the application of policy-based design techniques indicate that significant improvement in the flexibility of the physics design is achieved along with a non-negligible improvement in execution time and facilitated verification and validation testing. At the present time no adverse effect on memory consumption has been observed in association with the prototype design. \section*{Acknowledgment} The authors thank Sergio Bertolucci, Thomas Evans, Elisabetta Gargioni, Simone Giani, Vladimir Grichine, Bernd Grosswendt, Andreas Pfeiffer, Reinhard Schulte, Manju Sudhakar and Andrew Wroe for helpful discussions. M. Kuster and S. Hauf acknowledge support by the Bundesministerium f\"ur Wirtschaft und Technologie and the Deutsches Zentrum f\"ur Luft- und Raumfahrt - DLR under the grant number 50QR0902.
\section{Introduction} One-counter automata (OCAs) are a fundamental model of infinite-state systems. Their expressive power resides between finite automata and pushdown automata. Unlike finite automata, however, OCAs are not robust: They lack closure under complementation and have an undecidable universality, equivalence, and inclusion problem \cite{Greibach69,Ibarra79}. Several directions to overcome this drawback have been taken. One may underapproximate the above decision problems in terms of bisimilarity \cite{Jancar2000} or overapproximate the system behavior by a finite-state abstraction, e.g., in terms of the downward closure or preserving the Parikh image \cite{vanLeeuwen1978,Parikh1966}. In this paper, we consider a new and simple way of obtaining a robust model of one-counter systems. Whenever the automaton produces a letter from a finite alphabet $\Sigma$, it will also output the \emph{current} counter value along with it (transitions that manipulate the counter are not concerned). Hence, its language is henceforth a subset of $(\Sigma \times \mathds{N})^\ast$. For obvious reasons, we call this variant \emph{OCAs with counter observability}. We will show that, under the observability semantics, OCAs form a robust automata model: They are closed under all boolean operations. Moreover, their universality and inclusion problem are in PSPACE and, as a simple reduction from universality for finite automata shows, PSPACE-complete. These results may come as a surprise given that universality for OCAs is undecidable and introducing counter observability seems like an extension of OCAs. But, actually, the problem becomes quite different. The fact that a priori hidden details from a run (in terms of the counter values) are revealed makes the model more robust and the decision problems easier. Note that this is also what happens in input-driven/visibly pushdown automata \cite{Mehlhorn80,Alur2009} or their restriction of visibly OCAs \cite{BaranyLS06,S:CSL:06}. They all recognize languages over a \emph{finite} alphabet and the stack/counter operation can be deduced from the letter that is read. Interestingly, our proofs establish a link between the observability semantics and visibly OCAs, which somehow explains the robustness of OCAs under the observability semantics. However, it is worth noting that revealing details from a system configuration does not always help, quite the contrary: Though timed automata and counter automata are closely related \cite{HaaseOW16}, the universality problem of timed automata is decidable only if time stamps are excluded from the semantics \cite{AD94}. Note that it is not only for the pure fact that we obtain a robust model that we consider counter observability. Counter values usually have a \emph{meaning}, such as energy level, value of a variable, or number of items yet to be produced (cf.\ Example~\ref{ex:job}). In those contexts, it is natural to include them in the semantics, just like including time stamps in timed automata. \paragraph{Related Work.} Apart from the connection with visibly OCAs, another model closely related to ours is that of \emph{strong automata} \cite{CzybaST15}. Strong automata operate on infinite alphabets and were introduced as an extension of \emph{symbolic automata} \cite{Bes08,DAntoniV14}. Essentially, a transition of a strong automaton is labeled with a formula from monadic second-order (MSO) logic over some infinite structure, say $(\mathds{N},+1)$. 
In fact, the formula has two free first-order variables so that it defines a binary relation over $\mathds{N}$. This relation is interpreted as a constraint between successive letters from the infinite alphabet. We will show that \text{OCAs}\xspace with the observability semantics and strong automata over $(\mathds{N},+1)$ (extended by a component for the finite alphabet $\Sigma$) are expressively equivalent. This underpins a certain naturalness of the observability semantics. Note that the universality and the inclusion problem have been shown decidable for strong automata over $(\mathds{N},+1)$ \cite{CzybaST15}. However, strong automata do not allow us to derive any elementary complexity upper bounds. In fact, our model can be seen as an operational counterpart of strong automata over $(\mathds{N},+1)$. \paragraph{Outline.} Section~\ref{sec:cav} defines OCAs and their different semantics. Section~\ref{sec:det} relates the observability semantics with visibly OCAs and shows that, under the new semantics, \text{OCAs}\xspace are closed under boolean operations and have a PSPACE-complete universality and inclusion problem. In Section~\ref{sec:strong-aut}, we show expressive equivalence of strong automata and OCAs with counter observability. We conclude in Section~\ref{sec:conclusion}. Omitted proofs or proof details can be found in the appendix. \section{One-Counter Automata with Counter Observability}\label{sec:cav} For $n \in \mathds{N} = \{0,1,2,\ldots\}$, we set $\set{n} := \{1,\ldots,n\}$ and $\setz{n} := \{0,1,\ldots,n\}$. Given an alphabet $\Sigma$, the set of finite words over $\Sigma$, including the empty word $\epsilon$, is denoted by $\Sigma^\ast$. \subsection{One-Counter Automata and Their Semantics} We consider ordinary one-counter automata over some nonempty finite alphabet $\Sigma$. In addition to a finite-state control and transitions that produce a letter from $\Sigma$, they have a counter that can be incremented, decremented, or tested for values up to some threshold $m \in \mathds{N}$ (as defined in \cite{BaranyLS06}). Accordingly, the set of \emph{counter operations} is $\mathsf{Op} = \{\raisebox{-0.1ex}{$\shneg$},\raisebox{-0.2ex}{$\shpos$}\}$, where $\raisebox{-0.1ex}{$\shneg$}$ stands for ``increment the counter by one'' and $\raisebox{-0.2ex}{$\shpos$}$ for ``decrement the counter by one''. A transition is of the form $(q,k,\sigma,q')$ where $q,q'$ are states, $k \in \setz{m}$ is a counter test, and $\sigma \in \Sigma \cup \mathsf{Op}$. It leads from $q$ to $q'$, while $\sigma$ either produces a letter from $\Sigma$ or modifies the counter. However, the transition can only be taken if the current counter value $x \in \mathds{N}$ satisfies $k = \min\{x,m\}$. That is, counter values can be checked against any number strictly below $m$ or for being at least $m$. In particular, if $m = 1$, then we deal with the classical definition of one-counter automata, which only allows for zero and non-zero tests. \begin{definition}[OCA, cf.\ \cite{BaranyLS06}] A \emph{one-counter automaton} (or simply OCA) is a tuple $\mathcal{A} = (Q,\Sigma,\iota,F,m,\Delta)$ where $Q$ is a finite set of \emph{states}, $\Sigma$ is a nonempty finite alphabet (disjoint from $\mathsf{Op}$), $\iota \in Q$ is the \emph{initial state}, $F \subseteq Q$ is the set of \emph{final states}, $m \in \mathds{N}$ is the \emph{threshold}, and $\Delta \subseteq Q \times \setz{m} \times (\Sigma \cup \mathsf{Op}) \times Q$ is the \emph{transition relation}. We also say that $\mathcal{A}$ is an $m$-OCA. 
Its size is defined as $|Q| + |\Sigma| + m + |\Delta|$. \end{definition} An OCA $\mathcal{A} = (Q,\Sigma,\iota,F,m,\Delta)$ can have several different semantics: \begin{itemize} \item ${L}_\mathit{oca}(\mathcal{A}) \subseteq \Sigma^\ast$ is the classical semantics when $\mathcal{A}$ is seen as an ordinary OCA. \item ${L}_\mathit{vis}(\mathcal{A}) \subseteq (\Sigma \cup \mathsf{Op})^\ast$ is the \emph{visibly semantics} where, in addition to the letters from $\Sigma$, all counter movements are made apparent. \item $L_\mathit{obs}(\mathcal{A}) \subseteq (\Sigma \times \mathds{N})^\ast$ is the \emph{semantics with counter observability} where the current counter value is output each time a $\Sigma$-transition is taken. \end{itemize} We define all three semantics in one go. Let $\Confp{\mathcal{A}} := Q \times \mathds{N}$ be the set of \emph{configurations} of $\mathcal{A}$. In a configuration $(q,x)$, $q$ is the current state and $x$ is the current counter value. The \emph{initial} configuration is $(\iota,0)$, and a configuration $(q,x)$ is \emph{final} if $q \in F$. \newcommand{\mathit{trace}}{\mathit{trace}} We determine a global transition relation ${\srelp{\mathcal{A}}} \subseteq \Confp{\mathcal{A}} \times ((\Sigma \times \mathds{N}) \cup \mathsf{Op}) \times \Confp{\mathcal{A}}$. For two configurations $(q,x),(q',x') \in \Confp{\mathcal{A}}$ and $\tau \in (\Sigma \times \mathds{N}) \cup \mathsf{Op}$, we have $(q,x) \srelpp{\mathcal{A}}{\tau\,} (q',x')$ if one of the following holds: \begin{itemize}\itemsep=0.5ex \item $\tau = \raisebox{-0.1ex}{$\shneg$}$ and $x' = x + 1$ and $(q,\min\{x,m\},\raisebox{-0.1ex}{$\shneg$},q') \in \Delta$, or \item $\tau = \raisebox{-0.2ex}{$\shpos$}$ and $x' = x - 1$ and $(q,\min\{x,m\},\raisebox{-0.2ex}{$\shpos$},q') \in \Delta$, or \item $x' = x$ and there is $a \in \Sigma$ such that $\tau = (a,x)$ and $(q,\min\{x,m\},a,q') \in \Delta$. \end{itemize} A \emph{partial run} of $\mathcal{A}$ is a sequence $\rho = (q_0,x_0) \srelpp{\mathcal{A}}{\tau_1\;} (q_1,x_1) \srelpp{\mathcal{A}}{\tau_2\;} \ldots \srelpp{\mathcal{A}}{\tau_n\;} (q_n,x_n)$, with $n \ge 0$. If, in addition, $(q_0,x_0)$ is the initial configuration, then we say that $\rho$ is a \emph{run}. We call $\rho$ \emph{accepting} if its last configuration $(q_n,x_n)$ is final. Now, the semantics of $\mathcal{A}$ that we consider depends on what we would like to extract from $\mathit{trace}(\rho) := \tau_1 \ldots \tau_n \in ((\Sigma \times \mathds{N}) \cup \mathsf{Op})^\ast$. We let (given $(a,x) \in \Sigma \times \mathds{N}$): \begin{itemize}\itemsep=0.5ex \item $\mathit{oca}((a,x)) = a$ and $\mathit{oca}(\raisebox{-0.1ex}{$\shneg$}) = \mathit{oca}(\raisebox{-0.2ex}{$\shpos$}) = \epsilon$ \item $\mathit{vis}((a,x)) = a$ and $\mathit{vis}(\raisebox{-0.1ex}{$\shneg$}) = \raisebox{-0.1ex}{$\shneg$}$ and $\mathit{vis}(\raisebox{-0.2ex}{$\shpos$}) = \raisebox{-0.2ex}{$\shpos$}$ \item $\mathit{obs}((a,x)) = (a,x)$ and $\mathit{obs}(\raisebox{-0.1ex}{$\shneg$}) = \mathit{obs}(\raisebox{-0.2ex}{$\shpos$}) = \epsilon$ \end{itemize} Moreover, we extend each such mapping $\eta \in \{\mathit{oca},\mathit{vis},\mathit{obs}\}$ to $\tau_1 \ldots \tau_n \in ((\Sigma \times \mathds{N}) \cup \mathsf{Op})^\ast$ letting $\eta(\tau_1 \ldots \tau_n) := \eta(\tau_1) \cdot \ldots \cdot \eta(\tau_n)$. Note that, hereby $u \cdot \epsilon = \epsilon \cdot u = u$ for any word $u$. Finally, we let $L_\extr(\mathcal{A}) = \{\eta(\mathit{trace}(\rho)) \mid \rho \text{ is an accepting run of } \mathcal{A}\}$. 
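To illustrate the three semantics on a concrete trace, the following small C++ fragment implements the mappings $\mathit{oca}$, $\mathit{vis}$ and $\mathit{obs}$, with the counter operations rendered as the ASCII symbols \texttt{+} and \texttt{-}; the trace in \texttt{main} is written by hand and serves only as an example.
\begin{verbatim}
// Illustration of the mappings oca, vis and obs applied to a hand-written
// trace over (Sigma x N) and the counter operations (ASCII '+' and '-').
#include <cstdio>
#include <string>
#include <vector>

struct Item {        // one element tau of a trace
  bool isOp;         // true: counter operation, false: letter with counter value
  char op;           // '+' = increment, '-' = decrement (only if isOp)
  char letter;       // letter from Sigma (only if !isOp)
  int  value;        // counter value observed together with the letter
};

std::string oca(const std::vector<Item>& tr) {
  std::string w;
  for (std::size_t i = 0; i < tr.size(); ++i)
    if (!tr[i].isOp) w += tr[i].letter;
  return w;
}

std::string vis(const std::vector<Item>& tr) {
  std::string w;
  for (std::size_t i = 0; i < tr.size(); ++i)
    w += tr[i].isOp ? tr[i].op : tr[i].letter;
  return w;
}

std::string obs(const std::vector<Item>& tr) {
  std::string w;
  for (std::size_t i = 0; i < tr.size(); ++i)
    if (!tr[i].isOp)
      w += "(" + std::string(1, tr[i].letter) + "," +
           std::to_string(tr[i].value) + ")";
  return w;
}

int main() {
  // trace of the form  + + a - b - c  with the observed counter values
  const std::vector<Item> tr = {
      {true, '+', 0, 0}, {true, '+', 0, 0}, {false, 0, 'a', 2},
      {true, '-', 0, 0}, {false, 0, 'b', 1},
      {true, '-', 0, 0}, {false, 0, 'c', 0}};
  std::printf("oca: %s\nvis: %s\nobs: %s\n",
              oca(tr).c_str(), vis(tr).c_str(), obs(tr).c_str());
}
\end{verbatim}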
\newcommand{\mathsf{req}}{\mathsf{req}} \newcommand{\mathsf{prod}}{\mathsf{prod}} \begin{figure}[t] \begin{center} \scalebox{0.9}{ \begin{gpicture} \gasset{Nframe=y,Nw=5,Nh=5,Nmr=8,ilength=4,flength=4,AHangle=30} \node[Nmarks=i](q0)(0,0){$q_0$} \node(q1)(25,0){$q_1$} \node(q2)(50,0){$q_2$} \node[Nmarks=r](q3)(75,0){$q_3$} \drawedge(q0,q1){$\mathord{\ge} 1\,|\,\mathsf{req}$} \drawloop[ELside=r,loopCW=n,loopdiam=4,loopangle=90](q0){$\raisebox{-0.1ex}{$\shneg$}$} \drawedge[curvedepth=2,ELside=l](q1,q2){$\raisebox{-0.2ex}{$\shpos$}$} \drawedge[curvedepth=2,ELside=l](q2,q1){$\mathord{\ge} 1\,|\,\mathsf{prod}$} \drawedge(q2,q3){$\mathord{=}0\,|\,\mathsf{prod}$} \end{gpicture} } \end{center} \caption{A one-counter automaton with threshold $m = 1$\label{fig:cav}} \end{figure} \begin{example}\label{ex:job} Consider the 1-OCA $\mathcal{A}$ from Figure~\ref{fig:cav} over $\Sigma=\{\mathsf{req},\mathsf{prod}\}$. For readability, counter tests $0$ and $1$ are written as $\mathord{=}0$ and $\mathord{\ge} 1$, respectively. A transition without counter test stands for two distinct transitions, one for $\mathord{=}0$ and one for $\mathord{\ge} 1$ (i.e., the counter value may actually be arbitrary). When looking at the semantics $L_\mathit{obs}(\mathcal{A})$, i.e., with counter observability, we can think of $(\mathsf{req},n)$ signalizing that the production of $n \ge 1$ items is required (where $n$ is the current counter value). Moreover, $\mathsf{prod}$ indicates that an item has been produced so that, along a run, the counter value represents the number of items yet to be produced. It is thus natural to include it in the semantics. Concretely, we have \begin{itemize}\itemsep=0.5ex \item ${L}_\mathit{oca}(\mathcal{A}) = \{\mathsf{req}\; \mathsf{prod}^n \mid n \ge 1\}$ \item ${L}_\mathit{vis}(\mathcal{A}) = \{\raisebox{-0.1ex}{$\shneg$}^n \mathsf{req}\, (\raisebox{-0.2ex}{$\shpos$}\, \mathsf{prod})^n \mid n \ge 1\}$ \item $L_\mathit{obs}(\mathcal{A}) = \{(\mathsf{req},n)(\mathsf{prod},n-1)(\mathsf{prod},n-2) \ldots (\mathsf{prod},0) \mid n \ge 1\}$ \end{itemize} Apparently, ${L}_\mathit{vis}(\mathcal{A})$ and $L_\mathit{obs}(\mathcal{A})$ are the only meaningful semantics in the context described above. \end{example} \newcommand{\Gamma}{\Gamma} \begin{remark} Visibly OCAs \cite{BaranyLS06,S:CSL:06} usually allow for general input alphabets $\Gamma$, which are partitioned into $\Gamma = \Gamma_\textup{inc} \uplus \Gamma_\textup{dec} \uplus \Gamma_\textup{nop}$ so that every $\gamma \in \Gamma$ is associated with a unique counter operation (or ``no counter operation'' if $\gamma \in \Gamma_\textup{nop})$. In fact, we consider here (wrt.\ the visibly semantics) a particular case where $\Gamma = \Sigma \cup \mathsf{Op}$ with $\Gamma_\textup{inc} = \{\raisebox{-0.1ex}{$\shneg$}\}$, $\Gamma_\textup{dec} = \{\raisebox{-0.2ex}{$\shpos$}\}$, and $\Gamma_\textup{nop} = \Sigma$. \end{remark} \subsection{Standard Results for OCAs} Let us recall some well-known results for classical OCAs and visibly OCAs. For $\eta \in \{\mathit{oca},\mathit{vis},\mathit{obs}\}$, the \emph{nonemptiness problem for OCAs} wrt.\ the $\eta$-semantics is defined as follows: Given an \text{OCA}\xspace $\mathcal{A}$, do we have $L_\extr(\mathcal{A}) \neq \emptyset$\,? Of course, this reduces to a reachability problem that is independent of the actual choice of the semantics: \begin{theorem}[\!\!\cite{Valiant1975}]\label{thm:emptiness} The nonemptiness problem for OCAs is NL-complete, wrt.\ any of the three semantics. 
\end{theorem} However, the universality (and, therefore, inclusion) problem for classical OCAs is undecidable: \begin{theorem}[\hspace{-0.05em}\cite{Greibach69,Ibarra79}] The following problem is undecidable: Given an OCA $\mathcal{A}$ with alphabet $\Sigma$, do we have ${L}_\mathit{oca}(\mathcal{A}) = \Sigma^\ast$\,? \end{theorem} In this paper, we show that universality and inclusion are decidable when considering counter observability. To do so, we make use of the theory of the visibly semantics. Concretely, we exploit determinizability as well as closure under complementation and intersection. In fact, the following definition of determinism only makes sense for the visibly semantics, but we will see later that a subclass of deterministic OCAs gives a natural notion of determinism for the observability semantics as well. \begin{definition}[deterministic OCA]\label{def:doca} An OCA $\mathcal{A} = (Q,\Sigma,\iota,F,m,\Delta)$ is called \emph{deterministic} (dOCA or $m$-dOCA) if, for all $(q,k,\sigma) \in Q \times \setz{m} \times (\Sigma \cup \mathsf{Op})$, there is exactly one $q' \in Q$ such that $(q,k,\sigma,q') \in \Delta$. In that case, $\Delta$ represents a (total) function $\delta: Q \times \setz{m} \times (\Sigma \cup \mathsf{Op}) \to Q$ so that we rather consider $\mathcal{A}$ to be the tuple $(Q,\Sigma,\iota,F,m,\delta)$. \end{definition} A powerset construction like for finite automata can be used to determinize OCAs wrt.\ the visibly semantics \cite{BaranyLS06}. That construction also preserves the two other semantics. However, Definition~\ref{def:doca} only guarantees uniqueness of runs for words from $(\Sigma \cup \mathsf{Op})^\ast$. That is, for complementation, we have to restrict to the visibly semantics (cf.\ Lemma~\ref{lem:compl} below). \begin{lemma}[cf.\ \cite{BaranyLS06}]\label{lem:det} Let $\mathcal{A}$ be an $m$-OCA. There is an $m$-dOCA $\dA{\mathcal{A}}$ of exponential size such that $L_\extr(\dA{\mathcal{A}}) = L_\extr(\mathcal{A})$ for all $\eta \in \{\mathit{oca},\mathit{vis},\mathit{obs}\}$. \end{lemma} A (visibly) dOCA can be easily complemented wrt.\ the set of \emph{well-formed words} $\mathsf{WF}_\Sigma := \{w \in (\Sigma \cup \mathsf{Op})^\ast \mid$ no prefix of $w$ contains more $\raisebox{-0.2ex}{$\shpos$}$'s than $\raisebox{-0.1ex}{$\shneg$}$'s$\}$. In fact, for all OCAs $\mathcal{A}$ with alphabet $\Sigma$, we have ${L}_\mathit{vis}(\mathcal{A}) \subseteq \mathsf{WF}_\Sigma$. \begin{lemma}\label{lem:compl} Let $\mathcal{A}=(Q,\Sigma,\iota,F,m,\delta)$ be a dOCA and define $\cA{\mathcal{A}}$ as the dOCA $(Q,\Sigma,\iota,Q \setminus F,m,\delta)$. Then, ${L}_\mathit{vis}(\cA{\mathcal{A}}) = \mathsf{WF}_\Sigma \setminus {L}_\mathit{vis}(\mathcal{A})$. \end{lemma} Finally, visibility of counter operations allows us to simulate two OCAs in sync by a straightforward product construction (cf.\ Appendix~\ref{app:prod}): \begin{lemma}[cf.\ \cite{Alur2009}]\label{lem:prod} Let $\mathcal{A}_1$ be an $m_1$-OCA and $\mathcal{A}_2$ be an $m_2$-OCA over the same alphabet. There is a $\max\{m_1,m_2\}$-OCA $\mathcal{A}_1 \times \mathcal{A}_2$ of polynomial size such that ${L}_\mathit{vis}(\mathcal{A}_1 \times \mathcal{A}_2) = {L}_\mathit{vis}(\mathcal{A}_1) \cap {L}_\mathit{vis}(\mathcal{A}_2)$. Moreover, if $\mathcal{A}_1$ and $\mathcal{A}_2$ are deterministic, then so is $\mathcal{A}_1 \times \mathcal{A}_2$. 
\end{lemma} \newcommand{\modform}[2]{#1 + #2\mathds{N}} \newcommand{Q_\mathsf{sink}}{Q_\mathsf{sink}} \newcommand{f}{f} \begin{figure}[t] \begin{center} \scalebox{1}{ \!\!\!\!\!\!\! \begin{gpicture} \unitlength=0.3mm \gasset{Nframe=n,Nw=4.8,Nh=4.8,Nmr=8,ilength=4,flength=4,AHlength=5.5,AHLength=6,AHangle=25} \node(cv)(0,63){\scalebox{0.7}{counter}} \node(cv)(0,56){\scalebox{0.7}{value}} \drawline[AHnb=1](0,0)(0,50) \drawline[AHnb=1](0,0)(298,0) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,10)(290,10) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,20)(290,20) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,30)(290,30) \drawline[AHnb=0,dash={1.5}0](0,0)(10,10)(20,0)(30,10)(40,20)(50,30)(60,20)(70,10)(80,0)(90,10)(100,20)(110,10)(120,20)(130,30)(140,20)(150,30)(160,40)(170,30)(180,20)(190,30)(200,30)(210,20)(220,30)(230,20)(240,10)(250,0)(260,10)(270,10)(280,20)(290,30) \drawline[AHnb=0,linewidth=1.4](0,0)(10,10) \drawline[AHnb=0,linewidth=1.4](30,10)(40,20) \drawline[AHnb=0,linewidth=1.4](120,20)(130,30) \drawline[AHnb=0,linewidth=1.4](190,30)(200,30) \drawline[AHnb=0,linewidth=1.4](220,30)(230,20) \drawline[AHnb=0,linewidth=1.4](230,20)(240,10) \drawline[AHnb=0,linewidth=1.4](260,10)(270,10) \node(a)(195,34.5){\scalebox{0.8}{$a_1$}} \node(a)(265,14.7){\scalebox{0.8}{$a_2$}} \node(x)(-6,30){\scalebox{0.8}{$x_1$}} \node(x)(-6,10){\scalebox{0.8}{$x_2$}} \node(q)(0,-7){\scalebox{0.7}{$\iota$}} \node(q)(10,-7){\scalebox{0.7}{$p_1$}} \node(q)(40,-7){\scalebox{0.7}{$p_2$}} \node(q)(120,-5){\scalebox{0.7}{$p_2'$}} \node(q)(130,-7){\scalebox{0.7}{$p_3$}} \node(q)(200,-7){\scalebox{0.7}{$p_4$}} \node(q)(230,-7){\scalebox{0.7}{$p_5$}} \node(q)(245,-7){\scalebox{0.7}{$p_6$}} \node(q)(275,-7){\scalebox{0.7}{$p_7$}} \node(q)(290,-7){\scalebox{0.7}{$p_8$}} \node(f)(80,45){\scalebox{0.85}{$2 \models \phi_{p_2,p_2'}$}} \node(f)(250,45){\scalebox{0.85}{$1 \models \psi_{p_7,p_8} \wedge \psi_{q_7,q_8}$}} \put(0,-70){ \drawline[AHnb=1](0,0)(0,50) \drawline[AHnb=1](0,0)(298,0) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,10)(290,10) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,20)(290,20) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,30)(290,30) \drawline[AHnb=0,dash={1.5}0](0,0)(10,10)(20,0)(30,10)(40,20)(50,10)(60,20)(70,10)(80,0)(90,10)(100,20)(110,10)(120,0)(130,10)(140,20)(150,30)(160,40)(170,30)(180,30)(190,20)(200,30)(210,20)(220,30)(230,20)(240,10)(250,20)(260,10)(270,10)(280,20)(290,10) \drawline[AHnb=0,linewidth=1.4](20,0)(30,10) \drawline[AHnb=0,linewidth=1.4](90,10)(100,20) \drawline[AHnb=0,linewidth=1.4](140,20)(150,30) \drawline[AHnb=0,linewidth=1.4](170,30)(180,30) \drawline[AHnb=0,linewidth=1.4](200,30)(210,20) \drawline[AHnb=0,linewidth=1.4](230,20)(240,10) \drawline[AHnb=0,linewidth=1.4](260,10)(270,10) \node(a)(175,34.5){\scalebox{0.8}{$a_1$}} \node(a)(265,14.5){\scalebox{0.8}{$a_2$}} \node(x)(-6,30){\scalebox{0.8}{$x_1$}} \node(x)(-6,10){\scalebox{0.8}{$x_2$}} \node(q)(0,-6){\scalebox{0.7}{$\iota$}} \node(q)(30,-6){\scalebox{0.7}{$q_1$}} \node(q)(100,-6){\scalebox{0.7}{$q_2$}} \node(q)(150,-6){\scalebox{0.7}{$q_3$}} \node(q)(180,-6){\scalebox{0.7}{$q_4$}} \node(q)(210,-6){\scalebox{0.7}{$q_5$}} \node(q)(240,-6){\scalebox{0.7}{$q_6$}} \node(q)(270,-6){\scalebox{0.7}{$q_7$}} \node(q)(290,-6){\scalebox{0.7}{$q_8$}} } \put(330,-40){ \drawline[AHnb=1](0,0)(0,50) \drawline[AHnb=1](0,0)(105,0) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,13)(100,13) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,27)(100,27) \drawline[AHnb=0,linewidth=0.1,linecolor=red](0,40)(100,40) 
\drawline[AHnb=0,linewidth=1.4](0,0)(12,12) \drawline[AHnb=0,linewidth=1.4](14,14)(26,26) \drawline[AHnb=0,linewidth=1.4](28,28)(40,40) \drawline[AHnb=0,linewidth=1.4](42,41)(54,41) \drawline[AHnb=0,linewidth=1.4](56,40)(68,28) \drawline[AHnb=0,linewidth=1.4](70,26)(82,14) \drawline[AHnb=0,linewidth=1.4](84,13)(96,13) \node(a)(48.5,45.5){\scalebox{0.8}{$a_1$}} \node(a)(90.2,18.3){\scalebox{0.8}{$a_2$}} \node(x)(-6,40){\scalebox{0.8}{$x_1$}} \node(x)(-6,14){\scalebox{0.8}{$x_2$}} \node(vdots)(48,-22){\scalebox{0.7}{$\vdots$}} \node(q)(0,-6){\scalebox{0.7}{$\iota$}} \node(q)(13,-6){\scalebox{0.7}{$p_1$}} \node(q)(27,-6){\scalebox{0.7}{$p_2$}} \node(q)(41,-6){\scalebox{0.7}{$p_3$}} \node(q)(55,-6){\scalebox{0.7}{$p_4$}} \node(q)(69,-6){\scalebox{0.7}{$p_5$}} \node(q)(83,-6){\scalebox{0.7}{$p_6$}} \node(q)(97,-6){\scalebox{0.7}{$p_7$}} \node(q)(13,-14){\scalebox{0.7}{$q_1$}} \node(q)(27,-14){\scalebox{0.7}{$q_2$}} \node(q)(41,-14){\scalebox{0.7}{$q_3$}} \node(q)(55,-14){\scalebox{0.7}{$q_4$}} \node(q)(69,-14){\scalebox{0.7}{$q_5$}} \node(q)(83,-14){\scalebox{0.7}{$q_6$}} \node(q)(97,-14){\scalebox{0.7}{$q_7$}} \node(arrow)(-22,30){\scalebox{1}{$\searrow$}} \node(arrow)(-22,10){\scalebox{1}{$\nearrow$}} \drawline[AHnb=0,linewidth=0.1,linecolor=blue](13,13)(13,0) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](27,27)(27,0) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](41,41)(41,0) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](55,41)(55,0) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](69,27)(69,0) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](83,13)(83,0) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](97,13)(97,0) } \drawline[AHnb=0,linewidth=0.1,linecolor=blue](10,10)(30,-60) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](40,20)(100,-50) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](130,30)(150,-40) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](200,30)(180,-40) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](230,20)(210,-50) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](240,10)(240,-60) \drawline[AHnb=0,linewidth=0.1,linecolor=blue](270,10)(270,-60) \end{gpicture} } \end{center} \caption{Decompositions of two runs on $(a_1,3)(a_2,1)$, and corresponding runs in normal form\label{fig:stairs}} \end{figure} \section{Determinizing and Complementing \text{OCAs}\xspace}\label{sec:det} In this section, we will show that, under the \emph{observability semantics}, OCAs are effectively closed under all boolean operations. The main ingredient of the proof is a determinization procedure, which we first describe informally. Let $\mathcal{A} = (Q,\Sigma,\iota,F,m,\Delta)$ be the \text{OCA}\xspace to be determinized (wrt.\ the observability semantics). Moreover, let $w =(a_1,x_1) \ldots (a_n,x_n) \in (\Sigma \times \mathds{N})^\ast$. Every run $\rho$ of $\mathcal{A}$ such that $\mathit{obs}(\mathit{trace}(\rho)) = w$ has to have reached the counter value $x_1$ by the time it reads the first letter $a_1$. In particular, it has to perform at least $x_1$ counter increments. In other words, we can identify $x_1$ transitions that lift the counter value from $0$ to $1$, from $1$ to $2$, and, finally, from $x_1-1$ to $x_1$, respectively, and that are separated by partial runs that ``oscillate'' around the current counter value but, at the end, return to their original level. Similarly, before reading the second letter $a_2$, $\mathcal{A}$ will perform $|x_2 - x_1|$-many \emph{identical} counter operations to reach $x_2$, again separated by some oscillation phases, and so on. 
This is illustrated on the left hand side of Figure~\ref{fig:stairs} for two runs on the word $(a_1,3)(a_2,1)$. We will transform $\mathcal{A}$ into another automaton that decomposes a run into oscillations and increment/decrement/letter transitions, but, in fact, abstracts away oscillations. Thus, the automaton starts in an increasing mode and goes straight to the value $x_1$. Once it reads letter $a_1$, it may go into an increasing or decreasing mode, and so on. Observe that a run $\rho$ of this new automaton is in a sort of normal form as illustrated on the right hand side of Figure~\ref{fig:stairs}. The crux is that $\mathit{vis}(\mathit{trace}(\rho))$ is a \emph{unique} encoding of $w$: Of course, it determines the counter values output when a letter is read; and it is unique, since it continues incrementing (decrementing, respectively) until a letter is read. This normalization and encoding finally allows us to apply known results on visibly one-counter automata for determinization and complementation. There is a little issue here, since the possibility of performing an oscillation leading from $p_2$ to $p_2'$ (cf.\ left hand side of Figure~\ref{fig:stairs}) depends on the current counter value. However, it was shown in \cite{GMWT-lics09} that the set of counter values allowing for such a shortcut can be described as a boolean combination of arithmetic progressions that can be computed in polynomial time. We will, therefore, work with an extended version of \text{OCAs}\xspace that includes arithmetic-progression tests (but is no more expressive, as we show afterwards). The outline of this section is as follows: We present extended OCAs in Section~\ref{sec:ecav} and the link between the observability and the visibly semantics in Section~\ref{sec:obs-vis}. In Section~\ref{sec:univ}, we solve the universality and inclusion problem for OCAs wrt.\ the observability semantics. \subsection{Extended One-Counter Automata}\label{sec:ecav} While OCAs can only test a counter value up to some threshold, \emph{extended \text{OCAs}\xspace} have access to boolean combinations of modulo constraints. The set $\Guards^\textup{mod}$ is given by the grammar $\phi ::= \modform{c}{d} \mid \neg\phi \mid \phi \wedge \phi \mid \phi \vee \phi$ where $c,d \in \mathds{N}$. We call $\modform{c}{d}$ an \emph{arithmetic-progression formula} and assume that $c$ and $d$ are encoded in unary. For $x \in \mathds{N}$ (a counter value), we define $x \models \modform{c}{d}$ if $x = c + d \cdot i$ for some $i \in \mathds{N}$. Thus, we may use $\mathit{true}$ as an abbreviation for $\modform{0}{1}$. The other formulas are interpreted as expected. Moreover, given $\phi \in \Guards^\textup{mod}$, we set $\sem{\phi} := \{x \in \mathds{N} \mid x \models \phi\}$. Before we introduce extended OCAs, we will state a lemma saying that the ``possibility'' of a shortcut in terms of an oscillation (see above) is definable in $\Guards^\textup{mod}$. Let $\mathcal{A}=(Q,\Sigma,\iota,F,m,\Delta)$ be an \text{OCA}\xspace and $p,q \in Q$. By $\smash{\URel{p}{q}{\mathcal{A}}}$, we denote the set of natural numbers $x \in \mathds{N}$ such that $(p,x) \mathrel{(\srelpp{\mathcal{A}}{\raisebox{-0.1ex}{$\scriptstyle\shneg$}\;} \cup \srelpp{\mathcal{A}}{\raisebox{-0.2ex}{$\scriptstyle\shpos$}\;})^\ast} (q,x)$, i.e., there is a partial run from $(p,x)$ to $(q,x)$ that performs only counter operations. 
Moreover, we define $Y_{p,q}^\mathcal{A}$ to be the set of natural numbers $x \in \mathds{N}$ such that $(p,x) \mathrel{(\srelpp{\mathcal{A}}{\raisebox{-0.1ex}{$\scriptstyle\shneg$}\;} \cup \srelpp{\mathcal{A}}{\raisebox{-0.2ex}{$\scriptstyle\shpos$}\;})^\ast} (q,x')$ for some $x' \in \mathds{N}$. Note that $\smash{\URel{p}{q}{\mathcal{A}}} \subseteq Y_{p,q}^\mathcal{A}$. The following result is due to \cite[Lemmas~6--9]{GMWT-lics09}: \begin{lemma}[\hspace{-0.05em}\cite{GMWT-lics09}]\label{lem:rpq} Let $\mathcal{A}=(Q,\Sigma,\iota,F,m,\Delta)$ be an \text{OCA}\xspace and $p,q \in Q$. We can compute, in polynomial time, guards $\phi_{p,q},\psi_{p,q} \in \Guards^\textup{mod}$ such that $\sem{\phi_{p,q}} = \URel{p}{q}{\mathcal{A}}$ and $\sem{\psi_{p,q}} = Y_{p,q}^{\mathcal{A}}$. In particular, the constants in $\phi_{p,q}$ and $\psi_{p,q}$ are all polynomially bounded. \end{lemma} \begin{definition}[extended OCA]\label{def:eoco} An \emph{extended OCA (eOCA)} is a tuple $\mathcal{A}=(Q,\Sigma,\iota,f,\Delta)$ where $Q,\Sigma,\iota$ are like in an OCA, $f: Q \to \Guards^\textup{mod}$ is the \emph{acceptance condition}, and $\Delta$ is the \emph{transition relation}: a \emph{finite} subset of $Q \times \Guards^\textup{mod} \times (\Sigma \mathrel{\cup} \mathsf{Op}) \times Q$ \end{definition} Runs and the languages $L_\eta(\mathcal{A})$, with $\eta \in \{\mathit{oca},\mathit{vis},\mathit{obs}\}$, of an \text{eOCA}\xspace $\mathcal{A}=(Q,\Sigma,\iota,f,\Delta)$ are defined very similarly to OCAs. In fact, there are only two (slight) changes: \begin{enumerate}\itemsep=0.5ex \item The definition of ${\srelp{\mathcal{A}}} \subseteq \Confp{\mathcal{A}} \times ((\Sigma \times \mathds{N}) \cup \mathsf{Op}) \times \Confp{\mathcal{A}}$ is now as follows: For $(q,x),(q',x') \in \Confp{\mathcal{A}}$ and $\tau \in (\Sigma \times \mathds{N}) \cup \mathsf{Op}$, we have $(q,x) \srelpp{\mathcal{A}}{\tau\;} (q',x')$ if there is $\phi \in \Guards^\textup{mod}$ such that $x \models \phi$ and one of the following holds:\vspace{0.5ex} \begin{itemize \item $\tau = \raisebox{-0.1ex}{$\shneg$}$ and $x' = x + 1$ and $(q,\phi,\raisebox{-0.1ex}{$\shneg$},q') \in \Delta$, or \item $\tau = \raisebox{-0.2ex}{$\shpos$}$ and $x' = x - 1$ and $(q,\phi,\raisebox{-0.2ex}{$\shpos$},q') \in \Delta$, or \item $x' = x$ and there is $a \in \Sigma$ such that $\tau = (a,x)$ and $(q,\phi,a,q') \in \Delta$. \end{itemize} \item A run is now \emph{accepting} if its last configuration $(q,x)$ is such that $x \models f(q)$. \end{enumerate} Apart from these modifications, the languages ${L}_\mathit{oca}(\mathcal{A})$, ${L}_\mathit{vis}(\mathcal{A})$, and $L_\mathit{obs}(\mathcal{A})$ are defined in exactly the same way as for OCAs. \subsection{From OCAs with Counter Observability to Visibly OCAs} \label{sec:obs-vis} \newcommand{\enc}[1]{\mathsf{enc}(#1)} \newcommand{\Encs}[1]{\mathsf{Enc}_{#1}} \newcommand{\mathsf{enc}}{\mathsf{enc}} \newcommand{\lambda}{\lambda} To establish a link between the observability and the visibly semantics, we will encode a word $w = (a_1,x_1)(a_2,x_2) \ldots (a_n,x_n) \in (\Sigma \times \mathds{N})^\ast$ as a word $\enc{w} \in (\Sigma \cup \mathsf{Op})^\ast$ as follows: \[\enc{w} := \raisebox{-0.1ex}{$\shneg$}^{x_1} a_1\, \textup{sign}(x_2-x_1)^{|x_2-x_1|}\, a_2 \,\ldots\, \textup{sign}(x_n-x_{n-1})^{|x_n - x_{n-1}|}\, a_n\] where, for an integer $z \in \mathds{Z}$, we let $\textup{sign}(z) = \raisebox{-0.1ex}{$\shneg$}$ if $z \ge 0$, and $\textup{sign}(z) = \raisebox{-0.2ex}{$\shpos$}$ if $z < 0$. 
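A minimal Python rendering of this encoding (again an illustration only, with '+' and '-' standing for the two counter operations) reads:
\begin{verbatim}
def enc(w):
    """Encode w = [(a1,x1),...,(an,xn)] over Sigma x N as a word over
    Sigma together with the counter operations '+' and '-'."""
    out, prev = [], 0
    for (a, x) in w:
        op = '+' if x >= prev else '-'
        out.extend([op] * abs(x - prev))   # sign(x - prev)^{|x - prev|}
        out.append(a)
        prev = x
    return out

# Matches the example given next in the text:
# enc([('a',5), ('b',2), ('c',4)])
#   == ['+']*5 + ['a'] + ['-']*3 + ['b'] + ['+']*2 + ['c']
\end{verbatim}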
For example, $\enc{\epsilon} = \epsilon$ and $\enc{(a,5)(b,2)(c,4)} = \raisebox{-0.1ex}{$\shneg$}^5 a \,\raisebox{-0.2ex}{$\shpos$}^3 b \,\raisebox{-0.1ex}{$\shneg$}^2 c$. The mapping $\mathsf{enc}$ is extended to sets $L \subseteq (\Sigma \times \mathds{N})^\ast$ by $\enc{L} = \{\enc{w} \mid w \in L\}$. Let $\Encs{\Sigma} := \enc{(\Sigma \times \mathds{N})^\ast}$ denote the set of \emph{valid encodings}. Note that $\mathsf{enc}$ is a \emph{bijection} between $(\Sigma \times \mathds{N})^\ast$ and $\Encs{\Sigma}$ and that the latter is the set of words of the form $w = u_1 a_1 u_2 a_2 \ldots u_n a_n$ where $a_i \in \Sigma$ and $u_i \in \{\raisebox{-0.1ex}{$\shneg$}\}^\ast \cup \{\raisebox{-0.2ex}{$\shpos$}\}^\ast$ for all $i \in \{1,\ldots,n\}$, and such that no prefix of $w$ contains more $\raisebox{-0.2ex}{$\shpos$}$'s than $\raisebox{-0.1ex}{$\shneg$}$'s. Obviously, there is a small dOCA whose visibly semantics is $\Encs{\Sigma}$ (see Appendix~\ref{app:benc}). It will be needed later for complementation of OCAs wrt.\ the observability semantics. \begin{lemma}\label{lem:benc} There is a $0$-dOCA $\B_\mathsf{enc}$ with only four states such that ${L}_\mathit{vis}(\B_\mathsf{enc}) = \Encs{\Sigma}$. \end{lemma} In fact, there is a tight link between the visibly and the observability semantics of OCAs provided the visibly semantics is in a certain normal form (cf.\ Appendix~\ref{app:enc} for the proof): \begin{lemma}\label{lem:enc} Let $\mathcal{A}$ be an OCA with alphabet $\Sigma$ such that ${L}_\mathit{vis}(\mathcal{A}) \subseteq \Encs{\Sigma}$. Then, we have ${L}_\mathit{vis}(\mathcal{A}) = \enc{L_\mathit{obs}(\mathcal{A})}$ and, equivalently, $L_\mathit{obs}(\mathcal{A}) = \mathsf{enc}^{-1}({L}_\mathit{vis}(\mathcal{A}))$. \end{lemma} Lemmas~\ref{lem:prod} and \ref{lem:enc} imply the following closure property: \begin{proposition}\label{prop:intersect} Let $\mathcal{A}_1$ and $\mathcal{A}_2$ be OCAs over $\Sigma$ such that ${L}_\mathit{vis}(\mathcal{A}_1) \subseteq \Encs{\Sigma}$ and ${L}_\mathit{vis}(\mathcal{A}_2) \subseteq \Encs{\Sigma}$. Then, $L_\mathit{obs}(\mathcal{A}_1 \times \mathcal{A}_2) = L_\mathit{obs}(\mathcal{A}_1) \cap L_\mathit{obs}(\mathcal{A}_2)$ (where $\mathcal{A}_1 \times \mathcal{A}_2$ is due to Lemma~\ref{lem:prod}). \end{proposition} The next lemma constitutes the main ingredient of the determinization procedure. It will eventually allow us to rely on OCAs whose visibly semantics consists only of valid encodings. \begin{lemma}\label{lem:eoca} Let $\mathcal{A}$ be an OCA. We can compute, in polynomial time, an eOCA $\eA{\mathcal{A}}$ such that $L_\mathit{obs}(\eA{\mathcal{A}}) = L_\mathit{obs}(\mathcal{A})$ and, for all $w \in L_\mathit{obs}(\eA{\mathcal{A}})$, we have $\enc{w} \in {L}_\mathit{vis}(\eA{\mathcal{A}})$. \end{lemma} \begin{proof} Suppose $\mathcal{A} = (Q,\Sigma,\iota,F,m,\Delta)$ is the given OCA. We first translate a simple ``threshold constraint'' into an arithmetic expression that can be used as a guard in the eOCA $\eA{\mathcal{A}}$: Let $\kform{m} = \modform{m}{1}$, and $\kform{k} = \modform{k}{0}$ for all $k \in \{0,\ldots,m-1\}$. We define $\eA{\mathcal{A}} = (Q,\Sigma,\iota,f,\Delta')$ as follows: Essentially, $\eA{\mathcal{A}}$ simulates $\mathcal{A}$ so that it has the same state space. 
However, when $\mathcal{A}$ allows for a shortcut (oscillation) from state $p$ to state $q$ (which will be checked in terms of $\phi_{p,q}$ from Lemma~\ref{lem:rpq}) and there is a transition $(q,k,\sigma,q')$ of $\mathcal{A}$, then $\eA{\mathcal{A}}$ may perform $\sigma$ and go directly from $p$ to $q'$, provided $\kform{k}$ is satisfied as well. Formally, the transition relation is given as $\Delta' = \{(p,\phi_{p,q} \wedge \kform{k},\sigma,q') ~\mid~ p \in Q \text{ and } (q,k,\sigma,q') \in \Delta\}$. Moreover, a configuration $(p,x)$ is ``final'' in $\eA{\mathcal{A}}$ if the current counter value $x$ satisfies $\psi_{p,q}$ for some $q \in F$ (cf.\ Lemma~\ref{lem:rpq}). That is, for all $p \in Q$, we let $f(p) = \bigvee_{q \in F} \psi_{p,q}$. The correctness proof of the construction can be found in Appendix~\ref{app:eoca}. \end{proof} To transform an eOCA back into an ordinary OCA while determinizing it and preserving its observability semantics, we will need a dOCA that takes care of the modulo constraints: \begin{lemma}\label{lem:BC} Let $\Omega \subseteq \Guards^\textup{mod}$ be a nonempty finite set. Set $m_\Omega := \max\{c \mid \modform{c}{d}$ is an atomic subformula of some $\phi \in \Omega\}+2$. There are a dOCA $\mathcal{B}_\Omega=(Q,\Sigma,\iota,Q,m_\Omega,\delta)$ of exponential size and $\lambda: Q \to 2^\Omega$ such that, for all $(q,x) \in \mathit{Conf}_{\mathcal{B}_\Omega}$ and all runs of $\mathcal{B}_\Omega$ ending in $(q,x)$, we have $\lambda(q) = \{\phi \in \Omega \mid x \models \phi\}$. \end{lemma} \newcommand{\incmap}[2]{\delta_{#1,#2}^\raisebox{-0.1ex}{$\shneg$}} \newcommand{\decmap}[2]{\delta_{#1,#2}^\raisebox{-0.2ex}{$\shpos$}} \newcommand{\pmap}[3]{\delta_{#1,#2}^{#3}} \begin{proof} We sketch the idea. The complete construction is given in Appendix~\ref{app:BC}. For every arithmetic-progression formula $\modform{c}{d}$ that occurs in $\Omega$ (for simplicity, let us assume $d \ge 1$), we introduce a state component $\{0,1,\ldots,c\} \times \{0,1,\ldots,d-1\}$. Increasing the counter, we increment the first component until $c$ and then count modulo $d$ in the second. We proceed similarly when decreasing the counter. The current state $(x,y) \in \setz{c} \times \setz{d-1}$ will then tell us whether $\modform{c}{d}$ holds, namely iff $x = c$ and $y = 0$. Finally, the mapping $\lambda$ evaluates a formula based on the outcome for its atomic subformulas. Note that the size of $\mathcal{B}_\Omega$ is exponential in the number of arithmetic-progression formulas that occur in $\Omega$. \end{proof} We will now apply Lemma~\ref{lem:BC} to transform an eOCA into a dOCA (cf.\ also Lemma~\ref{lem:det}). \begin{lemma}\label{thm:oca-to-doca} Let $\mathcal{A}$ be an eOCA. We can compute, in exponential time, a dOCA $\mathcal{A}'$ (deterministic according to Definition~\ref{def:doca}) such that $L_\extr(\mathcal{A}') = L_\extr(\mathcal{A})$ for all $\eta \in \{\mathit{oca},\mathit{vis},\mathit{obs}\}$. \end{lemma} \begin{proof} Suppose $\mathcal{A}=(Q,\Sigma,\iota,f,\Delta)$ is the given eOCA. Let $\Omega \subseteq \Guards^\textup{mod}$ be the set of formulas that occur in $\Delta$ or $f$, and let $\mathcal{B}_\Omega=(\hat Q,\Sigma,\hat\iota,\hat Q,m_\Omega,\hat\delta)$ be the dOCA along with the function $\lambda$ according to Lemma~\ref{lem:BC}. We build the dOCA ${\mathcal{A}'} = (Q',\Sigma,{\iota'},{F'},m_\Omega,{\delta'})$ as follows. Essentially, we perform a simple powerset construction for $\mathcal{A}$. 
Moreover, to eliminate modulo guards, we run $\mathcal{B}_\Omega$ in parallel. Thus, the set of states is $Q' = 2^Q \times \hat Q $, with initial state $\iota' = (\{\iota\},\hat\iota)$ and set of final states $F' = \{(P,q) \in Q' \mid f(p) \in \lambda(q)$ for some $p \in P\}$. Finally, the transition function is given by $\delta'((P,q),k,\sigma) = (P',\hat\delta(q,k,\sigma))$ where $P' = \bigl\{p' \mid (p,\phi,\sigma,p') \in \Delta \cap (P \times \lambda(q) \times \{\sigma\} \times Q)\bigr\}$. For the correctness proof, we refer to Appendix~\ref{app:oca-to-doca}. \end{proof} There is a ``nondeterministic version'' of Lemma~\ref{thm:oca-to-doca}, which does not perform a powerset construction but rather computes a nondeterministic OCA. The latter is then still of exponential size, but only wrt.\ the number of arithmetic-progression formulas in $\mathcal{A}$. With Theorem~\ref{thm:emptiness}, it follows that nonemptiness for \text{eOCAs}\xspace can be solved in PSPACE. We do not know if this upper bound is tight. \medskip Let $\mathcal{A}$ be a dOCA with alphabet $\Sigma$ and let $w \in (\Sigma \times \mathds{N})^\ast$. By $\rho_{\!\mathcal{A}}(w)$, we denote the unique run of $\mathcal{A}$ such that $\mathit{vis}(\mathit{trace}(\rho_{\!\mathcal{A}}(w))) = \enc{w}$. By the following observation, which follows directly from Lemma~\ref{lem:enc}, it is justified to call any dOCA $\mathcal{A}$ with ${L}_\mathit{vis}(\mathcal{A}) \subseteq \Encs{\Sigma}$ \emph{deterministic wrt.\ the observability semantics}: \begin{lemma}\label{rem:unique} Let $\mathcal{A}$ be a dOCA such that ${L}_\mathit{vis}(\mathcal{A}) \subseteq \Encs{\Sigma}$. For every word $w \in (\Sigma \times \mathds{N})^\ast$, we have $w \in L_\mathit{obs}({\mathcal{A}})$ iff $\rho_{\!\mathcal{A}}(w)$ is accepting. \end{lemma} Altogether, we obtain that OCAs are determinizable wrt.\ the observability semantics. \begin{theorem}[determinizability]\label{lem:doca-compl} Let $\mathcal{A}$ be an OCA over $\Sigma$. We can compute, in exponential time, an $m$-dOCA ${\mathcal{A}'}$ (with $m$ only polynomial) such that $L_\mathit{obs}({\mathcal{A}'}) = L_\mathit{obs}(\mathcal{A})$ and ${L}_\mathit{vis}(\mathcal{A}') \subseteq \Encs{\Sigma}$. \end{theorem} \begin{proof} Let $\mathcal{A}$ be the given OCA. We apply Lemmas~\ref{lem:eoca} and \ref{thm:oca-to-doca} to obtain a dOCA $\widetilde{\mathcal{A}}$ of exponential size such that $L_\mathit{obs}(\widetilde\mathcal{A}) = L_\mathit{obs}(\mathcal{A})$ and, for all $w \in L_\mathit{obs}(\widetilde\mathcal{A})$, we have $\enc{w} \in {L}_\mathit{vis}(\widetilde\mathcal{A})$. We set $\mathcal{A}' = \widetilde\mathcal{A} \mathrel{\times} \B_\mathsf{enc}$ (cf.\ Lemmas~\ref{lem:prod} and \ref{lem:benc}) and obtain $L_\mathit{obs}({\mathcal{A}'}) = L_\mathit{obs}(\mathcal{A})$ and ${L}_\mathit{vis}(\mathcal{A}') \subseteq \Encs{\Sigma}$. \end{proof} We conclude that OCAs are complementable wrt.\ the observability semantics: \begin{theorem}[complementability]\label{thm:complobs} Let $\mathcal{A}$ be an OCA with alphabet $\Sigma$. We can compute, in exponential time, a dOCA $\bar{\mathcal{A}}$ such that $L_\mathit{obs}({\bar{\mathcal{A}}}) = (\Sigma \times \mathds{N})^\ast \setminus L_\mathit{obs}(\mathcal{A})$. \end{theorem} \begin{proof} We first transform $\mathcal{A}$ into the dOCA $\mathcal{A}' = \widetilde\mathcal{A} \mathrel{\times} \B_\mathsf{enc}$ according to (the proof of) Theorem~\ref{lem:doca-compl}. Suppose $\widetilde\mathcal{A} = (Q,\Sigma,\iota,F,m,\delta)$.
Then, we set $\bar\mathcal{A} = (Q,\Sigma,\iota,Q \setminus F,m,\delta) \mathrel{\times} \B_\mathsf{enc}$. Note that $\bar\mathcal{A}$ is indeed a dOCA and that ${L}_\mathit{vis}(\bar\mathcal{A}) \subseteq \Encs{\Sigma}$. For $w \in (\Sigma \times \mathds{N})^\ast$, we have: \[ \begin{array}[b]{c} w \in L_\mathit{obs}(\bar\mathcal{A}) \stackrel{\textup{Lem.~\ref{rem:unique}}}{\Longleftrightarrow} \rho_{\!\bar\mathcal{A}}(w) \text{ is accepting} \stackrel{\textup{($\ast$)}}{\Longleftrightarrow} \rho_{\!\mathcal{A}'}(w) \text{ is not accepting} \stackrel{\textup{Lem.~\ref{rem:unique}}}{\Longleftrightarrow} w \not\in L_\mathit{obs}(\mathcal{A}') \end{array} \] Equivalence ($\ast$) holds as $\rho_{\!\bar\mathcal{A}}(w)$ and $\rho_{\!\mathcal{A}'}(w)$ have the same projection to the $Q$-component. \end{proof} Determinization and complementation of \emph{extended} OCAs are a priori more expensive: Lemmas~\ref{lem:rpq} and \ref{lem:eoca} only apply to OCAs so that one has to go through Lemma~\ref{thm:oca-to-doca} first. \subsection{Universality and Inclusion Problem wrt.\ Observability Semantics} \label{sec:univ} We are now ready to solve the universality and the inclusion problem for OCAs wrt.\ the observability semantics. The \emph{universality problem} is defined as follows: Given an \text{OCA}\xspace $\mathcal{A}$ over some alphabet $\Sigma$, do we have $L_\mathit{obs}(\mathcal{A}) = (\Sigma \times \mathds{N})^\ast$\,? The \emph{inclusion problem} asks whether, given OCAs $\mathcal{A}_1$ and $\mathcal{A}_2$, we have $L_\mathit{obs}(\mathcal{A}_1) \subseteq L_\mathit{obs}(\mathcal{A}_2)$. \begin{theorem} The universality problem and the inclusion problem for \text{OCAs}\xspace wrt.\ the observability semantics are PSPACE-complete. In both cases, PSPACE-hardness already holds when $|\Sigma|=1$. \end{theorem} \begin{proof} To solve the universality problem for a given \text{OCA}\xspace $\mathcal{A} = (Q,\Sigma,\iota,F,m,\Delta)$ in (nondeterministic) polynomial space, we apply the construction from Theorem~\ref{thm:complobs} (and, in particular, Theorem~\ref{lem:doca-compl}) on the fly to obtain a dOCA $\bar\mathcal{A}$ such that $L_\mathit{obs}({\bar{\mathcal{A}}}) = (\Sigma \times \mathds{N})^\ast \setminus L_\mathit{obs}(\mathcal{A})$. That is, we have to keep in memory a state of the form $(P,q,r)$, where $P \subseteq Q$, $q$ is the modulo-counting component (Lemma~\ref{lem:BC}), and $r$ is a state of $\B_\mathsf{enc}$ (Lemma~\ref{lem:benc}). In addition, we will maintain a component for the current counter value. In fact, the latter can be supposed to be polynomially bounded (cf.\ \cite{CCHPW-fossacs16} for a tight upper bound) in the size of $\bar\mathcal{A}$. The size of $\bar\mathcal{A}$ is exponential in the size of $\mathcal{A}$, and so the required information can be stored in polynomial space. To compute a successor state of $(P,q,r)$, we first guess an operation $\sigma \in \Sigma \cup \mathsf{Op}$. We then compute $(P',q')$ according to the proof of Lemma~\ref{thm:oca-to-doca} and update $r$ to $r'$ according to the type of $\sigma$. Note that this takes polynomial time only, since the function $\lambda$ as required in Lemma~\ref{lem:BC} can be computed on the fly. Finally, the algorithm outputs ``non-universal'' when we find a final state of $\bar\mathcal{A}$. For the inclusion problem, we rely on Proposition~\ref{prop:intersect} and perform the determinization procedure on-the-fly for \emph{both} of the given \text{OCAs}\xspace. 
\smallskip For the lower bound, we will restrict to the universality problem, since it is a special case of the inclusion problem. We reduce from the universality problem for ordinary finite automata, which is known to be PSPACE-complete \cite{MeyerS72}. If we suppose that $\Sigma$ is part of the input, then there is a straightforward reduction, which essentially takes the (ordinary) finite automaton and adds self-looping increment/decrement transitions to each state. Assuming $|\Sigma|=1$, the reduction is as follows. Let $\mathcal{A}$ be a finite automaton over some finite alphabet $\Gamma = \{a_0,\ldots,a_{n-1}\}$. We construct an \text{OCA}\xspace $\mathcal{A}'$ over the singleton alphabet $\Sigma$ such that $L(\mathcal{A}) = \Gamma^\ast$ iff $L(\mathcal{A}') = (\Sigma \times \mathds{N})^\ast$. The idea is to represent letter $a_i$ by (counter) value $i$. To obtain $\mathcal{A}'$, an $a_i$-transition in $\mathcal{A}$ is replaced with a gadget that nondeterministically outputs $i$ or any other natural number strictly greater than $n-1$. \end{proof} \section{Relation with Strong Automata}\label{sec:strong-aut} \newcommand{\mathsf{x}}{\mathsf{x}} \newcommand{\mathsf{x}'}{\mathsf{x}'} \newcommand{\mathsf{y}}{\mathsf{y}} \newcommand{\$}{\$} \newcommand{\$_1}{\$_1} \newcommand{\$_2}{\$_2} \newcommand{\eta}{\eta} \newcommand{\Phi}{\Phi} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\Psi}{\Psi} \newcommand{\mu}{\mu} \newcommand{\mymatrix}[4]{\smash{[\begin{array}{cc}\scriptstyle{#3} &\! \scriptstyle{#4}\\[-1.6ex]\scriptstyle{#1} &\! \scriptstyle{#2}\end{array}]}} \newcommand{\transformp}[4]{\smash{\Phi[\begin{array}{cc}\scriptstyle{#3} &\! \scriptstyle{#4}\\[-1.6ex]\scriptstyle{#1} &\! \scriptstyle{#2}\end{array}]}} \newcommand{\initformp}[2]{\smash{\Psi[\begin{array}{c}\scriptstyle{#2}\\[-1.7ex]\scriptstyle{#1}\end{array}]}} In this section, we show that OCAs with counter observability are expressively equivalent to strong automata over $(\mathds{N},+1)$ \cite{CzybaST15}. As the latter are descriptive in spirit, OCAs can thus be seen as their operational counterpart. Let us first give a short account of monadic-second order (MSO) logic over $(\mathds{N},+1)$ (see \cite{Tho97handbook} for more details). We have infinite supplies of first-order variables, ranging over $\mathds{N}$, and second-order variables, ranging over subsets of $\mathds{N}$. The atomic formulas are $\mathit{true}$, $\mathsf{x}' = \mathsf{x} + 1$, $\mathsf{x}' = \mathsf{x}$, and $\mathsf{x} \in \mathsf{X}$ where $\mathsf{x}$ and $\mathsf{x}'$ are first-order variables and $\mathsf{X}$ is a second-order variable. Those formulas have the expected meaning. Further, MSO logic includes all boolean combinations, first-order quantification $\exists \mathsf{x} \Phi$, and second-order quantification $\exists \mathsf{X} \Phi$. The latter requires that there is a (possibly infinite) subset of $\mathds{N}$ satisfying $\Phi$. As abbreviations, we may also employ $\mathsf{x}' = \mathsf{x} - 1$ and formulas of the form $\mathsf{x}' \in (\mathsf{x} + c + d\mathds{N})$, where $c,d \in \mathds{N}$. This does not change the expressive power of MSO logic. In the following, we assume that $\mathsf{x}$ and $\mathsf{x}'$ are two distinguished first-order variables. We write $\Phi(\mathsf{x},\mathsf{x}')$ to indicate that the free variables of $\Phi$ are among $\mathsf{x}$ and $\mathsf{x}'$. 
If $\Phi(\mathsf{x},\mathsf{x}')$ is evaluated to true when $\mathsf{x}$ is interpreted as $x \in \mathds{N}$ and $\mathsf{x}'$ is interpreted as $x' \in \mathds{N}$, then we write $(x,x') \models \Phi$. In fact, a transition of a strong automaton is labeled with a formula $\Phi(\mathsf{x},\mathsf{x}')$ and can only be executed if $(x,x') \models \Phi$ where $x$ and $x'$ are the natural numbers read at the previous and the current position, respectively. Thus, two successive natural numbers in a word can be related explicitly in terms of an MSO formula. \begin{definition}[\!\!\cite{CzybaST15}]\label{def:strong-aut} A \emph{strong automaton} is a tuple $\mathcal{S} = (Q,\Sigma,\iota,F,\Delta)$ where $Q$ is the finite set of \emph{states}, $\Sigma$ is a nonempty finite alphabet, $\iota \in Q$ is the \emph{initial state}, and $F \subseteq Q$ is the set of \emph{final states}. Further, $\Delta$ is a \emph{finite} set of \emph{transitions}, which are of the form $(q,\Phi,a,q')$ where $q,q' \in Q$ are the source and the target state, $a \in \Sigma$, and $\Phi(\mathsf{x},\mathsf{x}')$ is an MSO formula. \end{definition} Similarly to an OCA, $\mathcal{S}$ induces a relation ${\srelp{\,\mathcal{S}}} \subseteq \mathit{Conf}_{\!\mathcal{S}} \times (\Sigma \times \mathds{N}) \times \mathit{Conf}_{\!\mathcal{S}}$, where $\mathit{Conf}_{\!\mathcal{S}} = Q \times \mathds{N}$. For $(q,x),(q',x') \in \mathit{Conf}_{\!\mathcal{S}}$ and $(a,y) \in \Sigma \times \mathds{N}$, we have $(q,x) \xRightarrow{(a,y)\,}_{\mathcal{S}} (q',x')$ if $y = x'$ and there is an MSO formula $\Phi(\mathsf{x},\mathsf{x}')$ such that $(q,\Phi,a,q') \in \Delta$ and $(x,x') \models \Phi$. A \emph{run} of $\mathcal{S}$ on $w = (a_1,x_1) \ldots (a_n,x_n) \in (\Sigma \times \mathds{N})^\ast$ is a sequence $\rho = (q_0,x_0) \xRightarrow{(a_1,x_1)\,}_{\mathcal{S}} (q_1,x_1) \xRightarrow{(a_2,x_2)\,}_{\mathcal{S}} \ldots \xRightarrow{(a_n,x_n)\,}_{\mathcal{S}} (q_n,x_n)$ such that $q_0 = \iota$ and $x_0 = 0$. It is \emph{accepting} if $q_n \in F$. The language $L(\mathcal{S}) \subseteq (\Sigma \times \mathds{N})^\ast$ of $\mathcal{S}$ is defined as the set of words $w \in (\Sigma \times \mathds{N})^\ast$ such that there is an accepting run of $\mathcal{S}$ on $w$. \begin{figure}[t] \begin{center} \scalebox{0.9}{ \begin{gpicture} \gasset{Nframe=y,Nw=5,Nh=5,Nmr=8,ilength=4,flength=4,AHangle=30} \node[Nmarks=i](q0)(0,0){$q_0$} \node(q1)(30,0){$q_1$} \node[Nmarks=r](q2)(70,0){$q_2$} \drawedge(q0,q1){$\mathsf{x}' \ge 1\,|\,\mathsf{req}$} \drawloop[ELside=r,loopCW=n,loopdiam=8,loopangle=90](q1){$\mathsf{x}' = \mathsf{x} - 1\,|\,\mathsf{prod}$} \drawedge[ELside=l](q1,q2){$\left(\!\begin{array}{c}\mathsf{x}' = \mathsf{x} - 1\\{\wedge}~\mathsf{x}'=0\end{array}\!\right)|\,\mathsf{prod}$} \end{gpicture} } \end{center} \caption{A strong automaton over $(\mathds{N},+1)$\label{fig:strong-aut}} \end{figure} \begin{example} We refer to the OCA $\mathcal{A}$ from Example~\ref{ex:job}. Figure~\ref{fig:strong-aut} depicts a strong automaton $\mathcal{S}$ such that $L(\mathcal{S}) = L_\mathit{obs}(\mathcal{A}) = \{(\mathsf{req},n)(\mathsf{prod},n-1)(\mathsf{prod},n-2) \ldots (\mathsf{prod},0) \mid n \ge 1\}$. \end{example} In fact, we can transform any OCA into an equivalent strong automaton preserving the observability semantics, and vice versa: \begin{theorem}\label{thm:strong-aut} Let $L \subseteq (\Sigma \times \mathds{N})^\ast$. 
There is an OCA $\mathcal{A}$ such that $L_\mathit{obs}(\mathcal{A}) = L$ iff there is a strong automaton $\mathcal{S}$ such that $L(\mathcal{S}) = L$. \end{theorem} \begin{proof} ``$\Longrightarrow$'': This is the easier direction. Using the following observation, we can directly transform an OCA into a strong automaton: For all states $q$ and $q'$ of a given OCA $\mathcal{A}$, there is an MSO formula $\Phi_{q,q'}(\mathsf{x},\mathsf{x}')$ such that, for all $x,x' \in \mathds{N}$, we have $(x,x') \models \Phi_{q,q'}$ iff $(q,x) \mathrel{\smash{\bigl(\srelpp{\mathcal{A}}{\raisebox{-0.1ex}{$\scriptstyle\shneg$}\;} \cup \srelpp{\mathcal{A}}{\raisebox{-0.2ex}{$\scriptstyle\shpos$}\;}\bigr)^\ast}} (q',x')$. The existence of $\Phi_{q,q'}$ can be shown using a two-way automaton over infinite words \cite{Pecuchet85}, which simulates $\mathcal{A}$ and can be translated into an MSO formula \cite{Tho97handbook}. We refer to Appendix~\ref{app:strong-aut} for more details. \medskip \noindent``$\Longleftarrow$'': We will transform a strong automaton $\mathcal{S}$ into an \emph{extended} OCA $\mathcal{A}$ such that $L_\mathit{obs}(\mathcal{A}) = L(\mathcal{S})$. The main ingredient of $\mathcal{A}$ is a dOCA $\mathcal{C}_\Phi$, with $\Phi(\mathsf{x},\mathsf{x}')$ an MSO formula, that, starting at 0, goes straight to two counter values $x$ and $x'$, and decides whether $(x,x') \models \Phi$ or not: \begin{claim}\label{cl:Cphi} Let $\Phi(\mathsf{x},\mathsf{x}')$ be an MSO formula. There is a dOCA $\mathcal{C}_\Phi$ over the alphabet $\{\$_1,\$_2\}$ such that ${L}_\mathit{vis}(\mathcal{C}_\Phi) = {L}_1 \cup {L}_2$ where \begin{itemize}\itemsep=0.5ex \item ${L}_1 = \{\raisebox{-0.1ex}{$\shneg$}^x \$_1 \raisebox{-0.1ex}{$\shneg$}^y \$_2 \mid x,y \in \mathds{N} \text{ such that } (x,x+y) \models \Phi\}$ and \item ${L}_2 = \{\raisebox{-0.1ex}{$\shneg$}^{x+y} \$_1 \raisebox{-0.2ex}{$\shpos$}^y \$_2 \mid x,y \in \mathds{N} \text{ such that } (x+y,x) \models \Phi\}$\,. \end{itemize} \end{claim} \noindent\textit{Proof of Claim~\ref{cl:Cphi}.}~ By Lemma~\ref{lem:det}, it is actually sufficient to come up with a \emph{nondeterministic} OCA. Using B{\"u}chi's theorem (cf.\ \cite{Tho97handbook}), we first transform $\Phi(\mathsf{x},\mathsf{x}')$ into \emph{finite automata} $\mathcal{B}_1$ and $\mathcal{B}_2$ recognizing the following (regular) languages over the alphabet $\{\$_1,\$_2,\Box\}$: \begin{itemize}\itemsep=0.5ex \item $L(\mathcal{B}_1) = \{\Box^x \$_1 \Box^{y} \$_2 \mid x,y \in \mathds{N} \text{ such that } (x,x + y) \models \Phi\}$ \item $L(\mathcal{B}_2) = \{\Box^x \$_2 \Box^{y} \$_1 \mid x,y \in \mathds{N} \text{ such that } (x+y,x) \models \Phi\}$ \end{itemize} OCAs are easily seen to be closed under union (wrt.\ any semantics). So, it is enough to translate $\mathcal{B}_1$ and $\mathcal{B}_2$ into OCAs $\mathcal{C}_1$ and $\mathcal{C}_2$ such that ${L}_\mathit{vis}(\mathcal{C}_1) = L_1$ and ${L}_\mathit{vis}(\mathcal{C}_2) = L_2$. While the construction of $\mathcal{C}_1$ is immediate, it is less obvious how to transform $\mathcal{B}_2$ into a suitable $\mathcal{C}_2$. Note that $L(\mathcal{B}_2)$ can be written as the finite union of sets $\{\Box^{x} \$_2 \Box^{y} \$_1 \mid x \in \sem{\modform{c_1}{d_1}}$ and $y \in \sem{\modform{c_2}{d_2}}\}$ with $c_1,d_1,c_2,d_2 \in \mathds{N}$. This is achieved by determinizing $\mathcal{B}_2$ and splitting it into components as illustrated in the upper part of Figure~\ref{fig:Bphi}. 
As we can of course handle finite unions, we assume that $\mathcal{B}_2$ is of that form. For simplicity, we also suppose $d_1,d_2 \ge 1$. \begin{figure}[t] \begin{center} \scalebox{0.9}{ \begin{gpicture} \gasset{Nframe=y,Nw=4,Nh=4,Nmr=8} \unitlength=0.8mm \node[Nframe=n](n3)(-13,0){$\mathcal{B}_2$} \node[iangle=180,Nmarks=i](q0)(0,0){} \node(q1)(25,0){} \node(q2)(50,0){} \node(q3)(75,0){} \node[Nmarks=r](q4)(100,0){} \drawedge(q0,q1){$\Box^{c_1}$} \drawedge(q1,q2){$\$_2$} \drawedge(q2,q3){$\Box^{c_2}$} \drawedge(q3,q4){$\$_1$} \drawloop[loopCW=n,loopdiam=6,loopangle=90,ELside=r](q1){~~$\Box^{d_1}$} \drawloop[loopCW=n,loopdiam=6,loopangle=90,ELside=r](q3){~~$\Box^{d_2}$} \put(0,-15){ \node[Nframe=n](n3)(-13,0){$\mathcal{C}_2$} \node[iangle=180,Nmarks=i](q0)(0,0){} \node(q1)(25,0){} \node(q2)(50,0){} \node(q3)(75,0){} \node(q4)(100,0){} \drawedge(q0,q1){${\raisebox{-0.1ex}{$\shneg$}}^{c_1}$} \drawedge(q1,q2){$\epsilon$} \drawedge(q2,q3){${\raisebox{-0.1ex}{$\shneg$}}^{c_2}$} \drawedge(q3,q4){$\$_1$} \drawloop[loopCW=y,loopdiam=6,loopangle=-90,ELside=l](q1){~~${\raisebox{-0.1ex}{$\shneg$}}^{d_1}$} \drawloop[loopCW=y,loopdiam=6,loopangle=-90,ELside=l](q3){~~${\raisebox{-0.1ex}{$\shneg$}}^{d_2}$} \node(q5)(125,0){} \node[Nmarks=r](q6)(150,0){} \drawedge(q4,q5){${\raisebox{-0.2ex}{$\shpos$}}^{c_2}$} \drawedge(q5,q6){$\$_2$} \drawloop[loopCW=y,loopdiam=6,loopangle=-90,ELside=l](q5){~~${\raisebox{-0.2ex}{$\shpos$}}^{d_2}$} \node[Nframe=n](q5)(85,-9){$\left.\begin{array}{c}~\\[0.4ex]~\end{array}\right\}\scriptstyle{n_2\textup{-times}}$} \node[Nframe=n](q5)(115,-9){$\scriptstyle{n_2'\textup{-times}}\left\{\begin{array}{c}~\\[0.4ex]~\end{array}\right.$} \node[Nframe=n](n3)(100,-14){$\scriptstyle{n_2 \;\mathrel{\text{mod}}\; d_1 \,=\, n_2' \;\mathrel{\text{mod}}\; d_1}$} \node[Nframe=n](n4)(115,8){$\overbrace{~~~~~~~~~~~~~~~~~}^{\textup{counter value} \;\ge\; c_1}$} } \end{gpicture} } \end{center} \caption{Transformation of $\mathcal{B}_2$ into $\mathcal{C}_2$\label{fig:Bphi}} \end{figure} From $\mathcal{B}_2$, we obtain the $c_1$-OCA $\mathcal{C}_2$ for $L_2$ depicted in the lower part of Figure~\ref{fig:Bphi}. It works as follows: It first produces $\raisebox{-0.1ex}{$\shneg$}^{x+y}\$_1$, where $x = c_1 + d_1n_1$ and $y = c_2 + d_2n_2$ for some $n_1,n_2 \in \mathds{N}$. In doing so, it counts, modulo $d_1$, how often it traverses the ${\raisebox{-0.1ex}{$\shneg$}}^{d_2}$-loop. That is, after reading $\$_1$, it will be aware of $k := n_2 \mathrel{\text{mod}} d_1$. It then continues reading $\raisebox{-0.2ex}{$\shpos$}^{z} \$_2$ where $z = c_2 + d_2n_2'$ for some $n_2' \in \mathds{N}$ such that $k = n_2' \mathrel{\text{mod}} d_1$ and $x + y - z \ge c_1$ (recall that we define a $c_1$-OCA). We claim that $(x+y,x+y-z) \models \Phi$, which shows ${L}_\mathit{vis}(\mathcal{C}_2) \subseteq L_2$. Since $n_2 \mathrel{\text{mod}} d_1 = n_2' \mathrel{\text{mod}} d_1 =: k$, we have $n_2 = m_1 d_1 + k$ and $n_2' = m_2 d_1 + k$ for some $m_1,m_2 \in \mathds{N}$. Thus, \[\begin{array}{rl} x + y - z\!\!\!\! & =\, c_1 + d_1n_1 + c_2 + d_2n_2 - (c_2 + d_2n_2')\\ & =\, c_1 + d_1n_1 + c_2 + d_2(m_1 d_1 + k) - c_2 - d_2(m_2 d_1 + k)\\ & =\, c_1 + d_1n_1 + d_2 m_1 d_1 - d_2 m_2 d_1\\ & =\, c_1 + d_1(n_1 + d_2 m_1 - d_2 m_2) \;\in\; \sem{\modform{c_1}{d_1}}\,. \end{array} \] Note that membership in $\sem{\modform{c_1}{d_1}}$ holds due to $x + y - z \ge c_1$. We obtain $\Box^{x + y - z} \$_2 \Box^{z} \$_1 \in L(\mathcal{B}_2)$ and, therefore, $(x + y,x + y - z) \models \Phi$. 
It remains to prove $L_2 \subseteq {L}_\mathit{vis}(\mathcal{C}_2)$. Suppose $x,y \in \mathds{N}$ such that $(x+y,x) \models \Phi$. We have $\Box^{x} \$_2 \Box^{y} \$_1 \in L(\mathcal{B}_2)$. Thus, $x \in \sem{\modform{c_1}{d_1}}$ and $y \in \sem{\modform{c_2}{d_2}}$. But this implies $\raisebox{-0.1ex}{$\shneg$}^{x+y} \$_1 \raisebox{-0.2ex}{$\shpos$}^y \$_2 \in {L}_\mathit{vis}(\mathcal{C}_2)$: The automaton $\mathcal{C}_2$ takes the same number of $d_2$-loops before and after reading $\$_1$, namely $(y-c_2)/d_2$ many. \qed$_{\textup{Claim~\ref{cl:Cphi}}}$ \medskip We now turn to the actual translation of a strong automaton $\mathcal{S} = (Q,\Sigma,\iota,F,\Delta)$ into an eOCA $\mathcal{A}$ such that $L_\mathit{obs}(\mathcal{A}) = L(\mathcal{S})$. Let $\mathcal{F}$ denote the set of MSO formulas that occur in the transitions of $\mathcal{S}$. Without loss of generality, we assume that $\mathcal{F}$ contains $\mathit{false} := \neg\mathit{true}$. For all $\Phi \in \mathcal{F}$, we compute the dOCA $\mathcal{C}_\Phi = (Q_\Phi,\{\$_1,\$_2\},\iota_\Phi,F_\Phi,m_\Phi,\delta_\Phi)$ according to Claim~\ref{cl:Cphi}. Without loss of generality, we assume that there is $m \in \mathds{N}$ such that $m = m_\Phi$ for all $\Phi \in \mathcal{F}$. That is, $\delta_\Phi$ is of the form $Q_\Phi \times \setz{m} \times (\{\$_1,\$_2\} \mathrel{\cup} \mathsf{Op}) \to Q_\Phi$ (Definition~\ref{def:doca}). We will also assume that the sets $(Q_\Phi)_{\Phi \in \mathcal{F}}$ are pairwise disjoint. For $r \in Q_\Phi$, we can easily compute a formula $\mu_r \in \Guards^\textup{mod}$ such that $\sem{\mu_r} = \{x \in \mathds{N} \mid (\iota_\Phi,0) \mathrel{\bigl(\srelpp{\,\mathcal{C}_\Phi}{\raisebox{-0.1ex}{$\scriptstyle\shneg$}\;}\bigr)^x} (r,x)\}$. We now define the eOCA $\mathcal{A} = (Q',\Sigma,I,f,\Delta')$ satisfying $L_\mathit{obs}(\mathcal{A}) = L(\mathcal{S})$. Note that we use a set of initial states $I \subseteq Q'$ instead of one initial state; it is easily seen that this extension does not add expressive power to eOCAs. Whenever $\mathcal{A}$ outputs a counter value $x$ (and at the very beginning of an execution), it guesses (i) a formula $\Phi$ that will label the next transition of $\mathcal{S}$ to be simulated, as well as (ii) an \emph{entrance point} in $\mathcal{C}_\Phi$. The latter is a state $r$ of $\mathcal{C}_\Phi$ that can be reached from $\iota_\Phi$ by reading $\raisebox{-0.1ex}{$\shneg$}^x$ (which will be guaranteed by the guard $\mu_r$). Then, $\mathcal{A}$ will continue simulating $\mathcal{C}_\Phi$ until another counter value $x'$ is output, so that $\mathcal{A}$ can determine whether $(x,x') \models \Phi$. Accordingly, a state of $\mathcal{A}$ is a tuple $(p,\Phi,q) \in Q' = Q \times \mathcal{F} \times \bigl(\bigcup_{\Phi \in \mathcal{F}} Q_\Phi\bigr)$, where $p$ represents the current state of $\mathcal{S}$, $\Phi$ is the guessed next transition formula, and $q$ is the current state of $\mathcal{C}_\Phi$. The set of initial states is $I = \{(\iota,\Phi,\delta_\Phi(\iota_\Phi,0,\$_1)) \mid \Phi \in \mathcal{F}\}$. The guess $\Phi = \mathit{false}$ is used to signalize that no further transition will be taken in $\mathcal{S}$. Thus, the acceptance condition $f$ maps every state $(p,\Phi,q)$ such that $p \in F$ and $\Phi = \mathit{false}$ to $\mathit{true}$, and all other states to $\neg\mathit{true}$. 
It remains to define the transition relation $\Delta'$: \begin{itemize}\itemsep=1ex \item $\Sigma$-transitions: For all $(p,\Phi,a,p') \in \Delta$ with $\Phi \neq \mathit{false}$, all $k \in \setz{m}$ (recall that $\mathcal{C}_\Phi$ is an $m$-dOCA), all $q \in Q_{\Phi}$ such that $\delta_\Phi(q,k,\$_2) \in F_\Phi$, all $\Phi' \in \mathcal{F}$, and all $r \in Q_{\Phi'}$, we have a transition $(p,\Phi,q) \xrightarrow{\mu_r \wedge \kform{k}\,|\, a\,} (p',\Phi',\delta_{\Phi'}(r,k,\$_1))$, with $\kform{k}$ as in the proof of Lemma~\ref{lem:eoca}. \item $\mathsf{Op}$-transitions: For all $(p,\Phi,q) \in Q'$ with $\Phi \neq \mathit{false}$, all $k \in \setz{m}$, and all $\mathit{op} \in \mathsf{Op}$, we have a transition $(p,\Phi,q) \xrightarrow{\kform{k}\,|\,\mathit{op}\,} (p,\Phi,\delta_\Phi(q,k,\mathit{op}))$. \end{itemize} The proof of $L_\mathit{obs}(\mathcal{A}) = L(\mathcal{S})$ can be found in Appendix~\ref{app:strong-aut}. \end{proof} \section{Conclusion}\label{sec:conclusion} The observability semantics opens several directions for follow-up work. We may carry it over to other classes of infinite-state systems such as Petri nets. Are there infinite-state restrictions of Petri nets other than 1-VASS whose visibly semantics is robust? A direct application of our results is that the language $L \subseteq (\Sigma \times \mathds{N})^\ast$ of an OCA with observability semantics/strong automaton is learnable (in the sense of Angluin) in terms of a visibly one-counter automaton for $\enc{L}$ \cite{NeiderLoeding10}. It would be worthwhile to transfer results on visibly one-counter/pushdown automata that concern Myhill-Nerode congruences or minimization \cite{AlurKMV05,ChervetW07}. Another interesting question is to which extent we can relax the requirement that the counter value be output with every letter $a \in \Sigma$. It may indeed be possible to deal with a bounded number of $\Sigma$-transitions between any two counter outputs. Note that there have been relaxations of the visibility condition in pushdown automata, albeit preserving closure under boolean operations \cite{NowotkaS07}. \paragraph*{Acknowledgments.} The author is grateful to C.\ Aiswarya, Stefan G{\"{o}}ller, Christoph Haase, and Arnaud Sangnier for numerous helpful discussions and pointers to the literature.
\section{Introduction} \vspace{0.3cm}
Since Quantum Chromodynamics (QCD) is the underlying theory of the strong interaction, hadronic systems are assumed to be composed of quarks and gluons, and most descriptions of baryons are associated with effective degrees of freedom of nonperturbative QCD (NPQCD), such as constituent quarks [1], chiral fields [2], etc. In a multi-baryon system, however, the properties of the system can mostly be well explained in terms of baryon and meson degrees of freedom, with only some special cases where the quark and gluon degrees of freedom have to be taken into account [3]. In principle, nothing prohibits a system with baryon number larger than one, and there might exist dibaryons which are composed of six quarks. Many theoretical and experimental efforts have been made in past years to search for such objects [4-7]. Unlike the deuteron, a dibaryon has a QCD-motivated origin and should be a system with quarks and gluons confined in a rather small volume, say smaller than 0.85 fm in radius. In such a system, both the perturbative QCD (PQCD) and the NPQCD effects should be taken into account. However, because of the complexity of NPQCD, one has to use effective models to simulate the strong interaction, especially in the NPQCD region. Thus, investigating dibaryons is a prospective field in which to test various models of NPQCD and, consequently, to enrich our knowledge of QCD phenomenology. \vspace{0.3cm}
According to Jaffe's calculation with the bag model [4], the color magnetic interaction (CMI) of the one-gluon-exchange (OGE) potential of the H particle is attractive. In 1987, K.~Yazaki [8] investigated non-strange six-quark systems by considering OGE and confinement potentials and showed that, among the $NN$, $N\Delta$ and $\Delta\Delta$ systems, the CMI of the $\Delta\Delta_{(3,0)}$ system demonstrates an attractive feature. Therefore, this state is very likely a dibaryon, and studying this system is particularly worthwhile. \vspace{0.3cm}
Since Jaffe predicted the H dibaryon in 1977 [4], many works have been devoted to searching for possible dibaryons, such as the strangenessless dibaryons $d^*$ [5,8-10] and $d'$ [11] and the strange dibaryons H, $\Omega\Omega$, $\Xi^*\Omega$ and $\Xi\Omega$ [13,14,12]. Current calculations show that the mass of H is around the threshold of the $\Lambda\Lambda$ channel [14,15]. This result is consistent with the reports of current experiments [16]. The recent theoretical result for $d^*$ is obtained by considering the coupling of the $\Delta\Delta$ and $CC$ (hidden-color) channels in the chiral quark models. It is shown that the binding energy of $d^*$ is of the order of tens of MeV [10]. The predicted binding energies of $\Omega\Omega$, $\Xi^{*}\Omega$ and $\Xi\Omega$ are about 100 MeV, 80 MeV and 30 MeV, respectively [12,23-25]. \vspace{0.3cm}
In the theoretical study of dibaryons, quark cluster models have been intensively employed. With these models, one can easily realize the antisymmetrization between quarks belonging to different clusters, which can only be neglected when the clusters are well separated from each other. \vspace{0.3cm}
For a system with a radius not larger than 0.85 fm, the Pauli principle on the quark level would play a substantial role [20].
According to our calculation, $\Delta\Delta_{(3,0)}$ $(d^*)$ has the most antisymmetrized property in the spin-flavor-color space, which is characterized by the expectation value of the antisymmetrizer in that space. This property also appears in another non-strange six-quark system, $\Delta\Delta_{(0,3)}$. In this paper, we discuss the one-channel $\Delta\Delta_{(0,3)}$ system together with $d^*$ within the chiral quark cluster model. The paper is organized as follows: the model is briefly presented in Sect.2, the results and discussions are given in Sect.3 and the conclusions are drawn in Sect.4. \noindent \section{The effective Hamiltonian and two-cluster wavefunction} \noindent As an effective theory of QCD, the chiral quark model takes the constituent quark and the chiral field as the effective degrees of freedom. The effective interaction consists of a long-range confinement term, which dominates the long-range NPQCD effect and still cannot be strictly derived up to now, a short-range one-gluon-exchange term, which describes the short-range perturbative QCD effect, and a set of terms induced by the coupling between the quark and chiral fields, which mainly depict the short- and medium-range NPQCD effects. Thus the dynamics of a six-quark system in the SU(3) chiral quark model is governed by the effective Hamiltonian [18,19]
\begin{eqnarray} H~=~T~+~\sum_{i<j}\big(V_{ij}^{CONF}~+~V_{ij}^{OGE}~+~V_{ij}^{PS}~+~ V_{ij}^{S}\big), \end{eqnarray}
where $T$ denotes the kinetic energy operator of the system and $V_{ij}^{CONF}$, $V_{ij}^{OGE}$, $V_{ij}^{PS}$ and $V_{ij}^{S}$ represent the confinement, one-gluon-exchange, pseudo-scalar chiral field induced and scalar chiral field induced potentials between the $i$-th and $j$-th quarks, respectively. The confinement potential can phenomenologically take a quadratic form
\begin{eqnarray} V_{ij}^{CONF}~=~-(\lambda^{a}_{i} \lambda^{a}_{j})_{c}(a_{ij}^{0} +a_{ij}r_{ij}^{2}), \end{eqnarray}
where the superscript $a$ is the color index. The OGE potential can be derived from the perturbative tree diagram
\begin{eqnarray} V_{ij}^{OGE}~=~\frac{g_{i} g_{j}}{4}(\lambda^{a}_{i} \lambda^{a}_{j})_{c} ~\lbrack~ \frac{1}{r_{ij}}&-&\frac{\pi}{2}\big(\frac{1}{m_{i}^{2}}+\frac{1} {m_{j}^{2}}+\frac{4}{3}\frac{\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}} {m_{i}m_{j}}\big)\delta(\vec{r}_{ij}) \\ \nonumber &-&\frac{1}{4m_{i}m_{j}r_{ij}^{3}}S_{ij}-\frac{3} {4m_{i}m_{j}r_{ij}^{3}}\vec{L}\cdot(\vec{\sigma}_{i}+\vec{\sigma}_{j}) ~\rbrack, \end{eqnarray}
with the tensor operator $S_{ij}$ being
\begin{eqnarray} S_{ij}~&=&~3(\vec{\sigma}_{i}\cdot\hat{r}_{ij})~ (\vec{\sigma}_{j}\cdot\hat{r}_{ij}) - (\vec{\sigma}_{i} \cdot\vec{\sigma}_{j})~.
\end{eqnarray}
The pseudo-scalar and scalar field induced potentials originate from the restoration of an important symmetry of the strong interaction---the chiral symmetry---and can be written as
\begin{eqnarray} V_{ij}^{PS} &=& C(g_{ch},~m_{\pi_{a}}, ~\Lambda)~\frac{m_{\pi_{a}}^{2}}{12m_{i}m_{j}} \nonumber \\ &\cdot&\lbrack f_{1}(m_{\pi_{a}},~\Lambda,~r_{ij})~(\vec{\sigma_{i}}\cdot \vec{\sigma_{j}}) + f_{2}(m_{\pi_{a}},~\Lambda,~r_{ij})~S_{ij}~\rbrack (\lambda^{a}_{i}\lambda^{a}_{j})_{f}~, \end{eqnarray}
and
\begin{eqnarray} V_{ij}^{S} &=& - C(g_{ch},~m_{\sigma_{a}}, ~\Lambda) \cdot \lbrack f_{3}(m_{\sigma_{a}},~\Lambda,~r_{ij}) \nonumber \\ &+& \frac{m_{\sigma_{a}}^{2}}{4m_{i}m_{j}}~ f_{4}(m_{\sigma_{a}},~\Lambda,~r_{ij})~\vec{L}\cdot(\vec{\sigma_{i}} +\vec{\sigma_{j}})~\rbrack (\lambda^{a}_{i}\lambda^{a}_{j})_{f}~, \end{eqnarray}
respectively. In the above equations, the subscript $f$ denotes the flavor index. The functions $f_{i}$, $Y$, $G$ and $H$ and the constant $C$ are
\begin{eqnarray} f_{1}(m,~\Lambda,~r)~&=&~Y(mr) - \Big(\frac{\Lambda}{m}\Big)^{3}~Y(\Lambda r),\\ f_{2}(m,~\Lambda,~r)~&=&~H(mr) - \Big(\frac{\Lambda}{m}\Big)^{3}~H(\Lambda r),\\ f_{3}(m,~\Lambda,~r)~&=&~Y(mr) - \frac{\Lambda}{m}~Y(\Lambda r),\\ f_{4}(m,~\Lambda,~r)~&=&~G(mr) - \Big(\frac{\Lambda}{m}\Big)^{3}~G(\Lambda r),\\ C(g_{ch},~m,~\Lambda)~&=&~\frac{g_{ch}^{2}}{4\pi}~\frac{\Lambda^{2}~m}{\Lambda^{2} -m^{2}}, \end{eqnarray}
respectively, with
\begin{eqnarray} Y(mr)~&=&~\frac{1}{mr}~e^{-mr},\\ G(mr)~&=&~\frac{1}{mr}~\Big(1+\frac{1}{mr}\Big)~Y(mr),\\ H(mr)~&=&~\Big(1+\frac{3}{mr}+\frac{3}{m^{2}r^{2}}\Big)~Y(mr),\\ \frac{g_{ch}^{2}}{4\pi}~&=&~\frac{9}{25}~\frac{g_{NN\pi}^{2}}{4\pi}~ \Big(\frac{m_{q}}{M_{N}}\Big)^{2}. \end{eqnarray}
In Eq.(5), $\pi_{a}$ with $a=1,2,...,8$ and $0$ corresponds to the pseudoscalar fields $\pi,~K,~\eta_{8}$ and $\eta_{0}$, respectively, and $\eta$ and $\eta'$ are the linear combinations of $\eta_{0}$ and $\eta_{8}$ with the mixing angle $\theta$ [14]. In Eq.(6), $\sigma_{a}$ with $a=1,2,...,8$ and $0$ denotes the scalar fields $\sigma',~\kappa,~\epsilon$ and $\sigma$, respectively. \vspace{0.3cm}
In the framework of the Resonating Group Method (RGM), the wavefunction of the six-quark system can be written as
\begin{eqnarray} {\Psi}_{6q}~=~{\cal A}[{\Phi}_A{\Phi}_B\chi({\bf R}_{AB})Z({\bf R}_{cm})], \end{eqnarray}
where ${\Phi}_{A(B)}$ denotes the internal wavefunction of cluster A(B), ${\chi}({\bf R}_{AB})$ is the trial wavefunction of the relative motion between clusters A and B, $Z({\bf R}_{cm})$ is the wavefunction of the center-of-mass motion (CM) of the six-quark system and $\cal A$ represents the antisymmetrizer. Expanding the unknown wavefunction ${\chi}({\bf R}_{AB})$ in terms of well-defined basis functions, such as Gaussian functions, one can solve the RGM bound-state equation to obtain the eigenvalues and the corresponding wavefunctions simultaneously. The details of solving the RGM bound-state problem can be found in Refs. [21,22]. \vspace{0.3cm}
The model parameters are fixed at the very beginning by the mass splittings among $N$, $\Delta$, $\Sigma$ and $\Xi$ and by the stability conditions of the octet $(S=1/2)$ and decuplet $(S=3/2)$ baryons, respectively. The resultant values are listed in Table 1.
\centerline{\bf {Table 1~~~~Model parameters under $SU(3)$ chiral quark model}} {\small \begin{center} \begin{tabular}{ccc|ccc}\hline & $~~~~~$ Set1 & $~~~~~$ Set2 & & $~~~~~$ Set1 & $~~~~~$ Set2 \\\hline $m_u~(MeV)$ & $~~~~~$ 313 & $~~~~~$ 313 & & & \\ $m_s~(MeV)$ & $~~~~~$ 470 & $~~~~~$ 470 & & & \\ $b_u~(fm)$ & $~~~~~$ 0.505 & $~~~~~$ 0.505 & & & \\ $m_\pi~(fm^{-1})$ & $~~~~~$ 0.7 & $~~~~~$ 0.7 & $\Lambda_\pi~(fm^{-1})$ & $~~~~~$ 4.2 & $~~~~~$ 4.2 \\ $m_k~(fm^{-1})$ & $~~~~~$ 2.51 & $~~~~~$ 2.51 & $\Lambda_k~(fm^{-1})$ & $~~~~~$ 4.2 & $~~~~~$ 4.2 \\ $m_\eta~(fm^{-1})$ & $~~~~~$ 2.78 & $~~~~~$ 2.78 & $\Lambda_\eta~(fm^{-1})$ & $~~~~~$ 5.0 & $~~~~~$ 5.0 \\ $m_{\eta'}~(fm^{-1})$ & $~~~~~$ 4.85 & $~~~~~$ 4.85 & $\Lambda_{\eta'}~(fm^{-1})$ & $~~~~~$ 5.0 & $~~~~~$ 5.0 \\ $m_\sigma~(fm^{-1})$ & $~~~~~$ 3.17 & $~~~~~$ 3.17 & $\Lambda_\sigma~(fm^{-1})$ & $~~~~~$ 4.2 & $~~~~~$ 7.0 \\ $m_{\sigma'}~(fm^{-1})$ & $~~~~~$ 4.85 & $~~~~~$ 4.85 & $\Lambda_{\sigma'}~(fm^{-1})$ & $~~~~~$ 5.0 & $~~~~~$ 5.0 \\ $m_\kappa~(fm^{-1})$ & $~~~~~$ 4.85 & $~~~~~$ 7.09 & $\Lambda_\kappa~(fm^{-1})$ & $~~~~~$ 5.0 & $~~~~~$ 7.61 \\ $m_\epsilon~(fm^{-1})$ & $~~~~~$ 4.85 & $~~~~~$ 7.09 & $\Lambda_\epsilon~(fm^{-1})$ & $~~~~~$ 5.0 & $~~~~~$ 7.61 \\ $g_u$ & $~~~~~$ 0.936 & $~~~~~$ 0.936 & & & \\ $g_s$ & $~~~~~$ 0.924 & $~~~~~$ 0.781 & & & \\ $a_{uu}~(MeV/fm^2)$ & $~~~~~$ 54.34 & $~~~~~$ 57.71 & $a^0_{uu}~(MeV)$ & $~~~~~$ -47.69 & $~~~~~$ -48.89 \\ $a_{us}~(MeV/fm^2)$ & $~~~~~$ 65.75 & $~~~~~$ 66.51 & $a^0_{us}~(MeV)$ & $~~~~~$ -41.73 & $~~~~~$ -50.57 \\ $a_{ss}~(MeV/fm^2)$ & $~~~~~$ 102.97 & $~~~~~$ 115.39 & $a^0_{ss}~(MeV)$ & $~~~~~$ -45.04 & $~~~~~$ -68.11 \\\hline \end{tabular} \end{center} } In this table, $m_A$ denotes the mass of particle $A$, which can be either a valence quark or one of the mesons involved, $\Lambda_A$ represents the corresponding cut-off mass, $g_q$ is the coupling constant of the gluon to the valence quark $q$, $a_{q_{1}q_{2}}$ is the confinement strength between valence quarks $q_{1}$ and $q_{2}$, and $a^0_{q_{1}q_{2}}$ denotes the corresponding zero-point energy. It should be mentioned again that with both sets of parameters one can reasonably reproduce the NN and NY scattering data and part of the single-baryon properties, and one obtains a mass of the H particle which agrees with the available experimental data [17-19,23-25]. \noindent \section{Results and discussions} \noindent We are now ready to study the structure of $\Delta\Delta$ dibaryons. First of all, let us look at the symmetry character of the $\Delta\Delta$ system. As is emphasized above, the antisymmetrization of quarks belonging to different clusters plays a substantial role in the structure of the dibaryon. In both the $\Delta\Delta_{(3,0)}$ and $\Delta\Delta_{(0,3)}$ systems, the expectation value of the antisymmetrizer in the spin-flavor-color space, $\langle {\cal A} ^{sfc}\rangle$, is equal to 2. As a result, the kinetic energy term provides an attractive effect. Therefore, although the CMI of $\Delta\Delta_{(0,3)}$ does not have an attractive nature like that of $\Delta\Delta_{(3,0)}$, it is still possible to form a bound $\Delta\Delta_{(0,3)}$ state, just as for $\Delta\Delta_{(3,0)}$. \vspace{0.3cm} With parameter Sets 1 and 2, we calculate the binding energies and the root-mean-square radii (RMS) of the two systems. The results are tabulated in Table 2. \vspace{0.3cm} It is shown that both $\Delta\Delta_{(3,0)}$ and $\Delta\Delta_{(0,3)}$ are bound states: their binding energies are a few tens of MeV and the corresponding RMS values are around 1 fm.
The binding nature of $\Delta\Delta_{(0,3)}$ is consistent with our speculation. The binding energy of $\Delta\Delta_{(0,3)}$ is lower than that of $\Delta\Delta_{(3,0)}$ by 5-6 MeV. This is attributed to the fact that the CMI of $\Delta\Delta_{(0,3)}$ is repulsive, in contrast to that of $\Delta\Delta_{(3,0)}$, although their antisymmetrization characters are the same. The results with Sets 1 and 2 show only a small difference, mainly because of the variation of $g_q$ brought about by the mass changes of $\kappa$ and $\epsilon$, although the strange-meson clouds are not important in this non-strange system. \vspace{0.3cm} Then, we try to reveal the effects of the chiral-field--quark interactions in the $\Delta\Delta$ systems. This can be done by repeating the calculations in two other chiral quark models, the extended $SU(2)$ and the $SU(2)$ chiral quark models. In the former (hereafter called Model II), the chiral-field--quark interactions are provided by the scalar meson $\sigma$ and all pseudoscalar mesons $\pi$, $K$, $\eta$ and $\eta'$, while in the latter (hereafter called Model III), they are induced by the pseudoscalar meson $\pi$ and the scalar meson $\sigma$ only. The results in these two models are also tabulated in Table 2. It is seen that the binding natures of the two systems remain the same, but the values of the binding energies and RMS show substantial differences. In the higher-spin system $\Delta\Delta_{(3,0)}$, the binding energy increases when the model changes from I to II or III, and the largest value occurs in Model II. In the lower-spin state $\Delta\Delta_{(0,3)}$, the binding energy changes in the opposite direction and the largest value appears in Model I. In any case, the fact that the binding energy of the system with spin $S=3$ is larger than that of the system with spin $S=0$ remains unchanged when the model is altered. \centerline{\bf {Table 2~~~~Binding energy $B_{AB}$ and RMS for $\Delta\Delta_{(3,0)}$ and $\Delta\Delta_{(0,3)}$ systems$^{\dag}$}} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $~$ & \multicolumn{2}{|c|}{one channel} & \multicolumn{2}{|c|}{one channel} \\ Channel & \multicolumn{2}{|c|}{$\Delta\Delta_{(3,0)}$} & \multicolumn{2}{|c|}{$\Delta\Delta_{(0,3)}$} \\ \cline{2-5} & $B_{\Delta\Delta_{(3,0)}}$ & $RMS$ & $B_{\Delta\Delta_{(0,3)}}$ & $RMS$ \\ &$(MeV)$ & $(fm)$ & $(MeV)$ & $(fm)$ \\ \hline Model I Set1 & 22.2 & 1.01 & 16.0 & 1.10 \\ \hline Model I Set2 & 18.5 & 1.05 & 13.5 & 1.14 \\ \hline Model II & 64.8 & 0.84 & 6.3 & 1.25 \\ \hline Model III & 62.7 & 0.86 & 13.2 & 1.11 \\ \hline \end{tabular} \end{center} $\dag$ {\footnotesize {$B_{AB}$ denotes the binding energy between A and B baryons and RMS represents the corresponding root-mean-square radius.}} \vspace{0.3cm} This result can be understood in the following way. The basic observation is that the symmetry structures of the two systems, which lead to $\langle {\cal A} \rangle^{sfc}=2$, where $sfc$ indicates that the operation is taken in the spin-flavor-color space only, are very beneficial for forming a bound state. Namely, the quark-exchange effect due to the Pauli principle is so strong that the quarks in the two clusters can come sufficiently close, and consequently the contribution from the kinetic energy term shows an attractive character. On the other hand, the $\sigma$-field induced interaction provides a fairly strong attraction which plays a dominant role in forming the bound state. These two factors make the binding nature of the system stable with respect to models and parameters.
\vspace{0.3cm} Moreover, from the form of the Gell-Mann matrices we know that the $K$ and $\kappa$ meson exchanges do not occur between $u$ or $d$ quark pairs, namely the chiral fields $K$ and $\kappa$ do not contribute in either system. The contributions from the pseudoscalar mesons in the two systems are comparable. However, the contribution from the $\sigma'$ meson is characteristically different in these two systems. Opposite to the contributions of the CMI, it is strongly repulsive in $\Delta\Delta_{(3,0)}$ and relatively weakly attractive in $\Delta\Delta_{(0,3)}$, and the repulsive strength of $\sigma'$ is much larger than the attractive strength of the CMI. As a result, when one moves from Model I to Model II or III, where the scalar mesons $\sigma'$, $\kappa$ and $\epsilon$ are turned off, the wavefunction of $\Delta\Delta_{(3,0)}$ moves inward and that of $\Delta\Delta_{(0,3)}$ spreads outward. Consequently, the contribution from $\sigma$ becomes stronger in $\Delta\Delta_{(3,0)}$ and a little weaker in $\Delta\Delta_{(0,3)}$. That is why, when the model changes from I to II or III, $\Delta\Delta_{(3,0)}$ becomes much more deeply bound and $\Delta\Delta_{(0,3)}$ turns out to be a little less bound. \vspace{0.3cm} The above results can be cross-checked against the S-wave scattering phase shifts and the corresponding scattering lengths. We plot the S-wave phase shifts of these two systems in Figs. 1 and 2, respectively. The solid and dashed curves are those with parameter Sets 1 and 2, respectively. From these phase shifts, one can easily extract the corresponding scattering lengths, which are tabulated in Table 4. \centerline{\bf {Table 4~~~~The scattering length $a$ of the $\Delta\Delta$ systems}} \begin{center} \begin{tabular}{|c|c|c|} \hline & one channel & one channel\\ & $\Delta\Delta_{(3,0)}$ & $\Delta\Delta_{(0,3)}$\\ \hline Model I Set1 & $-1.30~(fm)$ & $-2.56~(fm)$\\ \hline Model I Set2 & $-1.16~(fm)$ & $-2.72~(fm)$\\ \hline \end{tabular} \end{center} Both the phase shifts and the scattering lengths are consistent with the results above. \noindent \section{Conclusions} In the $\Delta\Delta$ system, there exist two bound states, $\Delta\Delta_{(3,0)}$ ($d^*$) and $\Delta\Delta_{(0,3)}$. The former has been predicted as a bound dibaryon [5,8-10]. Its binding energy varies from model to model. By employing the $SU(3)$ chiral quark model, with which the available empirical data can be well reproduced, the binding energy of the single-channel $\Delta\Delta_{(3,0)}$ (or $d^*$) is 18.5-22.2 MeV and the corresponding RMS is about 1.0 fm. This result is consistent with most other theoretical predictions [5,8-10]. We then predict the binding energy of $\Delta\Delta_{(0,3)}$. The result turns out to be 13.5-16.0 MeV and the corresponding RMS is around 1.1 fm. The binding behaviors of these systems are attributed to their special symmetry properties, which make the contribution of the kinetic energy term beneficial for forming dibaryons. Moreover, the chiral fields also provide substantial contributions to their binding. Among these chiral fields, the relatively strong $\sigma$-induced interaction is always the dominant one, no matter which model and model parameters are used. As a consequence, the binding phenomena of the $\Delta\Delta_{(3,0)}$ and $\Delta\Delta_{(0,3)}$ systems always remain the same.
\vspace{0.3cm} However, the masses of these two systems predicted in the one-channel approximation are still larger than the thresholds of the strong decay channels $\Delta\Delta\rightarrow{\Delta}N\pi$ and $\Delta\Delta\rightarrow{NN}\pi\pi$, which lie about 155 MeV and 310 MeV below the $\Delta\Delta$ threshold, respectively (even after considering the $CC$ channel coupling in $d^*$, this conclusion still holds). These dibaryons would therefore have very broad widths. Because the newly predicted $\Delta\Delta_{(0,3)}$ has the lower spin $S=0$ compared with $\Delta\Delta_{(3,0)}$, it might be more favorable for experimental detection. \newpage
\section{Introduction} Prosodic phrasing is the division of fluent speech into \emph{prosodic} (or \emph{intonation}~\cite{beckman1997guidelines}) \emph{phrases} -- groups of words in a spoken sentence, typically featuring an intonation peak and often separated by pauses. Prosodic phrasing not only plays an important role in the human understanding of spoken language~\cite{frazier2006prosodic} but is also highly relevant for many speech processing tasks, such as speech synthesis and audio annotation. In text-to-speech (TTS) systems, information about prosodic boundaries in text helps improve the naturalness of synthesized speech, by allowing the system to insert pauses and modify intonation in a similar way to a human speaker. In audio, it can be used to enhance the training data, likewise leading to more natural-sounding speech~\cite{Taylor_2009}. In speech recognition and spoken language understanding, phrase breaks also help distinguish between otherwise identical sentences with different meanings (such as the popular example \emph{\enquote{Let's eat, grandma!}} versus \emph{\enquote{Let's eat grandma!}}). There are two different scenarios for the automatic detection of prosodic boundaries: detection solely from text, most often for the purposes of speech synthesis~\cite{futamata21_interspeech,read2007stochastic,taylor1998assigning,volin2021human,zou21_interspeech}, or detection from spoken utterances as a form of audio annotation. In the latter case, some approaches have been based solely on acoustic information (though sometimes with word or syllable boundaries derived from text transcripts)~\cite{lin20m_interspeech,rosenberg2010autobi,Schuppler2020,suni2016boundary}, while others have combined both lexical and acoustic information~\cite{Christodoulides2017,gallwitz2002integrated,KOCHAROV2019231}. In this paper, our main goal is to obtain a detector which works solely in the audio modality, using only acoustic cues. However, its results will also be compared to an existing text-based model~\cite{volin2021human}, evaluated on the transcripts of the same utterances. \section{Data} The experiments were performed on a set of recordings of Czech radio broadcast news (Channels 1 and 2 of the Czech Radio), previously used in \cite{volin2021human} as the News-Reading Speech (NRS) corpus\footnotemark. The dataset consists of 12 news bulletins presented by different speakers (six male and six female), each between 2.5 and 5 minutes long, for a total of 42 minutes of speech (486 sentences). The recordings have been annotated by phonetic experts, following the guidelines in \cite{beckman1997guidelines}. \footnotetext{Since the publication of \cite{volin2021human}, the NRS annotations have undergone a round of revisions and the model was updated accordingly. The text-based results in Section~\ref{sec:textresults} will thus differ from those listed in the aforementioned paper.} The annotation conventions, as described in \cite{beckman1997guidelines}, include multiple levels of phrasing: most relevantly, prosodic (intonation) phrases can also be further divided into one or more \emph{intermediate} phrases -- smaller units with less discernible boundaries. These are also labeled in the NRS dataset. However, in our work, we are specifically interested in the detection of \emph{prosodic} boundaries as the most important ones for most speech processing applications -- we will explore the use of intermediate boundaries during \emph{training}, but we ignore them during evaluation.
\section{Model for text-based detection} We compare the results of our audio-based prosodic boundary detection to those of our existing text-based detector~\cite{volin2021human}, which was tested on the same dataset. This model remains as described in~\cite{volin2021human}: it is a Text-to-Text Transfer Transformer (T5) model~\cite{raffel2020exploring}, which transforms a given sequence of words into an output sequence with predicted phrase boundaries. It was pre-trained on large amounts of unlabeled Czech text in the CommonCrawl corpus and fine-tuned for the phrase detection task on what~\cite{volin2021human} referred to as the Laboratory Speech (LS) data -- text sentences from six large-scale Czech speech corpora created for the purposes of speech synthesis in the TTS system ARTIC \cite{tihelka2018current}. The prosodic boundaries in the LS dataset were labeled only using automatic segmentation, but the fine-tuned model was subsequently adapted on the hand-annotated NRS data using a leave-one-out approach -- 12 different models were trained, each adapted on 11 speakers and evaluated on the remaining speaker. \section{Model for audio-based detection} Systems for audio-based prosodic boundary detection have traditionally utilized combinations of different features such as the duration of pauses and syllables, $F_0$ range and resets, intensity, or pitch movement~\cite{Christodoulides2017,KOCHAROV2019231,lin20m_interspeech,rosenberg2010autobi,Schuppler2020}. Rather than use such handcrafted combinations of features, however, we chose to employ learned representations from raw audio data. Wav2vec 2.0~\cite{baevski2020wav2vec} is a self-supervised framework for speech representation which has been used for a large variety of different speech-related tasks~\cite{cooper2021generalization,yang21c_interspeech,zhang2020pushing}. One of the main advantages of the wav2vec approach is that a generic pre-trained model can be fine-tuned for a specific purpose using only a small amount of labeled data. We use the pre-trained wav2vec 2.0 base model \enquote{ClTRUS}\footnote{\textbf{C}zech \textbf{l}anguage \textbf{TR}ansformer from \textbf{U}nlabeled \textbf{S}peech, \newline available from: \url{https://huggingface.co/fav-kky/wav2vec2-base-cs-80k-ClTRUS}}, which is specifically trained for the Czech language using more than 80 thousand hours of Czech speech from various domains~\cite{lehecka2022cltrus}. Using the HuggingFace Transformers library~\cite{wolf2020transformers}, we fine-tuned the model for an audio frame classification task (\emph{Wav2Vec2ForAudioFrameClassification}) on the NRS data (Fig.~\ref{fig:w2v_diagram}) and evaluated it using a leave-one-out approach, similarly to the text-based T5 model. \begin{figure} \centering \includegraphics[width=\textwidth]{img/TSD2022_wav2vec_diagram.pdf} \caption{Illustration of the wav2vec2-based prosodic boundary detector. The model outputs a label for each audio frame (every 20\,ms).} \label{fig:w2v_diagram} \end{figure} During the fine-tuning of the wav2vec 2.0 model, the references are given in the form of a fuzzy labeling function, as depicted in Figure~\ref{fig:w2v_ref_labels} (top): prosodic boundaries are given the reference label 1, linearly decreasing to 0 in an interval $\pm 0.2$\,s around each boundary. The model was fine-tuned with MSE loss. The fine-tuning process is very fast -- the model learns to predict the triangular shapes nearly perfectly within several epochs, at which point the results do not improve further with additional training.
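To make the fuzzy labeling scheme concrete, the following minimal Python sketch constructs such frame-level targets for a single chunk. It is our own illustration, not the code used for the experiments; the function and variable names are ours, while the 20\,ms frame step, the $\pm 0.2$\,s linear ramp, and the peak values 1 and 0.5 come from the description above.
\begin{verbatim}
import numpy as np

FRAME_STEP = 0.02   # wav2vec 2.0 emits one output frame every 20 ms
RAMP = 0.2          # labels fall linearly to 0 within +/- 0.2 s of a boundary

def fuzzy_labels(boundary_times, duration, peak=1.0):
    """Triangular frame-level targets around each boundary (times in seconds)."""
    n_frames = int(round(duration / FRAME_STEP))
    t = np.arange(n_frames) * FRAME_STEP
    labels = np.zeros(n_frames)
    for b in boundary_times:
        ramp = peak * np.clip(1.0 - np.abs(t - b) / RAMP, 0.0, 1.0)
        labels = np.maximum(labels, ramp)   # keep the larger value where ramps overlap
    return labels

# Example: prosodic boundaries at 1.3 s and 2.9 s in a 4 s chunk;
# intermediate boundaries (used in one of the training variants below) take peak=0.5.
targets = fuzzy_labels([1.3, 2.9], duration=4.0)
print(targets.shape, targets.max())
\end{verbatim}
Targets of this form can then be regressed against the per-frame model outputs with the MSE objective mentioned above.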
Due to the relatively high memory requirements of wav2vec, the audio is processed in chunks of 30\,s, with a 15\,s step -- the chunks are partially overlapping. When the outputs are stitched back together for evaluation, the middle part of each chunk is used and the overlapping edges are discarded. This was originally meant to avoid potential issues near the beginning and end of each chunk (due to missing context on one side). However, in terms of the overall precision and recall, the difference appears to be minimal. Finally, in order to improve the robustness and consistency of the results and limit the influence of random chance, each model was fine-tuned five times with identical settings and different random seeds, and the raw outputs were averaged. This does not substantially improve the results, but it reduces random fluctuations and allows for a better comparison between models fine-tuned with different settings. \begin{figure} \centering \adjincludegraphics[width=\textwidth,trim={{.1\width} 0 {.08\width} 0},clip]{img/fuzzy02_example_output.eps} \caption{Example of the reference labels and predictions for one audio segment. Training labels for the wav2vec2 model either include only prosodic phrase boundaries, with a peak value of 1 (top), or also intermediate phrase boundaries, with a smaller peak value of 0.5 (bottom).} \label{fig:w2v_ref_labels} \end{figure} \subsection{Influence of intermediate phrase boundaries} As previously stated, our targets for prediction are only the prosodic phrase boundaries. However, the less important intermediate boundaries may also convey useful information for training, particularly since the distinction between the two categories is not always clear. In our initial experiments with a model fine-tuned solely on prosodic boundaries, we found that the majority of false positives (approximately two thirds, as seen in Figure~\ref{fig:prec_vs_rec_audio}) were located in spots marked as \emph{intermediate} phrase boundaries by the expert annotators. This is despite the fact that intermediate boundaries are present in less than 7\% of all word boundaries in the NRS dataset. This spurred us to question whether it is truly appropriate to label these boundaries as \emph{zero} in the reference labels - they clearly exhibit similar acoustic features that the model is learning to detect, albeit perhaps to a less pronounced degree. Assigning them a smaller, but non-zero label may have a positive effect on the resulting model. Thus, we decided to test two options for the training data: \begin{enumerate} \item[a)] Only prosodic phrases are included in the reference labels. \item[b)] Both prosodic and intermediate phrases are included in the reference labels, with different values. Prosodic boundaries are given the maximum value of 1 and intermediate boundaries are labeled as 0.5 -- both with a linear decrease to zero over $\pm0.2$\,s, as previously described. \end{enumerate} In both cases, the model is still evaluated on \emph{prosodic} boundaries only. \subsection{Post-processing} \label{sec:postprocessing} The wav2vec2 model outputs predicted labels for each audio frame (every 20\,ms). However, the text-based T5 model naturally predicts phrase breaks between words and is evaluated in terms of within-sentence word boundaries. Thus, it is necessary to convert the wav2vec2 predictions to a more comparable format: First, we identify the peaks in the raw outputs. 
If the value of a peak is higher than a specific threshold and there is no higher peak within 0.25\,s, the system marks this as a predicted boundary. For the purposes of evaluation, these predicted boundaries are then aligned to the nearest end of a word within 100\,ms, based on the reference annotations -- this is because the ground-truth phrase boundaries are likewise aligned to the ends of words. For the numeric results listed in this paper, we did not specifically tune the decision threshold -- we simply chose the value 0.5 as the \enquote{middle ground}. Similarly, for the model trained with added intermediate boundaries, the threshold was selected as 0.75 -- the average between the labels of prosodic boundaries (1.0) and intermediate boundaries (0.5). \section{Results} In this paper, we list two separate sets of results, evaluated under slightly different conditions. First, we evaluate the full outputs of the wav2vec2 models, given as time labels and including all boundaries, even those between sentences. Second, for a fair comparison with the T5 model, we convert our predictions into text form (using the transcripts to ensure identical sentences), with boundaries marked only between words and ignoring the ends of sentences -- this is because the text-based T5 model worked with isolated sentences and only searched for prosodic boundaries \emph{within} the sentence. \subsection{Evaluation measures} The standard evaluation metrics for phrase boundary detection are precision ($P$), recall ($R$), accuracy (Acc), and F1-score, given as \begin{equation} \label{eq:prec} P = \frac{tp}{tp + fp} \end{equation} \begin{equation} R = \frac{tp}{tp + fn} \end{equation} \begin{equation} \operatorname{Acc} = \frac{tp + tn}{tp + tn + fp + fn} \end{equation} \begin{equation} \label{eq:F1} F1 = 2 \cdot \frac{P \cdot R}{P + R} \end{equation} where $tp$ refers to the number of correctly detected phrase boundaries (\emph{true positives}), $fp$ is the number of \emph{false positives}, $fn$ is the number of missed phrase boundaries (\emph{false negatives}) and $tn$ is the number of \emph{true negatives} -- between-word boundaries that were correctly labeled as not being phrase breaks. As the wav2vec2 model outputs per-frame predictions, not constrained to word boundaries, we decided to also perform frame-wise evaluation, in terms of the segmentation of each audio file into prosodic phrases. For this, we chose segment purity and coverage (e.g. \cite{Bredin2017}) as the main metrics. These are obtained as \begin{equation} \operatorname{purity}(S, R) = \frac{\sum_k \max_j |s_k \cap r_j|}{\sum_k |s_k|} \label{eq:purityBredin} \end{equation} and \begin{equation} \operatorname{coverage}(S, R) = \frac{\sum_j \max_k |s_k \cap r_j|}{\sum_j |r_j|} \label{eq:coverageBredin} \end{equation} where $S = \{s_1, \ldots, s_K\}$ is the set of segments (i.e. prosodic phrases) found by the system, $R = \{r_1, \ldots, r_J\}$ corresponds to the reference segments, $|r_j|$ is the duration of segment $r_j$, and $s_k \cap r_j$ denotes the intersection of segments $s_k$ and $r_j$. \subsection{Audio-based evaluation} \label{sec:audioresults} Figure~\ref{fig:prec_vs_rec_audio} shows the precision-recall and purity-coverage curves achieved by the two fine-tuned wav2vec2 models when evaluated on the entire audio data. Table~\ref{tab:textresults} then lists the numeric results corresponding to the default thresholds.
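For reference, segment purity and coverage as defined in Eqs.~\eqref{eq:purityBredin} and \eqref{eq:coverageBredin} can be computed directly from two interval segmentations. The sketch below is our own illustration (it is not the evaluation code behind the reported numbers); each segmentation is assumed to be a list of \texttt{(start, end)} pairs in seconds.
\begin{verbatim}
def _overlap(a, b):
    # duration of the intersection of two (start, end) intervals
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def purity(system, reference):
    # sum_k max_j |s_k ^ r_j|  /  sum_k |s_k|
    num = sum(max(_overlap(s, r) for r in reference) for s in system)
    return num / sum(s[1] - s[0] for s in system)

def coverage(system, reference):
    # sum_j max_k |s_k ^ r_j|  /  sum_j |r_j|
    num = sum(max(_overlap(s, r) for s in system) for r in reference)
    return num / sum(r[1] - r[0] for r in reference)

# Example: hypothesized vs. reference prosodic phrases.
sys_segs = [(0.0, 1.4), (1.4, 3.0), (3.0, 4.0)]
ref_segs = [(0.0, 1.3), (1.3, 2.9), (2.9, 4.0)]
print(purity(sys_segs, ref_segs), coverage(sys_segs, ref_segs))
\end{verbatim}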
\begin{figure} \centering \adjincludegraphics[width=0.49\textwidth,trim={{.02\width} 0 {.08\width} 0},clip]{img/fuzzy_02_prec-rec-intB_withEvalOnAll.eps} \adjincludegraphics[width=0.49\textwidth,trim={0 0 {.08\width} 0},clip]{img/fuzzy_02_cov-pur_withEvalOnAll.eps} \caption{Precision-recall (left) and purity-coverage (right) curves of models fine-tuned a) only on prosodic boundaries (\enquote{w2v2 (P)}), or b) on both prosodic and intermediate boundaries (\enquote{w2v2 (PI)}). The left plot additionally shows the fraction of false positives which correspond to intermediate boundaries, relative to the total number of false positives.} \label{fig:prec_vs_rec_audio} \end{figure} From the results displayed in Figure~\ref{fig:prec_vs_rec_audio}, it appears that the addition of intermediate boundaries to the training data has had only a minimal effect on the two curves, at least when evaluated only on prosodic boundaries. However, if we look at the false positives, a greater percentage of them now consists of intermediate boundaries as opposed to no-breaks. This could be considered an improvement by itself -- in many use-cases, an intermediate boundary being incorrectly marked as a prosodic boundary is a less problematic mistake than if a location with \emph{no} phrase boundary were marked as such. \begin{table}[ht] \centering \caption{Results on the entire audio files, including boundaries at the ends of sentences, and with a wav2vec2 model fine-tuned a) only on prosodic boundaries (threshold 0.5) or b) also on intermediate boundaries (threshold 0.75). \enquote{Pur} and \enquote{Cov} refer to purity and coverage, respectively.} \label{tab:textresults} \begin{tabular}{| l | *{6}{S[round-mode=places,table-format=3.2, round-precision=2,detect-weight]} | c c c c |} \hline {fine-tuning data} & {Pur} & {Cov} & {Acc} & {P} & {R} & {F1} & {tp} & {fp} & {fn} & {tn} \\ \hline a) prosodic b. only & 93.8225 & 92.9418 & 94.8698 & 88.223 & 88.9045 & 88.5624 & 1266 & 169 & 158 & 4781 \\ b) pros. \& interm. b. & 92.2868 & 94.3863 & 94.7757 & 90.2583 & 85.8848 & 88.0173 & 1223 & 132 & 201 & 4818 \\ \hline \end{tabular} \end{table} \subsection{Text-based evaluation} \label{sec:textresults} In order to compare the results of the wav2vec2 model with the text-based T5 model, we convert the wav2vec2 predictions to the same format -- sequences of words, separated by sentence, with marked prosodic boundaries. Thus, for this second evaluation, we only consider the predicted phrase boundaries which were matched to word boundaries within the sentence during post-processing. Peaks in the wav2vec2 output which were more than 100\,ms from the nearest end of a word are simply ignored. However, the number of such cases is minimal (3 out of 1435 predicted boundaries at threshold 0.5). \begin{figure} \centering \includegraphics[width=0.6\textwidth]{img/fuzzy02_prec-rec_text_seeds101-105.eps} \caption{Results evaluated on text -- precision and recall of the text-based T5 model, the audio-based wav2vec2 model, and their combinations (wav2vec2 fine-tuned only on prosodic boundaries).} \label{fig:prec_vs_rec} \end{figure} The text-based results are illustrated in Figure~\ref{fig:prec_vs_rec}, which compares the pre\-ci\-sion-re\-call curve of the wav2vec2 model (fine-tuned with prosodic boundaries only) with the results of the T5 model. The latter are shown only as a single point, as there is no threshold to change -- the T5 model directly outputs a sequence of words and prosodic boundaries.
The graph additionally shows the precision-recall curves for two possible combinations of the two models: \begin{enumerate} \item[a)] prosodic boundaries are marked only where \emph{both} the T5 model and the wav2vec2 model predict them (\enquote{T5 AND wav2vec}), \item[b)] prosodic boundaries are marked where \emph{at least one} of the models predicts them (\enquote{T5 OR wav2vec}). \end{enumerate} Finally, the numeric results are presented in Tables~\ref{tab:textresults_perspeaker} and~\ref{tab:textresults_summary}: Table~\ref{tab:textresults_perspeaker} lists the individual results of the T5 model and one wav2vec2 model (fine-tuned only on prosodic boundaries) for separate speakers. Table~\ref{tab:textresults_summary} then shows the overall results for both wav2vec2 models and also for the combinations of T5 and wav2vec2. We can see that in terms of accuracy and F1, the listed \enquote{T5 OR wav2vec} variants score higher than the individual models alone. However, this comes at the cost of slightly reduced precision. Conversely, the \enquote{T5 AND wav2vec} variants achieve a very high precision of $\sim$94\,\%, but with a relatively low recall of $\sim$60\,\%. Which of these alternatives is best depends on the specific application. \begin{table}[ht] \centering \caption{Results on individual speakers, wav2vec2 fine-tuned only on prosodic boundaries and with a threshold of 0.5.} \label{tab:textresults_perspeaker} \begin{tabular}{| l | c | c | *{4}{S[round-mode=places,table-format=3.2, round-precision=2,detect-weight]} | c c c c |} \hline {model} & {speaker} & {\# sent.} & {Acc} & {P} & {R} & {F1} & {tp} & {fp} & {fn} & {tn} \\ \hline T5 & NRS01 & 36 & 91.4221 & 85.2459 & 64.1975 & 73.2394 & 52 & 9 & 29 & 353 \\ & NRS02 & 60 & 94.2424 & 82.0225 & 76.8421 & 79.3478 & 73 & 16 & 22 & 549 \\ & NRS03 & 38 & 95.7447 & 86.3636 & 83.8235 & 85.0746 & 57 & 9 & 11 & 393 \\ & NRS04 & 31 & 92.757 & 84.9057 & 66.1765 & 74.3802 & 45 & 8 & 23 & 352 \\ & NRS05 & 48 & 92.559 & 87.6712 & 66.6667 & 75.7396 & 64 & 9 & 32 & 446 \\ & NRS06 & 45 & 89.2857 & 92.5373 & 54.386 & 68.5083 & 62 & 5 & 52 & 413 \\ & NRS07 & 33 & 94.6565 & 87.037 & 77.0492 & 81.7391 & 47 & 7 & 14 & 325 \\ & NRS08 & 37 & 93.2755 & 91.0714 & 66.2338 & 76.6917 & 51 & 5 & 26 & 379 \\ & NRS09 & 50 & 94.3978 & 80 & 74.7253 & 77.2727 & 68 & 17 & 23 & 606 \\ & NRS10 & 34 & 94.5736 & 90 & 73.7705 & 81.0811 & 45 & 5 & 16 & 321 \\ & NRS11 & 35 & 94.697 & 86 & 75.4386 & 80.3738 & 43 & 7 & 14 & 332 \\ & NRS12 & 39 & 93.9597 & 85.7143 & 75 & 80 & 54 & 9 & 18 & 366 \\ \cline{2-11} & all & 486 & 93.4376 & 86.1799 & 70.2444 & 77.4005 & 661 & 106 & 280 & 4835 \\ \hline wav2vec2 & NRS01 & 36 & 92.5508 & 81.5789 & 76.5432 & 78.9809 & 62 & 14 & 19 & 348 \\ & NRS02 & 60 & 95.6061 & 82.3529 & 88.4211 & 85.2792 & 84 & 18 & 11 & 547 \\ & NRS03 & 38 & 94.6809 & 77.9221 & 88.2353 & 82.7586 & 60 & 17 & 8 & 385 \\ & NRS04 & 31 & 94.3925 & 84.375 & 79.4118 & 81.8182 & 54 & 10 & 14 & 350 \\ & NRS05 & 48 & 92.0145 & 80.2326 & 71.875 & 75.8242 & 69 & 17 & 27 & 438 \\ & NRS06 & 45 & 96.2406 & 92.7273 & 89.4737 & 91.0714 & 102 & 8 & 12 & 410 \\ & NRS07 & 33 & 95.6743 & 85.4839 & 86.8852 & 86.1789 & 53 & 9 & 8 & 323 \\ & NRS08 & 37 & 93.4924 & 84.058 & 75.3247 & 79.4521 & 58 & 11 & 19 & 373 \\ & NRS09 & 50 & 94.1176 & 76.3441 & 78.022 & 77.1739 & 71 & 22 & 20 & 601 \\ & NRS10 & 34 & 95.0904 & 83.871 & 85.2459 & 84.5528 & 52 & 10 & 9 & 316 \\ & NRS11 & 35 & 95.202 & 76.3889 & 96.4912 & 85.2713 & 55 & 17 & 2 & 322 \\ & NRS12 & 39 & 94.6309 & 82.4324 & 84.7222 & 83.5616 & 61 &
13 & 11 & 362 \\ \cline{2-11} & all & 486 & 94.4577 & 82.471 & 82.9968 & 82.7331 & 781 & 166 & 160 & 4775 \\ \hline \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Results on the entire NRS data, using a leave-one-out-approach. \enquote{T5 AND wav2vec2} places prosodic boundaries only where \emph{both} models predicted them. \enquote{T5 OR wav2vec2} places them where \emph{at least one} of the models did.} \label{tab:textresults_summary} \begin{tabular}{| l H H | *{4}{S[round-mode=places,table-format=3.2, round-precision=2,detect-weight]} | c c c c |} \hline {model} & {speaker} & {\# sent.} & {Acc} & {P} & {R} & {F1} & {tp} & {fp} & {fn} & {tn} \\ \hline T5 Model & all & 486 & 93.4376 & 86.1799 & 70.2444 & 77.4005 & 661 & 106 & 280 & 4835 \\ wav2vec2 - f.-t. on pros. b. only & all & 486 & 94.4577 & 82.471 & 82.9968 & 82.7331 & 781 & 166 & 160 & 4775 \\ wav2vec2 - fine-tuned with int. b. & all & 486 & 94.3557 & 85.1211 & 78.4272 & 81.6372 & 738 & 129 & 203 & 4812 \\ \hline T5 AND wav2vec2 (pros. b.) & all & 486 & 93.3526 & 93.3754 & 62.9118 & 75.1746 & 592 & 42 & 349 & 4899 \\ T5 OR wav2vec2 (pros. b.) & all & 486 & 94.5427 & 78.7037 & \B 90.3294 & \B 84.1168 & 850 & 230 & 91 & 4711 \\ T5 AND wav2vec2 (with int. b.) & all & 486 & 93.1486 & \B 94.3894 & 60.7864 & 73.9496 & 572 & 34 & 369 & 4907 \\ T5 OR wav2vec2 (with int. b.) & all & 486 & \B 94.6447 & 80.4475 & 87.8852 & 84.002 & 827 & 201 & 114 & 4740 \\ \hline \end{tabular} \end{table} One may also notice that the precision and recall values here are slightly lower than those in Table~\ref{tab:textresults}. This is because of the exclusion of end-of-sentence boundaries. These are naturally much more pronounced in speech, in terms of both intonation and pause, and so the wav2vec2 model can detect them with much greater accuracy than the within-sentence boundaries. \section{Discussion} We have shown that the results achieved by the wav2vec2 model surpass those of the text-based T5 model. However, it is important to note that this is still a somewhat \enquote{unfair} comparison: The locations of phrase breaks are partly subjective and different speakers may place them differently. However, the ground truth labels (provided by phonetic experts) used in our experiments were based on the spoken sentences and therefore likely match the specific phrasing of the speaker. Thus, the predictions made by the T5 model may not necessarily be \emph{less correct}, they simply do not match the specific speaker. Another thing to consider is the relatively small amount of data which was available for fine-tuning and testing -- approximately 42 minutes of speech or 486 sentences. Although the wav2vec2 framework is known for being able to achieve good results with small amounts of data, and the results achieved here do indeed look very promising, it is likely that the performance could be improved further if more data were available. This is also suggested by Figure~\ref{fig:trainset_size_comparison}, which compares wav2vec2 models fine-tuned with different amounts of training data: models fine-tuned using only one, three or six of the 12 speakers show a lower precision and recall, indicating that \emph{increasing} the amount of training data could lead to further improvement. 
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{img/fuzzy_02_prec-rec_1_vs_3_vs_6_vs_11_spks_text_better.eps} \caption{Precision-recall curve for wav2vec2 models fine-tuned using different amounts of training data, evaluated on the text sentences and with the text-based results shown for comparison.} \label{fig:trainset_size_comparison} \end{figure} \section{Conclusion and Future Work} In this paper, we explored the use of the wav2vec 2.0 framework for the detection of prosodic boundaries in speech. We have found that the relatively straightforward and easy to use wav2vec 2.0 approach works surprisingly well: it does not require text annotation or knowledge of word boundaries (these were only used for evaluation), nor a handcrafted selection of features, yet it achieves very good results, surpassing the text-based T5 model which was used for comparison. Still, this was, in its essence, only an initial experiment. In the future, we would like to test the approach on a larger amount of more varied data and also explore the possibilities of combining the audio and text modalities within a single model, rather than merely combining the outputs. \subsubsection{Acknowledgements} This research was supported by the Czech Science Foundation (GA CR), project No. GA21-14758S, and by the grant of the University of West Bohemia, project No. SGS-2022-017. Computational resources were supplied by the project ``e-Infrastruktura CZ'' (e-INFRA CZ LM2018140) supported by the Ministry of Education, Youth and Sports of the Czech Republic. \bibliographystyle{splncs04}
\section{Introduction} We study simultaneous tilings of $\mathbb{R}^n$ by two actions which, on their face, do not have any relationship. The first action is via translation by a full rank lattice $\Gamma \subset \mathbb{R}^n$. There always exists a set $V \subset \mathbb{R}^n$ of finite measure such that $\{V + \gamma: \gamma \in \Gamma\}$ is a measurable tiling of $\mathbb{R}^n$. The second action is via multiplication by integer powers of an invertible matrix $A$. If $\left|\det A\right| \not= 1$, then there exists a set $U$ of finite measure such that $\{A^j(U): j\in \mathbb{Z}\}$ is a measurable tiling of $\mathbb{R}^n$, see \cite{LarSchSpeTay06}. The question solved in this paper has been explicitly posed by Wang \cite{Wan02, IonWan06} and the second author \cite{Spe03}, although it has been studied earlier in the late 1990s \cite{DaiLarSpe97, Spe97}. \begin{question}\cite{IonWan06, Spe97} For which pairs $(A, \Gamma)$ does there exist a measurable set $W \subset \mathbb{R}^n$ such that \begin{equation}\label{dilationTile} \{A^j(W): j\in \mathbb{Z}\} \,\, {\text{is a measurable tiling of }} \mathbb{R}^n \end{equation} and \begin{equation}\label{translationTile} \{W + \gamma: \gamma \in \Gamma\} \,\, {\text{is a measurable tiling of }} \mathbb{R}^n? \end{equation} A set $W$ that satisfies \eqref{dilationTile} and \eqref{translationTile} is called an $(A, \Gamma)$ {\emph {wavelet set}}. \end{question} \subsection{Motivation} Our motivation for studying this problem is three-fold. First, whenever a set $W \subset \mathbb{R}^n$ satisfies \eqref{dilationTile} and \eqref{translationTile}, the indicator function $\mathbf 1_W$ is the Fourier transform of an orthogonal wavelet \cite{DL98, HW96, ILP98}. That is, the inverse Fourier transform $\psi = \check\mathbf 1_W$ generates a wavelet system \[ \left\{\left |\det B \right|^{j/2} \psi(B^j x + k): j\in \mathbb{Z}, k \in \Gamma^*\right\}, \] which forms an orthogonal basis for $L^2(\mathbb{R}^n)$, where $B$ is the transpose of $A$ and $\Gamma^*$ is the dual lattice of $\Gamma$. In dimension 2 and higher, it is an open problem to determine for which pairs $\left(B, \Gamma^*\right)$, there exists a $(B, \Gamma^*)$ orthonormal wavelet, see \cite{BowRze, Spe03, Wan02}. The current paper provides many new examples of pairs for which such wavelets exist. For all cases currently known, when there exists an orthonormal wavelet, there also exists an $(A, \Gamma)$ wavelet set. The authors conjecture that this is true in general: \begin{conjecture} For each pair $(B, \Gamma^*)$ such that there exists a $(B, \Gamma^*)$ orthonormal wavelet, there exists an $(A, \Gamma)$ wavelet set, where $B$ is the transpose of $A$ and $\Gamma^*$ is the dual lattice of $\Gamma$. \end{conjecture} Evidence in favor of this conjecture includes \cite{ChuShi00}, where it is shown in dimension 1 that if $a^j$ is irrational for all $j \in \mathbb{Z} \setminus \{0\}$, then the {\emph {only}} $(a,\mathbb{Z})$ orthonormal wavelets that exist are those that are supported on wavelet sets. A higher dimensional extension of this result was shown by the first author \cite{Bow3}. An even stronger open problem is to determine whether the support of every $(B, \Gamma^*)$ wavelet contains an $(A, \Gamma)$ wavelet set. This stronger conjecture is open even in the classical dyadic case: dimension $n =1$, dilation $A=B=2$, and translations along integers. This question was originally posed by Larson in late 1990's although its official formulation appeared only in \cite{Lar}. 
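As a concrete illustration of the Question, in dimension $n=1$ with dilation $A=2$ and $\Gamma=\mathbb{Z}$, the classical Shannon-type set $W=[-1,-\tfrac12)\cup[\tfrac12,1)$ satisfies both \eqref{dilationTile} and \eqref{translationTile}, and is therefore a $(2,\mathbb{Z})$ wavelet set. The following minimal Python sketch (ours, an illustration only; it plays no role in the proofs) checks the two tiling conditions at random sample points.
\begin{verbatim}
import numpy as np

# Shannon-type set W = [-1, -1/2) U [1/2, 1) in dimension one.
def in_W(x):
    return ((x >= -1.0) & (x < -0.5)) | ((x >= 0.5) & (x < 1.0))

rng = np.random.default_rng(0)
x = rng.uniform(-8.0, 8.0, 100000)
x = x[np.abs(x) > 1e-6]      # the origin is a null set and is never covered

# Dilation tiling: for a.e. x there is exactly one j with 2^j x in W.
dil = sum(in_W(2.0**j * x) for j in range(-40, 41))

# Translation tiling: for a.e. x there is exactly one k with x + k in W.
tra = sum(in_W(x + k) for k in range(-10, 11))

print(np.all(dil == 1), np.all(tra == 1))   # expected: True True
\end{verbatim}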
It is shown in \cite{RzeSpe02} that this stronger conjecture is true in the one dimensional dyadic case under the additional assumption that the support $E$ of $\hat \psi$ satisfies $\sum_{j\in\mathbb{Z}} \mathbf 1_{E} (2^j x)\le 2$ and $\sum_{k\in \mathbb{Z}} \mathbf 1_E(x + k) \le 2$. Moreover, it is also known that Larson's problem has an affirmative answer for MRA wavelets \cite{BowRze}. Our second motivation comes from general tiling questions. For $\alpha \in SO(n)$, let $\Gamma_{\alpha} = \alpha \mathbb{Z}^n$ be a rotation of the integer lattice $\mathbb{Z}^n$. The (measurable) Steinhaus tiling problem is to determine whether there is a single Lebesgue measurable set $E \subset \mathbb{R}^n$ such that $E$ is a fundamental region for each $\mathbb{R}^n/\Gamma_{\alpha}$. This problem was solved in the negative in dimensions 3 and higher by Kolountzakis and Wolff \cite{KW}, but remains open in dimension 2. However, the existence of non-measurable Steinhaus tilings in $\mathbb{R}^2$ was shown by Jackson and Mauldin \cite{JM, JM2}. The commonality in the two problems is that we have multiple actions on $\mathbb{R}^n$ for which it is easy to see that there are measurable tilings when considered separately, yet it is not at all clear when and whether there is a single measurable set which tiles by both actions simultaneously. Our third motivation comes from attempts to solve the wavelet set existence problem itself. We found while working on the problem that in one approach we needed to estimate the cardinality of a ball centered at zero intersected with the image of a lattice: $\#\left|{\mathbf {B}}(0, 1) \cap A^j(\Gamma)\right|$. In another approach, we needed to estimate the subspace measure of a lattice subspace intersected with an image of the ball: $m_d(V \cap A^j({\mathbf {B}}(0, 1))$, where $V$ is the span of some subset of $\Gamma$. The relationship between these two quantities has a long history, and we were intrigued by the connections between our attempts at a solution to the wavelet set existence problem and these well-studied objects. \subsection{Prior Results} Larson, Schulz, Taylor, and the second author \cite{LarSchSpeTay06} have shown that there exists a set of finite measure $W$ that tiles by dilations \eqref{dilationTile} if and only if $\left|\det A\right| \not= 1$. An immediate corollary of this fact is that no wavelet sets exist when the dilation has determinant 1. An interesting counterpoint to that statement was given by the first author and Lemvig \cite{BowLem}, who showed that whenever $\left|\det A\right| \not= 1$, then for almost every lattice $\Gamma$ there is an $(A, \Gamma)$ wavelet set. When $A$ is expansive, that is, all eigenvalues are bigger than one in modulus, Dai, Larson and the second author \cite{DaiLarSpe97} showed that $(A, \Gamma)$ wavelet sets exist for all lattices $\Gamma$. The second author \cite{Spe03} provided the first necessary conditions and sufficient conditions on the existence of wavelet sets when the dilation $A$ has eigenvalues both bigger than and less than 1 in modulus. Ionascu and Wang \cite[Theorem 1.3]{IonWan06} extended these results to characterize pairs $(A, \Gamma)$ for which wavelet sets exist in the 2-dimensional case. We reformulate their result as follows. \begin{theorem}\label{iw_intro} Let $A$ be $2\times 2$ matrix with $\left |\det A \right|>1$ and let $\Gamma$ be a full rank lattice in $\mathbb{R}^2$. Let $\lambda_1$ and $\lambda_2$ be the eigenvalues of $A$ such that $|\lambda_1| \ge |\lambda_2|$. 
There exists an $(A,\Gamma)$ wavelet set if and only if \begin{enumerate}[(i)] \item $|\lambda_2| \ge 1$, or \item $|\lambda_2|<1$ and $ \ker (A- \lambda_2 \mathbf I) \cap \Gamma = \{0\}. $ \end{enumerate} \end{theorem} The characterizing condition (ii) in Theorem \ref{iw_intro} has several possible restatements in higher dimensions when the smallest eigenvalue of $A$ is less than one in modulus. For example, these four statements are all equivalent in dimension $n=2$ when $\left|\det A\right|>1$. \begin{enumerate} \item\label{cone} for every $R > 0$, $\liminf_{j \to \infty} \# | A^{-j}(\mathbf B(0,R)) \cap \Gamma | = 1$, \item\label{ctwo} for every $R > 0$, $\liminf_{j \to \infty} \# |A^{-j}(\mathbf B(0,R)) \cap \Gamma |< \infty$, \item\label{cthree} for every sublattice $\Lambda\subset \Gamma$, if $V = \operatorname{span}(\Lambda)$ and $d=\dim V$, then \[ \liminf_{j\to \infty} m_d(A^{-j}(\mathbf B(0,1)) \cap V) <\infty, \] where $m_d$ denotes the Lebesgue measure on the subspace $V$, \item\label{cfour} if $V$ is the space spanned by the eigenvectors associated with eigenvalues less than one in modulus, then $V \cap \Gamma = \{0\}$. \end{enumerate} It is relatively easy to see the sequence of implications \eqref{cone} $\implies$ \eqref{ctwo} $\implies$ \eqref{cthree} $\implies$ \eqref{cfour} for all dimensions $n$. The condition \eqref{cone} is a known sufficient condition for the existence of wavelet sets, see \cite[Theorem 2.5]{IonWan06}. On the other hand, \eqref{cfour} is a necessary condition for the existence of wavelet sets in light of Theorem \ref{iw_intro}, but as we will see later, it is not sufficient in dimensions $n\ge 3$, see Example \ref{obvious}. Conditions \eqref{ctwo} and \eqref{cthree} are not equivalent; see \cite{Wei04} for an example of the types of theorems in this area and further references in ergodic theory and \cite{Mos10} for Diophantine approximation results that are written in notation and language closer to that of this paper. See also Section \ref{examples} in this paper for related, explicit examples. {\bf {None}} of these natural extensions are equivalent to the existence of wavelet sets in higher dimensions. \subsection{Statement of Result} Our main result answers the wavelet set existence problem by giving a necessary and sufficient condition for the existence of wavelet sets. \begin{theorem}\label{mt} Let $A$ be an $n\times n$ matrix with $\left |\det A \right|>1$. Let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice. Then, there exists an $(A,\Gamma)$ wavelet set if and only if \begin{equation}\label{char} \sum_{j=1}^\infty \frac{1}{\# \left|A^{-j}(\mathbf B(0,1)) \cap \Gamma \right |} =\infty, \end{equation} \end{theorem} One appealing characteristic of Theorem \ref{mt} is that the statement is not split into cases where the dilation $A$ is expansive versus where it is not. For example, when $A$ is expansive, it is known that there exists $J \ge 1$ such that $A^{-j}\left({\mathbf {B}}(0, 1)\right) \subset {\mathbf {B}}(0, 1)$ for all $j \ge J$. Therefore, we recover that wavelet sets exist when $A$ is expansive as a corollary. As another corollary of our characterization, we deduce that condition \eqref{ctwo} is sufficient and condition \eqref{cthree} is necessary for the existence of a wavelet set (see Theorem \ref{nexist}), respectively. The paper is organized as follows. In Section \ref{sufficiency} we show that condition \eqref{char} is sufficient for the existence of a wavelet set. 
In Section \ref{necessity} we show the same condition \eqref{char} is also necessary. In Section \ref{oldnecessity}, we show that a weaker, but more easily checked, condition is a necessary condition for the existence of wavelet sets. In Section \ref{idk}, we examine applications of Theorem \ref{mt}. The first application is Theorem \ref{eigenbiggerone}, where we show the existence of $(A, \Gamma)$ wavelet sets for any lattice $\Gamma$ if all eigenvalues of $A$ are greater than or equal to one in modulus using a theorem due to Margulis \cite{Margulis71}. The second application is a generalization of Theorem \ref{iw_intro} for dilations whose product of two smallest eigenvalues in absolute value is $\ge 1$. The third application is the existence of $(A,\mathbb{Z}^n)$ wavelet sets for matrices $A$ with integer entries. Section \ref{examples} gives examples which illustrate the main theorem. In particular, we provide an example where wavelet sets do not exist even though condition \eqref{cthree} holds. \section{Definitions and Preliminary Results} Throughout this paper, we assume that $A$ is an invertible $n \times n$ matrix and $\Gamma$ is a full rank lattice in $\mathbb{R}^n$. That is, $\Gamma$ is the image of the integer lattice under an invertible linear transform. \begin{definition} Let $M\in \mathbb{N}$. We say a measurable set $U\subset \mathbb{R}^n$ {\it packs $M$-redundantly} by $A$ dilations if \[ \sum_{j\in\mathbb{Z}} \mathbf 1_U(A^jx) \le M \qquad\text{for a.e. }x\in \mathbb{R}^n. \] In the special case that $M$ can be chosen to be 1, we say that $U$ {\it packs} by $A$ dilations. The set $U$ {\it covers} by $A$ dilations if \[ \sum_{j\in \mathbb{Z}} \mathbf 1_U(A^jx) \ge 1 \qquad\text{for a.e. }x\in \mathbb{R}^n. \] The set $U$ {\it tiles} by $A$ dilations if it both packs and covers, in which case we call $\{A^j(U): j\in \mathbb{Z}\}$ a {\it measurable partition} of $\mathbb{R}^n$, or a \emph{(measurable) tiling} of $\mathbb{R}^n$. \end{definition} Similarly, we say a measurable set $V \subset \mathbb{R}^n$ packs $M$-redundantly, packs, covers, or tiles by $\Gamma$ translations if $\sum_{\gamma \in \Gamma} \mathbf 1_V(x + \gamma)$ has the corresponding property. The following is an easy consequence of this definition. \begin{proposition}\label{easyProp} If a measurable set $U$ packs $M$-redundantly by translations, then there exists a partition $(U_m)_{m=1}^M$ of $U$ into measurable sets such that for each $1\le m \le M$, $U_m$ packs by translations. In particular, there is a subset $U^\prime \subset U$ which packs by translations such that $\left |U^\prime \right| = \frac 1M \left|U\right |$. \end{proposition} Dilations $A$ for which measurable tilings exist were characterized by Larson, Schulz, Taylor, and the second author \cite{LarSchSpeTay06}. \begin{theorem}\label{LSST} Let $A$ be an invertible matrix. \begin{enumerate}[(i)] \item There exists a set that tiles by dilations if and only if $A$ is not orthogonal. \item There exists a set of finite measure that tiles by dilations if and only if $\left|\det A\right| \not= 1$. \item There exists a bounded set that tiles by dilations if and only if all (real or complex) eigenvalues of $A$ or $A^{-1}$ have modulus larger than 1. \end{enumerate} \end{theorem} Given one measurable tiling by dilations (or translations), all such sets that tile can be constructed in the following manner. Let $U$ be a set that tiles by $A$ dilations. Let $\{U_j: j\in \mathbb{Z}\}$ be a measurable partition of $U$. 
Then, $\bigcup_{j\in \mathbb{Z}} A^j(U_j)$ tiles by $A$ dilations. Moreover, given a set $T$ that tiles by $A$ dilations, define $T_j = A^j(U) \cap T$ and $U_j = A^{-j}(T_j)$. Then $\{U_j: j\in \mathbb{Z}\}$ is a measurable partition of $U$ such that $\bigcup_{j\in \mathbb{Z}} A^j(U_j) = T.$ Similar results hold for sets that tile by translations. Given a set $W$ that packs by $A$ dilations, we define the dilation equivalency mapping $d=d_W$ onto $W$ by \begin{equation}\label{dem} d(V) = \bigcup_{j\in\mathbb{Z}} \left(A^j(V) \cap W\right), \qquad\text{where }V \subset \mathbb{R}^n. \end{equation} Then we have the following useful result. \begin{proposition}\label{convergeD} Let $d$ be the equivalency mapping of a set $W$ that packs by dilations. Suppose that $(U_k)_{k\in \mathbb{N}}$ is a sequence of sets, which pack by dilations and by translations, and converges in the symmetric difference metric to $U$. The following holds: \begin{enumerate}[(i)] \item The set $U$ packs by dilations and by translations. \item If \begin{equation}\label{cd1} \sum_k |d(U_k \triangle U_{k + 1})| < \infty, \end{equation} then $d(U_k) \to d(U)$ in the symmetric difference metric as $k\to \infty$. \item If $V \subset W$ is such that \begin{equation}\label{cd2} \sum_k |d(U_k \triangle U_{k + 1}) \cap V| < \infty, \end{equation} then $d(U_k) \cap V \to d(U) \cap V$ in the symmetric difference metric as $k\to \infty$. \end{enumerate} \end{proposition} \begin{proof} Condition (i) is an exercise and can be found in \cite[Lemma 3.1]{Spe03}. To prove (ii), note that \[ U\triangle U_k \subset \bigcup_{j = k}^\infty \bigl(U_{j+ 1} \triangle U_j\bigr). \] By \eqref{cd1} it follows that $d(U \triangle U_k) \to \emptyset$ as $k \to \infty$. Since $d(U) \triangle d(U_k) \subset d(U \triangle U_k)$, the conclusion (ii) follows. Condition (iii) follows by noting that $d_V(U) = d_W(U) \cap V$ and applying (ii). \end{proof} Finally, we recall a standard fact on wavelet sets \cite[Theorem 2.2]{IonWan06}, which is a consequence of the Cantor-Schr\"oder-Bernstein theorem. To show the existence of a wavelet set, it suffices to construct a set which tiles by dilations and packs by translations. \begin{theorem}\label{csb} Let $A$ be an invertible matrix and let $\Gamma$ be a full rank lattice in $\mathbb{R}^n$. Suppose there exists a measurable set $U \subset \mathbb{R}^n$ that tiles by $A$ dilations and packs by $\Gamma$ translations. Then, there exists $(A,\Gamma)$ wavelet set. \end{theorem} \section{Proof of sufficiency for existence of wavelet sets} \label{sufficiency} The goal of this section is to prove the sufficiency part of the main theorem. \begin{theorem}\label{mainTheorem} Let $A$ be an invertible $n \times n$ matrix and let $\Gamma$ be a full rank lattice. If for some $r>0$ \begin{equation}\label{suf0} \left | \det A \right | > 1 \qquad\text{and}\qquad \sum_{j=1}^\infty \frac{1}{\#\left |A^{-j}(\mathbf B(0,r)) \cap \Gamma \right|} =\infty, \end{equation} or \begin{equation}\label{suff} \left | \det A \right | < 1 \qquad\text{and}\qquad \sum_{j=1}^\infty \frac{1}{\#\left | A^{j}(\mathbf B(0,r)) \cap \Gamma \right|} =\infty, \end{equation} then there exists an $(A, \Gamma)$ wavelet set. \end{theorem} Note that the second part of Theorem \ref{mainTheorem} follows from the first part by replacing $A$ with $A^{-1}$. Hence, from now on we shall assume that $\left | \det A \right |>1$ and we prove the existence of a wavelet set under the assumption \eqref{suf0}. We start with an elementary lemma. 
\begin{lemma}\label{as} Let $A$ be an invertible matrix. Let $\Gamma$ be a full rank lattice in $\mathbb{R}^n$. The following are equivalent: \begin{enumerate}[(i)] \item for every $r > 0$, there exists a sequence $(m_j)_{j\in \mathbb{N}}$ such that $\sum 1/m_j=\infty$ and a set $A^{-j}(\mathbf B(0,r))$ packs $m_j$ redundantly via $\Gamma$ translations for any $j\in\mathbb{N}$, \item $\sum_{j=1}^\infty 1/ \# | A^{-j} (\mathbf B(0,r)) \cap \Gamma| =\infty$ for every $r > 0$, \item $\sum_{j=1}^\infty 1/ \# | A^{-j} (\mathbf B(0,r)) \cap \Gamma| =\infty$ for some $r > 0$. \end{enumerate} \end{lemma} \begin{proof} The condition (i) means that for any $r>0$ there exists a sequence $(m_j)$ such that $\sum 1/m_j=\infty$ and \[ \sum_{\gamma \in \Gamma} \mathbf 1_{A^{-j} (\mathbf B(0,2r))}(x+\gamma) \le m_j \qquad\text{for a.e. }x\in \mathbb{R}^n, \ j\in\mathbb{N}. \] Since \[ A^{-j}({\mathbf {B}}(0, r)) \subset A^{-j}({\mathbf {B}}(0, 2r))+ \gamma\qquad\text{for all }\gamma \in A^{-j}({\mathbf {B}}(0, r)) \cap \Gamma, \] it follows that $\#\left|A^{-j}({\mathbf {B}}(0, r)) \cap \Gamma\right| \le m_j$ and \[ \sum \frac {1}{\#\left|A^{-j}({\mathbf {B}}(0, r)) \cap \Gamma\right|} \ge \sum 1/m_j = \infty. \] The implication (ii) $\implies$ (iii) is trivial. Finally, assume that (iii) holds for some $r_0>0$. Observe that \begin{equation}\label{as5} \sum_{\gamma \in \Gamma} \mathbf 1_{A^{-j} (\mathbf B(0,r_0/2))} (x+\gamma) \le \# | A^{-j} (\mathbf B(0,r_0)) \cap \Gamma| \qquad\text{for all }x\in \mathbb{R}^n. \end{equation} Indeed, if there exist $k$ distinct elements $\gamma_1,\ldots,\gamma_k \in \Gamma$, such that $x+\gamma_1,\ldots, x+\gamma_k \in A^{-j}(\mathbf B(0,r_0/2))$, then we have $k$ distinct elements $\gamma_i-\gamma_1$, $i=1,\ldots,k$, belonging to $A^{-j}(\mathbf B(0,r_0))$. Take any $r>0$. Then, there exists $M>0$ such that a ball of radius $r$ can be covered by a union of $M$ balls of radius $r_0/2$. Hence, \eqref{as5} yields \[ \sum_{\gamma \in \Gamma} \mathbf 1_{A^{-j} (\mathbf B(0,r))}(x+\gamma) \le M \# | A^{-j} (\mathbf B(0,r_0)) \cap \Gamma| \qquad\text{for all }x\in \mathbb{R}^n. \] Letting $m_j =M \# | A^{-j} (\mathbf B(0,r_0)) \cap \Gamma|$, $j\in\mathbb{N}$, implies (i) and completes the proof of the lemma. \end{proof} We also need a simple lemma about sequences. \begin{lemma}\label{seq} Let $(a_i)_{i\in \mathbb{N}}$ be a sequence of numbers in $[0,1]$. Let $(b_i)_{i\in \mathbb{N}}$ be a sequence defined recursively by $b_1=1$, $b_{i+1}=(1-a_{i+1}) b_i$, $i\in \mathbb{N}$. Then, $\sum a_{i+1} b_{i} <\infty$. \end{lemma} \begin{proof} Define a sequence $(c_i)_{i\in \mathbb{N}}$ by \[ c_1=1, \qquad c_{i}=\exp\bigg(-\sum_{j=2}^{i} a_i \bigg), \ i\ge 2. \] Using $1-x \le \exp(-x)$ for all $x\in \mathbb{R}$ we have $b_i \le c_i$ for all $i\in \mathbb{N}$. Applying the Mean Value Theorem for $\exp(-x)$ yields \[ a_{i+1}c_{i+1} = a_{i+1} \exp\bigg(-\sum_{j=2}^{i+1} a_i \bigg) \le \exp\bigg(-\sum_{j=2}^{i} a_i \bigg) - \exp\bigg(-\sum_{j=2}^{i+1} a_i \bigg) . \] Telescoping yields $\sum a_{i+1}c_{i+1} <\infty$. Since $c_i \le e c_{i+1}$, we deduce that $\sum a_{i+1} b_i \le \sum a_{i+1} c_{i} <\infty$. \end{proof} Given a set $U$ that packs by $\Gamma$ translations, we define the translation equivalency mapping $\tau_U$ onto $U$ by \begin{equation}\label{tau} \tau_U(V) = \bigcup_{\gamma \in \Gamma} (\gamma + V) \cap U, \qquad\text{where }V \subset \mathbb{R}^n. \end{equation} The following lemma plays the key role in the proof of Theorem \ref{mainTheorem}. 
\begin{lemma}\label{easyCase} Let $A$ be an invertible $n \times n$ matrix and $\Gamma$ a full-rank lattice. Let $W$ be a set that packs by dilations of $A$ and let $U\subset W$ be a finite measure subset. Suppose there exists a sequence $(m_j)_{j\in\mathbb{N}}$ of positive integers such that $\sum_{j\in\mathbb{N}} 1/m_j=\infty$ and $A^{-j}(U)$ packs $m_j$ redundantly via translations for every $j\in\mathbb{N}$. Then, for every $\epsilon > 0$, there exists a set $V$ with the following properties: \begin{enumerate}[(i)] \item $V$ packs via dilations, \item $V$ packs via translations, \item $d(V) = U$, where $d$ is the dilation equivalency mapping onto $W$ given by \eqref{dem}, \item $|V| < \epsilon$. \end{enumerate} \end{lemma} \begin{proof} Let $J\in \mathbb{N}$ be such that \[ \sum_{j=J}^\infty | A^{-j}(U) | < \epsilon. \] Since the set $V$ is going to be chosen as a subset of $\bigcup_{j\ge J} A^{-j} (U)$, property (iv) will be satisfied by $V$. We shall construct inductively a sequence of sets $(U_j)_{j\ge J}$ such that for all $j\ge J$ we have (letting $U_{J - 1} = \emptyset$): \begin{enumerate}[(a)] \item $U_j$ packs by translations and dilations, \item letting $W_j:=U \setminus d(U_j)$ we have $|W_{j}| \le (1-c/m_j)|W_{j-1}|$, where $c=1-\left | \det A \right |^{-1}$, \item $U_{j} \setminus U_{j-1} \subset A^{-j}(W_{j-1})$, \item $U_{j-1} \setminus U_{j} \subset \tau_{U_{j-1}}(A^{-j}(W_{j-1}))$. \end{enumerate} By Lemma \ref{easyProp} there is a set $U_J \subset A^{-J}(U)$ which packs by translations and $|U_J| \ge \frac 1{m_J} |A^{-J}(U)|$. Let $W_J = U \setminus d(U_J)$. Then, $|W_J| = |U \setminus d(U_J)| \le \frac {m_J - 1}{m_J} |U| \le \left(1 - c/m_J\right) |U|$, as desired. Suppose that $(U_j)_{j = J}^k$ and $(W_j)_{j = J}^k$ have been defined so that items (a)--(d) above hold. Since $A^{-j-1} U$ packs $m_{j+1}$ redundantly by translations and $W_j \subset U$, there exists a set $\tilde U_{j + 1} \subset A^{-j-1} W_j$, which packs by translations (and dilations) such that \begin{equation}\label{wm0} |\tilde U_{j + 1}| = \frac 1{m_{j+1}} |A^{-j-1} W_j|. \end{equation} Let $U_{j + 1} = \tilde U_{j + 1} \cup \left(U_j \setminus \tau_{U_j} (\tilde U_{j + 1})\right)$. We see from construction that $U_{j + 1}$ satisfies (a), (c), and (d). It remains to see that $U_{j + 1}$ satisfies (b). By (c) we have \begin{equation}\label{wm1} U_j \subset \bigcup_{l=J}^j A^{-l}(U). \end{equation} Since $\tilde U_{j+1} \subset A^{-j-1}(U)$, by \eqref{wm0} we have \begin{equation}\label{wm3} |d(\tilde U_{j+1})|= \left | \det A \right |^{j+1}|\tilde U_{j+1}|= \frac {|W_j|}{m_{j+1}}. \end{equation} Hence, \eqref{wm1} and $ |\tau_{U_j}(\tilde U_{j+1})| = |\tilde U_{j+1}|$ yields \begin{equation}\label{wm2} |d(\tau_{U_j}(\tilde U_{j+1}))| \le \left | \det A \right |^{j} |\tilde U_{j+1}| = \frac{|d(\tilde U_{j+1})|}{\left | \det A \right |}. \end{equation} By \eqref{wm2} and noting that $d(\tilde U_{j + 1})$ and $d(U_j)$ are disjoint, we have \begin{align*} |W_{j + 1}| &= |U \setminus d(U_{j + 1})|\\ &\le |U| - \bigg(|d(\tilde U_{j + 1})| + |d(U_j)| - \frac{|d(\tilde U_{j+1})|}{\left | \det A \right |} \bigg) \\ & = |W_j| - c |d(\tilde U_{j + 1})| = |W_j| - c \frac{|W_j|}{m_{j+1}} = \bigg(1- \frac{c}{m_{j+1}}\bigg) |W_j|, \end{align*} where $c=1-\left | \det A \right |^{-1}$. This proves (b). 
By (c) and (d) the constructed sets $(U_j)_{j \ge J}$ satisfy \[ U_{j+1} \triangle U_{j} = (U_{j+1} \setminus U_{j}) \cup (U_{j} \setminus U_{j+1}) \subset \tilde U_{j+1} \cup \tau_{U_j}(\tilde U_{j+1}) \subset A^{-j-1}(W_{j}) \cup \tau_{U_j}(A^{-j-1}(W_j)), \] where $\triangle$ is the symmetric difference of the sets. Thus, \begin{equation}\label{p5} \sum_{j=J}^\infty |U_{j+1} \triangle U_j| \le \sum_{j=J}^\infty 2 \left | \det A \right |^{-j-1} |W_{j}| < \infty. \end{equation} Likewise, by \eqref{wm3} and \eqref{wm2} we have \[ |d(U_{j+1} \triangle U_{j})| \le |d(\tilde U_{j+1})| +|d( \tau_{U_j}(\tilde U_{j+1})) | \le 2 |d(\tilde U_{j+1})| = \frac{2}{m_{j+1}} |W_j|. \] Consequently, by property (b) and Lemma \ref{seq} we have \begin{equation}\label{p6} \sum_{j=J}^\infty |d(U_{j+1} \triangle U_j)| \le 2 \sum_{j=J}^\infty \frac{|W_j|}{m_{j+1}} < \infty. \end{equation} By \eqref{p5}, the sets $U_j$ converge in the symmetric difference metric to a set $V$, which by Proposition \ref{convergeD} and \eqref{p6} packs by translations and by dilations and satisfies $d(V) = \lim_{j \to \infty} d(U_j) = U$. Indeed, the last equality is a consequence of property (b) and the assumption $\sum 1/m_j=\infty$ since \[ |U \setminus d(U_{j})| = |W_{j}| \le \prod_{i=J+1}^j \bigg(1- \frac c{m_i} \bigg ) |W_{J}| \le \exp\bigg(- \sum_{i=J+1}^j \frac c{m_i}\bigg) |W_J| \to 0 \qquad\text{as }j\to \infty. \] This proves the lemma. \end{proof} We are now ready to prove the main result that we use to deduce Theorem \ref{mainTheorem}. \begin{theorem}\label{weakLemvig} Let $A$ be an invertible $n \times n$ matrix and $\Gamma$ a full-rank lattice. Suppose there exists a set $W\subset \mathbb{R}^n$ which tiles $\mathbb{R}^n$ by dilations and a partition $(W_m)_{m\in \mathbb{N}}$ of $W$ such that for each $m\in \mathbb{N}$, there exists a sequence $(m_j)_{j\in\mathbb{N}}$ of positive integers such that $\sum_{j\in\mathbb{N}} 1/m_j=\infty$ and $A^{-j}(W_m)$ packs $m_j$ redundantly via translations for every $j\in\mathbb{N}$. Then, there exists an $(A, \Gamma)$ wavelet set. \end{theorem} \begin{proof} Without loss of generality, we may assume that each set in the partition of $W$ has measure less than 1/2. Let $d$ be the dilation equivalency mapping onto $W$. We shall construct inductively two sequences of sets $(U_k)_{k\in\mathbb{N}}$ and $(\tilde U_k)_{k\in\mathbb{N}}$ such that for all $k\in\mathbb{N}$ we have: \begin{enumerate}[(a)] \item $U_k$ packs by translations and dilations, \item $\tilde U_k$ packs by translations and dilations, \item $U_{k+1}=\tilde U_k \cup (U_k \setminus \tau_{U_k}(\tilde U_k))$, \item $d(\tilde U_k)= \bigl(\bigcup_{j=1}^{k+1} W_j\bigr) \setminus d(U_k)$, \item $|d(\tau_{U_k}(\tilde U_k))|< 1/2^k$. \end{enumerate} By Lemma \ref{easyCase} there exists $U_1$ such that $U_1$ packs by translations and dilations and $d(U_1) = W_1$. Likewise, by Lemma \ref{easyCase} there exists $\tilde U_1$ such that $\tilde U_1$ packs by translations and dilations, $d(\tilde U_1) = W_2$, and $|\tilde U_1|<1/2$. Then, $ |d(\tau_{U_1}(\tilde U_1))| \le |d(U_1)|=|W_1|< 1/2$. Consequently, (a)--(e) hold for $k=1$. For the inductive step, suppose that we have already constructed sets $U_1,\ldots,U_k$ and $\tilde U_1,\ldots, \tilde U_k$ satisfying (a)--(e). Then, the set $U_{k+1}$ is determined by property (c). Since both $U_k$ and $\tilde U_k$ pack by translations, so does $U_{k+1}$.
To see that $U_{k+1}$ packs by dilations, note that both $U_{k}$ and $\tilde U_k$ pack by dilations, and by property (d) \[ d(\tilde U_{k}) \cap d( U_k \setminus \tau_{U_k}(\tilde U_{k})) \subset \bigg(\bigcup_{j=1}^{k+1} W_j \setminus d(U_k) \bigg) \cap d(U_k) = \emptyset. \] This proves (a) for $k+1$. Since $d(U_{k+1}) \subset W_1 \cup \ldots \cup W_{k+1}$ has finite measure, there exists $K \in \mathbb{N}$ such that \begin{equation}\label{q2} \bigg|d \bigg(U_{k+1} \cap \bigcup_{j \ge K} A^{-j}(W) \bigg)\bigg| < 1/2^{k+2}. \end{equation} By Lemma \ref{easyCase} there exists $\tilde U_{k+1}$ such that $\tilde U_{k+1}$ packs by translations and by dilations, \[ d(\tilde U_{k+1}) = \bigcup_{j=1}^{k+2} W_j \setminus d(U_{k+1}). \] and \begin{equation}\label{q4} |\tilde U_{k+1}| < \epsilon: = \left | \det A \right |^{-K}/2^{k+2}. \end{equation} This guarantees that (b) and (d) hold for $k+1$. Hence, it remains to show the estimate (e) for $k+1$. By \eqref{q2}, \eqref{q4}, and the fact that $|\tau_{U_{k+1}}(\tilde U_{k+1})|<\epsilon$, we have \begin{align*} |d(\tau_{U_{k+1}}(\tilde U_{k+1}))| &\le \bigg|d \bigg( \tau_{U_{k+1}}(\tilde U_{k+1}) \cap \bigcup_{j \ge K} A^{-j} W \bigg) \bigg| + \bigg|d \bigg (\tau_{U_{k+1}}(\tilde U_{k+1}) \cap \bigcup_{j < K} A^{-j} W \bigg) \bigg| \\ &\le 1/2^{k+2} + \left | \det A \right |^{K} |\tau_{U_{k+1}}(\tilde U_{k+1})| \le 1/2^{k+1}. \end{align*} Indeed, the second inequality is a consequence of a fact that for any measurable set $V \subset \mathbb{R}^n$ we have \[ \bigg|d \bigg(V \cap \bigcup_{j < K} A^{-j} W \bigg) \bigg| \le \left | \det A \right |^{K} |V|. \] This completes the proof of properties (a)--(e). By (a), (c), and (d) we have \begin{equation}\label{q6} \begin{aligned} d(U_{k+1}) = d(\tilde U_k) \cup d(U_k \setminus \tau_{U_k}(\tilde U_k)) &=\bigg(\bigcup_{j=1}^{k+1} W_j \setminus d(U_k)\bigg) \cup ( d(U_k) \setminus d(\tau_{U_k}(\tilde U_k))) \\ & =\bigcup_{j=1}^{k+1} W_j \setminus d(\tau_{U_k}(\tilde U_k)). \end{aligned} \end{equation} Hence, for $k\ge 2$ we have \[ \begin{aligned} d(U_{k+1} \setminus U_{k}) \subset d(\tilde U_k) &= \bigcup_{j=1}^{k+1} W_j \setminus d(U_k) = \bigcup_{j=1}^{k+1} W_j \setminus \bigg(\bigcup_{j=1}^{k} W_j \setminus d(\tau_{U_{k-1}}(\tilde U_{k-1}))\bigg) \\ & \subset W_{k+1} \cup d(\tau_{U_{k-1}}(\tilde U_{k-1})). \end{aligned} \] On the other hand, \[ U_{k} \setminus U_{k+1} \subset U_k \setminus (U_k \setminus \tau_{U_{k}}(\tilde U_{k})) =\tau_{U_{k}}(\tilde U_{k}). \] Combining the last two inclusions yields \begin{equation}\label{q8} d(U_{k+1} \triangle U_{k}) \subset W_{k+1} \cup d(\tau_{U_{k}}(\tilde U_{k})) \cup d(\tau_{U_{k-1}}(\tilde U_{k-1})). \end{equation} Likewise, we have \begin{equation}\label{q10} U_{k+1} \triangle U_{k} \subset \tilde U_{k} \cup \tau_{U_k}(\tilde U_k). \end{equation} Hence, by \eqref{q4} and \eqref{q10} \begin{equation}\label{q12} \sum_{k\in\mathbb{N}} | U_{k+1} \triangle U_{k} | \le 2 \sum_{k\in \mathbb{N}} |\tilde U_k| <\infty. \end{equation} Define \[ U = \bigcap_{j = 1}^\infty \bigcup_{k = j}^\infty U_k. \] By \eqref{q12} $U_k \to U$ in the symmetric difference metric, from which it follows that $U$ packs by dilations and translations. We claim that $U$ tiles by dilations. Fix $K\in\mathbb{N} $. 
By \eqref{q8} and property (e) we have \[ \sum_{k=K}^\infty \bigg |d(U_{k+1} \triangle U_k) \cap \bigcup_{j=1}^K W_j \bigg| \le \sum_{k=K}^\infty |d(\tau_{U_{k}}(\tilde U_{k}))|+ |d(\tau_{U_{k-1}}(\tilde U_{k-1}))|< \infty \] Thus, by Proposition \ref{convergeD}(iii) we have \[ d(U_k) \cap \bigcup_{j=1}^K W_j \to d(U) \cap \bigcup_{j=1}^K W_j \qquad\text{as } k\to \infty. \] On other hand, by \eqref{q6} and property (e) \[ d(U_k) \cap \bigcup_{j=1}^K W_j \to \bigcup_{j=1}^K W_j \qquad\text{as }k\to\infty. \] Since $K\in \mathbb{N}$ is arbitrary, we conclude that $d(U) = \bigcup_{j=1}^\infty W_j= W$. Hence, $U$ tiles by dilations and packs by translations. Theorem \ref{csb} implies that there exists a set that tiles both by dilations and translations. \end{proof} Finally we can complete the proof of Theorem \ref{mainTheorem}. \begin{proof}[Proof of Theorem \ref{mainTheorem}] Without loss of generality we can assume that $\left | \det A \right | > 1$. By Lemma \ref{as}, the condition \eqref{suf0} implies that there exists a sequence $(m_j)_{j\in\mathbb{N}}$ such that $\sum 1/m_j =\infty$ and $A^{-j}(\mathbf B(0,r))$ packs $m_j$ redundantly by translations for all $j\in \mathbb{N}$. Take any set $W\subset \mathbb{R}^n$ that tiles by dilations. The partition $W_m=W \cap \bigl(\mathbf B(0,m) \setminus \mathbf B(0,m-1)\bigr)$, $m\in \mathbb{N}$, fulfills assumptions of Theorem \ref{weakLemvig}. Consequently, there exists an $(A, \Gamma)$ wavelet set. \end{proof} The contrapositive of Theorem \ref{mainTheorem} is as follows. \begin{corollary}\label{cin} If a pair $(A,\Gamma)$ does not admit a wavelet set and $\left | \det A \right |>1$, then \begin{equation}\label{cin0} \sum_{j=1}^\infty \frac1{\#|A^{-j}(\mathbf B(0,1))\cap \Gamma|} <\infty. \end{equation} \end{corollary} \section{Proof of necessity for existence of wavelet sets} \label{necessity} The goal of this section is to prove the converse of Corollary \ref{cin}. That is, we prove the necessity part of the main theorem. \begin{theorem} \label{nece} Let $A$ be an $n \times n$ invertible matrix with $\left | \det A \right | > 1$. Let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice. If \eqref{cin0} holds, then there is no $(A,\Gamma)$ wavelet set. \end{theorem} We need the following two elementary lemmas, which may be of independent interest. \begin{lemma}\label{elp} Let $\mathcal E \subset \mathbb{R}^n$ be an ellipsoid centered at $0$. Let $V \subset \mathbb{R}^n$ be a subspace of dimension $d$. Let $Q$ be the orthogonal projection of $\mathbb{R}^n$ onto \[ V^\perp=\{ x\in \mathbb{R}^n: \langle x, y \rangle =0 \quad\text{for all }y\in V \}. \] Then, \[ m_d( \mathcal E \cap V) m_{n-d}( Q(\mathcal E)) \le 2^n m_n(\mathcal E). \] \end{lemma} \begin{proof} By Fubini's theorem \begin{equation}\label{elp2} m_n(\mathcal E) = \int_{V^\perp} m_d(\mathcal E \cap (x+V)) dm_{n-d}(x) \ge \int_{Q(\frac12 \mathcal E)} m_d(\mathcal E \cap (x+V)) dm_{n-d}(x) . \end{equation} Take any $x_0\in Q(\frac12 \mathcal E)$. Let $x_1 \in \mathcal E$ be such that $x_0=\frac12 Q(x_1)$. Consider a mapping $F: V \to x_0+ V$ given by $F(x) = (x+x_1)/2$. Since $\mathcal E$ is convex, we have $F(\mathcal E \cap V) \subset \mathcal E \cap (x_0+V)$. Since $F$ is affine \begin{equation}\label{elp4} m_d(\mathcal E \cap (x_0+V)) \ge m_{d}(F(\mathcal E \cap V)) = 2^{-d} m_d(\mathcal E \cap V). 
\end{equation} Combining \eqref{elp2} and \eqref{elp4} yields \[ m_n(\mathcal E) \ge 2^{-d} m_d(\mathcal E \cap V) m_{n-d}(\tfrac12 Q(\mathcal E)) = 2^{-n} m_d(\mathcal E \cap V) m_{n-d}(Q(\mathcal E)). \qedhere \] \end{proof} \begin{lemma}\label{scz} Let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice in $\mathbb{R}^n$. Suppose that a measurable set $W\subset \mathbb{R}^n$ packs by $\Gamma$ translations, \begin{equation}\label{sca} \sum_{\gamma \in \Gamma} \mathbf 1_{W}(x+\gamma) \le 1 \qquad\text{for a.e. }x\in \mathbb{R}^n. \end{equation} Let $V$ be a $d$-dimensional lattice subspace of $\mathbb{R}^n$. That is, $\dim V=d$ and $\Gamma \cap V$ is a lattice of rank $d$. Let $F_0$ be a fundamental domain of $\Gamma \cap V$ in $V$. Then, we have \begin{equation}\label{scb} m_d(W \cap (y+V)) \le m_d(F_0) \qquad\text{$m_{n-d}$-a.e. } y\in V^\perp. \end{equation} \end{lemma} \begin{proof} Let $K \subset V^\perp$ be a measurable set. By \eqref{sca} we have \[ \int_{K+F_0} \sum_{\gamma \in \Gamma \cap V} \mathbf 1_W(x+\gamma) dm_n(x) \le \int_{K+F_0} dm_n(x) = m_{n-d}(K)m_{d}(F_0). \] By Fubini's Theorem \[ \begin{aligned} \int_{K+F_0} \sum_{\gamma \in \Gamma \cap V} \mathbf 1_W(x+\gamma) dm_n(x) & = \int_K \int_{F_0} \sum_{\gamma \in \Gamma \cap V} \mathbf 1_W(x_1+x_2+\gamma) dm_d(x_1)dm_{n-d}(x_2) \\ &= \int_K m_d((x_2+V) \cap W) dm_{n-d}(x_2). \end{aligned} \] Hence, \[ \int_K m_d((x_2+V) \cap W) dm_{n-d}(x_2) \le m_d(F_0) \int_K dm_{n-d}(x_2). \] Since $K \subset V^\perp$ is arbitrary, we deduce \eqref{scb}. \end{proof} We will also need a volume packing lemma which can be found in the book of Tao and Vu \cite[Lemma 3.24 and Lemma 3.26]{TaoVu06}. \begin{lemma}\label{vp} Let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice, and let $\Omega$ be a symmetric convex body in $\mathbb{R}^n$. Then, \begin{equation}\label{eq:vp1} \frac{|\Omega|}{2^n |\mathbb{R}^n/\Gamma|} \le \#|\Omega \cap \Gamma|. \end{equation} In, addition if the vectors $\Omega \cap \Gamma$ linearly span $\mathbb{R}^n$, then \begin{equation}\label{eq:vp2} \#|\Omega \cap \Gamma| \le \frac{3^n n! |\Omega|}{2^n |\mathbb{R}^n/\Gamma|}. \end{equation} \end{lemma} The following lemma plays the key role in the proof of Theorem \ref{nece}. \begin{lemma}\label{slice} Let $A$ be an $n \times n$ invertible matrix and let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice. Suppose that a measurable set $W\subset \mathbb{R}^n$ packs by $\Gamma$ translations, that is, \eqref{sca} holds. Then, there exists a constant $C>0$, which depends only on the dimension $n$, such that \begin{equation}\label{slice0} |\mathbf B(x,1) \cap A(W)| \le \frac{C}{\#|A^{-1}(\mathbf B(0,1))\cap \Gamma|} \qquad\text{for all } x\in\mathbb{R}^n. \end{equation} \end{lemma} \begin{proof} If $A^{-1}(\mathbf B(0,1))\cap \Gamma =\{0\}$, then the inequality \eqref{slice0} holds automatically with $C=m_n(\mathbf B(0,1))$. Otherwise, we define $V$ to be the linear span of $A^{-1}(\mathbf B(0,1))\cap \Gamma $. Let $F_0$ be a fundamental domain of $V/(\Gamma \cap V)$ and $d=\dim V$. By Lemma \ref{scz} \begin{equation}\label{sc1} m_d(W \cap (y+V)) \le m_d(F_0) \qquad\text{for a.e. }y\in V^\perp. 
\end{equation} Fubini's theorem yields \begin{equation}\label{sc3} \begin{aligned} |A^{-1}(\mathbf B(x,1)) \cap W| &\le \int_{V^\perp} m_d(A^{-1}(\mathbf B(x,1)) \cap W \cap (y+V)) dm_{n-d}(y) \\ &\le \int_{Q(A^{-1}({\mathbf {B}}(x, 1)))} m_d(W \cap (y + V)) \, dm_{n - d}(y) \\ &\le m_d(F_0) m_{n-d}\left(Q(A^{-1} (\mathbf B(x,1)))\right), \end{aligned} \end{equation} where $Q$ is an orthogonal projection of $\mathbb{R}^n$ onto $V^\perp$. The second inequality is because the integrand is nonzero only if $A^{-1}(\mathbf B(x,1)) \cap (y+V) \not= \emptyset$, which implies that $y\in Q\left(A^{-1} (\mathbf B(x,1))\right)$. Therefore, by Lemmas \ref{elp} and \ref{vp} \begin{equation}\label{sc6} \begin{aligned} |\mathbf B(x,1)\cap A(W)| & \le \left | \det A \right | m_d(F_0) m_{n-d}\left(Q(A^{-1} (\mathbf B(0,1)))\right) \le \frac{2^n m_n(\mathbf B(0,1)) m_d(F_0)}{ m_{d}(A^{-1} (\mathbf B(0,1)) \cap V)} \\ &\le \frac{3^d d! }{2^{d}} \frac{2^n m_n(\mathbf B(0,1))}{\#|A^{-1} (\mathbf B(0,1)) \cap \Gamma|}, \end{aligned} \end{equation} Hence, the inequality \eqref{slice0} holds with $C=3^n n! m_n(\mathbf B(0,1))$. \end{proof} \begin{lemma}\label{dti} Let $A$ be $n\times n $ matrix such that $\left | \det A \right |>1$. Then, there exists a set $U \subset \mathbb{R}^n$ such that $\{A^j(U): j\in \mathbb{Z}\}$ is a tiling of $\mathbb{R}^n$ modulo null sets and $U$ contains a collection of disjoint balls $V_k \subset U$, $k\in \mathbb{N}$, each having the same radius. \end{lemma} \begin{proof} Since $\left | \det A \right |>1$, the matrix $A$ has at least one eigenvalue $\lambda$ such that $|\lambda|>1$. Suppose momentarily that $\lambda$ is real. For simplicity assume that $e_1$ is an eigenvector of $A$ with eigenvalue $\lambda$. Then, \begin{equation}\label{ug} U= ((-\lambda,-1) \cup (1,\lambda)) \times \mathbb{R}^{n-1} \end{equation} is a dilation tiling generator of $\mathbb{R}^n$. Likewise, if $\lambda$ is complex, then a real Jordan normal form of $A$ contains a $2\times 2$ block $M_\lambda$, see the proof of Lemma \ref{ite}. For simplicity assume that $A$ is already in a Jordan normal form with $M_\lambda$ as its first block. Then, \begin{equation}\label{ug2} U= \{x\in \mathbb{R}^2: 1<|x|<|\lambda| \} \times \mathbb{R}^{n-2} \end{equation} is a dilation tiling generator of $\mathbb{R}^n$. That is, $\{A^j(U): j\in \mathbb{Z}\}$ is a tiling of $\mathbb{R}^n$ modulo null sets. As a consequence of \eqref{ug} and \eqref{ug2} we deduce that there exists a collection of disjoint balls $V_k \subset U$, $k\in \mathbb{N}$, each having the same radius that does not depend on $k$. The simplifying assumptions, which we made on $A$, do not affect this conclusion. \end{proof} The general principle that we will use to prove the non-existence of wavelet sets is the following proposition, which was implicit in earlier work \cite{Spe03}. \begin{proposition}\label{generalprinciple} Let $A$ be an invertible $n \times n$ matrix with $\left | \det A \right | > 1$, let $\Gamma \subset \mathbb{R}^n$ be a full-rank lattice, and let $W \subset \mathbb{R}^n$. Suppose that there exists $\epsilon > 0$ and an $A$ dilation generator $U$ which contains infinitely many disjoint sets $(V_k)_{k = 1}^\infty$ of measure at least $\epsilon$ such that \[ a_J:= \sup_{k \in \mathbb{N}} \sum_{j = J}^\infty \bigl |V_k \cap A^j(W) \bigr | \to 0 \qquad\text{as } J \to \infty. \] Then, $W$ is not an $(A, \Gamma)$ wavelet set. \end{proposition} \begin{proof} Suppose for the sake of contradiction that $W$ is an $(A, \Gamma)$ wavelet set. 
For $j\in \mathbb{Z}$ we define sets $W_j = A^{-j}(U) \cap W$. The collection $\{W_j:j\in \mathbb{Z}\}$ is a partition of $W$. For any $j\in \mathbb{Z}$ we have that \[ |W_j|=|A^{-j}(U) \cap W| \le |F|, \] where $F \subset \mathbb{R}^n$ is a fundamental domain of $\Gamma$. Hence, for $J \in \mathbb{N}$ chosen so that $a_J < \epsilon/ 2$, we have \begin{equation}\label{finite1} \sum_{j=-\infty}^J | A^{j}(W_{j})| \le \sum_{j=-\infty}^J \left | \det A \right |^{j}|F| = \frac{\left | \det A \right |^{J} |F|}{1-\left | \det A \right |^{-1}} < \infty. \end{equation} Since the sets $V_k$, $k\in \mathbb{N}$, are disjoint, by \eqref{finite1} there exists $k\in\mathbb{N}$ such that \[ \bigg|V_k \cap \bigcup_{j=-\infty}^J A^{j}(W_{j})\bigg| < |V_k|/2. \] Additionally, by the definition of $a_J$ we have that \[ \epsilon > \sum_{j = J}^\infty |V_k \cap A^j(W)| \ge \bigg|\bigcup_{j = J}^\infty V_k \cap A^j(W)\bigg| = \biggl|V_k \cap \bigcup_{j = J}^\infty A^j(W_j) \bigg|. \] Therefore, $|V_k \cap \bigcup_{j \in \mathbb{Z}} A^j(W_j)| < |V_k|/2 + \epsilon/2 < |V_k|$, and $W$ is not an $A$ dilation generator of $\mathbb{R}$. In particular, $W$ is not an $(A, \Gamma)$ wavelet set. \end{proof} We are now ready to prove Theorem \ref{nece}. \begin{proof}[Proof of Theorem \ref{nece}] Suppose that $W\subset \mathbb{R}^n$ is any measurable set which tiles by $\Gamma$ translations. We shall use Proposition \ref{generalprinciple} to deduce that $W$ can not be an $(A,\Gamma)$ wavelet set. By Lemma \ref{dti} there exists an $A$ dilation tiling generator $U \subset \mathbb{R}^n$ which contains a collection of disjoint balls $V_k \subset U$, $k\in \mathbb{N}$, each having the same radius $\le 1$. Let $x_k$ denote the center of $V_k$. Then, for any $j\in\mathbb{N}$, Lemma \ref{slice} applied to the matrix $A^j$ yields \[ |V_k \cap A^j(W)| \le |\mathbf B(x_k,1) \cap A^j(W)| \le \frac{C}{\#|A^{-j}(\mathbf B(0,1))\cap \Gamma|} . \] By our hypothesis \eqref{cin0} we have \[ \sum_{j = J}^\infty \bigl |V_k \cap A^j(W) \bigr | \le C \sum_{j=J}^\infty \frac{1}{\#|A^{-j}(\mathbf B(0,1))\cap \Gamma|} \to 0 \qquad\text{as }J\to \infty. \] Since $k\in\mathbb{N}$ is arbitrary, Proposition \ref{generalprinciple} implies that $W$ is not an $(A, \Gamma)$ wavelet set. \end{proof} \section{Necessary condition for existence of wavelet sets} \label{oldnecessity} In this section we show a convenient necessary condition for existence of wavelet sets, which is slightly weaker than the characterization equation \eqref{char}, but is easier to check when it does hold. This necessary condition states that for every sublattice $\Lambda \subset \Gamma$ we have \begin{equation}\label{nsec} \liminf_{j\to \infty} m_d(V \cap A^{-j}(\mathbf B(0, 1))) <\infty, \qquad\text{where }V= \operatorname{span} \Lambda, \ d=\dim V. \end{equation} Here, $m_d$ denotes $d$-dimensional Lebesgue measure on a subspace $V\subset \mathbb{R}^n$. Equivalently, we have the following non-existence result. \begin{theorem}\label{nexist} Let $A$ be an $n \times n$ invertible matrix with $\left | \det A \right | > 1$. Let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice. If for any sublattice $\Lambda \subset \Gamma$ we have \begin{equation}\label{sec} \lim_{j\to \infty} m_d(V \cap A^{-j}(\mathbf B(0, 1)))= \infty, \qquad\text{where }V= \operatorname{span} \Lambda, \ d=\dim V, \end{equation} then there is no $(A,\Gamma)$ wavelet set. 
\end{theorem} As an illustration of Theorem \ref{nexist} we recover the following non-existence result shown by the second author in \cite[Proposition 2.2]{Spe03}. \begin{corollary}\label{collectanea} Suppose that $A = \begin{bmatrix} A_1 & 0 \\ T & A_2 \end{bmatrix}$, where $A_1$ is $n_1 \times n_1$ matrix, $A_2$ is $n_2\times n_2$ matrix, and $T$ is $n_2 \times n_1$ matrix, such that $\left | \det A \right |>1$, $|\det A_1| > 1$, and $|\det A_2| < 1$. Suppose that $\Gamma$ is a full-rank lattice such that $\Gamma \cap (\{0\} \times \mathbb{R}^{n_2})$ has rank $n_2$ . Then there is no $(A, \Gamma)$ wavelet set. \end{corollary} \begin{proof} Take a finite set $F \subset \Gamma \cap (\{0\} \times \mathbb{R}^{n_2})$ such that span $V=\operatorname{span}{F}=\{0\} \times \mathbb{R}^{n_2}$. Then \[ m_{n_2}(V \cap A^{-j}(\mathbf B(0, 1))) =m_{n_2}((A_2)^{-j}(\mathbf B(0,1))) = c_{n_2} |\det A_2|^{-j}, \] where $c_d$ denotes the Lebesgue measure of $d$-dimensional unit ball. Since $|\det A_2|<1$, this yields the required conclusion by Theorem \ref{nexist}.\end{proof} The proof of Theorem \ref{nexist} involves writing $A$ in real Jordan form, and relating the action of $A$ on balls to the action of corresponding matrices with positive eigenvalues on the same balls, and then to diagonal matrices. Lemma \ref{ite} is essentially a reformulation of \cite[Lemma 6.7]{ChesFuhr2020}. We give its proof for the sake of completeness. \begin{lemma}\label{ite} Suppose that $B$ is an $n\times n$ invertible matrix. Then, there exists a positive constant $c$ and an $n\times n$ matrix $\tilde B$ such that all eigenvalues of $\tilde B$ are positive and \begin{equation}\label{ite1} B^j(\mathbf B(0,r/c)) \subset \tilde B^j(\mathbf B(0,r)) \subset B^j(\mathbf B(0,cr)) \qquad\text{for all }j\in \mathbb{Z}, \ r>0. \end{equation} \end{lemma} \begin{proof} For $z\in \mathbb C$ we define $2\times 2$ matrix \[ M_z= \begin{bmatrix} \operatorname{Re}(z) & \operatorname{Im}(z) \\ -\operatorname{Im}(z) & \operatorname{Re}(z) \end{bmatrix}. \] There exists an $n\times n$ invertible matrix $P$ such that $PBP^{-1}$ is in real Jordan normal form. That is, $PBP^{-1}$ is a block diagonal matrix consisting of real Jordan blocks $J_1, \ldots, J_k$. Suppose that a Jordan block $J_i$ corresponds to a complex eigenvalue $\lambda \in \mathbb C \setminus \mathbb{R}$, \[ J_i= \begin{bmatrix} M_\lambda & M_1 & & & \\ & M_\lambda & M_1 & &\\ & & \ddots & \ddots & \\ & & & M_\lambda & M_1 \\ & & & & M_\lambda \end{bmatrix}. \] We write $\lambda = |\lambda| \omega$, where $|\omega|=1$, and define matrices $R_i$ and $J_i'$ by \[ R_i= \begin{bmatrix} M_{\overline \omega} & & & & & \\ & M_{\overline \omega} & & &\\ & & \ddots & & \\ & & & M_{\overline \omega} & \\ & & & & M_{\overline \omega} \end{bmatrix}, \qquad J'_i = \begin{bmatrix} M_{|\lambda|} & M_{\overline \omega} & & & \\ & M_{|\lambda|} & M_{\overline \omega} & &\\ & & \ddots & \ddots & \\ & & & M_{|\lambda|} & M_{\overline \omega} \\ & & & & M_{|\lambda|} \end{bmatrix} . \] Observe that $R_i$ is an orthogonal matrix, $R_i$ and $J_i$ commute, and $J_i'=R_i J_i$. In the case a Jordan block $J_i$ corresponds to a real eigenvalue $\lambda<0$, we let $R_i=-I$ and $J_i'=-J_i$ if $\lambda<0$, and otherwise we set $R_i=I$ and $J_i'=J_i$. Note that each block $J_i'$ has a single positive eigenvalue $|\lambda|$. Define a matrix $\tilde B$ such that $P\tilde B P^{-1}$ is block diagonal with blocks $J_1',\ldots,J_k'$. Define a matrix $R$ to be block diagonal with blocks $R_1,\ldots, R_k$. 
By the construction, $R$ is an orthogonal matrix which commutes with $P B P^{-1}$, and \[ P\tilde B P^{-1} = R (P B P^{-1}) = (P B P^{-1})R. \] Consequently, we have for all $j\in \mathbb{Z}$, $r>0$, \[ (PBP^{-1})^j(\mathbf B(0,r)) = (P\tilde BP^{-1})^j(\mathbf B(0,r)). \] Hence, \[ B^j(\mathbf B(0,r/||P||)) \subset (B^j P^{-1}) (\mathbf B(0,r)) = (\tilde B^j P^{-1}) (\mathbf B(0,r)) \subset \tilde B^j (\mathbf B(0,||P^{-1}||r)). \] This proves \eqref{ite1} with $c= ||P^{-1}||\cdot ||P||$. \end{proof} The next lemma shows a basic estimate on the intersection of a subspace with dilates of a ball. For simplicity we formulate Lemma \ref{dil} for matrices with positive eigenvalues in Jordan normal form. \begin{lemma}\label{dil} Suppose that $ B$ is an $n\times n$ matrix in Jordan normal form and all eigenvalues of $ B$ are positive. Let $V \subset \mathbb{R}^n$ be a subspace of dimension $d$. Then there exists a subset $\sigma \subset [n]$ of cardinality $d$ such that the following holds. Let $P_\sigma$ be the coordinate orthogonal projection of $\mathbb{R}^n$ onto \[ \mathbb{R}^\sigma=\operatorname{span}\{ e_i: i\in \sigma\}. \] Then, the restriction $P_\sigma |_V$ is an isomorphism of $V$ and $\mathbb{R}^\sigma$ and \begin{equation}\label{dil1} n^{-d/2} \le \frac{m_d(P_{\sigma}(V \cap { B}^{j}(\mathbf B(0,r)))) }{ m_d(( P_\sigma B P_\sigma)^j (\mathbf B(0,r) ) )} \le n^{d/2} \qquad\text{for all }j\ge 1, \ r>0. \end{equation} \end{lemma} \begin{proof} Without loss of generality we can assume that $ B$ has diagonal entries $\lambda_1,\ldots,\lambda_n$ satisfying $0<\lambda_1 \le \ldots \le \lambda_n$. We claim that there exists a basis $v_1,\ldots,v_d$ of $V$ and a sequence of integers $1 \le \sigma_1< \ldots <\sigma_d \le n$ such that \begin{equation}\label{sp} v_i -e_{\sigma_i} \in \operatorname{span}\{ e_k: \sigma_{i}+1 \le k \le d \quad\text{ and }\quad k \ne \sigma_{i+1},\ldots,\sigma_{d}\} \qquad\text{for }i=1,\ldots,d. \end{equation} Indeed, let $w_1,\ldots,w_d$ be any basis of $V$. Consider a $d\times n$ matrix which consists of vectors $w_1,\ldots, w_d$ treated as row vectors. Next, we perform a Gaussian elimination on this matrix to obtain a reduced row echelon form (only the first three rows shown): \setcounter{MaxMatrixCols}{20} \[ \begin{bmatrix} 0 & \ldots & 0 & 1 & * & \ldots & * & 0 & * & \ldots & * & 0 & * & \ldots \cr 0 & \ldots & 0 & 0 & 0 &\ldots & 0 & 1 & * & \ldots & * & 0 & * & \ldots \\ 0 & \ldots & 0 & 0 & 0 &\ldots & 0 & 0 & 0 & \ldots & 0 & 1 & * & \ldots \end{bmatrix} \] Here, $*$ represents an arbitrary real value. Let $\sigma_i$ be the column which contains the leading entry $1$ in row $i$; all other entries in column $\sigma_i$ are zero. Let $v_i$ be the row vector given by row $i$. Since row operations preserve $V$, the vectors $v_1,\ldots, v_d$ form a basis of $V$ satisfying \eqref{sp}. In addition, we assume that Jordan blocks $J$ of $B$ appear in lower diagonal form (rather than the more common upper diagonal form) \begin{equation}\label{jj} J= \begin{bmatrix} \lambda & & & \\ 1 & \ddots & & \\ & \ddots & \lambda & \\ & & 1& \lambda \end{bmatrix} \end{equation} For a fixed $j\ge 1$, define the set \[ S=P_{\sigma}(V \cap { B}^{j}((-1,1)^n)). \] Since $P_\sigma(v_i) =e_{\sigma_i}$ for $i=1,\ldots, d,$ by identifying $\mathbb{R}^\sigma= \mathbb{R}^d$ we have \[ S=\bigg \{(c_1,\ldots,c_d) \in \mathbb{R}^d: v=\sum_{i=1}^d c_i v_i \in B^j((-1,1)^n) \bigg\}. 
\] We shall estimate the $d$-dimensional Lebesgue measure of $S$ by Fubini's theorem: \begin{equation}\label{fub} m_d(S)= \int_\mathbb{R} \ldots \bigg( \int_\mathbb{R} \mathbf 1_S(c_1,\ldots,c_d) dc_d \bigg) \ldots dc_1. \end{equation} First we consider an extreme case when the index set $\sigma$ is located in a single Jordan block $J$ of the form \eqref{jj} corresponding to an eigenvalue $\lambda$. Let $k$ be the size of $J$. If $j\ge k-1$, then $J^j((-1,1)^k)$ equals \begin{equation}\label{jjj} \textstyle \ \{ (\lambda^j d_1, \lambda^j d_2 + j\lambda^{j-1} d_1, \ldots, \lambda^j d_k + j \lambda^{j-1} d_{k-1} + \ldots + \binom{j}{k-1} \lambda^{j-k+1} d_1 ): |d_1|, \ldots, |d_k| <1 \}. \end{equation} Suppose $(c_1,\ldots, c_d) \in S$. This implies that $c_1 \in (-\lambda^j, \lambda^j)$. Once such $c_1$ is fixed, then $c_2 \in (-\lambda^j, \lambda^j)+z_2$, for some $z_2$ depending on $c_1$. In general, once $c_1,\ldots,c_i$, $i<d$, are fixed, then by \eqref{jjj} we deduce that $c_{i+1} \in (-\lambda^j, \lambda^j)+z_{i+1}$, for some $z_{i+1}$ depending on $c_1,\ldots,c_i$. This shows that \[ m_d(S)= (2\lambda^j)^d = m_d((P_\sigma B P_\sigma)^j((-1,1)^n)). \] In the general case, when $\sigma$ spreads across different Jordan blocks, we can extend this argument, by grouping the elements of $\sigma$ according to the Jordan blocks in which they are located, to deduce that \[ m_d(S)= 2^d \prod_{i=1}^d \lambda_{\sigma_i}^j = m_d((P_\sigma B P_\sigma)^j((-1,1)^n)). \] By the fact that \begin{equation}\label{tin} (-1/\sqrt n,1/\sqrt{n} )^n\subset\mathbf B(0,1) \subset [-1,1]^n, \end{equation} and scaling, we deduce \eqref{dil1} for $j\ge 1$. \end{proof} As a corollary of Lemmas \ref{ite} and \ref{dil} we extend estimate \eqref{dil1} to arbitrary matrices. \begin{corollary}\label{dila} Suppose that $B$ is an $n\times n$ invertible matrix. Let $V \subset \mathbb{R}^n$ be a subspace of dimension $d$. Then, there exist positive constants $b$ and $C$ such that \begin{equation}\label{dila0} b^j /C\le m_d(V \cap B^j(\mathbf B(0,1))) \le C b^j \qquad\text{for all }j\ge 1, \end{equation} where $m_d$ is $d$-dimensional Lebesgue measure on $V$. \end{corollary} \begin{proof} Let $\tilde B$ be a matrix with positive eigenvalues satisfying the conclusion of Lemma \ref{ite} corresponding to $B$. Then, we have \[ V \cap {\tilde B}^j(\mathbf B(0,1/c)) \subset V \cap B^j(\mathbf B(0,1)) \subset V \cap {\tilde B}^j(\mathbf B(0,c)). \] Therefore, replacing the matrix $B$ by $\tilde B$, without loss of generality we can assume that all eigenvalues of $B$ are positive. Let $S$ be an invertible matrix such that $\check B= S B S^{-1}$ is in Jordan normal form. Note that \begin{equation}\label{dila2} V \cap B^j(\mathbf B(0,1)) = V \cap S^{-1} \check B^j S(\mathbf B(0,1)) = S^{-1} (SV \cap \check B^j S(\mathbf B(0,1))). \end{equation} Since \[ \mathbf B(0,1/c') \subset S(\mathbf B(0,1)) \subset \mathbf B(0,c') \qquad\text{where } c'=\max(||S^{-1}||,||S||), \] Lemma \ref{dil} implies that \begin{equation}\label{dila4} n^{-d/2}(c')^{-d} \ \le \frac{m_d(P_{\sigma}(SV \cap { \check B}^{j} S (\mathbf B(0,1)))) }{ m_d(( P_\sigma \check B P_\sigma)^j (\mathbf B(0,1) ) )} \le n^{d/2} (c')^d \qquad\text{for all }j\ge 1. \end{equation} Here, $\sigma$ is a subset of $[n]$ of cardinality $d$ such that the restriction $P_\sigma |_{SV}$ is an isomorphism of $SV$ and $\mathbb{R}^\sigma$. Consequently, $(P_\sigma \circ S)|_V$ is an isomorphism of $V$ and $\mathbb{R}^\sigma$.
Hence, by \eqref{dila2}, there exists a constant $c''>0$ such that \[ m_d(V \cap B^j(\mathbf B(0,1))) =c '' m_d(P_\sigma (SV \cap \check B^j S(\mathbf B(0,1)))) . \] Combining this with \eqref{dila4} yields \eqref{dila0}. \ \end{proof} We are now ready to give the proof of Theorem \ref{nexist}. \begin{proof}[Proof of Theorem \ref{nexist}] Let $B = A^{-1}$. By Corollary \ref{dila} there exist positive constants $b$ and $C$ such that \eqref{dila0} holds. By the hypothesis \eqref{sec} we necessarily have $b>1$. By Lemma \ref{vp} \[ \#|V \cap A^{-j}(\mathbf B(0,1)) \cap \Gamma )| \ge \frac{m_d(V \cap A^{-j}(\mathbf B(0,1))) }{2^d m_d(V/(V\cap \Gamma))}. \] Hence, for some constant $C'>0$ we have \[ \sum_{j=1}^\infty \frac{1}{\#| A^{-j}(\mathbf B(0,1))\cap \Gamma|} \le \sum_{j=1}^\infty \frac{2^d m_d(V/(V\cap \Gamma))}{m_d(V \cap A^{-j}(\mathbf B(0,1))) } \le C' \sum_{j=1}^\infty b^{-j}<\infty. \] By Theorem \ref{nece} there is no $(A,\Gamma)$ wavelet set. \end{proof} \section{Applications of main results}\label{idk} In \cite{DaiLarSpe97}, it was shown that if all eigenvalues of a matrix $A$ have modulus strictly larger than one, then for every full rank lattice $\Gamma$ there exists an $(A, \Gamma)$ wavelet set. Various wavelet sets for matrices with some eigenvalues greater than one and some equal to one have been constructed in the literature. In our first application of Theorem \ref{mainTheorem}, we show that we are always able to construct such wavelet sets. Recall that a matrix $A$ is {\emph{unipotent}} if all of its eigenvalues are 1. The key added ingredient is the following result due to Margulis \cite[Theorem 1]{Margulis71}, see also \cite{Dani}. \begin{theorem}\label{marg71} Let $\Gamma$ be a full rank lattice in $\mathbb{R}^n$ and let $U_t$ be a one parameter group of unipotent matrices. There exists $\delta > 0$ such that \[ \sup\{t \in \mathbb{R}: {\mathbf {B}}(0, \delta) \cap U_t \Gamma = \{0\}\} = \infty. \] \end{theorem} We will apply Theorem \ref{marg71} in the case $U_t = J^t$, where $J$ is the block of the real Jordan decomposition of a matrix corresponding to eigenvalue 1. \begin{theorem}\label{eigenbiggerone} Let $A$ be an $n\times n$ matrix such that $\left | \det A \right |>1$ and all eigenvalues of $A$ are greater than or equal to one in modulus. Then, for every full rank lattice $\Gamma$, there exists an $(A, \Gamma)$ wavelet set. \end{theorem} \begin{proof} By Lemma \ref{ite}, there exists a constant $c$ and an $n \times n$ invertible matrix $\tilde A$ such that all of the eigenvalues of $\tilde A$ are positive and such that for all $j\in \mathbb{Z}$ and all $r > 0$, \[ A^j({\mathbf {B}}(0, r/c)) \subset \tilde A^j({\mathbf {B}}(0, r)) \subset A^j({\mathbf {B}}(0, cr)). \] By a change of basis, we can assume that $\tilde A$ is in Jordan form. Write $\tilde A = \begin{bmatrix} A_1&0\\ 0&A_2 \end{bmatrix}$ where all eigenvalues of $A_1$ are larger than 1, and all eigenvalues of $A_2$ are equal to 1. Let $T$ denote the $n \times n$ matrix $T = \begin{bmatrix} I&0\\ 0&A_2 \end{bmatrix}.$ If $A_2$ is the identity matrix, then there exists $\epsilon > 0$ such that $\tilde A^{-j}\left({\mathbf {B}}(0, \epsilon)\right) \cap \Gamma = \{0\}$ for all sufficiently large $j$. Therefore, $A^{-j}({\mathbf {B}}(0, \epsilon/c)) \cap \Gamma = \{0\}$ for sufficiently large $j$ as well, so $(A, \Gamma)$ wavelet sets exist by Theorem \ref{mainTheorem}. 
If $A_2$ contains a non-trivial Jordan block, then by assumption, $T$ generates a one parameter unipotent group, so by Theorem \ref{marg71}, there exists an $\epsilon > 0$ and $n_1 < n_2 < \cdots$ such that $T^{-n_k}({\mathbf {B}}(0, \epsilon)) \cap \Gamma = \{0\}$. For large $k$, $\tilde A^{-n_k}({\mathbf {B}}(0, \epsilon)) \subset T^{-n_k}({\mathbf {B}}(0, \epsilon)),$ so there exist $n_1 < n_2 < \cdots$ such that $\tilde A ^{-n_k}({\mathbf {B}}(0, \epsilon)) \cap \Gamma = \{0\}$. Since $A^{-n_k}({\mathbf {B}}(0, \epsilon/c)) \subset \tilde A^{-n_k} ({\mathbf {B}}(0, \epsilon)),$ $(A, \Gamma)$ wavelet sets exist by Theorem \ref{mainTheorem}. \end{proof} \begin{remark} Theorem \ref{eigenbiggerone} is optimal in the sense that it identifies the largest class of matrices $A$ with $\left | \det A \right |>1$ for which an $(A, \Gamma)$ wavelet set exists regardless of the choice of the lattice $\Gamma$. Indeed, if one of the eigenvalues of $A$ is less than $1$ in modulus, then we consider the invariant subspace $V$, which is the linear span of the eigenspaces of $A$ corresponding to eigenvalues $|\lambda|<1$. Then, we choose a full rank lattice $\Gamma \subset \mathbb{R}^n$ such that the rank of $\Gamma \cap V$ equals $d=\dim V$. Since $\lim_{j\to \infty} m_d(V \cap A^{-j}(\mathbf B(0, 1)))= \infty$, by Theorem \ref{nexist} we deduce that there is no $(A,\Gamma)$ wavelet set. \end{remark} The second application of Theorem \ref{mainTheorem} is a generalization of the main result in \cite{IonWan06}. We will need to use the inequality between the measure of an ellipsoid intersected with a subspace and the measure of the ellipsoid and lengths of its principal semi-axes. \begin{lemma}\label{ellipsoidsection} There exists a constant $C = C(n)$ such that whenever $\mathcal{E} \subset \mathbb{R}^n$ is an ellipsoid with lengths of principal semi-axes given by $\lambda_1 \ge \cdots \ge \lambda_n > 0$ and $V$ is a subspace of $\mathbb{R}^n$ of dimension $k$, \[ m_k(V \cap \mathcal{E}) \le C \prod_{i = 1}^k \lambda_i. \] \end{lemma} \begin{proof} Fix $k$. We first note that $\mathcal{E} \cap V$ is an ellipsoid with $k$ non-zero lengths of principal semi-axes, which we denote by $\omega_1 \ge \cdots \ge \omega_k > 0$. Let $\{v_1, \ldots, v_n\}$ be an orthonormal basis of $\mathbb{R}^n$ such that the first $k$ vectors form an orthonormal basis for $V$. Let $A$ be an $n \times n$ positive definite matrix written in terms of $\{v_1, \ldots, v_n\}$ such that $\mathcal{E} = {\text {conv}} \{x\in \mathbb{R}^n: \langle Ax, x\rangle = 1\}$. Note that $\mathcal{E} \cap V = {\text {conv}} \{x \in V: \langle PAPx, x\rangle = 1\}$, where $P$ is the orthogonal projection of $\mathbb{R}^n$ onto $V$. Let $\lambda_j(A)$ denote the $j$th smallest eigenvalue of $A$, and $\lambda'_j(B)$ denote the $j$th smallest non-zero eigenvalue of $B=PAP$. By the inclusion principle \cite[Theorem 4.3.15]{HornJohnson85}, $\lambda_i(A) \le \lambda'_i(PAP) \le \lambda_{i + n - k}(A)$. Therefore, $\omega_i = \lambda'_{i}(PAP)^{-1/2} \le \lambda_i(A)^{-1/2} = \lambda_i$. In particular, \[ m_k(V \cap \mathcal{E}) = c_k \prod_{i = 1}^k \omega_i \le c_k \prod_{i = 1}^k \lambda_i, \] where $c_k = \frac {\pi^{k/2}}{\Gamma(k/2 + 1)}$ is the Lebesgue measure of the $k$-dimensional unit ball. The lemma follows by letting $C = \max(c_1, \ldots, c_n)$.
\end{proof} \begin{theorem}\label{diagiw} Let $A$ be an $n \times n$ diagonal matrix with $\left | \det A \right | > 1$ and with eigenvalues arranged so that $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_{n - 1}| > 1 > |\lambda_n|$. Assume in addition that $|\lambda_n \lambda_{n - 1}| \ge 1$. Let $\Gamma \subset \mathbb{R}^n$ be a full rank lattice. Then, there exists an $(A, \Gamma)$ wavelet set if and only if $\Gamma \cap \operatorname{span}(e_n)= \{0\}$, where $e_n$ is the last standard unit basis vector. \end{theorem} \begin{proof} We start by showing the ``if'' part. Its proof follows from the following two claims. \begin{claim}\label{c55} Let ${\mathcal{F}}$ denote the collection of subspaces of $\mathbb{R}^n$ of dimension at least two. For every $R > 0$, the sequence $(\gamma_j)_{j = 1}^\infty$ defined by \[ \gamma_j = \sup\{m_k\left(V \cap A^{-j}({\mathbf {B}}(0, R))\right): V \in \mathcal{F}, k = \dim V\} \] is bounded. \end{claim} \begin{proof}[Proof of Claim] Let $V$ be a subspace of $\mathbb{R}^n$ of dimension $k \ge 2$. The intersection $V \cap A^{-j}({\mathbf {B}}(0, R))$ is an ellipsoid. By Lemma \ref{ellipsoidsection}, the $k$-dimensional volume of $A^{-j}({\mathbf {B}}(0, R)) \cap V$ is less than or equal to $C |\lambda_{n - 1} \lambda_n|^{-j},$ which is bounded. \end{proof} \begin{claim} \label{c56} Suppose that $\Gamma \cap \operatorname{span}(e_n) = \{0\}$. If $\lim_{j\to \infty} \#|A^{-j}({\mathbf {B}}(0, R)) \cap \Gamma| = \infty$, then \[ \limsup_{j \to \infty}\dim {\rm{span}} (A^{-j}({\mathbf {B}}(0, R)) \cap \Gamma) \ge 2. \] \end{claim} \begin{proof}[Proof of Claim] Choose a cylinder of the form $S = {\mathbf {B}}_{n - 1}(0, R) \times [-R, R]$ and notice that ${\mathbf {B}}(0, R) \subset S$. We prove the following formally stronger statement than the claim: \[ L = \limsup_{j \to \infty}\dim {\rm{span}} (A^{-j}(S) \cap \Gamma) \ge 2. \] We first note that $L \ge 1$ since there is a constant $K$ such that for $j \ge K$, $\#|A^{-j}(S) \cap \Gamma| > 1$. Assume for the sake of contradiction that $L = 1$, and redefine $K$ so that for $j \ge K$, $\dim {\rm{span}} \left(A^{-j}(S) \cap \Gamma\right) = 1$. For fixed $j \ge K$, let $v$ be the element of $A^{-j}(S) \cap \Gamma$ of smallest non-zero norm, so that $A^{-j}(S) \cap \Gamma \subset v \mathbb{Z}$. There exists a largest $k \ge j$ such that $v \in A^{-k}(S)$ since $\Gamma \cap \{t e_n: t\in \mathbb{R}\} = \{0\}$. There exists non-zero $v_2 \in A^{-k - 1}(S) \cap \Gamma$, and the last coordinate of $v_2$ must be in the interval $[|\lambda_n|^{-k} R, |\lambda_n|^{-k - 1} R]$. This implies that $v_2/|\lambda_n| \not\in A^{-k - 1}(S)$, and $\#|A^{-k - 1}(S) \cap \Gamma| \le 2/|\lambda_n|$. Since $j$ was an arbitrary integer larger than $K$, this contradicts the hypothesis that $\lim_{j\to \infty} \#|A^{-j}({\mathbf {B}}(0, R)) \cap \Gamma| = \infty$. \end{proof} We turn to the proof of the theorem. Assume $\Gamma \cap \{t e_n: t\in \mathbb{R}\} = \{0\}$; we wish to show that there exists an $(A, \Gamma)$ wavelet set by showing that $\liminf_{j \to \infty} \#|A^{-j}({\mathbf {B}}(0, R))\cap \Gamma| < \infty$, which implies $\sum \frac {1}{\#\left|A^{-j}({\mathbf {B}}(0, 1)) \cap \Gamma\right |} = \infty$. Assume to the contrary that this limit is infinite. By Claim \ref{c56} there exists an increasing sequence $(j_k)_{k=1}^\infty$ such that \begin{equation}\label{c57} \dim {\rm{span}} (A^{-j_k}({\mathbf {B}}(0, R)) \cap \Gamma) \ge 2 \qquad\text{for all }k\in\mathbb{N}.
\end{equation} Let $V_k$ be the span of $A^{-j_k}({\mathbf {B}}(0, R)) \cap \Gamma$ and let $\Lambda_k = V_k \cap \Gamma$. Let $F_k$ be a fundamental domain of $\Lambda_k$. Let $n_k=\dim V_k$. By Lemma \ref{vp}, Claim \ref{c55}, and \eqref{c57} we have \begin{eqnarray*} \#|A^{-j_k}({\mathbf {B}}(0, R)) \cap \Gamma | &=& \#|A^{-j_k}({\mathbf {B}}(0, R)) \cap V_k \cap \Lambda_k|\\ &\le& \frac{3^{n_k} n_k!}{2^{n_k} |F_k|} m_{n_k} \left(A^{-j_k}({\mathbf {B}}(0, R)) \cap V_k \right)\\ &\le& C \gamma_{j_k}. \end{eqnarray*} The last inequality is due to a positive lower bound on the measure of the fundamental domains of non-zero sublattices of a fixed lattice. Since $(\gamma_{j_k})$ is bounded, this contradicts our assumption that $\liminf_{j \to \infty} \#|A^{-j}({\mathbf {B}}(0, R))\cap \Gamma| = \infty$. Hence, by Theorem \ref{mainTheorem} $(A, \Gamma)$ wavelet sets exist. For the other direction, if $\Gamma \cap \{t e_n: t\in \mathbb{R}\} \not= \{0\}$, then we let $\Lambda = \{t e_n: t\in \mathbb{R}\} \cap \Gamma$ and $V = {\rm {span}} (\Lambda).$ Since \[ \lim_{j \to \infty} m_1(V \cap A^{-j} {\mathbf {B}}(0, 1)) = \infty, \] by Theorem \ref{nexist}, there is no $(A, \Gamma)$ wavelet set. \end{proof} \begin{remark} The above proof shows that for diagonal matrices $A$ satisfying the assumptions in Theorem \ref{diagiw}, conditions \eqref{ctwo}, \eqref{cthree}, and \eqref{cfour} in the Introduction are all equivalent. This equivalence can also be deduced from the result of Weiss \cite[Theorem 5.2]{Wei04} on divergent trajectories. \end{remark} Our third application of the main theorem in this paper is the following. \begin{theorem}\label{lce} Let $A$ be an $n\times n$ matrix with integer entries and $\left | \det A \right | \ge 2$. Then, the pair $(A,\mathbb{Z}^n)$ satisfies the lattice counting estimate \begin{equation}\label{lce1} \#| \mathbb{Z}^n \cap A^j(\mathbf B(0,r))| \le C \max(1,r^n) \max(1, \left | \det A \right |^j) \qquad\text{for all }j\in \mathbb{Z}, r>0, \end{equation} for some positive constant $C$ depending only on $n$. In particular, there exists an $(A,\mathbb{Z}^n)$ wavelet set. \end{theorem} \begin{proof} For $j<0$ we have $A^{-j}\mathbb{Z}^n \subset \mathbb{Z}^n$. Hence, \[ \#| \mathbb{Z}^n \cap A^j(\mathbf B(0,r))| = \#|A^{-j}\mathbb{Z}^n \cap \mathbf B(0,r) | \le \#| \mathbb{Z}^n \cap \mathbf B(0,r)| = C(r) \le C \max(1,r^n). \] For $j>0$ we use the fact that each coset of the quotient group $A^{-j}\mathbb{Z}^n /\mathbb{Z}^n$ intersects $[0,1)^n$ exactly once. Hence, for any $k\in \mathbb{Z}^n$, \[ \#| A^{-j} \mathbb{Z}^n \cap (k+[0,1)^n)| = \#|A^{-j}\mathbb{Z}^n /\mathbb{Z}^n| = \#|\mathbb{Z}^n/(A^j\mathbb{Z}^n)| = \left | \det A \right |^j. \] Consequently, \[ \begin{aligned} \#| \mathbb{Z}^n \cap A^j(\mathbf B(0,r))| & = \#|A^{-j}\mathbb{Z}^n \cap \mathbf B(0,r) | \\ & \le \sum_{k\in \mathbb{Z}^n, \ |k| < r+\sqrt{n}} \#|A^{-j}\mathbb{Z}^n \cap (k+[0,1)^n)| \\ & \le \#| \mathbb{Z}^n \cap \mathbf B(0,r+\sqrt{n}) | \left | \det A \right |^j = C(r+\sqrt{n}) \left | \det A \right |^j. \end{aligned} \] This proves \eqref{lce1}. The existence of an $(A,\mathbb{Z}^n)$ wavelet set then follows from Theorem \ref{mainTheorem}. \end{proof} \section{Examples}\label{examples} In this section we provide two examples in three dimensions which relate to the theorems proven above. As always, we are assuming that $\left | \det A \right | > 1$. The following example shows that the characterization of wavelet sets in Theorem \ref{diagiw} requires the condition on the product of eigenvalues.
Namely, if the product of the smallest two eigenvalues is less than one, then there are pairs $(A, \Gamma)$ such that no wavelet set exists even though $\Gamma \cap \operatorname{span} (e_n) = \{0\}$. \begin{example}\label{obvious} Let $A$ be a $3\times 3$ diagonal matrix with entries $\lambda_1$, $\lambda_2$, $\lambda_3$. Let $\Gamma$ be a full rank lattice in $\mathbb{R}^3$. Suppose that \begin{enumerate}[(i)] \item $|\lambda_1| \ge |\lambda_2| \ge |\lambda_3|$, \item $\left | \det A \right |= |\lambda_1\lambda_2\lambda_3|>1$, \item $|\lambda_1\lambda_3|<1$, \item there exist $\gamma_1,\gamma_2 \in \Gamma$ such that \[ \operatorname{span} (\gamma_1,\gamma_2) = \operatorname{span}(ae_1+be_2,e_3),\qquad\text{where }a,b\ne 0. \] \end{enumerate} Then, there is no $(A,\Gamma)$ wavelet set. \end{example} \begin{proof} Take $F=\{\gamma_1,\gamma_2\}$, $V = \operatorname{span} F$, and observe that \[ m_2(V \cap A^{-j}((-1,1)^3)) = m_2( V \cap \prod_{i=1}^3 (-|\lambda_i|^{-j},|\lambda_i|^{-j})) \ge |\lambda_1 \lambda_3|^{-j} \to \infty \] as $j\to \infty$. Hence, \eqref{sec2} holds. By Theorem \ref{nexist}, there is no $(A,\Gamma)$ wavelet set. \end{proof} Consider the condition that for some subset $F \subset \Gamma$ we have \begin{equation}\label{sec2} \lim_{j\to \infty} m_d(V \cap A^{-j} \mathbf B(0,R)) = \infty, \qquad\text{where }V= \operatorname{span} F, \ d=\dim V. \end{equation} It was shown in Theorem \ref{nexist} that this condition implies that $(A, \Gamma)$ wavelet sets do not exist. In Example \ref{lcc}, we provide an example of a matrix $A$ and a lattice $\Gamma$ such that condition \eqref{sec2} fails, yet no $(A, \Gamma)$ wavelet set exists. This shows that \eqref{sec2} is not equivalent to the non-existence of wavelet sets. The easiest path to showing that condition \eqref{sec2} is not equivalent to the characterizing condition is to use the following result of Khinchine, as stated in \cite{Mos10}. \begin{theorem}\label{khinchine} Suppose that $\psi(j) \to 0$ as $j \to \infty$. There exist numbers $\alpha$ and $\beta$ such that for sufficiently large $j$, there is an integer solution $(a,b)\in \mathbb{Z}^2$ of the system \[ \|a \alpha + b \beta\| < \psi(j), \qquad 0 < \max(|a|, |b|) < j, \] where $\| \cdot \|$ denotes the distance to the nearest integer. \end{theorem} We will also use the following lemma on codimension $1$ sections of an ellipsoid. \begin{lemma}\label{cross-section} Let $A$ be an $n \times n$ invertible matrix and let $\xi \in \mathbb{R}^n$ be such that $\|\xi\| = 1$. Then, \[ m_{n - 1}\left(A({\mathbf {B}}(0, 1)) \cap \xi^\perp \right) = C_n \frac{|\det A|}{\|A^T\xi\|}, \qquad\text{ where }C_n = \frac{\pi^{(n-1)/2}}{\Gamma((n + 1)/2)}. \] \end{lemma} \begin{proof} Let $B = {\mathbf {B}}(0, 1) \subset \mathbb{R}^n$. By \cite[Theorem 2.2]{KolYas08}, the Minkowski norm $||\cdot||_D$ associated to any centrally symmetric convex body $D \subset \mathbb{R}^n$ satisfies \[ m_{n - 1}\left(D \cap \xi^\perp\right) = \frac{1}{\pi(n-1)} (\|\cdot\|_D^{-n+1})^\wedge(\xi). \] Moreover, the Euclidean norm $||\cdot||=||\cdot||_B$ satisfies \[ (\| \cdot \|^{-n + 1})^\wedge(\xi) = \frac{2\pi^{(n+1)/2}}{\Gamma((n-1)/2)} \|\xi\|^{-1}. \] Let $C_n$ be as in the lemma statement. Then, we have \begin{align*} (\|\cdot \|_{A(B)}^{-n + 1})^\wedge(\xi) &= (\|A^{-1}(\cdot)\|_B^{-n + 1})^\wedge(\xi) = \frac{(\|\cdot\|_B^{-n + 1})^\wedge(A^T\xi)}{|\det(A^{-1})|}= \pi(n-1)\, C_n \frac{|\det A|}{\|A^T\xi\|}, \end{align*} and the lemma follows by combining this identity with the first displayed formula, since $\pi(n-1)C_n = \frac{2\pi^{(n+1)/2}}{\Gamma((n-1)/2)}$.
\end{proof} \begin{example}\label{lcc} There exists a lattice $\Gamma \subset \mathbb{R}^3$ and an invertible, diagonal matrix $A$ with $|\det A| > 1$ such that no $(A,\Gamma)$ wavelet sets exist and such that for every $R > 0$ and every subset $F \subset \Gamma$ \[ m_d(V \cap A^{-j}({\mathbf {B}}(0, R)) ) \to 0, \] where $V$ is the span of $F$ and $d=\dim V$. \end{example} \begin{proof} Let $B = [-1, 1]^3$. For $\psi(j) = 11^{-j}$, let $\alpha$ and $\beta$ be as in Khinchine's Theorem. Let \begin{equation}\label{exa} A = \begin{bmatrix} 10 &0 & 0 \\ 0 & \frac 12 & 0 \\ 0 & 0 & \frac 12 \end{bmatrix}. \end{equation} Let $\Gamma$ be the lattice given by \begin{equation}\label{exb} \Gamma = \operatorname{span}_{\mathbb{Z}}\{(1,0,0), (\alpha,1,0), (\beta,0,1)\}. \end{equation} For large $j$, there exist integers $a, b$ and $N$ such that $|N + a \alpha + b\beta| < 11^{-j}$ and $\max(|a|, |b|) < j$. Therefore, \begin{align*} v:= N (1,0,0) + a (\alpha, 1, 0) + b(\beta, 0, 1) & \in [-11^{-j}, 11^{-j}] \times [-j,j]^2 \\ & \subset \frac{10^j}{11^j} A^{-j}(B). \end{align*} Since $A^{-j}(B)$ is convex and symmetric, all integer multiples $mv$ with $|m| \le (11/10)^j$ belong to $A^{-j}(B)$, and hence $\#| A^{-j}(B) \cap \Gamma| \ge \frac{11^j}{10^j}$ for large $j$. Hence, by Theorem \ref{nece} no $(A,\Gamma)$ wavelet sets exist. Next, let $R > 0$ and $F \subset \Gamma$. We need to show that \[ m_d(V \cap A^{-j}({\mathbf {B}}(0, R))) \to 0, \] where $V$ is the span of $F$ and $d=\dim V$. If $\dim V = 1$, then this follows from the fact that there are no elements of $\Gamma$ in ${\rm {span}} \{(0, 1, 0), (0, 0, 1)\}$. If $\dim V = 2$, let $\xi$ be a norm-one vector such that $V = \xi^\perp$. Since $\xi \not\in {\rm {span}} \{(1, 0, 0)\}$, by Lemma \ref{cross-section} we have \[ m_2(A^{-j}({\mathbf {B}}(0, R)) \cap \xi^\perp) = C\frac{ \|A^{-j} \xi\|^{-1}}{|\det(A^{j})|} \to 0 \qquad\text{as }j\to \infty. \qedhere\] \end{proof} \bibliographystyle{plain}
\section{Introduction}\label{int} The remarkable result of Forstneri\v{c} \cite{Fo} on the classification problem of proper holomorphic mappings between unit balls states that if $f$ is a proper holomorphic map from a ball in $\mathbb C^n$ to a ball in $\mathbb C^N$ and is smooth of class $C^{N-n+1}$ on the closure, then $f$ is a rational mapping. He posed the question of the holomorphic extendibility of such a rational mapping to any boundary point. In \cite{CS}, Cima and Suffridge proved that every such mapping extends holomorphically to a neighborhood of the closed ball. This result was extended by Chiappari \cite{Ch} by replacing the unit ball in the domain with an arbitrary real analytic hypersurface in $\mathbb C^n$. These results are also related to the regularity of CR mappings between real hypersurfaces. When the real hypersurfaces lie in complex spaces of the same dimension, CR mappings of given smoothness must be real-analytic (see for example \cite{BER}). In the case of real hypersurfaces of different dimensions, analyticity of CR mappings with given smoothness on the boundary was shown provided that the target is a real sphere (see for example \cite{BHR, Mir}). In the proofs, one first shows that the CR mappings extend meromorphically; then, using the results of Chiappari and Cima-Suffridge, this meromorphic extension defines an analytic extension. In this work, we obtain a holomorphic extension result for meromorphic mappings with more general target spaces. More precisely, we prove the following theorem. \begin{Thm}\label{main} Let $M\subset \mathbb C^n$ be a real analytic hypersurface and $M'\subset\mathbb C^N$ be a strongly pseudoconvex real algebraic hypersurface which is locally equivalent to ${\sf Im}\, z_N'=p(z',\bar z')$ by a birational holomorphic change of coordinates at a point $q\in M'$, where $(z',z_N')\in \mathbb C^N$, $N\geq n$ and $p(z',\bar z')$ is a real valued polynomial. Let $U\subset \mathbb C^n$ be a neighborhood of a point $p\in M$ and $\Omega$ be the portion of $U$ lying on one side of $M$. If $F:U\rightarrow \mathbb C^N$ is a meromorphic mapping which maps $\Omega$ holomorphically to one side of $M'$, extends continuously to $\overline \Omega$, $F(M\cap U)\subset M' $ and $F(p)=q$, then $F$ extends holomorphically to a neighborhood of $p$. \end{Thm} In the statement of Theorem \ref{main}, by $F(M\cap U)\subset M'$, we mean that $\displaystyle \lim_{\Omega \ni z\to p} F(z)\in M'$ and $\displaystyle F(p):= \lim_{\Omega \ni z\to p} F(z)$ for all $p\in M\cap U$. Note that Theorem \ref{main} improves the result of Chiappari \cite{Ch} by replacing the sphere in the target with a special type of real algebraic hypersurface. One cannot expect to have such an extension for mappings with arbitrary targets. There are examples of proper rational mappings from the unit ball to a compact set that cannot be extended holomorphically through the boundary (see \cite{CS}, \cite{IM}). \section{Proof of Theorem \ref{main}} \begin{proof} For simplicity, we will take $p=(0,\dots,0).$ Since the ring of germs of holomorphic functions is a unique factorization domain, we may assume that $F=\frac{f}{g}$ where $f=(f_j)_{1\leq j \leq N}$ is a holomorphic mapping and $g$ is a holomorphic function near $0\in \mathbb C^n$ which has no common factor with $f$. If $g(0)\neq 0$, then $F$ defines a holomorphic mapping near $0$. Hence we may assume that $g(0)=0$.
Let $M$ be given by $\psi(z,\overline{z})=0$ for some real analytic function $\psi$ near $0$ such that $\frac{\partial \psi}{\partial z_1}(0)\neq 0$. We define a non-zero holomorphic function $m(z)=\sum_{i=1}^{n}m_i z_i$ where $m_i=\frac{\partial \psi}{\partial z_i}(0)$. Since the zero sets of holomorphic functions are of measure zero, we can find a point $a=(a_1,\dots,a_n)\neq 0$ such that $m(a)\neq 0$, $g(a)\neq 0$, $f_j(a)\neq 0$ for all $j=1,\dots , N$. Here we have assumed that none of the $f_j$'s is identically equal to $0$; if some of them are, we simply replace those $f_j$'s with zeros in the rest of the proof. Now we change the coordinates by $$z_i=a_i\zeta_1+\sum_{j=2}^{n}b_{ij}\zeta_j .$$ Since $a\neq 0$, we can choose $b_{ij}$ so that $\zeta=(\zeta_1,\dots,\zeta_n)$ gives a non-singular linear change of coordinates. In these new coordinates $\zeta=(\zeta_1,\dots ,\zeta_n)$, we have that $f_j(1,0,\dots,0)=f_j(a)\neq 0$, $g(1,0,\dots,0)=g(a)\neq 0$ and $$\frac{\partial \psi }{\partial \zeta_1}(0)=\sum_{i=1}^n\frac{\partial \psi}{\partial z_i}(0)a_i=m(a)\neq 0.$$ For convenience, we will denote the new coordinates by $z$ again. Then we may assume that $f_j(z_1,0)\not \equiv 0$, $g(z_1,0)\not\equiv 0$ and $\frac{\partial \psi}{\partial z_1}(0)\neq 0$. Hence $M$ can be defined as a graph $z_1=\rho(\overline{z_1}, \tilde z, \overline {\tilde z})$ where $\tilde z=(z_2,\dots,z_n)$ and $\rho(z_1,\lambda, \tau)$ is a holomorphic function near $0$ in $\mathbb C\times \mathbb C^{n-1}\times \mathbb C^{n-1}$. We may also assume that $\rho(\overline {z_1},0,0)=\overline{z_1}.$ By the Weierstrass preparation theorem, $g$ can be written as $g(z_1,\tilde{z})=u(z_1,\tilde{z})h(z_1,\tilde{z})$ where $u(z_1,\tilde{z})=\sum_{j=0}^{l}a_j(\tilde{z})z_1^j$ is a Weierstrass polynomial with $a_l(\tilde{z}) \equiv 1$, $a_j(0)=0$ for $0\leq j \leq l-1$, and $h(0,0)\neq 0.$ Since $F$ is bounded as $ z=(z_1,\tilde {z})\rightarrow 0$ in $\Omega$, $f_j$ can be decomposed as $f_j(z_1,\tilde{z})=u_j(z_1,\tilde{z})h_j(z_1,\tilde{z})$ where the $u_j$'s are Weierstrass polynomials in $z_1$ of degree $k_j\geq l$ and $h_j(0,0)\neq 0$. Using the division algorithm, one can find $d_j$ and $r_j$, where $r_j$ has degree smaller than $l$ in $z_1$, such that $u_j(z_1,\tilde{z})=u(z_1,\tilde{z})d_j(z_1,\tilde{z})+r_j(z_1,\tilde{z}).$ Setting $D_j=d_jh_j$, $R_j=r_jh_j$, $D=(D_j)_{j=1}^N$, $R=(R_j)_{j=1}^N$, we have $f=uD+R$. Our aim is to show that $R\equiv 0$.\\ Since $M'$ is strongly pseudoconvex, by a holomorphic change of variables it can be written as $$M'=\{({z'},z'_N)\in \mathbb C^N: {\sf Im}\, z_N'-\sum_{j=1}^{N-1}|z_j'|^2+\varphi({z'},\overline{ {z'}})=0\}$$ where $\varphi\equiv 0 $ or $\varphi$ is a real valued polynomial of total degree bigger than $2$. If $\varphi\equiv 0$ then $M'$ is locally equivalent to the real sphere in $\mathbb C^N$ and Theorem \ref{main} follows from the main result in \cite{Ch}. Hence we can assume that $\varphi \not\equiv 0$.
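For instance (this example is only an illustration and is not used in the argument), if $N=2$ and $M'$ is given near $q$ by ${\sf Im}\, z_2'=|z_1'|^2-|z_1'|^4$, then $$\varphi(z_1',\overline{z_1'})=|z_1'|^4=(z_1')^2(\overline{z_1'})^2,$$ and the integer $d$ introduced below equals $2$; the choice $\varphi={\sf Re}\,\bigl((z_1')^3\overline{z_1'}\bigr)$ would instead give $d=3$.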
Let's write $\varphi$ as $$\varphi({z'},\overline{{z'}})=\sum_{I,J}\alpha_{I,J}{z'}^{I}{\overline{z'}}^J .$$ Since $\varphi$ is real valued, $\overline {\alpha_{IJ}}=\alpha_{JI}$ and hence the highest degrees of $z'$ and $\overline {z'}$ in $\varphi$ are the same, say they are equal to $d\geq 2.$ \\ Since $F$ maps $M$ into $M'$, $\forall z\in M$ we have that \begin{eqnarray}\label{1} \frac{f_N(z)}{g(z)}- \frac{\overline{f_N(z)}}{\overline{g(z)}}-2i\sum_{j=1}^{N-1}\frac{|f_j(z)|^2}{|g(z)|^2}-2i\varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{\tilde f(z)}}{\overline {g(z)}}\right)\equiv 0 \end{eqnarray} where $\tilde{f}=(f_1,\dots,f_{N-1})$. Multiplying both sides of the above equation by $g(z)^d{\overline{g(z)}}^d$, we obtain that \begin{eqnarray}\label{new0} f_N(z)g(z)^{d-1}\overline {g(z)}^d-\overline{f_N(z)}g(z)^d\overline {g(z)}^{d-1}-2i\sum_{j=1}^{N-1}| f_j(z)|^2g(z)^{d-1}\overline {g(z)}^{d-1} \\ \nonumber -2i g(z)^d{\overline{g(z)}}^d \varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{\tilde f(z)}}{\overline {g(z)}}\right)\equiv 0 \end{eqnarray} For $z=(z_1,\tilde{z})$, we set $z^*=(\rho(\overline{z_1},\tilde{z},\overline {\tilde{z}}),\tilde{z})$, $s^*(z)=\overline{s(z^*)}$ for any function $s$. Then \begin{eqnarray}\label{new} f_N(z)g(z)^{d-1}\overline {g(z^*)}^d-\overline{f_N(z^*)}g(z)^d\overline {g(z^*)}^{d-1}-2i\langle \tilde f(z),\tilde f(z^*)\rangle g(z)^{d-1}\overline {g(z^*)}^{d-1}\\ \nonumber -2i g(z)^d{\overline{g(z^*)}}^d \varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{\tilde f(z^*)}}{\overline {g(z^*)}}\right) \end{eqnarray} is a holomorphic function of $z_1$ and by (\ref{new0}) it is equal to $0$ whenever $z_1=\rho(\overline{z_1},\tilde z,\overline {\tilde z})$, that is when $z=z^*$. Here $\langle \, , \rangle$ denotes the standard inner product, that is $\langle a,b \rangle =\sum_{i=1}^N a_i\overline {b_i}$ for $a=(a_1,\dots,a_N)$ and $b=(b_1,\dots,\ b_N)$ in $\mathbb C^N$. For a fixed $\tilde z_0$, the real codimension of the set $\{z_1=\rho(\overline{z_1},\tilde z_0,\overline{\tilde z_0})\}$ in $\mathbb C^n$ is at most the sum of real codimensions of $M$ and $\{\tilde{z}=\tilde z_0\}$. Hence the real dimension of the set $\{z_1=\rho(\overline{z_1},\tilde z_0,\overline{\tilde z_0})\}$ is at least $1$. It follows that the function above is identically $0$ as a function of $z_1$. Using the identities $f=uD+R$, $g=uh$ and $\tilde f(z^*)=\overline {\tilde f^*(z)}$, it follows from (\ref{new}) that \begin{eqnarray}\label{2} &&u^d{u^*}^d(D_Nh^{d-1}{h^*}^d-D_N^*h^d{h^*}^{(d-1)}-2i\langle \tilde D,\overline{\tilde D^*}\rangle h^{d-1}{h^*}^{(d-1)})\\ && +u^{d-1}{u^*}^d (R_N h^{d-1}{h^*}^d -2ih^{d-1}{h^*}^{(d-1)}\langle \tilde R,\overline{{\tilde D}^*} \rangle) \nonumber \\ &&+u^d{u^*}^{(d-1)}(R_N^*h^d{h^*}^{(d-1)} -2ih^{d-1}{h^*}^{(d-1)}\langle \tilde D,\overline{\tilde R^*}\rangle ) \nonumber \\ &&-2iu^{d-1}{u^*}^{(d-1)}h^{d-1}{h^*}^{(d-1)}\langle \tilde R,\overline {\tilde R^*}\rangle-2ig(z)^d{\overline{g(z^*)}}^d \varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{\tilde f(z^*)}}{\overline {g(z^*)}}\right)\equiv 0. \nonumber \end{eqnarray} where $\tilde D=(D_1,\dots,D_{N-1})$ and $\tilde R=(R_1,\dots,R_{N-1})$. Let $\tilde z=0$. We note that $$u^*(z_1,0)=\overline{ u(\rho(\overline {z_1},0,0),0)}=\overline{\rho(\overline {z_1},0,0)^l}=z_1^{l} $$ and $u(z_1,0)=z_1^l$. Let us assume that $R(z_1,0){\equiv \!\!\!\!\!\! 
/ \,\,} 0$ and the multiplicity of $z_1$ in $R(z_1,0)$ is $a$ for some $0\leq a < l.$ That is, $R(z_1,0)=z_1^a Q(z_1)$ for some holomorphic function $Q$ such that $Q(0)\neq 0$. The multiplicity of $z_1$ in the first summand of the function in (\ref{2}) is greater than or equal to $2dl$. In the second and the third summands, the multiplicity of $z_1$ is greater than or equal to $(2d-1)l+a$. In the fourth summand, the multiplicity of $z_1$ is greater than or equal to $2(d-1)l+2a$. The equation $(\ref{2})$ implies that \begin{eqnarray}\label{111}\min\{2dl,(2d-1)l+a,2(d-1)l+2a \}=2(d-1)l+2a \end{eqnarray} must be smaller than or equal to the multiplicity of $z_1$ in the last term $g(z)^d{\overline{g(z^*)}}^d \varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{\tilde f(z^*)}}{\overline {g(z^*)}}\right)$. Note that the multiplicities of $z_1$ in $f=uD+R$ and in $g=uh$ are $a$ and $l$, respectively. By writing $$g(z)^d{\overline{g(z^*)}}^d \varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{\tilde f(z^*)}}{\overline {g(z^*)}}\right)=\sum_{|I|,|J|\leq d}\alpha_{IJ}\tilde f(z)^Ig(z)^{d-|I|}{\overline{\tilde f(z^*)}}^J{\overline{g(z^*)}}^{d-|J|}$$ we see that the multiplicity of $z_1$ in the last summand in $(\ref{2})$ is equal to \begin{eqnarray} \label{112} \min_{|I|,|J|, \alpha_{IJ} \ne 0}\{a|I|+l(d-|I|)+a|J|+l(d-|J|) \}\leq \min_{|J|}\{ad+ld+|J|(a-l) \}. \end{eqnarray} The inequality above is obtained by taking $|I|=d$. We have the following cases for $d$. \\ Case 1: $d=2$. Since the total degree of $\varphi$ is bigger than or equal to $3$, when $|I|=2$, $|J|$ must be at least one. Then it follows from $(\ref{111})$ and $(\ref{112})$ that $$2l+2a\leq \min_{|J|}\{2a+2l+|J|(a-l) \}\leq 3a+l.$$ The second inequality above follows from the fact that $|J|\geq 1$ and $a-l<0$. But this implies that $l\leq a$, which contradicts the choice of $a$, and hence $R(z_1,0)\equiv 0.$ \\ Case 2: $d>2$. It follows from $(\ref{111})$ and $(\ref{112})$ that $$2(d-1)l+2a\leq \min_{|J|}\{ad+ld+|J|(a-l) \}\leq ad+ld.$$ But this implies that $d\leq 2$. Hence again we have that $R(z_1,0)\equiv 0$. Now we suppose that $R\not\equiv 0$. We may assume that $R(a)\neq 0$ for some $a=(a_1,\dots,a_n)$ such that $a_2\neq 0$. We change the coordinates from $z$ to $\zeta$ defined by $$z_1=a_1\zeta_2+ \zeta_1,\; z_i=a_i\zeta_2+\sum_{j=3}^nb_{ij}\zeta_j, \; i=2,\dots,n.$$ Since $(a_2,\dots,a_n)\neq 0$, $b_{ij}$ can be chosen so that $\zeta$ gives a non-singular linear change of coordinates. In these new coordinates, $R(0,1,0,\dots,0)=R(a)\neq 0$, $R(\zeta_1,0)\equiv 0$, $f_j(\zeta_1,0)\not \equiv 0$, $g(\zeta_1,0)\not \equiv 0$ and $\frac{\partial \psi}{\partial \zeta_1}(0)=\frac{\partial \psi}{\partial z_1}(0)\neq 0$. We denote these new coordinates by $z$ again. Then $$R(z_1,0)\equiv 0, \; R(z_1,z_2,0,\dots,0)\not\equiv 0,$$ $f_j$ and $g$ do not vanish on the $z_1$-axis and $M$ can be written as $z_1=\rho(\overline {z_1}, \tilde z, \overline {\tilde z} )$ near the origin. Since $R(z_1,z_2,0,\dots,0)$ is a non-zero analytic function of $z_2$ vanishing at $z_2=0$, there exists a largest integer $k\geq 1$ such that $z_2^k$ divides $R(z_1,z_2,0,\dots,0)$. We define $G_i=\frac{R_i}{z_2^k}$, $G=(G_1,\dots,G_N)$ and $\tilde G=(G_1,\dots, G_{N-1})$. Note that \begin{eqnarray} \label{113} G(z_1,0,\dots ,0)\neq 0. 
\end{eqnarray} Then dividing the terms in (\ref{2}) by $|z_2|^{2k}$, we obtain that \begin{eqnarray}\label{12} && \frac{1}{|z_2|^{2k}}u^d{u^*}^d(D_Nh^{d-1}{h^*}^d-D_N^*h^d{h^*}^{(d-1)}-2i\langle \tilde D,\overline{\tilde D^*}\rangle h^{d-1}{h^*}^{(d-1)})\\ && +\frac{1}{{\overline{z_2}}^k} u^{d-1}{u^*}^d (G_N h^{d-1}{h^*}^d -2ih^{d-1}{h^*}^{(d-1)}\langle \tilde G,\overline{{\tilde D}^*} \rangle) \nonumber \\ &&+\frac{1}{{z_2}^k}u^d{u^*}^{(d-1)}(G_N^*h^d{h^*}^{(d-1)} -2ih^{d-1}{h^*}^{(d-1)}\langle \tilde D,\overline{\tilde G^*}\rangle ) \nonumber \\ &&-2iu^{d-1}{u^*}^{(d-1)}h^{d-1}{h^*}^{(d-1)}\langle \tilde G,\overline {\tilde G^*}\rangle-\frac{2i}{|z_2|^{2k}}g(z)^d{\overline{g(z^*)}}^d \varphi\left(\frac{\tilde f(z)}{g(z)},\frac{\overline{ \tilde f(z^*)}}{\overline {g(z^*)}}\right)\equiv 0. \nonumber \end{eqnarray} We take $(z_3,\dots,z_n)=0$ and let $z_2\rightarrow 0$ in the above equation. Considering the order of $z_1$ in all terms in (\ref{12}), as in the above argument for $R(z_1,0,\dots,0)$, we obtain that $G(z_1,0,\dots,0)=0$ when $z_1=\rho(\overline{z_1},0,0)$. But this contradicts $(\ref{113})$. Consequently, $R\equiv 0$ and $F=\frac{f}{g}=\frac{D}{h}$ defines a holomorphic mapping near $0\in\mathbb C^n.$ \end{proof} \noindent \textbf{Acknowledgments.} I am grateful to Nordine Mir for his suggestion to work on this problem and for useful discussions on this subject. I would like to thank the referee for his/her remarks and suggestions, which helped to improve the presentation of the paper. The author is supported by T\"UB\.ITAK 2232 Proj. No 117C037.
\section{Acknowledgements} MT would like to acknowledge support from the Open Partnership Joint Projects of JSPS Bilateral Joint Research Projects and the ImPACT Program of the Council for Science, Technology and Innovation, Japan. KPS and JPD would like to acknowledge support from the Air Force Office of Scientific Research, the Army Research Office, the National Science Foundation, and the Northrop Grumman Corporation. CY would like to acknowledge support from an Economic Development Assistantship from the Louisiana State University System Board of Regents. \bibliographystyle{apsrev4-1}
\section{Parallelization as Information Exchange} \label{sec:communication-graph} We observe that our parallelization approach implicitly defines a certain type of informational dependency. In both \eqref{def:greedy-algorithm} and \eqref{def:parallel-greedy}, each agent uses the choices of prior agents to make its own decision. Alternatively, we can view this process as an agent communicating its decision to other agents who depend on this piece of information. In our parallelization model, each agent is required to broadcast its decision to all of the later agents. This requirement seems rather restrictive, and thus in this section we will reduce the number of informational dependencies by relaxing the constraints. Specifically, we will treat the communication between one pair of agents as an individual entity so that an agent only has to send its choice to selected later agents and receive choices from selected previous agents. \subsection{Parallelization and Information} \label{sec:parallel-communication} \begin{figure*} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[scale=1.2]{fig/weird_parallel_graph.pdf} \caption{The optimal assignment for $n = 5$, $q = 2$ as described in \eqref{equ:weird-parallel}.} \label{fig:weird-parallel-graph} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[scale=1.2]{fig/weird_parallel_bad_graph.pdf} \caption{A non-optimal assignment for $n=5, q = 2$.} \label{fig:bad-parallel-weird-graph} \end{subfigure} \hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[scale=1.2]{fig/standard_parallel_graph.pdf} \caption{The optimal assignment for $n = 5$, $q = 3$ as described in \eqref{equ:standard-parallel}.} \label{fig:standard-parallel-graph} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[scale=1.2]{fig/bad_parallel_graph.pdf} \caption{A non-optimal assignment for $n=5, q = 3$.} \label{fig:bad-parallel-graph} \end{subfigure} \caption{ The induced information graphs of the iteration assignments from Figure \ref{fig:parallel}, as discussed in Section \ref{sec:communication-graph}. In the information graphs, an edge $(i, j)$ where $i < j$ represents that the iteration of agent $i$ is before the iteration of agent $j$, and hence agent $j$ requires knowing the choice made by agent $i$ before making its own decision. Each edge can be thought of as the flow of information between the agents, in which a prior agent shares its choice so that later agents can decide. } \label{fig:parallel-graph} \end{figure*} We can model the information exchange as an undirected graph $G = (V, E)$, where $V = N$ represents the agents and each edge $(i, j) \in E$ represents information being exchanged between two agents. Since the greedy algorithm has an implicit ordering induced by the agents' names, for every pair $i < j$ so that $(i, j) \in E$, agent $j$ requires knowing the choice of agent $i$, but agent $i$ does not access agent $j$'s decision. Hence, we can use $G$, which we will call the \textit{information graph}, to determine the set of prior agents which agent $i \in N$ accesses as $\mathcal{N}_i = \{j < i \mid (j, i) \in E\} \subseteq [i-1]$. 
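As a minimal illustrative sketch (not part of the formal development), the following Python snippet stores an information graph by its edge set and recovers the neighborhoods $\mathcal{N}_i$; the edge set shown is the one induced by the iteration assignment of Figure \ref{fig:standard-parallel-graph}, and the function name is our own.
\begin{verbatim}
# Minimal sketch (illustration only): an information graph on agents 1..n,
# stored as a set of undirected edges (i, j) with i < j, together with the
# neighborhoods N_i of prior agents whose choices agent i observes.

def in_neighbors(edges, i):
    """Return N_i = {j < i : (j, i) in E}."""
    return {j for (j, k) in edges if k == i}

# Edges induced by the n = 5, q = 3 assignment of fig:standard-parallel-graph,
# i.e. iterations {1, 2}, {3, 4}, {5}: each agent hears every earlier iteration.
E = {(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 5), (4, 5)}
print({i: sorted(in_neighbors(E, i)) for i in range(1, 6)})
# -> {1: [], 2: [], 3: [1, 2], 4: [1, 2], 5: [1, 2, 3, 4]}
\end{verbatim}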
From here, it is natural to consider the generalized greedy algorithm proposed in \cite{gharesifard2017distributed}: \begin{equation} \label{def:general-greedy} x_i \in \argmax_{x_i \in X_i} f(x_i \mid x_{\mathcal{N}_i}) \end{equation} First, we notice that the parallelized greedy algorithm as defined in \eqref{def:parallel-greedy} is a special case of \eqref{def:general-greedy}. In the parallelized greedy algorithm, each agent requires information from all agents executed in prior iterations. Therefore, an iteration assignment $P \in \mathcal{P}_{n, q}$ induces a corresponding information graph $G = (V, E)$ so that $V = N$, $E = \{(i, j) \mid P(i) < P(j)\}$. Figure \ref{fig:parallel-graph} illustrates how the iteration assignments from Figure \ref{fig:parallel} induce corresponding information graphs. For the rest of this section, when we mention an iteration assignment, it is with the understanding that we are interested in its induced information graph. Conversely, an information graph induces an ordering to parallelize the generalized greedy algorithm. If all agents in $\mathcal{N}_i$ have made their decisions but agent $j < i$ is still undecided, then we can let agents $i$ and $j$ make their decisions in the same iteration without affecting the algorithm. Intuitively, we can think of information being relayed through some paths in $G$, and the earliest iteration in which agent $i$ can decide depends on the longest path in $G$ leading to node $i$. Formally, a simple induction argument shows that the function $\overline{P} : G \times [n] \to [n]$ determines the earliest iteration in which an agent could decide: \begin{equation} \overline{P}(G; i) = \begin{cases} 1 & \text{if } \mathcal{N}_i = \emptyset \\ 1 + \max_{j \in \mathcal{N}_i} \overline{P}(G; j) & \text{otherwise} \end{cases} \end{equation} This function $\overline{P}$ can be used to construct a parallelization of the greedy algorithm in which agent $i$ makes its decision in iteration $\overline{P}(G; i)$ and for every $j \in \mathcal{N}_i$, $\overline{P}(G; j) < \overline{P}(G; i)$. From here we can define $\mathcal{G}_{n, q}$ as the set of graphs which induce a parallelization with at most $q$ iterations: \begin{equation} \mathcal{G}_{n, q} = \{G = (V, E) : |V| = n, \max_i \overline{P}(G; i) \le q\} \end{equation} From the definitions, it is clear that $\mathcal{P}_{n, q} \subseteq \mathcal{G}_{n, q}$. Hence we can adopt competitive ratios for information graphs so that \eqref{def:comp-ratio-parallel}, \eqref{def:comp-ratio-over-all-parallel} and \eqref{def:optimal-parallel} become: \begin{equation} \label{def:comp-ratio-graph} \gamma(G) = \inf_{f \in \mathcal{F}, X} \gamma(f, X, G) \end{equation} \begin{equation} \label{def:comp-ratio-over-all-graph} \eta(n, q) = \sup_{G \in \mathcal{G}_{n, q}} \gamma(G) \end{equation} \begin{equation} \label{def:optimal-graph} G^*_{n, q} \in \argmax_{G \in \mathcal{G}_{n, q}} \gamma(G) \end{equation} \begin{comment} and \eqref{def:comp-ratio-parallel-beta} and \eqref{def:comp-ratio-over-all-parallel-beta} become: \begin{equation} \label{def:comp-ratio-graph-beta} \gamma'(G) = \inf_{f \in \mathcal{F}', X} \gamma(f, X, G) \end{equation} \begin{equation} \label{def:comp-ratio-over-all-graph-beta} \eta'(n, q) = \sup_{G \in \mathcal{G}_{n, q}} \gamma'(G) \end{equation} \end{comment} where $\gamma(f, X, G)$ is $f(x) / f(x^*)$ from the generalized greedy algorithm subject to objective function $f$, set of action profiles $X$ and information graph $G$. 
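Continuing the sketch above (again purely illustrative; the helper names are our own), the recursion defining $\overline{P}$ translates directly into code, which also gives a membership test for $\mathcal{G}_{n, q}$.
\begin{verbatim}
# Minimal sketch (illustration only), continuing the previous snippet:
# \bar{P}(G; i) computed from the neighborhoods N_i, and a test for
# membership in \mathcal{G}_{n, q}.

def earliest_iterations(neighbors):
    """neighbors[i] = N_i; returns {i: \bar{P}(G; i)} for every agent i."""
    memo = {}
    def P_bar(i):
        if i not in memo:
            Ni = neighbors[i]
            memo[i] = 1 if not Ni else 1 + max(P_bar(j) for j in Ni)
        return memo[i]
    return {i: P_bar(i) for i in neighbors}

def in_G_nq(neighbors, q):
    return max(earliest_iterations(neighbors).values()) <= q

# Neighborhoods of the 8-edge graph from the previous snippet:
N = {1: set(), 2: set(), 3: {1, 2}, 4: {1, 2}, 5: {1, 2, 3, 4}}
print(earliest_iterations(N))   # -> {1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
print(in_G_nq(N, 3))            # -> True
\end{verbatim}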
It is natural to ask whether the above generalization of the parallelized greedy algorithm yields a higher competitive ratio under the same number of agents and iterations. Also, in some applications, the information exchange may incur some costs, and therefore having an information graph with many edges is undesirable. So we wish to answer a second question: whether we can achieve the same competitive ratio in \eqref{equ:parallel-exact} with information graphs that have fewer edges than those of \eqref{equ:weird-parallel} and \eqref{equ:standard-parallel}. Theorem \ref{thm:best-parallel-graph}, as presented below, shows that generalizing to information graphs does not yield a higher competitive ratio but allows us to achieve the same optimal competitive ratio with fewer edges. \subsection{Preliminary: Graph Theory} The main result on information graphs leverages several concepts from graph theory. We assume that we have an undirected graph $G = (V, E)$: \begin{definition} Nodes $K \subseteq V$ form a \textit{clique} if for all distinct $i, j \in K$, $(i, j) \in E$. The \textit{clique number} $\omega(G)$ of a graph $G$ is the size of the maximum clique in $G$. \end{definition} \begin{definition} A \textit{clique cover} of a graph $G$ is a partition of $V$ so that each set in the partition forms a clique. And the \textit{clique cover number} $\theta(G)$ is the least number of cliques necessary to form a clique cover. \end{definition} \begin{definition} Nodes $I \subseteq V$ form an \textit{independent set} if for all distinct $i, j \in I$, $(i, j) \not\in E$. The \textit{maximum independent set} is an independent set with the largest possible number of nodes. And the \textit{independence number} $\alpha(G)$ is the size of the maximum independent set. \end{definition} Lastly, a \textit{Tur\'{a}n graph} $T(n, r)$ of $n$ vertices and clique number $r$ is constructed as follows: \begin{itemize} \item We partition the $n$ vertices into $r$ subsets as evenly as possible. More specifically, we have $(n \mod{r})$ subsets with $\lceil n/r \rceil$ vertices and $(r - n \mod{r})$ subsets with $\lfloor n/r \rfloor$ vertices. \item We draw an edge between two vertices if and only if they belong to different subsets. \end{itemize} And it has the following interesting property~\cite{turan1941external}: \begin{lemma}[Tur\'{a}n] The Tur\'{a}n graph $T(n, r)$ is the $n$-vertex graph with the largest number of edges that does not contain an $(r+1)$-clique. In other words: \begin{equation} T(n, r) = \argmax_{G = (V, E): \, |V| = n, \, \omega(G) \le r} |E| \end{equation} \end{lemma} \input{proof-parallel.tex} \section{Proof of Theorem \ref{thm:best-parallel-strict-monotone-graph}} \label{sec:complete-proof-parallel-beta} First, we note that the function $g(x) = (x - (x-1) \lambda) / x$ is decreasing, and the upper bound in \eqref{equ:strict-monotone-lower} is equal to $g(\alpha(G))$. The upper bound in \eqref{equ:strict-monotone-parallel-bounds-graph} then follows from combining Lemma \ref{thm:parallel-indep-number} and the upper bound in Lemma \ref{thm:strict-monotone-lower}. Consider the graph $G$ described by \eqref{equ:complement-turan}; as we already argued previously in Appendix \ref{sec:complete-proof-parallel}, $r = \alpha(G) = \theta(G)$ and $G \in \mathcal{G}_{n, q}$. Hence, from the lower bound in Lemma \ref{thm:strict-monotone-lower}, we conclude that this graph achieves the lower bound in \eqref{equ:strict-monotone-parallel-bounds-graph}, so we are done. 
\hfill$\blacksquare$ \section{Proof of Theorem \ref{thm:best-parallel-graph}} \label{sec:complete-proof-parallel} We shall prove Theorem \ref{thm:best-parallel-graph} by applying Lemmas \ref{thm:parallel-indep-number}, \ref{thm:david} and \ref{thm:david-sibling}. We first consider when $n \equiv 1 \pmod{q}$. Combining Lemmas \ref{thm:parallel-indep-number} and \ref{thm:david} yields that $\gamma(G) \le 1/r$. Now let $G$ be the graph described in \eqref{equ:weird-graph} and we can show that $\gamma(G) \ge 1/r$. First note that for any $i < n$, $\overline{P}(G; i) = \lceil i / (r-1) \rceil$ and if $i = n$, $\overline{P}(G; i) = q$. Hence this graph is in $\mathcal{G}_{n, q}$. Let $T = \{i \mid i \le (r-1)(q-1) \}$, and note that for all $i \le n$, $\mathcal{N}_i \subseteq T$. Then we have: \begin{subequations} \begin{align} f(x^*) &\le f(x^*, x_T) \label{equ:weird-graphL1} \\ &= f(x_T) + \sum_{i=1}^n f(x^*_i \mid x^*_{1:i-1}, x_T) \label{equ:weird-graphL2} \\ &\le f(x_T) + \sum_{i=1}^n f(x^*_i \mid x_{\mathcal{N}_i}) \label{equ:weird-graphL3} \\ &= f(x_T) + \sum_{j=1}^{r-1} \sum_{i=0}^{q-1} f(x^*_{i(r-1)+j} \mid x_{\mathcal{N}_i}) + f(x^*_n \mid x_T) \label{equ:weird-graphL4} \\ &\le f(x_T) + \sum_{j=1}^{r-1} \sum_{i=0}^{q-1} f(x_{i(r-1)+j} \mid x_{\mathcal{N}_i}) + f(x_n \mid x_T) \label{equ:weird-graphL5} \\ &= \sum_{j=1}^{r-1} f(x_{\{j, j+(r-1), \dots, j+(r-1)(q-1)\}}) + f(x_n, x_T) \label{equ:weird-graphL6} \\ &\le r f(x) \label{equ:weird-graphL7} \end{align} \end{subequations} where \eqref{equ:weird-graphL1} and \eqref{equ:weird-graphL7} follow from monotonicity, \eqref{equ:weird-graphL2} and \eqref{equ:weird-graphL6} follow from telescoping sums, \eqref{equ:weird-graphL3} follows from submodularity, \eqref{equ:weird-graphL4} rearranges the sum according to the graph structure as defined in \eqref{equ:weird-graph}, and \eqref{equ:weird-graphL5} follows from the definition of the greedy algorithm as stated in \eqref{def:general-greedy}. From here we can conclude that $\eta(n, q) = 1/r$ when $n \equiv 1 \pmod{q}$. Now we consider the case where $n \not\equiv 1 \pmod{q}$. If $\alpha(G) > r$, then by Lemma \ref{thm:david}, $\gamma(G) \le 1/(r + 1)$. \begin{comment} If $\alpha(G) = r$, then $|D_q| \le r$. Since $n \not\equiv 1 \pmod{c}$, therefore $n > q(r-1) + 1$, or equivalently $n-r > (q-1)(r-1)$. Hence $|\bigcup_{1 \le k < q} D_k| > (q-1)(r-1)$. Now applying the pigeonhole principle, we conclude that there exists some $k < q$, so that $|D_k| = r$. Then for this particular $k$, there must exist some $w \le n, d \in D_k$ so that $d \in \mathcal{N}_w$. Then, Lemma \ref{thm:david-sibling} implies that $\gamma(G) \le 1/(r + 1)$. \end{comment} If $\alpha(G) = r$, we suppose for the sake of contradiction that condition of Lemma \ref{thm:david-sibling} fails to hold. Let $q' = \argmax_k |D_k| > 0$. Then we need $|D_k| < r$ for any $k < q'$, otherwise we can pick $w$ be any agent in iteration $k+1$. So we can have at most $(r-1)(q'-1) + r \le (r-1)q + 1$ agents. But the condition $n \not\equiv 1 \pmod{q}$ implies that $n > q(r-1) + 1$, contradiction. So by Lemma \ref{thm:david-sibling}, $\gamma(G) \le 1/(r + 1)$. Now let $G$ be the graph described by \eqref{equ:complement-turan}. For any $i \in n$, $\overline{P}(G; i) = \lceil i / r \rceil \le q$, hence this graph is in $\mathcal{G}_{n, q}$. Also note that this graph consists of $r$ disjoint cliques, therefore $r = \alpha(G) = \theta(G)$. By Lemma \ref{thm:david}, we have $\gamma(G) \ge 1/(r + 1)$, hence the equality is achieved. 
\hfill$\blacksquare$ \section{Conclusion} In this paper, we derived bounds on the competitive ratio of the parallelized greedy algorithm for both submodular objective functions and those with an additional property of total curvature. We also provided the optimal design which achieves such bound and showed that a graph theoretic approach yields more effective parallelization that still achieves the same competitive ratio. There are several directions of future research. One possibility is to consider whether employing other structural properties on the objective functions can also improve the competitive ratio of greedy algorithm. In particular, we are interested in properties that consider fixed number decisions at once because the total curvature property considers arbitrarily many decisions at once. Another possible direction is to consider applying something other than the marginal contribution in making the greedy decisions. \section{Introduction} Submodular maximization is an important topic with relevance to many fields and applications, including sensor placement \cite{krause2006near}, outbreak detection in networks \cite{leskovec2007cost}, maximizing and inferring influence in a social network \cite{kempe2003maximizing,gomez2012inferring}, document summarization \cite{lin2011class}, clustering \cite{mirzasoleiman2013distributed}, assigning satellites to targets \cite{qu2019distributed}, path planning for multiple robots \cite{singh2007efficient}, and leader selection and resource allocation in multiagent systems \cite{clark2011submodular,marden2016}. An important similarity among these applications is the presence of an objective function which exhibits a ``diminishing returns" property. For instance, a group of leaders can impose some influence on a social network, but the marginal gain in influence achieved by adding a new leader to the group decreases as the size of the group increases. Objective functions (such as influence) satisfying this property are \emph{submodular}. The submodular maximization problem is to choose a set of elements (such as social leaders) which maximize the submodular objective function, according to some constraints. This problem is known to be NP-hard for certain subclasses of submodular functions \cite{lovasz1983submodular}. Therefore, much research has focused on how to approximate the optimal solution \cite{calinescu2007maximizing,minoux1978accelerated,buchbinder2015tight,vondrak2008optimal}. The overall message of this research is that simple algorithms can perform well by providing solutions which are guaranteed to be within some factor of optimal. One such algorithm is the greedy algorithm, first proposed in \cite{nemhauser1978analysis}. It was shown in this seminal work that for certain classes of constraints the solution provided by the greedy algorithm is guaranteed to be within $1-1/e$ of the optimal, and within $1/2$ of the optimal for the more general cases \cite{fisher1978analysis}. Since then, more sophisticated algorithms have been developed to show that there are many instances of the submodular maximization problem which can be solved efficiently within the $1-1/e$ guarantee \cite{calinescu2007maximizing,gairing2009covering}. It has also been shown that progress beyond this level of optimality is not possible using a polynomial-time algorithm \cite{feige1998threshold}. 
More recent research has focused on distributed algorithms, since in many cases having a centralized agent with access to all the relevant data is untenable \cite{clark2016submodularity, mirzasoleiman2013distributed,corah2018distributed}. In this case, the greedy algorithm can be generalized using a set of $n$ agents, each with its own decision set. The combined set of decisions by the agents is evaluated by the submodular objective function, which they seek to maximize. Each agent chooses sequentially, maximizing its marginal contribution with respect to the decisions of the prior agents. In this setting, the greedy algorithm has been shown to provide a solution within $1/2$ of the optimal. While simplistic from an algorithmic perspective, the informational requirement can be quite demanding as agents are required to observe all previous choices. Accordingly, researchers have sought to characterize the impact of reducing this informational dependencies on the resulting performance guarantees~\cite{lin2011class,pan2014parallel,ene2018parallel}. In this paper, we consider the case where no such centralized authority is present, i.e., the agents' decision sets are determined a priori and cannot be modified. Terminating the greedy algorithm in $q < n$ iterations thus requires a partitioning of the agents into sets $1, \dots, q$, where now each agent in set $j$ simultaneously chooses an action which maximizes its marginal contribution relative to the actions chosen by all the agents in sets $1, \dots, j-1$. In this setting, one can ask the following questions: \begin{itemize} \item What is the best way to partition the agents? Should the agents be spread evenly across the sets in the partition, or should set 1 or set $q$ be larger than the others? \item Can we further reduce the amount of informational dependencies without sacrificing any performance guarantees? \item If some additional structure is known about the submodular objective function, how does that affect the partitioning strategy? \end{itemize} \begin{comment} In the non-parallelized setting, recent research has explored how the performance of the greedy algorithm degrades as we relax the information sharing constraint that each agent must have access to the decisions of all prior agents \cite{gharesifard2017distributed,grimsman2018impact}. Therefore, it is a natural extension to ask in the parallelized setting: \begin{enumerate} \setcounter{enumi}{2} \item If we relax the requirement that an agent in partition set $j$ must have access to the decisions of all agents in sets $1, \dots, j-1$, can the same level of performance be maintained by the greedy algorithm? \end{enumerate} \end{comment} In response to the questions listed above, the contributions of this paper are the following: \begin{enumerate} \item Theorem \ref{thm:best-parallel} shows that given time constraint $q$, partitioning the agents into equally-sized sets is the best strategy to parallelize the greedy algorithm. \item Theorem \ref{thm:best-parallel-graph} proves that we can relax the information sharing constraints under a graph-theoretic interpretation and still achieve the same performance guarantees. \item Theorem \ref{thm:best-parallel-strict-monotone-graph} shows that if we know some additional properties of the submodular function, i.e., their total curvature, then we have improved performance guarantees and the strategy in Theorem \ref{thm:best-parallel-graph} is nearly optimal. 
\end{enumerate} This paper is organized as follows: \begin{enumerate} \item Section \ref{sec:model} introduces general definitions and the greedy algorithm; \item Section \ref{sec:parallel-computation} introduces Theorem \ref{thm:best-parallel}; \item Section \ref{sec:parallel-communication} gives a more general graph-theoretic interpretation of the model and introduces Theorem \ref{thm:best-parallel-graph}. \item Section \ref{sec:beta-strict-monotone} defines the total curvature property and introduces Theorem \ref{thm:best-parallel-strict-monotone-graph}. \end{enumerate} \section{Limited Communication} \begin{cor} Given any graph $G = (V, E)$ so that $\gamma(G) \ge 1/(k+1)$ for some integer $k > 1$, then \begin{equation} \begin{aligned} |E| &\ge \frac{1}{2}(n \mod{k}) \left\lceil \frac{n}{k} \right\rceil \left(\left\lceil \frac{n}{k} \right\rceil - 1\right)\\ &\phantom{==}+ \frac{1}{2}(-n \mod{k}) \left\lfloor \frac{n}{k} \right\rfloor \left(\left\lfloor \frac{n}{k} \right\rfloor - 1\right) \end{aligned} \end{equation} where $n = |V|$. Furthermore, the complement Tur\`{a}n graph $\overline{T(n; k)}$ achieves this lower bound. \end{cor} \begin{definition} A distributed submodular maximization problem satisfies \textit{$p$-wise redundancy constraint} if there is some nonnegative function $\Phi$ so that for all $A \subseteq V$ where $|A| = p$, \begin{equation} \sum_{i \in A} f(x_i) \geq f\left(\{x_i\}_{i \in A}\right) \geq \Phi\left( \{f(x_i)\}_{i \in A}, f\left(\{x_i\}_{i \in A}\right) \right). \end{equation} We say the problem is \textit{$p$-additive} if the inequalities above are equalities. \end{definition} Note that $\beta$-strict montonicity is a $p$-wise redundancy constraint for all positive integer $p$. \begin{definition} Let $\gamma_p(G)$ be the efficiency guarantee of $G$ subject to $p$-additivity. Formally, \begin{equation} \gamma_p(G) = \inf_{(f, X) \text{ are $p$-additive}} \gamma(f, X, G) \end{equation} \end{definition} \begin{theorem} \label{thm:not-useful} Given any graph $G = (V, E)$ so that $\gamma_p(G) \ge 1/(k+1)$ for some fixed integers $p \ge 1$ and $k > 1$, then \begin{equation} |E| \ge \left(\frac{1}{k} + o(1)\right)\binom{n}{2} \end{equation} as a function of $n$, where $n = |V|$. Furthermore, the complement Tur\`{a}n graph $\overline{T(n; k)}$ achieves this lower bound asymptotically. \end{theorem} \subsection{Proof of Theorem \ref{thm:not-useful}} To prove Theorem \ref{thm:not-useful}, we will show an upper bound on $\gamma_p$ with an explicit construction. First, we will need a few definitions which will be key to the construction. \begin{definition} \label{def:p-indep} A set of nodes $J$ is called \textit{$p$-pseudo-independent} if for every $j \in J$, $|\mathcal{N}_j \cap J| < p$. The \textit{$p$-pseudo-independent} number $\alpha_p(G)$ is the size of the maximum $p$-pseudo-independent set. \end{definition} \begin{definition} \label{def:p-sib} A node $w$ is a \textit{$p$-sibling} of a maximum $p$-pseudo-independent set $J$ if $w \in V \setminus J$ and there exist distinct $j_1, \dots, j_p \in J$ s.t. $j_i \in \mathcal{N}_w$. And a graph $G$ has the \textit{$p$-sibling property} if some maximum $p$-pseudo-independent set in $G$ has a $p$-sibling. \end{definition} \begin{lemma} \label{thm:no-sibling} If a graph $G$ does not have the $p$-sibling property, then it has no disjoint maximum $p$-pseudo-independent sets. \end{lemma} \begin{proof} Suppose the contrary, there are maximum pseudo-independent sets $J, J'$ so that $J' \cap J = \emptyset$. 
Let $j= \max(J \cup J')$ be the last node in $J$ and $J'$ and WLOG $j \in J'$. For any $i \in J$, $|\mathcal{N}_i \cap (J \cup \{j\})| = |\mathcal{N}_i \cap J| < p$ since by definition $j \not\in \mathcal{N}_i$. And because $j$ cannot be a $p$-sibling of $J$, $|\mathcal{N}_j \cap (J \cup \{j\})| \le |\mathcal{N}_j \cap J| < p$. This implies that $J \cup \{j\}$ is $p$-pseudo-independent, contradiction. \end{proof} \begin{lemma} \label{thm:upper-bound} Given a graph $G$ with the $p$-sibling property, then \begin{equation} \gamma_p(G) \le \frac{p}{\alpha_p(G) + 1} \end{equation} and if $G$ does not have the $p$-sibling property, then \begin{equation} \gamma_p(G) \le \frac{p}{\alpha_p(G)} \end{equation} \end{lemma} \begin{proof} We first consider the case with the sibling property. Let $J$ be the maximum pseudo-independent set of $G$ with a sibling. Let $a = \alpha_p(G)$ and denote $J_i$ be the $i$th node in $J$ and $J_{a+1}$ be a sibling of $J$. Consider $S = \{u_1, \dots, u_a, v_1, \dots, v_a, w\}$ and action sets defined by: \begin{equation} X_{i} = \begin{cases} \{u_i, v_i\} & \text{if } i \in \{J_1, \dots, J_a\} \\ \{u_i, t\} & \text{if } i = J_{a+1} \\ \emptyset & \text{otherwise} \end{cases} \end{equation} We now define the function $f : 2^S \to \mathbb{R}$ that for any $A \subseteq S$, \begin{equation} f(A) = \min\left(1, \sum_{u_i \in A} \frac{1}{p} \right) + \sum_{v_i \in A} \frac{1}{p} \end{equation} Note that through this construction, $f$ is monotonic, submodular, and $p$-additive. $f$ also has the following properties for any $i \in \{J_1, \dots, J_a\}$: \begin{enumerate} \item $f(u_i) = f(v_i) = 1/p$. \item $f(u_i \mid \cup_{j \in A} \{u_j\}) = f(u_i)$ for any $A \subseteq \mathcal{N}_i$ due to Definition \ref{def:p-indep}. \item $f(v_i | B) = f(v_i)$ for any $B \subseteq S \{i\}$ \end{enumerate} Given these properties, it follows that in the greedy solution, the actions of $J_1, \dots, J_{a+1}$ are $u_1, u_2, \dots, u_a, t$, respectively. And in the optimal solution, the actions of $J_1, \dots, J_{a+1}$ are $v_1, v_2, \dots, v_a, u_{a+1}$, respectively. This results in $f(x) \le 1$ and $f(x^*) = a + 1$, hence $\gamma_p(G) \le p / (a+1)$. The case without the sibling property follows the same construction, except that we do not have $J_{a+1}$. This results in $f(x) \le 1$ and $f(x^*) = a$, leading to $\gamma_p(G) \le p / a$. \end{proof} The following result is also essential in the proof. \begin{theorem}[Erd\H{o}s-Stone \cite{erdos1946structure}] Given a graph $H$, let $ex(n; H)$ be the maximum number of edges in an $n$-node graph not containing $H$. Then \begin{equation} ex(n; H) = \left(1 - \frac{1}{\chi(H)-1} + o(1)\right) \binom{n}{2} \end{equation} as a function in $n$. \end{theorem} If we instead consider the complement, we can equivalently state the following. \begin{cor} \label{thm:erdos-stone} Given a graph $H$, let $ex'(n; H)$ be the minimum number of edges in an $n$-node graph not containing $H$ or any of its spanning subgraph. Then \begin{equation} ex'(n; H) = \left(\frac{1}{\theta(H)-1} + o(1)\right) \binom{n}{2} \end{equation} as a function in $n$. \end{cor} \begin{lemma} \label{thm:min-edges} Given a graph $G = (V, E)$, we have \begin{equation} |E| \ge \left(\frac{1}{\lceil (\alpha_p(G) + 1) / m\rceil - 1} + o(1)\right)\binom{n}{2} \end{equation} \end{lemma} \begin{proof} Let $H$ be a graph with $\alpha_p(G) + 1$ nodes and edges $\{(i, j) \mid i \neq j, \lceil i/p \rceil = \lceil j / p\rceil\}$. 
It is clear from the definition that $\omega(H) = \lceil (\alpha_p(G) + 1) / p \rceil$. By Definition \ref{def:p-indep}, $G$ cannot contain $H$ or any of its spanning subgraphs. Then this lemma immediately follows from Corollary \ref{thm:erdos-stone}. \end{proof} Now we complete the proof of Theorem \ref{thm:not-useful}. If $\alpha_p(G) > p(k+1)$, then by Lemma \ref{thm:upper-bound}, $\gamma_p(G) < 1/(k+1)$. If $\alpha_p(G) = p(k+1)$ and $G$ has the $p$-sibling property, then by Lemma \ref{thm:upper-bound}, $\gamma_p(G) < 1/(k+1)$. Hence we are interested in the following two cases: \begin{enumerate} \item $\alpha_p(G) < p (k+1)$, with or without the $p$-sibling property. \item $\alpha_p(G) = p (k+1)$ and without the $p$-sibling property. \end{enumerate} In the first case, Lemma \ref{thm:min-edges} implies that $G$ has at least $(1/k + o(1))\binom{n}{2}$ edges. And in the second case, let $J$ be a maximum $p$-pseudo-independent set of $G$. Consider the graph $G'$ induced by removing all nodes in $J$ from $G$. By Lemma \ref{thm:no-sibling}, $\alpha_p(G') < \alpha_p(G)$, and thus Lemma \ref{thm:min-edges} implies that the number of edges in $G'$ is at least \begin{subequations} \begin{align} & \left(\frac{1}{k} + o(1)\right)\binom{n-p(k+1)}{2} \\ ={}& \left(\frac{1}{k} + o(1)\right)\frac{n^2 - n + g(n)}{2} \label{equ:messy-expan} \\ ={}& \left(\frac{1}{k} + o(1)\right)\binom{n}{2} \end{align} \end{subequations} where $g(n)$ in \eqref{equ:messy-expan} is $-2p(k+1)n + p(k+1)(p(k+1)+1)$. The theorem follows from combining the two cases above. \section{Model} \label{sec:model} \subsection{Distributed Submodular Optimization} Consider a base set $S$ and a set function $f : 2^S \to \mathbb{R}_{\ge 0}$. Given $A, B \subseteq S$, we define the \textit{marginal contribution} of $A$ with respect to $B$ as \begin{equation} f(A \mid B) = f(A \cup B) - f(B) \end{equation} In this paper, we are interested in distributed algorithms for maximization of submodular functions, where there is a collection of $n$ decision-making agents, named as $N = \{1, \dots, n\}$, and a set of decisions $S$. Each decision-making agent is associated with a set of permissible decisions $X_i \subseteq S$ so that $X_1, \dots, X_n$ form a partition of $S$. For the rest of the paper, we refer to decision-making agents simply as agents. Then the collection of decisions by all agents $x = \{x_1, \dots, x_n\}$ is called an \textit{action profile}, and we denote the set of all action profiles as $X = X_1 \times \cdots \times X_n$. An action profile $x \in X$ is evaluated by an \textit{objective function} $f : 2^S \to \mathbb{R}_{\ge 0}$ as $f(x) = f(\cup_i \{x_i\})$. Furthermore, we restrict our attention to $f$'s which satisfy the following properties: \begin{itemize} \item \textbf{Normalized} $f(\emptyset) = 0$. \item \textbf{Monotonic} $f(\{e\} \mid A) \ge 0$ for all $e \in S$ and $A \subseteq S$. \item \textbf{Submodular} $f(\{e\} \mid A) \ge f(\{e\} \mid B)$ for all $A \subseteq B \subseteq S$ and $e \in S \setminus B$. \end{itemize} For simplicity, define $\mathcal{F}$ as the set of functions satisfying these three properties. For any $f \in \mathcal{F}$, the goal of the submodular maximization problem is to find: \begin{equation} \label{def:submodular-maximization} x^* \in \argmax_{x \in X} f(x) \end{equation} For convenience, we overload $f$ with the understanding that $f(e) = f(\{e\})$ for $e \in S$, and $f(A, B) = f(A \cup B)$ for $A, B \subseteq S$. We also introduce the following notation. 
For any $j, k \in \{1, \dots, n\}$ with $j < k$, let $[k] = \{1, \dots, k\}$ and $j : k = \{j, j+1, \dots, k\}$. Furthermore, for any action profile $x \in X$ and set $M \subseteq N$, let $f(x_M) = f(\cup_{i \in M} \{x_i\})$. For instance, $f(x_{1:10}) = f(\{x_1, \dots, x_{10}\})$. \begin{figure} \centering \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.8\textwidth]{fig/weight_set_cover.pdf} \caption{Illustration of the weighted subset cover problem. The 5 agents are represented by circles. The target set $Y = \{y_1, \dots, y_7\}$ is represented by boxes. For each box, its label indicates its weight, and the black lines indicate the permissible decisions, for example, $T(X_1) = \{\{y_1\}, \{y_2\}\}$.} \end{subfigure} \begin{subfigure}{0.9\textwidth} \centering \begin{tabular}{l|c|c|c|c|c|c} Algorithm & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $f(x_{1:5})$ \\ \hline \hline Optimal & $y_1$ & $y_2$ & $y_4$ & $y_6$ & $y_5$ & 13 \\ \hline Greedy & $y_2$ & $y_4$ & $y_5$ & $y_6$ & $y_7$ & 11 \\ \hline Parallel ($P_1$) & $y_2$ & $y_2$ & $y_5$ & $y_6$ & $y_6$ & 8 \\ \hline Parallel ($P_2$) & $y_2$ & $y_2$ & $y_5$ & $y_6$ & $y_7$ & 9 \\ \end{tabular} \caption{For the above weighted set cover problem, this table shows the decisions $x_1, \dots, x_5$ by several algorithms described in this paper. $P_1$ and $P_2$ are the optimal iteration assignments as shown in Fig. \ref{fig:weird-parallel} and \ref{fig:standard-parallel}, respectively. For simplicity, we use an element instead of a set to denote each decision, e.g., we write $y_1$ but mean $\{y_1\}$. To illustrate how the greedy algorithm works, let's consider $P_2$. We have $x_1 = x_2 = \{y_2\}$ because agents 1 and 2 are deciding at the same time. Then agents 3 and 4 choose at the same time, and one possible result is $x_3 = \{y_5\}, x_4 = \{y_6\}$. Lastly, agent 5 makes a decision; it sees that agents 3 and 4 have already taken $y_5$ and $y_6$, so it chooses $\{y_7\}$ as the decision maximizing its marginal contribution.} \label{table:run} \end{subfigure} \caption{An instance of the weighted subset cover problem and the behavior of various algorithms for this problem.} \label{fig:weighted-subset-cover} \end{figure} An example of the submodular maximization problem is the weighted set cover problem~\cite{gairing2009covering}. Given a target set $Y$ and a mapping $T : S \to 2^Y$ from the decisions to subsets of $Y$,\footnote{ Note that the decisions are related to assignments to the subsets of $Y$ in the sense that each decision is an ordered pair (agent ID, subset of $Y$). This way we allow agents to choose the same element from $Y$ while maintaining that the decisions are distinct. } the value of an action profile $x \in X$ is determined by a weight function $w : Y \to \mathbb{R}_{\ge 0}$: \begin{equation} \label{def:weighted-subset} f(x) = \sum_{y \in \cup_i T(x_i)} w(y) \end{equation} Intuitively, this problem aims to ``cover'' as much of the target set as possible. An instance of this problem is illustrated by Figure \ref{fig:weighted-subset-cover}. \subsection{Greedy Algorithm and Parallelization} A distributed greedy algorithm can be used to approximate the problem as stated in \eqref{def:submodular-maximization}. 
In the greedy algorithm, the agents make decisions in sequential order according to their names $i \in N$, and each agent attempts to maximize its marginal contribution with respect to the choices of prior agents: \begin{equation} \label{def:greedy-algorithm} x_i \in \argmax_{x_i \in X_i} f(x_i \mid x_{1:i-1}) \end{equation} Note that the greedy solution $x$ may not be unique. In the context of this paper, we assume that $x$ is the worst solution among all possible greedy solutions. Define $\gamma(f, X)$ to be $f(x) / f(x^*)$ from the greedy algorithm subject to objective function $f$ and set of action profiles $X$. Then, to measure the quality of the greedy algorithm, we consider its \textit{competitive ratio}: \begin{equation} \gamma = \inf_{f \in \mathcal{F}, X} \gamma(f, X) \end{equation} The well-known result from \cite{fisher1978analysis} states that $\gamma = 1/2$, which means that \eqref{def:greedy-algorithm} guarantees that the performance of the greedy solution is always within 1/2 of optimal. Furthermore, only one agent can make a decision at any given time and thus a solution will be found in precisely $n$ iterations. The focus of this work is on characterizing the achievable competitive ratio for situations where a system designer does not have the luxury of having $n$ iterations to construct a decision. To this end, we introduce the parallelized greedy algorithm to allow multiple agents to make decisions at the same time. More formally, we consider a situation where the system is given a limited number of iterations $q \leq n$ and needs to come up with an iteration assignment $P : [n] \to [q]$ so that each agent makes its decision based only on the choices made by agents from earlier iterations: \begin{equation} \label{def:parallel-greedy} x_i \in \argmax_{x_i \in X_i} f(x_i \mid x_{\mathcal{N}_i}) \end{equation} where $\mathcal{N}_i = \{j \mid P(j) < P(i)\} \subseteq \{1, \dots, i-1\}$. Note that, to maintain consistency with \eqref{def:greedy-algorithm}, we require that $P$ preserves the implicit ordering of the greedy algorithm: $P(i) \le P(j)$ whenever $i < j$. We are interested in the performance of \eqref{def:parallel-greedy} given a fixed number of agents and number of iterations. We define the competitive ratio of an iteration assignment $P$ as: \begin{equation} \label{def:comp-ratio-parallel} \gamma(P) = \inf_{f \in \mathcal{F}, X} \gamma(f, X, P) \end{equation} where $\gamma(f, X, P)$ is $f(x) / f(x^*)$ from the parallelized greedy algorithm subject to objective function $f$, set of action profiles $X$ and iteration assignment $P$. Denote the set of all possible iteration assignments with $n$ agents and at most $q$ iterations as: \begin{equation} \mathcal{P}_{n, q} = \{P : [n] \to [q] \} \end{equation} We seek to find the best possible competitive ratio, and the iteration assignment which achieves the optimal competitive ratio (if such an assignment exists): \begin{equation} \label{def:comp-ratio-over-all-parallel} \rho(n, q) = \sup_{P \in \mathcal{P}_{n, q}} \gamma(P) \end{equation} \begin{equation} \label{def:optimal-parallel} P^*_{n, q} \in \argmax_{P \in \mathcal{P}_{n, q}} \gamma(P) \end{equation} \section{Optimal Parallel Structures} \label{sec:parallel-computation} In this section, we present the best possible competitive ratio for the parallelized greedy algorithm and the optimal iteration assignment which achieves such a bound. 
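Before turning to the result, a minimal illustrative sketch of \eqref{def:greedy-algorithm} and \eqref{def:parallel-greedy} may be helpful. The weighted set cover instance below is a small hypothetical one (it is not the instance of Figure \ref{fig:weighted-subset-cover}, whose weights appear only in the figure itself), and the variable names are our own; the example simply shows how assigning agents to the same iteration can degrade the greedy value.
\begin{verbatim}
# Minimal sketch (illustration only) of the greedy rule and its parallelized
# variant on a small, hypothetical weighted set cover instance.

w = {'y1': 3, 'y2': 2, 'y3': 2, 'y4': 1}        # weights on the target set Y
X = {1: [{'y1'}, {'y2'}],                        # permissible decisions X_i
     2: [{'y1'}, {'y3'}],                        # (each decision is a subset of Y)
     3: [{'y1'}, {'y4'}]}

def f(decisions):                                # objective: total weight covered
    covered = set().union(*decisions) if decisions else set()
    return sum(w[y] for y in covered)

def parallel_greedy(X, P):
    """Each agent i maximizes its marginal contribution given the choices
    of agents j with P[j] < P[i] (the ordinary greedy is P[i] = i)."""
    x = {}
    for i in sorted(X):
        prior = [x[j] for j in x if P[j] < P[i]]
        x[i] = max(X[i], key=lambda d: f(prior + [d]) - f(prior))
    return x

P_seq = {1: 1, 2: 2, 3: 3}                       # n iterations: ordinary greedy
P_par = {1: 1, 2: 1, 3: 2}                       # q = 2 iterations
print(f(list(parallel_greedy(X, P_seq).values())))   # -> 6  (covers y1, y3, y4)
print(f(list(parallel_greedy(X, P_par).values())))   # -> 4  (agents 1, 2 both take y1)
\end{verbatim}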
Note that according to \eqref{def:parallel-greedy}, a pair of agents do not utilize each other's decisions (in either direction) when they are in the same iteration. This intuitively represents some ``blind spots'' in the parallelized greedy algorithm compared to the original greedy algorithm. One way to reduce those ``blind spots'' is to divide the agents evenly among the available iterations; in other words, we want the number of agents deciding in parallel to be as close to the average number as possible. Theorem \ref{thm:best-parallel} illustrates that this simple idea is nearly sufficient to yield the best possible iteration assignment. \begin{theorem} \label{thm:best-parallel} Given a parallelized submodular maximization problem with $n$ agents and $q$ iterations, the competitive ratio is: \begin{equation} \label{equ:parallel-exact} \rho(n, q) = \begin{cases} \frac{1}{r} & \text{if } n \equiv 1 \pmod{q} \\ \frac{1}{r+1} & \text{otherwise} \\ \end{cases} \end{equation} where $r = \lceil n / q \rceil$. In particular, when $n \equiv 1 \pmod{q}$, \begin{equation} \label{equ:weird-parallel} P^*_{n, q}(i) = \begin{cases} \left\lceil \frac{i}{r-1} \right\rceil & \text{if } i < n \\ q & \text{if } i = n \end{cases} \end{equation} Otherwise, \begin{equation} \label{equ:standard-parallel} P^*_{n, q}(i) = \left\lceil \frac{i}{r} \right\rceil \end{equation} \end{theorem} Figure \ref{fig:parallel} illustrates some examples of optimal and non-optimal assignments. In particular, Figure \ref{fig:bad-parallel-weird} gives an edge case where our simple intuition is not enough to make the optimal assignment. And Figure \ref{fig:bad-parallel} shows that other approaches, such as assigning more agents to earlier iterations to reduce the ``blindness'' in later iterations, are not optimal. We observe that the competitive ratio is roughly inversely proportional to the value $r$. Theorem \ref{thm:best-parallel} as stated does not offer a good explanation for why this relation holds. To better understand this, we will generalize our model in Section \ref{sec:parallel-communication} and present a strengthening of Theorem \ref{thm:best-parallel} in Section \ref{sec:proof-parallel}. \begin{figure*} \begin{subfigure}{0.20\textwidth} \centering \includegraphics[scale=1.2]{fig/weird_parallel.pdf} \caption{The optimal assignment for $n = 5$, $q = 2$ as described in \eqref{equ:weird-parallel}.} \label{fig:weird-parallel} \end{subfigure} \hfill \begin{subfigure}{0.20\textwidth} \centering \includegraphics[scale=1.2]{fig/weird_parallel_bad.pdf} \caption{A non-optimal assignment for $n=5, q = 2$.} \label{fig:bad-parallel-weird} \end{subfigure} \hfill \begin{subfigure}{0.25\textwidth} \centering \includegraphics[scale=1.2]{fig/standard_parallel.pdf} \caption{The optimal assignment for $n = 5$, $q = 3$ as described in \eqref{equ:standard-parallel}.} \label{fig:standard-parallel} \end{subfigure} \hfill \begin{subfigure}{0.27\textwidth} \centering \includegraphics[scale=1.2]{fig/bad_parallel.pdf} \caption{A non-optimal assignment for $n=5, q = 3$.} \label{fig:bad-parallel} \end{subfigure} \caption{Examples of optimal iteration assignments as given by Theorem \ref{thm:best-parallel}. Each agent, represented by a node, is named as $1, \dots, n$. Each column, labeled by a Roman numeral, contains all agents executed at a given iteration. Also shown are some examples of iteration assignments that are not optimal. According to Theorem \ref{thm:best-parallel}, the competitive ratio in Fig. 
\ref{fig:weird-parallel} is $1/3$ and the competitive ratio in Fig. \ref{fig:standard-parallel} is also $1/3$. On the other hand, in Section \ref{sec:proof-parallel}, Lemma \ref{thm:david} shows that the competitive ratio in Fig. \ref{fig:bad-parallel-weird} and Fig. \ref{fig:bad-parallel} are both $1/4$. } \label{fig:parallel} \end{figure*} \section{Proof of Lemma \ref{thm:strict-monotone-lower}} \label{sec:proof-strict-monotone-lower} We first show the lower bound on $\gamma_\lambda(G)$. In this proof, we consider some minimum clique covering $\{K_1, K_2, \dots, K_\theta\}$ of $G$, where we can choose the $K$'s so they are disjoint. Let $K(i)$ denotes the clique containing the $i$ node, and $\sigma_j = f(x_{K_j})$. We will upper bound $f(x^*)$ in terms of the $\sigma$'s. Then we express the quantity $\sigma_j/f(x)$ as a concave function in $\sigma_j$ and using convexity to derive the final lower bound. First, we need to resolve some technicalities so we have an appropriate objective function $f$ and action profile $X$.\footnote{The readers can safely ignore this paragraph without affecting their understanding of the rest of this proof.} Given $(f \in \mathcal{F}_\lambda, X)$, we can assume that $X_{i} = \{x_i, x^*_i\}$ without affecting the competitive ratio. We want $x^*_i \neq x_i$ for all $i \in N$, which ensures that all decisions are distinct. Suppose not and for some $i \in N$, $x^*_i = x_i = u$ for some decision $u$. Now we create a new decision $v$ and transform $(f, X)$ to $\left(\overline{f}, \overline{X}\right)$ so that $\overline{x_j} = x_j$, $\overline{x^*_j} =x^*_j$ for any $j \neq i$, $\overline{x_i} = u$ and $\overline{x^*_i} = v$. And we define $\overline{f}$ as: \begin{equation} \label{equ:transformation} \begin{cases} \overline{f}(u, v, A) = f(u, A) + (1 - \lambda) f(u) \\ \overline{f}(u, A) = \overline{f}(v, A) = f(u, A) \\ \overline{f}(A) = f(A) \end{cases} \end{equation} for any $A \subseteq S \setminus X_i$. Note that the above transformation does not affect the competitive ratio because our construction guarantees $\overline{f}\left(\overline{x}\right) = f(x)$ and $\overline{f}\left(\overline{x^*}\right) = f(x^*)$. Also, from some simple algebra: \begin{subequations} \label{equ:after-transformation} \begin{align} \overline{f}(u \mid v, A) = \overline{f}(v \mid u, A) &= (1-\lambda) f(u) \label{equ:after-transformationL1} \\ \overline{f}(y \mid u, v, A) &= f(y \mid u, A) \label{equ:after-transformationL2} \end{align} \end{subequations} where $A \subseteq S \setminus X_i$, $y \in S \setminus X_i$. From \eqref{equ:after-transformation}, it is easy to verify that $(\overline{f}, \overline{X})$ satisfies all properties of submodular functions and has total curvature $\lambda$; but for brevity, we will not explicitly show them here. Hence, for rest of the proof, we can safely assume that $x^*_i \neq x_i$ for all $i \in N$. 
Now we bound $f(x^*)$ through total curvature: \begin{subequations} \label{equ:cmb-opt-sol} \begin{align} f(x^*) + (1-\lambda) \sum_{j=1}^\theta \sigma_j &\le f(x^*) + \sum_{j=1}^\theta \sum_{i \in K_j} (1-\lambda) f(x_i) \label{equ:cmb-opt-solL1} \\ &= f(x^*) + \sum_{i=1}^n (1-\lambda) f(x_i) \label{equ:cmb-opt-solL2} \\ &\le f(x^*) + \sum_{i=1}^n f(x_i \mid x_{1:i-1}, x^*) \label{equ:cmb-opt-solL3} \\ &= f(x^*, x) \label{equ:cmb-opt-solL4} \end{align} \end{subequations} where \eqref{equ:cmb-opt-solL1} follows from submodularity, \eqref{equ:cmb-opt-solL2} follows from the fact that $K$'s are disjoint, \eqref{equ:cmb-opt-solL3} follows from total curvature, and \eqref{equ:cmb-opt-solL4} is a telescoping sum. Then we bound $f(x^*, x)$ using properties of submodular functions and the greedy algorithm. \begin{subequations} \label{equ:telescope-clique} \begin{align} f(x^*, x) &= f(x) + \sum_{i=1}^n f(x^*_i \mid x^*_{1:i-1}, x) \label{equ:telescope-cliqueL1} \\ &\le f(x) + \sum_{i=1}^n f(x^*_i \mid x_{\mathcal{N}_i}) \label{equ:telescope-cliqueL2} \\ &\le f(x) + \sum_{i=1}^n f(x_i \mid x_{\mathcal{N}_i}) \label{equ:telescope-cliqueL3} \\ &\le f(x) + \sum_{i=1}^n f(x_i \mid x_{\mathcal{N}_i} \cap K(i)) \label{equ:telescope-cliqueL4} \\ &= f(x) + \sum_{j=1}^\theta \sum_{i \in K_j} f(x_i \mid x_{\mathcal{N}_i} \cap K_j) \label{equ:telescope-cliqueL5} \\ &= f(x) + \sum_{j=1}^\theta \sigma_j \label{equ:telescope-cliqueL6} \end{align} \end{subequations} where \eqref{equ:telescope-cliqueL1} and \eqref{equ:telescope-cliqueL6} are telescoping sums, \eqref{equ:telescope-cliqueL2} and \eqref{equ:telescope-cliqueL4} follow from submodularity, \eqref{equ:telescope-cliqueL5} follows from the fact that $K$'s are disjoint, and \eqref{equ:telescope-cliqueL3} follows from the greedy algorithm as defined in \eqref{def:general-greedy}. Combining \eqref{equ:cmb-opt-sol} and \eqref{equ:telescope-clique}, we have that \begin{equation} f(x^*) + (1-\lambda) \sum_{j=1}^\theta \sigma_j \le f(x) + \sum_{j=1}^\theta \sigma_j \end{equation} Rearranging the terms yields: \begin{equation} \label{equ:bound-opt-clique} \frac{f(x^*)}{f(x)} \le 1 + \lambda \sum_{j=1}^\theta \frac{\sigma_j}{f(x)} \end{equation} And we can upper bound the $\sigma$'s using total curvature in a manner similar to that of \eqref{equ:cmb-opt-sol}. \begin{subequations} \begin{align} \sigma_j + \sum_{\ell \neq j} (1 - \lambda) \sigma_\ell \le{}& \sigma_j + \sum_{i \not\in K_j} (1 - \lambda) f(x_i) \label{equ:sum-sigmaL1} \\ \le{}& \sigma_j + \sum_{i \not\in K_j} f(x_i \mid x_{1:i-1} \cup x_{K_j}) \label{equ:sum-sigmaL2} \\ ={}& f(x) \label{equ:sum-sigmaL3} \end{align} \end{subequations} where \eqref{equ:sum-sigmaL1} follows from submodularity, \eqref{equ:sum-sigmaL2} follows from total curvature, and \eqref{equ:sum-sigmaL3} follows from a telescoping sum. Without loss of generality, we impose the normalization that $\sum_{\ell=1}^\theta \sigma_\ell = \theta$. Then for any $1 \le j \le \theta$, \begin{equation} \label{equ:upper-clique} \frac{\sigma_j}{f(x)} \le \frac{\sigma_j}{(1-\lambda) \sum_{\ell=1}^\theta \sigma_\ell + \lambda \sigma_j} = \frac{\sigma_j}{(1-\lambda) \theta + \lambda \sigma_j} \end{equation} We can denote the right-hand side of \eqref{equ:upper-clique} by the function $g(x) = x / ((1-\lambda) \theta + \lambda x)$. Note that $g$ is a concave function. 
We then apply Jensen's inequality: \begin{lemma}[Jensen] For a concave function $g$ and values $y_1, \dots, y_m$ in its domain, \begin{equation} \frac{1}{m} \sum_{i=1}^m g(y_i) \le g\left(\frac{\sum_{i=1}^m y_i}{m}\right) \end{equation} The equality holds if and only if $y_1 = \cdots = y_m$ or $g$ is linear. \end{lemma} We substitute our normalization into \eqref{equ:upper-clique} and combine it with \eqref{equ:bound-opt-clique}: \begin{equation} \frac{1}{\gamma_\lambda(G)} = \frac{f(x^*)}{f(x)} \le 1 + \frac{\lambda \theta}{(1-\lambda) \theta + \lambda} = \frac{\theta + \lambda}{\theta - (\theta-1)\lambda} \end{equation} Hence we have derived the lower bound. Now we show the upper bound on $\gamma_\lambda(G)$. Let $I$ be a maximum independent set of $G$ and $\alpha = |I|$. Define two sets of distinct decisions $U = \{u_1, u_2, \dots, u_\alpha\}$ and $V = \{v_1, v_2, \dots, v_\alpha\}.$ Consider $S = U \cup V$ and the action sets determined by: \begin{equation} X_{i} = \begin{cases} \{u_i, v_i\} & \text{if } i \in I \\ \emptyset & \text{otherwise} \end{cases} \end{equation} Define the objective function $f$ so that for any $x \in X$ (with $x$ interpreted as a subset of $S$), \begin{equation} f(x) = \min(1, |x \cap U|)\lambda + |x \cap U| (1-\lambda) + |x \cap V| \end{equation} Note that through this construction, $f$ is a submodular function and has total curvature $\lambda$. Also, $f$ has the following properties for any $i \in I$: \begin{enumerate} \item $f(u_i) = f(v_i) = 1$. \item $f(u_i \mid x_{\mathcal{N}_i}) = f(u_i)$ for any $x \in X$ because, by definition, we have $\mathcal{N}_i \cap I = \emptyset$. \item $f(v_i \mid B) = f(v_i)$ for any $B \subseteq S \setminus \{v_i\}$. \end{enumerate} From these properties, the agents in $I$ are equally incentivized to pick either option. In one of the greedy solutions, the actions would be the $u$'s. And in the optimal solution, the actions would be the $v$'s. This results in $f(x) \le \lambda + \alpha(1 - \lambda)$ and $f(x^*) = \alpha$, hence $\gamma_\lambda(G) \le \frac{\alpha - (\alpha-1)\lambda}{\alpha}$. \hfill$\blacksquare$ \subsection{Extension of Theorem \ref{thm:best-parallel-strict-monotone} to Information Graphs} \label{sec:proof-parallel-beta} We will prove a version of Theorem \ref{thm:best-parallel-strict-monotone} generalized to information graphs, and the original theorem follows from the same logic. \begin{theorem} \label{thm:best-parallel-strict-monotone-graph} Given a parallelized submodular and $\beta$-strictly monotone maximization problem with $n$ agents and $q$ iterations, we have: \begin{equation} \label{equ:strict-monotone-parallel-bounds-graph} \frac{(r-1)\beta+1}{r} \ge \eta'(n, q) \ge \frac{(r-1)\beta+1}{r - \beta + 1} \end{equation} Furthermore, the graph $G$ as defined in \eqref{equ:complement-turan} achieves the lower bound $\gamma'(G) \ge \frac{(r-1)\beta+1}{r - \beta + 1}$. \end{theorem} To prove Theorem \ref{thm:best-parallel-strict-monotone-graph}, we present a lower and an upper bound on the competitive ratio subject to $\beta$-strict monotonicity in a similar fashion to Lemma \ref{thm:david}. The following lemma is proven in the Appendix. \begin{lemma} \label{thm:strict-monotone-lower} Given any information graph $G$, \begin{equation} \frac{(\alpha(G)-1)\beta + 1}{\alpha(G)} \ge \gamma'(G) \ge \frac{(\theta(G)-1)\beta + 1}{\theta(G) - \beta + 1} \end{equation} \end{lemma} Now we come back to the proof of Theorem \ref{thm:best-parallel-strict-monotone-graph}. 
The upper bound in \eqref{equ:strict-monotone-parallel-bounds-graph} follows from combining Lemma \ref{thm:parallel-indep-number} and the upper bound in Lemma \ref{thm:strict-monotone-lower}. Consider the graph $G$ described by \eqref{equ:complement-turan}; as argued in Section \ref{sec:proof-parallel}, $r = \alpha(G) = \theta(G)$ and $G \in \mathcal{G}_{n, q}$. Hence, from the lower bound in Lemma \ref{thm:strict-monotone-lower}, we conclude that this $G$ achieves the lower bound in \eqref{equ:strict-monotone-parallel-bounds-graph}, so we are done. \hfill$\blacksquare$ \subsection{Main Result on Information Graphs} \label{sec:proof-parallel} We will present a stronger version of Theorem \ref{thm:best-parallel} that generalizes to the setting of information graphs. \begin{theorem} \label{thm:best-parallel-graph} Given a parallelized submodular maximization problem with $n$ agents and $q$ iterations, the competitive ratio is (recall that $r = \lceil n / q \rceil$): \begin{equation} \label{equ:parallel-exact-graph} \eta(n, q) = \begin{cases} \frac{1}{r} & \text{if } n \equiv 1 \pmod{q} \\ \frac{1}{r + 1} & \text{otherwise} \\ \end{cases} \end{equation} In particular, when $n \equiv 1 \pmod{q}$, for the following edge set $E$, $G = (V, E)$ achieves the equality $\gamma(G) = 1/r$: \begin{equation} \label{equ:weird-graph} E = \left\{ \begin{aligned} (i, j) & \text{ for all } i < j < n, i \equiv j \, (\mathrm{mod} \; r-1) \\ (i, n) & \text{ for all } i\le (q-1)(r-1) \end{aligned} \right\} \end{equation} Otherwise, for the following edge set $E$, $G = (V, E)$ achieves the equality $\gamma(G) = 1/(r+1)$: \begin{equation} \label{equ:complement-turan} E = \{(i, j) \mid i \neq j, i \equiv j \, (\mathrm{mod} \; r) \} \end{equation} \end{theorem} Figure \ref{fig:graph} illustrates the aforementioned optimal information graphs. Note that \eqref{equ:weird-graph} and \eqref{equ:complement-turan} are subgraphs of the information graphs induced by \eqref{equ:weird-parallel} and \eqref{equ:standard-parallel}, respectively. Furthermore, \eqref{equ:weird-graph} has approximately $1/(r-1)$ times as many edges as \eqref{equ:weird-parallel} and \eqref{equ:complement-turan} has approximately $1/r$ times as many edges as \eqref{equ:standard-parallel}. This is true because every agent (except agent $n$ when $n \equiv 1 \pmod{q}$) requires information from just one agent from each prior time step. Additionally, \eqref{equ:complement-turan} is ``delightfully parallel'': we can partition the agents into threads according to the clique in which they reside, and no information needs to be exchanged among threads. \begin{figure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[scale=1.2]{fig/weird_graph.pdf} \caption{The optimal graph for $n = 5$, $q = 2$ as described in \eqref{equ:weird-graph}.} \label{fig:weird-graph} \end{subfigure} ~ \begin{subfigure}{0.48\textwidth} \centering \includegraphics[scale=1.2]{fig/c_turan.pdf} \caption{The optimal graph for $n = 5$, $q = 3$ as described in \eqref{equ:complement-turan}.} \label{fig:complement-turan} \end{subfigure} \caption{Examples of optimal information graphs as given by Theorem \ref{thm:best-parallel-graph}. The agents, represented by nodes, are implicitly labeled $1, \dots, n$ from left to right, and the Roman numeral labels indicate the iteration in which the agents will be executed. Note that Fig. \ref{fig:weird-graph} is a subgraph of Fig. \ref{fig:weird-parallel-graph} and Fig. \ref{fig:complement-turan} is a subgraph of Fig.
\ref{fig:standard-parallel-graph}. All four of these graphs achieve their respective optimal competitive ratio, which all happen to equal $1/3$. However, both Fig. \ref{fig:weird-graph} and Fig. \ref{fig:complement-turan} have 4 edges, whereas Fig. \ref{fig:weird-parallel-graph} has 6 edges and Fig. \ref{fig:standard-parallel-graph} has 8 edges. Also, by Theorem \ref{thm:best-parallel-strict-monotone-graph}, when the objective function has total curvature $\lambda$, Fig. \ref{fig:weird-graph} and Fig. \ref{fig:complement-turan} are still nearly optimal. } \label{fig:graph} \end{figure} The following lemma shows that the value $r$ is related to the independence number of the information graph: \begin{lemma} \label{thm:parallel-indep-number} For any $G \in \mathcal{G}_{n, q}$, $\alpha(G) \ge r = \lceil n / q \rceil$. \end{lemma} \begin{proof} Let $D_k = \{i \mid \overline{P}(G; i) = k\}$ be the set of agents executed in the $k$th iteration. From our construction, $\bigcup_{1 \le k \le q} D_k = [n]$. By the pigeonhole principle, $\max_{1 \le k \le q} |D_k| \ge r$. Since $D_k$ must be an independent set for any $1 \le k \le q$, we conclude that $\alpha(G) \ge r$. \end{proof} Since the agents in an independent set do not rely on each other to make a decision, they can be assigned to the same iteration. Therefore, $r$ is a lower bound on the number of agents that must decide in parallel in some iteration. The proof of Theorem \ref{thm:best-parallel-graph} relies on Theorem 1 and Corollary 1 from~\cite{grimsman2018impact}, which relate the competitive ratio of an information graph to its independence number and clique cover number: \begin{lemma} \label{thm:david} Given any information graph $G = (V, E)$, \begin{equation} \frac{1}{\alpha(G)} \ge \gamma(G) \ge \frac{1}{\theta(G) + 1} \end{equation} \end{lemma} \begin{lemma} \label{thm:david-sibling} If there exists $w \in V$ and a maximum independent set $I$ such that $i \in \mathcal{N}_w$ for some $i \in I$, then \begin{equation} \gamma(G) \le \frac{1}{\alpha(G) + 1} \end{equation} \end{lemma} By combining Lemmas \ref{thm:parallel-indep-number} and \ref{thm:david}, we can see that, intuitively, the competitive ratio $\eta$ should be approximately $1/r$. Lastly, we note that the graph described by \eqref{equ:complement-turan} is in fact a complement Tur\'{a}n graph. This reinforces the intuition from Lemma \ref{thm:david} that the optimal parallel structure arises from having as few agents as possible decide in parallel. For the full proof of Theorem \ref{thm:best-parallel-graph}, please refer to Appendix \ref{sec:complete-proof-parallel}. \section{Submodular Functions with Total Curvature} \label{sec:beta-strict-monotone} Recent research, such as \cite{corah2018distributed}, has focused on whether imposing additional structure on the objective function and the set of action profiles would help the underlying greedy algorithm to yield better-performing solutions. One intuition is to consider the scenario when actions do not ``overlap'' in the sense that each decision is guaranteed to make some marginal contribution. Here, we will analyze how the property introduced in \cite{conforti1984submodular} enables a parallelized greedy algorithm to achieve a higher competitive ratio.
\begin{figure} \centering \includegraphics[scale=0.5]{fig/plot.pdf} \caption{The lower bound \eqref{equ:strict-monotone-parallel-bounds-graph} on the optimal competitive ratio when the objective function has total curvature $\lambda$, as given in Theorem \ref{thm:best-parallel-strict-monotone-graph}. For $r = 2, 3, 5, 20$, we plot the lower bound against $\lambda$. Note that at $\lambda = 0$, the competitive ratio becomes 1, as expected from our intuition, and at $\lambda = 1$, the lower bound is $1/(r+1)$, which is consistent with \eqref{equ:parallel-exact-graph}. } \label{fig:plot-beta} \end{figure} \begin{definition} An objective function $f : 2^S \to \mathbb{R}_{\ge 0}$ has \textit{total curvature} $\lambda \in (0,1)$ if $f(e \mid A) \ge (1-\lambda) f(e)$ for all $e \in S$ and $A \subseteq S \setminus \{e\}$. We denote the set of normalized, monotonic and submodular functions with total curvature $\lambda$ as $\mathcal{F}_\lambda$. \end{definition} We can extend the competitive ratios defined in \eqref{def:comp-ratio-graph} and \eqref{def:comp-ratio-over-all-graph} to the setting where the objective function has total curvature $\lambda$: \begin{equation} \label{def:comp-ratio-graph-beta} \gamma_\lambda(G) = \inf_{f \in\mathcal{F}_\lambda, X} \gamma(f, X, G) \end{equation} \begin{equation} \label{def:comp-ratio-over-all-graph-beta} \eta_\lambda(n, q) = \sup_{G \in \mathcal{G}_{n, q}} \gamma_\lambda(G) \end{equation} We can generalize the result of Lemma \ref{thm:david} to the total curvature setting: \begin{lemma} \label{thm:strict-monotone-lower} Given any information graph $G$, \begin{equation} \label{equ:strict-monotone-lower} \frac{\alpha(G) - (\alpha(G)-1)\lambda}{\alpha(G)} \ge \gamma_\lambda(G) \ge \frac{\theta(G) - (\theta(G)-1)\lambda}{\theta(G) + \lambda} \end{equation} \end{lemma} We then generalize Theorem \ref{thm:best-parallel-graph} to the total curvature setting and demonstrate that the information graph in \eqref{equ:complement-turan} is nearly optimal. \begin{theorem} \label{thm:best-parallel-strict-monotone-graph} Given a parallelized submodular maximization problem with $n$ agents and $q$ iterations, where the objective function has total curvature $\lambda$, then: \begin{equation} \label{equ:strict-monotone-parallel-bounds-graph} \frac{r - (r-1) \lambda}{r} \ge \eta_\lambda(n, q) \ge \frac{r - (r-1) \lambda}{r + \lambda} \end{equation} where $r = \lceil n / q \rceil$. Furthermore, the graph $G$ as defined in \eqref{equ:complement-turan} achieves the lower bound $\gamma_\lambda(G) \ge \frac{r - (r-1) \lambda}{r + \lambda}$. \end{theorem}
If there is no parallelization (when $r=1$), \cite{conforti1984submodular} showed that the lower bound in \eqref{equ:strict-monotone-parallel-bounds-graph} is tight. Figure \ref{fig:plot-beta} illustrates how the lower bound changes with $\lambda$ for different values of $r$, and a few quick observations follow. In the limit of $\lambda \to 0$, the competitive ratio tends to 1, as expected from our intuition, and in the limit of $\lambda \to 1$, the lower bound tends to $1/(r+1)$, which is consistent with \eqref{equ:parallel-exact-graph}. This confirms the intuition that for smaller $\lambda$, decisions have less ``overlap'' and as a result the greedy algorithm can perform closer to the optimum. Also, as $r \to \infty$ both the lower and upper bounds converge to $1 - \lambda$. Thus the lower and upper bounds are close to each other when $r$ is large, that is, when there is a large degree of parallelization. For full proofs of the results in this section, please refer to Appendices \ref{sec:proof-strict-monotone-lower} and \ref{sec:complete-proof-parallel-beta}.
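As a quick numerical illustration of these limits (a standalone Python sketch with illustrative values of $r$ and $\lambda$; it is not the script used to produce Figure \ref{fig:plot-beta}), the bounds in \eqref{equ:strict-monotone-parallel-bounds-graph} can be evaluated directly:
\begin{verbatim}
def lower_bound(r, lam):
    # (r - (r - 1)*lam) / (r + lam): guarantee achieved by the graph in the complement Turan form
    return (r - (r - 1) * lam) / (r + lam)

def upper_bound(r, lam):
    # (r - (r - 1)*lam) / r
    return (r - (r - 1) * lam) / r

for r in (2, 3, 5, 20):
    for lam in (0.1, 0.5, 0.9):
        print(r, lam, round(lower_bound(r, lam), 3), round(upper_bound(r, lam), 3))

# Limits discussed in the text:
r = 5
assert abs(lower_bound(r, 0.0) - 1.0) < 1e-12            # lambda -> 0: ratio tends to 1
assert abs(lower_bound(r, 1.0) - 1.0 / (r + 1)) < 1e-12  # lambda -> 1: recovers 1/(r + 1)
big_r, lam = 10 ** 6, 0.3
assert abs(lower_bound(big_r, lam) - (1 - lam)) < 1e-3   # r -> infinity: both bounds -> 1 - lambda
assert abs(upper_bound(big_r, lam) - (1 - lam)) < 1e-3
\end{verbatim}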
\section{\label{sec:level1}Introduction} In the near-wall region of turbulent shear flows, there exists a universal logarithmic velocity profile \citep{karman1930mechanische,millikan1938critical}, characterized by a constant slope between the mean velocity and the logarithm of the distance to the wall. The universal velocity log law has been supported by laboratory measurements of pipe flow \citep{mckeon2004further} and boundary layer \citep{monkewitz2007self}, atmospheric observations \citep{andreas2006evaluations}, and direct numerical simulations (DNS) \citep{lee2015direct,yamamoto2018numerical} of turbulent shear flows. When turbulent flow is influenced by buoyancy, the mean velocity may not be adequately described by the universal log law. To address the buoyancy effects, Monin-Obukhov similarity theory (MOST) was proposed to revise the universal velocity log law using stability correction functions of the distance to the wall $z$ and the Obukhov length $L$ \citep{obukhov1946turbulence} based on dimensional analysis \citep{monin1954basic}. MOST has been used to describe buoyancy corrections of the mean velocity profile and to provide velocity and momentum flux in the atmospheric surface layer, roughly the lowest $10 \%$ of the atmospheric boundary layer (ABL) \citep{stull1988introduction}, in almost all numerical weather prediction (NWP) and climate models \citep{deardorff1972parameterization,troen1986simple,holtslag1993local,louis1979parametric}. According to MOST, buoyancy corrections render the mean velocity profile non-logarithmic in stably stratified conditions, in contrast to von K\'arm\'an's universal log law. Stably stratified conditions are frequently observed over land at night \citep{mahrt1998stratified} and in polar regions of the Earth \citep{grachev2015similarity} when the air is cooled by the land surface. Stably stratified turbulence in the ABL is very difficult to represent \citep{viterbo1999representation,cuxart2006single,svensson2009analysis} in NWP and climate models \citep{holtslag2013stable,teixeira2008parameterization,svensson2011evaluation}. The turbulence representation in the ABL is especially important for the Arctic as climate change is amplified there \citep{holland2003polar}. The difficulty of representing stably stratified turbulence is partly due to the widely known failure of MOST in the atmospheric surface layer in very stable conditions \citep{mahrt1998stratified,mahrt2014stably}. In particular, MOST does not capture the mean velocity profile when buoyancy-driven stratification is significant, as was shown in field observations of the very stable ABL \citep{forrer1997turbulence,pahlow2001monin,klipp2004flux,cheng2005pathology,grachev2005stable,yague2006influence}. Based on a reformulation of MOST, Grachev et al. \cite{grachev2015similarity} proposed a similarity theory using the Dougherty-Ozmidov length scale $L_O$ \citep{dougherty1961anisotropy,ozmidov1965turbulent}, which is typically regarded as the outer scale of isotropic turbulence in stably stratified conditions \citep{gargett1984local,waite2011stratified,li2016connections,cheng2020model}. It is thus possible that some important length scales might be missing in the dimensional analysis of MOST when stably stratified turbulence is considered, as was shown in convective conditions \citep{tong2019multi,cheng2021logarithmic}.
The strength of stratification is typically described by the stability parameter $z/L$ according to MOST, where $L=\frac{u_{\tau}^3}{\frac{\kappa g }{\Theta_r}u_\tau \theta_*}$, $u_\tau$ is the friction velocity, $\kappa$ is the von K\'arm\'an constant, $g$ is the gravitational acceleration, $\Theta_r$ is a reference potential temperature, $\theta_* \equiv \frac{\nu_\theta}{u_\tau} \frac{\partial \Theta}{\partial z}\big|_{z=0}$ is a scaling temperature, and $\nu_\theta$ is the thermal diffusivity. When buoyant stratification increases, $z/L$ increases. In addition, Grachev et al. \cite{grachev2015similarity} showed that $z/L_O$ may also characterize buoyancy effects, where $L_O$ is the Dougherty-Ozmidov length scale \citep{dougherty1961anisotropy,ozmidov1965turbulent}. It is worth noting that the collapse of turbulence in stably stratified conditions can be indicated by the parameter $\frac{L}{\delta_v}=\frac{L u_\tau}{\nu}$ \citep{flores2011analysis}, where $\delta_v$ is the viscous length scale, and $\nu$ is the kinematic viscosity. $L/\delta_v$ can characterize both buoyancy effects (i.e., the inverse of the gradient Richardson number) \citep{ansorge2014global} and Reynolds number effects (i.e., the scale separation between $L$ and $\delta_v$) \citep{flores2011analysis}. We can write $\frac{L}{\delta_v}=\frac{z_i/\delta_v}{z_i/L}$, where $z_i$ is the boundary layer height. $\frac{z_i}{\delta_v} =\frac{u_\tau z_i}{\nu}$ can represent Reynolds number effects, and $\frac{z_i}{L}$ can represent buoyancy effects (e.g., in the convective boundary layer \citep{cheng2021logarithmic}). We will show that $\frac{z_i}{\delta_v}$ and $\frac{z_i}{L}$ can be used to constrain the slope of our proposed velocity log law. Recently, logarithmic temperature profiles have been reported in the near-wall regions of turbulent Rayleigh-Bénard convection \citep{ahlers2012logarithmic,grossmann2012logarithmic,ahlers2014logarithmic}, vertical natural convection \citep{holling2005asymptotic}, and the convective ABL \citep{cheng2021logarithmic}, in contrast with the breakdown of the log law predicted by MOST. Thus, the logarithmic nature of near-wall profiles does not necessarily break down under the influence of buoyancy. In this study, we aim to investigate the existence of logarithmic velocity profiles in the stably stratified ABL and the possible dependence of velocity profiles on other stability parameters (or length scales) using high-resolution DNS experiments and field observations. \section{Methods} \subsection{Direct numerical simulations} \label{loglawSec2} Large eddy simulations (LESs) \citep{moeng1984large,deardorff1972numerical,nieuwstadt1993large} have been widely used to study the ABL. However, subgrid-scale turbulence models \citep{li2018implications,mellado2018dns} may lead to uncertainties near the wall, and LESs might have difficulties in simulating strongly stratified turbulence \citep{jimenez2005large,flores2011analysis}. In addition, wall-modeled LES for the ABL usually invokes MOST \citep{moeng1984large,schmidt1989coherent,khanna1997analysis,bou2005scale,cheng2017failure}. Moreover, turbulence spectra are often not well resolved \citep{cheng2020model} since the Dougherty-Ozmidov scale is typically not resolved in LESs \citep{beare2006intercomparison,khani2014buoyancy,waite2011stratified} except possibly in a few studies \citep{sullivan2016turbulent}.
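Before turning to the DNS setup, we note for orientation how the stability parameters introduced above translate into numbers. The following minimal Python sketch uses assumed, order-of-magnitude surface values (they are not taken from the simulations or observations analyzed in this study) to evaluate $L$, $z/L$, and $L/\delta_v$, together with the widely used stable-side Businger-type correction $\phi_m \approx 1 + 4.7\, z/L$ \citep{businger1971flux} (the coefficient varies somewhat between studies) that MOST would apply at the same height.
\begin{verbatim}
# Illustrative surface-layer values (assumed; not data from this study)
kappa   = 0.4        # von Karman constant
grav    = 9.81       # m s^-2, gravitational acceleration
Theta_r = 300.0      # K, reference potential temperature
nu      = 1.5e-5     # m^2 s^-1, kinematic viscosity of air
u_tau   = 0.2        # m s^-1, friction velocity
theta_s = 0.05       # K, scaling temperature theta_*
z       = 10.0       # m, height of interest

L         = u_tau**2 * Theta_r / (kappa * grav * theta_s)  # Obukhov length (simplified from the definition)
zeta      = z / L                                          # MOST stability parameter z/L
delta_v   = nu / u_tau                                     # viscous length scale
L_over_dv = L / delta_v                                    # L / delta_v = L u_tau / nu

phi_m = 1.0 + 4.7 * zeta   # stable-side Businger-type similarity function (coefficient ~4.7)

print("L = %.1f m, z/L = %.2f, L/delta_v = %.2e, phi_m = %.2f" % (L, zeta, L_over_dv, phi_m))
\end{verbatim}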
Recently, DNS has been used to study the stably stratified ABL \citep{flores2011analysis,ansorge2014global,shah2014direct,gohari2017direct,cheng2020model}, although the Reynolds number is not as high as that in the real ABL. To obtain high-resolution velocity profiles in the near-wall region, DNSs of stably stratified Ekman layers are conducted in this study. The incompressible Navier-Stokes equations with the Boussinesq approximation are solved \citep{heerwaarden2017microhh}. Periodic boundary conditions are employed in the horizontal ($x$ and $y$) directions. Firstly, we simulate a turbulent Ekman layer flow \citep{coleman1992direct} over a smooth surface in the absence of buoyancy as in previous studies \citep{shah2014direct,gohari2017direct}. The three simulations of neutral Ekman layer flow, named ReD900, ReD1800 and ReD2700, are forced with different mean geostrophic winds. The grid points are $320 \times 320 \times 1664$ for the dataset ReD900, $640 \times 640 \times 3328$ for the dataset ReD1800, and $960 \times 960 \times 4992$ for the dataset ReD2700 in the streamwise ($x$), spanwise ($y$), and vertical ($z$) directions, respectively. The Reynolds number is $Re_D=\frac{U_g D}{\nu}$, where $U_g$ is the geostrophic wind speed, $D={\big( 2\nu/f \big)}^{1/2}$ is the laminar Ekman layer depth, $\nu$ is the kinematic viscosity, and $f$ is the Coriolis parameter. Similarly to previous experiments \citep{shah2014direct,gohari2017direct}, a neutral velocity log law of the Ekman layer is obtained after $ft=5.9$, $6.0$ and $13.6$ for ReD900, ReD1800 and ReD2700, respectively. Then we add a cooling surface buoyancy flux $B_0$ to generate various stably stratified conditions. The boundary condition for the temperature field is zero heat flux at the top of the computational domain. In the top $25\%$ of the computational domain, a sponge layer is added to prevent reflection of gravity waves \citep{nieuwstadt1993large}. The near-surface stability is measured by the normalized Obukhov length $L^+=\frac{L}{\delta_v}=\frac{u_{\tau}^3}{\frac{\kappa g }{\Theta_r}u_\tau \theta_*} \frac{u_\tau}{\nu}$. The initial value $L^+(t=0)$, computed from $u_\tau$ in the neutral Ekman layer before the cooling surface buoyancy flux $B_0$ is applied, is used to measure the strength of the imposed stratification. Details of the DNS setup can be found in Cheng et al. \cite{cheng2020model} and the code is described in van Heerwaarden et al. \cite{heerwaarden2017microhh}. We note that turbulence decays fast in the stably stratified Ekman layer \citep{gohari2017direct}; thus we only analyze the periods when $\frac{L_O}{\eta} >1 $ for the possible existence of Kolmogorov's energy cascade \citep{cheng2020model}, where $\eta=\big(\nu^3/\epsilon \big)^{1/4}$ is the Kolmogorov scale \citep{kolmogorov1941} and $\epsilon$ is the turbulent kinetic energy (TKE) dissipation rate. The friction Reynolds numbers $Re_\tau=u_\tau \delta_t/\nu$ at the selected time steps of the DNS experiments ReD900 ($L^+=160$), ReD1800 ($L^+=800$), ReD1800 ($L^+=3200$), and ReD2700 ($L^+=160$) are 861, 1208, 1026 and 3122, respectively, where $\delta_t=u_\tau/f$ is the turbulent Ekman layer length scale. Following the suggestion of Shah and Bou-Zeid \citep{shah2014direct}, we compute the boundary layer height $z_i$ in the DNS experiments as the height where the velocity maximum occurs. The eight stably stratified DNS experiments are described in Table \ref{tab:loglawdataDNS}.
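To connect the control parameters of these simulations to the nondimensional numbers reported in Table \ref{tab:loglawdataDNS}, the following short Python sketch computes the laminar Ekman depth, $Re_D$, $\delta_v$, $\delta_t$, and $Re_\tau$ from assumed inputs (the values below are illustrative, chosen only so that $Re_D$ and $Re_\tau$ are of the same order as in the table; this is not the simulation code):
\begin{verbatim}
import math

# Assumed, illustrative parameters (not the actual DNS values)
U_g   = 1.0      # geostrophic wind speed (nondimensional units)
f_cor = 1.0e-4   # Coriolis parameter
nu    = 2.0e-2   # kinematic viscosity, chosen so that Re_D = 1000
u_tau = 0.05     # friction velocity diagnosed from the surface stress

D       = math.sqrt(2.0 * nu / f_cor)   # laminar Ekman layer depth
Re_D    = U_g * D / nu                  # Ekman-layer Reynolds number
delta_v = nu / u_tau                    # viscous length scale
delta_t = u_tau / f_cor                 # turbulent Ekman layer length scale
Re_tau  = u_tau * delta_t / nu          # friction Reynolds number

print("D = %.1f, Re_D = %.0f, Re_tau = %.0f, delta_v = %.2f" % (D, Re_D, Re_tau, delta_v))
\end{verbatim}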
\begin{table} \caption[Details of the DNS set-up.]{Key parameters of the simulated stably stratified ABLs. $Re_\tau=\frac{u_\tau \delta_t}{\nu}$ is the friction Reynolds number, $u_\tau$ is the friction velocity, $\delta_t$ is the turbulent Ekman layer length scale, $\nu$ is the kinematic viscosity, $z_i$ is the boundary layer height determined from the height where maximum of velocity occurs, $L$ is the Obukhov length, $L_x$, $L_y$ and $L_z$ are the domain sizes in the $x$, $y$ and $z$ directions, respectively. $\Delta_x^+=(\Delta_x u_*)/\nu$, $\Delta_y^+$ and $\Delta_z^+$ are the spatial grid resolutions denoted by inner units in the $x$, $y$ and $z$ directions, respectively. $\kappa_{u}$ is the inverse of the velocity log law slope. The range of velocity log law is also indicated using $z^+$ and $\frac{z}{L}$.} \begin{center} \def~{\hphantom{0}} \begin{tabular}{lccccccccc} \hline \hline \makecell{DNS\\data} & \makecell{$Re_D$} & \makecell{$Re_\tau=\frac{u_\tau \delta_t}{\nu}$} & \makecell{$\frac{z_i}{L}$} & \makecell{$\Delta_x^+$ \\ ($\Delta_y^+$)} & $\Delta_z^+$ & \makecell{$L^+$} & $\kappa_{u}$ & \makecell{Log-law\\range in \\ $z^+$} &\makecell{Log-law\\range in \\ $\frac{z}{L}$} \\ [3pt] \midrule \makecell{ReD900} & 900 & 861 & 5.4 & $6.5$ &$0.82$ &160 &0.33 &$81\sim100$ &$1.01\sim1.25$ \\ \midrule \makecell{ReD900} & 900 & 806 & 2.2 & $6.3$ & $0.80$ &480 &0.28 & $59\sim70$ &$0.30\sim0.35$ \\ \midrule \makecell{ReD900} & 900 & 548 & 0.9 & $5.2$ & $0.66$ &1600 &0.28 &$83\sim94$ &$0.26\sim0.30$ \\ \midrule \makecell{ReD1800} & 1800 & 1208 & 6.6 & $3.8$ & $0.49$ &800 &0.20 &$81\sim97$ &$0.84\sim1.01$ \\ \midrule \makecell{ReD1800} & 1800 & 1088 & 2.0 & $3.6$ & $0.46$ &1600 &0.17 &$86\sim103$ &\makecell{$0.55\sim0.66$} \\ \midrule \makecell{ReD1800} & 1800 & 1026 & 3.7 & $3.5$ & $0.45$ &3200 &0.16 &$112\sim131$ &\makecell{$0.40\sim0.47$} \\ \midrule \makecell{ReD2700} & 2700 & 3122 & 42.0 & $4.1$ & $0.52$ &160 &0.24 &$100\sim121$ &\makecell{$3.04\sim3.67$} \\ \midrule \makecell{ReD2700} & 2700 & 1624 & 11.2 & $3.0$ & $0.38$ &1600 &0.15 &$102\sim122$ &\makecell{$1.16\sim1.38$} \\ \hline \hline \end{tabular} \label{tab:loglawdataDNS} \end{center} \end{table} \subsection{Field observations} A tower of 213 m has been installed in the Cabauw Experimental Site for Atmospheric Research (CESAR) \citep{apituley2008overview,bosveld2020fifty} (4.926\si{\degree} E, 51.97\si{\degree} N) in the Netherlands, where multi-level turbulence observations at 10 m, 20 m, 40 m, 80 m, 140 m and 200 m above a grass field are available in the ABL. We download a number of 30-minute data segments between 1:00 and 5:00 UTC in July 2019 from the CESAR data archive as the raw data, which has been quality controlled \citep{bosveld2020fifty}. These include velocity measurements from cup-anemometers at multiple levels and surface flux measurements from sonic anemometers at 3 m. Through detecting the top of an elevated aerosol layer, the boundary layer height $z_i$ is measured by the Lufft CHM 15k ceilometer \cite{choma2019comparative}. We calculate the average of ABL height over each 30-minute segment as the raw data. The raw data satisfying the following two conditions are further used to characterize the velocity profile: (1) the mean surface heat flux in the 30-minute sampling period has to be negative (i.e., heat transferred from air to ground); and (2) the boundary layer height $z_i$ is larger than 800 m. The first condition is used to select stably stratified ABL. 
The second condition is used to ensure that we include as many measurements as possible (especially those at 200 m) within the atmospheric surface layer by requiring a large boundary layer height $z_i$. These two conditions lead to 40 different stably stratified 30-minute periods. \section{Results} \subsection{Existence of a velocity log law} The normalized mean velocity $\frac{U}{u_\tau}$ fits a log law with $z^+ \equiv \frac{z}{\delta_v}$ in the DNS datasets (Fig. \ref{fig:loglawFig1}). The coefficient of determination $R^2$ for $\frac{U}{u_\tau}$ and $\log(z^+)$ is 1.00 for the selected vertical layer (according to Fig. \ref{fig:DNSloglawKappaU}) near the wall across the DNS datasets. Following Lee and Moser (2015) \cite{lee2015direct}, we use a plateau of $\frac{z}{u_\tau} \frac{\partial U}{\partial z}$ to more clearly indicate the existence of a velocity log law (Fig. \ref{fig:DNSloglawKappaU}). The black dashed line (plateau) in Fig. \ref{fig:DNSloglawKappaU} is used to characterize the vertical layer where the velocity log law is identified (as detailed in Table \ref{tab:loglawdataDNS}). In the identified log law layer, the variations of the turbulent momentum flux $\overline{w'u'}$, defined as $\frac{\max(\overline{w'u'})-\min(\overline{w'u'})}{\max(\overline{w'u'})}$, are $4.7 \%$ (ReD900, $L^+$=160), $10.6 \%$ (ReD1800, $L^+$=800), $5.3 \%$ (ReD1800, $L^+$=3200), and $4.3 \%$ (ReD2700, $L^+$=160), respectively. This is consistent with the definition of the constant-flux layer \citep{stull1988introduction}, where flux variations are on the order of $10\%$. This constant momentum flux zone is similar to the atmospheric surface layer, where the variation of turbulent fluxes should scale with the ratio of the atmospheric surface layer height to the ABL height \citep{wyngaard2010turbulence}, roughly $10\%$. The coexistence of a velocity log law and constant momentum flux in the DNS datasets (Fig. \ref{fig:loglawFig1}) resembles that in turbulent shear flows \citep{george2007there}. As shown in Table \ref{tab:loglawdataDNS}, the range of Reynolds number of the DNS experiments is $548 \le Re_\tau \le 3122$, or equivalently $276 \le z_i/\delta_v \le 1345$. The vertical layers of the velocity log law vary among the DNS experiments but all fall in the range $59 \le z^+ \le 122$. According to Marusic et al. \cite{marusic2013logarithmic}, the universal velocity log law of turbulent shear flows in the absence of buoyancy effects is found in the range $3 \left(z_i/\delta_v\right)^{1/2} < z^+ < 0.15 z_i/\delta_v $, which corresponds to $50 \le z^+ \le 202$ in our DNS experiments. Thus the range of the stably stratified velocity log law roughly falls within that of the neutral velocity log law, although the influence of buoyancy effects cannot be neglected in our DNS experiments. In addition, the stably stratified velocity log law is found when $\frac{z}{L} > 3$ (in ReD2700, $L^+$=160) using the MOST stability parameter. However, MOST suggests that velocity profiles will significantly deviate from a log law in such stably stratified conditions (Fig. \ref{fig:DNSloglawKappaU}d). In fact, the stability correction functions of MOST are only defined in the range $0< \frac{z}{L} <1$ in stably stratified conditions due to their poor behavior in more strongly stratified conditions \citep{foken200650}. Therefore, the proposed velocity log law is fundamentally different from MOST and can be applied to a wider range of buoyant conditions.
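As an illustration of how the slope is extracted in practice (a minimal Python sketch on a synthetic profile with an assumed $\kappa_u$; it is not the analysis applied to the DNS or Cabauw data), a least-squares fit of $U/u_\tau$ against $\ln z^+$ over the identified layer returns $1/\kappa_u$, and the neutral log-law range of Marusic et al. \cite{marusic2013logarithmic} quoted above follows from simple arithmetic:
\begin{verbatim}
import math

# Synthetic log-law profile with an assumed slope 1/kappa_u (kappa_u = 0.25 here)
kappa_u, z0_plus = 0.25, 1.0
z_plus = [60.0 + 5.0 * k for k in range(13)]                 # identified layer, z+ = 60 ... 120
U_over_utau = [math.log(z / z0_plus) / kappa_u for z in z_plus]

# Least-squares fit of U/u_tau against ln(z+); the slope estimates 1/kappa_u
x = [math.log(z) for z in z_plus]
y = U_over_utau
xbar, ybar = sum(x) / len(x), sum(y) / len(y)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
print("fitted 1/kappa_u = %.2f, kappa_u = %.2f" % (slope, 1.0 / slope))

# Neutral log-law range of Marusic et al. (2013): 3 (z_i/delta_v)^(1/2) < z+ < 0.15 z_i/delta_v
for zi_over_dv in (276.0, 1345.0):
    print(zi_over_dv, 3.0 * math.sqrt(zi_over_dv), 0.15 * zi_over_dv)
\end{verbatim}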
The slopes of the proposed velocity log law are $\frac{1}{0.82 \kappa}$ (ReD900, $L^+$=160), $\frac{1}{0.50 \kappa} $ (ReD1800, $L^+$=800), $\frac{1}{0.41 \kappa} $ (ReD1800, $L^+$=3200), and $\frac{1}{0.61 \kappa} $ (ReD2700, $L^+$=160), respectively (Fig. \ref{fig:loglawFig1}). In comparison, the slope of the universal log law for mean velocity in turbulent shear flows is constant, i.e., $\frac{1}{ \kappa} $ \citep{marusic2013logarithmic}. The variation of the slope of the proposed velocity log law is due to buoyancy effects as well as Reynolds number effects. A similar dependence of the slope on buoyancy has been found in temperature log laws in the convective boundary layers \citep{cheng2021logarithmic} and Rayleigh-Bénard convection \citep{ahlers2012logarithmic}. The slope of the proposed velocity log law will be revisited later. In addition to the DNS experiments, we analyze field observations of the stably stratified ABL in the Cabauw experiment, which are at higher Reynolds number ($1.4 \times 10^6 \le \frac{z_i}{\delta_v} \le 2.1 \times 10^7$). A linear relation is fitted between the normalized velocity $\frac{U-U_{h1}}{u_\tau}$ and $\log(z^+)$ in the four sampled periods (Fig. \ref{fig:SHEBA_loglaw}), where $U$ is the temporally averaged wind speed at heights of 20 m, 40 m, 80 m, 140 m, and 200 m in a 30-minute period, and $U_{h1}$ is the averaged wind speed at 10 m. The coefficient of determination for $\frac{U-U_{h1}}{u_\tau}$ and $\log(z^+)$ satisfies the condition that $R^2>0.88$ in all the selected 40 periods, indicating the linear relation between $\frac{U-U_{h1}}{u_\tau}$ and $\log(z^+)$ and thus the presence of a velocity log law. We also compare the velocity profile based on MOST \citep{panofsky1963determination,kramm2013hans} with field observations and find substantial deviations across stably stratified conditions in the range $0.22 \le z/L \le 15.68$ (Fig. \ref{fig:SHEBA_loglaw}), where $z=10$ m (at one level of tower observations). The field observations in the real ABL confirm the existence of the velocity log law observed in our DNS experiments. It is worth noting that few studies have investigated the logarithmic nature of the mean velocity profile in the stably stratified ABL. This is partly due to the sparse measurements in the vertical direction. In addition, large uncertainties remain in the observations of the stable ABL since instruments often operate near their threshold levels due to small turbulence intensities \citep{nieuwstadt1984turbulent,mahrt1985vertical,mahrt2006extremely,mahrt2011near}. Unlike in the DNS experiments, a plateau of $\frac{ z}{u_\tau} \frac{\partial U}{\partial z}$ can hardly be identified in field observations to support a velocity log law. Moreover, field observations are often analyzed within the framework of MOST since MOST is still regarded as the foundation of ABL turbulence theory \citep{foken200650}. \subsection{Slope of the velocity log law} \subsubsection{Dimensional analysis} Fully developed stably stratified boundary layer flow can be described by $\nu$, $u_{\tau}$, $z$, the surface buoyancy variable $\frac{g}{\Theta_r}\theta_*$, and the boundary layer height $z_i$. These variables can form three non-dimensional groups: \begin{equation} \frac{z}{L}=\frac{\kappa g \theta_*}{\Theta_r} \frac{z}{u_\tau^2}, \end{equation} \begin{equation} \frac{z_i}{L}=\frac{\kappa g \theta_*}{\Theta_r} \frac{z_i}{u_\tau^2}, \end{equation} and \begin{equation} \frac{z_i}{\delta_v}= \frac{u_\tau z_i}{\nu}.
\end{equation} The mean velocity profile can be written as \begin{equation} U=u_{\tau}F_0 \bigg(\frac{z}{L}, \frac{z_i}{L}, \frac{z_i}{\delta_v} \bigg), \end{equation} where $F_0$ is a function of $\frac{z}{L}$, $\frac{z_i}{L}$, and $\frac{z_i}{\delta_v} $. Following the argument for the velocity gradient in Pope (2000) \cite{pope2000turbulent}, $\frac{\partial U}{\partial z}$ can be written as \begin{equation} \frac{\partial U}{\partial z}=\frac{u_{\tau}}{z} \Phi \bigg(\frac{z}{L},\frac{z_i}{L},\frac{z_i}{\delta_v} \bigg). \end{equation} According to the DNS datasets, $\frac{z}{u_\tau} \frac{\partial U}{\partial z}$ is independent of $z$ in the log law region. For the existence of a velocity log law, $\Phi$ has to be independent of $\frac{z}{L}$, leading to \begin{equation} \frac{\partial U}{\partial z}=\frac{u_{\tau}}{z} \Phi \bigg(\frac{z_i}{L},\frac{z_i}{\delta_v} \bigg). \end{equation} We denote $\frac{1}{\kappa_u} \equiv \Phi \Big(\frac{z_i}{L},\frac{z_i}{\delta_v}\Big)$ and obtain \begin{equation} \frac{\partial U}{\partial z}=\frac{u_\tau}{z} \frac{1}{\kappa_u}. \end{equation} After integration from a reference height $z_r$ to $z$, we have \begin{equation} \label{eqnLogprofile} \frac{U-U_{z_r}}{u_\tau}= \frac{1}{\kappa_u} \log \Big( \frac{z}{z_r}\Big). \end{equation} Equation (\ref{eqnLogprofile}) is just the velocity log law since $\kappa_{u}$ is independent of $z$ and is a function of $\frac{z_i}{L}$ and $\frac{z_i}{\delta_v}$. This dimensional analysis points out the relevant parameters that determine the slope of the proposed velocity log law, which should be calibrated from numerical experiments or field observations. \begin{figure} \centering \includegraphics[width=16cm,clip=true, trim = 18mm 5mm 0mm 00mm]{Figure_allRe_ulogprofile_constflux_20210227_Tmean} \caption{Normalized velocity $U/u_{\tau}$ and momentum flux $-\overline{w'u'}/u_{\tau}^2$ in the vertical direction of the DNS experiments. The blue line denotes the normalized momentum flux $-\overline{w'u'}/u_{\tau}^2$ in the velocity log law region.} \label{fig:loglawFig1} \end{figure} \begin{figure} \centering \includegraphics[width=16cm,clip=true, trim = 18mm 05mm 9mm 00mm]{Figure_allRe_u_loglaw_20220707_Tmean} \caption{Vertical profiles of the normalized velocity gradient $(z/u_\tau)\,{\partial U}/{\partial z}$ (equal to $1/\kappa_{u}$) in the DNS datasets. The widely used Monin-Obukhov similarity function of Businger et al. \cite{businger1971flux}, denoted by ``MOST'', is also shown.} \label{fig:DNSloglawKappaU} \end{figure} \begin{figure} \centering \includegraphics[width=16cm,clip=true, trim = 9mm 5mm 14mm 00mm]{Figure_cabauw_U_profile_20220908_selected4periods} \caption{Normalized velocity $\frac{U-U_{h1}}{u_\tau}$ in the vertical direction of the Cabauw observations. $R^2$ denotes the coefficient of determination for $\frac{U-U_{h1}}{u_\tau}$ and $\log(z^+)$, and ${z_{10m}/L}$ denotes the stability parameter $z/L$ at the height 10 m.
``MOST'' denotes the computed velocity profiles based on Monin-Obukhov similarity theory.} \label{fig:SHEBA_loglaw} \end{figure} \begin{figure} \centering \includegraphics[width=16cm,clip=true, trim = 6mm 1mm 8mm 2mm]{Figure_kappa_u_ziL_Retau_DNS_cabauw_20220908} \caption{The ratio $\frac{\kappa_u}{\kappa}$ plotted against (a) $\frac{z_i}{L}$, and (b) $\frac{z_i}{\delta_v}$ under various stably stratified conditions in the DNS experiments and Cabauw observations.} \label{fig:DNS_SHEBA_kappau} \end{figure} \subsubsection{DNS and field observations} The DNS and Cabauw datasets suggest that $\kappa_{u}/{\kappa}$ decreases nonlinearly with increasing $z_i/L$ (Fig. \ref{fig:DNS_SHEBA_kappau}a). That is to say, as buoyancy effects increase (i.e., $z_i/L$ increases), the slope of the stably stratified velocity log law deviates more from that of the neutral channel flow. When $z_i/\delta_v$ increases from $ \sim 10^3$ (DNS datasets) to $\sim 10^7$ (field observations), $\kappa_u/\kappa$ does not seem to show a monotonic trend (Fig. \ref{fig:DNS_SHEBA_kappau}b). At high Reynolds numbers ($1.4 \times 10^6 \le z_i/\delta_v \le 2.1 \times 10^7$), $\kappa_{u}/{\kappa}$ in the Cabauw observations can be assumed to be independent of the Reynolds number, and thus buoyancy effects dominate. In comparison, the Reynolds number is $276 \le z_i/\delta_v \le 1345$ for the DNS datasets, so there is a wide separation in Reynolds number between the field observations and the DNS experiments. The ranges of the MOST stability parameter in the identified log-law layers are $0.22<z/L<313.70$ and $0.26<z/L<3.67$ for the Cabauw and DNS datasets, respectively. Therefore, both larger Reynolds numbers and stronger buoyancy effects are found in the Cabauw observations. More accurate measurements of turbulent fluxes and denser measurements of velocity in the vertical direction over a wider range of Reynolds numbers and stably stratified conditions, as well as laboratory experiments (e.g., Williams et al. \citep{williams2017effect}), will better constrain $\kappa_u$ in the ABL. \subsubsection{Asymptotic analysis} At sufficiently high Reynolds numbers, as in the field observations, we conduct an asymptotic analysis for the slope $\kappa_u$ in the ``neutral limit'' and the ``strongly stratified limit'' by neglecting Reynolds number effects. In the neutral limit (where there is no buoyancy) at sufficiently high Reynolds numbers, $z_i/L \rightarrow 0$, we expect that $\kappa_{u}/{\kappa} \rightarrow 1$ since von K\'arm\'an's universal log law is recovered. In the strongly stratified limit at sufficiently high Reynolds numbers, $z_i/L \rightarrow \infty$ and $u_\tau \rightarrow 0$, the following asymptotic relation is required to cancel out $u_\tau$: \begin{equation} \Phi \bigg( \frac{z_i}{\delta_v}, \frac{z_i}{L} \bigg) =\Phi \bigg(\frac{z_i}{L} \bigg)=\Phi \bigg( \frac{\kappa g z_i}{\Theta_r} \frac{\theta_*}{u_\tau^2} \bigg)=c_1 {\bigg( \frac{\kappa g z_i}{\Theta_r} \frac{\theta_*}{u_\tau^2} \bigg)}^{1/2} \text{, for } \frac{z_i}{L} \rightarrow \infty, \end{equation} where $c_1$ is a constant. The velocity gradient ${\partial U}/{\partial z}$ can then be rewritten as \begin{equation} \frac{\partial U}{\partial z}= c_1 \left( \frac{\kappa g z_i}{\Theta_r} \right)^{1/2} \frac{\theta_*^{1/2}}{z} \text{, for } \frac{z_i}{L} \rightarrow \infty. \end{equation} The above equation suggests that $\partial U/ \partial z \rightarrow \infty$ as ${z_i}/{L} \rightarrow \infty$ and $\theta_* \rightarrow \infty$, corresponding to extremely stratified conditions.
As $\kappa_u = 1/\Phi$, we can obtain \begin{equation} \kappa_u=\frac{1}{c_1} \bigg(\frac{z_i}{L} \bigg)^{-1/2} \text{, for } \frac{z_i}{L} \rightarrow \infty. \end{equation} The slope between $\kappa_u$ and $z_i/L$ obtained from Cabauw observations in the log-log plot is around $-0.4$ (Fig. \ref{fig:DNS_SHEBA_kappau}a), which is not exactly the same as $-1/2$ based on the above equation. However, the asymptotic analysis still qualitatively captures the relation between $\kappa_u$ and $z_i/L$ in extreme stratified conditions at sufficiently high Reynolds number. It is worth noting that the extreme condition ${z_i}/{L} \rightarrow \infty$ is not observed in the Cabauw experiments, which might also be influenced by measurement uncertainties, thus leading to the difference between the observations and asymptotic analysis. \subsection{Discussion} The proposed velocity log law in the stably stratified boundary layers can be written as $\frac{\kappa_u z}{u_\tau} \frac{\partial U}{\partial z}=1$, or equivalently, \begin{equation} \frac{\kappa z}{u_\tau} \frac{\partial U}{\partial z}=\kappa \Phi \Big( \frac{z_i}{L},\frac{z_i}{\delta_v}\Big) = \frac{\kappa}{\kappa_u} , \end{equation} where $\Phi \big( \frac{z_i}{L},\frac{z_i}{\delta_v}\big) $ is independent of $z$ and needs to be determined by numerical experiments or observations. According to MOST, the normalized velocity gradient was instead assumed to depend on $z/L$ \citep{monin1954basic}, \begin{equation} \label{eqnMOST} \frac{\kappa z}{u_\tau} \frac{\partial U}{\partial z}= \phi_m \Big( \frac{z}{L}\Big) , \end{equation} where $\phi_m$ is a stability correction function dependent on the distance to the wall $z$, thus leading to a non-logarithmic profile. The widely used Businger profile \citep{businger1971flux} for MOST is shown in Fig. \ref{fig:DNSloglawKappaU}, which is characterized by a slope rather than the observed plateau for $\frac{\kappa z}{u_\tau} \frac{\partial U}{\partial z}$. In numerical experiments, the function $\Phi \big(\frac{z_i}{L},\frac{z_i}{\delta_v} \big)$ does not depend on $z$ thus leading to a log law. In our various stably stratified DNS datasets, $\frac{\kappa z}{u_\tau} \frac{\partial U}{\partial z}$ approaches a constant that is equal to $\frac{\kappa}{\kappa_u}$ (Fig. \ref{fig:DNSloglawKappaU}), thus supporting a log law rather than MOST. In addition, the slope of the proposed velocity log law depends on $\frac{z_i}{L}$ (buoyancy effects) and $\frac{z_i}{\delta_v}$ (Reynolds number effects). Such dependence of the slope on $\frac{z_i}{L}$ and $\frac{z_i}{\delta_v}$ has also been reported in temperature profiles in the convective boundary layers \citep{cheng2021logarithmic}. \section{Conclusion} We report new logarithmic velocity profiles in the near-wall region affected by buoyancy effects, through dimensional analysis, DNS experiments, and field observations of the stably stratified boundary layers. The new velocity log law can be described by $\frac{\kappa_u z}{u_\tau} \frac{\partial U}{\partial z}=1$, where $\kappa_u$ is a function of $\frac{z_i}{L}$ (buoyancy effects) and $\frac{z_i}{\delta_v}$ (Reynolds number effects) supported by both the DNS experiments and field observations. Asymptotic analysis for the slope $\kappa_u$ has been conducted in the neutral limit and strongly stratified limit at sufficiently high Reynolds numbers. More accurate observations over a wider range of Reynolds numbers and stably stratified conditions may better constrain $\kappa_u$ in the atmosphere. 
The proposed velocity log profile may replace Monin-Obukhov similarity function for velocity in global climate models and wall models for large eddy simulations, possibly leading to more realistic predictions of weather, climate, and hydrology, especially in polar regions. \begin{acknowledgments} We would like to thank Dr. Kaighin McColl for helpful discussions. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University. We would like to thank Dr. Fred C. Bosveld and Henk Klein Baltink for the help in obtaining the field data at the Cabauw Experimental Site for Atmospheric Research (http://www.cesar-database.nl). \end{acknowledgments}
\section{Introduction} Most stars in the galactic disk are formed in dense molecular gas within giant molecular clouds (GMCs) as members of clusters (Lada \& Lada 2003). Thus, clusters must play a critical role in the origins of some of the most fundamental properties of the galactic stellar population. In addition, such clusters consist of stars of various masses, and the most massive stars exist at the centers of clusters (e.g., Raboud \& Mermilliod 1998). These features suggest that the physical environment in the cluster-forming region would vary from area to area because the mass of formed stars should be closely related to the physical environment. However, clusters do not always include massive stars. Some clusters contain only intermediate-mass and/or low-mass stars in low-mass star-forming regions (e.g., Tachihara et al. 2002). Based on these features, it appears that although the conditions of massive star formation easily satisfy the physical conditions for cluster formation, the conditions of cluster formation do not necessarily provide the physical conditions for massive star formation. The clusters appear to be continuously forming in the galactic disk and the direct study of their physical conditions and the processes leading to their formation is basically possible. However, discovering clusters at an early evolutionary stage via optical observations is difficult because such clusters are completely embedded in dense gas and dust. Therefore, observation by infrared instruments is necessary to study cluster formation. Recently, many young embedded clusters within molecular clouds have been discovered by infrared observations (e.g., Bica et al. 2003). On the basis of these observations, to investigate the physical conditions in the dense gas that forms rich clusters, dense molecular gas surveys were carried out on such young clusters that include massive stars, revealing that clusters are formed in gas systems with a size scale of a cluster ($\sim 0.3$ pc) in GMCs (e.g., Lada 1992; Lada \& Lada 2003; Saito et al. 2007) and these systems are usually called ``clumps.'' However, a few very compact gas systems having a size scale of $\sim 0.03$ pc similar to typical (cold) cores in low-mass star-forming regions (e.g., Onishi et al. 2002; Umemoto et al. 2002) have been discovered with very high-angular-resolution observations using an interferometer on young clusters that include massive (proto)stars (e.g., Churchwell et al. 1992; Olmi et al. 1993). These systems have high temperatures ($> 100$ K), a high H$_2$ density ($10^6$--$10^7$ cm$^{-3}$), and a high luminosity ($> 10^4$ $L_{\sun}$). These results indicate that these systems called ``hot cores,'' would include massive protostar(s) or massive young star(s). Generally, not a large number of stars form in a core even in cluster-forming regions (e.g., van der Tak et al. 2005). Hence, many cores must form in one clump to create a cluster with a large number of stars. If this multiple-core system is formed by the fragmentation of the inner structure of the clump, a close relationship should exist between the physical parameters of the clump and the characteristics of the formed cluster. Saito et al. (2007) carried out molecular line observations using the Nobeyama 45-m telescope directed toward the clumps in several cluster-forming regions and suggested that a good relationship indeed exists between the average H$_2$ density of the clumps and the stellar number density of formed clusters in these clumps. 
In addition, they found a good correlation between the line width of the clumps and the maximum mass of formed stars. Saito et al. (2006) investigated the cores in three cluster-forming regions and found two types of cores in a single clump. The first type was a core with a large mass and a large turbulent motion, and the second type was a core with a small mass and a small turbulent motion. They found that massive (proto)stars are associated with the cores with a large mass and large turbulent motion. This suggests a theoretical dependence between the mass of the star formed in a core and the turbulent motion in the core (e.g., McKee \& Tan 2003). According to the suggestion by McKee \& Tan (2003), the results of Saito et al. (2006) indicate that massive and low-mass stars can be formed from one clump. Such observations suggest that to understand the mechanism of cluster formation, it is important to elucidate the relationships between the physical properties of cores and physical condition of the clump that includes the cores, and between the mass of the star formed in a core and the physical condition of the core. If the cores in a clump are formed by gravitational instability as the theory suggests, a strong connection must exist between the spatial distribution of the cores in the clump and the inner structure of the clump. However, no detailed relationships between the cores and the clumps have been demonstrated. In addition, although young clusters have a tendency to concentrate massive stars in the center, whether massive cores with large turbulent motion are formed in the center of the clump remains unclear. Therefore, we must observe several clumps to obtain detailed physical parameters of both clumps and cores and reveal the relationships between the cores and the clumps themselves. To detect the cores in the clump, we must uncover the internal motion and the distribution of the H$_2$ column density in the clump. Thus, we need to carry out molecular line observations suitable for revealing velocity structures of the dense gas in the clump. Such observations are important to find a protocluster that consists of multiple cores --- a ``core cluster.'' We carried out C$^{18}$O observations toward nine IRAS point sources with young clusters containing massive stars. The C$^{18}$O line is a good candidate for tracing a large dynamic range in mass and column density in the dense gas because this line has a thin optical depth and a low critical density. In general, the C$^{18}$O line cannot be used to probe high-density regions ($> 10^{4}$ cm$^{-3}$) in starless cores because CO molecules are depleted onto dust grains in such a cold region with a temperature lower than the CO sublimation temperature of $\sim 20$ K (e.g., Caselli et al. 1999; Bacmann et al. 2002). However, previous CO observations have revealed that CO lines trace high-density regions ($> 10^{4}$ cm$^{-3}$) well with high temperatures of $\gtrsim 20$ K (e.g., Momose et al. 1998; Sch\"{o}ier et al. 2004). Therefore, if the temperatures of high-density regions ($> 10^{4}$ cm$^{-3}$) are higher than $\sim 20$ K, C$^{18}$O lines can be used to probe these regions. Typically, the regions where massive stars are formed in GMCs maintain high temperatures of $\gtrsim 20$ K, and thus C$^{18}$O molecules would not be depleted in such regions. In fact, the structures identified by the interferometric observations toward massive star-forming regions have a high average density of $> 10^{5}$ cm$^{-3}$ (e.g., Hunter et al. 
2004; Saito et al. 2006). Therefore, C$^{18}$O observations can be used to reveal the distribution, mass, and kinetic motion of the dense gas in cluster-forming regions if this line is optically thin enough. The identification of the star-forming and starless cores requires observation with high-angular resolution of several arc-seconds because massive star-forming regions are generally located at a distance of several kpc. However, interferometric observation of C$^{18}$O cores in massive star-forming regions has rarely been performed. To find dense cores in the clumps with cluster formation, we carried out C$^{18}$O molecular line observations with high-angular resolution toward nine cluster-forming regions using the Nobeyama Millimeter Array (NMA). The physical conditions of the clumps in these regions were already derived by Saito et al. (2007). Results of the interferometric observations toward three of these nine regions were already reported in Saito et al. (2006). Here we present the results of the observations toward the other six regions. We present our millimeter interferometric observations in $\S$ 2. In $\S$ 3, the observational results are presented for millimeter continuum and C$^{18}$O molecular line. We discuss the implications of our results for the physical condition of the dense cores in $\S$ 4. In $\S$ 5, we summarize the main results of our study. \section{Observations} \subsection{Samples} To compare the physical conditions in the dense gas around young clusters containing young massive stars with the characteristics of the clusters, we need to select targets at similar evolutionary stages and distances and with similar bolometric luminosities, because the observed physical parameters depend on both the characteristics of formed stars (e.g., Saito et al. 2001) and the spatial resolutions. In addition, the evolution of an embedded stellar cluster is sensitively coupled to the evolution of its surrounding gas (e.g., Lada \& Lada 2003). Carpenter et al.\ (1990), Zinchenko et al.\ (1994, 1997, 1998), and Sridharan et al.\ (2002) carried out observations with low-angular resolution ($\sim 60 \arcsec$) toward luminous IRAS point sources associated with H$_2$O/CH$_{3}$OH masers and/or compact H\,{\footnotesize II} regions in the northern sky and identified 93 massive clumps with a size of $> 1.0$ pc. H$_2$O/CH$_{3}$OH masers and/or compact H\,{\footnotesize II} regions serve as the indicators of very young massive stellar objects in massive star-forming regions. We selected 14 IRAS point sources from these clumps on condition that the luminosity of the IRAS source is several times $10^{4}$ $L_{\sun}$, the distance is $\sim $2 -- 4 kpc, and the source is associated with both the indicators of very young massive stellar objects and NIR clusters. We observed nine of these where the physical parameters of the dense clumps were already derived by single-dish observations with high-angular resolution of $\sim 15 \arcsec$ (Saito et al. 2007). In this paper, we report the observational results obtained toward six objects except for three objects that were already reported by Saito et al. (2006). These IRAS sources are associated with the indicators of very young massive stellar objects: (1) (U)C H\,{\footnotesize II} region(s) for IRAS\,05375+3540 (e.g., Felli et al. 2006), IRAS\,06117+1350 (Turner \& Terzian 1985), IRAS\,19442+2427 (e.g., Barsony 1989), and IRAS\,19446+2505 (e.g., G\'{o}mez et al. 1995). 
(2) very weak centimeter continuum sources for IRAS\,06056+2131 and IRAS\,06061+2151 (Kurtz et al. 1994). This indicates that these massive (proto)stars are at a similar evolutionary stage within several times $10^{5}$ yr. Table 1 lists the position, distance, and luminosity of each source. \subsection{Nobeyama Millimeter Array (NMA) Observations} The observations were carried out from November 2002 through May 2005 with the six-element Nobeyama Millimeter Array (NMA) in the C$^{18}$O ($J\!=\!1$--$0$; 109.782 GHz) line, and the 98 GHz and 110 GHz continuum. The 8-h tracks were obtained in the most compact (designated as D) and intermediate (designated as C) configurations. The projected antenna baseline length ranged from 13 m to 163 m, yielding a typical resolution of $\sim 4 \arcsec$ when natural weighting is applied to all visibility data. The resolution of each observation is given in Table 2. At 110 GHz, the primary beam of the 10-m telescope is about $62 \arcsec$. We used the SIS receivers, whose system noise temperatures in single sideband were $\sim 400$ K (including the atmosphere toward the zenith). Digital correlators UWBC and FX were configured for 1024 MHz and 32 MHz bandwidths with 128 and 1024 channels per baseline, respectively, yielding equivalent velocity resolutions of 44 km s$^{-1}$ and 0.1 km s$^{-1}$ at 110 GHz, respectively. The UWBC can provide both the upper sideband (110 GHz) and the lower sideband data (98 GHz). The gain calibrations were carried out for each upper and lower sideband every 10--15 min by observing quasars 0528+134, 0552+398, and 2023+336. The bandpass calibration was performed with 3C454.3 and 3C84. Uranus was observed at 3 mm to estimate the absolute flux scale. The uncertainties in the flux scale are about 15\%. The visibility data were calibrated using the NRO software package UVPROC2, and final maps were made using the CLEAN algorithm of the Astronomical Image Processing System (AIPS) by the National Radio Astronomy Observatory (NRAO\footnote{The NRAO is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation.}). The 3-mm continuum maps are constructed from line-free channels of a 44 km s$^{-1}$ wide window. The uncertainties of the positions in the maps are less than about $1 \arcsec$. The noise levels of the natural-weighted maps are summarized in Table 2. \subsection{Definition of C$^{18}$O cores} We defined a compact structure, a core, from the distribution of C$^{18}$O intensity to estimate physical properties of the dense condensations. We used the following procedures, similar to those adopted by Saito et al. (2007): (1) To identify the velocity components in the observed fields, we checked the C$^{18}$O molecular spectrum. If the C$^{18}$O spectrum had two or more line features and the separations among their radial velocities were larger than 0.5 km s$^{-1}$, each line feature was attributed to a different velocity component. (2) We made a C$^{18}$O integrated intensity map for each velocity component. (3) We searched for the i-th intensity peak (peak-i) outside previously defined cores (i.e., core[1], core[2], ..., core[i-1]) in the C$^{18}$O integrated intensity map. (4) We drew a contour at the half-level of the intensity value at peak-i and defined the region enclosed by the half-intensity contour as a core[i].
(5) In cases when sub-peaks exist within the core[i], if the intensity at a sub-peak is larger than 6 $\sigma$ above the hollow between it and peak-i, we excluded the region around the sub-peak by splitting the core[i] along the hollow. (6) In cases when previously defined cores (i.e., core[1], ..., core[i-1]) exist within the boundary of the core[i], if the intensity at peak-i is larger than 6 $\sigma$ above the hollow between peak-i and the peaks of the previously defined cores, we excluded the region around the peaks of the previously defined cores by splitting the core[i] along the hollow. Otherwise, the core[i] was canceled. (7) We searched for the next ([i+1]-th) intensity peak outside the cores. (8) We repeated the procedures after procedure (4) until the value of the intensity of the local peak went below the 6 $\sigma$ noise level. We identified 171 C$^{18}$O molecular cores using these procedures. The observed properties of the C$^{18}$O cores, such as the absolute peak temperature, $T^{\ast}_{\rm R}$, the radial velocity, $V_{\rm LSR}$, the FWHM line width, $\Delta V$, at the peak position, and the composite line width, $\Delta V_{\rm com}$, of the core are summarized in Table 3. Here, the composite line width is the FWHM line width of the composite spectrum made by averaging all spectra within the core. The typical uncertainties of the temperature, radial velocity, and line width in the Gaussian fits are 0.1 K, 0.1 km s$^{-1}$, and 0.1 km s$^{-1}$, respectively. \subsection{Definition of the Millimeter Continuum Sources} We next defined the millimeter continuum sources (MCSs) from the continuum images to estimate the physical properties of the very dense regions and to identify the positions of the massive (proto)stars. Therefore, we identified MCSs except the known H\,{\footnotesize II} regions from the 98 GHz and 110 GHz continuum maps using procedures (3)--(8) adopted by the C$^{18}$O core definition. We also stipulated that the MCSs were detected at both the 98 GHz and the 110 GHz continuum bands. As a result, we identified 13 MCSs and detected six known (U)C H\,{\footnotesize II} regions. The observed parameters of these sources are summarized in Table 4. \section{Results} \subsection{Derivation of Physical Properties of the Cores} The goal of this study was to clarify physical processes of both cluster formation and massive star formation by investigating the indispensable physical characteristics of the dense gas that forms young clusters. In this paper, we defined a clump as a fundamental structure in which clusters are formed and defined a core as a fundamental structure in which individual stars in the cluster are formed. Previous studies (e.g., Umemoto et al. 2002; Saito et al. 2006, 2007) indicated that the clumps have a size scale of $\sim$ 0.3 pc and the cores have a size scale of $\sim 0.03$ pc. The physical quantities of the dense gas in the cluster-forming regions, including massive star formations, are derived from the C$^{18}$O-line emission. \subsubsection{C$^{18}$O line analysis} Generally, the C$^{18}$O molecule is chemically stable and the optical depth of C$^{18}$O ($J\!=\!1$--$0$) is very thin. Thus, this line is very useful for estimating the H$_2$ column density. 
Here, we calculate the C$^{18}$O optical depth and the H$_2$ column density under the local thermodynamic equilibrium (LTE) condition with the excitation temperature, $T_{\rm ex}$, estimated from the $^{12}$CO peak temperature of $T^{\ast}_{\rm R}$($^{12}$CO) = 31 -- 56 K at the position of each maser source (Barsony 1989; Wouterloot \& Brand 1989; Zinchenko et al.\ 1998; White \& Fridlund 1992). We assume that the excitation temperature of each core is the same as that at the position of the maser source in each region. In these cases, the excitation temperatures, $T_{\rm ex}$, were estimated to be 35 -- 60 K. With these excitation temperatures, the C$^{18}$O optical depth, $\tau$(C$^{18}$O), and C$^{18}$O column density, $N$(C$^{18}$O), were estimated using \begin{eqnarray} \tau({\rm C^{18}O}) = -\ln\Biggm\{1 - \frac{T^{\ast}_{\rm R}({\rm C^{18}O})}{5.27}\left[\frac{1}{\exp(5.27/T_{\rm ex})-1} - 0.166\right]^{-1}\Biggm\}, \end{eqnarray} and \begin{eqnarray} N({\rm C^{18}O}) = 2.42\times 10^{14}\times \Biggm\{\frac{\tau({\rm C^{18}O})\Delta V({\rm km\ s^{-1}})T_{\rm ex}} {\left[1 - \exp(-5.27/T_{\rm ex})\right]}\Biggm\} ({\rm cm^{-2}}) \end{eqnarray} (e.g., Rohlfs \& Wilson 2004), where $T^{\ast}_{\rm R}$(C$^{18}$O) is the C$^{18}$O radiation temperature in kelvins. We calculated the H$_2$ column density, $N$(H$_2$), assuming a C$^{18}$O-to-H$_2$ abundance ratio of $1.7 \times 10^{-7}$ (Frerking et al.\ 1982). The C$^{18}$O optical depth and H$_2$ column density at the intensity peak of the cores range from 0.02 to 0.19 and from $0.8 \times 10^{22}$ to $2.4 \times 10^{23}$ cm$^{-2}$, respectively. From these results, we confirmed that the C$^{18}$O line is optically thin. \subsubsection{The Physical Parameters of the Cores} First, we estimated the line width and the radius of the cores. We regard the line width of the composite line profile as the line width of the core. The line width of the cores ranges from 0.43 to 3.33 km s$^{-1}$ with a mean line width of 1.08 km s$^{-1}$. We estimated the radius of the cores assuming that the cores are spherical. We defined the radius of the core, $R_{\rm core}$, as the deconvolved radius and calculated it as \begin{eqnarray} R_{\rm core}=\sqrt{\frac{S - {\rm beam\ area}}{\pi}}, \end{eqnarray} where $S$ and beam area are the area inside the core and the area of the synthesized beam, respectively. The radius ranges from $0.010$ to $0.090$ pc, with a mean radius of $0.035$ pc. We selected the cores with a ratio of the deconvolved radius to the observed radius of $> 0.5$, i.e., the cores resolved by the present observations. Note that we call the radius calculated from ($S/\pi$)$^{1/2}$ the ``observed radius.'' Hereafter, we use only these cores with reliable radii for discussions involving the radius. Next, we estimated the LTE mass, $M_{\rm core}$, and the average H$_2$ density, $n$(H$_2$)$_{\rm core}$, of the cores. The LTE mass of the cores was derived by integrating the H$_2$ column density over the core. We assumed a mean molecular weight per H$_2$ molecule of 2.8 by taking into account a relative helium abundance of 25\% by mass. The average H$_2$ density of the core was derived by assuming both a spherical shape and a uniform density, i.e., by dividing the mass by the core volume, $(4/3)\, \pi R_{\rm core}\,^3$. The LTE mass and the average H$_2$ density of the cores range from 0.5 to 54.1 $M_{\sun}$ and $0.6 \times 10^{5}$ to $2.1 \times 10^{6}$ cm$^{-3}$, respectively.
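For reference, the LTE estimates of equations (1)--(3) can be evaluated numerically as in the following sketch (a hedged illustration in Python; the input values are placeholders chosen within the quoted ranges, not entries of Tables 3--5):
\begin{verbatim}
import numpy as np

# Hedged sketch of the LTE analysis of Sec. 3.1.1 (Eqs. 1-3).
# All input numbers below are illustrative placeholders.

def tau_c18o(T_R, T_ex):
    """C18O (J=1-0) optical depth, Eq. (1); T_R and T_ex in K."""
    return -np.log(1.0 - (T_R / 5.27)
                   / (1.0 / (np.exp(5.27 / T_ex) - 1.0) - 0.166))

def N_c18o(tau, dV_kms, T_ex):
    """C18O column density in cm^-2, Eq. (2); dV is the FWHM width in km/s."""
    return 2.42e14 * tau * dV_kms * T_ex / (1.0 - np.exp(-5.27 / T_ex))

def N_h2(N_c18o_val, abundance=1.7e-7):
    """H2 column density assuming the C18O abundance of Frerking et al. (1982)."""
    return N_c18o_val / abundance

def R_core_pc(S_pc2, beam_pc2):
    """Deconvolved core radius in pc, Eq. (3); areas in pc^2."""
    return np.sqrt((S_pc2 - beam_pc2) / np.pi)

# Example with assumed numbers:
T_ex, T_R, dV = 40.0, 2.0, 1.1              # K, K, km/s
tau = tau_c18o(T_R, T_ex)                   # ~0.06
print(tau, N_h2(N_c18o(tau, dV, T_ex)))     # column density ~3e22 cm^-2
\end{verbatim}
The LTE mass then follows by integrating $2.8\,m_{\rm H}\,N({\rm H_2})$ over the projected area of the core, and the average density by dividing this mass by $(4/3)\,\pi R_{\rm core}^{3}$, as described above.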
Finally, we estimated the virial mass, $M_{\rm vir}$, of the cores. The virial mass is one of the key quantities by which we can evaluate the equilibrium state of the cores. The virial mass of the core is derived using the following equation, assuming an isothermal, spherical, and uniform density distribution with no external pressure or magnetic forces: \begin{equation} M_{\rm vir} = \frac{5\ R_{\rm core} (\Delta V_{\rm com})^{2}}{8\ G \ln 2}, \end{equation} where $G$ is the gravitational constant. The virial mass of the cores ranges from 0.7 to 143.4 $M_{\sun}$. These physical properties are summarized in Table 5. \subsection{Star-forming Cores and Starless Cores} Here, we distinguish starless cores and star-forming cores to investigate the star-formation activity in C$^{18}$O cores in cluster-forming regions. We compared the cores with near-infrared sources (2MASS\footnote{The Two Micron All Sky Survey (2MASS) is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation (http://pegasus.phast.umass.edu).} sources), MCSs, and centimeter continuum sources (CCSs) as indicators of young stellar objects to reveal star formation in the cores. In particular, we selected the 2MASS sources detected in at least two bands with $A_V > 10$ mag among the sources associated with the clumps (Saito et al. 2007) because sources embedded in the dense gas must have a large extinction, given that the typical peak H$_2$ column density of the C$^{18}$O cores is more than $1 \times 10^{22}$ cm$^{-2}$. Note that the 2MASS sources identified by Saito et al. (2007) include young stellar objects (YSOs), which are the class I--III type sources, and background sources. Here, we define two objects as associated when the intensity peak of one object is located inside the boundary of the other. Note that the cores associated with only MCSs are possibly very massive and dense starless cores. First, we compared the 13 MCSs with CCSs and 2MASS sources. Three of these are associated with both 2MASS sources and CCSs, and four of the 13 MCSs are associated only with 2MASS sources. Only one of the 13 MCSs, MCS B for IRAS\,06056+2131, is associated only with CCSs and is seen as a dark lane in the $K_{\rm s}$-band image in Figures 2(b) and (c). The flux density of the CCSs associated with MCSs corresponds to that of the H\,{\footnotesize II} region formed by a B2--3 star or of the ionized stellar wind formed by a B0--1 star (e.g., Panagia 1973; Felli \& Panagia 1981). Therefore, the CCSs associated with MCSs would be caused by the activity of massive YSOs. We also derived the spectral indices of the MCSs from the 98 GHz and 110 GHz data in Table 6. The spectral index of the 8 MCSs with 2MASS sources and/or CCSs ranges from 2.2 to 4.6. This indicates that the millimeter continuum emission from these MCSs is dominated by thermal dust radiation. Thus, the four MCSs with CCSs indicate the existence of young massive (proto)stars. Three of the MCSs associated only with 2MASS sources, i.e., all except MCS A for IRAS\,19442+2427, have a strong intensity peak in millimeter continuum emission, and the 2MASS sources associated with these MCSs have a brightness corresponding to an early B type or earlier star. Therefore, these three MCSs would include early B type or earlier (proto)stars.
Indeed, two of these MCSs, MCS A for IRAS\,05375+3540 and MCS A for IRAS\,06117+1350, have a high luminosity of $\sim 1 \times 10^{3}$ $L_{\sun}$ (corresponding to a B3--4 star) and $\sim 7 \times 10^{4}$ $L_{\sun}$ (corresponding to an O9.5 star), respectively (Eiroa et al. 1994; Felli et al. 2004). Although the 2MASS source associated with MCS A for IRAS\,19442+2427 has a brightness corresponding to an early B star, this MCS has no strong intensity peak in millimeter continuum emission and the total integrated continuum emission from the MCS is very weak. In addition, although this MCS has a large H$_2$ column density ($\sim 10^{23}$ cm$^{-2}$), the associated bright 2MASS source does not show a large extinction ($A_V < 30$). Thus, this 2MASS source would be located in front of the MCS. However, four of the five MCSs with neither 2MASS sources nor CCSs, MCSs A, B, and C for IRAS\,19446+2505 and MCS C for IRAS\,19442+2427, have a flux density at 110 GHz similar to or smaller than that at 98 GHz, and the spectral indices of these four MCSs are in the range between $-2.9$ and $+0.6$ in Table 6. In particular, the three MCSs for IRAS\,19446+2505 are located at the edge of the optical H\,{\footnotesize II} region, S88\,B. Thus, we regard these four MCSs as consisting of only free-free emission. Note that MCS B for IRAS\,19446+2505, with a large negative index of $-2.9$, is more diffusely extended than the other MCSs. Generally, it is probable that the missing flux of the 110 GHz observation is larger than that of the 98 GHz observation for a diffuse extended source because the minimum $uv$ distance of the 110 GHz observation is longer than that of the 98 GHz observation. Therefore, MCS B would have such a negative index. Thus, we regard this MCS as a source consisting of only free-free emission. The other MCS with neither 2MASS sources nor CCSs, MCS B for IRAS\,19442+2427, has a strong intensity peak and a large spectral index of $\sim 3.7$, which is similar to that of the other massive-star-forming MCSs. Hence, in this paper, we regard this MCS as a source with massive protostars. Next, we compared the 171 C$^{18}$O cores with MCSs, 2MASS sources, and CCSs. Eight of the cores are associated with both MCSs and 2MASS sources and two of the 171 cores are associated only with MCSs. In addition, 19 of the 171 cores are associated only with 2MASS sources. From this result, all MCSs except for the four MCSs classified as F.F. (see Table 6) are associated with C$^{18}$O cores. However, cores N and R for IRAS\,19442+2427 are associated with the same 2MASS source and the same MCS, MCS A. The intensity distribution of the MCS corresponds to the part of these cores with a high H$_2$ column density in Figures 5(b) and (d). The MCS would thus be associated with the high-density parts of these cores. Therefore, cores N and R for IRAS\,19442+2427, together with MCS A, whose emission consists of cold dust emission, would be starless cores rather than cores with massive YSOs. From these results, we regard the 27 cores associated with 2MASS sources and/or MCSs, except for cores N and R for IRAS\,19442+2427, as star-forming cores. In particular, we regard the eight star-forming cores associated with MCSs that include massive protostars as massive-star-forming cores. Therefore, we identified 27 star-forming cores, including eight massive-star-forming cores. These results are summarized in Table 5.
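For reference, the two-band spectral indices used in the classification above (Table 6) follow directly from the 98 GHz and 110 GHz flux densities; a hedged sketch (the flux values below are illustrative only, not entries of Table 6):
\begin{verbatim}
import numpy as np

# Two-point spectral index alpha, defined through S_nu ~ nu^alpha,
# between the 98 GHz and 110 GHz continuum bands.

def spectral_index(S98, S110, nu1=98.0, nu2=110.0):
    return np.log(S110 / S98) / np.log(nu2 / nu1)

# Thermal dust emission rises steeply with frequency (alpha ~ 2-4),
# whereas optically thin free-free emission is nearly flat (alpha ~ -0.1).
print(spectral_index(20.0, 30.0))    # ~ 3.5 : dust-like
print(spectral_index(20.0, 19.8))    # ~ -0.1: free-free-like
\end{verbatim}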
Note that the NIR sources associated with these star-forming cores are not necessarily (proto)stars themselves, and the NIR emission may instead be a reflection nebula illuminated by one or more central heating source(s). In addition, it is possible that some of the cores cataloged as starless are associated with mid-IR sources. \subsection{Mass of the Continuum Sources} Generally, the 98 and 110 GHz continuum emissions consist of free-free emission and/or thermal dust emission. First, we derived the flux of the thermal dust component in the MCSs with CCSs. The flux densities of the CCSs are much smaller than the total fluxes of the MCSs at millimeter wavelengths, as shown in Table 4. The spectral indices at 100 GHz of UC H\,{\footnotesize II} regions and ionized winds are usually $\sim -0.1$ and $\sim 0.5$ (e.g., Panagia \& Felli 1975), respectively. We assumed that these CCSs consist of free-free emission with a spectral index of 0 from an optically thin H\,{\footnotesize II} region or ionized winds, and we estimated the flux of the thermal dust component in the MCSs by subtracting the contribution of the free-free emission from the total flux of the MCSs. Next, we estimated the physical properties of the MCSs. The deconvolved radii of the MCSs, $R_{\rm d}$, are calculated using equation (3). The estimated radius of the MCSs ranges from 0.018 to 0.110 pc. In particular, the radius of the MCSs associated with C$^{18}$O cores ranges from 0.018 to 0.055 pc and is smaller than that of the associated cores ($R_{\rm d}$/$R_{\rm core} \sim 0.8$). In addition, assuming optically thin thermal dust emission at 100 GHz, we estimated the mass of the MCSs using \begin{equation} M_{\rm d} = \frac{F_{\nu}\,D^{2}}{\kappa_{\nu}B(\nu,T_{\rm d})}, \end{equation} where $M_{\rm d}$ is the gas and dust mass of an MCS, $F_{\nu}$ is the total flux of the MCS, $B$($\nu,T_{\rm d}$) is the Planck function with the assumed dust temperature, $T_{\rm d}$, and $\kappa_{\nu} = \kappa_{\rm 230 GHz}(\frac{\nu}{\rm 230 GHz})^{\beta}$ is the dust opacity per gram with $\kappa_{\rm 230 GHz} = 0.005$ cm$^{2}$ g$^{-1}$ (Preibisch et al.\ 1993), which assumes a standard gas-to-dust mass ratio of 100. Here, we assume that the dust opacity index, $\beta$, of the dust in the MCSs is 1.5 and that the dust temperature is equal to the temperature of the gas. To estimate the masses of the MCSs, we first need to determine their temperatures. The temperature of MCSs A and B for IRAS\,05375+3540 was estimated by Felli et al. (2004) using CH$_3$CN lines, and we use this temperature as the dust temperature of these MCSs. Next, because no observation has been carried out toward the other MCSs using molecular lines that trace the gas temperature, such as CH$_{3}$CN and NH$_{3}$, we estimated the temperatures of the MCSs with massive star formation to be $\sim 54$ -- $77$ K by applying the Stefan--Boltzmann law to the luminosities of the (proto)stars associated with the MCSs. If we estimate the temperature of MCSs A and B for IRAS\,05375+3540 using the same method, a temperature of $\sim 30$ K is obtained, which is similar to the temperature derived from the molecular lines described above. Finally, concerning MCS A for IRAS\,19442+2427, which shows no massive star formation, we use the excitation temperature estimated from the $^{12}$CO peak temperature as the dust temperature. We estimated the masses of the MCSs using these dust temperatures; the masses range from $\sim 2.7$ to $111$ $M_{\sun}$.
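As a numerical reference for equation (5), the following hedged sketch evaluates the dust mass in cgs units (the flux density, distance, and dust temperature are assumed placeholders, not values from Table 4):
\begin{verbatim}
import numpy as np

# Hedged sketch of the dust-mass estimate, Eq. (5), in cgs units.

h, k_B, c  = 6.626e-27, 1.381e-16, 2.998e10   # Planck, Boltzmann, light speed
M_sun, pc  = 1.989e33, 3.086e18               # solar mass [g], parsec [cm]

def planck(nu, T):
    """Planck function B(nu, T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def kappa(nu, beta=1.5, kappa230=0.005):
    """Dust opacity per gram of gas+dust, kappa_230GHz*(nu/230GHz)^beta."""
    return kappa230 * (nu / 230e9)**beta

def dust_mass(F_Jy, D_pc, nu, T_d, beta=1.5):
    """Gas+dust mass in M_sun for optically thin thermal dust emission."""
    F = F_Jy * 1e-23          # Jy -> erg s^-1 cm^-2 Hz^-1
    D = D_pc * pc             # pc -> cm
    return F * D**2 / (kappa(nu, beta) * planck(nu, T_d)) / M_sun

# Example with assumed numbers: 100 mJy at 98 GHz, D = 2 kpc, T_d = 50 K
print(dust_mass(0.1, 2.0e3, 98e9, 50.0))   # ~1e2 M_sun
\end{verbatim}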
We estimated the average H$_2$ densities of the MCSs using their radii and masses; the average H$_2$ density ranges from $1.0 \times 10^{6}$ to $8.1 \times 10^{6}$ cm$^{-3}$. \subsection{Individual Regions} The integrated intensity maps of C$^{18}$O as well as various kinds of objects related to star formation are given in Figures 1 -- 6. For each region, the distribution of the C$^{18}$O clumps, which were taken from Saito et al. (2007), the IRAS sources, H$_2$O masers, H\,{\footnotesize II} regions, and the primary beam area of the present NMA observations are given in panel (a). In addition, the physical properties of the clumps are summarized in Table 7. The integrated intensity distribution of C$^{18}$O in the clumps is compared with the 98 GHz or 110 GHz continuum image in panel (b). In panel (c), the C$^{18}$O distribution is overlaid on the 2MASS $K_{\rm s}$-band image. The distributions of the C$^{18}$O cores, CCSs, and MCSs are given in panel (d). The strong peaks in the continuum images indicate massive (proto)stars embedded in dense gas, and the $K_{\rm s}$-band sources associated with cores indicate young stars or protostars. In addition, although the missing flux in the NMA primary beam area of each region is roughly estimated to be $\sim 70$\% using the single-dish observation data (Saito et al. 2007), the missing flux toward the present C$^{18}$O cores is as small as $\sim 30$\%. This suggests that the missing flux of the present observations is caused by structures more extended than those measured here. Since the interferometer filters out the extended emission and the C$^{18}$O observations can detect the dense gas, the present observations should reliably detect compact cores in a clump. In the following sections, we present descriptions of the individual regions. \subsubsection{IRAS\,05375+3540 (GL\,5162, S235A/B)} In Figure 1, we identified 21 C$^{18}$O cores and two MCSs in three clumps, clumps B, C, and D, identified by Saito et al. (2007). In addition, we detected free-free emission from a part of the H\,{\footnotesize II} region, S235 A (e.g., Felli et al. 2006), in the present continuum observations. C$^{18}$O core J is associated with MCS A and a bright 2MASS source. Although this 2MASS source is associated with the intensity peak of MCS A, the intensity peak of core J is offset by $\sim 4 \arcsec$ from this 2MASS source. C$^{18}$O core H is associated with MCS B, a bright 2MASS source, and a CCS (Felli et al. 2006). \subsubsection{IRAS 06056+2131 (GL\,6336s)} In Figure 2, we identified 19 C$^{18}$O cores and two MCSs in clump A identified by Saito et al. (2007). C$^{18}$O core E is associated with MCS A and a bright 2MASS source. Although the intensity peak of MCS A is associated with this 2MASS source, the intensity peak of core E is offset by $\sim 6 \arcsec$ from this 2MASS source. C$^{18}$O core J is associated with MCS B and two CCSs, and the intensity peak of this core is associated with a dark lane in the $K_{\rm s}$-band image. \subsubsection{IRAS 06061+2151 (GL\,5182)} In Figure 3, we identified 24 C$^{18}$O cores and two MCSs in clump B identified by Saito et al. (2007). C$^{18}$O core O is associated with MCS B, two CCSs, and two bright 2MASS sources. The intensity peak of this core is associated with one of these CCSs and is located between these 2MASS sources. C$^{18}$O core P is associated with MCS A, a CCS, and a bright 2MASS source. The intensity peak of core P is offset by $\sim 2 \arcsec$ from this CCS.
In addition, this CCS has a larger flux density ($\sim 3.4$ mJy) than that ($\sim 0.6$ mJy) of the CCSs associated with core O. \subsubsection{IRAS 06117+1350 (GL\,902; S269)} In Figure 4, we identified 28 C$^{18}$O cores and one MCS in two clumps, clumps C and D identified by Saito et al. (2007). We also detected the free-free emission from a part of the H\,{\footnotesize II} region, S269 (e.g., Turner \& Terzian 1985), by the present continuum observation. The intensity peak of C$^{18}$O core M is located at both an intensity peak of MCS A and the position of a bright 2MASS source, IRS\,2w. According to the luminosity of IRS\,2w ($\sim 7 \times 10^{4} L_{\sun}$), C$^{18}$O core M may have an embedded massive (proto)star. \subsubsection{IRAS 19442+2427 (GL\,2454; S87)} In Figure 5, we identified 38 C$^{18}$O cores, three MCSs in three clumps, clumps A, B, and C identified by Saito et al. (2007). In addition, we detected the free-free emission from both the core and the extended structure of H\,{\footnotesize II} region S87 (e.g., Barsony 1989). Two velocity components, 21 km s$^{-1}$ and 24 km s$^{-1}$ components, exist in this region, and it has been suggested that the cluster, GL\,2454, was formed by the cloud--cloud collision of these components (Saito et al. 2007). C$^{18}$O cores, cores R, U, and AC, around the core structure of H\,{\footnotesize II} region S87 have a large mass ($\sim 10$ $M_{\sun}$) and a large line width ($\sim 1.5$ km s$^{-1}$). C$^{18}$O core AH is associated with MCS B, which has a strong intensity peak and a large spectral index, although this core is not associated with bright 2MASS sources. This indicates that the core could harbor an embedded massive YSO(s). In addition, the position of the intensity peak of this core in 20.7 km s$^{-1}$ component map has changed from that in 21.4 km s$^{-1}$ component map in Figure 5(d), and MCS B is located between the intensity peak in each component map. This indicates that core AH has a velocity gradient and could have a disklike system embedded in it. We suggest that this embedded massive YSO(s) in MCS B (see $\S$ 3.2) would be a massive protostar. \subsubsection{IRAS 19446+2505 (GL\,2455; S88B)} In Figure 6, we identified 41 C$^{18}$O cores, three MCSs in two clumps, clumps B and C identified by Saito et al. (2007). In addition, we detected millimeter continuum emission, which corresponds to two compact H\,{\footnotesize II} regions S88B1 and S88B2 (e.g., G\'{o}mez et al. 1995). The distribution of the C$^{18}$O emission is consistent with that of the dark region in the $K_{\rm s}$ band. C$^{18}$O cores, cores Q and W, around the compact H\,{\footnotesize II} region S88B1 have a large mass ($\sim 20 M_{\sun}$) and a large line width ($\sim 2.0$ km s$^{-1}$). Three MCSs exist at the boundary between the dense molecular gas and the optical H\,{\footnotesize II} region S88B. This indicates that the surface of the dense molecular gas would be ionized by the UV radiation from the massive stars in GL\,2455. \section{Discussion} \subsection{Physical Conditions of C$^{18}$O Cores} Saito et al. (2007) identified 39 clumps with a size scale of $\sim 0.3$ pc in cluster-forming regions including massive star formation, and discussed their physical conditions by analyzing the correlations between the physical quantities of the clumps. They found that the average H$_2$ density of the clumps increases with the line width and the clumps are self-gravitationally bound because they have a virial mass comparable to a gas mass. 
Here, we extend their analysis to the inner structure of the clumps (i.e., dense cores) and attempt to clarify the physical condition of the dense cores that are the direct site of each star formation in the cluster-forming region. First, we analyzed the relationships among the physical parameters of the cores. We used 171 C$^{18}$O cores identified in this study and 28 C$^{18}$O cores identified by Saito et al. (2006). The cores identified by Saito et al. (2006) were classified into three massive-star-forming cores, seven star-forming cores, and 18 starless cores using the classification adopted in this study. These cores exist in 18 dense clumps identified by Saito et al. (2007). Next we discuss the physical conditions of the dense cores inside the clumps in cluster-forming regions including massive star formation. \subsubsection{Line Width -- Radius Relationship} The line width of molecular line emissions provides us with internal kinetic properties such as thermal motion and turbulence. Several studies have suggested that a correlation exists between the line width and the radius of the core/cloud in the form of a single power-law relationship $\Delta V \sim R^{\alpha}$; $\Delta V$ and $R$ are a characteristic line width and the radius of the core/cloud (e.g., Larson 1981). In addition, the index $\alpha$ has recently been shown to depend on the environment (e.g., Caselli \& Myers 1995; Saito et al. 2006). The left panel of Figure 7 shows the relationship between the line width and the radius of the 181 cores with high reliability (see $\S$ 3.1.2) identified by this work and Saito et al. (2006) and 18 clumps identified by Saito et al. (2007). Note the other 18 cores are those with a large error in the radius in $\S$ 3.1.2. Although the line width of the clumps is in a narrow range of 1.5 to 3.2 km s$^{-1}$, the line width of the cores is in a much wider range of 0.43 to 3.33 km s$^{-1}$. In contrast, the range of the radius of the cores is relatively small. Such features are seen even in the individual regions in the right panels of Figure 7. We investigated the index $\alpha$ of the relationship between the line width and the radius of the core and the parental clumps. We estimated this relationship, $\Delta V_{\rm core}$ = $\Delta V_{\rm clump} \times$ ($R_{\rm core}/R_{\rm clump}$)$^{\alpha}$, for all cores. The index $\alpha$ of this relationship ranges from 0.06 to 0.89 and the long-- and the short--dashed lines in Figure 7 indicate the line width--radius relationships with the minimum and maximum index $\alpha$, respectively, using a mean line width and a mean radius of the clumps in the plots. Our results show that the relationship between the line width and the radius does not follow a single power-law relation. This trend agrees with the result of Saito et al. (2006), who used a smaller number of cores and clumps. This indicates that the degree of the dissipation of the turbulent motion varies even within a single clump. As discussed below in $\S$ 4.1.3, this may be due to the difference in the initial H$_2$ density or the external pressure of the core. We postulate that the difference in the dissipation of the turbulent motion depends on the region where the core is formed in the clump, considering that the clump generally has a density structure. \subsubsection{Stability of Cores in the Dense Clump} Generally, a core has to be gravitationally bound to result in star formation. 
Thus, a core with a large line width needs to have a large mass or a high external pressure for star formation to occur. Here, we check the virial condition of the cores to investigate the stability of the system. Figure 8 shows the correlation between the LTE mass, $M_{\rm LTE}$, and the virial mass, $M_{\rm vir}$, for the 181 cores. Most of the cores have a larger virial mass than the LTE mass, and no clear difference is seen between star-forming cores and starless cores. The cores have a virial ratio (the ratio of the virial mass to the LTE mass) of $\sim 2$, although several massive-star-forming cores have somewhat larger virial ratios of $\sim 4$. Generally, more extended structures with star formation (such as clumps) have a virial mass similar to the LTE mass (Fontani et al. 2002; Saito et al. 2007). The main reason for the difference in the virial ratio between the clumps and the cores is the elapsed time after the structures were formed. The virial mass of a clump just after formation would also be larger than the LTE mass, and the clump would be gravitationally bound by the external pressure. The virial mass of the clump will later become similar to the gas mass because the density structure and the internal kinetic motion of the clump change, and cluster formation is expected to occur in the clump. Therefore, almost all of the clumps would be virialized as long as we observe clumps with cluster formation. However, many cores in the clump would have formed only recently. Thus, many ``young cores,'' which have a larger virial mass than the gas mass, could be detected by the present observations. In addition, the virial mass of the cores, except for massive-star-forming cores, is approximately fitted by a single power-law function: \begin{equation} \log(M_{\rm vir}) = (0.17 \pm 0.03) + (1.11 \pm 0.04) \log(M_{\rm LTE}), \, C.C. = 0.87, \end{equation} where C.C. is the correlation coefficient. The index of this relationship is close to unity, indicating that, in the virial equation (see below), the difference between the kinetic and gravitational energy terms is almost proportional to the gravitational energy term. For this difference to be balanced by the external pressure, and hence for more massive cores to achieve an equilibrium state, the external pressure on the surface of the cores must be larger. Most of the cores require external pressure to maintain their structures. Here, we estimate the external pressure required to bind the present cores gravitationally. The virial equation is expressed by neglecting the magnetic fields as follows: \begin{equation} F = 2U + \Omega - 4 \pi R_{\rm core}^{3} P_{\rm ex}; (U = \frac{1}{2} M \sigma^{2}, \Omega = -\frac{3}{5} G \frac{M^{2}}{R_{\rm core}}), \end{equation} where $U$ and $\Omega$ are the kinetic and gravitational energy of the core, respectively, $P_{\rm ex}$ is the external pressure on the core surface, and $G$ and $\sigma$ are the gravitational constant and the velocity dispersion, $\sigma = \Delta V_{\rm com}/(2 \sqrt{(2 \ln 2)/3})$, respectively. The external pressure required to bind the core gravitationally, $P_{\rm R}/k$, can be estimated using these equations with $F=0$. The required pressure of the cores with a virial ratio of $> 1.0$ ranges from $1.3 \times 10^{5}$ to $4.3 \times 10^{8}$ K cm$^{-3}$. Next, we investigated the external pressure on the core surface.
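Before doing so, we note for reference that the required pressure defined above (equation (7) with $F=0$) can be evaluated as in the following hedged sketch; the core parameters are illustrative placeholders rather than entries of Table 5:
\begin{verbatim}
import numpy as np

# Hedged sketch of the required external pressure, Eq. (7) with F = 0,
# for an isothermal, uniform, spherical core (cgs units).

G, k_B    = 6.674e-8, 1.381e-16   # gravitational and Boltzmann constants
M_sun, pc = 1.989e33, 3.086e18    # solar mass [g], parsec [cm]

def required_pressure(M_in_Msun, R_pc, dV_kms):
    """P_R / k in K cm^-3."""
    M, R  = M_in_Msun * M_sun, R_pc * pc
    sigma = dV_kms * 1e5 / (2.0 * np.sqrt(2.0 * np.log(2.0) / 3.0))
    U     = 0.5 * M * sigma**2        # kinetic energy
    Omega = -0.6 * G * M**2 / R       # gravitational energy
    return (2.0 * U + Omega) / (4.0 * np.pi * R**3) / k_B

# Example with assumed numbers: a 10 M_sun core, R = 0.03 pc, dV = 1.5 km/s
print(required_pressure(10.0, 0.03, 1.5))   # ~5e7 K cm^-3
\end{verbatim}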
Since the external pressure is difficult to estimate, we used the average pressure in a clump (average clump pressure), $\overline{P_{\rm CL}}/k$, instead, although these values generally differ from each other. The average clump pressure was estimated using both the line width, $\Delta V_{\rm clump}$, and the average H$_2$ density, $\overline{{n({\rm H_2})}_{\rm clump}}$, of the clump. We estimated the average clump pressure using the parameters of the clumps identified by Saito et al. (2007) in Table 7 and found that it ranges from $1.1 \times 10^{7}$ to $1.9 \times 10^{8}$ K cm$^{-3}$. Figure 9 shows the required pressure of the cores plotted against the projected distance from the center of the clump to the core (core distance), $R_{\rm cen}$. Here, the required pressure and the core distance are normalized to the average clump pressure and the radius of the clump, $R_{\rm CL}$, respectively. We found that the required pressure for most of the cores, except for massive-star-forming cores, is smaller than the average clump pressure. Therefore, our findings suggest that most of the cores are likely to be gravitationally bound by the external pressure. Finally, we investigated how the required pressure of the cores changes with the position in a clump. In Figure 9, the cores with a large $P_{\rm R}$/$\overline{P_{\rm CL}}$ are concentrated toward the center of the clump. Therefore, the nearer the cores are to the center of the clump, the higher the external pressure on the core surface is expected to be. When the H$_2$ density structure of the clump follows $n$(H$_2$)$_{\rm clump} \propto \,R_{\rm cen}\,^{-\gamma}$, the pressure in the clump (clump pressure), $P_{\rm CL}$, is estimated as a function of $R_{\rm cen}$ by $P_{\rm CL}/\overline{P_{\rm CL}} = (1 - \gamma/3)\,(R_{\rm cen}/R_{\rm CL})^{-\gamma}$. Here, we assumed that the turbulent velocity is constant within the clump. For comparison, the distributions of the clump pressure for $\gamma$ = 1.0, 1.5, and 2.0 are indicated in Figure 9 by the dotted, solid, and dashed lines, respectively. These lines are similar to the upper limit of the $P_{\rm R}/\overline{P_{\rm CL}}$--$R_{\rm cen}/R_{\rm CL}$ plot, which indicates that almost all cores would be gravitationally bound by the external pressure if the H$_2$ density structure in the clump follows $n$(H$_2$)$_{\rm clump} \propto \,R_{\rm cen}\,^{-\gamma}$. In addition, these results suggest that the external pressure on the core surface would depend on the H$_2$ density structure in the clump. \subsubsection{Mass -- Line Width Relationship} Here, we investigated the relationship between the LTE mass and the line width of the core. Figure 10 shows that the LTE mass of the cores increases in proportion to the square of the line width, although the scatter of the plot is large. This tendency agrees with the result of $\S$ 4.1.2: from equations (4) and (6), we obtain the relationship $M_{\rm LTE} \propto (\Delta V_{\rm core})^2$ considering that the radius of the cores is almost constant. From this tendency, dense gas with a large kinetic motion in a clump seems to be necessary to form massive cores because the LTE mass of the cores increases with the line width. Figure 10 also indicates that no clear difference exists between star-forming and starless cores, although several massive-star-forming cores (black circles) have somewhat larger line widths than the other cores with a similar mass.
For example, the average line widths of massive-star-forming cores and the starless cores (massive starless cores) with a mass $> 16$ $M_{\sun}$ are calculated as 2.8 km s$^{-1}$ and 2.0 km s$^{-1}$, respectively. The main cause of the difference in the line width is supposed to be the effect of star formation. If massive stars in massive-star-forming cores have formed from massive starless cores with a similar mass of $16$ $M_{\sun}$, the increase in momentum due to star formation is roughly estimated to be $\sim 13$ $M_{\sun}$ km s$^{-1}$. The estimated increase in momentum is much smaller than the outflow momentum supplied by massive protostar(s). For example, the total outflow momentum supplied by massive protostar(s) with a luminosity of $\sim 10^4$ $L_{\sun}$ is estimated to be $\sim 100$ $M_{\sun}$ km s$^{-1}$ (Zhang et al. 2005). This indicates that at most only a tenth the total outflow momentum is injected to the present core, and almost all of the outflow momentum is likely to be supplied into the intercore gas or outside the clump. In other words, massive-star-forming cores should have a large line width even prior to star formation. Therefore, we suggest that although star formation activity tends to enhance the line width of the cores, massive starless cores should form from a dense gas with a large kinetic motion. Finally, considering that the range of the radius of the cores is small, the relationship between the mass and the line width of the cores suggests that the average H$_2$ density of the cores would increase with the line width of the cores. According to this result and the discussion on the external pressure in $\S$ 4.1.2, a small dissipation of the turbulent motion is sufficient for a core formed in the center of the clump with the H$_2$ density structure to be gravitationally bound because the initial condition of this core necessarily has a high H$_2$ density and a high external pressure. In this case, the index $\alpha$ of the line width--radius relationship from the clump to the core would become very small ($\sim 0.06$). However, a core formed at the edge of the clump must undergo a large dissipation of turbulent motion to be gravitationally bound, and the index $\alpha$ would become large ($\sim 0.9$). In other words, the physical parameters of the core would depend on the place where the core is formed in the clump. This relationship is discussed in detail in $\S$ 4.2.2. \subsection{Relationship among Young Stars, Dense Cores, and Dense Clumps} Here, we discuss the relationships between the mass of the star formed in the core and the physical parameters of the core and between the physical parameters and the positions of the cores in the clump. Generally, the mass of a star is expected to depend on the physical properties of the parental core in a clump. Thus, the characteristics of the stellar member of the cluster would depend on the characteristics of the cores in the clump. In addition, Saito et al. (2007) found that the maximum mass of the stars associated with the clump increases with the line width of the clump and that a good correlation exists between the stellar number density of the cluster and the average H$_2$ density of the clump. These results indicate the importance of revealing the relationships among the clump, the cores, and young (proto)star(s) to investigate both cluster formation and massive star formation. 
\subsubsection{Relationship between the Mass of Formed Stars and the Physical Properties of Cores} Typical massive-star-forming cores have a higher H$_2$ density, larger line width, and higher temperature than the low-mass star-forming cores (e.g., Churchwell et al. 1992; Olmi et al. 1993). Although many observations with such tendencies have been reported, the relationship between the mass of the star and the physical properties of the parental core has never been quantitatively investigated. The main reason is that the discovery of the cores forming stars with various masses in the same region has been very difficult. However, typical stellar clusters have many stars with various masses, and stars formed in star-forming cores described here are expected to have a wide range of masses. Therefore, we estimated the stellar masses of the stars using the same method as Saito et al. (2007). For stars whose luminosities have been estimated or stars with centimeter-continuum sources, we estimated their spectral types using these parameters. We estimated the spectral types of other stars from the $H-K_{\rm S}$/$K_{\rm S}$ color--magnitude diagrams using the 2MASS catalog. Figure 11 shows the relationships between the mass of (proto)stars associated with cores, $M_{\ast}$, and the physical parameters, LTE mass and line width, of the cores. A tendency exists for the mass of stars to increase with these parameters. These results indicate that the mass of the star would depend on the mass (or the H$_2$ density) and internal kinetic motion of the parental core. Generally, high average H$_2$ density and large internal kinetic motion would be expected to lead to a high mass accretion rate (e.g., McKee \& Tan 2003). Although the plot of the relationship between the mass of the stars and the mass of the cores has a large scattering, the plot can roughly be expressed by the relationship of $M_{\ast}$ = $M_{\rm core}$, which is indicated by the solid line. From this relationship, the star-formation efficiency (SFE) is roughly constant at $\sim 50$\% using the equation of SFE = $M_{\ast}$/($M_{\ast}$ + $M_{\rm core}$). Note that the mass of the core is expected to decrease with the age of the protostar(s). Based on the above relationships and the mass--line width relationship in $\S$ 4.1.3, the magnitude of internal kinetic energy of the core would determine the mass of the formed stars. In addition, the core must have existed in the clump at least during the lifetime of the clump for star formation to occur therein. It is therefore necessary for the core to be gravitationally bound. From these results, we suggest that the necessary and sufficient condition to form a higher mass star is that the core has a larger internal kinetic motion and is gravitationally bound. In this case, it is not necessary for the core to be self-gravitationally bound if high external pressure is brought to bear on the core surface. Because the internal kinetic motion basically decreases with time, a structure such as a molecular cloud much larger than a core or a clump must previously have had a large internal kinetic motion and must be gravitationally bound for massive star formation to occur. Thus, GMCs and clouds with an external effect, for example, H\,{\footnotesize II} regions and supernova remnants, would be good environments for massive star formation because such clouds would have a large external pressure and be supplied with large kinetic energy. 
\subsubsection{Relationship between the Cores and the Clumps} Here, we discuss the spatial distribution of the physical parameters of the cores in the clump. In $\S$ 4.1, we found that dense gas with a large kinetic motion in a clump seems to be necessary to form massive cores, and the external pressure on the surface of cores must be large for massive cores to achieve an equilibrium state. In addition, cores with large external pressure are concentrated in the center of the clump. Thus, the physical properties of the core are expected to depend on the physical condition of the region where the core is formed in the clump. First, we examined the relationship between the physical parameters, line width and LTE mass, of the cores and the core distance (see $\S$ 4.1.2). Figure 12(a) shows the masses of the cores plotted against the core distance. Although no clear tendency is seen in this plot, the upper limit of the core mass would gradually decrease with the core distance. If the average H$_2$ density of the cores is proportional to the H$_2$ density in the clump having a density structure of $\sim \,R_{\rm cen}\,^{-\gamma}$, at the region where the core was formed, the core mass can be expressed by the relationship \begin{equation} M_{\rm core} = \frac{4\,\pi}{3}\,R_{\rm core}\,^{3}\,\mu'\,m_{\rm H}\, n({\rm H_{2}})_{\rm core} = \frac{4\,\pi}{3}\,R_{\rm core}\,^{3}\,\mu'\,m_{\rm H}\, C\,n_0\,(\frac{R_{\rm cen}}{R_{\rm CL}})^{-\gamma}, \end{equation} where $\mu'$ and $m_{\rm H}$ are the mean molecular weight per one H$_2$ molecule and the hydrogen mass, respectively, and $n_0$ and C are the H$_2$ density in the clump at the distance of $R_{\rm CL}$ from the center of the clump and the proportional coefficient, respectively. Generally, the H$_2$ density structure of a clump is suggested to have a relationship with ($R_{\rm cen}$/$R_{\rm CL})^{-\gamma}$ ($\gamma \sim 1.0$--2.0; e.g., van der Tak et al. 2000). Here, if a clump has a mass of 300 $M_{\sun}$ and a radius of 0.3 pc and $R_{\rm core}$ and C are 0.03 pc and 50, which are typical respective values for the present sample, the mass--distance relationships with $\gamma$ = 1.0 and 2.0 are indicated by the solid and dashed lines in Figure 12(a), respectively. As shown in Figure 12(a), the distribution of the upper limit of the core mass is roughly similar to the relationships expressed by the two lines. From this result, the mass (or the average H$_2$ density) of the core is suggested to depend on the H$_2$ density of the region where the core is formed in the clump, and the distribution of the core mass is suggested to be controlled by the H$_2$ density structure in the clump. Figure 12(b) shows the ratio of the line width for the core and the clump plotted against the core distance. The starless cores with a high line width ratio ($> 0.7$) are concentrated in the center of the clump, and the upper limit of this ratio would gradually decrease with the core distance. The reason for this feature would be the change in both the core mass and the external pressure on the core surface. Here, if we assume the cores are gravitationally bound by the clump pressure, we can estimate the maximum line width of the cores using the virial equation (see $\S$ 4.1.2). If the clump with the physical parameters adopted in the above discussion is gravitationally bound, a line width of the clump is 2.2 km s$^{-1}$. 
In this case, the ratio of the maximum line width of the cores to the line width of the clump with $\gamma$ = 1.0 and 2.0 is indicated by the solid and dashed lines, respectively, in Figure 12(b). As shown in Figure 12(b), the distribution of the upper limit of the line width ratio is similar to the relationships expressed by the two lines. As we have seen in $\S$ 4.1.2 and the above-mentioned mass--distance relationship, the external pressure on the core surface and the core mass are likely to depend on the core distance due to the H$_2$ density structure in the clump. Therefore, to bind gravitationally, the internal kinetic motion of the core would also necessarily depend on the core distance. From the discussions in this subsection, we suggest that the central region of the clump, which has a density structure of $\sim \,R_{\rm cen}\,^{-\gamma}$, is the environment in which the core with a large internal kinetic motion and a high average H$_2$ density is easily formed and that the internal kinetic motion and the average H$_2$ density of the core would depend on the H$_2$ density structure in the clump. In addition, according to the result in $\S$ 4.2.1, the mass of a star formed in the core would decrease with the distance from the center of the clump. Therefore, the physical characteristics of the cluster would depend on the H$_2$ density structure, internal kinetic motion, and the mechanism of core formation in the clump. \subsection{Mechanism of Cluster Formation} Finally, we discuss the relationship between cluster formation and the dense gas structures, cores and clumps. As we have seen in $\S$ 4.1 and $\S$ 4.2, many cores with a size of $\sim 0.03$ pc that are formed in a clump with a size of $\sim 0.3$ pc have a wide range of the line width and the mass, and the spatial distribution of the physical parameters of the cores would depend on the H$_2$ density structure in the clump. In addition, the mass of the star would depend mainly on the internal kinetic motion of the parental core. The stellar number density of the cluster is mentioned as an important characteristic of a cluster. The stellar number density of the cluster is generally much higher than that of typical low-mass star-forming regions. In this regard, Saito et al.(2007) found a good relationship between the number density of YSOs associated with the clump and the average H$_2$ density of the clump, and they suggested that it is necessary for a clump to have a high average H$_2$ density for cluster formation to occur therein. This suggests that a relationship may exist between the number density of the cores in the clump and the average H$_2$ density of the clump. First, we estimated the number density of the cores to investigate this relationship. Figure 13(a) shows the relationship between the number density of the cores in the clump, $n_{\rm core}$, and the average H$_2$ density of the clump, $\overline{{\rm n(H_2)}_{\rm clump}}$. A good correlation exists between the number density of the cores and the average H$_2$ density of the clump. This relationship is roughly expressed by the equation $n_{\rm core} \sim$ $\overline{{\rm n(H_2)}_{\rm clump}}$\,$^{1.9}$ and the index of this relationship is similar to the index ($\sim 2.0$) of the relationship between the number density of the YSOs and the average H$_2$ density of the clump obtained by Saito et al. (2007). This indicates that star formation in the clump is based on core formation in the clump. 
Although these relations are not expressed by the mechanism of core formation using simple gravitational instability as proposed by Saito et al. (2007), we suggest that for core formation with a high number density of the cores, it is necessary for the clump to have a high average H$_2$ density as an initial condition. Next, we examine the relationship between the number of the cores and the YSOs to reveal the timescale of the cores. Figure 13(b) shows the number of YSOs, $N_{\ast}$, associated with the clump, which were identified by Saito et al. (2007), plotted against the number of cores in the clump, $N_{\rm core}$. The numbers of the YSOs are one to three times larger than those of the cores. Considering that almost all of the YSOs would be low-mass stars, the lifetime of the YSO would be $\sim 1 \times 10^7$ yr (e.g., Lada 1999). In this case, the lifetime of the core is roughly estimated to be $\sim 3 \times 10^{6}$ yr if only one star is formed per core. In addition, we find 27 star-forming cores in this study and 10 in Saito et al. (2006), and the number corresponds to $\sim 19$\% of all cores. This result indicates that the lifetime of the star-forming cores, which can be detected by our interferometric observation, is $\sim 6 \times 10^5$ yr. This timescale is consistent with the fact that the typical mass of the cores associated with class II stars, whose lifetime is $\sim 10^{6}$ yr (e.g., Lada 1999), was roughly estimated to be $\ll 1 M_{\sun}$ in low-mass star forming regions (e.g., Tsukagoshi et al. 2007) because the detection limit of the mass of the cores is $\sim 1 M_{\sun}$. Therefore, most of the YSOs in the clump are not associated with the cores with a mass of $> 1 M_{\sun}$ and the YSOs associated with these cores are regarded as young protostars. Finally, we look at the core-formation efficiency (CFE) of the clump. Figure 13(c) shows CFEs ($\Sigma M_{\rm core}$/$M_{\rm clump}$, where $M_{\rm core}$ and $M_{\rm clump}$ are the mass of the core in the clump and the mass of the clump, respectively) plotted against the number density of cores in the clump. Although no strong correlation is seen between the CFEs and the number density of cores, the CFE gradually increases with the number density of cores in the clump. Considering the relationship between the number density of cores and the average H$_2$ density of the clump in Figure 13(a), this feature indicates that the dense gas in a clump with a high average H$_2$ density is efficiently converted into many cores. From the results that the CFE is $\sim 3$--60\% and that the lifetime of the core is $\sim 3 \times 10^{6}$ yr in the above discussion, the conversion rate from the dense gas to cores per $\sim 1 \times 10^6$ yr (core-formation rate : CFR) in the clump is estimated to be $\sim 1$ -- 20\%/$10^6$ yr. Thus, in a clump with a low average H$_2$ density, core formation with a low CFR ($\sim$ a few \%/$10^6$ yr) could have occurred over a long time of $> 10^7$ yr if the CFR is always constant. However, only a part of the dense gas in the clump would be converted to cores because this timescale is longer than the typical lifetime of the GMC ($\sim 10^{7}$ yr; e.g., Blitz et al. 2007). On the contrary, in a clump with a high average H$_2$ density, core formation with a high CFR ($\ge 10$\%/$10^6$ yr) could have occurred over a short time of $\le 10^7$ yr. This indicates that most of the dense gas in the clump would be converted to cores because this timescale is shorter than the typical lifetime of the GMC. 
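For clarity, the timescale arithmetic above can be collected into a single rough chain of estimates (order-of-magnitude only, under the same assumptions of one star per core and a constant CFR):
\[
\tau_{\rm core} \approx \tau_{\rm YSO}\,\frac{N_{\rm core}}{N_{\ast}} \approx \frac{10^{7}\ {\rm yr}}{1\mbox{--}3} \sim 3\times10^{6}\ {\rm yr},\qquad
\tau_{\rm SF} \approx 0.19\,\tau_{\rm core} \sim 6\times10^{5}\ {\rm yr},\qquad
{\rm CFR} \approx \frac{\rm CFE}{\tau_{\rm core}} \approx \frac{3\mbox{--}60\%}{3\times10^{6}\ {\rm yr}} \sim 1\mbox{--}20\%\,/\,10^{6}\ {\rm yr}.
\]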
From the above considerations and the discussions in $\S$ 4.1 and 4.2, we determined that the mechanism of cluster formation would strongly depend on the mechanism of core formation in a clump and that the mechanism of core formation would be controlled by both the H$_2$ density structure and the kinetic motion in the clump. Moreover, the characteristics of the cluster would be determined by the distribution of the physical parameters of the cores in the clump. If a clump with a very high average H$_2$ density ($\sim 10^6$ cm$^{-3}$), which has a size scale of $\sim 0.3$ pc, is formed in a GMC, this clump would be expected to form a cluster with a very high stellar number density very efficiently in a short time, based on the discussion of the CFR. In addition, considering that the virial mass of the clump is similar to the LTE mass (Saito et al. 2007), the line width of the clump is estimated to be $\sim 10$ km s$^{-1}$ using the average H$_2$ density and the radius. According to the line width--radius relationship in $\S$ 4.1.1 and the stellar mass--line width relationship in $\S$ 4.2.1, the cluster formed in the clump would include very massive (e.g., O3) stars. Thus, the clumps with a very high average H$_2$ density ($\sim 10^6$ cm$^{-3}$) would be expected to form a cluster with both a very high stellar number density and very massive stars. In addition, if many clumps are formed in a GMC, star-forming regions within the GMC will include many dense clusters with very massive stars and will be extended with a size of $\sim 100$ pc because the typical size of the GMC is $\sim 100$ pc. Indeed, very active star-forming regions in external galaxies, for example, the 30 Doradus region in the LMC, are extended with a size scale of $\ge 100$ pc and have many dense clusters with a size scale of $\sim 0.3$ pc that contain many O3 stars, for example, the R136a core in 30 Doradus. Therefore, we suggest that the mechanism of cluster formation in such active star-forming regions would be basically similar to that in our Galaxy, although the clumps in such active star-forming regions would have a much higher H$_2$ density and a much larger kinetic motion than those in our Galaxy. \section{CONCLUSION} We made high-angular resolution observations with the Nobeyama Millimeter Array (NMA) of six cluster-forming regions including massive (proto)stars, in the $J\!=\!1$--$0$ transition of the C$^{18}$O molecular emission line and in the 98 GHz and 110 GHz continuum emission. We identified 171 C$^{18}$O cores in 12 clumps toward the six cluster-forming regions including massive (proto)stars. Based on our observations, we studied the correlations between several physical quantities of the C$^{18}$O cores and clumps, and discussed the formation mechanism of star clusters. The main results of the present study are summarized as follows: 1. The ranges of the radius, line width, and molecular mass of the cores are 0.01--0.09 pc, 0.43--3.33 km s$^{-1}$, and 0.5--54.1 $M_{\sun}$, respectively. 2. We identified 13 millimeter continuum sources (MCSs). Seven are associated with 2MASS sources and/or centimeter continuum sources (CCSs). Together with the one MCS that shows characteristics similar to those of the above seven MCSs, these eight MCSs would include massive protostars. The range of the mass of the MCSs is 2.7--111 $M_{\sun}$. In addition, we compared the C$^{18}$O cores with MCSs, 2MASS sources, and CCSs. We identified 27 star-forming cores including eight massive-star-forming cores. 3. Cores in a single clump have various line widths.
The range of the index of the relationship between the line width and radius of the core and the parental clump differs from core to core. This implies that the degree of dissipation of the turbulent motion varies within a single clump. 4. For most cores, the virial masses are about twice as large as the LTE masses. This indicates that external pressure is necessary for the cores to be gravitationally bound. Based on the virial analysis, the distribution of the external pressure required to bind the cores is likely to be comparable to that of the pressure in the clump with an H$_2$ density structure of $\sim r^{-\gamma}$ ($\gamma \sim 1$--2). 5. The mass of the cores increases with the line width. This indicates that the average H$_2$ density of the cores would depend on the kinetic motion of the dense gas in the core because the range of the radius of the core is small. 6. The mass of the star depends on the line width and the mass of the parental core. In addition, we found that the star-formation efficiency (SFE) is roughly constant at $\sim 50$\% and that no clear dependence exists between the SFE and the mass of the formed stars. 7. The mass, line width, and external pressure of the core decrease with distance from the center of the clump, and a relationship exists between these decreases and the H$_2$ density structure of the clump. We suggest that the spatial distribution of the physical parameters of the cores strongly depends on the H$_2$ density structure in the clump. 8. The number density of the cores and the number density of the young (proto)stars in the clump have a similar relationship to the average H$_2$ density of the clump. In addition, the core-formation rate (CFR) of the clump is estimated to be $\sim 1$--20\% per $\sim 1 \times 10^{6}$ yr and the CFR gradually increases with the average H$_2$ density of the clump. \acknowledgments We thank the staff of the Nobeyama Radio Observatory (NRO) for both operating the Millimeter Array and helping us with data reduction. We also thank Kazuyoshi Sunada, Tomofumi Umemoto, and Takeshi Sakai, as well as the referee, for their valuable comments. This publication makes use of data products of 2MASS, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by NASA and the NSF. This study also made use of the SIMBAD database.
\section{Analytic treatment of the periodically modulated double well} \label{ap:analytic} In the following we provide an analytic description of the periodically modulated double well system. First, the static Hamiltonian and its properties are discussed. Afterwards, the driven system is treated with a high frequency expansion approach. In this context, we derive explicit expressions for the effective static Hamiltonian and the time dynamics of the system within one period for both an off-resonant and a near resonant modulation of the double wells. \subsection{Static double well} We consider two distinguishable Fermions with spin $\uparrow$ and $\downarrow$ on a double well. The starting point is a continuum Hamiltonian for Fermions with two spin states $\sigma$ and zero range interactions $V_{\mathrm{int}}(\mathbf{r})=4\pi a/m \delta(\mathbf{r})$, where $m$ is the mass of the atoms and $a$ the s-wave scattering length (we set $\hbar=1$). The tight-binding Hubbard Hamiltonian is obtained upon replacing the field operators with $\hat{\Psi}^{\dagger}_{\sigma}(\mathbf{r})=\sum_{l=\mathrm{L,R}}w_l(\mathbf{r})c^{\dagger}_{l\sigma}$. Here, $c^{\dagger}_{\mathrm{L}\sigma}$ ($c^{\dagger}_{\mathrm{R}\sigma}$) denote the fermionic creation operators for a particle with spin $\sigma$ on the left (right) side and $w_{\mathrm{L}}(\mathbf{r})$ (respectively $w_{\mathrm{R}}(\mathbf{r})$) are the (real) Wannier functions of the underlying extended lattice, which are determined as eigenstates of the band projected position operator \cite{Uehlinger2013}. We choose to work in the Fock basis \begin{equation} \begin{aligned} &\left|\uparrow\downarrow,0\right\rangle &=\ \ & c^{\dagger}_{\mathrm{L}\downarrow}c^{\dagger}_{\mathrm{L}\uparrow}\left|0\right\rangle \\ &\left|\uparrow,\downarrow\right\rangle &=\ \ & c^{\dagger}_{\mathrm{R}\downarrow}c^{\dagger}_{\mathrm{L}\uparrow}\left|0\right\rangle \\ &\left|\downarrow,\uparrow\right\rangle &=\ \ & c^{\dagger}_{\mathrm{R}\uparrow}c^{\dagger}_{\mathrm{L}\downarrow}\left|0\right\rangle \\ &\left|0,\uparrow\downarrow\right\rangle &=\ \ & c^{\dagger}_{\mathrm{R}\downarrow}c^{\dagger}_{\mathrm{R}\uparrow}\left|0\right\rangle \end{aligned} \label{FockBasis} \end{equation} in which the Hamiltonian takes the form \begin{equation} H_0=\begin{pmatrix} U & -t-\delta t & t+\delta t & V_{\mathrm{ct}}\\ -t-\delta t & V_{\mathrm{nn}} & -V_{\mathrm{de}} & -t-\delta t\\ t+\delta t & -V_{\mathrm{de}} & V_{\mathrm{nn}} & t+\delta t\\ V_{\mathrm{ct}} & -t-\delta t & t+\delta t & U \end{pmatrix} \end{equation} Here, the tunneling amplitude $t$ and the on site interaction $U$ are given by \begin{eqnarray} t&=&-\int d^3r\,w_{\mathrm{L}}(\mathbf{r})\left[-\frac{1}{2m}\nabla^2+V(\mathbf{r})\right]w_{\mathrm{R}}(\mathbf{r})\\ U &=& \frac{4\pi a}{m}\int d^3r\,\left|w_{\mathrm{L,R}}(\mathbf{r})\right|^4 \end{eqnarray} with the lattice potential $V(\mathbf{r})$ \cite{si}. Since for this work, there is no static site offset between the two sites, the Wannier functions are symmetric around the center of the wells $w_{\mathrm{L}}(\mathbf{r})=w_{\mathrm{R}}(\mathbf{-r})$ and the interactions are equal on both sides. The other terms appearing in the Hamiltonian are higher band corrections, namely the correlated tunneling $V_{\mathrm{ct}}$ describing the hopping of atom pairs, the nearest-neighbor interaction $V_{\mathrm{nn}}$ and the direct spin exchange $V_{\mathrm{de}}$ connected to spin flips between the two Fermions on adjacent sites. 
In the two-site problem all of these terms are equal and obtained as \begin{equation} V_{\mathrm{ct}} = V_{\mathrm{nn}}=V_{\mathrm{de}}=\frac{4\pi a}{m}\int d^3r\,\left|w_{\mathrm{L}}(\mathbf{r})\right|^2\left|w_{\mathrm{R}}(\mathbf{r})\right|^2 \label{V} \end{equation} Furthermore, there is a density assisted hopping term $\delta t$, which accounts for a correction to the tunneling amplitude when there is another particle of opposite spin present in the double well. It is given by \begin{equation} \delta t = -\frac{4\pi a}{m}\int d^3r\,\left|w_{\mathrm{L,R}}(\mathbf{r})\right|^2w_{\mathrm{L}}(\mathbf{r})w_{\mathrm{R}}(\mathbf{r}) \label{Deltat} \end{equation} All of these latter corrections are small for the static lattices used in this paper. In order to estimate their magnitude, we use the same method as for the calculation of $K_0$ and consider a cut through the lattice potential at $y=z=0$ \cite{si}. We then make the approximation that the lattice potential is separable in the x- and z-directions, determine the Wannier functions in this one-dimensional problem as eigenstates of the band-projected position operator \cite{Uehlinger2013} and calculate the corrections according to Eqs.\:(\ref{V}) and (\ref{Deltat}). We find that $V_{\mathrm{ct}}/U,V_{\mathrm{nn}}/U,V_{\mathrm{de}}/U\approx 10^{-3}$ and $\delta t/U\approx 10^{-2}$. Therefore, we can first restrict the discussion to the case where we only have a tunneling $t$ and an on site interaction $U$ and the Hamiltonian is given by \begin{equation} H_0=\begin{pmatrix} U & -t & t & 0\\ -t & 0 & 0 & -t\\ t & 0 & 0 & t\\ 0 & -t & t & U \end{pmatrix} \label{SimpleStatic} \end{equation} In this case, it is convenient to change to a new basis which consists of a singlet state $\left|\mathrm{s}\right\rangle$, a triplet state $\left|\mathrm{t}\right\rangle$ and two states containing double occupancies $\left|\mathrm{D}_{\pm}\right\rangle$ given by \begin{equation} \begin{aligned} &\left|\mathrm{t}\right\rangle &=\ \ & \frac{1}{\sqrt{2}}\left(\left|\uparrow,\downarrow\right\rangle + \left|\downarrow,\uparrow\right\rangle\right) \\ &\left|\mathrm{D}_+\right\rangle &=\ \ & \frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow,0\right\rangle + \left|0,\uparrow\downarrow\right\rangle\right)\\ &\left|\mathrm{D}_-\right\rangle &=\ \ & \frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow,0\right\rangle - \left|0,\uparrow\downarrow\right\rangle\right) \\ &\left|\mathrm{s}\right\rangle &=\ \ & \frac{1}{\sqrt{2}}\left(\left|\uparrow,\downarrow\right\rangle - \left|\downarrow,\uparrow\right\rangle\right) \end{aligned} \label{SingletBasis} \end{equation} In this new basis, the Hamiltonian takes the simple form \begin{equation} H^{'}_0=\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & U & 0 & -2t \\ 0 & 0 & U & 0 \\ 0 & -2t & 0 & 0 \end{pmatrix} \label{SimpleStaticBasis} \end{equation} We see that $\left|\mathrm{t}\right\rangle$ and $\left|\mathrm{D}_{-}\right\rangle$ are eigenstates with energies $0$ and $U$, respectively. The other two eigenstates are superpositions of $\left|\mathrm{s}\right\rangle$ and $\left|\mathrm{D}_{+}\right\rangle$ with eigenenergies \begin{equation} E_{1,4}=\frac{1}{2}\left(U\mp\sqrt{16 t^2+U^2}\right) \end{equation} The spectrum is shown in Fig. \ref{fig1} in the main text. For large repulsion, the singlet is the ground state, whereas for strong attractive interactions it is the $\left|\mathrm{D}_{+}\right\rangle$ state containing double occupancies. 
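As a quick numerical cross-check (not part of the original derivation; the parameter values below are purely illustrative), the simplified static Hamiltonian (\ref{SimpleStatic}) can be constructed in the Fock basis (\ref{FockBasis}), rotated to the basis (\ref{SingletBasis}), and diagonalized with a few lines of Python/NumPy:
\begin{verbatim}
import numpy as np

def h0_fock(t, U):
    # Simplified static Hamiltonian in the Fock basis
    # (|ud,0>, |u,d>, |d,u>, |0,ud>).
    return np.array([[U, -t,  t,  0.],
                     [-t, 0., 0., -t],
                     [ t, 0., 0.,  t],
                     [0., -t,  t,  U]])

# Columns: |t>, |D+>, |D->, |s> expressed in the Fock basis.
S = np.array([[0, 1,  1,  0],
              [1, 0,  0,  1],
              [1, 0,  0, -1],
              [0, 1, -1,  0]]) / np.sqrt(2.0)

t, U = 1.0, 3.0                    # illustrative values
H0 = h0_fock(t, U)
print(np.round(S.T @ H0 @ S, 12))  # block-diagonal form of Eq. (SimpleStaticBasis)
print(np.sort(np.linalg.eigvalsh(H0)))
print(0.5 * (U - np.sqrt(16*t**2 + U**2)), 0.0, U,
      0.5 * (U + np.sqrt(16*t**2 + U**2)))  # E_1, triplet, |D->, E_4
\end{verbatim}
The last two printed lines agree, confirming the eigenenergies quoted above.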
\subsection{Periodically modulated system} Now we add the periodic modulation $V(\tau)$ to the Hamiltonian $H_0$, such that the total Hamiltonian in the lab frame has the form \begin{equation} H_{\mathrm{lab}}(\tau)=H_0+V(\tau) \label{Hlab} \end{equation} The time-dependent part $V(\tau)$ is given by \begin{equation} V(\tau)=\Delta(\tau) h_{\Delta} \label{DriveOp} \end{equation} where $\Delta(\tau)=\omega K_0 \cos(\omega\tau)$ is the modulated site offset expressed by the dimensionless shaking amplitude $K_0$ and the driving frequency $\omega$. The operator for the site offset $h_{\Delta}$ is most intuitively expressed in the Fock basis (\ref{FockBasis}) \begin{equation} h_{\Delta}=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \end{equation} We will treat this Hamiltonian with the Floquet approach, where the time evolution operator from an initial time $\tau_i$ to a final time $\tau_f$ can be written as \cite{Goldman2014a, Bukov2015b} \begin{equation} U(\tau_f,\tau_i)=e^{-iK(\tau_f)}e^{-i(\tau_f-\tau_i)H_{\mathrm{eff}}}e^{iK(\tau_i)} \label{EvolOp} \end{equation} Here, the effective Hamiltonian $H_{\mathrm{eff}}$ is time independent and governs the long term dynamics of the system, while the time periodic kick operator $K(\tau)=K(\tau+T)$ describes the evolution within one period of the drive (the so called `micromotion'). Note that we choose to work with the non-stroboscopic approach where the effective Hamiltonian and kick operator do not depend on the starting phase of the modulation. The effective Hamiltonian and kick operator can be calculated perturbatively in a high frequency expansion according to \cite{Goldman2014a, Bukov2015b} \begin{equation} H_{\mathrm{eff}}=\sum_{n=0}^\infty H_{\mathrm{eff}}^{(n)},\ K(\tau)=\sum_{n=1}^\infty K^{(n)}(\tau) \label{HFE} \end{equation} where the operators are expanded in powers of the inverse frequency $H_{\mathrm{eff}}^{(n)}\propto \omega^{-n}$ and $K^{(n)}(\tau)\propto \omega^{-n}$. \subsubsection{Rotating frame} In the following we will consider the strong driving regime where the driving strength $K_0$ is of order $1$. 
Since in this case the amplitude of the modulation in (\ref{DriveOp}) becomes large in the high frequency limit, we go to a rotating frame via the unitary transformation \begin{equation} R_1(\tau)=\exp\left[-i\int{V(\tau)\mathrm{d}\tau}\right]=\exp\left[-i K_0 \sin(\omega \tau) h_{\Delta}\right] \label{R1} \end{equation} The Hamiltonian is transformed according to \begin{eqnarray} H_{\mathrm{rot}}(\tau)&=&R_1^{\dagger}(\tau)H_{\mathrm{lab}}(\tau)R_1(\tau)-i R_1^{\dagger}(\tau)\frac{\partial}{\partial\tau}R_1(\tau)\notag \\ &=&R_1^{\dagger}(\tau)H_0 R_1(\tau) \label{Hrot} \end{eqnarray} while all observables $\hat{O}$ and states $\left|\psi(\tau)\right\rangle$ in the rotating frame are given by \begin{eqnarray} \hat{O}_{\mathrm{rot}}(\tau) &=& R_1^{\dagger}(\tau)\hat{O}_{\mathrm{lab}} R_1(\tau) \\ \left|\psi_{\mathrm{rot}}(\tau)\right\rangle &=& R_1^{\dagger}(\tau)\left|\psi_{\mathrm{lab}}(\tau)\right\rangle \end{eqnarray} The time evolution operator (\ref{EvolOp}) in the rotating frame can be written as \begin{equation} U_{\mathrm{rot}}(\tau_f,\tau_i)=e^{-iK_{\mathrm{rot}}(\tau_f)}e^{-i(\tau_f-\tau_i)H_{\mathrm{eff}}}e^{iK_{\mathrm{rot}}(\tau_i)} \end{equation} Note that the exact effective Hamiltonian is the same in the lab and the rotating frame, while the relation between the kick operators is given by \begin{equation} e^{-iK(\tau)}=R_1(\tau)e^{-iK_{\mathrm{rot}}(\tau)} \label{MicroOp} \end{equation} For the time independent effective Hamiltonian $H_{\mathrm{eff}}$ we can then find eigenstates $\left|v\right\rangle$ and eigenvalues $\epsilon_v$ with \begin{equation} H_{\mathrm{eff}}\left|v\right\rangle=\epsilon_v \left|v\right\rangle\;\text{and}\;\left|v(\tau)\right\rangle=\exp(-i\epsilon_v \tau)\left|v\right\rangle \label{Defv} \end{equation} The eigenvalues $\epsilon_v$ are called quasi-energies in analogy to Bloch's theorem and are only defined up to multiples of $\omega$, such that they can be restricted to the first Floquet zone given by $-\omega/2<\epsilon<\omega/2$. In order to construct the eigenstates of the original time periodic Hamiltonian in the lab frame (\ref{Hlab}), we apply the micromotion operator (\ref{MicroOp}) \begin{equation} \left|v_{\mathrm{lab}}(\tau)\right\rangle=e^{-iK(\tau)}\left|v(\tau)\right\rangle=e^{-i\epsilon_v \tau}R_1(\tau)e^{-iK_{\mathrm{rot}}(\tau)}\left|v\right\rangle \label{vLab} \end{equation} which can also be written as \begin{equation} \left|v_{\mathrm{lab}}(\tau)\right\rangle=e^{-i\epsilon_v \tau}\left|\phi(\tau)\right\rangle \end{equation} with a time periodic state $\left|\phi(\tau)\right\rangle=\left|\phi(\tau+T)\right\rangle$, which is analogous to the form of Bloch waves in a spatially periodic system. The time dependent expectation value of an observable $\hat{O}$ measured in such an instantaneous eigenstate is given by \begin{equation} \langle \hat{O}\rangle_v(\tau)=\left\langle v\right|e^{iK_{\mathrm{rot}}(\tau)}R^{\dagger}_1(\tau) \hat{O}_{\mathrm{lab}} R_1(\tau)e^{-iK_{\mathrm{rot}}(\tau)}\left|v\right\rangle \label{TimeDepObs} \end{equation} Therefore, the expectation value of an observable may oscillate at the same frequency as the drive. In general, there are two contributions to this micromotion. The first one originates from the kick operator in the rotating frame $K_{\mathrm{rot}}(\tau)$ whose amplitude scales as $1/\omega$ by construction (\ref{HFE}). Hence, this contribution vanishes at high frequency. 
Second, the transformation from the lab frame to the rotating frame $R_1(\tau)$ (\ref{R1}) also induces an oscillation which is present as long as the driving amplitude is non-zero, even for infinite frequency. All observables measured in the main section of the paper (singlet and triplet fractions, double occupancy) are the same in both the lab frame and the rotating frame, \emph{i.e.} $\hat{O}_{\mathrm{rot}}(\tau) = \hat{O}_{\mathrm{lab}}$. Hence, the micromotion shown in Fig. \ref{fig4} only originates from the kick operator $K_{\mathrm{rot}}(\tau)$, and would not be visible for larger driving frequencies. Note that the time average of the observable over one oscillation period $\overline{\langle \hat{O}\rangle(\tau)}=1/T\int_0^{T} \mathrm{d}\tau\:\langle \hat{O}\rangle(\tau)$ is not necessarily equal to the expectation value of the observable in an eigenstate of the time independent effective Hamiltonian $\left\langle v\right|\hat{O}_{\mathrm{lab}}\left|v\right\rangle$, since in general \begin{equation} \overline{e^{iK_{\mathrm{rot}}(\tau)}R^{\dagger}_1(\tau) \hat{O}_{\mathrm{lab}} R_1(\tau)e^{-iK_{\mathrm{rot}}(\tau)}}\neq \hat{O}_{\mathrm{lab}} \end{equation} We derive the effective Hamiltonian and kick operator in the high frequency expansion (\ref{HFE}) \begin{equation} H_{\mathrm{eff}}=\sum_{n=0}^\infty H_{\mathrm{eff,rot}}^{(n)},\ K_{\mathrm{rot}}(\tau)=\sum_{n=1}^\infty K_{\mathrm{rot}}^{(n)}(\tau) \label{HFErot} \end{equation} In our notation we make explicit that the individual summands of the expansion $H_{\mathrm{eff,rot}}^{(n)}$ are different from the ones in the lab frame (\ref{HFE}), even though the full effective Hamiltonian $H_{\mathrm{eff}}$ is identical in both frames. In the following, we consider two different cases: First, we discuss the off-resonant modulation, where the driving frequency is much larger than the static parameters of the system ($\omega\gg U,t$). In a second step, we will treat the resonant case where $\omega\approx U\gg t$. \subsubsection{Off-resonant shaking} If the driving frequency is much larger than the tunneling and the interaction, $\omega\gg U,t$, the Hamiltonian in the rotating frame (\ref{Hrot}) reads \begin{equation} H_{\mathrm{rot}}(\tau)=\begin{pmatrix} U & -t(\tau) & t(\tau) & 0\\ -t^*(\tau) & 0 & 0 & -t(\tau)\\ t^*(\tau) & 0 & 0 & t(\tau)\\ 0 & -t^*(\tau) & t^*(\tau) & U \end{pmatrix} \end{equation} where $(...)^*$ denotes complex conjugation. In the Hamiltonian, the time dependent site offset in the lab frame has been converted to a time dependent phase of the tunnelings \begin{equation} t(\tau)=t\exp\left[i K_0\sin(\omega\tau)\right] \end{equation} Performing the expansion (\ref{HFErot}), we find that the effective Hamiltonian is to lowest order given by \begin{equation} H_{\mathrm{eff,rot}}^{(0)}=\begin{pmatrix} U & -t \mathcal{J}_0(K_0) & t\mathcal{J}_0(K_0) & 0\\ -t\mathcal{J}_0(K_0) & 0 & 0 & -t\mathcal{J}_0(K_0)\\ t\mathcal{J}_0(K_0) & 0 & 0 & t\mathcal{J}_0(K_0)\\ 0 & -t\mathcal{J}_0(K_0) & t\mathcal{J}_0(K_0) & U \end{pmatrix} \label{LowOrderOffRes} \end{equation} which describes the renormalization of the static tunneling $t$ by the zeroth-order Bessel function $\mathcal{J}_0(K_0)$ (compare to the static Hamiltonian (\ref{SimpleStatic})). The spectrum and the eigenstates to lowest order can therefore be obtained from the ones of the static Hamiltonian $H_0$ by replacing $t\longrightarrow t\mathcal{J}_0(K_0)$.
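This lowest-order result can be made concrete with a short numerical sketch (Python/NumPy with SciPy for the Bessel functions; the values are illustrative and not taken from the experiment): the effective Hamiltonian (\ref{LowOrderOffRes}) is simply the static one with the replacement $t\longrightarrow t\mathcal{J}_0(K_0)$.
\begin{verbatim}
import numpy as np
from scipy.special import jv      # Bessel functions J_n(x)

def h_static(t, U):
    # Static double-well Hamiltonian (SimpleStatic) in the Fock basis.
    return np.array([[U, -t,  t,  0.],
                     [-t, 0., 0., -t],
                     [ t, 0., 0.,  t],
                     [0., -t,  t,  U]])

t, U, K0 = 548.0, 3000.0, 1.8     # illustrative values in units of h*Hz
t_eff = t * jv(0, K0)             # tunneling renormalized by J_0(K_0)
H_eff0 = h_static(t_eff, U)       # lowest-order effective Hamiltonian
print(t_eff, np.sort(np.linalg.eigvalsh(H_eff0)))
\end{verbatim}
For $K_0\approx 2.4$, the first zero of $\mathcal{J}_0$, the effective tunneling vanishes.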
The next order proportional to $1/\omega$ vanishes identically, $H_{\mathrm{eff}}^{(1)}=\mathbf{0}$, and the leading corrections are obtained from $H_{\mathrm{eff}}^{(2)}$. It contains several new terms which are not present in the static Hamiltonian. They are listed in Table\:\ref{CorrOffRes} up to terms containing Bessel functions $\mathcal{J}_n(K_0)$ with $n\leq 1$. As far as the spectrum is concerned, the corrections start to matter for the largest interaction that was used in the experiment; in particular, the energy of the lowest energy state crosses the triplet energy for $U/h\approx \pm 2250$ Hz (see discussion in Appendix B and Fig.\:\ref{fig:ThVsNum_sp}). However, the influence on the observables double occupancy and singlet fraction is not very pronounced, see Fig.\:\ref{fig:ThVsNum}. \begin{table}[htb] \begin{center} \begin{tabular}{ | l | c | c |} \hline Quantity & $1/\omega^2$ correction & Value (U/h=3000 Hz) \\ \hline $t$ & $-4t^3/\omega^2\mathcal{J}_0(K_0)\mathcal{J}^2_1(K_0)$ & $-h\times 1.2$ Hz \\ \hline $U$ & $-4t^2 U/\omega^2\mathcal{J}^2_1(K_0)$ & $-h\times 19.0$ Hz \\ \hline $V_{\mathrm{nn}}$,$V_{\mathrm{de}}$,$V_{\mathrm{ct}}$ & $4t^2 U/\omega^2\mathcal{J}^2_1(K_0)$ & $h\times 19.0$ Hz \\ \hline \end{tabular} \end{center} \caption{Summary of the leading corrections to the lowest order expansion of the effective Hamiltonian Eq.\:(\ref{LowOrderOffRes}) in the off-resonant case. Terms containing Bessel functions $\mathcal{J}_n(K_0)$ with $n>1$ were omitted. The last column gives the values of the correction in Hz for the largest interaction $U/h=3000\:\mathrm{Hz}$ that was used in the experiment (for $t/h = 548$ Hz, $K_0=1.8$).} \label{CorrOffRes} \end{table} The two lowest orders of the kick operator are given by \begin{equation} K_{\mathrm{rot}}^{(1)}(\tau) = 2i\frac{t}{\omega}\mathcal{J}_1(K_0)\cos(\omega\tau)\begin{pmatrix} 0 & 1 & -1 & 0\\ -1 & 0 & 0 & 1\\ 1 & 0 & 0 & -1\\ 0 & -1 & 1 & 0 \end{pmatrix} \label{Koffres1} \end{equation} and \begin{eqnarray} &K_{\mathrm{rot}}^{(2)}(\tau) = 2\frac{tU}{\omega^2}\mathcal{J}_1(K_0)\sin(\omega\tau)\begin{pmatrix} 0 & 1 & -1 & 0 \\ 1 & 0 & 0 & -1\\ -1 & 0 & 0 & 1\\ 0 & -1 & 1 & 0 \end{pmatrix}\notag\\ &+ 8\frac{t^2}{\omega^2}\mathcal{J}_0(K_0)\mathcal{J}_1(K_0)\sin(\omega\tau)\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \end{eqnarray} It becomes apparent that to leading order the micromotion amplitude is determined by the ratio $t\mathcal{J}_1(K_0)/\omega$, while for the next terms the ratio $U/\omega$ also becomes important. \subsubsection{Near-resonant shaking} Now we turn to the resonant case where $\omega\approx U\gg t$ in the strong driving regime. Here, not only the amplitude of the modulation but also the interaction term proportional to $U$ becomes large in the high frequency limit. Therefore, in addition to the transformation (\ref{R1}) we perform a second rotation according to \cite{Goldman2015} \begin{equation} R_2(\tau)=\exp\left[-i\omega\tau h_{U}\right] \label{R2} \end{equation} where the interaction operator $h_{U}$ is given by \begin{equation} h_{U}=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \end{equation} We then follow the discussion of the off-resonant case, replacing the operator $R_1(\tau)$ by the product $R(\tau)=R_2(\tau)R_1(\tau)$.
The Hamiltonian (\ref{Hlab}) in this rotating frame thus becomes \begin{equation} H_{\mathrm{rot}}(\tau)=\begin{pmatrix} U-\omega & -t_+(\tau) & t_+(\tau) & 0\\ -t_+^*(\tau) & 0 & 0 & -t_-(\tau)\\ t_+^*(\tau) & 0 & 0 & t_-(\tau)\\ 0 & -t_-^*(\tau) & t_-^*(\tau) & U-\omega \end{pmatrix} \end{equation} with \begin{equation} t_{\pm}(\tau)=t\exp\left[i(\pm\omega\tau+K_0\sin(\omega\tau))\right] \end{equation} Again, the oscillating site offset has been converted to a phase factor for the tunneling. In addition, the second transformation adds another phase whose sign depends on which of the two states containing a double occupancy is involved in the tunneling process. Furthermore, the interaction $U$ has been replaced by the detuning from the resonance $\delta=\omega-U$, such that we can again perform a high frequency expansion (\ref{HFErot}) in the two small parameters $t/\omega$ and $\delta/\omega$. For the resonant case, the effective Hamiltonian to lowest order is given by \begin{equation} H_{\mathrm{eff,rot}}^{(0)}=\begin{pmatrix} U-\omega & t \mathcal{J}_1(K_0) & -t\mathcal{J}_1(K_0) & 0\\ t\mathcal{J}_1(K_0) & 0 & 0 & -t\mathcal{J}_1(K_0)\\ -t\mathcal{J}_1(K_0) & 0 & 0 & t\mathcal{J}_1(K_0)\\ 0 & -t\mathcal{J}_1(K_0) & t\mathcal{J}_1(K_0) & U-\omega \end{pmatrix} \label{LowOrderRes} \end{equation} Unlike in the off-resonant case, the tunneling matrix elements scale with the first-order Bessel function, $t\mathcal{J}_1(K_0)$. This can be interpreted as density-assisted tunneling introduced by the resonant modulation, since the hopping process can only occur if a particle has a neighbor on the adjacent site with which it interacts. Also note the important sign change for the tunneling matrix elements compared to the static Hamiltonian (\ref{SimpleStatic}) and the off-resonant modulation (\ref{LowOrderOffRes}). As a consequence, the singlet now couples to the state which is adiabatically connected to the state $\left|\mathrm{D}_-\right\rangle$ in the static Hamiltonian rather than $\left|\mathrm{D}_+\right\rangle$. This becomes evident if we write the Hamiltonian (\ref{LowOrderRes}) in the basis (\ref{SingletBasis}), which yields \begin{equation} H_{\mathrm{eff,rot}}^{'(0)}=\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & U-\omega & 0 & 0 \\ 0 & 0 & U-\omega & 2t\:\mathcal{J}_1(K_0) \\ 0 & 0 & 2t\:\mathcal{J}_1(K_0) & 0 \end{pmatrix} \end{equation} Comparing this to the static Hamiltonian (\ref{SimpleStaticBasis}) shows that the singlet is coupled to the other double occupancy state $\left|\mathrm{D}_-\right\rangle$ around the resonance, which leads to the opening of a gap of size $4t\:\mathcal{J}_1(K_0)$ (see Fig.\:\ref{fig2}(d)). Finally, notice that the interaction has been replaced by $\delta=\omega-U$. Apart from the convergence criterion of the high frequency expansion which was mentioned before, this also has the physical consequence that the sign of the detuning determines whether the system effectively exhibits an attractive or repulsive interaction. The higher order corrections to $H_{\mathrm{eff}}$ up to terms of order $1/\omega^2$ are listed in Table\:\ref{CorrRes}, including terms containing Bessel functions $\mathcal{J}_n(K_0)$ with $n\leq 1$. Unlike in the off-resonant case, the first order proportional to $1/\omega$ does not vanish. In fact, it reproduces the Schrieffer-Wolff transformation for the case $K_0=0$ and $\omega=U$, which allows one to describe the Hubbard model by an effective Heisenberg spin model in the limit of large interactions $U\gg t$ \cite{Bukov2015b}.
The second order proportional to $1/\omega^2$ makes it apparent that the series is an expansion in the two small parameters $t/\omega$ and $\delta/\omega=(\omega-U)/\omega$, and that it breaks down if the detuning from the resonance is too large (see Fig. \ref{fig:ThVsNum_sp}). \begin{table}[htb] \begin{center} \begin{tabular}{ | l | c |} \hline Quantity & $1/\omega$ \\ \hline $t_{\pm}$ & - \\ \hline $U-\omega$ & $t^2/\omega [2\mathcal{J}^2_0(K_0)+\mathcal{J}^2_1(K_0)]$ \\ \hline $V_{\mathrm{nn}}$,$V_{\mathrm{de}}$ & $-t^2/\omega [2\mathcal{J}^2_0(K_0)+\mathcal{J}^2_1(K_0)]$ \\ \hline $V_{\mathrm{ct}}$ & $t^2/\omega [2\mathcal{J}^2_0(K_0)-\mathcal{J}^2_1(K_0)]$ \\ \hline \end{tabular} \begin{tabular}{ | l | c |} \hline Quantity & $1/\omega^2$ \\ \hline $t_{\pm}$ & $\pm t^3/\omega^2\,\mathcal{J}_1(K_0) [2\mathcal{J}^2_0(K_0)+\mathcal{J}^2_1(K_0)]$ \\ \hline $U-\omega$ & $t^2/(2\omega^2)(\omega-U) [4\mathcal{J}^2_0(K_0)+\mathcal{J}^2_1(K_0)]$ \\ \hline $V_{\mathrm{nn}}$,$V_{\mathrm{de}}$ & $-t^2/(2\omega^2)(\omega-U) [4\mathcal{J}^2_0(K_0)+\mathcal{J}^2_1(K_0)]$ \\ \hline $V_{\mathrm{ct}}$ & $t^2/(2\omega^2)(\omega-U) [4\mathcal{J}^2_0(K_0)-\mathcal{J}^2_1(K_0)]$ \\ \hline \end{tabular} \end{center} \caption{Summary of the leading corrections to the lowest order expansion of the effective Hamiltonian Eq.\:(\ref{LowOrderRes}) in the resonant case. Corrections containing Bessel functions $\mathcal{J}_n(K_0)$ with $n>1$ were omitted. The terms proportional to $1/\omega$ reproduce the Schrieffer-Wolff transformation for the case $K_0=0$ and $\omega=U$.} \label{CorrRes} \end{table} Finally, the first order in the expansion of the kick operator is given by \begin{equation} \begin{aligned} &K_{\mathrm{rot}}^{(1)}(\tau) = i\frac{t}{\omega}\mathcal{J}_0(K_0) \begin{pmatrix} 0 & e^{i\omega\tau} & -e^{i\omega\tau} & 0\\ -e^{-i\omega\tau} & 0 & 0 & -e^{-i\omega\tau}\\ e^{-i\omega\tau} & 0 & 0 & e^{-i\omega\tau}\\ 0 & e^{i\omega\tau} & -e^{i\omega\tau} & 0 \end{pmatrix} \\ &+i\frac{t}{2\omega}\mathcal{J}_1(K_0) \begin{pmatrix} 0 & e^{2i\omega\tau} & -e^{2i\omega\tau} & 0\\ -e^{-2i\omega\tau} & 0 & 0 & e^{-2i\omega\tau}\\ e^{-2i\omega\tau} & 0 & 0 & -e^{-2i\omega\tau}\\ 0 & -e^{2i\omega\tau} & e^{2i\omega\tau} & 0 \end{pmatrix} \end{aligned} \end{equation} As in the off-resonant case (\ref{Koffres1}), the micromotion amplitude is to lowest order determined by the ratio $t\mathcal{J}_1(K_0)/\omega$. The first term in the kick operator proportional to $t\mathcal{J}_0(K_0)/\omega$ results from the rotation (\ref{R2}). It reproduces the Schrieffer-Wolff transformation matrix for $K_0=0$ and $\omega=U$ \begin{equation} \left.K_{\mathrm{rot}}^{(1)}(\tau)\right|_{K_0=0,\omega=U}=-i\frac{t}{U}\left(e^{iU\tau}h_+ - e^{-iU\tau}h_-\right) \end{equation} where the matrices $h_{\pm}$ are given by \begin{equation} h_+=\begin{pmatrix} 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 \end{pmatrix},\ h_-=\begin{pmatrix} 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & -1 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \end{equation} They describe hopping processes between the Mott bands where the double occupancy is increased or reduced by one, respectively. We have verified that the next order in the expansion of the kick operator contains terms which scale like $t^2/\omega^2$ and $t(\omega-U)/\omega^2$.
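To illustrate the structure of the near-resonant result, the following sketch (again Python/NumPy with illustrative parameters, not a reproduction of the experimental analysis) writes the lowest-order Hamiltonian (\ref{LowOrderRes}) in the basis (\ref{SingletBasis}) and checks that, on resonance, the singlet--$\left|\mathrm{D}_-\right\rangle$ sector exhibits an avoided crossing of size $4t\,\mathcal{J}_1(K_0)$.
\begin{verbatim}
import numpy as np
from scipy.special import jv

def h_eff_res(t, U, omega, K0):
    # Lowest-order near-resonant effective Hamiltonian, Eq. (LowOrderRes),
    # written in the basis (|t>, |D+>, |D->, |s>).
    tJ1 = t * jv(1, K0)
    delta = U - omega
    return np.array([[0., 0.,    0.,    0.],
                     [0., delta, 0.,    0.],
                     [0., 0.,    delta, 2*tJ1],
                     [0., 0.,    2*tJ1, 0.]])

t, K0, omega = 548.0, 1.8, 3000.0            # illustrative values (h*Hz)
H = h_eff_res(t, omega, omega, K0)           # exactly on resonance, U = omega
gap = np.ptp(np.linalg.eigvalsh(H[2:, 2:]))  # splitting of the coupled 2x2 block
print(gap, 4 * t * jv(1, K0))                # both give 4 t J_1(K0)
\end{verbatim}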
\renewcommand{\theequation}{B\arabic{equation}} \makeatletter \renewcommand{\thefigure}{B\@arabic\c@figure} \renewcommand{\thetable}{B\@arabic\c@table} \setcounter{figure}{0} \section{Numerical simulation} \label{ap:numerical} \begin{figure}[t]% \includegraphics{Fig_B1}% \caption{Quasi-energy spectrum, calculated analytically (dashed line) or numerically (full line). As the \ket{\mathrm{t}} state is unaffected by the modulation, it is omitted for clarity. In (a), the frequency $\omega/2\pi=8000$ Hz is much larger than the tunneling $t/h=548$ Hz, and both methods agree well ($K_0=1.94$). In (b) the frequency $\omega/2\pi=2000$ Hz is lower and $K_0=0.60$, $t/h=445$. The off-resonant analytic derivation (left) does not apply anymore, and must be replaced by the near-resonant prediction (right).}% \label{fig:ThVsNum_sp}% \end{figure} In addition to the analytic derivation of the effective Hamiltonian, we also performed a numerical simulation of the two-site Hubbard model. In this way, the off-resonant and near-resonant regimes presented above can be studied simultaneously, and the micromotion of all observables can be obtained. To perform this calculation, we used a Trotter decomposition to evaluate the evolution operator over one period $\hat{U}(T+\tau_0,\tau_0)$, which evolves a quantum state from an initial time $\tau_0$ to time $\tau_0+T$. The time-dependent Hamiltonian $\hat{H}(\tau)$ is approximated by $\hat{H}(\tau_j)$, which is piece-wise constant on $N$ consecutive time intervals $[\tau_j,\tau_{j+1}[$, with $\tau_j = j\,T/N+\tau_0$ and $0\leq j < N$. The evolution operator can then be written as (we set $\hbar = 1$) \begin{equation} \hat{U}(T+\tau_0,\tau_0) = e^{-i \hat{H}(\tau_{N-1})\,T/N}\times ... \times e^{-i \hat{H}(\tau_0)\,T/N} \end{equation} For the evaluation we chose typically $N=50$. The eigenvalues $\lambda_v$ of this operator $\hat{U}(T+\tau_0,\tau_0)$ are directly related to the quasi-energies $\epsilon_v$ by \begin{equation} \lambda_v = \exp(-i \epsilon_v T) \end{equation} see also Eqs.\:(\ref{Defv}) and (\ref{vLab}). However, the eigenvectors \ket{v(\tau_0)} are not uniquely defined, and depend on the starting phase $\tau_0$ of the periodic drive. To fully describe the driven system, it is necessary to obtain as well the evolution of the quantum state during the modulation cycle, which is given by $\ket{v(\tau_j)} = \hat{U}(\tau_j,\tau_0)\ket{v(\tau_0)}$. By construction, $\ket{v(\tau_N)} = \ket{v(T+\tau_0)} = \ket{v(\tau_0)}$. With this time-dependent state, we can then evaluate the instantaneous expectation value of any observable $\hat{O}$ in a Floquet state by \begin{equation} \langle \hat{O} \rangle (\tau_j) = \bra{v(\tau_j)}\hat{O}\ket{v(\tau_j)} \end{equation} in analogy to Eq.\:(\ref{TimeDepObs}). This contains the full information about the evolution of the observable, in particular the amplitude of the micromotion can be determined from the Fourier components at the modulation frequency and its multiples. \begin{figure}[t] \includegraphics{Fig_B2.pdf}% \caption{Off-resonant driving of the double well. We show the difference between the numerical and analytic predictions of \ensuremath{p_{\mathrm{DO}}}\xspace (red) and \ensuremath{p_{\mathrm s}}\xspace (blue) in the Floquet state originating from the static ground state (\ket{\tilde{\mathrm D}_+} for $U<0$, \ket{\tilde{\mathrm s}} for $U>0$), as a function of the interaction strength, for $t/h = 548$ Hz and $K_0 = 1.9$. 
The expansion up to first order in $1/\omega$ is shown as a full line, and to order $1/\omega^2$ as a dashed line. All calculations include an average over the micromotion, similar to the procedure used experimentally. For the whole range of interactions, the difference between the two predictions is smaller than 0.01, and begins to increase as $U$ approaches the resonance condition $U \approx\omega$.}% \label{fig:ThVsNum}% \end{figure} \subsection{Comparison to the analytic prediction} As the numerical prediction is not limited to a certain range of interactions, we can compute the exact spectrum and expectation values of the observables and compare them to the analytic derivation presented above. We first consider the off-resonant modulation for the experimental parameters used in Fig.\:\ref{fig1} of the main text. Here, the modulation frequency $\omega/2\pi = 8000$ Hz is much higher than the tunneling $t/h=548$ Hz, and the interaction $U/h$ is varied between $\pm 2800$ Hz. The quasi-energy spectrum obtained from the analytic derivation up to order $1/\omega^2$ (see Tables \ref{CorrOffRes} and \ref{CorrRes}) and the exact numerical eigenvalues are shown in Fig. \ref{fig:ThVsNum_sp}(a). Both methods agree quite well in the interaction range used in the experiment, while for $\left|U\right|/h>3000$ Hz, the analytic result starts to deviate from the exact result as the resonance at $U\approx \pm\omega$ is approached. Note that even for interactions as low as $U/h\approx 2250$ Hz, the singlet state becomes higher in energy than the triplet state, which is clearly beyond the scope of the lowest order effect that simply replaces $t\longrightarrow t\mathcal{J}_0(K_0)$. Nevertheless, this deviation has a negligible effect on the expectation value of our observables taken in the Floquet state originating from the static ground state, which is shown in Fig. \ref{fig:ThVsNum}. Even if the effective Hamiltonian is approximated by the lowest order (\ref{LowOrderOffRes}), the double occupancy and singlet fraction differ by less than 0.01 from the exact result. However, due to the inversion of the two lowest energy levels, it is possible that singlets are converted into triplets if there are symmetry-breaking terms present, such as a magnetic field gradient or a residual coupling to higher bands. When the modulation frequency is decreased to $\omega/2\pi = 2000$ Hz, the analytic derivation in the off-resonant case differs significantly from the numerical evaluation for $U/h>1000$ Hz (see Fig. \ref{fig:ThVsNum_sp} left). Instead, the near-resonant description must be used, which agrees well with the numerical evaluation in the vicinity of the resonance $U\approx\omega$ (see Fig. \ref{fig:ThVsNum_sp} right). \begin{figure}[t]% \includegraphics{Fig_B3.pdf}% \caption{The experimental data shown in Fig. \ref{fig3b} in the main text (diamonds) is compared to a numerical calculation of the system response, which includes an average over the micromotion (full lines), both for the double occupancies \ensuremath{\bar{p}_{\mathrm{DO}}}\xspace (left) and for the singlet fraction \ensuremath{\bar{p}_{\mathrm s}}\xspace (right). The red data corresponds to $\ensuremath{U_{\mathrm{load}}}\xspace/h=2800(80)$ Hz, and the blue data to $\ensuremath{U_{\mathrm{load}}}\xspace/h = 890(50)$ Hz. The simulated response is rescaled to range between $0.018$ and $0.66$ in \ensuremath{\bar{p}_{\mathrm{DO}}}\xspace, and between $0.011$ and $0.64$ in \ensuremath{\bar{p}_{\mathrm s}}\xspace, to account for the starting conditions of the experiment.
In panel (a), the raw experimental data is shown, and a balanced double well with $\Delta = 0$ is used for the calculation. In panel (b), the finite fidelity of the ramps is taken into account in the experimental data, and we show $p^{\ast} = \bar{p}-(\bar{p}^{(R)}-\bar{p}^{(R)}(\ensuremath{U_{\mathrm{load}}}\xspace))/2$ instead. Finally, in panel (c), the calculation is performed with an energy bias $\Delta/h = 800$ Hz between the double wells.} \label{fig:branchesNum}% \end{figure} \subsection{Comparison to the observed Floquet states} The numerical results can also be compared with the experimental data, for example in the configuration of Fig. \ref{fig3b} of the main text, where an analytic prediction cannot be obtained for the full range of interactions. We show in Fig. \ref{fig:branchesNum}(a) the experimental data for both $\ensuremath{U_{\mathrm{load}}}\xspace/h = 890$ Hz and $\ensuremath{U_{\mathrm{load}}}\xspace/h = 2800$ Hz, along with the corresponding numerical prediction. Part of the discrepancy between the two is due to the imperfect adiabaticity of the interaction ramps. We assume that the excess of double occupancies and singlets observed in the return fractions \ensuremath{\bar{p}_{\mathrm{DO}}^{(R)}}\xspace and \ensuremath{\bar{p}_{\mathrm s}^{(R)}}\xspace is generated uniformly during the interaction ramps (see Fig. \ref{fig3b}), and account for this imperfection by considering instead $p^{\ast} = \bar{p}-(\bar{p}^{(R)}-\bar{p}^{(R)}(\ensuremath{U_{\mathrm{load}}}\xspace))/2$, where $p$ can designate either \ensuremath{p_{\mathrm{DO}}}\xspace or \ensuremath{p_{\mathrm s}}\xspace. The effect of this correction is shown in Fig. \ref{fig:branchesNum}(b). Furthermore, the deviation between the numerical prediction and the experiment around $U=0$ can be qualitatively explained by the presence of a residual energy bias between the two wells. We show in Fig. \ref{fig:branchesNum}(c) the numerical calculation with a potential bias $\Delta/h = 800$ Hz. In our experiment, even though the mean potential bias may not be as large, the harmonic confinement introduces an inhomogeneous potential bias. It can be modelled with an average bias given by $\bar{\Delta} = \int n(r) m \omega_{\mathrm{harm}}^2\, |r|\, d\, \mathrm{d}r/ \int n(r)\,\mathrm{d}r$, with $n(r)$ the probability that a double well located at distance $r$ from the origin is populated, and $\omega_{\mathrm{harm}}/2\pi = 114$ Hz the harmonic confinement frequency. In our system, at zero temperature, $\bar{\Delta} = h\times 360$ Hz. This value can be increased both by the finite temperature of the sample and by a remaining potential bias \cite{si}. \begin{figure}[t] \includegraphics{Fig_B4.pdf} \caption{With the numerical simulation of the system, we can also access the projection of the atomic state onto \ket{\mathrm{D}_-} (left) and \ket{\mathrm{D}_+} (right). The red line is the Floquet state accessed by setting $\ensuremath{U_{\mathrm{load}}}\xspace/h=2800$ Hz, and the blue line corresponds to $\ensuremath{U_{\mathrm{load}}}\xspace/h = 890$ Hz. In this parameter regime, the \ket{\mathrm{D}_+} state can only be accessed by choosing $\ensuremath{U_{\mathrm{load}}}\xspace/h = 890$ Hz, while the \ket{\mathrm{D}_-} state can be accessed with both Floquet states, at least for $U>0$.} \label{fig:Do_pm} \end{figure} The numerical results also allow us to supplement the experimental results by distinguishing the states \ket{\mathrm{D}_-} and \ket{\mathrm{D}_+}, as shown in Fig. \ref{fig:Do_pm}.
When the drive is ramped up at $\ensuremath{U_{\mathrm{load}}}\xspace = 2800$ Hz followed by an interaction ramp to the attractive side, the initial \ket{\mathrm{s}} state is first transferred to the \ket{\mathrm{D}_-} state when $U=\omega$, and then back to the \ket{s} state when $U=-\omega$. Similarly, when the drive is ramped up at $\ensuremath{U_{\mathrm{load}}}\xspace = 890$ Hz, the initial \ket{s} state is transferred to the \ket{\mathrm{D}_-} state as interactions reach $U=\omega$, and to the \ket{\mathrm{D}_+} state when they are decreased and become attractive. \renewcommand{\theequation}{S\arabic{equation}} \makeatletter \renewcommand{\thefigure}{S\@arabic\c@figure} \renewcommand{\thetable}{S\@arabic\c@table} \setcounter{figure}{0} \setcounter{subsection}{0}
\section{Introduction} \label{sec:intro} The thermodynamic limit, i.\,e.\ taking averages over larger and larger volumes, performs the transition from microscopic to macroscopic models. In the context of this paper, thermodynamic limits are used to define the integrated density of states (IDS) of discrete Schr\"odinger operators, in particular, random ones. The IDS of random Schr\"odinger operators has been studied in the mathematical literature at least since 1971, see e.\,g.\ \cite{Pastur-71,Shubin-79,KirschM-07,Veselic-08}. The IDS measures the number of quantum states per unit volume below a threshold energy~$E$ and is thus a function of~$E$. The rigorous implementation of this notion involves a thermodynamic limit, see \cref{sec:defsandmodels}. Classical results about the existence of this macroscopic volume limit apply to fixed energy, which is to say that the limit is considered pointwise. More recently, there has been increased interest in studying stronger forms of convergence, for instance uniformly in the energy parameter. This has been studied in various geometric and stochastic contexts. For instance, there are works devoted to Hamiltonians associated to quasicrystals, \cite{LenzS-03a,LenzS-06}, to graphings \cite{Elek-07} and percolation graphs \cite{Veselic-05b,Veselic-06,AntunovicV-08b-short,SamavatSV-14}, as well as abstract frameworks covering general classes of examples \cite{LenzV-09}, where this is certainly a non-exhaustive list of references. We will review here in particular the results obtained in \cite{LenzMV-08}, \cite{LenzSV-10,LenzSV-12}, \cite{PogorzelskiS-16}, \cite{SchumacherSV-17}, and \cite{SchumacherSV-18}. They all have in common that their findings can be formulated as Banach-valued ergodic theorems. The convergence results in this work are not restricted to the IDS of random Schr\"odinger operators, but provide a general framework for thermodynamic limits with respect to a Banach space topology. To this end, we introduce almost additive fields, which, after normalization with volume, converge as the volume exhausts the physical space. The microscopic structure, like the values of the potential of the Schr\"odinger operator and/or whether a percolation site is open or closed, is encoded in a \emph{coloring}. In applications, the coloring is often random, owing to the lack of detailed knowledge about the microscopic properties of the material. For the thermodynamic limit to exist, one needs a certain homogeneity of the coloring. In our case, finite portions of the coloring, called \emph{patterns}, should occur with a certain frequency. If there are only finitely many \emph{colors}, i.\,e.\ values of the coloring, this condition is natural and allows the existence of the thermodynamic limit to be proven in any Banach space. For random Schr\"odinger operators, one chooses the Banach space of right continuous bounded functions with $\infty$-norm. Unfortunately, typical potentials have infinitely many values. In that setting, we model the colorings as random fields with some independence. This allows us to employ a multivariate version of the theorem of Glivenko--Cantelli. The paper is structured as follows. In \cref{sec:defsandmodels}, we introduce some specific random Schr\"odinger operators and their IDS in more detail. They serve as examples for the abstract theorems as well as illustrations of the meaning of our assumptions. We then present a series of theorems on thermodynamic limits of increasing complexity.
\Cref{sec:finiteA} contains the oldest of the presented results, which was in some sense the motivation for further developments. It considers fields over~$\Z^d$ obeying the finiteness condition~\eqref{eq:Afinite} on the set of colors. Since the results are easier to motivate and less technical than later ones, they serve well as a point of departure. In \cref{sec:amenable}, we briefly introduce finitely generated amenable groups and show how the main result generalizes to this setting. We first consider amenable groups which satisfy an additional tiling condition. To remove this tiling condition, we outline quasi tilings and discuss the result on amenable groups, still keeping the finiteness condition. The finiteness condition~\eqref{eq:Afinite} is finally tackled and removed in \cref{sec:infiniteA}. For clarity, we first deal with the Euclidean lattice~$\Z^d$ and only after that exhibit the most general formulation on finitely generated amenable groups without the finiteness condition. An outlook, a list of symbols, and a list of references conclude the paper. \section{Physical models} \label{sec:defsandmodels} In this section, we introduce example systems to motivate the abstract results. For the moment we will stick to the simple geometry of~$\Z^d$ but use a notation that later generalizes to general geometries on amenable groups in a straightforward way. \subsection{The Anderson model on \texorpdfstring{$\Z^d$}{Zd}} \label{sec:Anderson_Zd} The Anderson model, introduced by P. W. Anderson in \cite{Anderson-58}, is a prototypical random Schr\"odinger operator. To define it, let us introduce some notation. As physical space we choose the group $G:=\Z^d$. To make neighborhood relations explicit, we introduce the \emph{Cayley graph} of~$G$. The Cayley graph of~$G$ has~$G$ itself as vertex set, and two vertices $v,w\in G$ are connected by an undirected edge if $\norm[1]{v-w}=1$, where $\norm[1]\argmt$ is the usual norm on $\Z^d$ when viewed as a subset of $\ell^1\{1,\dotsc,d\}$. In this case, we write $v\sim w$. Thus, the considered geometry is nothing but the usual $d$-dimensional lattice. However, to stay consistent with the notation required in later parts of the article, we already use the notion of a Cayley graph. The general definition of Cayley graphs is introduced in \cref{sec:amenable}. The group~$G$ acts transitively on its Cayley graph by translation: \begin{equation*} \tilde\tau\colon G\times G\to G\textq, \tilde\tau_gv:=\tilde\tau(g,v):=v-g\text. \end{equation*} This group action lifts to a unitary group action on the square summable functions on the vertices of the Cayley graph, $\ell^2(G)$, which we denote by \begin{equation*} U_g\from\ell^2(G)\to\ell^2(G)\textq, (U_g\varphi)(v):=\varphi(\tilde\tau_gv)\text. \end{equation*} The \emph{Laplace operator} $\Laplace\from\ell^2(G)\to\ell^2(G)$, given by \begin{equation}\label{eq:laplaceop} (\Laplace\varphi)(v) :=\sum_{w\sim v}\bigl(\varphi(w)-\varphi(v)\bigr)\text, \end{equation} mimics the sum of the second derivatives. The operator~$-\Laplace$ is bounded, self-adjoint, positive semi-definite and serves as the quantum mechanical observable of the kinetic energy. Its spectrum is $\sigma(-\Laplace)=\Icc0{2d}$, as one can see with Fourier analysis. Note that the Laplace operator is equivariant with respect to $G$, i.\,e., for all $g\in G$, we have $\Laplace\circ U_g=U_g\circ\Laplace$. To build a Schr\"odinger operator, we need a multiplication operator~$V$ on $\ell^2(G)$, which plays the role of the observable of potential energy.
At this stage, the randomness enters. To this end, we choose a (measurable) set~$\cA\subseteq\R$, which we call the set of colors, equip it with the trace topology inherited from~$\R$ and its Borel $\sigma$-algebra~$\Borel(\cA)$, and choose a probability measure~$\Prob_0$ on $(\cA,\Borel(\cA))$, i.\,e.\ the colors. The probability space for the random potential is $(\Omega,\cB,\Prob) :=(\cA,\Borel(\cA),\Prob_0)^{\tensor G} :=(\cA^G,\Borel(\cA)^{\tensor G},\Prob_0^{\tensor G})$. The \emph{random potential} $V\from\Omega\times G\to\cA$ returns for~$v\in G$ the $v$-th coordinate of~$\omega\in\Omega$: \begin{equation}\label{eq:randompotential} V_\omega(v):=V(\omega,v):=\omega_v:=\omega(v)\text, \end{equation} so that the random variables $\Omega\ni\omega\mapsto V_\omega(v)$, $v\in G$, are independent and identically distributed. For each $\omega\in\Omega$, the random potential~$V_\omega$ operates on $\ell^2(G)$ by multiplication. If~$\cA\in\Borel(\R)$ is bounded, the multiplication operator~$V_\omega$ is bounded and self-adjoint. Analogous to above, the group action of~$G$ on the Cayley graph lifts to an ergodic group action $\tau_g\from\Omega\to\Omega$ on~$\Omega$: \begin{equation*} (\tau_g\omega)_v:=\omega_{\tilde\tau_gv}\text, \end{equation*} and the random potential is equivariant, meaning $V_{\tau_g\omega}\circ U_g=U_g\circ V_\omega$ for all $g\in G$. For each $\omega\in\Omega$, the Schr\"odinger operator \begin{equation} \label{eq:schroedinger} H_\omega:=-\Laplace+V_\omega\from\ell^2(G)\to\ell^2(G) \end{equation} is well-defined, bounded, and self-adjoint. The operator family $(H_\omega)_{\omega\in\Omega}$ is the famous \emph{Anderson Hamiltonian} and is equivariant, too: $H_{\tau_g\omega}\circ U_g=U_g\circ H_\omega$ for all $g\in G$. This equivariance shows that the spectrum $\sigma(H_\omega)$ is a shift invariant quantity. Since the group action of~$G$ on~$\Omega$ is ergodic, the spectrum of $H_\omega$ is almost surely constant: there is a closed set $\Sigma\subseteq\R$ such that \begin{equation*} \Sigma=\sigma(H_\omega) \end{equation*} for $\Prob$-almost all $\omega\in\Omega$. In quantum mechanics, the spectrum is interpreted as the set of possible energies of the particle described by~$H_\omega$. The fact that it is deterministic and does not depend on the microscopic structure of the material makes the interpretation as a homogeneous material possible. More details can be found for example in \cite{FigotinP-92,Kirsch-08}. The main example to motivate and to illustrate the results in later chapters is the integrated density of states (IDS), also known as the spectral distribution function, of the Anderson Hamiltonian. Its definition is \begin{equation*} N\from\R\to\R\textq, N(E):=\E[\spr{\delta_0}{\ifu{\Ioc{-\infty}E}(H_\omega)\delta_0}]\text, \end{equation*} where $\delta_v\in\ell^2(G)$ is the Kronecker delta at $v\in G$, i.\,e.\ $\delta_0=\ifu{\{v\}}$, and $\ifu{\Ioc{-\infty}E}(H_\omega)$ is the spectral projection of~$H_\omega$ onto the energy interval $\Ioc{-\infty}E$. The IDS is monotone, bounded by~$1$, and gives rise to a probability measure~$\d N$, called the \emph{density of states measure}. The topological support of the density of states is the almost sure spectrum of~$H_\omega$. In the following, we will describe how to approximate the IDS using only finite matrices. For each $\Lambda\subseteq G$, we write $\ell^2(\Lambda)$ for the subspace of $\varphi\in\ell^2(G)$ with support $\supp\varphi\subseteq\Lambda$. 
The indicator function of~$\Lambda$, used as a multiplication operator on~$\ell^2(G)$, is the self-adjoint orthogonal projection $\ifu\Lambda\from\ell^2(G)\to\ell^2(\Lambda)$. The operator $H_\omega^\Lambda\from\ell^2(\Lambda)\to\ell^2(\Lambda)$ is given by \begin{equation*} H_\omega^\Lambda:=\ifu\Lambda\circ H_\omega\circ(\ifu\Lambda)^*\text. \end{equation*} Now let~$\cF$ be the (countable) set of finite subsets of~$G$ and assume $\Lambda\in\cF$. The representing matrix of~$H_\omega^\Lambda$ contains the matrix elements $\spr{\delta_v}{H_\omega\delta_w}$ with $v,w\in\Lambda$ and is thus the $\Lambda\times\Lambda$ clipping of the representing $G\times G$ matrix of~$H_\omega$. Of course, $H_\omega^\Lambda$ is Hermitian and has real eigenvalues. For $\omega\in\Omega$, $\Lambda\in\cF$, the eigenvalue counting function $n(\Lambda,\omega)\from\R\to\R$, \begin{equation*} n(\Lambda,\omega)(E):=\tr\ifu{\Ioc{-\infty}E}(H_\omega^\Lambda) =\dim\{\varphi\in\ell^2(\Lambda)\mid\spr\varphi{H_\omega^\Lambda\varphi}\le E\} \end{equation*} is continuous from the right and counts the eigenvalues of~$H_\omega^\Lambda$ below the threshold energy~$E$ according to their multiplicity. We will view the family of eigenvalue counting functions as \begin{equation*} n\from\cF\times\Omega\to\B\text, \end{equation*} where~$\B$ is the Banach space of right continuous functions from~$\R$ to~$\R$. Denote by $\Lambda_L:=\Ico0L^d\isect\Z^d\in\cF$ a cube of side length $L\in\N$. The celebrated \emph{Pastur--Shubin formula} states that, for $\Prob$-almost all $\omega\in\Omega$ and all $E\in\R$ where the IDS~$N$ is continuous, we have \begin{equation}\label{eq:PasturShubin} N(E) =\lim_{L\to\infty}\setsize{\Lambda_L}^{-1}n(\Lambda_L,\omega)(E)\text, \end{equation} see e.\,g.\ \cite{SchumacherS-15}. In fact, since for this model in particular the IDS~$N$ is continuous at all energies, a straightforward argument, using also the boundedness and the monotonicity of~$N$, shows that the convergence not only holds for all~$E\in\R$, but is actually uniform in~$E$. The Pastur--Shubin formula can be viewed as an ergodic theorem, since it states the equality of an ensemble average and a spatial average. \subsection{Quantum percolation models} \label{sec:percolation} \subsubsection{Site and edge percolation} \label{sec:siteedge} We first introduce \emph{site percolation} on the Cayley graph of $G=\Z^d$. Let $\cA:=\{0,1\}$, and as in \cref{sec:Anderson_Zd} let $(\Omega,\cB,\Prob):=(\cA,\Borel(\cA),\Psite)^{\tensor G}$ with a (non-de\-ge\-ne\-rate) Bernoulli probability~$\Psite$ on~$\cA$. In the percolation setting, the randomness does not determine the potential, but the configuration space itself. The \emph{site percolation graph} $(\cV_\omega,\cE_\omega)$ corresponding to $\omega\in\Omega$ is induced by the Cayley graph of~$G$ on the vertex set \begin{equation*} \cV_\omega:=\{v\in G\mid\omega_v=1\}\text. \end{equation*} This means that the edge set is \begin{equation*} \cE_\omega:=\{\{v,w\}\subseteq\cV_\omega\mid v\sim w\}\text, \end{equation*} that is, we keep the edges of the Cayley graph which have both end points in~$\cV_\omega$. In \emph{edge percolation}, one does not erase random sites from the Cayley graph but instead random edges. The color set $\cA:=\{0,1\}^d$ is suitable to implement this strategy.
Namely, for a (non-degenerate) Bernoulli probability~$\Pedge$ on $\{0,1\}$, we define $(\Omega,\cB,\Prob):=(\cA,\Borel(\cA),\Pedge^{\tensor d})^{\tensor G}$ and define the edge set corresponding to $\omega\in\Omega$ by \begin{equation*} \cE_\omega:=\Union\nolimits_{j=1}^d\bigl\{\{v,v+e_j\}\bigm|(\omega_v)_j=1\bigr\} \end{equation*} where~$e_j$ is the $j$-th standard basis vector of~$\Z^d$. The vertex set of the edge percolation graph contains all vertices to which an edge in~$\cE_\omega$ is attached: \begin{equation}\label{eq:verticesInEdgePercolation} \cV_\omega:=\{v\in G\mid\exists w\in G\colon\{v,w\}\in\cE_\omega\}\text. \end{equation} The Laplace operator on a subgraph $(\cV_\omega,\cE_\omega)$ of the Cayley graph of~$G$ is \begin{equation}\label{eq:laplace} \Laplace_\omega\from\ell^2(\cV_\omega)\to\ell^2(\cV_\omega)\textq, (\Laplace_\omega\varphi)(v):= \sum_{w\in\cV_\omega,\{v,w\}\in\cE_\omega}\bigl(\varphi(w)-\varphi(v)\bigr) \text. \end{equation} Since we want to use the group action of~$G$, we define the Hamiltonians on $\ell^2(G)$ via \begin{equation*} H_\omega\from\ell^2(G)\to\ell^2(G)\textq, (H_\omega\varphi)(v):= \begin{cases} -(\Laplace_\omega\varphi)(v)&\text{if $v\in\cV_\omega$ and}\\ \alpha\varphi(v) &\text{if $v\in G\setminus\cV_\omega$} \end{cases} \end{equation*} with a constant $\alpha\in\Ioo{2d}\infty$. The operator~$H_\omega$ leaves the subspaces $\ell^2(\cV_\omega)$ and $\ell^2(G\setminus\cV_\omega)$ invariant. Since $\Laplace_\omega$ is bounded by~$2d$ and $\alpha>2d$, the spectrum of~$H_\omega$ is the disjoint union of $\sigma(-\Laplace_\omega)$ and $\{\alpha\}$, and the two components can be studied separately. In fact, the percolation Hamiltonians are equivariant, and again their spectrum is almost surely constant. As in \cref{sec:Anderson_Zd}, we define the IDS $N\from\R\to\R$ and the eigenvalue counting function $n\from\cF\times\Omega\to\B$ of~$H_\omega$. The Pastur--Shubin formula~\eqref{eq:PasturShubin} remains correct $\Prob$-almost surely for all energies $E\in\R$ at which~$N$ is continuous, see for instance \cite{Veselic-05a,Veselic-05b,KirschM-06,Veselic-06,AntunovicV-08b-short}. But in contrast to the Anderson Hamiltonian, the IDS of a percolation Hamiltonian is not continuous. In fact, the set of discontinuities of~$N$ is dense in~$\Sigma$. Note that since the IDS is monotone, there can be at most countably many discontinuities. \begin{figure} \begin{center} \tikzsetnextfilename{prob4cluster}% \begin{tikzpicture} \draw[fill] (0,0) circle[radius=1mm] -- (1,0) circle[radius=1mm]; \foreach \x/\y in {0/1, -1/0, 0/-1} \draw[dotted] (0,0) -- (\x,\y) circle[radius=1mm]; \foreach \x/\y in {1/1, 2/0, 1/-1} \draw[dotted] (1,0) -- (\x,\y) circle[radius=1mm]; \begin{scope}[xshift=5cm] \draw[fill] (0,0) circle[radius=1mm] -- (1,0) circle[radius=1mm]; \foreach \x/\y in {0/1, -1/0, 0/-1} \draw[dotted] (0,0) -- (\x,\y); \foreach \x/\y in {1/1, 2/0, 1/-1} \draw[dotted] (1,0) -- (\x,\y); \end{scope} \end{tikzpicture} \end{center} \caption{The probability that $0\in G$ is contained in a cluster of size~$2$ is $2d\Psite(1)^2\Psite(0)^{4d-2}$ in site percolation and $2d\Pedge(1)\Pedge(0)^{4d-2}$ in edge percolation. Illustration for $d=2$.} \label{fig:clustersize2} \end{figure}% That the IDS is discontinuous can be seen as follows. The percolation graph splits into its connected components, the so-called \emph{clusters}. More precisely, the clusters of a graph are the equivalence classes of the minimal equivalence relation for which neighbors are equivalent.
The event that $0\in G$ is contained in a finite cluster has positive probability, see \cref{fig:clustersize2}. A finite cluster of size~$s\in\N$ supports~$s$ eigenfunctions of~$H_\omega$. In particular, the constant function with value $s^{-1/2}$ is an $\ell^2$-normalized eigenfunction of~$H_\omega$ with eigenvalue~$0$. On the event that~$0$ is contained in a finite cluster of size~$s$, we have $\spr{\delta_0}{\ifu{\Ioc{-\infty}0}(H_\omega)\delta_0}=s^{-1}$. Thus \begin{equation*} N(0) =\E[\spr{\delta_0}{\ifu{\Ioc{-\infty}0}(H_\omega)\delta_0}] \ge\sum_{s\in\N}s^{-1}\Prob(\text{the cluster of~$0$ has size~$s$}) >0\text, \end{equation*} and since $N(E)=0$ for all $E<0$, we have verified that~$N$ is discontinuous. Note that there can be compactly supported eigenfunctions of~$H_\omega$ on infinite clusters. To construct an example, consider a finite symmetric cluster like $\{-e_1,0,e_1\}\subseteq G$. The symmetry that flips the sign of~$e_1$ commutes with the Laplace operator restricted to this cluster. Therefore, there are antisymmetric eigenfunctions like $(-1,0,1)/\sqrt2$, which have to vanish on each fixed point of the symmetry. Now connect the finite symmetric cluster to an infinite cluster with edges only touching the fixed points. For more details see \cite{ChayesCF-85a,Veselic-05b}. \subsubsection{The Anderson model on a percolation graph} \label{sec:AndPerc} We will now combine the random kinetic energy of the percolation Hamiltonian with the random potential energy of the Anderson model. A suitable set of colors is $\cA=\cA_0\times\{0,1\}$ for site and $\cA=\cA_0\times\{0,1\}^d$ for edge percolation, where the bounded set~$\cA_0\subseteq\R$ contains the values of the random potential. Accordingly, the probability space is \begin{equation*} (\Omega,\cB,\Prob):= \begin{cases} (\cA,\Borel(\cA),\Prob_0\tensor\Psite)^{\tensor G}&\text{or}\\ (\cA,\Borel(\cA),\Prob_0\tensor\Pedge^{\tensor d})^{\tensor G}\text. \end{cases} \end{equation*} For each $v\in G$, the random parameter $\omega=(\omega_v)_{v\in G}\in\Omega$ has a coordinate $\omega_v=(\omega'_v,\omega''_v)$ with $\omega'_v\in\cA_0$ and either $\omega''_v\in\{0,1\}$ or $\omega''_v\in\{0,1\}^d$. The random potential~$V\from\Omega\times G\to\cA_0\subseteq\R$ is defined by \begin{equation} V_\omega(v):=V(\omega,v):=\omega'_v\text. \end{equation} Analogously to the construction in \cref{sec:siteedge}, we define the site percolation graph as the graph induced by the Cayley graph of~$G$ on the vertex set \begin{equation*} \cV_\omega:=\{v\in G\mid\omega''_v=1\}\text. \end{equation*} The edge percolation graph has edge set \begin{equation*} \cE_\omega:=\Union\nolimits_{j=1}^d \bigl\{\{v,v+e_j\}\bigm|(\omega''_v)_j=1\bigr\} \end{equation*} and vertex set given by~\eqref{eq:verticesInEdgePercolation}. The Laplace operator on the percolation graph is given by~\eqref{eq:laplace}. The Hamiltonians $H_\omega\from\ell^2(G)\to\ell^2(G)$ are defined by \begin{equation*} (H_\omega\varphi)(v):= \begin{cases} -(\Laplace_\omega\varphi)(v)+V_\omega(v) &\text{if $v\in\cV_\omega$ and}\\ \alpha\varphi(v) &\text{if $v\in G\setminus\cV_\omega$} \end{cases} \end{equation*} with $\alpha>2d+\sup\cA_0$. Again, the spectrum $\sigma(H_\omega)$ will have a component~$\{\alpha\}$ and a disjoint remainder, which is the part we are most interested in. We have again an equivariant group action, namely for all $g\in G$, \begin{equation*} H_{\tau_g\omega}\circ U_g =U_g\circ H_\omega\text.
\end{equation*} If the potential has a probability density, the randomness will smooth out at least some discontinuities of the IDS, see \cite{Veselic-06}. \section{Ergodic theorems for finite colors on~\texorpdfstring{$\Z^d$}{Zd}} \label{sec:finiteA} In \cite[Theorem~2]{LenzMV-08}, the authors prove a quantitative version of the following statement. \begin{Theorem}\label{thm:LMV-ecf} In either of the settings presented in \cref{sec:defsandmodels}, assume that the set~$\cA$ of colors is finite: \begin{equation}\label{eq:Afinite} \setsize\cA<\infty\text. \end{equation} Then, for $\Prob$-almost all $\omega\in\Omega$, \begin{equation*} \lim_{L\to\infty}\norm[\infty] {\setsize{\Lambda_L}^{-1}n(\Lambda_L,\omega)-N} =0\text, \end{equation*} where $N\from\R\to\R$ is the IDS. \end{Theorem} Condition~\eqref{eq:Afinite} is automatically satisfied unless a potential with infinitely many values is present. In \cite{LenzMV-08}, the authors do not work with a probability space of many configurations, but rather fix one configuration and argue based on certain required properties. These properties turn out to be almost surely satisfied in the examples in \cref{sec:defsandmodels}. Nonetheless, it is illuminating to see a deterministic example, which we present next. \subsection{Visible points} \label{sec:visible} A point of~$\Z^d$ is \emph{visible} from the origin if there is no other point of~$\Z^d$ on the straight line connecting the origin and the point in question, see \cref{fig:visiblepoints_small,fig:visiblepoints_large}. The set of visible points is thus \begin{equation*} \cV:=\{x\in\Z^d\mid\{tx\mid t\in\Icc01\}\isect\Z^d=\{0,x\}\}\text. \end{equation*} Equivalently, a point $x\in\Z^d$ is visible if $x=0$ or if the greatest common divisor of its coordinates is~$1$. See \cite{BaakeMP-00} for a systematic exploration of~$\cV$. \begin{figure} \pgfmathsetmacro{\X}{10} \pgfmathsetmacro{\Y}{6} \begin{center} \tikzsetnextfilename{visible1}% \begin{tikzpicture} \foreach \y in {0, ..., \Y} { \foreach \x in {0, ..., \X} { \pgfmathsetmacro{\visible}{ gcd(\x,\y) == 1 } \ifthenelse{\visible = 1} {\draw[draw=gray!50] (0,0) -- (\x,\y); \filldraw[black] (\x,\y) circle[radius=0.5mm];} {\filldraw[black] (\x,\y) circle[radius=0.1mm];} } } \filldraw[black] (0,0) circle[radius=0.5mm]; \end{tikzpicture} \end{center} \caption{The visible points in $\Icc0\X\times\Icc0\Y$ with the lines connecting them to the origin} \label{fig:visiblepoints_small} \end{figure}% The indicator function $\ifu\cV\from\Z^d\to\cA:=\{0,1\}$ can serve as a potential for a Schr\"odinger operator: \begin{equation*} \Hvis\from\ell^2(\Z^d)\to\ell^2(\Z^d)\textq, \Hvis:=-\Laplace+\ifu\cV\text. \end{equation*} Analogously to \cref{sec:Anderson_Zd}, we can define the eigenvalue counting function \begin{equation*} n\from\cF\to\B\textq, n(\Lambda)(E):=\tr\ifu{\Ioc{-\infty}E}(\Hvis^\Lambda)\text. \end{equation*} The results in \cite{LenzMV-08} imply that the thermodynamic limit \begin{equation*} \lim_{L\to\infty}\setsize{\Lambda_L}^{-1}n(\Lambda_L)(E) \end{equation*} exists uniformly for all $E\in\R$, but, to the best of our knowledge, there is no Pastur--Shubin formula.
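For illustration, the finite-volume quantity $\setsize{\Lambda_L}^{-1}n(\Lambda_L)(E)$ can be evaluated directly for $d=2$ by assembling the $\Lambda_L\times\Lambda_L$ clipping of $\Hvis$ as a matrix and counting eigenvalues; the following short Python/NumPy sketch (box size and threshold energy are illustrative choices, not taken from the literature) does exactly this.
\begin{verbatim}
import numpy as np
from math import gcd

def counting_function(L, E):
    # |Lambda_L|^{-1} n(Lambda_L)(E) for H = -Laplacian + 1_V on the box
    # Lambda_L = [0,L)^2 in Z^2, where V is the set of visible points and
    # H^Lambda is the Lambda x Lambda clipping of the infinite matrix.
    sites = [(x, y) for x in range(L) for y in range(L)]
    index = {v: i for i, v in enumerate(sites)}
    H = np.zeros((len(sites), len(sites)))
    for (x, y), i in index.items():
        H[i, i] = 4.0                          # diagonal of -Laplacian (2d, d = 2)
        if (x, y) == (0, 0) or gcd(x, y) == 1:
            H[i, i] += 1.0                     # potential: indicator of V
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            w = (x + dx, y + dy)
            if w in index:                     # neighbors inside Lambda_L
                H[i, index[w]] = -1.0
    evals = np.linalg.eigvalsh(H)
    return np.mean(evals <= E)                 # fraction of eigenvalues <= E

print(counting_function(L=30, E=2.0))
\end{verbatim}
Increasing $L$ produces successive approximations to the uniform limit discussed above.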
\begin{figure} \pgfmathsetmacro{\X}{150} \pgfmathsetmacro{\Y}{100} \begin{center} \pgfmathsetmacro{\scaleforpicture}{12/(\X+1)} \tikzsetnextfilename{visible2}% \begin{tikzpicture}[scale=\scaleforpicture] \filldraw (0,0) circle[radius=0.5mm]; \foreach \y in {0, ..., \Y} { \foreach \x in {0, ..., \X} { \pgfmathsetmacro{\visible}{ gcd(\x,\y) == 1 } \ifthenelse{\visible = 1} {\filldraw (\x,\y) circle[radius=0.5mm];} {} } } \end{tikzpicture} \end{center} \caption{The visible points in $\Icc0\X\times\Icc0\Y$} \label{fig:visiblepoints_large} \end{figure}% \subsection{Patterns and frequencies}\label{sec:patternsfrequencies} To understand the mechanism behind \cref{thm:LMV-ecf} better and to motivate the abstract formulation of the next theorem, let us examine the situation of percolation without potential in more detail. We already indicated in \cref{sec:siteedge} with the example of the eigenvalue~$0$, how discontinuities of~$N$ arise. We now want to understand, how the limit of the eigenvalue counting functions on large boxes obtains discontinuities of the same height as the IDS. Let us focus on finite clusters again. In $n(\Lambda_L,\omega)$, the eigenvalues corresponding to eigenfunctions supported on finite clusters are counted at least as often as a copy of their cluster is contained in~$\Lambda_L$. Because we normalize with the volume of the box, $\setsize{\Lambda_L}^{-1}n(\Lambda_L,\omega)$, the important quantity turns out to be the relative frequency with which a finite cluster occurs in a large box~$\Lambda_L$. Since the translations $(\tau_g)_{g\in G}$ act ergodically on~$\Omega$, the relative frequencies converge to the probability of their respective cluster at a fixed location. But exactly these probabilities cause the discontinuities of the IDS. Let us formalize the counting of copies of clusters, or, in the more general setting of a finite set of colors~$\setsize\cA<\infty$, shifts of patterns in boxes, following \cite{LenzMV-08}. Recall that the set of finite subsets of~$G$ is denoted by~$\cF$. A (finite) \emph{pattern} with \emph{domain} $\Lambda\in\cF$ is a map $P\from\Lambda\to\cA$. Since a pattern $P\in\cA^\Lambda$ can be identified with the set $\{\omega\in\Omega\mid\omega_\Lambda=P\}$, we reuse the notation for the group action of~$G$ on~$\Omega$ for the shifts of patterns: \begin{equation*} \tau_g\from\cA^\Lambda\to\cA^{\tilde\tau_g\Lambda}\textq, (\tau_gP)(v):=P(\tilde\tau_gv)\text. \end{equation*} This $G$-action induces an equivalence relation on the set of all finite patterns. We denote the equivalence class of a pattern $P\from\Lambda\to\cA$ by $[P]_G:=\{\tau_gP\mid g\in G\}$. A coloring $\omega\in\Omega=\cA^G$ and a finite subset $\Lambda\in\cF$ define the pattern $\omega_\Lambda:=\omega|_\Lambda\from\Lambda\to\cA$, $\omega_\Lambda(v):=\omega_v$. Similarly, for $\Lambda\subseteq\Lambda'\in\cF$, a pattern~$P'$ with domain~$\Lambda'$ induces the pattern $P'_\Lambda:=P'|_\Lambda\from\Lambda\to\cA$ via $P'_\Lambda(v):=P'(v)$. Given $\Lambda'\in\cF$, set \begin{equation*} P'|_\cF:= \{P'|_\Lambda\mid\Lambda\in\cF,\Lambda\subseteq\Lambda'\}\text. \end{equation*} This set of all finite patterns induced by $P'\in\cA^{\Lambda'}$ is useful to count how often a copy of a pattern $P\in\cA^\Lambda$ occurs in~$P'$: \begin{equation}\label{eq:patterncount} \sharp_PP':=\setsize{[P]_G\isect P'|_\cF}\text. \end{equation} See Figure \ref{fig:patterncount} for a visualisation of this pattern counting function. 
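The counting function $\sharp_PP'$ from~\eqref{eq:patterncount} is straightforward to implement. The following Python sketch (ours, purely for illustration; patterns are encoded as dictionaries from sites of~$\Z^d$ to colors) counts the translated copies of~$P$ inside~$P'$.
\begin{verbatim}
def pattern_count(P, Pprime):
    # Count the translated copies of the pattern P inside P', i.e. sharp_P(P').
    # Patterns are dicts mapping sites of Z^d (tuples) to colors.
    hits, base = 0, next(iter(P))
    for anchor in Pprime:
        # Candidate translation: move the reference site of P onto the anchor.
        shift = tuple(a - b for a, b in zip(anchor, base))
        translated = {tuple(v + s for v, s in zip(site, shift)): col
                      for site, col in P.items()}
        if all(Pprime.get(site) == col for site, col in translated.items()):
            hits += 1
    return hits

# Toy example: a horizontal two-site pattern inside a 3 x 3 checkerboard coloring.
Pprime = {(x, y): (x + y) % 2 for x in range(3) for y in range(3)}
P = {(0, 0): 0, (1, 0): 1}
print(pattern_count(P, Pprime))   # -> 3
\end{verbatim}
Normalizing this count by the size of the domain of~$P'$, as in \cref{fig:patterncount}, yields the empirical proportion that enters the frequencies below.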
\begin{figure} \begin{tikzpicture}[scale=0.6] { {\color{black} { \foreach \y in {0,1,...,8}{ \draw (1,\y) --(9,\y); } \foreach \x in {1,2,...,9}{ \draw (\x,0) --(\x,8); } \filldraw[blue] (8,2) circle (4pt); \filldraw[blue] (7,8) circle (4pt); \filldraw[blue] (6,5) circle (4pt); \filldraw[blue] (7,2) circle (4pt); \filldraw[blue] (6,8) circle (4pt); \filldraw[red] (1,1) circle (4pt); \filldraw[green] (1,2) circle (4pt); \filldraw[green] (1,3) circle (4pt); \filldraw[red] (1,4) circle (4pt); \filldraw[red] (1,5) circle (4pt); \filldraw[red] (1,6) circle (4pt); \filldraw[green] (1,7) circle (4pt); \filldraw[green] (1,8) circle (4pt); \filldraw[red] (2,1) circle (4pt); \filldraw[green] (2,1) circle (4pt); \filldraw[red] (2,2) circle (4pt); \filldraw[green] (2,3) circle (4pt); \filldraw[blue] (2,4) circle (4pt); \filldraw[blue] (2,5) circle (4pt); \filldraw[red] (2,6) circle (4pt); \filldraw[red] (2,7) circle (4pt); \filldraw[green] (2,8) circle (4pt); \filldraw[green] (3,1) circle (4pt); \filldraw[red] (3,2) circle (4pt); \filldraw[red] (3,3) circle (4pt); \filldraw[red] (3,4) circle (4pt); \filldraw[green] (3,5) circle (4pt); \filldraw[red] (3,6) circle (4pt); \filldraw[blue] (3,7) circle (4pt); \filldraw[green] (3,8) circle (4pt); \filldraw[blue] (4,1) circle (4pt); \filldraw[green] (4,2) circle (4pt); \filldraw[green] (4,3) circle (4pt); \filldraw[green] (4,4) circle (4pt); \filldraw[blue] (4,5) circle (4pt); \filldraw[blue] (4,6) circle (4pt); \filldraw[blue] (4,7) circle (4pt); \filldraw[blue] (4,8) circle (4pt); \filldraw[red] (5,1) circle (4pt); \filldraw[blue] (5,2) circle (4pt); \filldraw[red] (5,3) circle (4pt); \filldraw[green] (5,4) circle (4pt); \filldraw[green] (5,5) circle (4pt); \filldraw[blue] (5,6) circle (4pt); \filldraw[blue] (5,7) circle (4pt); \filldraw[blue] (5,8) circle (4pt); \filldraw[red] (6,1) circle (4pt); \filldraw[red] (6,2) circle (4pt); \filldraw[green] (6,3) circle (4pt); \filldraw[red] (6,4) circle (4pt); \filldraw[green] (6,5) circle (4pt); \filldraw[red] (6,6) circle (4pt); \filldraw[red] (6,7) circle (4pt); \filldraw[green] (6,8) circle (4pt); \filldraw[green] (7,1) circle (4pt); \filldraw[blue] (7,2) circle (4pt); \filldraw[red] (7,3) circle (4pt); \filldraw[red] (7,4) circle (4pt); \filldraw[green] (7,5) circle (4pt); \filldraw[red] (7,6) circle (4pt); \filldraw[green] (7,7) circle (4pt); \filldraw[green] (7,8) circle (4pt); \filldraw[green] (8,1) circle (4pt); \filldraw[blue] (8,2) circle (4pt); \filldraw[green] (8,3) circle (4pt); \filldraw[green] (8,4) circle (4pt); \filldraw[blue] (8,5) circle (4pt); \filldraw[blue] (8,6) circle (4pt); \filldraw[red] (8,7) circle (4pt); \filldraw[green] (8,8) circle (4pt); \filldraw[red] (9,1) circle (4pt); \filldraw[blue] (9,2) circle (4pt); \filldraw[red] (9,3) circle (4pt); \filldraw[green] (9,4) circle (4pt); \filldraw[blue] (9,5) circle (4pt); \filldraw[blue] (9,6) circle (4pt); \filldraw[blue] (9,7) circle (4pt); \filldraw[red] (9,8) circle (4pt); \filldraw[blue] (1,0) circle (4pt); \filldraw[blue] (2,0) circle (4pt); \filldraw[red] (3,0) circle (4pt); \filldraw[red] (4,0) circle (4pt); \filldraw[green] (5,0) circle (4pt); \filldraw[blue] (6,0) circle (4pt); \filldraw[green] (7,0) circle (4pt); \filldraw[red] (8,0) circle (4pt); \filldraw[green] (9,0) circle (4pt); } } } \draw[rounded corners=0.1cm] (4.7,4.2)-- ++(0,1.1)-- ++(1.6,0)-- ++(0,-1.6)-- ++(-1.6,0)-- ++(0,0.5); \draw[rounded corners=0.1cm] (0.7,2.2)-- ++(0,1.1)-- ++(1.6,0)-- ++(0,-1.6)-- ++(-1.6,0)-- ++(0,0.5); \draw[rounded corners=0.1cm] (6.7,0.2)-- 
++(0,1.1)--
++(1.6,0)--
++(0,-1.6)--
++(-1.6,0)--
++(0,0.5);
\draw[rounded corners=0.1cm] (0.7,7.2)--
++(0,1.1)--
++(1.6,0)--
++(0,-1.6)--
++(-1.6,0)--
++(0,0.5);
\draw[rounded corners=0.1cm] (3.7,3.2)--
++(0,1.1)--
++(1.6,0)--
++(0,-1.6)--
++(-1.6,0)--
++(0,0.5);
\draw[rounded corners=0.1cm] (7.7,3.2)--
++(0,1.1)--
++(1.6,0)--
++(0,-1.6)--
++(-1.6,0)--
++(0,0.5);
\draw[rounded corners=0.1cm] (6.7,7.2)--
++(0,1.1)--
++(1.6,0)--
++(0,-1.6)--
++(-1.6,0)--
++(0,0.5);
\end{tikzpicture}
\caption{The figure shows a pattern $P'$ defined on a square $Q$ of side length~$9$. The marked $2\times2$ squares highlight the $7$ copies of a pattern $P$ in $P'$. Thus, we count $\sharp_PP'=7$. Obviously, to obtain a meaningful proportion of the occurrences of $P$ in $P'$, an appropriate normalization term is the size of the domain of $P'$, i.\,e.~$\tfrac{\sharp_PP'}{\setsize Q}=\tfrac{7}{81}$.}
\label{fig:patterncount}
\end{figure}
In this notation and for the probabilistic models from \cref{sec:defsandmodels}, the ergodic theorem for $\Z^d$-actions, see \cite{Keller-98}, states that, for all $\Lambda\in\cF$, $P\in\cA^\Lambda$, and $\Prob$-almost all $\omega\in\Omega$,
\begin{equation*}
\setsize{\Lambda_L}^{-1}\sharp_P(\omega_{\Lambda_L})
\xto{L\to\infty}\Prob\{\omega\in\Omega\mid\omega_\Lambda=P\}\text.
\end{equation*}
The existence of the limit $\lim_{L\to\infty}\setsize{\Lambda_L}^{-1}\sharp_P(\cV|_{\Lambda_L})$ for all patterns $P\in\cA^\Lambda$ with $\Lambda\in\cF$ in the setting of \cref{sec:visible} is shown in \cite{BaakeMP-00} with different methods. See \cref{fig:visiblepoints_large} for an optical impression.
Another feature to extract from the percolation example is the following. By restricting the Hamiltonian to finite boxes, we modify some clusters so that some part of their boundary is more ``straight''. As a consequence, the clusters with one ``straight'' boundary are over-represented in the sample. Of course, as the boxes grow larger and larger, their surface increases as well. Fortunately, the even faster growth of the volume of the boxes makes the boundary negligible, or more precisely: the proportion of the surface to the volume vanishes in the limit. By this mechanism, the surplus of clusters with artificial straight boundary becomes negligible.
To formalize the notions above, let us introduce the \emph{$r$-boundary} of~$\Lambda\in\cF$ for $r>0$ as
\begin{equation}\label{eq:rboundary}
\partial^r\Lambda:=\{x\in G\setminus\Lambda\mid\dist(x,\Lambda)\le r\}
\union\{v\in\Lambda\mid\dist(v,G\setminus\Lambda)\le r\}\text.
\end{equation}
The distance~$\dist$ is the length of the shortest path in the Cayley graph, or, for $G=\Z^d$, the distance induced by the $1$-norm. A sequence $(Q_j)_{j\in\N}$ of finite sets $Q_j\in\cF$ is called a \emph{F\o{}lner sequence}, if, for all $r>0$,
\begin{equation}\label{eq:folner}
\lim_{j\to\infty}\frac{\setsize{\partial^rQ_j}}{\setsize{Q_j}}
=0\text.
\end{equation}
Another common name for F\o{}lner sequences is \emph{van Hove sequence}. The finite boxes $\Lambda_L:=\Ico0L^d\isect\Z^d\in\cF$, $L\in\N$, form an example of a F\o{}lner sequence, because
$\setsize{\partial^r\Lambda_L}
\le(L+2\floor r)^d-(L-2\floor r)^d
=4\floor r\sum_{k=0}^{d-1}(L+2\floor r)^k(L-2\floor r)^{d-1-k}
\le4dr(L+2r)^{d-1}$
is at most of the order of~$L^{d-1}$, while $\setsize{\Lambda_L}=L^d$.
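The same boundary-to-volume comparison can be checked numerically. The following Python sketch (our own brute-force check, for $d=2$ only) evaluates $\setsize{\partial^r\Lambda_L}/\setsize{\Lambda_L}$ for the two-sided $r$-boundary from~\eqref{eq:rboundary} with the $1$-norm distance; the printed ratios decay roughly like~$1/L$.
\begin{verbatim}
from itertools import product

def r_boundary_size(L, r, d=2):
    # Brute-force |partial^r Lambda_L| for Lambda_L = [0, L)^d with 1-norm distance.
    box = set(product(range(L), repeat=d))
    dist1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    # Outer part: points outside the box within distance r of it.
    outer = {v for v in product(range(-r, L + r), repeat=d)
             if v not in box and min(dist1(v, w) for w in box) <= r}
    # Inner part: points of the box within distance r of the complement.
    inner = {v for v in box
             if min(min(v[i] + 1, L - v[i]) for i in range(d)) <= r}
    return len(outer) + len(inner)

for L in (10, 20, 40):
    print(L, r_boundary_size(L, r=2) / L**2)
\end{verbatim}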
If, for a pattern $P\in\cA^\Lambda$, an $\omega\in\Omega$, and a F\o{}lner sequence $(Q_j)_{j\in\N}$, the limit \begin{equation*} \nu_P:=\lim_{j\to\infty}\frac{\sharp_P(\omega_{Q_j})}{\setsize{Q_j}} \end{equation*} exists, it is called the \emph{frequency of~$P$ along $(Q_j)_{j\in\N}$}. See Figure \ref{fig:patterncount} for an illustration of one element of this sequence. Note that the existence of $\nu_P$, for all patterns $P$ with finite domain, can be shown almost surely for all our examples from \cref{sec:defsandmodels}. The eigenvalue counting functions are in a certain sense local enough to reflect the effect of small boundary compared to the volume. The following notions encapsulate the crucial properties. \begin{Definition} A map $b\from\cF\to\Ico0\infty$ is called \emph{boundary term}, if \begin{itemize}[nosep] \item $b$ is invariant under~$G$: $b(\Lambda)=b(\tilde\tau_g\Lambda)$ for all $g\in G$, \item $\lim_{j\to\infty}\setsize{Q_j}^{-1}b(Q_j)=0$ for all F\o{}lner sequences $(Q_j)_{j\in\N}$, \item $b$ is bounded, i.\,e.\ its \emph{bound} $D_b:=\sup\{\setsize\Lambda^{-1}b(\Lambda)\mid \Lambda\in\cF,\Lambda\ne\emptyset\}<\infty$ is finite, and \item for $\Lambda,\Lambda'\in\cF$ we have $b(\Lambda\union\Lambda')\le b(\Lambda)+b(\Lambda')$, $b(\Lambda\isect\Lambda')\le b(\Lambda)+b(\Lambda')$, and $b(\Lambda\setminus\Lambda')\le b(\Lambda)+b(\Lambda')$. \end{itemize} \end{Definition} \begin{figure} \begin{center} \tikzsetnextfilename{boundaryrules}% \begin{tikzpicture} \draw [very thick] (0,0) -- (0.75,0.5) arc [start angle=30, end angle=335, x radius=2cm, y radius=1cm] -- cycle; \draw (0,0) -- (2,-1) -- (1.5,1) -- cycle; \draw (1,0) arc [ start angle=0 , end angle=360 , x radius=2cm , y radius=1cm ] -- cycle; \node at (-1.3,0) {$\Lambda\setminus\Lambda'$}; \node at (-2.9,.8) {$\Lambda$}; \node at (2,.6) {$\Lambda'$}; \end{tikzpicture} \end{center} \caption{The requirement $b(\Lambda\setminus\Lambda')\le b(\Lambda)+b(\Lambda')$ is motivated by the fact that the boundary of $\Lambda\setminus\Lambda'$ is a subset of the boundary of~$\Lambda$ united with the boundary of~$\Lambda'$.} \label{fig:pacman} \end{figure} The last property is natural to require, as illustrated in \cref{fig:pacman}, but in fact only necessary when used with quasi tilings, see \cref{sec:general}. \begin{Definition}\label{def:almost-additive} Consider a Banach space $(\B,\norm\argmt)$, a field $F\from\cF\to\B$, and a coloring $\omega\in\cA^G$. \begin{itemize} \item The field~$F$ is \emph{almost additive}, if there exists a boundary term~$b$ such that, for all $n\in\N$, disjoint sets $\Lambda_1,\dotsc\Lambda_n\in\cF$, and $\Lambda:=\Union_{k=1}^n\Lambda_k$, it holds true that \begin{equation*} \norm{F(\Lambda)-\sum_{k=1}^nF(\Lambda_k)} \le\sum_{k=1}^nb(\Lambda_k)\text. \end{equation*} \item The field~$F$ is \emph{$\omega$-invariant}, if for all $\Lambda,\Lambda'\in\cF$ such that the patterns $\omega_\Lambda$ and $\omega_{\Lambda'}$ are $G$-equivalent, we have \begin{equation*} F(\Lambda)=F(\Lambda')\text. \end{equation*} \end{itemize} \end{Definition} To an $\omega$-invariant field $F\from\cF\to\B$, the \emph{pattern function} $\tilde F\from\Union_{\Lambda\in\cF}\cA^\Lambda\to\B$ with \begin{equation*} \tilde F(P):= \begin{cases} F(\Lambda)&\text{if there is a set $\Lambda\in\cF$ with $P\in[\omega_\Lambda]_G$, and}\\ 0&\text{otherwise} \end{cases} \end{equation*} is well defined. 
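Almost additivity of eigenvalue counting functions can also be observed in a small numerical experiment. The following Python sketch is ours and only illustrative: the restriction of the operator to a box is a simple toy convention, and the random potential merely makes the spectrum generic. It splits a one-dimensional box into two pieces and checks that the counting functions differ, uniformly in the energy, by at most the rank of the removed hopping term, in line with a boundary term proportional to~$\setsize{\partial^1\Lambda}$.
\begin{verbatim}
import numpy as np

def counting(H, energies):
    # Eigenvalue counting function E -> #{eigenvalues of H <= E}.
    ev = np.linalg.eigvalsh(H)
    return np.array([np.count_nonzero(ev <= E) for E in energies])

def hamiltonian(sites, V):
    # Chain Laplacian (diagonal 2, off-diagonal -1) plus potential on consecutive sites.
    n = len(sites)
    H = 2.0 * np.eye(n) + np.diag(V[sites])
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = -1.0
    return H

rng = np.random.default_rng(0)
whole = np.arange(200)
left, right = whole[:120], whole[120:]
V = rng.uniform(0.0, 1.0, size=len(whole))
energies = np.linspace(-1.0, 5.0, 601)

diff = (counting(hamiltonian(whole, V), energies)
        - counting(hamiltonian(left, V), energies)
        - counting(hamiltonian(right, V), energies))
print(np.abs(diff).max())   # at most 2: the rank of the removed hopping term
\end{verbatim}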
Every almost additive and $\omega$-invariant field~$F$ is \emph{bounded} and has a \emph{bound}~$C_F$ in the following sense:
\begin{equation*}
C_F:=
\sup\{\setsize\Lambda^{-1}\norm{F(\Lambda)}\mid
\Lambda\in\cF\setminus\{\emptyset\}\}<\infty\text.
\end{equation*}
Indeed, since~$\cA$ is finite,
\begin{equation*}
C_F
\le\sup_{\Lambda\ne\emptyset}\frac1{\setsize\Lambda}\sum_{v\in\Lambda}
\bigl(\norm{F(\{v\})}+b(\{v\})\bigr)
\le\max_{a\in\cA^{\{0\}}}\norm{\tilde F(a)}+b(\{0\})
<\infty\text.
\end{equation*}
The eigenvalue counting functions are good examples for these notions.
\begin{Proposition}[{\cite[Proposition~2]{LenzMV-08}}]
For almost all $\omega\in\Omega$ and with respect to the Banach space $(\B,\norm[\infty]\argmt)$ of right continuous bounded $\R$-valued functions on~$\R$, the eigenvalue counting functions~$n$ of the models given in \cref{sec:defsandmodels} are $\omega$-invariant, almost additive with boundary term $b(\Lambda):=4\setsize{\partial^1\Lambda}$, $D_b\le8d+4$, and bounded with bound~$C_{n(\argmt,\omega)}=1$.
\end{Proposition}
The following theorem thus applies to the eigenvalue counting functions of \cref{sec:defsandmodels}. We emphasize that the error estimates imply that the IDS is approximated by the eigenvalue counting functions uniformly in the energy.
\begin{Theorem}[{\cite[Theorem~1]{LenzMV-08}}]\label{thm:LenzMV}
Let $\cA$ be a finite set, $\omega\in\cA^G$ a coloring, $(\B,\norm\argmt)$ an arbitrary Banach space, $(Q_j)_{j\in\N}$ a F\o{}lner sequence, $F\from\cF\to\B$ a bounded, $\omega$-invariant, and almost additive field with bound~$C_F$, pattern function $\tilde F$, boundary term~$b$, and bound~$D_b$ of~$b$.
\par
Assume that for every finite pattern~$P\from\Lambda\to\cA$, $\Lambda\in\cF$, the frequency $\nu_P:=\lim_{j\to\infty}\setsize{Q_j}^{-1} \sharp_P(\omega|_{Q_j})$ exists. Then, the limits
\begin{equation}\label{eq:Fbar-F-Ftilde}
\Fbar
:=\lim_{j\to\infty}\frac{F(Q_j)}{\setsize{Q_j}}
=\lim_{L\to\infty}\sum_{P\in\cA^{\Lambda_L}}
\nu_P\frac{\tilde F(P)}{\setsize{\Lambda_L}}
\end{equation}
exist and are equal. Moreover, for all $j,L\in\N$, the bounds
\begin{equation}\label{eq:Fbar-Ftilde}
\biggnorm{\Fbar-\sum_{P\in\cA^{\Lambda_L}}
\nu_P\frac{\tilde F(P)}{\setsize{\Lambda_L}}}
\le\frac{b(\Lambda_L)}{\setsize{\Lambda_L}}
\end{equation}
and
\begin{multline}\label{eq:Ftilde-F}
\biggnorm{\frac{F(Q_j)}{\setsize{Q_j}}
-\sum_{P\in\cA^{\Lambda_L}}
\nu_P\frac{\tilde F(P)}{\setsize{\Lambda_L}}}\\
\le\frac{b(\Lambda_L)}{\setsize{\Lambda_L}}
+(C_F+D_b)\frac{\setsize{\partial^LQ_j}}{\setsize{Q_j}}
+C_F\sum_{P\in\cA^{\Lambda_L}}
\Bigabs{\frac{\sharp_P(\omega|_{Q_j})}{\setsize{Q_j}}-\nu_P}
\end{multline}
hold true.
\end{Theorem}
The error estimates are the crucial part of the theorem. We note that there are two length scales, indexed by~$j$ and~$L$, in the approximation of the limiting object~$\Fbar$. They correspond to two stages of approximation in the proof. In the first step, $\Fbar$ is compared to the weighted average of the contributions of the patterns on~$\Lambda_L$, the weights being the frequencies of the patterns. According to~\eqref{eq:Fbar-Ftilde}, the patterns on~$\Lambda_L$ capture the behavior of the limiting function up to an error of size $\setsize{\Lambda_L}^{-1}b(\Lambda_L)$. We mentioned above that $(\Lambda_L)_L$ is a F\o{}lner sequence, so by choosing~$L$ large enough, we can make this error term small.
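The second expression in~\eqref{eq:Fbar-F-Ftilde} can be turned into a small computation. As an illustration only (our own Python sketch; we choose site percolation on~$\Z$ without potential, Bernoulli parameter~$p$, and a simple restriction convention for the percolation Laplacian), the following code evaluates $\sum_{P\in\cA^{\Lambda_L}}\nu_P\tilde F(P)/\setsize{\Lambda_L}$ for the eigenvalue counting field on a box of length~$L$, which by~\eqref{eq:Fbar-Ftilde} approximates the limiting function up to $b(\Lambda_L)/\setsize{\Lambda_L}$.
\begin{verbatim}
import numpy as np
from itertools import product

ALPHA = 10.0   # value assigned to closed sites, as in the percolation Hamiltonian

def restricted_hamiltonian(pattern):
    # Percolation Laplacian on a 1D box: degree on the diagonal for open sites,
    # -1 between neighbouring open sites, and ALPHA on closed sites.
    L = len(pattern)
    H = np.zeros((L, L))
    for v in range(L):
        if pattern[v] == 0:
            H[v, v] = ALPHA
            continue
        for w in (v - 1, v + 1):
            if 0 <= w < L and pattern[w] == 1:
                H[v, v] += 1.0
                H[v, w] = -1.0
    return H

def pattern_average(L, p, energies):
    # Weighted average over all patterns on Lambda_L with Bernoulli weights nu_P.
    N = np.zeros(len(energies))
    for pattern in product((0, 1), repeat=L):
        nu = p ** sum(pattern) * (1 - p) ** (L - sum(pattern))
        ev = np.linalg.eigvalsh(restricted_hamiltonian(pattern))
        N += nu * np.array([np.count_nonzero(ev <= E) for E in energies]) / L
    return N

print(pattern_average(L=8, p=0.6, energies=[0.0, 1.0, 2.0]))
\end{verbatim}
The value at $E=0$ is strictly positive, in line with the finite-cluster discontinuity of the IDS discussed in \cref{sec:siteedge}.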
\Cref{eq:Ftilde-F} addresses the problem that we used the limiting frequencies in~\eqref{eq:Fbar-F-Ftilde} and~\eqref{eq:Fbar-Ftilde} instead of the actual number of occurrences of patterns in the finite region~$Q_j$. This error bound requires us to choose~$j$ so large that the empirical frequencies of all patterns on~$\Lambda_L$ get close to their actual frequencies. To recapitulate: First, we have to choose~$L$ large enough to make the patterns on~$\Lambda_L$ meaningful for the limit. Then we have to choose~$j$ large enough such that the patterns on~$\Lambda_L$ are actually observed in proportions that are close to the asymptotic frequencies. In the random Schr\"odinger operator settings from \cref{sec:defsandmodels}, the second error is controlled by the randomness. In fact, the ergodic theorem predicts that the relative number of occurrences of a pattern converges to its frequencies almost surely. It follows from the theory of large deviations that the probability of a fixed difference between the two is exponentially small for large samples, that is, for large~$j$. The frequencies $(\nu_P)_{P\in\cA^{\Lambda_L}}$ as well as their empirical counterparts $(\setsize{Q_j}^{-1}\sharp_P(\omega|_{Q_j}))_{P\in\cA^{\Lambda_L}}$ can be interpreted as a probability mass function on~$\cA^{\Lambda_L}$. The $1$-norm of their difference in~\eqref{eq:Ftilde-F} is also known as the \emph{total variation norm}\label{tv_mentioned} of the corresponding probability measures. In the remainder of this note, we present two generalizations which correspond to the two types of errors described above. The main property of the group~$G=\Z^d$ used in \cref{thm:LenzMV} is that it hosts F\o{}lner sequences. That this is actually the crucial property necessary for this proof is made explicit by generalizing the result to amenable groups, which are characterized by the existence of a F\o{}lner sequence, see \cref{sec:amenable}. The second generalization concerns the finiteness of the set of colors~$\cA$. The main obstacle to overcome this restriction is the probabilistic error. The total variation norm of the difference of the distribution and the empirical measure of a random variable does not converge to zero for continuous random variables. In order to allow infinitely many colors, we will be forced to exploit more properties of the eigenvalue counting functions, see \cref{sec:infiniteA}. \section{Ergodic theorems for finite colors on amenable groups}\label{sec:amenable} In this section we discuss how the above ideas generalize to less restricted geometries. In particular, we present Banach space-valued ergodic theorems for Cayley graphs generated by amenable groups. Let us emphasize that the groups considered in this paper will always be finitely generated and therefore countable. An amenable group is by definition a group containing subsets with an arbitrary small ratio between boundary and volume. It is well known, that amenability is equivalent to the existence of a F\o{}lner sequence. It turns out that even though amenability is the natural condition to generalize the geometry of $\Z^d$, there is an additional requirement needed to almost directly implement the $\Z^d$-methods to this setting. Here we are speaking about a so-called tiling condition, namely the condition that there exists a F\o{}lner sequence consisting of monotiles. Obviously, in~$\Z^d$ a sequence of cubes serves as such a sequence, since for each~$j$ the group~$\Z^d$ can be tiled with cubes of side length~$j$. 
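F\o{}lner sequences can also be inspected numerically beyond~$\Z^d$. The following Python sketch is our own toy example and is not taken from the cited works: it performs a breadth-first search in the Cayley graph of the discrete Heisenberg group, a non-abelian amenable group of polynomial growth, and prints the two-sided $1$-boundary-to-volume ratio of word-metric balls; for groups of polynomial growth the balls are known to form a F\o{}lner sequence, and the ratio indeed decreases.
\begin{verbatim}
# Discrete Heisenberg group: elements (a, b, c) with product
# (a, b, c) * (a', b', c') = (a + a', b + b', c + c' + a * b').
GENS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def mul(g, h):
    return (g[0] + h[0], g[1] + h[1], g[2] + h[2] + g[0] * h[1])

def ball(n):
    # Word-metric ball of radius n around the identity (breadth-first search;
    # the neighbours of v are the products vs with generators s).
    sphere, B = {(0, 0, 0)}, {(0, 0, 0)}
    for _ in range(n):
        sphere = {mul(g, s) for g in sphere for s in GENS} - B
        B |= sphere
    return B

for n in range(2, 9):
    B, B_next = ball(n), ball(n + 1)
    outer = B_next - B                                    # outside, distance n + 1
    inner = {g for g in B if any(mul(g, s) not in B for s in GENS)}
    print(n, len(B), round((len(outer) + len(inner)) / len(B), 3))
\end{verbatim}
Note that the sketch says nothing about tilings: the balls above need not tile the group, which is exactly the issue addressed next.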
For an arbitrary amenable group it is not known whether such a sequence exists or not. Therefore, this section is structured into a first part discussing the monotile situation and a second, more involved part dealing with general amenable groups using the technique of quasi tilings. We proceed with some definitions which generalize the notions of the previous sections. Given a finitely generated group $G$ with a finite and symmetric generating set $S\subseteq G$, i.\,e.\ $G=\langle S\rangle$, $\setsize S<\infty$ and $S=S^{-1}\not\ni \mbox{id}$, the corresponding Cayley graph~$\Gamma$ has vertex set~$G$ and two vertices $v,w\in G$ are connected if and only if $vs=w$ for some $s\in S$. The induced graph distance, sometimes called word metric, is denoted by~$\dist$. A sequence $(Q_j)$ of finite subsets of~$G$ is called a F\o{}lner sequence if $\setsize{\partial^r Q_j}/\setsize{Q_j}\to 0$ as $j\to\infty$ for all $r>0$, see \eqref{eq:folner}. Here the $r$-boundary is given as in \eqref{eq:rboundary}.
For an amenable group~$G$, the introduction of the physical models of \cref{sec:Anderson_Zd,sec:percolation} works completely analogously. Due to the fact that for general groups the group action is usually written multiplicatively, the only difference is that the group action $\tilde\tau$ is defined here as
\begin{equation}\label{def:tau}
\tilde\tau \from G\times G \to G ,\quad (g,v)\mapsto \tilde\tau_gv := vg^{-1}.
\end{equation}
As in \eqref{eq:schroedinger} the Schr\"odinger operator in the Anderson model is given by
\[
H_\omega:=-\Laplace+V_\omega\from\ell^2(G)\to\ell^2(G)
\]
with Laplace operator $\Delta\from\ell^2 (G)\to\ell^2(G)$ as in~\eqref{eq:laplaceop} and a (random) potential $V\from\Omega\times G\to \cA$ as in~\eqref{eq:randompotential}. As before we consider the probability space $(\Omega,\cB,\Prob) :=(\cA^G,\Borel(\cA)^{\tensor G},\Prob_0^{\tensor G})$ with a finite set $\cA$. An element $\omega\in\Omega$ is then interpreted as a coloring of the elements of the group using the finite set of colors~$\cA$. Besides this, also the definition of site and edge percolation does not depend on the $\Z^d$-structure and generalizes straightforwardly to the case of amenable groups. Thereby the Anderson model on percolation Cayley graphs over finitely generated amenable groups is well-defined.
\subsection{Symmetrical tiling condition}
As mentioned before, an additional condition is needed in order to implement the methods of~$\Z^d$ to amenable groups. This so-called symmetric tiling condition is formulated as follows.
\begin{Definition}\label{def:STamenable}
Let $G$ be a group. A subset $\Lambda\subseteq G$ \emph{symmetrically tiles} $G$ if there exists a set $T$ such that
\begin{enumerate}[(i)]
\item $T=T^{-1}$,
\item $\Lambda T = G$,
\item the sets $\Lambda t$, $t\in T$, are pairwise disjoint.
\end{enumerate}
An amenable group $G$ is said to satisfy the \emph{symmetric tiling condition}, if there exists a F\o{}lner sequence $(\Lambda_L)$ such that each $\Lambda_L$, $L\in \N$, symmetrically tiles $G$. In this situation we call $G$ an \emph{ST-amenable group}.
\end{Definition}
Note that here $\Lambda T:=\{xt \mid x\in \Lambda, t\in T\}$ and $\Lambda t=\{xt\mid x\in \Lambda\}$. Let us briefly remark on this condition. Krieger proved in \cite{Krieger-07}, based on work of Weiss \cite{Weiss-01}, that an amenable group satisfies the symmetrical tiling condition if it is residually finite. For instance, each group of polynomial volume growth is virtually nilpotent (by Gromov's theorem) and thus residually finite.
Since it is also of subexponential growth, it is amenable, too, and hence ST-amenable.
An intensively studied and slightly more general condition than the one stated in \cref{def:STamenable} can be obtained when not assuming the symmetry (i). In this situation a set $Q$ satisfying (ii) and (iii) is usually referred to as a \emph{monotile} and a group $G$ containing a F\o{}lner sequence consisting only of monotiles is called \emph{monotileable}. Let us remark that it is still not known if there exists an amenable group which is not monotileable.
In the situation of ST-amenable groups one can prove the following:
\begin{Theorem}[{\cite{LenzSV-10,LenzSV-12}}]\label{thm:LenzSV}
Let $G$ be a finitely generated ST-amenable group. Let~$(Q_j)$ and~$(\Lambda_L)$ be F\o{}lner sequences such that each $\Lambda_L$, $L\in\N$, symmetrically tiles~$G$. Let $\cA$ be finite and $\omega\in \cA^G$ be given. Moreover, let $(\B,\norm\argmt)$ be a Banach space and $F\from\cF\to\B$ a bounded, $\omega$-invariant and almost additive field with bound~$C_F$, pattern function $\tilde F$, boundary term~$b$, and bound~$D_b$ of~$b$. As before we use the notation $\cF=\{A\subseteq G\mid A \text{ finite}\}$.
Assume that for every finite pattern~$P\from\Lambda\to\cA$, $\Lambda\in\cF$, the frequency $\nu_P:=\lim_{j\to\infty}\setsize{Q_j}^{-1} \sharp_P(\omega|_{Q_j})$ exists. Then, the limits
\begin{equation}\label{eq:Fbar-F-Ftilde.amenable}
\Fbar
:=\lim_{j\to\infty}\frac{F(Q_j)}{\setsize{Q_j}}
=\lim_{L\to\infty}\sum_{P\in\cA^{\Lambda_L}}
\nu_P\frac{\tilde F(P)}{\setsize{\Lambda_L}}
\end{equation}
exist and are equal. Moreover, for all $j,L\in\N$, the bound
\begin{multline}\label{eq:Ftilde-F.amenable}
\biggnorm{\frac{F(Q_j)}{\setsize{Q_j}}
-\sum_{P\in\cA^{\Lambda_L}}
\nu_P\frac{\tilde F(P)}{\setsize{\Lambda_L}}}\\
\le\frac{b(\Lambda_L)}{\setsize{\Lambda_L}}
+(C_F+D_b)\frac{\setsize{\partial^{\diam(\Lambda_L)}Q_j}}{\setsize{Q_j}}
+C_F\sum_{P\in\cA^{\Lambda_L}}
\Bigabs{\frac{\sharp_P(\omega|_{Q_j})}{\setsize{Q_j}}-\nu_P}
\end{multline}
holds true. Here, $\diam(\Lambda_L)=\max\{\dist(x,y)\mid x,y\in \Lambda_L\}$ is the diameter of the finite set~$\Lambda_L$.
\end{Theorem}
When comparing \cref{thm:LenzMV} and \cref{thm:LenzSV}, it turns out that the transition from $\Z^d$ to ST-amenable groups does not imply a quantitative difference in the strength of the result. In particular, the only difference is that in the $\Z^d$ setting $(\Lambda_L)$ is a sequence of cubes with side length $L$ (which naturally tile $\Z^d$) and in the ST-amenable group setting $(\Lambda_L)$ is a F\o{}lner sequence assumed to be symmetrically tiling. In the error bound this results in the substitution of the side length $L$ of a cube by the diameter of $\Lambda_L$.
In the following we give a sufficient condition for the existence of the frequencies in the situation of a random coloring. Here we need the notion of a \emph{tempered} F\o{}lner sequence, which is a F\o{}lner sequence $(Q_j)$ with the additional property that there is some $C>0$ such that
\[
\Biggl|\bigcup_{k<j}Q_k^{-1}Q_j\Biggr| \leq C \setsize{Q_j}
\]
for all $j\in\N$. Each F\o{}lner sequence has a tempered subsequence, see \cite{Lindenstrauss-01}.
\begin{Theorem}
Let~$G$ be a finitely generated amenable group, $\cA$ some finite set and let~$\mu$ be a probability measure on $(\Omega,\cB) =(\cA^G,\Borel(\cA)^{\tensor G})$. We assume that the action of~$G$ on~$\Omega$ induced by $\tilde\tau$ given via~\eqref{def:tau} is measure preserving and ergodic w.\,r.\,t.~$\mu$.
Then, for any tempered F\o{}lner sequence $(Q_j)$, there exists an event $\tilde\Omega$ of full measure, such that the limit
\begin{equation*}
\lim_{j\to\infty}\frac{\sharp_P(\omega|_{Q_j})}{\abs{Q_j}}
\end{equation*}
exists for all patterns $P\in\bigcup_{Q\in\cF}\cA^{Q}$ and all $\omega\in\tilde\Omega$. Moreover, the limit is deterministic in the sense that it is independent of the specific choice of~$\omega\in\tilde\Omega$.
\end{Theorem}
The above result is a direct consequence (see for instance \cite{Schwarzenberger-08}) of the Lindenstrauss ergodic theorem \cite{Lindenstrauss-01}. We omit the precise definitions of measure preserving and ergodic action. However, we want to emphasize that in the particular case where $\mu=\Prob_0^{\tensor G}$ is a product measure, the assumptions on~$\tilde\tau$ are met. Thus, in the percolation setting of \cref{sec:percolation} one obtains that almost all configurations satisfy the assumption of well-defined frequencies.
\subsection{General amenable groups}\label{sec:general}
As outlined before, it is still not known if each amenable group satisfies the symmetrical tiling condition. Roughly speaking, in the previous sections this tiling condition is the crucial tool to mediate between the two F\o{}lner sequences $(\Lambda_L)$ and $(Q_j)$. Thus, when considering amenable groups without an additional tiling assumption the situation is far more challenging. A way to overcome this lack is to apply the theory of $\epsilon$-quasi tilings developed by Ornstein and Weiss \cite{OrnsteinW-87} in 1987, see also \cite{PogorzelskiS-16} for the quantitative estimates used in the present setting. The key idea here is to soften the condition of a \emph{perfect} tiling with copies of \emph{one} set taken from a F\o{}lner sequence, and rather
\begin{itemize}
\item use finitely many different sets of the F\o{}lner sequence, and
\item allow imperfectness in the tiling (in the sense of small \emph{overlaps} and \emph{uncovered areas}).
\end{itemize}
More precisely we use the following definition:
\begin{Definition}\label{def:quasitiling}
Let $G$ be a finitely generated group, $Q\subseteq G$ a finite set and $\epsilon>0$. We say that $K_1,\dots,K_N\subseteq G$ with center sets $T_1,\dots, T_N$, short $(K_i,T_i)_{i=1}^N$, are an \emph{$\epsilon$-quasi tiling} of~$Q$ if
\begin{enumerate}[(i)]
\item the sets $K_iT_i$, $i\in\{1,\dotsc,N\}$, are pairwise disjoint and subsets of $Q$;
\item $\setsize{Q\setminus \bigcup_{i=1}^N K_iT_i} \leq 2\epsilon \setsize{Q}$;
\item there are subsets $\mathring{K_i}\subseteq K_i$, $i\in\{1,\dotsc,N\}$, such that
\begin{itemize}
\item for each $i$ the sets $\mathring{K_i} t$, $t\in T_i$, are pairwise disjoint, and
\item $\setsize{K_i\setminus \mathring{K_i}}\leq \epsilon \setsize{K_i}$.
\end{itemize}
\end{enumerate}
\end{Definition}
For technical reasons, the sequence $(\Lambda_L)$ which will provide the F\o{}lner sets $K_i$ to quasi tile the group is assumed to be \emph{nested}, i.\,e.\ for each $L\in\N$ we have $\id \in \Lambda_L\subseteq \Lambda_{L+1}$. Note that, starting from an arbitrary F\o{}lner sequence, one can construct a nested F\o{}lner sequence by translating elements of an appropriate subsequence.
In the following it turns out to be convenient to define for given $\epsilon\in(0,1)$ and $i\in\N_0$ the numbers $N(\epsilon)$ and $\eta_i(\epsilon)$ by
\begin{equation}\label{eq:def:N,eta}
N(\epsilon):=\biggceil{\frac{\ln(\epsilon)}{\ln(1-\epsilon)}}
\qtextq{and}
\eta_i(\epsilon):=\epsilon(1-\epsilon)^{N(\epsilon)-i}
\text.
\end{equation} As usual we use the Gau\ss{}ian bracket notation $\ceil b:=\inf\{z\in\Z\mid z\ge b\}$. The following theorem shows that~$N(\epsilon)$ is the number of required shapes~$K_i$ in order to $\epsilon$-quasi tile a set~$Q$. Moreover, for fixed~$i$ the~$\eta_i(\epsilon)$ can be interpreted as the ratio of the points covered by copies of~$K_i$ in the $\epsilon$-quasi tiling. \begin{Theorem}\label{thm:STP} Let~$G$ be a finitely generated amenable group, $(\Lambda_L)$ a nested F{\o}lner sequence, and $\epsilon\in\Ioo0{0.1}$. Then there is a finite and strictly increasing selection of sets $K_i\in\{\Lambda_L\mid L\in\N\}$, $i\in\{1,\dotsc,N(\epsilon)\}$, with the following property: For each F{\o}lner sequence~$(Q_j)$, there exists $j_0(\epsilon)\in\N$ satisfying that for all $j\geq j_0(\epsilon)$ there exists sets~$T_1^j, \ldots, T_{N(\epsilon)}^j$ such that $(K_i,T_i^j)_{i=1}^{N(\epsilon)}$ is an $\epsilon$-quasi tiling of~$Q_j$. Moreover, for all $j\ge j_0(\ve)$ and all $i\in\{1,\dots,N(\epsilon)\}$, the proportion of~$Q_j$ covered by the tile~$K_i$ satisfies \begin{equation}\label{eq:etdensity} \biggabs{\frac{\setsize{K_iT_i^j}}{\setsize{Q_j}}-\eta_i(\ve)} \le\frac{\ve^2}{N(\ve)}\text. \end{equation} \end{Theorem} The proof of \cref{thm:STP} is to be found in \cite{PogorzelskiS-16}. Although \cref{thm:STP} provides all the elements used to formulate the desired ergodic theorem for general amenable groups (\cref{thm:PSerg}), a far more involved result is applied in the proof. More precisely, in the proof of \cref{thm:PSerg} one does not only need \emph{one} possibility to quasi tile a given set $Q_j$ with $K_1,\dots,K_{N(\epsilon)}$, but one rather needs a bunch of \emph{different} possibilities to quasi tile $Q_j$ with $K_1,\dots,K_{N(\epsilon)}$. These different tilings of $Q_j$ need to be chosen such that for (almost) all $g\in Q_j$ the frequency (over different tilings) that it is covered by one $K_i$ is up to a small error $\eta_i(\epsilon)/|K_i|$. In this sense, this covering result is referred to as uniform $\epsilon$-quasi tiling. We refer to \cite{PogorzelskiS-16} for details. Let us formulate the ergodic theorem based on the $\epsilon$-quasi tiling results. \begin{Theorem}\label{thm:PSerg} Assume: \begin{itemize} \item $G$ is a finitely generated amenable group. \item $\cA$ is a finite set and $\omega\in \cA^G$. \item $(Q_j)$ is a F\o{}lner sequence such that the frequency $\nu_P:=\lim_{j\to\infty}\setsize{Q_j}^{-1} \sharp_P(\omega|_{Q_j})$ exists for every finite pattern~$P\from\Lambda\to\cA$, $\Lambda\in\cF$. \item $(\Lambda_L)$ is a nested F\o{}lner sequence. \item For given $\epsilon\in(0,\tfrac1{10})$ the sets $K_i$, $i\in\{1,\dots,N(\epsilon)\}$ are chosen according to \cref{thm:STP}. \item $(\B,\norm\argmt)$ is a Banach space and $F\from\cF\to\B$ is $\omega$-invariant, and almost additive with bound~$C_F$, pattern function $\tilde F$, boundary term~$b$, and bound~$D_b$ of~$b$. \end{itemize} Then, the limits \begin{equation} \Fbar:= \lim_{j\to\infty}\frac{F(Q_j)}{\abs{Q_j}} = \lim_{\substack{\epsilon \searrow 0 \\ \epsilon<0.1}} \sum_{i=1}^{N(\epsilon)} \eta_i(\epsilon) \sum_{P\in \cA^{K_i}} \nu_P \frac{\tilde F(P)}{\abs{K_i}} \end{equation} exist and are equal. 
Moreover, for given $\epsilon\in(0,\tfrac1{10})$ there exist some $j(\epsilon),r(\epsilon)\in\N$ such that for all $j\geq j(\epsilon)$ the bound
\begin{multline}\label{eq:Ftilde-F.amenable.gen}
\biggnorm{ \frac{F(Q_j)}{\abs{Q_j}} - \sum_{i=1}^{N(\epsilon)} \eta_i(\epsilon) \sum_{P\in \cA^{K_i}} \nu_P \frac{\tilde F(P)}{\abs{K_i}}}\\
\le 4 \sum_{i=1}^{N(\epsilon)} \eta_i(\epsilon) \frac{b(K_i)}{|K_i|}
+(C_F+4D_b)\frac{\abs{\partial^{r(\epsilon)}Q_j}}{\abs{Q_j}}\sum_{i=1}^{N(\epsilon)} \abs{K_i} \\
+ C_F \sum_{i=1}^{N(\epsilon)} \eta_i(\epsilon) \sum_{P \in \cA^{K_i}} \left| \frac{\sharp_P(\omega|_{Q_j})}{|Q_j|} - \nu_P \right| + (11C_F+32D_b) \epsilon
\end{multline}
holds true.
\end{Theorem}
When comparing the estimate with the one in \cref{thm:LenzSV}, note that the difference \eqref{eq:Ftilde-F.amenable.gen} gets small if one first takes the limit ${j\to\infty}$ and afterwards ${\epsilon\searrow 0}$. Here the limit $\epsilon\searrow0$ corresponds to $L\to\infty$ in the previous setting. For a detailed discussion why the error terms tend to zero we refer to \cite{PogorzelskiS-16}. We confine ourselves to the (rough) statement that the first three terms in the estimate \eqref{eq:Ftilde-F.amenable.gen} correspond in this ordering to the three terms in \eqref{eq:Ftilde-F.amenable} or \eqref{eq:Ftilde-F}, respectively.
\section{Glivenko--Cantelli type theorems}\label{sec:infiniteA}
In this section we will consider the situation that the set of colors~$\cA$ is no longer finite. This means that we are leaving a combinatorial setting and relying on a probabilistic framework instead. This has already been introduced for a number of examples in \cref{sec:Anderson_Zd,sec:percolation}. In particular, we will need some independence and monotonicity assumptions. The monotonicity property will allow us to smooth out and regularize certain quantities which we otherwise do not know how to estimate. The independence assumption gives us a tool to describe the existence of frequencies used in Section \ref{sec:finiteA} in a constructive and quantitative manner.
We reformulate the notions of \cref{sec:finiteA} involving fields $F\from\cF\to\B$ with values in an arbitrary Banach space~$\B$ for fields of the form $f\from\cF\times\Omega\to\B$ with a probability space~$\Omega$ and the specific Banach space of right continuous functions with $\sup$-norm. For easy distinction, we denote the latter with lowercase letters. Almost additivity is assumed to hold true uniformly on~$\Omega$. The property of $\omega$-invariance for fixed $\omega\in\Omega$ is split up into equivariance and locality.
\begin{Definition}\label{def:admissible}
A field $f\from\cF\times\Omega\to\B$ is
\begin{itemize}
\item \emph{almost additive}, if there is a boundary term $b\from\cF\to\Ico0\infty$ such that for all $\omega\in\Omega$, pairwise disjoint $\Lambda_1,\dots,\Lambda_n\in\cF$, and $\Lambda:=\bigcup_{i=1}^n\Lambda_i$, we have
\begin{equation*}
\Bignorm{f(\Lambda,\omega)-\sum_{i=1}^nf(\Lambda_i,\omega)}
\le\sum_{i=1}^nb(\Lambda_i)\text.
\end{equation*}
\item \emph{equivariant}, if for $\Lambda\in\cF$, $g\in G$ and $\omega\in\Omega$ we have $f(\tilde\tau_g\Lambda,\omega)=f(\Lambda,\tau_g\omega)$.
\item \emph{local}, if for all $\Lambda\in\cF$ and $\omega,\omega'\in\Omega$, $\omega_\Lambda=\omega'_\Lambda$ implies $f(\Lambda,\omega)=f(\Lambda,\omega')$.
\item \emph{bounded}, if $ \sup_{\omega\in\Omega}\norm{f(\{\id\},\omega)} <\infty\text.
$ \end{itemize} \end{Definition} \subsection{Glivenko--Cantelli theory} To simplify the motivation, we first consider $G=\Z^d$ and assume that the probability measure~$\Prob=\Prob_0^{\tensor G}$ on~$\Omega$ is a product measure. This implies in particular that, for every local field $f\from\cF\times\Omega\to\B$, the random variables $\omega\mapsto f(\{0\},\tau_g\omega)$, $g\in G$, are independent. Here, we used that for local~$f$, in particular, $f(\{0\},\omega)$ depends only on~$\omega_0$ and not on~$\omega_{G\setminus\{0\}}$. To explain the relation of our methods to Glivenko--Cantelli theory, assume for the moment that $f\from\cF\times\Omega\to\B$ is not only almost additive but \emph{exactly} additive. That means that for a finite subset $\Lambda\in\cF$ of~$G$ and a realization of colors $\omega\in\Omega=\cA^G$, we can split $f(\Lambda,\omega)$ without any errors into a sum over singleton sets \begin{equation*} f(\Lambda,\omega) =\sum_{v\in\Lambda}f(\{v\},\omega)\text. \end{equation*} By equivariance, we can rewrite $f(\{v\},\omega)=f(\{0\},\tau_v^{-1}\omega)$. For the special case $\B=\R$, the law of large numbers allows us to calculate the thermodynamic limit \begin{equation*} \lim_{L\to\infty}\frac{f(\Lambda_L,\omega)}{\setsize{\Lambda_L}} =\lim_{L\to\infty}\frac1{\setsize{\Lambda_L}} \sum_{v\in\Lambda_L}f(\{0\},\tau_v^{-1}\omega) =\E[f(\{0\},\argmt)] \end{equation*} $\Prob$-almost surely. Of course, in view of our examples in \cref{sec:defsandmodels}, we are more interested in the Banach space~$\B$ of right continuous functions from~$\R$ to~$\R$ with $\sup$-norm. To head in this direction it is advantageous to lift the point of view to the empirical probability $\widehat\Prob_\Lambda^\omega :=\setsize\Lambda^{-1}\sum_{v\in\Lambda}\delta_{\omega_v}$ and to write integration as dual pair $\spr\argmt\argmt$. In this notation, the average from above can be written as \begin{equation*} \setsize\Lambda^{-1}f(\Lambda,\omega) =\spr{f(\{0\},\argmt)}{\widehat\Prob_\Lambda^\omega} \text. \end{equation*} Here is a special case, where a theorem from classical probability theory helps. Assume that $\cA\in\Borel(\R)$, and let $f\from\cF\times\Omega\to\B$ count the number of random variables in~$\Lambda$ with value less than a given threshold $E\in\R$: \begin{equation*} f(\Lambda,\omega)(E) :=\sum_{v\in\Lambda}\ifu{\Ico{\omega_v}\infty}(E) =\sum_{v\in\Lambda}\ifu{\Ioc{-\infty}E}(\omega_v) =\setsize\Lambda\spr{\ifu{\Ioc{-\infty}E}}{\widehat\Prob_\Lambda^\omega}\text. \end{equation*} This defines an additive, local, and equivariant field, which can be interpreted as the (not normalized) empirical distribution function of the sample~% $(\omega_v)_{v\in\Lambda}$. The thermodynamic limit in~$\B$, i.\,e.\ uniformly, is $\Prob$-almost surely \begin{equation*} \lim_{L\to\infty}\setsize{\Lambda_L}^{-1}f(\Lambda_L,\omega) =\lim_{L\to\infty} \spr{\ifu{\Ioc{-\infty}\argmt}}{\widehat\Prob_\Lambda^\omega} =\spr{\ifu{\Ioc{-\infty}\argmt}}{\Prob_0} =\Prob(\omega_0\le\argmt) \end{equation*} by the theorem of Glivenko and Cantelli: \begin{Theorem}[\cite{Glivenko-33,Cantelli-33}]\label{thm:GC} Let $V_j$, $j\in\N$, be real valued, independent and identically distributed random variables on $(\Omega,\Borel,\Prob)$ and $\widehat\Prob_n:=\frac1n\sum_{j=1}^n\delta_{V_j}$ the corresponding empirical distribution. 
Then there exists an event $\Omega_{\mathrm{unif}}$ with probability $\Prob(\Omega_{\mathrm{unif}})=1$ such that for all $\omega\in\Omega_{\mathrm{unif}}$:
\begin{equation*}
\sup_{E\in\R}\abs{\spr{\ifu{\Ioc{-\infty}E}}{\widehat\Prob_n(\omega)-\Prob}}
\xto{n\to\infty}0\text.
\end{equation*}
\end{Theorem}
We learn that the choice of this Banach space~$\B$ amounts to proving the convergence of~$\widehat\Prob_\Lambda^\omega$ to~$\Prob_0$ with respect to a supremum over appropriate test functions. The route pursued in \cref{thm:LenzMV,thm:LenzSV,thm:PSerg} for finite alphabets corresponds to the estimate
\begin{equation*}
\abs{\spr g{\widehat\Prob_\Lambda^\omega-\Prob_0}}
\le\norm[\infty]g\TVnorm{\widehat\Prob_\Lambda^\omega-\Prob_0}
\end{equation*}
with $g:=f(\{0\},\argmt)$. But, as the next example shows, for random variables with a continuous distribution, the difference does not converge to zero in total variation.
\begin{Example}
Let $\cA=\Icc01$ and~$X_n$, $n\in\N$, be real valued i.\,i.\,d.\ random variables, distributed uniformly on~$\cA$. The empirical distribution $\widehat\Prob_n:=n^{-1}\sum_{j=1}^n\delta_{X_j}$ is an atomic measure on~$\cA$, while the uniform distribution on~$\cA$ is absolutely continuous with respect to Lebesgue measure. The TV-norm of their difference does not vanish for $n\to\infty$:
\begin{equation*}
\TVnorm{\widehat\Prob_n-\Prob}
=\sup_{A\in\Borel(\cA)}\abs{\widehat\Prob_n(A)-\Prob(A)}
\ge1\text,
\end{equation*}
as the set $A:=\{X_n\mid n\in\N\}$ shows.
\end{Example}
We have to follow a different path. Assume again $\cA\in\Borel(\R)$. Let us abbreviate the difference of the cumulative distribution functions by
\begin{equation*}
F_\Lambda(E)
:=\spr{\ifu{\Ioc{-\infty}E}}{\widehat\Prob_\Lambda^\omega-\Prob_0}\textq,
E\in\R\text,
\end{equation*}
and assume that~$g=f(\{0\},\argmt)$ has bounded variation, or more specifically that it is monotone. Then, we can perform the following partial integration with Riemann--Stieltjes integrals:
\begin{align*}
\abs{\spr g{\widehat\Prob_\Lambda^\omega-\Prob_0}_{\cA}}&
=\Bigabs{\int_\cA g(E)\dd F_\Lambda(E)}
=\Bigabs{-\int_\cA F_\Lambda(E)\dd g(E)}\\&
\le\norm[\infty]{F_\Lambda}\int_\cA\d\abs g(E)
=\TVnorm g\cdot\sup_{E\in\R}
\abs{\spr{\ifu{\Ioc{-\infty}E}}{\widehat\Prob_\Lambda^\omega-\Prob_0}_\cA}
\text.
\end{align*}
This calculation generalizes the theorem by Glivenko and Cantelli to bounded monotone functions: For $M>0$, we have
\begin{equation*}
\sup\{\abs{\spr g{\widehat\Prob_{\Lambda_L}-\Prob_0}}
\mid\text{$g\from\R\to\Icc{-M}M$ monotone}\}
\xto{L\to\infty}0\text.
\end{equation*}
In order to deal with fields that are only \emph{almost} additive, we have to treat patterns of all finite sizes and not only singletons. Each pattern corresponds to a multivariate random variable. This means that we require a multivariate version of Glivenko--Cantelli theory. To formulate this, we introduce a multivariate version of the empirical measure: For a given (large) set $\Lambda_j$, a smaller set $\Lambda_L$, a grid $T_{j,L}$, and a coloring $\omega\in \cA^G$ we define the empirical measure by
\[
\widehat \Prob_{j,L}^\omega :\cB(\cA^{\Lambda_L}) \to [0,1],\qquad \widehat\Prob_{j,L}^\omega:=\frac{1}{|T_{j,L}|}\sum_{t\in T_{j,L}} \delta_{(\tau_t\omega)_{\Lambda_L}}.
\]
Here, the grid $T_{j,L}$ is a set of basepoints used to (almost) cover $\Lambda_j$ with translated versions of $\Lambda_L$ along $T_{j,L}$. An illustration of this is to be found in Figure \ref{fig:emp.measure}.
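A minimal Python sketch of this empirical measure (ours; $\Z^2$ only, colors $\{0,1\}$, and a grid consisting of disjoint translates of a square~$\Lambda_L$) reads off the $\Lambda_L$-patterns along the grid and compares the weight of one fixed pattern with its probability under a Bernoulli product measure.
\begin{verbatim}
import numpy as np

def empirical_measure(omega, L):
    # Empirical distribution of the L x L patterns read off along a disjoint grid
    # of translates of Lambda_L inside the box carrying omega.
    j = omega.shape[0]
    grid = [(x, y) for x in range(0, j - L + 1, L) for y in range(0, j - L + 1, L)]
    counts = {}
    for (x, y) in grid:
        P = tuple(omega[x:x + L, y:y + L].flatten())
        counts[P] = counts.get(P, 0) + 1
    return {P: c / len(grid) for P, c in counts.items()}

rng = np.random.default_rng(1)
omega = rng.integers(0, 2, size=(40, 40))      # Bernoulli(1/2) colors on a 40 x 40 box
emp = empirical_measure(omega, L=2)
target = (0, 0, 0, 0)                          # the all-zero 2 x 2 pattern
print(emp.get(target, 0.0), 0.5 ** 4)          # empirical weight vs. probability 1/16
\end{verbatim}
In the situation of \cref{fig:emp.measure}, with a $9\times9$ box and $L=2$, the same procedure uses the $16$ grid positions and returns the value $3/16$ quoted in the caption there.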
Let us emphasize that the illustration serves well in the $\Z^d$-case or in the ST-amenable case. However, for general amenable groups there is usually no grid $T_{j,L}$ for a (perfect) covering a set with \emph{one} set. In this case one can still use the above definition of the empirical measure, but, as in Section \ref{sec:general}, one needs to implement the technique of $\epsilon$-quasi tilings. Moreover, let use emphasize that counting patterns in the empirical measure along a grid is substantially different from counting patterns in the definition of frequencies in Section \ref{sec:patternsfrequencies}, see the definition of $\sharp_PP'$ in \eqref{eq:patterncount} and compare Figure \ref{fig:patterncount} with Figure \ref{fig:emp.measure}. \begin{figure} \begin{tikzpicture}[scale=0.6] { {\color{black} { \foreach \y in {0,1,...,8}{ \draw (1,\y) --(9,\y); } \foreach \x in {1,2,...,9}{ \draw (\x,0) --(\x,8); } \foreach \x in {2,4,6,8}{ \draw[dotted, thick] (\x+0.5,-0.5) -- (\x+0.5,8.5); \draw[dotted, thick] (0.5,\x-0.5) -- (9.5,\x-0.5); } \filldraw[blue] (8,2) circle (4pt); \filldraw[blue] (7,8) circle (4pt); \filldraw[blue] (6,5) circle (4pt); \filldraw[blue] (7,2) circle (4pt); \filldraw[blue] (6,8) circle (4pt); \filldraw[red] (1,1) circle (4pt); \filldraw[green] (1,2) circle (4pt); \filldraw[green] (1,3) circle (4pt); \filldraw[red] (1,4) circle (4pt); \filldraw[red] (1,5) circle (4pt); \filldraw[red] (1,6) circle (4pt); \filldraw[green] (1,7) circle (4pt); \filldraw[green] (1,8) circle (4pt); \filldraw[red] (2,1) circle (4pt); \filldraw[green] (2,1) circle (4pt); \filldraw[red] (2,2) circle (4pt); \filldraw[green] (2,3) circle (4pt); \filldraw[blue] (2,4) circle (4pt); \filldraw[blue] (2,5) circle (4pt); \filldraw[red] (2,6) circle (4pt); \filldraw[red] (2,7) circle (4pt); \filldraw[green] (2,8) circle (4pt); \filldraw[green] (3,1) circle (4pt); \filldraw[red] (3,2) circle (4pt); \filldraw[red] (3,3) circle (4pt); \filldraw[red] (3,4) circle (4pt); \filldraw[green] (3,5) circle (4pt); \filldraw[red] (3,6) circle (4pt); \filldraw[blue] (3,7) circle (4pt); \filldraw[green] (3,8) circle (4pt); \filldraw[blue] (4,1) circle (4pt); \filldraw[green] (4,2) circle (4pt); \filldraw[green] (4,3) circle (4pt); \filldraw[green] (4,4) circle (4pt); \filldraw[blue] (4,5) circle (4pt); \filldraw[blue] (4,6) circle (4pt); \filldraw[blue] (4,7) circle (4pt); \filldraw[blue] (4,8) circle (4pt); \filldraw[red] (5,1) circle (4pt); \filldraw[blue] (5,2) circle (4pt); \filldraw[red] (5,3) circle (4pt); \filldraw[green] (5,4) circle (4pt); \filldraw[green] (5,5) circle (4pt); \filldraw[blue] (5,6) circle (4pt); \filldraw[blue] (5,7) circle (4pt); \filldraw[blue] (5,8) circle (4pt); \filldraw[red] (6,1) circle (4pt); \filldraw[red] (6,2) circle (4pt); \filldraw[green] (6,3) circle (4pt); \filldraw[red] (6,4) circle (4pt); \filldraw[green] (6,5) circle (4pt); \filldraw[red] (6,6) circle (4pt); \filldraw[red] (6,7) circle (4pt); \filldraw[green] (6,8) circle (4pt); \filldraw[green] (7,1) circle (4pt); \filldraw[blue] (7,2) circle (4pt); \filldraw[red] (7,3) circle (4pt); \filldraw[red] (7,4) circle (4pt); \filldraw[green] (7,5) circle (4pt); \filldraw[red] (7,6) circle (4pt); \filldraw[green] (7,7) circle (4pt); \filldraw[green] (7,8) circle (4pt); \filldraw[green] (8,1) circle (4pt); \filldraw[blue] (8,2) circle (4pt); \filldraw[green] (8,3) circle (4pt); \filldraw[green] (8,4) circle (4pt); \filldraw[blue] (8,5) circle (4pt); \filldraw[blue] (8,6) circle (4pt); \filldraw[red] (8,7) circle (4pt); 
\filldraw[green] (8,8) circle (4pt); \filldraw[red] (9,1) circle (4pt); \filldraw[blue] (9,2) circle (4pt); \filldraw[red] (9,3) circle (4pt); \filldraw[green] (9,4) circle (4pt); \filldraw[blue] (9,5) circle (4pt); \filldraw[blue] (9,6) circle (4pt); \filldraw[blue] (9,7) circle (4pt); \filldraw[red] (9,8) circle (4pt); \filldraw[blue] (1,0) circle (4pt); \filldraw[blue] (2,0) circle (4pt); \filldraw[red] (3,0) circle (4pt); \filldraw[red] (4,0) circle (4pt); \filldraw[green] (5,0) circle (4pt); \filldraw[blue] (6,0) circle (4pt); \filldraw[green] (7,0) circle (4pt); \filldraw[red] (8,0) circle (4pt); \filldraw[green] (9,0) circle (4pt); } } } \draw[rounded corners=0.1cm] (4.7,4.2)-- ++(0,1.1)-- ++(1.6,0)-- ++(0,-1.6)-- ++(-1.6,0)-- ++(0,0.5); \draw[rounded corners=0.1cm] (0.7,2.2)-- ++(0,1.1)-- ++(1.6,0)-- ++(0,-1.6)-- ++(-1.6,0)-- ++(0,0.5); \draw[rounded corners=0.1cm] (6.7,0.2)-- ++(0,1.1)-- ++(1.6,0)-- ++(0,-1.6)-- ++(-1.6,0)-- ++(0,0.5); \end{tikzpicture} \caption{The set $\Lambda_j$ is a square of side length~$9$ and $\Lambda_L$ a square of side length~$2$. $\Lambda_j$ is (almost) covered when translating $\Lambda_L$ along the positions of the dashed grid (given by $T_{j,L}$). The marked pattern $P$ is found at $3$ (of $16$ possible) positions along the grid. Thus, the empirical measure of this pattern is $\widehat \Prob_{j,L}^\omega(P)= \widehat \Prob_{9,2}^\omega(P)= \frac{3}{16}$.} \label{fig:emp.measure} \end{figure} In order to apply the multivariate version of Glivenko--Cantelli, we aim to integrate functions mapping from $\cA^\Lambda$ to~$\R$. Such a function $g\from\cA^\Lambda\to\R$ is called monotone, if it is monotone in each coordinate. Besides these generalizations due to higher dimensionality, there is another fundamental difference between univariate and multivariate Glivenko--Cantelli theory: While \Cref{thm:GC} makes no assumptions on the distribution of the random variables, the following example shows that we will have to impose some restrictions on the joint distribution of the coordinates of the random vector. \begin{Example} Let $X_j\sim\Normal(0,1)$, $j\in\N$, be i.\,i.\,d.\ standard normal random variables and $Y_j:=-X_j$. We consider the vectors $(X_j,Y_j)\in\R^2$. Let~$\widehat\Prob_n(\omega):=n^{-1}\sum_{j=1}^n\delta_{(X_j,Y_j)}$ be the empirical distribution of $(X_j,Y_j)_{j=1}^n$ on~$\R^2$. The random test function \begin{equation*} g_\omega:=\ifu{\{(x,y)\in\R^2\mid x+y<0\}} +\ifu{\{(X_j(\omega),Y_j(\omega))\mid j\in\N\}} \from\R^2\to\Icc{-1}1 \end{equation*} is monotone in each coordinate, and we have \begin{equation*} \sup\{\abs{\spr g{\widehat\Prob_n^\omega-\Prob}}\mid\text{$g\from\R^2\to\Icc{-1}1$ monotone}\} \ge\abs{\spr{g_\omega}{\widehat\Prob_n^\omega-\Prob}} =1\text. \end{equation*} \end{Example} The problem arises because the set of discontinuities of the monotone function has positive probability. A correct generalization of \cref{thm:GC} to the multivariate case is as follows. \begin{Theorem}[DeHardt \cite{DeHardt-71}, Wright \cite{Wright-81}]\label{wright:LDP} Let \begin{itemize} \item $V_j$, $j\in\N$, be i.\,i.\,d.\ random variables with values in~$\R^k$ and distribution~$\Prob$, \item $\widehat\Prob_n:=\frac1n\sum_{j=1}^n\delta_{V_j}$ for $n\in\N$ the empirical distribution, and \item $M>0$ and $\cM:=\{g\from\R^k\to\Icc{-M}M\mid\text{$g$ monotone}\}$. \end{itemize} Then, the following are equivalent. 
\begin{enumerate}[(i)]
\item\label{GC-i} For all $J\subseteq\{1,\dotsc,k\}$, $J\ne\emptyset$, strictly monotone $g\from\R^J\to\R$, and $E\in\R$, the continuous part~$\Prob_c^J$ of the marginal~$\Prob^J$ of~$\Prob$ satisfies
\begin{equation*}
\Prob_c^J\bigl(\partial g^{-1}\bigl(\Ioc{-\infty}E\bigr)\bigr)=0\text.
\end{equation*}
\item There exists an almost sure event~$\Omega_{\mathrm{unif}}$ on which
\begin{equation*}
\sup_{g\in\cM}\abs{\spr g{\widehat\Prob_n-\Prob}}\xto{n\to\infty}0\text.
\end{equation*}
\item For all $\kappa>0$, there are $a_\kappa,b_\kappa>0$ such that for all $n\in\N$, there is an event~$\Omega_{\kappa,n}$ with $\Prob(\Omega_{\kappa,n})\ge1-b_\kappa\exp(-a_\kappa n)$ on which
\begin{equation*}
\sup_{g\in\cM}\abs{\spr g{\widehat\Prob_n-\Prob}}\le\kappa\text.
\end{equation*}
\end{enumerate}
\end{Theorem}
Here $\partial g^{-1}\bigl(\Ioc{-\infty}E\bigr)$ denotes the boundary of the sublevel set $g^{-1}\bigl(\Ioc{-\infty}E\bigr)$. Condition~\labelcref{GC-i} is trivially satisfied in the classical case $k=1$. Also, in any dimension, each product measure~$\Prob$ satisfies condition~\labelcref{GC-i}. In fact, the following theorem holds true.
\begin{Theorem}\label{thm:strictlymonotone}
Let~$\Prob$ be a probability measure on~$\R^k$ which is absolutely continuous with respect to a product measure $\Tensor_{j=1}^k\mu_j$ on~$\R^k$, where~$\mu_j$, $j\in\{1,\dotsc,k\}$, are measures on~$\R$. Then, condition \cref{wright:LDP}\labelcref{GC-i} is satisfied.
\end{Theorem}
See \cite[Theorem~5.5]{SchumacherSV-17} for a proof.
\subsection{Uniform limits for monotone fields}
The theorems which follow for the case of infinitely many colors~$\cA$ all have an additional assumption, namely the monotonicity in the random parameters. This is a natural assumption, indeed: The IDS depends on the potential antitonely in our models. Also, the IDS is monotone in site percolation.
\begin{Example}
We revisit the Anderson model on a site percolation graph from \cref{sec:AndPerc}, this time with $\cA\subseteq\R$. Fix a bounded set $\cA_0\in\Borel(\R)$ for the values of the potential, and let $\cA:=\cA_0\union\{\alpha\}$ with $\alpha>2d+\sup\cA_0$. The value~$\alpha$ of the potential is interpreted as a closed site in the percolation graph:
\begin{equation*}
\cV_\omega:=\{v\in G\mid V_\omega(v)\ne\alpha\}\text.
\end{equation*}
The edges of the percolation graph are as before
\begin{equation*}
\cE_\omega:=\{\{v,w\}\subseteq\cV_\omega\mid v\sim w\}\text.
\end{equation*}
The Hamiltonian $H_\omega\from\ell^2(G)\to\ell^2(G)$ is given by
\begin{equation*}
(H_\omega\varphi)(v):=
\begin{cases}
-(\Laplace_\omega\varphi)(v)+V_\omega(v)\varphi(v)
&\text{if $v\in\cV_\omega$, and}\\
\alpha\varphi(v)
&\text{if $v\in G\setminus\cV_\omega$.}
\end{cases}
\end{equation*}
By the min-max principle, the eigenvalues do not decrease when we increase the potential at a site~$v\in G$. In particular, when the potential reaches the value~$\alpha$ and the site closes, the eigenfunction experiences de facto a Dirichlet boundary condition on that site, which also at most increases the kinetic energy.
\par
The eigenvalue counting functions count fewer eigenvalues below a given threshold if the eigenvalues increase. Therefore, the eigenvalue counting functions decrease when the potential is raised. The same holds true for the limit, i.\,e.\ the IDS. This is the reason why the IDS in the quantum percolation model with random potential is antitone in the randomness.
\end{Example}
We first turn to the special case $G=\Z^d$.
It is physically most relevant and, since the group~$\Z^d$ satisfies the tiling property, we do not need to resort to quasi tilings in this case.
\begin{Theorem}[\cite{SchumacherSV-17}]\label{thm:infiniteA_Zd}
Let~$\cA\in\Borel(\R)$, $\Omega:=\cA^{\Z^d}$, and let $(\Omega,\Borel(\Omega),\Prob)$ be a probability space such that~$\Prob$ satisfies
\begin{itemize}
\item $\Prob$ is translation invariant with respect to the $\Z^d$-action,
\item for all $\Lambda\in\cF$, the marginal~$\Prob_\Lambda$ is absolutely continuous with respect to a product measure on $\R^\Lambda$, and
\item for a given $r\ge0$ and all $\Lambda_j\in\cF$, $j\in\N$, with $\min\{\dist(\Lambda_i,\Lambda_j)\mid i\ne j\}>r$, the random variables $\omega\mapsto\omega_{\Lambda_j}$, $j\in\N$, are independent.
\end{itemize}
Further, let $f\colon\cF\times\Omega\to\B$ be a translation invariant, local, almost additive, monotone, bounded field. Then there exists an event~$\tilde\Omega$ of full probability and a function $\fbar\in\B$ such that for every $\omega\in\tilde\Omega$ we have
\begin{equation}\label{eqmain}
\lim_{j\to\infty}\biggnorm{
\frac{f(\Lambda_j,\omega)}{\setsize{\Lambda_j}}-\fbar}
=0\text,
\end{equation}
where $\Lambda_j:=\Ico0j^d\isect\Z^d$ for $j\in\N$.
For an estimate on the speed of convergence, denote the bound of~$f$ by~$C_f$ and the bound of the boundary term~$b$ of~$f$ by~$D_b$. Then, for every $\kappa>0$ and $L\in\N$, $L>2r$, there are $a,b>0$, depending on~$\kappa$, $L$, and~$C_f$, such that for all $j\in\N$, $j>2L$, there is an event~$\Omega_{\kappa,j}$ with probability $\Prob(\Omega_{\kappa,j})\ge1-b\exp(-a\floor{j/L}^d)$, on which
\begin{equation*}
\Bignorm{\frac{f(\Lambda_j,\omega)}{\setsize{\Lambda_j}}- \fbar}
\le2^{2d+1}\Bigl(\frac{(2C_f+D_b)L^d+D_br^d}{j-2L}
+\frac{2(C_f+D_b)r^d+3D_br^d}{L-2r}\Bigr)
+\kappa
\end{equation*}
holds true.
\end{Theorem}
Of course, there is a version of \cref{thm:infiniteA_Zd} for amenable groups. We follow the strategy in \cref{sec:general} and use quasi tilings to deal with infinitely many colors on amenable groups. This brings new challenges. \Cref{wright:LDP} needs as input i.\,i.\,d.\ samples. But quasi tilings are allowed to overlap (in a relatively small volume), see \cref{def:quasitiling}. This destroys the independence of eigenvalue counting functions associated to overlapping tiles. One is tempted to excise the overlap from some of the tiles. However, this would leave us with an independent but not identically distributed sample. In this situation, Glivenko--Cantelli theory is difficult to apply. The solution is to independently resample the portions of the quasi tiles which overlap and to account for the error by a volume estimate.
The last result we present here treats fields with infinitely many colors on amenable groups.
\begin{Theorem}[\cite{SchumacherSV-18}]\label{thm:main}
Let~$G$ be a finitely generated amenable group with a F{\o}lner sequence~$(Q_j)_j$. Further, fix~$\cA\in\cB(\R)$, and let $(\Omega=\cA^G,\cB(\Omega),\Prob)$ be a probability space such that~$\Prob$ is translation invariant with respect to~$G$, has finite marginals with density w.\,r.\,t.\ a product measure, and independence at a distance.
Further, let~$\cU$ be a set of translation invariant, local, almost additive, monotone, bounded fields $f\from\cF\times\Omega\to\B$ with common bound, i.\,e.\ $C:=\sup\{C_f\mid f\in\cU\}<\infty$, and common boundary term~$b\from\cF\to\R$ with bound~$D:=D_b$.
\begin{enumerate}[(a), wide] \item Then, there exists an event $\tilde\Omega\in\cB(\Omega)$ such that $\Prob(\tilde\Omega)=1$ and for any $f\in\cU$ there exists a function~$\fbar\in\B$, which does not depend on the specific F{\o}lner sequence~$(Q_j)_j$, with \begin{equation*} \forall\omega\in\tilde\Omega\colon\quad \lim_{j\to\infty} \sup_{f\in\cU}\biggnorm{\frac{f(Q_j,\omega)}{\setsize{Q_j}}-\fbar} =0 \text. \end{equation*} \item Furthermore, for each $\ve\in\Ioo0{1/10}$, there exists $j_0(\ve)\in\N$, independent of~$C$, such that for all $f\in\cU$, there are $a(\ve,C),b(\ve,C)>0$, such that for all $j\in\N$, $j\ge j_0(\ve)$, there is an event $\Omega_{j,\ve,C}\in\Borel(\Omega)$, with the properties \begin{equation*} \Prob(\Omega_{j,\ve,C}) \ge1-b(\ve,C) \exp\bigl(-a(\ve,C)\setsize{Q_j}\bigr) \end{equation*} and \begin{align*} \biggnorm{\frac{f(Q_j,\omega)}{\setsize{Q_j}}-\fbar}& \le(37C_f+47\constb[\cU]+47) \ve \qtext{ for all $\omega\in\Omega_{j,\ve,C}$ and all $f\in\cU$.} \end{align*} \end{enumerate} \end{Theorem} \section{Outlook} We have presented a number of theorems concerning convergence in sup-norm and other Banach-space norms of averaged almost additive fields. More important than the convergence itself are the corresponding quantitative error estimates. They split into two parts of different origin: the geometric part and the probabilistic part. The probabilistic error measures how far off certain empirical measures are from their theoretical counterparts. This difference can be estimated using large deviations techniques, which is implicit in our use of the \cref{wright:LDP} of DeHardt and Wright. An aspect which is not completely satisfactory is that we are not able to specify the dependence of the positive coefficients~$a_\kappa$ and~$b_\kappa$ on the small parameter $\kappa>0$ and the dimension of the pattern $k\in\N$. For this reason we are not able to choose the two length scales, which both tend to infinity, as functions of each other. Furthermore, the theorems in \cref{sec:infiniteA} assume a certain monotonicity with respect to individual random parameters. While this is sufficient for a wide variety of models in statistical physics, there are examples, e.\,g.\ random hopping Hamiltonians, which do not satisfy this assumption. For this reason it is desirable to relax the monotonicity assumption, or to formulate alternative sufficient conditions. These are aims which we will pursue in a forthcoming project. \small \begin{longtabu} to\linewidth { r @{$\colon$} X[l] } \toprule \multicolumn{2}{c}{Table of Notation\label{tableofnotations}}\\ \midrule \endfirsthead \toprule \multicolumn{2}{c}{Table of Notation (continued)}\\ \midrule \endhead \midrule \multicolumn{2}{r}{\footnotesize continues on next page}\\ \endfoot \bottomrule \endlastfoot $G$ & a finitely generated amenable group, or its Cayley graph. 
\\ $g$ & element of~$G$, when~$G$ is used as group \\ $v,w$ & elements of~$G$, when~$G$ is used as Cayley graph \\ $\tilde\tau_g$ & group action of the group~$G$ on its Cayley graph~$G$ \\ $U_g$ & unitary group action of~$G$ on $\ell^2(G)$ \\ $\Laplace$ & Laplace operator on $\ell^2(G)$, $-\Laplace\ge0$ \\ $\cA$ & set of colors \\ $\Borel(\cA)$ & Borel sets on~$\cA$ \\ $\Prob_0$ & probability measure on $(\cA,\Borel(\cA))$ \\ $\Omega$ & set of colorings of~$G$ \\ $\omega$ & coloring \\ $\Prob$ & probability measure on $(\Omega,\Borel(\Omega))$ \\ $V$ & random potential \\ $\tau_g$ & group action of~$G$ on~$\Omega$ and on patterns \\ $H_\omega$ & random Schr\"odinger operator \\ $\Sigma$ & almost sure spectrum of $H_\omega$ \\ $N$ & integrated density of states (IDS) \\ $\d N$ & density of states measure \\ $\Lambda$ & (finite) subset of~$G$, often domain of pattern \\ $\ell^2(\Lambda)$ & subspace of $\ell^2(G)$ \\ $\ifu\Lambda$ & projection $\ell^2(G)\to\ell^2(\Lambda)$ \\ $H_\omega^\Lambda$ & restriction of~$H_\omega$ to $\ell^2(\Lambda)$ \\ $\cF$ & set of finite subsets of~$G$ \\ $\delta_v$ & Kronecker delta on~$v$ \\ $n(\Lambda,\omega)$ & eigenvalue counting function, not normalized \\ $\B$ & Banach space, often the right continuous $\R\to\R$-functions \\ $\Lambda_L$ & cube of side length~$L$ or small F\o{}lner sequence \\ $\Psite$ & Bernoulli measure for site percolation \\ $\cV_\omega$ & vertices of percolation graph \\ $\cE_\omega$ & edges of percolation graph \\ $\Pedge$ & Bernoulli measure for edge percolation \\ $e_j$ & $j$-th basis vector of $\Z^d$ \\ $\Laplace_\omega$ & Laplace operator on percolation graph \\ $\alpha$ & eigenvalue of~$H_\omega$ on $G\setminus\cV_\omega$ \\ $\cA_0$ & set of values of the random potential on a percolation graph \\ $\cV$ & set of visible points in~$\Z^d$ \\ $\Hvis$ & Schr\"odinger operator with~$\ifu\cV$ as potential \\ $\tilde N$ & limiting function in \cref{thm:LMV-ecf} \\ $P$ & pattern, $P\from\Lambda\to\cA$ \\ $[P]_G$ & equivalence class of patterns with respect to~$G$ \\ $\omega_\Lambda$ & pattern induced by~$\omega$ on~$\Lambda$ \\ $P'|_\Lambda$ & pattern induced by $P'\in\cA^{\Lambda'}$ on~$\Lambda$ \\ $P'|_\cF$ & set of all patterns induced by~$P'$ \\ $\sharp_PP'$ & number of occurrences of~$P$ in~$P'$ \\ $\partial^r\Lambda$ & (two-sided) $r$-boundary of~$\Lambda$ \\ $\dist$ & graph distance on Cayley graph \\ $(Q_j)_j$ & F\o{}lner sequence \\ $\nu_P$ & (asymptotic) frequency of~$P$ \\ $b$ & boundary term \\ $D_b$ & bound of~$b$ \\ $F$ & field (without explicit dependence on~$\omega$) \\ $\tilde F$ & pattern function of~$F$ \\ $C_F$ & bound of field~$F$ \\ $\Fbar$ & limit in \cref{thm:LenzMV} \\ $\widehat\Prob_\Lambda^\omega$ & empirical distribution \end{longtabu} \def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth \lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\section{Introduction} Quantifying biochemical processes at the cellular level is becoming increasingly central to modern molecular biology \cite{Elowitz2002, Pedraza2005}. Much of this attention can be attributed to the development of new methodologies for measuring biochemical events. In particular, the use of green fluorescent protein and related fluorophores, high sensitivity light microscopy and flow cytometry has provided researchers with greater detail on the dynamics of cellular processes at the level of single cells \cite{Elowitz2002,Pedraza2005,Xie2008a} with single-molecule sensitivity \cite{Yu2006}. Quantitative laboratory measurements demand theoretical models that are able to describe and interpret the data. The traditional approach to modeling cellular processes has been to use deterministic kinetic equations with continuous dynamic variables: it is assumed that concentrations of metabolites, proteins, etc.~vary deterministically in a continuous manner. This may be a reasonable assumption when one deals with molecules that occur in relatively large numbers. In other cases, the number of molecules can be much lower; for example, the number of LacI tetrameric repressor proteins in a single {\it E.~coli} bacterium has been estimated to be of the order of 10 to 50 molecules \cite{Oehler1994}. Such small numbers suggest that a continuous-variable model is no longer an appropriate description of reality. In fact, recent experimental measurements have clearly highlighted the stochastic nature of biochemical processes in individual cells \cite{Elowitz2002,Choi2008,Qian2012}. Stochastic reaction processes can be analyzed in terms of probability distributions from an isogenic population or from stochastic trajectories of individual cells. The former has been theoretically investigated in terms of the chemical master equation (CME) \cite{Gillespie1992, Mcquarrie1967}. For general reaction systems, analytically solving this equation is extremely challenging because the rates of biochemical reactions are often nonlinear functions of concentrations. In addition, numerical solutions are equally impractical because of the significant increase in the number of states; progress on the computational approach to the CME has been slow \cite{Liang2010}. Alternatively, stochastic trajectories can be computed via Monte Carlo procedures. Methods such as the Gillespie algorithm \cite{Gillespie1977} can also be highly intensive computationally for reaction systems involving several molecular species. Stochastic dynamics, therefore, are often studied with approximations \cite{Kampen2001, Gomez2007, Scott2007, Munsky2006}. G\'{o}mez-Uribe et al.~\cite{Gomez2007} provided an approximate method called mass fluctuation kinetics. Recall that in a biological reaction system, the rate of a reaction $v(s)$ is usually a nonlinear function of reactant concentrations, where $s$ is a vector of concentrations. In terms of cellular biological functions, $s$ is usually referred to as a `signal', and we shall adopt this terminology. When the signal is noisy, the expected value of the nonlinear function, $\big\langle v(s) \big\rangle$, is different from the function of the expectation of the concentrations, $v\big(\langle s \rangle\big)$, as illustrated in Fig.~\ref{fig:cur-var-effect} \cite{KimPJ2010}. (Note that for a uni-molecular reaction, $v(s) = ks$ with $k$ being a constant. Then the mean rate is the same as the rate of the mean concentration: $\langle ks \rangle = k \langle s \rangle$.) 
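For a simple illustration with a generic nonlinear rate, take $v(s)=ks^2$ for a scalar signal $s$; then \[ \big\langle v(s) \big\rangle = k\langle s \rangle^2 + k\,\mathrm{Var}(s) = v\big(\langle s \rangle\big) + k\,\mathrm{Var}(s), \] so the deviation from the deterministic prediction grows with the variance of the signal.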
G\'{o}mez-Uribe et al.~\cite{Gomez2007} tried to quantify this difference in terms of both the noise strength of a signal and the degree of reaction nonlinearity with a first-order approximation, assuming the noise strength is sufficiently small. We revisit this nonlinear signal processing of noise and investigate this problem more thoroughly. Noise in biochemical signals following biological reaction pathways is dependent not only on the `local' reaction rate functions \cite{Fell1992, Kacser1995, Fell1996, Paulsson2004, Sauro2011} but also on the `global' structure of the pathway that the signal `propagates' through \cite{Paulsson2004, Pedraza2005, Hooshangi2005}. The propagation of noise has been studied mainly by identifying the mathematical analytical structure \cite{Paulsson2004, Tanase2006, Bruggeman2009}. As the system size increases, such a structure quickly becomes intractable. Here, we provide a more manageable approach for understanding noise propagation and propose a simple principle we call {\em stochastic focusing compensation}. {This principle states that a sensitivity increase (stochastic focusing) \cite{Paulsson2000} in one parameter region is typically achieved by reducing sensitivity in another region due to a geometric constraint.} To show its wide applicability, we design three different synthetic biochemical networks that exhibit noise-induced phenotypes: (1) noise-enhanced concentration detection, (2) noise-induced bistable ON-OFF switching, and (3) noise-induced linear amplification. These examples illustrate the proposed method as a powerful tool for designing, analyzing, and understanding noise-induced phenotypes. \section{Results} \subsection{Model systems.} When molecules undergo chemical reactions, particularly at very low concentrations, changes in amounts are more conveniently described using discrete numbers to represent individual molecules \cite{Kampen2001}. The random nature of reaction events also dictates that the changes are stochastic. Thus, the time evolution of a system is described by discrete-state continuous-time stochastic processes. Under appropriate conditions (spatially homogeneous and statistically independent reaction events), the stochastic processes are fully described by the CME \cite{Gillespie1992}, or more precisely the Delbr\"{u}ck-Gillespie process \cite{Qian2011}. The CME describes the time evolution of a probability distribution function, which represents the probability that one finds the number of molecules for each species at a given time. \subsection{Elasticity.} Our analysis is based on the concept of elasticity ($\varepsilon$), which quantifies the change in a reaction rate due to a change in a reactant, product or effector concentration. Mathematically, the elasticity is defined as the ratio of the fold change in the rate $v$ to the fold change in the concentration $s$: \[ \varepsilon_d = \frac{s}{v}\frac{\partial v}{\partial s} = \frac{\partial \ln v}{ \partial \ln s}. \] $\varepsilon_d$ has been widely used in metabolic control analysis \cite{Fell1992, Kacser1995, Fell1996} and here we expand its application to the stochastic regime. \subsection{Elasticity vs. noise propagation.} The mean values of reaction rates can be affected by concentration fluctuations. This effect was formulated in terms of concentration variances and the curvatures of reaction rate functions \cite{Kampen2001, Gomez2007, KimPJ2010}. Consider a reaction step shown in Fig.~\ref{fig:cur-var-effect}a. 
Although the concentration of $S$, denoted by $s$, is assumed to fluctuate `symmetrically' with respect to its mean value, the reaction rate $v(s)$ fluctuates `asymmetrically' \cite{Ochab2010}. Let us assume that an increase in $s$ by the amount $\delta s$ causes $v$ to increase by an amount $\delta v$. A decrease in $s$ by the same amount $\delta s$, however, will not cause $v$ to decrease by the same amount $\delta v$. Instead, the decrease is larger than $\delta v$, due to the down-curved shape of the reaction rate function $v(s)$ (Fig.~\ref{fig:cur-var-effect}b). As the curvature of the reaction rate function increases, the fluctuations in $v$ become more asymmetric and the mean value of the rate function $\langle v (s) \rangle$ gets smaller, resulting in more deviation from the deterministic prediction $v (\langle s \rangle)$. Here, the angle brackets $\langle \rangle$ denote the ensemble average. As the distribution function of $s$ gets narrower, the distribution function of $v$ also becomes narrower and the difference between $v(\langle s \rangle)$ and $\langle v(s) \rangle$ becomes smaller. The first-order correction to the deterministic rate is proportional to both the concentration variance (covariance in general) and the curvature of the rate function, and the estimate of the true mean rate $\langle v \rangle$ was expressed as \cite{Kampen2001, Gomez2007}: \begin{equation} w(s^*) = v(s^*) + \left.\frac{1}{2}\frac{\partial^2 v}{\partial s^2}\right|_{s=s^*} \Sigma^*, \label{eqn:MM-deg} \end{equation} where $s^*$ is the estimate of the true mean concentration $\langle s \rangle$ and $\Sigma^*$ that of the concentration variance (Methods). The variance in $s$ depends on how $s$ is synthesized and degraded as well as on other sources of random fluctuations from the connected networks. {Thus, the interplay between the nonlinearity in the rate function $v$ and the noise in the concentration $s$ causes a change in the mean reaction rate, resulting in changes in elasticities.} Since elasticities together with stoichiometry are the determinants of dynamic behavior, changes in the elasticities due to noise will result in {changes in system dynamics.} This result allows us to relate deterministic properties of a network to the nonlinear transmission of noise. We will define the elasticity that depends on noise propagation as \[ \varepsilon_{s} = \frac{\partial \ln \langle v \rangle}{ \partial \ln \langle s \rangle}, \] and its first-order approximation as $\varepsilon_s \simeq \partial \ln w/\partial \ln s^*$. {This approximation can be used to derive analytical solutions of mean values and covariances of concentrations, as a next level of the linear noise approximation \cite{Kampen2001}, but needs to be verified on a case-by-case basis; for example, the level of $X_0$ in Fig.~\ref{fig:ff-diag}a was assumed to be constant but it can be a function of other species concentrations (e.g., Fig.~S2c), which could result in additional noise being propagated into the synthesis rate.} \subsection{Stochastic Focusing Compensation.} Consider the curvature of $v$ in Fig.~\ref{fig:cur-var-effect}c and \ref{fig:SDF}a, which represent an inhibitory regulation. The curvature is positive for all $s$ except when $s\rightarrow 0$ or $\infty$. Thus, from Eq.~\eqref{eqn:MM-deg} the mean reaction rate $w$ is shown to be larger than the deterministic rate $v$. When $s\rightarrow \infty$ the curvature approaches zero and when $s \rightarrow 0$ the variance of $s$ vanishes. Thus, in these limits $w$ becomes equal to $v$. 
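The size of this effect is easy to check numerically. The following short Python sketch (an illustrative calculation of ours, with arbitrarily chosen parameter values rather than those used in the figures) takes a hyperbolic inhibition rate $v(s)=V_{\max}K_M/(K_M+s)$ and a Poisson-distributed signal, and compares the deterministic rate $v(\langle s\rangle)$, the first-order estimate $w$ of Eq.~\eqref{eqn:MM-deg}, and the exact mean rate $\langle v(s)\rangle$; it illustrates both the focusing effect and the fact that Eq.~\eqref{eqn:MM-deg} is only a first-order, small-noise correction.
\begin{verbatim}
# Illustrative sketch (not from the paper's SI): compare v(<s>), the
# curvature-variance estimate w of Eq. (1), and the exact mean rate <v(s)>
# for a hyperbolic inhibition rate with a Poisson-distributed signal.
import numpy as np
from scipy.stats import poisson

Vmax, KM = 1.0, 0.1            # hypothetical parameters

def v(s):                      # inhibitory (hyperbolic) rate function
    return Vmax * KM / (KM + s)

def d2v(s):                    # its second derivative (curvature)
    return 2.0 * Vmax * KM / (KM + s) ** 3

counts = np.arange(0, 200)     # support used to evaluate the Poisson average
for s_mean in (0.5, 2.0, 10.0):
    pmf = poisson.pmf(counts, s_mean)
    exact = np.sum(pmf * v(counts))                # <v(s)>
    w = v(s_mean) + 0.5 * d2v(s_mean) * s_mean     # Eq. (1); Var(s) = <s> for Poisson
    print(s_mean, v(s_mean), w, exact)
\end{verbatim}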
Inspection of Fig.~\ref{fig:cur-var-effect}c and \ref{fig:SDF}a shows that the slope of the mean rate ($w$ or $\langle v \rangle$) is smaller than that of $v(s)$ when $s$ is small, implying $\varepsilon_s < \varepsilon_d$. This elasticity decrease due to noise will be called stochastic de-focusing (SD) (Fig.~\ref{fig:SDF}b). However, as $s$ increases the slope of the mean rate becomes larger than that of $v(s)$, implying that $\varepsilon_s > \varepsilon_d$. This sensitivity increase will be called stochastic focusing (SF) (Fig.~\ref{fig:SDF}b). A minor difference from the original definition of SF \cite{Paulsson2000} is discussed in the Appendix. The previous discussion shows that elasticities that increase due to noise in one parameter region will typically come with elasticity decreases in another region. Consider the case where $v(s)$ satisfies the Hill equation with Hill coefficient 4 (Fig.~\ref{fig:SDF}c). The curvature of $v(s)$ changes from positive to negative as $s$ increases from zero, and the curvature vanishes as $ s \rightarrow 0$ or $\infty$. Thus, defocusing appears between two focused regions (Fig.~\ref{fig:SDF}d). Focusing does not always come with defocusing, however. For example, when $v(s)$ is a Michaelis-Menten-type rate function, only focusing appears, as shown in Fig.~\ref{fig:SDF}e and f. When $s$ follows the Poisson distribution, focusing and defocusing can occur strongly. With a different type of distribution such as the Gamma distribution, these can appear even more strongly (Fig.~\ref{fig:SDF}). The Gamma distribution has frequently been found in cases where translation events occur in bursts \cite{Taniguchi2010}. One of the important differences between the Poisson and Gamma distributions is that in the Gamma distribution one can control the noise (coefficient of variation; CV) and mean levels independently, while one cannot do so in the Poisson distribution \cite{Kim2012}. {Furthermore, the Poisson distribution is the asymptotic case of the Gamma distribution as the burst size of the translation events tends to zero. These two facts imply that the noise strength (CV) increases from that of the Poisson case when the bursting events are turned on, resulting in a larger deviation from the deterministic case based on Eq.~\eqref{eqn:MM-deg}.} This is the reason that stronger focusing and defocusing were observed for the case of the Gamma distribution as shown in Fig.~\ref{fig:SDF}. In this section, we have shown that higher elasticities in one region of parameters can be achieved by reducing elasticities in another region, and have explained this \emph{stochastic focusing compensation} by using the curvature-variance correction to a mean reaction rate (Eq.~\eqref{eqn:MM-deg}). The compensation was intuitively understood as a geometric effect of the reaction rate function. In the following sections we will show how this effect can be used to modify the behavior of gene regulatory networks. \subsection{Application 1: Noise-improved Concentration Detector.} Let us consider an incoherent feed-forward gene regulatory network that functions as a concentration detector \cite{Entus2007, Kaplan2008}. We can use noise to enhance the detection amplitude and sensitivity of this system. Consider a reaction process where a protein ($S_2$ in Fig.~\ref{fig:ff-diag}a) is regulated by two different pathways: either via direct transcriptional activation by $X_0$ or via indirect inhibition by $S_1$. When the concentration of $X_0$ is zero, $S_2$ is not expressed. 
As $x_0$ increases, $s_2$ increases as well (this region of $x_0$ will be named OFF$\rightarrow$ON as shown in Fig.~\ref{fig:ff-diag}b). When $x_0$ becomes larger than a threshold point, however, $s_2$ begins to decrease and eventually becomes dominated by the inhibition exerted by $S_1$ (this region of $x_0$ will be named ON$\rightarrow$OFF as shown in Fig.~\ref{fig:ff-diag}b). Thus, one can detect a specific range of the concentration of $X_0$ by monitoring the concentration of $S_2$. The plot of $x_0$ vs. $s_2$ (Fig.~\ref{fig:ff-diag}b) will be used to characterize the concentration detection by using the slope as sensitivity, its width at half maximum as bandwidth, and the peak value of $s_2$ as amplitude. We will apply stochastic focusing compensation to enhance the detection amplitude and sensitivity. {In this system, noise in the concentration levels of $S_1$ and $S_2$ is generated from a series of stochastic reaction events of synthesis and degradation of $S_1$ and $S_2$. Since $S_1$ regulates the synthesis of $S_2$, noise in the concentration level of $S_1$ introduces additional noise into $S_2$, i.e., the noise propagates. Based on the fact that the regulation is inhibitory}, we can conclude that both strong SF and SD can appear in the ON$\rightarrow$OFF region (refer to Fig.~\ref{fig:SDF}a,b). For simplicity, here we assumed that the concentration of $X_0$ does not fluctuate stochastically and thus the activation by $X_0$ does not have any noise effect. Therefore, we focus on the inhibition pathway and aim to increase the detection sensitivity in the ON$\rightarrow$OFF region. For this aim, strong SF is required to appear in the ON$\rightarrow$OFF region, while minimizing the appearance of SD in the same region. In the hyperbolic inhibition case (Fig.~\ref{fig:SDF}a), strong SF was achieved for $K_M \lesssim 0.1$~nM and for the concentration of $S_1$ between 1 and 10~nM (for the Poisson case in Fig.~\ref{fig:SDF}a,b) \cite{SI}. Thus, we tuned the values of the terms corresponding to $K_M$, $(1+p_3 x_0)/p_4 x_0$, to less than 0.1~nM and the mean value of $s_1$, $p_1 x_0/p_2$, to between 1 and 10~nM. These two conditions provided the possible range of $x_0$: $\mbox{MAX}\left( \frac{1}{0.1 p_4-p_3}, \frac{p_2}{p_1} \right) < x_0 < 10\left(\frac{p_2}{p_1}\right)$, where the function MAX chooses the larger of its two arguments. The above $x_0$ region was placed to the right side of the detection peak, but not too far away from the peak, to maximize the appearance of strong SF and to minimize the effect of strong SD in the region. Then, concentration detection was enhanced 5 times in the ON$\rightarrow$OFF sensitivity and 7 times in the amplitude (Fig.~\ref{fig:ff-diag}b) due to the strong SF \cite{SI}. However, the bandwidth stayed the same (arrows in Fig.~\ref{fig:ff-diag}b), which is related to the stochastic focusing compensation; SD appears as $x_0$ decreases away from the region of SF, resulting in reduced detection sensitivity and eventually preventing the detection bandwidth from decreasing. To enhance the detection amplitude and sensitivity further, we used the Gamma distribution to allow larger noise levels in $s_1$. As the burst size $b$ -- the number of translation events from a single mRNA during its average lifetime -- increases, the amplitude of the detection is significantly enhanced: 20 times for $b=5$, 83 times for $b=50$ compared to the deterministic case. 
However, when we allow $x_0$ to fluctuate stochastically, its signal noise turned out to slightly decrease the amplitude due to the positive curvature contribution in the activation by $X_0$ as well as the noise correlation between $x_0$ and $s_1$ \cite{SI}. This further analysis shows that our curvature-variance-based approach can successfully predict the change in the system properties. \subsection{Application 2: Noise-induced Bistable ON-OFF switch.} Bistability is a common feature of many biological systems \cite{Ozbudak2004, Brandman2005}. In the stochastic framework, bistability can sometimes be observed as a bimodal distribution because the system can jump from one stable state to the other. In this section we will show how a non-bistable system can be made bistable through stochastic focusing compensation. This example shows the predictive power of our method in system design and furthermore provides a novel mechanism of noise-induced bistability. We consider the model of mutual inhibition between two genes (\emph{g1} and \emph{g2} in Fig.~\ref{fig:bimodal}a), which regulate each other non-cooperatively (Hill coefficient = 1). Under these conditions it is impossible for the system to show bistability with first-order degradation in the deterministic case \cite{Sauro2009}. Now we include noise in the system and show noise-induced bistability. The dissociation constant of $S_2$, $K_{M_2}$, was varied while that of $S_1$, $K_{M_1}$, was fixed at $0.1$~nM. This $K_{M_1}$ value was chosen to achieve strong SF and SD in the synthesis rate of $S_2$, $v_3$. For $K_{M_2} \simeq 0.1$~nM, another strong SF-SD pair appears in $v_1$. This doublet of SF-SD can lead to ultra-sensitive positive feedback (Fig.~\ref{fig:bimodal}c), which in turn leads to bimodality (Fig.~\ref{fig:bimodal}b). As $K_{M_2}$ increases, the SF-SD in $v_1$ becomes weaker and the bimodal distribution becomes uni-modal. To understand how the doublet of SF-SD causes bimodality, the nullclines of $d\langle s_1 \rangle/dt$ and $d\langle s_2 \rangle /dt$ were plotted in Fig.~\ref{fig:bimodal}d. {The nullclines were computed using the Poisson distributions for $s_1$ and $s_2$ as a first level of approximation.} In Fig.~\ref{fig:bimodal}d, the solid line (NC1) corresponds to $\langle v_3 \rangle = p_4 \langle s_2 \rangle$ with $K_{M_1} = 0.1$~nM and was significantly distorted due to the strong SF-SD (cf. Fig.~\ref{fig:SDF}a). With $K_{M_2} = 0.1$~nM, the other nullcline (NC2), corresponding to $\langle v_1 \rangle = p_2 \langle s_1 \rangle$, was also significantly distorted. These two distorted curves, NC1 and NC2, crossed each other, forming an $\infty$-like shape. As $K_{M_2}$ increased, NC2 straightened out due to weaker SF-SD and shifted upward. Thus the bimodality disappeared. The above analysis is limited to unimodal distributions and thus significant care may need to be taken when the distribution deviates from unimodality and enters into bimodality. {Therefore, we checked the numerical accuracy of our analysis on this system.} The phase diagram for bimodality and unimodality was plotted for different values of $p_1$ and $K_{M_2}$. The predicted phase diagram (gray and white areas in Fig.~\ref{fig:bimodal}e) agreed well with the results based on exact stochastic simulations (dots in Fig.~\ref{fig:bimodal}e). 
{This shows that the noise-induced bimodality appears due to the interplay between the system nonlinearity and the stochasticity captured by the CME description, and} that our method based on stochastic focusing compensation can be used for predicting bimodality in a rational way. \subsection{Application 3: Noise-induced Linear Amplifier.} A linear amplifier is a device that transfers an input signal to an output without distorting the signal shape, resulting in reliable information transfer. Such devices have been observed in biological systems, for example, in signal transduction pathways in yeast cells (\emph{Saccharomyces cerevisiae}) \cite{Yu2008} and mammalian cells \cite{Sturm2010}. This linear amplification has been attributed to negative feedback mechanisms \cite{Sauro2007, Sturm2010}. Here we present a novel way to achieve linear amplification by exploiting noise without resorting to any feedback. The SF-SD makes a sigmoidal curve less steep, as illustrated in Fig.~\ref{fig:SDF}c. The contribution becomes larger as the strength of noise increases and the SF-SD can appear more strongly (Eq.~\eqref{eqn:MM-deg}), i.e., a transfer curve can become less sigmoidal as the noise strength increases. Based on this prediction, we investigated how gene expression noise can be leveraged to build a linear amplifier without using any feedback. We considered {two different activation reaction systems} (Fig.~\ref{fig:linear}a): a Hill equation with Hill coefficient = 12 (Fig.~\ref{fig:linear}b) and a piece-wise linear function (Fig.~\ref{fig:linear}c). An activator $S$ is assumed to follow the negative binomial distribution -- a discrete version of the Gamma distribution: \begin{equation} P_s = \frac{\Gamma(a+s)}{\Gamma(s+1)\Gamma(a)}\left( \frac{b}{1+b}\right)^s\left( 1- \frac{b}{1+b}\right)^a \label{eqn:neg} \end{equation} where $a$ is the burst frequency (the number of transcription events during the protein lifetime), ranging between 0.1 and 11 in \emph{E. coli} with a YFP-fusion tag \cite{Taniguchi2010}, and $b$ the burst size, ranging between 1 and 900 \cite{Taniguchi2010}. Here the mean of the distribution is equal to $ab$ and the strength of noise (standard deviation divided by mean) is $\sqrt{(1+b)/ab} \simeq 1/\sqrt{a}$ for $b \gg 1$. We varied the strength of noise by increasing $b$ (more precisely, for a fixed mean value, increasing $b$ corresponds to decreasing $a$, resulting in an increase in the noise strength) and observed the change in $\langle v(s) \rangle$. It turned out that $\langle v(s) \rangle$ became dramatically linearized as $b$ increased (Fig.~\ref{fig:linear}b; cf. Fig.~3 in \cite{Thattai2002}). More surprisingly, this linearized response was observed not only for the Hill function-type activation with Hill coefficients ranging from 1 to 20 \cite{SI}, but also for various other types of activation patterns \cite{SI}, including piece-wise linear functions (Fig.~\ref{fig:linear}c). To confirm this noise-induced linear amplification, stochastic simulations were performed for a three-gene cascade (Fig.~\ref{fig:linear}d) by using the Gillespie stochastic simulation algorithm, with transcription processes included. We obtained a linear response for the transfer curve of $s_1$ vs.~$s_3$ (Fig.~\ref{fig:linear}e). This result implies that gene expression noise generated upstream can linearize downstream nonlinear transcriptional responses. This leads to the non-intuitive result that upstream signals can be transferred reliably, without distortion, as a result of noise. 
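The averaging behind this linearization can be reproduced in a few lines of code. The sketch below (ours; the Hill parameters and mean levels are illustrative and not taken from the SI) averages a steep Hill activation over the negative binomial distribution of Eq.~\eqref{eqn:neg} at fixed mean $ab$; as the burst size $b$ grows, the resulting transfer curve $\langle v(s)\rangle$ flattens toward a nearly linear response.
\begin{verbatim}
# Illustrative sketch: average a steep Hill activation over the negative
# binomial distribution of Eq. (2) at fixed mean a*b, for growing burst size b.
import numpy as np
from scipy.stats import nbinom

n_hill, K = 12, 100.0              # hypothetical Hill coefficient and threshold
s = np.arange(0.0, 5000.0)         # support for the activator copy number

def v(x):                          # steep Hill-type activation
    return x**n_hill / (K**n_hill + x**n_hill)

for b in (1.0, 5.0, 50.0):         # burst size
    transfer = []
    for mean in np.linspace(10.0, 300.0, 30):
        a = mean / b               # burst frequency giving this mean
        pmf = nbinom.pmf(s, a, 1.0 / (1.0 + b))  # matches Eq. (2) with p = 1/(1+b)
        transfer.append(np.sum(pmf * v(s)))      # <v(s)> at this mean level
    print(b, np.round(transfer, 3))
\end{verbatim}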
This noise-induced linearization may be observed in protein signaling pathways as well, if the protein concentration distributions are wide \cite{Birtwistle2012, Thattai2002}. This could serve as an alternative mechanism for the dose-response alignment phenomenon \cite{Yu2008, Sturm2010} without strongly resorting to negative feedback. \section{Discussion} We analyzed how noise propagation, in the steady state of stochastic biochemical reaction systems \cite{Paulsson2004, Pedraza2005, Hooshangi2005}, affects the population average of reaction rates and thus their elasticities (sensitivities). The change in the elasticity was explained in terms of a curvature-variance contribution, which led us to discover stochastic focusing compensation: A (stochastic) elasticity increase in one parameter region due to noise can lead to an elasticity decrease in another region. This stochastic focusing compensation was used to enhance the functional properties of a concentration detector having an incoherent feedforward network structure. We designed strong stochastic focusing to appear in a desired detection region and improved the detection amplitude and sensitivity. The detection bandwidth was, however, not enhanced, due to stochastic defocusing. In addition, we applied the same idea of stochastic focusing compensation in designing a noise-induced bistable ON-OFF switch that has a non-cooperative positive feedback loop (and thus shows a single stable state in the absence of noise). The underlying mechanisms of the emergent bimodality were related to the doublets of stochastic focusing-defocusing that appear in the two individual inhibitory interactions, which as a whole comprise a positive feedback loop. Lastly, the stochastic focusing compensation principle was also applied to the design of a noise-induced linear amplifier. Surprisingly, the biologically relevant distribution of proteins (the Gamma distribution) was shown to have important properties that linearize a wide range of sigmoidal responses. In our linear amplification study, it seems paradoxical that greater noise enables more reliable information transfer. However, the fact that the linearization becomes saturated as the noise strength (CV) increases (refer to \cite{SI}) implies that there will be an optimal level of noise for signal transfer, much like in stochastic resonance \cite{Gammaitoni1998} but with a completely different mechanism. Experimental evidence of the Gamma distribution of intracellular protein copy numbers \cite{Paulsson2000a, Friedman2006, Shahrezaei2008, Taniguchi2010, Birtwistle2012a} implies that, when measuring the Hill coefficient of transcriptional regulation, significant care may need to be taken to account for this stochastic focusing compensation. Furthermore, in signal transduction networks, cell-to-cell variability in receptor occupancies and intra-cellular enzyme copy numbers \cite{Birtwistle2012, Birtwistle2012a} could lower the observed Hill coefficients of downstream regulation and contribute to dose-response alignment. It would be important to investigate the relative contributions of this noise effect (SF-SD) and of negative feedback loops \cite{Yu2008}. \section{Conclusions} Noise propagation can have significant effects on cellular phenotypes \cite{Arkin1998, Maamar2007, Wang2011}. It is important to be able to explain propagation mechanisms and to use them for designing or adjusting phenotypes \cite{Kim2012}. 
We provide a manageable, yet fully quantitative, approach for studying noise propagation and its effects, in terms of the degree of a system's nonlinearity and the strength of noise. We show that an increased sensitivity in one parameter region often comes with a decreased sensitivity in another parameter region. We applied this sensitivity compensation characteristic in designing three gene regulatory networks: a concentration detector, a noise-induced ON-OFF switch, and a noise-induced linear amplifier. These examples illustrate how our sensitivity-based approach can be used to design, analyze, and optimize the functionalities of stochastic reaction networks by exploiting noise. \section*{Acknowledgments} We acknowledge support from the National Science Foundation (Theoretical Biology 0827592 and Molecular and Cellular Biosciences 1158573; for preliminary studies, FIBR 0527023).
\section{Introduction}\label{sec:pre} Many notions of topology and combinatorics of the reals have been reformulated and investigated in terms of ideals on the natural numbers (always assuming that an ideal contains all the finite sets of natural numbers). For instance, the usual notion of convergence on a topological space $X$, which states that a sequence $\langle x_n:\, n<\omega\rangle$ in $X$ converges to a point $x\in X$ when the set $\set{n<\omega}{x_n\notin U}$ is finite for any open neighborhood $U$ of $x$, is generalized in terms of ideals $J$ on the natural numbers by replacing the latter requirement with $\set{n<\omega}{ x_n\notin U}\in J$ (see e.g.~\cite{I-conv}). More recent and remarkable examples are the so-called \emph{selection principles}, which are reformulated in terms of ideals, and show deep connections with cardinal characteristics of the real line~\cite{BDS,SJ,SS,Miro1,Miro2}. In combinatorics of the real line, some classical cardinal characteristics have been reformulated in terms of ideals (and in many cases they are connected to selection principles in topology). The most natural examples are the reformulations of the bounding number $\mathfrak{b}_J$ and the dominating number $\mathfrak{d}_J$ in terms of an ideal $J$ on $\omega$, more concretely, with respect to the relation $\leq ^J$ on ${}^\omega \omega$, which states that $x \leq^J y$ iff $\set{n<\omega}{x(n)\nleq y(n)}\in J$. These have been investigated by e.g.\ Canjar~\cite{Canjar} and Blass and Mildenberger~\cite{BM}, also in connection with arithmetic, in the sense that, for any maximal ideal $J$, $\mathfrak{b}_J=\mathfrak{d}_J$ is the cofinality of the ultrapower (on the dual filter of $J$) of $\omega$. Other classical cardinal characteristics have been reformulated in terms of ideals on $\omega$, like the almost disjointness number~\cite{FaSo,Bakke,RaSt} and the pseudo-intersection number~\cite{BNF,SJpseudo}, among others (see \cite[Sec.~8.4]{Hrusak}). In the~present paper we offer a reformulation, in terms of ideals on $\omega$, of the ideal of Lebesgue measure zero subsets of the reals. Our reformulation does not come from a definition of the Lebesgue measure in terms of an ideal $J$ on $\omega$, but is inspired by a combinatorial characterization of measure zero. Details of this new definition are provided in \autoref{sec:NmodI}. We work in the Cantor space ${}^\omega 2$ for simplicity, but the same reformulations and results can be obtained in other standard Polish spaces with a measure (\autoref{sec:var} presents a detailed discussion). We denote by $\mathcal{N}_J$ the collection of \emph{null subsets modulo $J$} of ${}^\omega 2$, and it will be clear that $\mathcal{N}_\mathrm{Fin}=\mathcal{N}$, the ideal of Lebesgue measure zero subsets of ${}^\omega 2$, where $\mathrm{Fin}$ denotes the ideal of finite subsets of $\omega$. We also provide a reformulation of $\mathcal{E}$, the ideal generated by the $F_\sigma$ measure zero subsets of ${}^\omega 2$, in terms of an ideal $J$ on $\omega$, which we denote by $\mathcal{N}^*_J$. As expected, we have $\mathcal{N}^*_\mathrm{Fin} = \mathcal{E}$. 
We obtain that, for any ideal $J$ on the natural numbers, $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are actually $\sigma$-ideals on ${}^\omega 2$ and that, whenever $K$ is another ideal on $\omega$ and $J\subseteq K$, \[\mathcal{E}\subseteq \mathcal{N}^*_J\subseteq \mathcal{N}^*_K\subseteq \mathcal{N}_K\subseteq \mathcal{N}_J\subseteq \mathcal{N}.\] In fact, it will be clear from the definitions that, whenever $K$ is a maximal ideal on $\omega$, $\mathcal{N}^*_K=\mathcal{N}_K$. Our first set of main results provides interesting characterizations of ideals on $\omega$ with the Baire property: \begin{mainthm}\label{mainBP} Let $J$ be an ideal on $\omega$. Then, the following statements are equivalent: \begin{enumerate}[label = \rm (\roman*)] \item $J$ has the Baire property. \item $\mathcal{N}_J=\mathcal{N}$. \item $\mathcal{N}^*_J=\mathcal{E}$. \end{enumerate} \end{mainthm} Although no new ideals on the reals are obtained from ideals with the Baire property, we obtain new characterizations of the ideals $\mathcal{N}$ and $\mathcal{E}$. Moreover, ideals without the Baire property offer new ideals on the reals that are worthy of research: the previous result can be expanded in connection with $\mathcal{M}$, the ideal of meager subsets of ${}^\omega 2$. \begin{mainthm}\label{mainBP2} Let $J$ be an ideal on $\omega$. Then, the following statements are equivalent: \vspace{-5pt} \begin{multicols}{2} \begin{enumerate}[label = \rm (\roman*)] \item $J$ does not have the Baire property. \item $\mathcal{N}_J\subsetneq\mathcal{N}$. \item $\mathcal{E} \subsetneq \mathcal{N}^*_J$. \item No member of $\mathcal{N}_J$ is co-meager. \item $\mathcal{M}\cap\mathcal{N}\nsubseteq\mathcal{N}_J$. \item $\mathcal{N}^*_J\nsubseteq\mathcal{M}$. \end{enumerate} \end{multicols} \end{mainthm} \autoref{mainBP} and~\ref{mainBP2} summarize \autoref{BaireForNJ2}, \autoref{cor:notcomeager}, \autoref{charaktNj2} and~\ref{conseqforE2}. There are two key elements in the proofs of these results. The first corresponds to monotonicity results with respect to the well-known \emph{Kat\v etov-Blass order $\mathrel{\leq_{\mathrm{KB}}}$} and \emph{Rudin-Blass order $\mathrel{\leq_{\mathrm{RB}}}$} between ideals (\autoref{RBeq}), and the second is the \emph{Galvin and Mackenzie} game from the 1980s (see e.g.~\cite{CH}), which characterizes filters (and hence ideals) with the Baire property; we use it to prove many properties of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ for any ideal $J$ on $\omega$ without the Baire property, specifically that $\mathcal{N}_J$ cannot contain co-meager subsets of ${}^\omega 2$, and that $\mathcal{N}^*_J$ contains non-meager sets. Concerning the connection between $\mathcal{N}_J$ and $\mathcal{N}_K$ for different ideals $J$ and $K$ on $\omega$ (and likewise for $\mathcal{N}^*_J$ and $\mathcal{N}^*_K$), we discovered a deep connection between these ideals and the notion of \emph{near coherence of ideals (or filters)} on $\omega$, originally due to Blass~\cite{blass86}. The ideals $J$ and $K$ are \emph{nearly coherent} if there is some finite-to-one function $f\colon \omega\to \omega$ such that $\set{y\subseteq\omega}{f^{-1}[y]\in J\cup K}$ generates an ideal. We prove that near coherence of ideals is characterized as follows: \begin{mainthm}[\autoref{nc-char}]\label{maincoherence} Let $J$ and $K$ be ideals on $\omega$. Then the following statements are equivalent: \begin{enumerate}[label = \rm (\roman*)] \item\label{it:coh1} $J$ and $K$ are nearly coherent. 
\item\label{it:coh2} There is some ideal $K'$ such that $\mathcal{N}^*_J\cup \mathcal{N}^*_K\subseteq \mathcal{N}^*_{K'}\subseteq \mathcal{N}_{K'}\subseteq \mathcal{N}_J\cap \mathcal{N}_K$. \item $\mathcal{N}^*_J\subseteq\mathcal{N}_K$. \end{enumerate} \end{mainthm} This means that, whenever $J$ and $K$ are \underline{not} nearly coherent, the ideals $\mathcal{N}_J$ and $\mathcal{N}_K$ are quite different, and likewise for $\mathcal{N}^*_J$ and $\mathcal{N}^*_K$. Blass and Shelah~\cite{blassshelah} proved that it is consistent with ZFC that any pair of ideals is nearly coherent, which is known as \ref{NCF}, the principle of near coherence of filters. \autoref{maincoherence} implies that, under \ref{NCF}, there is only one $\mathcal{N}_J$ for maximal ideals $J$ on $\omega$. We still do not know whether \ref{NCF} implies that there is just one $\mathcal{N}_J$ (or $\mathcal{N}^*_J$) for $J$ without the Baire property. On the other hand, when we assume that there exist ideals that are not nearly coherent (which is consistent with ZFC, e.g.\ it holds under CH and in the random model, see~\cite[Sec.~4]{blass86}), we can construct a non-meager ideal $K$ on $\omega$ such that $\mathcal{N}^*_K\neq \mathcal{N}_K$ (\autoref{noncoh:sum}). In contrast with the previous question, we do not know whether ZFC proves the existence of an ideal $K$ without the Baire property such that $\mathcal{N}^*_K\neq \mathcal{N}_K$. The proof of \autoref{maincoherence} uses Eisworth's game that characterizes near coherence~\cite{Eis01}. Another element relevant to this proof is the order $\mathrel{\leq_{\overline{\mathrm{KB}}}}$, which is the converse of the Kat\v{e}tov-Blass order (see \autoref{def:RB}). If $J$ and $K$ are nearly coherent then it is clear that there is some ideal $K'$ such that $K'\mathrel{\leq_{\overline{\mathrm{KB}}}} J,K$, but the converse is also true thanks to \autoref{maincoherence}. This equivalence is claimed in~\cite{blass86}, but here we present an alternative proof using our new ideals. We also study the cardinal characteristics associated with the ideals $\mathcal{N}_J$ and $\mathcal{N}^*_J$, i.e., additivity, covering, uniformity, and cofinality. Recall that $\mathfrak{s}$ denotes the \emph{splitting number} and $\mathfrak{r}$ the \emph{reaping number}.\footnote{We assume that the reader is somewhat familiar with classical cardinal characteristics of the continuum, so we do not repeat their definitions in this paper. The reader can refer to e.g.~\cite{blassbook}.} In ZFC, we can prove: \begin{mainthm}\label{mainZFCcard} Let $J$ be an ideal on $\omega$. 
With respect to Cicho\'n's diagram (see \autoref{fig:cichon}): \begin{enumerate}[label =\rm (\alph*)] \item\label{it:covnon} $\cov(\mathcal{N})\leq \cov(\mathcal{N}_J) \leq \cov(\mathcal{N}^*_J)\leq \cov(\mathcal{E}) \leq \min\{\cof(\mathcal{M}),\mathfrak{r}\}$ and \[\max\{\add(\mathcal{M}),\mathfrak{s}\}\leq \non(\mathcal{E})\leq \non(\mathcal{N}^*_J)\leq \non(\mathcal{N}_J) \leq \non(\mathcal{N}),\] i.e.\ the coverings of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\cov(\mathcal{N})$ and $\min\{\cof(\mathcal{M}),\mathfrak{r}\}$, and their uniformities are between $\max\{\add(\mathcal{M}),\mathfrak{s}\}$ and $\non(\mathcal{N})$. \item\label{it:addcof} The additivities of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\add(\mathcal{N})$ and $\cov(\mathcal{M})$, and their cofinalities are between $\non(\mathcal{M})$ and $\cof(\mathcal{N})$. \end{enumerate} \end{mainthm} \begin{figure}[ht] \centering \begin{tikzpicture} \small{ \node (aleph1) at (-1.5,0) {$\aleph_1$}; \node (addn) at (0,0){$\add(\mathcal{N})$}; \node (adNJ) at (1,2) {$\add(\mathcal{N}_J)$}; \node (adNJ*) at (3,3) {$\add(\mathcal{N}^*_J)$}; \node (covn) at (0,10){$\cov(\mathcal{N})$}; \node (cvNJ) at (1,8) {$\cov(\mathcal{N}_J)$}; \node (cvNJ*) at (3,7) {$\cov(\mathcal{N}^*_J)$}; \node (cove) at (5,6) {$\cov(\mathcal{E})$}; \node (b) at (4,5) {$\mathfrak{b}$}; \node (addm) at (4,0) {$\add(\mathcal{M})$} ; \node (nonm) at (4,10) {$\non(\mathcal{M})$} ; \node (none) at (7,4) {$\non(\mathcal{E})$}; \node (d) at (8,5) {$\mathfrak{d}$}; \node (covm) at (8,0) {$\cov(\mathcal{M})$} ; \node (cfm) at (8,10) {$\cof(\mathcal{M})$} ; \node (nonn) at (12,0) {$\non(\mathcal{N})$} ; \node (nnNJ) at (11,2) {$\non(\mathcal{N}_J)$}; \node (nnNJ*) at (9,3) {$\non(\mathcal{N}^*_J)$}; \node (cfn) at (12,10) {$\cof(\mathcal{N})$} ; \node (cfNJ) at (11,8) {$\cof(\mathcal{N}_J)$}; \node (cfNJ*) at (9,7){$\cof(\mathcal{N}^*_J)$}; \node (s) at (-1,3) {$\mathfrak{s}$}; \node (r) at (13,7) {$\mathfrak{r}$}; \node (c) at (13.5,10) {$\mathfrak{c}$}; \foreach \from/\to in { aleph1/addn, cfn/c, aleph1/s, r/c, s/none, s/d, b/r, cove/r, adNJ/nnNJ, adNJ*/nnNJ*, cvNJ/cfNJ, cvNJ*/cfNJ*, addm/b, b/nonm, b/d, addm/covm, covm/d, nonm/cfm, d/cfm, addm/cove, addm/none, cove/cfm, none/cfm, covm/cove, none/nonm, addn/adNJ, addn/adNJ*, covn/cvNJ, cvNJ/cvNJ*, cvNJ*/cove, addn/covn, adNJ/cvNJ, adNJ*/cvNJ*, addn/addm, covn/nonm, covm/nonn, cfm/cfn, nonn/cfn, adNJ/covm, adNJ*/covm, none/nnNJ*, nnNJ*/nnNJ, nnNJ/nonn, nnNJ/cfNJ, nnNJ*/cfNJ*, cfNJ*/cfn, cfNJ/cfn, nonm/cfNJ*, nonm/cfNJ} { \path[-,draw=white,line width=3pt] (\from) edge (\to); \path[->] (\from) edge (\to); } } \end{tikzpicture} \caption{Cicho\'n's diagram together with the cardinal characteristics associated with our ideals, as well as $\mathfrak{s}$ and $\mathfrak{r}$, as stated in \autoref{mainZFCcard}.} \label{fig:cichon} \end{figure} The previous theorem summarizes \autoref{covnonvscichon} and \autoref{addN-covM}. Item~\ref{it:covnon} follows directly by the subset relation between the ideals, and also because $\add(\mathcal{E})=\add(\mathcal{M})$ and $\cof(\mathcal{E})=\cof(\mathcal{M})$ due to Bartoszy\'nski and Shelah~\cite{BartSh}. Results from the latter reference easily guarantee the connections with $\cov(\mathcal{M})$ and $\non(\mathcal{M})$ in~\ref{it:addcof}, but the connections with $\add(\mathcal{N})$ and $\cof(\mathcal{N})$ require quite some work. To prove this, we define two cardinal characteristics $\mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})$. 
It will not be hard to show that the additivities and cofinalities of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})$ (\autoref{def:Sbf}), and that $\mathfrak{b}_\mathrm{Fin}({\overline{\Omega}})\leq \mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})\leq \mathfrak{d}_\mathrm{Fin}({\overline{\Omega}})$. The real effort is to prove the following new characterization of $\add(\mathcal{N})$ and $\cof(\mathcal{N})$. \begin{mainthm}[\autoref{omegabarFin}]\label{maincharadd} $\mathfrak{b}_\mathrm{Fin}({\overline{\Omega}})=\add(\mathcal{N})$ and $\mathfrak{d}_\mathrm{Fin}({\overline{\Omega}})=\cof(\mathcal{N})$. \end{mainthm} In terms of inequalities with classical characteristics of the continuum, \autoref{mainZFCcard} seems to be optimal: we also manage to prove that, in most cases, no further inequalities can be proved, not just with the cardinals in Cicho\'n's diagram, but also with many other cardinal characteristics, as illustrated in \autoref{fig:all20}. We leave just a few open questions, for example, whether it is consistent that $\cov(\mathcal{N}_J)<\add(\mathcal{M})$ (and even smaller than the pseudo-intersection number $\mathfrak{p}$) for some maximal ideal $J$ (likewise $\cof(\mathcal{M})<\non(\mathcal{N}_J)$). This is all dealt with in \autoref{sec:cons}. Many consistency results supporting the above come from the forcing model after adding uncountably many Cohen reals. \begin{mainthm}[\autoref{cohenthm}]\label{mainCohen} Let $\lambda$ be an uncountable cardinal. After adding $\lambda$-many Cohen reals: for any regular uncountable $\kappa\leq\lambda$ there is some (maximal) ideal $J^\kappa$ on $\omega$ such that $\add(\mathcal{N}_{J^\kappa})=\cof(\mathcal{N}_{J^\kappa})=\kappa$. \end{mainthm} This shows that there are many different values for the cardinal characteristics associated with different $\mathcal{N}_J$ after adding many Cohen reals. This suggests that many of these values can be strictly between $\non(\mathcal{M})$ and $\cov(\mathcal{M})$ because, after adding $\lambda$-many Cohen reals, $\non(\mathcal{M})=\aleph_1$ and $\lambda\leq\cov(\mathcal{M})$ (and $\mathfrak{c}=\lambda$ when $\lambda^{\aleph_0}=\lambda$). See more in \autoref{sec:cons}, specifically item~\ref{modelCohen}. This is inspired by Canjar's result~\cite{Canjar} stating that, after adding $\lambda$-many Cohen reals, for any uncountable regular $\kappa\leq\lambda$ there is some (maximal) ideal $J^\kappa$ such that $\mathfrak{b}_{J^\kappa}=\mathfrak{d}_{J^\kappa}=\kappa$. \subsection*{Structure of the paper} In \autoref{sec:NmodI} we define $\mathcal{N}_J$ and $\mathcal{N}^*_J$, prove their basic properties and the monotonicity with respect to the orders $\mathrel{\leq_{\mathrm{KB}}}$, $\mathrel{\leq_{\overline{\mathrm{KB}}}}$ and $\mathrel{\leq_{\mathrm{RB}}}$, and show that $\mathcal{N}_J=\mathcal{N}$ and $\mathcal{N}^*_J=\mathcal{E}$ when $J$ has the Baire property. In \autoref{sec:nBP} we deal with ideals without the Baire property, complete the proof of \autoref{mainBP}, and prove \autoref{mainBP2}. \autoref{sec:near} is devoted to our results related to near coherence of ideals, specifically the proof of \autoref{maincoherence}. \autoref{sec:cardinalityNJ} presents ZFC results about the cardinal characteristics associated with our new ideals, mainly the proof of \autoref{mainZFCcard} and \autoref{maincharadd}, and \autoref{sec:cons} deals with the consistency results and \autoref{mainCohen}. 
In addition, we discuss in \autoref{sec:var} the notions of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ in other classical Polish spaces (like ${\mathbb R}$) and some weak versions of our ideals. Finally, in \autoref{sec:Q} we summarize some open questions related to this work. \subsection*{Acknowledgments} The authors thank Miguel Cardona and the~participants of the Ko\v sice set-theoretical seminar, especially Jaroslav \v Supina, for very fruitful discussions about this work. \section{Measure zero modulo ideals}\label{sec:NmodI} We first present some basic notation. In general, by an~\emph{ideal on $M$} we understand a~family $J\subseteq \pts(M)$ that is hereditary (i.e. $a\in J$ for any $a \subseteq b\in J$), closed under finite unions, containing all finite subsets of~$M$, and such that $M\notin J$. Let us emphasize that ideals on $\omega$, or on any countable set, cannot be $\sigma$-ideals. We focus on ideals on $\omega$ and we use the letters $J$ and $K$ exclusively to denote such ideals. For $P\subseteq\pts(M)$ we denote \[ P^{d}=\set{a\subseteq M}{M\smallsetminus a\in P}. \] Recall that $F\subseteq\pts(M)$ is a~\emph{filter} when $F^d$ is an~ideal. A~maximal filter $U\subseteq\pts(M)$ is called an~\emph{ultrafilter}. For an~ideal $K\subseteq\pts(M)$ we denote $K^{+}=\pts(M)\smallsetminus K$. One can see that $a\in K^+$ if and only if $M\smallsetminus a\notin K^d$. We denote by $\mu$ the~Lebesgue measure defined on~the Cantor space ${}^\omega2$, that is, the (completion of the) product measure on ${}^\omega 2=\prod_{n<\omega}2$, where $2=\{0,1\}$ is endowed with the probability measure that assigns $\{0\}$ measure $\frac{1}{2}$. In fact, for any $s\in 2^{<\omega}$, $\mu([s])=2^{-|s|}$, where $|s|$ denotes the length of $s$ and $[s]:=\set{x\in {}^\omega 2}{s\subseteq x}$. Recall that $\set{[s]}{s\in 2^{<\omega}}$ is a base of clopen sets of the topology of ${}^\omega 2$. Then \[ \mathcal{N}:=\set{A\subseteq {}^\omega2}{\leb{A}=0} \text{ \quad and \quad } \mathcal{M}:=\set{A\subseteq {}^\omega2}{A \text{ is meager}} \] are $\sigma$-ideals, i.e.\ the union of any countable subset of the ideal belongs to the ideal. The ideal $\mathcal{N}$ has a combinatorial characterization in terms of clopen sets. To present this, we fix the following terminology. We usually write $\bar{c} :=\seqn{c_n}{n\in\omega}$ for sequences of sets. \begin{definition}\label{def:clpN} Denote $\Omega:=\set{c\subseteq {}^\omega2}{c \text{ is a~clopen set}}$. For $\varepsilon\colon \omega\to(0,\infty)$, consider the set \[ \Omega^*_\varepsilon := \set{\bar{c}\in{}^\omega\Omega}{(\forall\, n\in\omega)\ \leb{c_n}\leq \varepsilon_n}. \] For each $\bar{c}\in{}^\omega\Omega$ (or in ${}^\omega\pts({}^\omega2)$ in general) denote \[ \begin{split} N(\bar{c}) & :=\bigcap_{m<\omega}\bigcup_{n\geq m}c_n =\set{x\in {}^\omega2}{|\set{n\in\omega}{x\in c_n}|=\aleph_0},\\ N^*(\bar c) & :=\bigcup_{m<\omega}\bigcap_{n\geq m}c_n =\set{x\in{}^\omega 2}{|\set{n\in\omega}{x\notin c_n}|<\aleph_0}. \end{split} \] \end{definition} \begin{fact}[{\cite[Lemma~2.3.10]{BJ}}]\label{basicN2} Let $\varepsilon:\omega\to(0,\infty)$, and assume that $\sum_{i\in\omega}\varepsilon_i<\infty$. Then: \begin{enumerate}[label=\rm (\alph*)] \item For any $\bar{c}\in\Omega^*_\varepsilon$, $N(\bar{c})\in\mathcal{N}$. 
\item For any $X\subseteq {}^\omega2$, $X\in\mathcal{N}$ iff $(\exists\, \bar{c}\in\Omega^*_\varepsilon)\ X\subseteq N(\bar{c})$. \end{enumerate} \end{fact} The analog of \autoref{basicN2} using $N^*(\bar c)$ becomes the characterization of $\mathcal{E}$, the $\sigma$-ideal generated by the closed measure zero subsets of ${}^\omega 2$. \begin{fact}\label{basicE} Let $\varepsilon:\omega\to(0,\infty)$ and assume that $\liminf_{i\to\infty}\varepsilon_i=0$. Then: \begin{enumerate}[label=\rm (\alph*)] \item\label{it:inE} For any $\bar{c}\in\Omega^*_\varepsilon$, $N^*(\bar{c})\in\mathcal{E}$. \item\label{it:charE} For any $X\subseteq {}^\omega2$, $X\in\mathcal{E}$ iff $(\exists\, \bar{c}\in\Omega^*_\varepsilon)\ X\subseteq N^*(\bar{c})$. \end{enumerate} \end{fact} \begin{proof} Item~\ref{it:inE} is clear because, for any $n\in\omega$, $\bigcap_{m\geq n}c_m$ is closed and, since $\liminf\limits_{i\to\infty}\varepsilon_i=0$ and $\bar c\in\Omega^*_\varepsilon$, it has measure zero. For~\ref{it:charE}, the implication $\Leftarrow$ is clear by~\ref{it:inE}. To see $\Rightarrow$, let $X\in\mathcal{E}$, i.e. $X\subseteq \bigcup_{n<\omega}F_n$ for some increasing sequence $\langle F_n:\, n<\omega\rangle$ of closed measure zero sets. For each $n<\omega$, we can cover $F_n$ with countably many basic clopen sets $[s_{n,k}]$ ($k<\omega$) such that $\sum\limits_{k<\omega}\mu([s_{n,k}])<\varepsilon_n$, but by compactness only finitely many of them cover $F_n$, so $F_n\subseteq c_n:=\bigcup_{k<m_n}[s_{n,k}]$ for some $m_n<\omega$, and $\mu(c_n)<\varepsilon_n$. Then $\bar c:=\langle c_n:\, n<\omega\rangle$ is as required. \end{proof} Motivated by the~combinatorial characterization of $\mathcal{N}$ and $\mathcal{E}$ presented in \autoref{basicN2} and~\ref{basicE}, we introduce a smooth modification of these via ideals on~$\omega$. To start, we fix more terminology and strengthen the previous characterizations. \begin{definition}\label{def:omegabar} Denote ${\overline{\Omega}} :=\set{\bar{c}\in{}^\omega\Omega}{ N(\bar{c})\in \mathcal{N}}$. \end{definition} By \autoref{basicN2}~\ref{it:inE} we have that $\Omega^*_\varepsilon\subseteq{\overline{\Omega}}$ whenever $\varepsilon\colon\omega\to(0,\infty)$ and $\sum_{i<\omega}\varepsilon_i<\infty$. Hence, as a direct consequence of \autoref{basicN2}: \begin{fact}\label{basicNE} For any $X\subseteq{}^\omega 2$, $X\in\mathcal{N}$ iff $X\subseteq N(\bar c)$ for some $\bar c\in{\overline{\Omega}}$. \end{fact} We also have the analog version of $\mathcal{E}$. Before stating it, we characterize ${\overline{\Omega}}$ as follows. \begin{lemma}\label{chomegabar} For any sequence $\bar c=\langle c_i:\, i<\omega\rangle$, the following statements are equivalent. \begin{enumerate}[label=\rm (\roman*)] \item\label{it:inombar} $\bar c\in{\overline{\Omega}}$. \item\label{it:borelch} $(\forall\, \varepsilon\in {\mathbb Q}^+)\ (\exists\, N<\omega)\ (\forall\, n\geq N)\ \mu\left(\bigcup_{N\leq i<n}c_i\right)<\varepsilon$ \item\label{it:anypart} For any $\varepsilon\colon \omega\to(0,\infty)$ there is some interval partition $\bar I=\langle I_n:\, n<\omega\rangle$ of $\omega$ such that $\mu\left(\bigcup_{i\in I_n}c_i\right)<\varepsilon_n$ for all $n>0$. \item\label{it:expart} There is an interval partition $\bar I=\langle I_n:\, n<\omega\rangle$ of $\omega$ such that \[\sum_{n<\omega}\mu\left(\bigcup_{i\in I_n}c_i\right)<\infty.\] \end{enumerate} \end{lemma} \begin{proof} $\ref{it:inombar} \mathrel{\mbox{$\Rightarrow$}} \ref{it:borelch}$: Let $\varepsilon$ be a positive rational number. 
Since $\bar c\in{\overline{\Omega}}$, $\mu(N(\bar c))=0$, so $\lim\limits_{n\to\infty}\mu\left(\bigcup_{i\geq n}c_i\right)=0$. Then $\mu\left(\bigcup_{i\geq N}c_i\right)<\varepsilon$ for some $N<\omega$, which clearly implies that $\mu\left(\bigcup_{N\leq i<n}c_i\right)<\varepsilon$ for all $n\geq N$. $\ref{it:borelch}\mathrel{\mbox{$\Rightarrow$}}\ref{it:anypart}$: Let $\varepsilon\colon \omega\to(0,\infty)$. Using~\ref{it:borelch}, by recursion on $n<\omega$ we define an increasing sequence $\langle m_n:\, n<\omega\rangle$ with $m_0=0$ such that $\mu\left(\bigcup_{m_{n+1}\leq i<k}c_i\right)<\varepsilon_{n+1}$ for all $k\geq m_{n+1}$. Then $I_n:=[m_n,m_{n+1})$ is as required. $\ref{it:anypart}\mathrel{\mbox{$\Rightarrow$}}\ref{it:expart}$: Apply~\ref{it:anypart} to $\varepsilon_n:=2^{-n}$. $\ref{it:expart}\mathrel{\mbox{$\Rightarrow$}}\ref{it:inombar}$: Choose $\bar I$ as in~\ref{it:expart}, and let $c'_n:=\bigcup_{i\in I_n}c_i$. It is clear that $N(\bar c)=N(\bar c')$ and $\bar c'\in\Omega^*_\varepsilon$ where $\varepsilon\colon\omega\to(0,\infty)$, $\varepsilon_n:=\mu(c'_n)+2^{-n}$. Since $\sum_{n<\omega}\mu(c'_n)<\infty$, by \autoref{basicN2} we obtain that $N(\bar c)=N(\bar c')\in\mathcal{N}$. Thus $\bar c\in{\overline{\Omega}}$. \end{proof} As a consequence of \autoref{chomegabar}~\ref{it:borelch}, considering $\Omega$ as a countable discrete space: \begin{corollary}\label{cor:omegabarBorel} The set ${\overline{\Omega}}$ is Borel in ${}^\omega\Omega$. \end{corollary} \begin{lemma}\label{charE} The ideal $\mathcal{E}$ is characterized as follows. \begin{enumerate}[label=\rm (\alph*)] \item\label{it:N*c} $N^*(\bar c)\in\mathcal{E}$ for any $\bar c\in{\overline{\Omega}}$. \item\label{it:Ech} For $X\subseteq{}^\omega 2$, $X\in\mathcal{E}$ iff $(\exists\, \bar c\in{\overline{\Omega}})\ X\subseteq N^*(\bar c)$. \end{enumerate} \end{lemma} \begin{proof} \ref{it:N*c} is proved similarly as \autoref{basicE}~\ref{it:inE}, noting that $\bar c\in{\overline{\Omega}}$ implies that $\lim_{i\to\infty}\mu(c_i)=0$ (by \autoref{chomegabar}~\ref{it:borelch}). \ref{it:Ech} follows by~\ref{it:N*c} and \autoref{basicE}. \end{proof} We use this characterization to introduce the promised generalized versions of $\mathcal{N}$ and $\mathcal{E}$. Consider the ideal $\mathrm{Fin}$ of finite subsets of $\omega$. For $\bar c\in{}^\omega\Omega$ and $x\in{}^\omega 2$, note that, \begin{linenomath} \begin{align*} x\in N(\bar c) & {\ \mbox{$\Leftrightarrow$} \ } \set{n\in\omega}{x\in c_n}\in\mathrm{Fin}^+,\\ x\in N^*(\bar c) & {\ \mbox{$\Leftrightarrow$} \ } \set{n\in\omega}{x\in c_n}\in\mathrm{Fin}^d. \end{align*} \end{linenomath} Replacing $\mathrm{Fin}$ by an arbitrary ideal on $\omega$, we obtain the following notion. \begin{definition}\label{def:NJ} Fix an ideal $J}%{\mathcal{J}$ on $\omega$. For $\bar c\in{}^\omega\Omega$, define \begin{linenomath} \begin{align*} N_{J}%{\mathcal{J}}(\bar{c})&:=\set{x\in {}^\omega2}{\set{n\in\omega}{x\in c_n}\inJ}%{\mathcal{J}^+},\\ N^*_{J}%{\mathcal{J}}(\bar{c})&:=\set{x\in {}^\omega2}{\set{n\in\omega}{x\in c_n}\in J}%{\mathcal{J}^d}. \end{align*} \end{linenomath} These sets are used to define the families \begin{linenomath} \begin{align}\label{Nj2} \NidealJ{J}%{\mathcal{J}}& := \set{X\subseteq {}^\omega2}{(\exists\, \bar{c}\in {\overline{\Omega}})\ X\subseteq N_{\mathcal{J }}(\bar{c})},\\ \label{Nstarj2} \mathcal{N}^*_{J}%{\mathcal{J}}&:=\set{X\subseteq {}^\omega2}{(\exists\, \bar{c}\in{\overline{\Omega}})\ X\subseteq N^*_{J}%{\mathcal{J}}(\bar{c})}. 
\end{align} \end{linenomath} We say that the members of $\mathcal{N}_J}%{\mathcal{J}$ \emph{have measure zero (or are null) modulo $J}%{\mathcal{J}$}. \end{definition} Due to \autoref{basicNE} and~\autoref{charE}, we obtain $\NidealJ{\mathrm{Fin}}=\mathcal{N}$ and $\mathcal{E}=\mathcal{N}^*_{\mathrm{Fin}}$. Moreover, one can easily see that, if $J}%{\mathcal{J}\subseteqK}%{\mathcal{K}$ are ideals on $\omega$, then \begin{equation}\label{eq:cont} \mathcal{N}^*_{J}%{\mathcal{J}}\ \subseteq\ \mathcal{N}^*_{K}%{\mathcal{K}}\ \subseteq\ \mathcal{N}_K}%{\mathcal{K}\ \subseteq\ \mathcal{N}_J}%{\mathcal{J}. \end{equation} In particular, we obtain \begin{equation}\label{eq:contfin} \mathcal{E}=\mathcal{N}^*_\mathrm{Fin}\subseteq\mathcal{N}^*_{J}%{\mathcal{J}}\subseteq\mathcal{N}_J}%{\mathcal{J}\subseteq\mathcal{N}_\mathrm{Fin}=\mathcal{N}. \end{equation} Furthermore, if $J}%{\mathcal{J}$ is a~maximal ideal (i.e.\ its dual $J}%{\mathcal{J}^d$ is an~ultrafilter), then $\NidealJ{J}%{\mathcal{J}}=\NstaridealJ{J}%{\mathcal{J}}$. We can prove that, indeed, both $\mathcal{N}_J}%{\mathcal{J}$ and $\mathcal{N}^*_J}%{\mathcal{J}$ are $\sigma$-ideals on the reals, exactly like in the original notions. \begin{lemma}\label{NJsigma2} Let $J}%{\mathcal{J}$ be an ideal on~$\omega$. Then both $\NidealJ{J}%{\mathcal{J}}$ and $\NstaridealJ{J}%{\mathcal{J}}$ are $\sigma$-ideals on~${}^\omega 2$. \end{lemma} \begin{proof} Thanks to \autoref{eq:contfin}, both $\NidealJ{J}%{\mathcal{J}}$ and $\NstaridealJ{J}%{\mathcal{J}}$ contain all finite subsets of ${}^\omega 2$ and the whole space ${}^\omega 2$ does not belong to them. It is also clear that both families are downwards closed under $\subseteq$, so it is enough to verify that both $\NidealJ{J}%{\mathcal{J}}$ and $\NstaridealJ{J}%{\mathcal{J}}$ are closed under countable unions. Consider $\bar{c}^k \in{\overline{\Omega}}$ for each $k\in\omega$. By recursion, using \autoref{chomegabar}~\ref{it:borelch}, we define an increasing sequence $\seqn{n_l}{l<\omega}$ of natural numbers such that $\mu\left(\bigcup_{n\geq n_\ell}c^k_n\right)<\frac{1}{(l+1)2^l}$ for all $k\leq l$. Let $I_0:=[0,n_1)$ and $I_l:=[n_l,n_{l+1})$ for $l>0$. Then, we define the~sequence $\bar{c}$ by \begin{linenomath} \begin{equation*} c_n = \begin{cases} \bigcup\limits_{k\leq l}c^k_n & \text{if } n\in[n_l,n_{l+1}),\\[2ex] \emptyset & \text{if }n<n_0. \end{cases} \end{equation*} \end{linenomath} Finally, \[\sum_{l<\omega}\mu\left(\bigcup_{n\in I_l}c_n\right)\leq \sum_{l<\omega}\sum_{k\leq\ell}\mu\left(\bigcup_{n\geq n_l}c^k_n\right)\leq \sum_{l<\omega}\sum_{k\leq\ell}\frac{1}{(l+1)2^l}=\sum_{l<\omega}\frac{1}{2^l}<\infty,\] so $\bar c\in{\overline{\Omega}}$. It is clear that $\bigcup_{k\in\omega}N_J}%{\mathcal{J}(\bar{c}^k)\subseteq N_J}%{\mathcal{J}(\bar{c})$ and $\bigcup_{k\in\omega}N^*_J}%{\mathcal{J}(\bar{c}^k)\subseteq N^*_J}%{\mathcal{J}(\bar{c})$. \end{proof} \begin{remark}\label{rem:opennonew} The following alternative definition does not bring anything new: Let ${\overline{\Omega}}_0$ be the set of countable sequences $\bar a=\langle a_n:n<\omega\rangle$ of open subsets of ${}^\omega2$ such that $N(\bar a)\in\mathcal{N}$. Define $N_J}%{\mathcal{J}(\bar{a})$ similarly, and $\mathcal{N}^0_J}%{\mathcal{J}$ as the family of subsets of ${}^\omega2$ that are contained in some set of the form $N_J}%{\mathcal{J}(\bar a)$ for some $\bar a\in{\overline{\Omega}}_0$. Define $\mathcal{N}^{*0}_J}%{\mathcal{J}$ analogously. 
It is not hard to show that $\mathcal{N}^{*0}_J=\mathcal{N}^0_J=\mathcal{N}$. The inclusions $\mathcal{N}^{*0}_J\subseteq\mathcal{N}^0_J\subseteq\mathcal{N}$ are clear; to see $\mathcal{N}\subseteq\mathcal{N}^{*0}_J$, if $B\in\mathcal{N}$, then we can find some $\bar a\in{\overline{\Omega}}_0$ such that $B\subseteq\bigcap_{n\in\omega}a_n$ and $\mu(a_n)<2^{-n}$, so it is clear that $B\subseteq N^*_J(\bar a)$. For this reason, it is uninteresting to consider sequences of open sets instead of clopen sets. \end{remark}
\begin{example}\label{exm:sum} Consider $\omega={\mathbb N}_1\cup{\mathbb N}_2$ as a disjoint union of infinite sets, let $J_1$ be an ideal on ${\mathbb N}_1$ and $J_2$ an ideal on ${\mathbb N}_2$. Recall the ideal
\[J_1\oplus J_2=\set{x\subseteq\omega}{x\cap{\mathbb N}_1\in J_1 \text{ and } x\cap{\mathbb N}_2 \in J_2}.\]
Then, considering ${}^\omega 2={}^{{\mathbb N}_1}2 \times {}^{{\mathbb N}_2}2$,
\begin{linenomath}
\begin{align*}
N_{J_1\oplus J_2}(\bar c) & =(N_{J_1}(\bar c{\upharpoonright}{\mathbb N}_1)\times {}^{{\mathbb N}_2}2) \cup ({}^{{\mathbb N}_1}2\times N_{J_2}(\bar c{\upharpoonright}{\mathbb N}_2)) \text{ and }\\
N^*_{J_1\oplus J_2}(\bar c) & =N^*_{J_1}(\bar c{\upharpoonright}{\mathbb N}_1)\times N^*_{J_2}(\bar c{\upharpoonright}{\mathbb N}_2).
\end{align*}
\end{linenomath}
By allowing $\pts(\omega)$ instead of an ideal, $N_{\pts(\omega)}(\bar c)=\emptyset$ and $N^*_{\pts(\omega)}(\bar c)={}^\omega 2$, so $N_{J_1\oplus\pts({\mathbb N}_2)}(\bar c)=N_{J_1}(\bar c{\upharpoonright} {\mathbb N}_1)\times {}^{{\mathbb N}_2}2$ and $N^*_{\pts({\mathbb N}_1)\oplus J_2}(\bar c)= {}^{{\mathbb N}_1}2\times N^*_{J_2}(\bar c{\upharpoonright}{\mathbb N}_2)$. Therefore
\begin{linenomath}
\begin{align*}
N_{J_1\oplus J_2}(\bar c) & = N_{J_1\oplus \pts({\mathbb N}_2)}(\bar c)\cup N_{\pts({\mathbb N}_1)\oplus J_2}(\bar c),\\
N^*_{J_1\oplus J_2}(\bar c) & = N^*_{J_1\oplus \pts({\mathbb N}_2)}(\bar c)\cap N^*_{\pts({\mathbb N}_1)\oplus J_2}(\bar c),
\end{align*}
\end{linenomath}
hence $\mathcal{N}_{J_1\oplus J_2}$ is generated by $\mathcal{N}_{J_1\oplus \pts({\mathbb N}_2)}\cup \mathcal{N}_{\pts({\mathbb N}_1)\oplus J_2}$, and $\mathcal{N}^*_{J_1\oplus J_2}=\mathcal{N}^*_{J_1\oplus \pts({\mathbb N}_2)}\cap \mathcal{N}^*_{\pts({\mathbb N}_1)\oplus J_2}$. \end{example}
We now review the following classical order on ideals.
\begin{definition}\label{def:RB} Let $M_1$ and $M_2$ be infinite sets, $K_1\subseteq\pts(M_1)$ and $K_2\subseteq\pts(M_2)$. If $\varphi\colon M_2\to M_1$, the \emph{projection of $K_2$ under $\varphi$} is the family
\[ \varphi^{\to}(K_2)=\set{A\subseteq M_1}{\varphi^{-1}(A)\in K_2}. \]
We write $K_1\leq_{\varphi}K_2$ when $K_1\subseteq\varphi^{\to}(K_2)$, i.e., $\varphi^{-1}(I)\in K_2$ for any $I\in K_1$.
\begin{enumerate}[label = \rm (\arabic*)]
\item \cite{katetov,blass86} The \emph{Kat\v{e}tov-Blass order} is defined by $K_1\leq_{\text{KB}}K_2$ iff there is a~finite-to-one function $\varphi\colon M_2\to M_1$ such that $I\in K_1$ implies $\varphi^{-1}(I)\in K_2$, i.e.\ $K_1\subseteq\varphi^{\to}(K_2)$.
\item \cite{blass86,LZ} The \emph{Rudin-Blass order} is defined by $K_1\leq_{\text{RB}}K_2$ iff there is a~finite-to-one function $\varphi\colon M_2\to M_1$ such that $I\in K_1$ if and only if $\varphi^{-1}(I)\in K_2$, i.e.\ $\varphi^{\to}(K_2)=K_1$.
\item We also consider the ``converse'' of the Kat\v{e}tov-Blass order: $K_1\mathrel{\leq_{\overline{\mathrm{KB}}}} K_2$ iff there is some finite-to-one function $\varphi\colon M_2\to M_1$ such that $\varphi^\to(K_2)\subseteq K_1$.
\end{enumerate}
Note that $K_1\mathrel{\leq_{\mathrm{RB}}} K_2$ implies $K_1\mathrel{\leq_{\mathrm{KB}}} K_2$ and $K_1\mathrel{\leq_{\overline{\mathrm{KB}}}} K_2$. Also recall that $K_1\subseteq K_2$ implies $K_1\mathrel{\leq_{\mathrm{KB}}} K_2$ and $K_2\mathrel{\leq_{\overline{\mathrm{KB}}}} K_1$ (using the identity function). Recall that, if $K_2$ is an~ideal on $M_2$, then $\varphi^{\to}(K_2)$ is downwards closed under $\subseteq$ and closed under finite unions, and $M_1\not\in\varphi^{\to}(K_2)$. If $\varphi$ is in addition finite-to-one then $\varphi^{\to}(K_2)$ is an ideal. \end{definition}
We show that our defined $\sigma$-ideals behave well under the previous orders. In fact, results of this kind are somewhat expected and can usually be obtained for many well-known objects in topology. For example, it is known from \cite{FaSo} that:
\begin{enumerate}[label = \rm (\arabic*)]
\item If $K\mathrel{\leq_{\mathrm{KB}}} J$ then $\mathfrak{b}_K\leq \mathfrak{b}_J$ and $\mathfrak{d}_J\leq \mathfrak{d}_K$.
\item If $K \mathrel{\leq_{\overline{\mathrm{KB}}}} J$ then $\mathfrak{b}_J\leq \mathfrak{b}_K$ and $\mathfrak{d}_K\leq \mathfrak{d}_J$.
\item If $K\mathrel{\leq_{\mathrm{RB}}} J$ then $\mathfrak{b}_K=\mathfrak{b}_J$ and $\mathfrak{d}_K=\mathfrak{d}_J$.
\end{enumerate}
We present another similar example in \autoref{KBomegabar}.\footnote{More similar examples of such implications can be found in~\cite{Hrusak,SS,Miro2}.}
\begin{theorem}\label{RBeq} Let $J$ and $K$ be ideals on $\omega$.
\begin{enumerate}[label=\rm(\alph*)]
\item\label{it:underKB} If $K\mathrel{\leq_{\mathrm{KB}}} J$ then $\mathcal{N}^*_K\subseteq\mathcal{N}^*_J$ and $\mathcal{N}_J\subseteq\mathcal{N}_K$.
\item\label{it:underKB'} If $K\mathrel{\leq_{\overline{\mathrm{KB}}}} J$ then $\mathcal{N}^*_J\subseteq \mathcal{N}^*_K$ and $\mathcal{N}_K\subseteq \mathcal{N}_J$.
\item\label{it:underRB} If $K\mathrel{\leq_{\mathrm{RB}}} J$ then $\mathcal{N}^*_J=\mathcal{N}^*_K$ and $\mathcal{N}_J=\mathcal{N}_K$.
\end{enumerate}
\end{theorem}
\begin{proof} Fix a finite-to-one function $f\colon \omega\to\omega$ and let $I_n:=f^{-1}[\{n\}]$ for any $n<\omega$. Given $\bar c\in {\overline{\Omega}}$ we define sequences $\bar c'$ and $\bar c^-$ by $c'_n:=\bigcup_{k\in I_n}c_k$ and $c^-_k:= c_{f(k)}$. Since $N(\bar c')= N(\bar c)$ and $N(\bar c^-)\subseteq N(\bar c)$, we have that $\bar c',\bar c^- \in {\overline{\Omega}}$.
It is enough to show \begin{enumerate}[label= \rm (\roman*)] \item\label{it:conKB} $K\subseteq f^\to(J)$ implies $N^*_K(\bar c) \subseteq N^*_J(\bar c^-)$ and $N_J(\bar c) \subseteq N_K(\bar c')$, and \item\label{itconKB'} $f^\to(J)\subseteq K$ implies $N^*_J(\bar c)\subseteq N^*_K(\bar c')$ and $N_K(\bar c)\subseteq N_J(\bar c^-)$. \end{enumerate} \ref{it:conKB}: Assume $K\subseteq f^\to(J)$. If $x\in N^*_K}%{\mathcal{K}(\bar c)$ then $\{n<\omega:\, x\notin c_n\}\inK}%{\mathcal{K}$, so \[\{k<\omega:\, x\notin c^-_k\}=f^{-1}[\set{n<\omega}{x\notin c_n}]\inJ}%{\mathcal{J},\] i.e.\ $x\in N^*_J}%{\mathcal{J}(\bar c^-)$; and if $x\in N_J}%{\mathcal{J}(\bar c)$, i.e.\ $\{k<\omega:\, x\in c_k\}\notinJ}%{\mathcal{J}$, then \[ f^{-1}[\{n<\omega:\, x\in c'_n\}] = \{k<\omega:\, x\in c'_{f(k)}\}\supseteq\{k<\omega:\, x\in c_k\}, \] so $f^{-1}[\{n<\omega:\, x\in c'_n\}]\notinJ}%{\mathcal{J}$, i.e.\ $\{n<\omega:\, x\in c'_n\}\notinK}%{\mathcal{K}$, which means that $x\in N_K}%{\mathcal{K}(\bar c')$. \ref{itconKB'} Assume $f^\to(J)\subseteq K$. If $x\in N^*_J}%{\mathcal{J}(\bar c)$, i.e.\ $\{k<\omega:\, x\notin c_k\}\inJ}%{\mathcal{J}$, then \[ f^{-1}[\{n<\omega:\, x\notin c'_n\}] = \{k<\omega:\, x\notin c'_{f(k)}\}\subseteq\{k<\omega:\, x\notin c_k\}, \] so $f^{-1}[\{n<\omega:\, x\notin c'_n\}]\in J}%{\mathcal{J}$, which implies that $\{n<\omega:\, x\notin c'_n\}\inK}%{\mathcal{K}$, i.e.\ $x\in N^*_K}%{\mathcal{K}(\bar c')$; and if $x\in N_K}%{\mathcal{K}(\bar c)$ then $\{n<\omega:\, x\in c_n\}\notinK}%{\mathcal{K}$, so \[\{k<\omega:\, x\in c^-_k\}=f^{-1}[\set{n<\omega}{x\in c_n}]\notinJ}%{\mathcal{J},\] i.e.\ $x\in N_J}%{\mathcal{J}(\bar c^-)$. \end{proof} \begin{example}\label{exm:sum2} Notice that $J\mathrel{\leq_{\mathrm{RB}}} J\oplus\pts(\omega)$, however, $J$ and $\pts(\omega)$ should come from different sets. Concretely, if $J$ is an ideal on $\omega$ then $J\oplus\pts(\omega)$ should be formally taken as $J\oplus\pts({\mathbb N}')$ where ${\mathbb N}'$ is an infinite countable set and $\omega\cap {\mathbb N}'=\emptyset$, so while $\mathcal{N}_{J}$ is an ideal on ${}^\omega 2$, $\mathcal{N}_{J\oplus\pts({\mathbb N}')}$ is an ideal on ${}^\omega 2\times {}^{{\mathbb N}'}2$. In spite of this, \autoref{RBeq} hints us that $\mathcal{N}_{J}$ and $\mathcal{N}_{J\oplus\pts({\mathbb N}')}$ are, morally, the same ideal. More precisely, for any bijection $g\colon \omega\cup {\mathbb N}' \to \omega$, we get $\mathcal{N}_{g^\to(J\oplus \pts({\mathbb N}'))}=\mathcal{N}_J$ (if $f\colon \omega\cup{\mathbb N}' \to \omega$ is a finite-to-one function that witnesses $J\mathrel{\leq_{\mathrm{RB}}} J\oplus\pts({\mathbb N}')$ and $h:=f\circ g^{-1}$, then $J=h^\to(g^\to(J\oplus \pts({\mathbb N}')))$). In other words, there is an isomorphism from ${}^\omega 2\times {}^{{\mathbb N}'}2$ onto ${}^\omega 2$ that makes $\mathcal{N}_{J\oplus\pts({\mathbb N}')}$ and $\mathcal{N}_J$ isomorphic ideals. The same discussion applies to $\mathcal{N}^*_{J\oplus \pts(\omega)}$. \end{example} Recall the following well-known result that characterizes ideals on $\omega$ with the Baire property. \begin{theorem}[Jalani-Naini and Talagrand~{\cite{Talagrand}}]\label{thm:talagrand} Let $J}%{\mathcal{J}$ be an ideal on $\omega$. Then the following statements are equivalent. \begin{multicols}{2} \begin{enumerate}[label=\rm (\roman*)] \item $J}%{\mathcal{J}$ has the Baire property in $\pts(\omega)$. \item $J}%{\mathcal{J}$ is meager in $\pts(\omega)$. \item $\mathrm{Fin}\mathrel{\leq_{\mathrm{RB}}} J}%{\mathcal{J}$. 
\end{enumerate} \end{multicols} \end{theorem} Therefore, as a consequence of \autoref{RBeq}: \begin{theorem}\label{BaireForNJ2} If $J}%{\mathcal{J}$ is an ideal on $\omega$ with the Baire property, then $\mathcal{N}_J}%{\mathcal{J}=\mathcal{N}$ and $\mathcal{N}^*_J}%{\mathcal{J}=\mathcal{E}$. \end{theorem} So the $\sigma$-ideals associated with definable (analytic) ideals do not give new $\sigma$-ideals on ${}^\omega 2$, but they give new characterizations of $\mathcal{N}$ and $\mathcal{E}$. The converse of the previous result is also true, which we fully discuss in the next section. \section{Ideals without the Baire property}\label{sec:nBP} In this section, we study the ideals $\mathcal{N}_J$ and $\mathcal{N}^*_J$ when $J$ does not have the Baire property. With respect to our main results, we finish to prove \autoref{mainBP}: an ideal $J}%{\mathcal{J}$ on $\omega$ without the Baire property gives us new $\sigma$-ideals $\mathcal{N}_J}%{\mathcal{J}$ and $\mathcal{N}^*_J}%{\mathcal{J}$. We prove \autoref{mainBP2} as well (see \autoref{conseqforE2}). One of the~main tools in our study of $\NidealJ{J}%{\mathcal{J}}$ and $\NstaridealJ{J}%{\mathcal{J}}$ is the~technique of \emph{filter games}.\footnote{One can find a~good systematic treatment of filter games and their dual ideal versions in~\cite{Laf, Laflear}.} For the main results mentioned in the previous paragraph, we will use the meager game. \begin{definition}\label{def:nonMgame} Let $F}%{\mathcal{F}$ be a~filter on~$\omega$. The following game of length $\omega$ between two players is called the \emph{meager game $M_F}%{\mathcal{F}$}: \begin{itemize} \item In the $n$-th move, Player~I plays a~finite set $A_n\in[\omega]^{<\omega}$ and Player~II responds with a~finite set $B_n\in[\omega]^{<\omega}$ disjoint from $A_n$. \item After $\omega$ many moves, Player~II wins if $\bigcup\set{B_n}{n\in\omega}\inF}%{\mathcal{F}$, and Player~I wins otherwise. \end{itemize} \end{definition} Let us recall an~important result about the~aforementioned game. \begin{theorem}[Galvin and Mackenzie~1980~(see e.g.~{\cite{CH}})]\label{nonMgame} Let $F}%{\mathcal{F}$ be a~filter on~$\omega$. Then Player~I does not have a~winning strategy in the meager game for the filter $F}%{\mathcal{F}$ if and only if $F}%{\mathcal{F}$ is not meager in $\pts(\omega)$. \end{theorem} We use the meager game to show that, whenever $J$ does not have the Baire property, no member of $\mathcal{N}_J$ can be co-meager with respect to any closed subset of ${}^\omega 2$ of positive measure whose non-empty relative open subsets have positive measure. \begin{mainlemma}\label{notcomeager2} Let $K\subseteq{}^\omega 2$ be closed such that $\leb{K}>0$ and $$(\forall\, s\in 2^{<\omega})\ ([s]\cap K\neq \emptyset\Rightarrow \leb{[s]\cap K}> 0).$$ If $J}%{\mathcal{J}$ is not meager then $\displaystyle N_J}%{\mathcal{J}(\bar{c})\cap K$ is not co-meager in $K$ for each $\bar{c}\in{\overline{\Omega}}$. As a consequence, $Z\cap K$ is not co-meager in $K$ for any $Z\in\mathcal{N}_J}%{\mathcal{J}$. \end{mainlemma} \begin{proof} Let $K\subseteq{}^\omega 2$ be a~closed set as considered in the hypothesis. Let $G\subseteq K$ be a~co-meager subset in $K$. Then there is a~sequence $\seqn{D_n}{n\in\omega}$ of open dense sets in $K$ such that $\bigcap_{n\in\omega}D_n\subseteq G$. Moreover, since $K$ is closed, there is a~tree $T$ such that $K=[T]$. 
Consider $\bar{c}\in{\overline{\Omega}}$ and construct the~following strategy of Player~I for the meager game for $J^d$.\smallskip
\underline{The~first move}: Player~I picks an $s_0\in T$ such that $[s_0]\cap K\subseteq D_0$ and chooses $n_0<\omega$ such that $\leb{\bigcup_{n\geq n_0}c_n}<\leb{K\cap[s_0]}$ (which exists because $\bar c\in{\overline{\Omega}}$). Player~I's move is $n_0$.\smallskip
\underline{Second move and further}: Player~II replies with $B_0\in[\omega]^{<\omega}$ such that $n_0\cap B_0=\emptyset$. Since $B_0\subseteq\omega\smallsetminus n_0$, we have $\leb{\bigcup_{n\in B_0} c_n}<\leb{K\cap[s_0]}$, so $K\cap [s_0]\nsubseteq \bigcup_{n\in B_0}c_n$ and there exists an $x_0\in K\cap [s_0]\smallsetminus \bigcup_{n\in B_0} c_n$. Then Player~I finds an $m_0>|s_0|$ such that $[x_0{\upharpoonright}m_0]\cap \bigcup_{n\in B_0} c_n=\emptyset$. Since $D_1$ is dense in $K$, Player~I can pick an $s_1\in T$ such that $s_0\subseteq x_0{\upharpoonright} m_0\subseteq s_1$ and $[s_1]\cap K\subseteq D_1$. Then Player~I moves with an $n_1\in\omega$ such that $\leb{\bigcup_{n\geq n_1} c_n}<\leb{K\cap [s_1]}$. Player~II responds with $B_1\in[\omega]^{<\omega}$ such that $n_1\cap B_1=\emptyset$, and the~game continues in the~same way we just described.
Finally, since $J$ is not meager, Player~I does not have a~winning strategy in the meager game for $J^d$. Hence, there is some round $\seqn{(n_k, B_k)}{k\in\omega}$ where Player~I uses the~aforementioned strategy and Player~II wins. Thus, we have $F:=\bigcup_{k\in\omega} B_k\in J^d$. Define $x:=\bigcup_{k\in\omega}s_k$. Then, $x\in\bigcap_{k\in\omega}D_k$. On the~other hand, $x\notin \bigcup_{n\in F}c_n$ (because $[s_{k+1}]\cap\bigcup_{n\in B_k}c_n=\emptyset$), so $\set{n\in\omega}{x\in c_n}\subseteq\omega\smallsetminus F\in J$ and hence $x\notin N_{J}(\bar{c})$.
\end{proof}
\autoref{notcomeager2} shows a connection between $\mathcal{N}_J$ and meager sets. First recall:
\begin{lemma}[See e.g.~{\cite{kechris}}]\label{ncominclp} Let $A\subseteq {}^\omega2$. If $A$ is meager in ${}^\omega 2$ then, for any $s\in 2^{<\omega}$, $[s]\cap A$ is not co-meager in $[s]$. The~converse is true when $A$ has the~Baire property. \end{lemma}
Consequences of \autoref{notcomeager2} are:
\begin{corollary}\label{cor:notcomeager} If $J$ is an ideal on $\omega$, then $J$ does not have the Baire property iff $Z$ is not co-meager in ${}^\omega 2$ for any $Z\in\mathcal{N}_J$. \end{corollary}
\begin{proof} For the implication from left to right, apply \autoref{notcomeager2} to $K={}^\omega 2$. For the converse, if $J$ has the Baire property then $\mathcal{N}_J=\mathcal{N}$ by \autoref{BaireForNJ2}, and it is well-known that $\mathcal{N}$ contains a co-meager set in ${}^\omega 2$ (Rothberger's Theorem~\cite{Roth38}). \end{proof}
\begin{corollary}\label{Njmeag} Assume that $J$ is an ideal on $\omega$ without the Baire property. Then, for any $Z\in\mathcal{N}_J$, $Z$ is meager in ${}^\omega 2$ iff it has the Baire property. \end{corollary}
\begin{proof} Assume that $Z\in \mathcal{N}_J$ has the Baire property. By \autoref{notcomeager2}, $Z\cap[s]$ is not co-meager in $[s]$ for all $s\in 2^{<\omega}$. Therefore, by \autoref{ncominclp}, $Z$ is meager in ${}^\omega 2$. The converse implication is immediate because meager sets have the Baire property. \end{proof}
\autoref{notcomeager2} gives us the converse of \autoref{BaireForNJ2} for $\mathcal{N}_J$.
Furthermore, we can prove that there is a~meager set of Lebesgue measure zero which is not contained in $\NidealJ{J}$ for non-meager $J$.
\begin{theorem}\label{charaktNj2} Let $J$ be an~ideal on~$\omega$. Then, the following statements are equivalent.
\begin{multicols}{3}
\begin{enumerate}[label=\rm (\roman*)]
\item\label{it:NJneqN} $\mathcal{N}_J\subsetneq\mathcal{N}$.
\item\label{it:MNnsubNJ} $\mathcal{M}\cap{\mathcal{N}} \nsubseteq \NidealJ{J}$.
\item\label{it:JnB} $J$ is not meager.
\end{enumerate}
\end{multicols}
\end{theorem}
\begin{proof} The~implication $\ref{it:NJneqN}\rightarrow\ref{it:JnB}$ follows directly from \autoref{BaireForNJ2}. On the other hand, since $\mathcal{M}\cap{\mathcal{N}} \subseteq \mathcal{N}$, the~implication $\ref{it:MNnsubNJ}\to\ref{it:NJneqN}$ is obvious. It remains to show $\ref{it:JnB}\to\ref{it:MNnsubNJ}$. First, we choose a closed nowhere dense $K\subseteq {}^\omega2$ of positive measure satisfying the hypothesis of \autoref{notcomeager2}. Then, we find $G\subseteq K$ which is co-meager in $K$ and has Lebesgue measure zero. Hence $G\in\mathcal{M}\cap\mathcal{N}$, but $G\notin\mathcal{N}_J$ by \autoref{notcomeager2}. \end{proof}
The proof of $\ref{it:JnB}\to\ref{it:MNnsubNJ}$ is similar to the proof of $\mathcal{E}\subsetneq\mathcal{M}\cap\mathcal{N}$ from~\cite[Lemma~2.6.1]{BJ}. Actually, the latter is already implied by $\ref{it:MNnsubNJ}$ (see \autoref{eq:contfin}). However, $\mathcal{N}_J$, and even $\mathcal{N}^*_J$, contain many non-meager sets when $J$ is not meager. Examples can be obtained from the following construction.
\begin{definition}\label{def:cT} For any non-empty tree $T\subseteq2^{<\omega}$ without maximal nodes we define $\bar{c}^T$ as follows. Enumerate $T=\{t_n:n\in\omega\}$ in such a way that $t_m\subseteq t_n$ implies $m\leq n$. By recursion on $n$, construct $\{t^n_k:k\leq n\}\subseteq T$ such that:
\begin{enumerate}[label=\rm (\roman*)]
\item\label{it:t0} $t^0_0\in T$ extends $t_0$ and $|t^0_0|\geq1$,
\item\label{it:tn+1} $t^{n+1}_{n+1}\supseteq t_{n+1}$,
\item\label{it:tn+1'} for each $k\leq n$, $t^{n+1}_k\supseteq t^n_k$, and
\item for each $k\leq n+1$, $|t^{n+1}_k|\geq 2n+2$.
\end{enumerate}
Define $c^T_n:=\bigcup_{k\leq n}[t^n_k]$ and $\bar{c}^T:=\seqn{c^T_n}{n\in\omega}$. In addition, $\leb{c^T_0}\leq2^{-1}$ and $\leb{c^T_n}\leq(n+1)2^{-2n}\leq2^{-n}$ for all $n\geq1$, so $\sum_{n<\omega}\leb{c^T_n}<\infty$ and hence $\bar c^T\in{\overline{\Omega}}$ (by \autoref{chomegabar}~\ref{it:expart}). \end{definition}
In general, for any sequence $\bar\varepsilon=\seqn{\varepsilon_n}{n<\omega}$ of positive reals, it is possible to construct a similar $\bar{c}^{T,\bar\varepsilon}$ such that $\leb{c^{T,\bar\varepsilon}_n}<\varepsilon_n$ for all $n<\omega$, that is, $\bar{c}^{T,\bar\varepsilon}\in\Omega^*_{\bar\varepsilon}$. For this, we just need to modify the length of each $t^n_k$ accordingly.
\begin{mainlemma}\label{capcomeag2} Let $T\subseteq2^{<\omega}$ be a non-empty tree without maximal nodes and let $J$ be a non-meager ideal on~$\omega$. Then $N^*_J(\bar{c}^T)\cap G\neq\emptyset$ for every co-meager set \mbox{$G\subseteq[T]$} in $[T]$, i.e.\ $N^*_J(\bar{c}^T)$ is not meager in $[T]$. In other words, for any non-empty closed $K\subseteq{}^\omega 2$ there is some $\bar{c}^K\in{\overline{\Omega}}$ such that, for any non-meager ideal $J$ on $\omega$, $N^*_J(\bar c^K)\cap K$ is non-meager in $K$.
\end{mainlemma} \begin{proof} Let $G\subseteq[T]$ be co-meager in $[T]$. Then, there is a sequence $\seqn{D_n}{n\in\omega}$ of open dense sets in $[T]$ such that $\bigcap_{n\in\omega}D_n\subseteq G$. We look for an $x\in N^*_J}%{\mathcal{J}(\bar{c}^T)\cap\bigcap_{n\in\omega}D_n$ by using the non-meager game. Construct the following strategy of Player~I for the non-meager game for $J}%{\mathcal{J}^d$ along with fragments of the~desired $x$. \underline{The~first move}: Player~I chooses some $s_0\in T$ extending $t^0_0$ such that $[s_0]\cap[T]\subseteq D_0$. Since $s_0\in T$, $s_0=t_{n_0}$ for some $n_0\in\omega$. Player~I moves with $A_0:=n_0+1$. \underline{Second move and further}: Player~II replies with $B_0$. Since $B_0\cap A_0=\emptyset$, by~\ref{it:tn+1}--\ref{it:tn+1'} of \autoref{def:cT}, Player~I can extend $s_0$ to some $s'_0\in T$ such that, for any $\ell\in B_0$, $s'_0$ extends $t^\ell_{n_0}$, and further finds $s_1\in T$ extending $s'_0$ such that $[s_1]\cap[T]\subseteq D_1$. Here, $s_1=t_{n_1}$ for some $n_1\in\omega$, and Player~I moves with $A_1:=n_1+1$. Player~II would reply with some $B_1$ not intersecting $A_1$, and Player~I continues playing in the same way. Now, since $J}%{\mathcal{J}$ is not meager, Player~I does not have a winning strategy. In particular, there is some round $\seqn{(A_n,B_n)}{n\in\omega}$ of the game where Player~I uses the strategy defined above and Player~II wins, i.e.\ $F:=\bigcup_{n\in\omega}B_n$ is in $J}%{\mathcal{J}^d$. On the~other hand, Player~I constructed the increasing sequence $\seqn{s_n}{n\in\omega}$ of members of $T$, so $x:=\bigcup_{n\in\omega}s_n$ is a branch of the tree. By the definition of the strategy, we have that $x\in c^T_m$ for any $m\in F$, so $x\in N^*_J}%{\mathcal{J}(\bar{c}^T)$. Also $x\in D_n$ for every $n\in\omega$. \end{proof} As a consequence for $T=2^{<\omega}$, we conclude that $\NstaridealJ{J}%{\mathcal{J}}$ contains non-meager subsets of ${}^\omega 2$ when $J}%{\mathcal{J}$ is an ideal on $\omega$ without the Baire property. Moreover, let us emphasize that the~same is true for $\NidealJ{J}%{\mathcal{J}}$ as well (because $N^*_J}%{\mathcal{J}(\bar c)\subseteq N_J}%{\mathcal{J}(\bar c)$). \begin{corollary}\label{cor:ML2} If $J}%{\mathcal{J}$ is not meager then $N^*_J}%{\mathcal{J}(\bar{c}^T)$ and $N_J}%{\mathcal{J}(\bar{c}^T)$ are not meager in $^\omega2$. \end{corollary} In contrast with \autoref{capcomeag2}, if $J}%{\mathcal{J}$ has the Baire property then $N^*_J}%{\mathcal{J}(\bar c^T)$ is meager in ${}^\omega 2$ (and even $N^*_J}%{\mathcal{J}(\bar c^T)\cap[T]$ is meager in $[T]$ when $\leb{[T]}>0$) because $N_J}%{\mathcal{J}^*(\bar c^T)\in\mathcal{E}$ by \autoref{BaireForNJ2}. As a consequence of \autoref{capcomeag2}, we can finally conclude that $\mathcal{N}_J}%{\mathcal{J}=\mathcal{E}$ iff $J}%{\mathcal{J}$ has the Baire property. \begin{theorem}\label{conseqforE2} Let $J}%{\mathcal{J}$ be an ideal on $\omega$. Then the~following statements are equivalent: \begin{multicols}{2} \begin{enumerate}[label=\rm (\roman*)] \item\label{it:N*1} $\mathcal{N}^*_J}%{\mathcal{J}\nsubseteq\mathcal{M}$. \item\label{it:N*2} $\mathcal{N}_J}%{\mathcal{J}^*\nsubseteq\mathcal{M}\cap{\mathcal{N}}$. \item\label{it:N*3} $\mathcal{E}\subsetneq\mathcal{N}^*_J}%{\mathcal{J}$. \item\label{it:N*4} $J}%{\mathcal{J}$ is not meager. 
\end{enumerate} \end{multicols} \end{theorem} \begin{proof} $\ref{it:N*4}\rightarrow\ref{it:N*1}$ follows by \autoref{cor:ML2}; $\ref{it:N*1}\rightarrow\ref{it:N*2}$ is obvious; $\ref{it:N*2}\rightarrow\ref{it:N*3}$ is a consequence of $\mathcal{E}\subseteq\mathcal{M}\cap\mathcal{N}$; $\ref{it:N*3}\rightarrow\ref{it:N*4}$ is a consequence of \autoref{BaireForNJ2}. \end{proof} In the case of $\mathcal{N}_J}%{\mathcal{J}$, independent of whether $J}%{\mathcal{J}$ is meager or not, $\mathcal{N}_J}%{\mathcal{J}$ contains non-meager sets. \begin{corollary} For every ideal $J}%{\mathcal{J}$ on $\omega$, $\mathcal{N}_J}%{\mathcal{J}\nsubseteq\mathcal{M}$ and $\NidealJ{J}%{\mathcal{J}}\nsubseteq\mathcal{M}\cap{\mathcal{N}}$. \end{corollary} \begin{proof} Since $\mathcal{N}_J}%{\mathcal{J}^*\subseteq\mathcal{N}_J}%{\mathcal{J}$, the conclusion is clear by \autoref{conseqforE2} when $J}%{\mathcal{J}$ is not meager. Otherwise $\mathcal{N}_J}%{\mathcal{J}=\mathcal{N}$, and $\mathcal{N}$ contains a co-meager set. \end{proof} \autoref{relationNJ2} summarizes the situation in \autoref{charaktNj2} and~\ref{conseqforE2} when $J}%{\mathcal{J}$ is a non-meager ideal on $\omega$. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, squarednode/.style={rectangle, draw=white, very thick, minimum size=7mm}] \node (a) at (-1, -1.5) {$\NstaridealJ{J}%{\mathcal{J}}$}; \node (f) at (1, -1.5) {$\NidealJ{J}%{\mathcal{J}}$}; \node (bb) at (-4.6, 0) {$\NstaridealJ{\mathrm{Fin}}=$}; \node (b) at (-3.5, 0) {$\mathcal{E}$}; \node (c) at (3.5, 0) {$\mathcal{N}$}; \node (cc) at (4.6, 0) {$=\NidealJ{\mathrm{Fin}}$}; \node[squarednode] (d) at (0, 1.5) {$\mathcal{N}\cap\mathcal{M}$}; \draw[->] (a) edge (f); \foreach \from/\to in {b/a, b/d, d/c, f/c} { \path[->,shift left=0.75ex] (\from) edge (\to); \path[->,shift left=0.75ex] (\to) edge node{\scriptsize $\times$} (\from); } \foreach \from/\to in {a/d, f/d} { \path[->,shift left=0.75ex] (\from) edge node{\scriptsize $\times$} (\to); \path[->,shift left=0.75ex] (\to) edge node{\scriptsize $\times$} (\from); } \end{tikzpicture} \caption{The situation when $J}%{\mathcal{J}$ is an ideal on $\omega$ without the Baire property. An arrow denotes $\subseteq$, while a crossed arrow denotes $\nsubseteq$. The arrow on the bottom could be reversed, e.g.\ when $J}%{\mathcal{J}$ is a maximal ideal.} \label{relationNJ2} \end{figure} \section{The effect of nearly coherence of filters}\label{sec:near} In this section, we prove a characterization of nearly coherence of filters (or ideals) in terms of the ideals $\mathcal{N}_J}%{\mathcal{J}$ and $\mathcal{N}^*_J}%{\mathcal{J}$. We first recall the notion of nearly coherence. \begin{definition}[A.~Blass~{\cite{blass86}}]\label{def:NC} Two filters $F}%{\mathcal{F}_0$ and $F}%{\mathcal{F}_1$ on~$\omega$ are \emph{nearly coherent} if there is a~finite-to-one function $\varphi\in{}^\omega\omega$ such that $\varphi^\to(F}%{\mathcal{F}_0)\cup \varphi^\to(F}%{\mathcal{F}_1)$ has the~finite intersection property. Dually, we say that two ideals $J}%{\mathcal{J}_0$ and $J}%{\mathcal{J}_1$ are \emph{nearly coherent} if there is a finite-to-one function $\varphi\in{}^\omega\omega$ such that $\varphi^\to(J}%{\mathcal{J}_0)\cup\varphi^\to(J}%{\mathcal{J}_1)$ is contained in some ideal. \end{definition} If $J_0$ and $J_1$ are nearly coherent ideals on $\omega$, then there is some ideal $K$ in $\omega$ which is $\mathrel{\leq_{\overline{\mathrm{KB}}}}$-below both $J_0$ and $J_1$. 
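For instance, if the finite-to-one function $\varphi$ witnesses the near coherence of $J_0$ and $J_1$, then one possible choice of such a $K$ (whose verification is routine) is the ideal generated by $\varphi^\to(J_0)\cup\varphi^\to(J_1)$, namely
\[ K:=\set{x\subseteq\omega}{x\subseteq a_0\cup a_1 \text{ for some } a_0\in\varphi^\to(J_0) \text{ and } a_1\in\varphi^\to(J_1)}, \]
which is indeed an ideal because $\varphi^\to(J_0)\cup\varphi^\to(J_1)$ is contained in some ideal; the same $\varphi$ then witnesses $K\mathrel{\leq_{\overline{\mathrm{KB}}}} J_0$ and $K\mathrel{\leq_{\overline{\mathrm{KB}}}} J_1$.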
As a consequence of \autoref{RBeq}~\ref{it:underKB'}: \begin{lemma}\label{lem:coh} If $J}%{\mathcal{J}_0$ and $J}%{\mathcal{J}_1$ are nearly coherent ideals on $\omega$, then there is some ideal $K}%{\mathcal{K}$ on $\omega$ such that $\mathcal{N}^*_{J}%{\mathcal{J}_0}\cup\mathcal{N}^*_{J}%{\mathcal{J}_1} \subseteq \mathcal{N}^*_K}%{\mathcal{K}\subseteq\mathcal{N}_K}%{\mathcal{K} \subseteq \mathcal{N}_{J}%{\mathcal{J}_0}\cap\mathcal{N}_{J}%{\mathcal{J}_1}$ (see \autoref{fig:coherence}). \end{lemma} \begin{figure}[H] \centering \begin{tikzpicture \node (N*I) at (-1,2) {$\mathcal{N}^*_{J}%{\mathcal{J}_0}$}; \node (N*J) at (-1,0) {$\mathcal{N}^*_{J}%{\mathcal{J}_1}$}; \node (N*K) at (0,1) {$\mathcal{N}^*_K}%{\mathcal{K}$}; \node (NK) at (2,1) {$\mathcal{N}_K}%{\mathcal{K}$}; \node (NJ) at (3,0) {$\mathcal{N}_{J}%{\mathcal{J}_1}$}; \node (NI) at (3,2) {$\mathcal{N}_{J}%{\mathcal{J}_0}$}; \draw (N*I) edge[->] (N*K) (N*J) edge[->] (N*K) (N*K) edge[->] (NK) (N*I) edge[->] (NI) (N*J) edge[->] (NJ) (NK) edge[->] (NI) (NK) edge [->] (NJ); \end{tikzpicture} \caption{Situation describing \autoref{lem:coh} (an arrow denotes $\subseteq$).}\label{fig:coherence} \end{figure} Since $\mathcal{N}^*_J}%{\mathcal{J}=\mathcal{N}_J}%{\mathcal{J}$ for any maximal ideal $J}%{\mathcal{J}$ on $\omega$, we immediately obtain: \begin{corollary}\label{cor:NCuf} If $J}%{\mathcal{J}$ and $K}%{\mathcal{K}$ are nearly coherent ideals and $K}%{\mathcal{K}$ is \underline{maximal}, then $\mathcal{N}^*_J}%{\mathcal{J}\subseteq\mathcal{N}_K}%{\mathcal{K}\subseteq\mathcal{N}_J}%{\mathcal{J}$. In particular, if $J}%{\mathcal{J}$ is also a \underline{maximal} ideal, then $\mathcal{N}_J}%{\mathcal{J}=\mathcal{N}_K}%{\mathcal{K}$. \end{corollary} A.~Blass~\cite{blass86} has introduced the following principle, which was proved consistent with ZFC by Blass and Shelah~\cite{blassshelah}. \begin{enumerate}[label=\rm (NCF)] \item\label{NCF} \emph{Near coherence of filters:} Any pair of filters on $\omega$ are nearly coherent. \end{enumerate} In fact, $\mathfrak{u}<\mathfrak{g}$ (which holds in Miller model~\cite{BlassSP,blassshelah3}) implies \ref{NCF} (\cite[Cor.~9.18]{blassbook}). On the other hand, it is possible to obtain not nearly coherence pairs of filters under CH and in random model~\cite[Sec.~4]{blass86}, as well as in Cohen model (e.g.~\autoref{cohenthm}). As a consequence of \autoref{cor:NCuf}: \begin{corollary}\label{cor:NCFNuf} \ref{NCF} implies that all $\mathcal{N}_J}%{\mathcal{J}$ with $J}%{\mathcal{J}$ maximal ideal on $\omega$ are the same. \end{corollary} We prove the converse of \autoref{lem:coh} and of \autoref{cor:NCuf} and~\ref{cor:NCFNuf}, which gives us a characterization of nearly coherence and~\ref{NCF}. For this purpose, we use the following game, formulated by T.~Eisworth, that characterizes nearly coherence. \begin{definition}[Eisworth~{\cite{Eis01}}] Let $F}%{\mathcal{F}_0$, $F}%{\mathcal{F}_1$ be two filters on~$\omega$. The following game of length $\omega$ between two players is called the \emph{nearly coherence game $C_{F}%{\mathcal{F}_0,F}%{\mathcal{F}_1}$}: \begin{itemize} \item In the $n$-th move, Player~I plays a~finite set $A_n\in[\omega]^{<\omega}$ and Player~II responds with a~finite set $B_n\in[\omega]^{<\omega}$ disjoint from $A_n$. \item After $\omega$ many moves, Player~II wins if $\bigcup\set{B_{2n+i}}{n\in\omega}\inF}%{\mathcal{F}_i$ for $i\in\{0,1\}$. Otherwise, Player~I wins. 
\end{itemize} \end{definition} \begin{theorem}[Eisworth~{\cite{Eis01}}] Let $F}%{\mathcal{F}_0$, $F}%{\mathcal{F}_1$ be two filters on~$\omega$. Then $F}%{\mathcal{F}_0$ and $F}%{\mathcal{F}_1$ are nearly coherent iff Player~I has a winning strategy of the game $C_{F}%{\mathcal{F}_0,F}%{\mathcal{F}_1}$. \end{theorem} We use the~nearly coherence game $C_{F}%{\mathcal{F}_0,F}%{\mathcal{F}_1}$ to prove the~following technical lemma. \begin{mainlemma}\label{notnear2} Let $K\subseteq {}^\omega2$ be a closed set of positive measure such that \[ (\forall s\in 2^{<\omega})\ [s]\cap K\neq \emptyset \mathrel{\mbox{$\Rightarrow$}} \leb{[s]\cap K}> 0. \] If $J}%{\mathcal{J}$ and $K}%{\mathcal{K}$ are not nearly coherent ideals on $\omega$ then $N^*_J}%{\mathcal{J}(\bar{c}^T)\notin\mathcal{N}_K}%{\mathcal{K}$ where $T$ is the~tree without maximal nodes such that $K=[T]$. \end{mainlemma} \begin{proof} We use the nearly coherence game $C_{J}%{\mathcal{J}^d,K}%{\mathcal{K}^d}$ and build a strategy for Player~I, in order to get an~\mbox{$x\in N^*_J}%{\mathcal{J}(\bar{c}^T)\smallsetminus N_K}%{\mathcal{K}(\bar{c}')$} for any $\bar{c}'\in{\overline{\Omega}}$. \underline{The~first move}: Player~I first moves with $A_0=\{0\}$, and put $s_0:= t_0$ (which is the~empty sequence). \underline{The second and third moves, and further}: After Player~II replies with $B_0\subseteq \omega\smallsetminus A_0$, Player~I finds $s_1\supseteq s_0$ in $T$ such that $s_1\supseteq t^n_0$ for every $n\in B_0$, and further finds $n_1\geq|s_1|$ such that $\mu\left(\bigcup_{\ell\geq n_1}c'_\ell\right)<\mu(K\cap[s_1])$, which implies that there is some $x_1\in K\cap [s_1]\smallsetminus\bigcup_{\ell\geq n_1}c'_\ell$. Player~I moves with $A_1=n_1+1$. After Player~II replies with $B_1\subseteq \omega\smallsetminus A_1$, Player~I finds some $m_2\geq n_1$ such that $[s_2]\cap\bigcup_{n\in B_1}c'_n=\emptyset$ where $s_2:=x_1{\upharpoonright}m_2$. Player~I moves $A_2=n_2+1$ where $n_2<\omega$ is such that $s_2=t_{n_2}$. Afterwards, Player~II moves with $B_2\subseteq\omega\smallsetminus A_2$, and the same dynamic is repeated: Player~I finds an $s_3\supseteq s_2$ in $T$ such that $s_3\supseteq t^n_{n_2}$ for every $n\in B_2$, and some $n_3\geq|s_3|$ such that $\mu\left( \bigcup_{\ell\geq n_3} c'_\ell\right)<\mu(K\cap[s_3])$, which implies that there is an $x_3\in K\cap [s_3]\smallsetminus\bigcup_{\ell\geq n_3}c'_\ell$. Player~I moves with $A_3=n_3+1$, and the game continues as described so far. Since Player~I does not have a winning strategy, there is some run as described above where Player~II wins. Thus, $F_0:=\bigcup_{n\in\omega}B_{2n}\inJ}%{\mathcal{J}^d$ and $F_1:=\bigcup_{n\in\omega}B_{2n+1}\inK}%{\mathcal{K}^d$. Set $x:=\bigcup_{n\in\omega}s_n$. Note that $x\in c^T_\ell$ for any $\ell\in F_0$, and $x\notin c'_\ell$ for any $\ell\in F_1$, which means that $x\in N^*_J}%{\mathcal{J}(\bar{c}^T)$ and $x\notin N_K}%{\mathcal{K}(\bar{c}')$. \end{proof} An application of the previous result to $K={}^\omega 2$ yields: \begin{theorem}\label{thm:nNCNJ} If $J}%{\mathcal{J}$ and $K}%{\mathcal{K}$ are not near-coherent ideals on $\omega$ then $\mathcal{N}^*_J}%{\mathcal{J}\nsubseteq\mathcal{N}_K}%{\mathcal{K}$ and $\mathcal{N}^*_K}%{\mathcal{K}\nsubseteq\mathcal{N}_J}%{\mathcal{J}$. In particular, $\NidealJ{K}%{\mathcal{K}}\neq \NidealJ{J}%{\mathcal{J}}$ and $\mathcal{N}^*_J}%{\mathcal{J}\neq\mathcal{N}^*_K}%{\mathcal{K}$. \end{theorem} It is clear that $\mathrm{Fin}$ is nearly coherent with any filter on $\omega$. 
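In fact, a slightly more general observation (a direct combination of \autoref{thm:talagrand} with the definitions) is that every meager ideal is nearly coherent with every ideal on $\omega$: if $J$ is meager then $\mathrm{Fin}\mathrel{\leq_{\mathrm{RB}}} J$ via some finite-to-one $\varphi$, so for any ideal $K$ on $\omega$ we get
\[ \varphi^\to(J)\cup\varphi^\to(K)=\mathrm{Fin}\cup\varphi^\to(K)\subseteq\varphi^\to(K), \]
and $\varphi^\to(K)$ is an ideal because $\varphi$ is finite-to-one.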
Therefore, any pair of not nearly coherent filters must be non-meager. As a consequence of \autoref{charaktNj2} and~\ref{conseqforE2}: \begin{corollary}\label{cor:nnc-NE} Let $J}%{\mathcal{J}$ and $K}%{\mathcal{K}$ be not nearly coherent ideals on~$\omega$. Then the ideals $\mathcal{E}$, $\NstaridealJ{J}%{\mathcal{J}}$, $\NidealJ{K}%{\mathcal{K}}$ and $\mathcal{N}$ are pairwise different. \end{corollary} The situation in \autoref{thm:nNCNJ} and \autoref{cor:nnc-NE} is illustrated in \autoref{nearcoherence}. \begin{figure}[H] \centering \begin{tikzpicture \node (a) at (-2, -1.5) {$\NstaridealJ{J}%{\mathcal{J}}$}; \node (f) at (2, -1.5) {$\NidealJ{J}%{\mathcal{J}}$}; \node (b) at (-3.5, 0) {$\mathcal{E}$}; \node (c) at (3.5, 0) {$\mathcal{N}$}; \node (d) at (-2, 1.5) {$\NstaridealJ{K}%{\mathcal{K}}$}; \node (ff) at (2, 1.5) {$\NidealJ{K}%{\mathcal{K}}$}; \foreach \from/\to in { a/f, d/ff \draw [->] (\from) -- (\to); \foreach \from/\to in { a/ff, d/f, ff/a, f/d, a/d, f/ff, d/a, ff/f} { \path[-,shift left=0.75ex,draw=white,line width=3pt] (\from) edge (\to); \path[->,shift left=0.75ex] (\from) edge node[near start]{\scriptsize /} (\to); } \foreach \from/\to in {b/a, b/d, f/c, ff/c} { \path[->,shift left=0.75ex] (\from) edge (\to); \path[->,shift left=0.75ex] (\to) edge node{\scriptsize /} (\from); } \end{tikzpicture} \caption{Diagram corresponding to the situation in \autoref{thm:nNCNJ} where $J}%{\mathcal{J}$ and $K}%{\mathcal{K}$ are not nearly coherent ideals on $\omega$. An arrow denotes $\subseteq$, and a crossed arrow denotes $\nsubseteq$. The arrow on the top could be reversed, e.g.\ when $K}%{\mathcal{K}$ is a maximal ideal (likewise for the arrow on the bottom).} \label{nearcoherence} \end{figure} We summarize our results as a characterization of nearly coherence. \begin{theorem}\label{nc-char} Let $J$ and $K$ be ideals on $\omega$. The following statements are equivalent. \begin{enumerate}[label =\rm (\roman*)] \item\label{it:nc} $J$ and $K$ are nearly coherent. \item\label{it:snc} There is some ideal $K'$ on $\omega$ such that $K' \mathrel{\leq_{\overline{\mathrm{KB}}}} J$ and $K'\mathrel{\leq_{\overline{\mathrm{KB}}}} K $. \item\label{it:aida} There is some ideal $K'$ on $\omega$ such that $\mathcal{N}^*_J\cup \mathcal{N}^*_{K}\subseteq \mathcal{N}^*_{K'}\subseteq \mathcal{N}_{K'}\subseteq \mathcal{N}_J \cap \mathcal{N}_K$. \item\label{it:subset} $\mathcal{N}^*_J\subseteq \mathcal{N}_K$. \end{enumerate} \end{theorem} \begin{proof} $\ref{it:nc}\mathrel{\mbox{$\Rightarrow$}} \ref{it:snc}$ is obvious; $\ref{it:snc}\mathrel{\mbox{$\Rightarrow$}} \ref{it:aida}$ is immediate from \autoref{RBeq}; $\ref{it:aida}\mathrel{\mbox{$\Rightarrow$}}\ref{it:subset}$ is obvious; and $\ref{it:subset}\mathrel{\mbox{$\Rightarrow$}} \ref{it:nc}$ follows by (the contrapositive of) \autoref{thm:nNCNJ}. \end{proof} The non-nearly coherence of filters also gives us examples of non-meager ideals $J$ such that $\mathcal{N}^*_J\neq\mathcal{N}_J$. We do not know how to construct such an example in ZFC. \begin{lemma}\label{noncoh:sum} Assume that $J_1$ and $J_2$ are non-nearly coherent ideals on $\omega$. Then $J_1\oplus J_2$ is non-meager and $\mathcal{N}^*_{J_1\oplus J_2}\neq \mathcal{N}_{J_1\oplus J_2}$. \end{lemma} \begin{proof} Partition $\omega={\mathbb N}_1\cup {\mathbb N}_2$ into two infinite sets, let $g_e\colon \omega\to{\mathbb N}_e$ be a bijection for each $e\in\{1,2\}$, and let $J'_e= g_e^\to(J_e)$, which is an ideal on ${\mathbb N}_e$ isomorphic with $J_e$. 
Here, our interpretation of $J_1\oplus J_2$ is $J'_1\oplus J'_2$. Recall that non-nearly coherent ideals must be non-meager (this also follows by \autoref{thm:nNCNJ}), so $J'_1\times J'_2$ is a non-meager subset of ${}^\omega 2= {}^{{\mathbb N}_1}2 \times {}^{{\mathbb N}_2}2$ by Kuratowski-Ulam Theorem, which implies that $J'_1\oplus J'_2$ is non-meager. We must also have that $J'_1\oplus\pts({\mathbb N}_2)$ and $\pts({\mathbb N}_1)\oplus J'_2$ are not nearly-coherent. Assume otherwise that there is some ideal $K$ on $\omega$ which is $\mathrel{\leq_{\overline{\mathrm{KB}}}}$-below both ideals. Since $J'_1\oplus\pts({\mathbb N}_2) \mathrel{\leq_{\overline{\mathrm{KB}}}} J_1$ and $\pts({\mathbb N}_1)\oplus J'_2 \mathrel{\leq_{\overline{\mathrm{KB}}}} J_2$ (by using $g_1$ and $g_2$), we conclude that $K$ is $\mathrel{\leq_{\overline{\mathrm{KB}}}}$ below $J_1$ and $J_2$, i.e.\ they are nearly coherent by \autoref{nc-char}, a contradiction. Therefore, by \autoref{nc-char}, $\mathcal{N}^*_{J'_1\oplus\pts({\mathbb N}_2)}\nsubseteq \mathcal{N}_{\pts({\mathbb N}_1)\oplus J'_2}$. On the other hand, both ideals are between $\mathcal{N}^*_{J'_1\oplus J'_2}$ and $\mathcal{N}_{J'_1\oplus J'_2}$, so $\mathcal{N}^*_{J'_1\oplus J'_2}\neq \mathcal{N}_{J'_1\oplus J'_2}$. \end{proof} In contrast with the previous result, we do not know whether \ref{NCF} implies that all $\mathcal{N}_J$ are the same for non-meager $J$. \section{Cardinal characteristics}\label{sec:cardinalityNJ} In this section, we focus on investigating the cardinal characteristics associated with $\mathcal{N}_J$ and $\mathcal{N}^*_J$. We review some basic notation about cardinal characteristics. Many cardinal characteristics are defined using relational systems in the following way~\cite{Vojtas}. A \emph{relational system} is a triplet $\mathbf{R}=\langle X,Y,R\rangle$ where $R$ is a relation and $X$ and $Y$ are non-empty sets. Define \begin{linenomath} \begin{align*} \mathfrak{b}(\mathbf{R}) & :=\min\set{|F|}{F\subseteq X,\ \neg\,(\exists\, y\in Y)\ (\forall\, x\in F)\ x\, R\, y}\\ \mathfrak{d}(\mathbf{R}) & :=\min\set{|D|}{D\subseteq Y,\ (\forall\, x\in X)\ (\exists\, y\in D)\ x\, R\, y}. \end{align*} \end{linenomath} The \emph{dual of $\mathbf{R}$} is defined by $\mathbf{R}^\perp:=\langle Y,X,R^\perp\rangle$ where $y\, R^\perp\, x$ iff $\neg\, x\, R\, y$. Hence $\mathfrak{b}(\mathbf{R}^\perp)=\mathfrak{d}(\mathbf{R})$ and $\mathfrak{d}(\mathbf{R}^\perp)=\mathfrak{b}(\mathbf{R})$. Given another relational system $\mathbf{R}'=\langle X', Y', R'\rangle$, say that a pair $(\varphi_-,\varphi_+)$ is a \emph{Tukey connection from $\mathbf{R}$ to $\mathbf{R}'$} if $\varphi_-:X\to X'$, $\varphi_+:Y'\to Y$ and, for any $x\in X$ and $y'\in Y'$, if $\varphi_-(x)\, R'\, y'$ then $x\, R\, \varphi_+(y')$. We say that \emph{$\mathbf{R}$ is Tukey below $\mathbf{R}'$}, denoted by $\mathbf{R}\mathrel{\preceq_{\mathrm{T}}}\mathbf{R}'$, if there is a Tukey connection from $\mathbf{R}$ to $\mathbf{R}'$. Say that $\mathbf{R}$ is \emph{Tukey equivalent} to $\mathbf{R}'$, denoted by $\mathbf{R}\mathrel{\cong_{\mathrm{T}}}\mathbf{R}'$, if $\mathbf{R}\mathrel{\preceq_{\mathrm{T}}}\mathbf{R}'$ and $\mathbf{R}'\mathrel{\preceq_{\mathrm{T}}}\mathbf{R}$. It is known that $\mathbf{R}\mathrel{\preceq_{\mathrm{T}}}\mathbf{R}'$ implies $\mathfrak{d}(\mathbf{R})\leq\mathfrak{d}(\mathbf{R}')$ and $\mathfrak{b}(\mathbf{R}')\leq\mathfrak{b}(\mathbf{R})$. 
Hence $\mathbf{R}\mathrel{\cong_{\mathrm{T}}}\mathbf{R}'$ implies $\mathfrak{d}(\mathbf{R})=\mathfrak{d}(\mathbf{R}')$ and $\mathfrak{b}(\mathbf{R}')=\mathfrak{b}(\mathbf{R})$. We constantly use the relational system $\mathbf{c}^\mathcal{J}_\mathcal{I}:=\langle\mathcal{I},\mathcal{J},\subseteq\rangle$ discussed in~\cite[Ch.~2]{BJ}, and we identify $\mathcal{I}$ with the relational system $\mathbf{c}^\mathcal{I}_\mathcal{I}$. Denote $\add(\mathcal{I},\mathcal{J}):=\mathfrak{b}(\mathbf{c}^\mathcal{J}_\mathcal{I})$ and $\cof(\mathcal{I},\mathcal{J}):=\mathfrak{d}(\mathbf{c}^\mathcal{J}_\mathcal{I})$. These cardinal characteristics are interesting when $\mathcal{I}\subseteq\mathcal{J}$ are ideals on some set $X$. It is well-known that, for any ideal $\mathcal{I}$ on $X$, we can express the \emph{cardinal characteristics associated with $\mathcal{I}$} as follows: \begin{linenomath} \begin{align*} \add(\mathcal{I})& =\add(\mathcal{I},\mathcal{I}), & \cof(\mathcal{I}) &=\cof(\mathcal{I},\mathcal{I}),\\ \non(\mathcal{I})& =\add([X]^{<\aleph_0},\mathcal{I}), & \cov(\mathcal{I})& =\cof([X]^{<\aleph_0},\mathcal{I}). \end{align*} \end{linenomath} In fact, via the relational system $\mathbf{C}_\mathcal{I}:=\langle X,\mathcal{I},\in\rangle$, we obtain $\non(\mathcal{I})=\mathfrak{b}(\mathbf{C}_\mathcal{I})$ and $\cov(\mathcal{I})=\mathfrak{d}(\mathbf{C}_\mathcal{I})$. The following easy claims illustrate basic relations between these cardinal characteristics. \begin{fact}\label{wcov} If $\mathcal{I}$ is an ideal on $X$, $\mathcal{I}\subseteq\mathcal{I}'$ and $\mathcal{J}\subseteq\pts(X)\smallsetminus \{X\}$, then $(\mathbf{c}_{\mathcal{I}'}^\mathcal{J})^\perp \mathrel{\preceq_{\mathrm{T}}} \mathbf{C}_\mathcal{I}\mathrel{\preceq_{\mathrm{T}}}\mathbf{c}_{[X]^{<\aleph_0}}^\mathcal{I}$. In particular, $\add(\mathcal{I}',\mathcal{J})\leq\cov(\mathcal{I})$ and $\non(\mathcal{I})\leq\cof(\mathcal{I}',\mathcal{J})$. \end{fact} \begin{proof} We only show the first Tukey connection. Define $F\colon \mathcal{J}\to X$ such that $F(B)\in X\smallsetminus B$ (which exists because $X\notin \mathcal{J}$), and define $G\colon \mathcal{I}\to \mathcal{I}'$ by $G(A):=A$. Then, for $A\in\mathcal{I}$ and $B\in\mathcal{J}$, $F(B)\in A$ implies $B\nsupseteq A$. Hence, $(F,G)$ witnesses $(\mathbf{c}_{\mathcal{I}'}^\mathcal{J})^\perp \mathrel{\preceq_{\mathrm{T}}} \mathbf{C}_\mathcal{I}$. \end{proof} \begin{fact}\label{cofIJrel} If $\mathcal{I}\subseteq\mathcal{I}'$ and $\mathcal{J}'\subseteq \mathcal{J}$ then $\mathbf{c}^\mathcal{J}_\mathcal{I}\mathrel{\preceq_{\mathrm{T}}} \mathbf{c}^{\mathcal{J}'}_{\mathcal{I}'}$. In particular, $\add(\mathcal{I}',\mathcal{J}')\leq \add(\mathcal{I},\mathcal{J})$ and $\cof(\mathcal{I},\mathcal{J})\leq \cof(\mathcal{I}',\mathcal{J}')$. \end{fact} \begin{corollary}\label{corcovsub} If $\mathcal{J}'\subseteq\mathcal{J}$ are ideals on $X$, then $\mathbf{c}^{\mathcal{J}}_{[X]^{<\aleph_0}} \mathrel{\preceq_{\mathrm{T}}} \mathbf{c}^{\mathcal{J}'}_{[X]^{<\aleph_0}}$ and $\mathbf{C}_{\mathcal{J}}\mathrel{\preceq_{\mathrm{T}}} \mathbf{C}_{\mathcal{J}'}$. In particular, $\cov(\mathcal{J})\leq \cov(\mathcal{J}')$ and $\non(\mathcal{J}')\leq\non(\mathcal{J})$. \end{corollary} We look at the cardinal characteristics associated with $\mathcal{N}_J$ and $\mathcal{N}^*_J$ when $J$ is an ideal on $\omega$. 
If $J}%{\mathcal{J}$ has the~Baire property then the cardinal characteristics associated with $\mathcal{N}_J}%{\mathcal{J}$ equal to those associated with $\mathcal{N}$ because $\mathcal{N}_J=\mathcal{N}$ (\autoref{BaireForNJ2}). Moreover, since $\mathcal{N}^*_J=\mathcal{E}$, the~cardinal characteristics associated with $\NstaridealJ{J}%{\mathcal{J}}$ equal to those associated with $\mathcal{E}$. We recall below some results about the cardinal characteristics associated with $\mathcal{E}$. \begin{theorem}[{\cite{BartSh}}, see also~{\cite[Sec.~2.6]{BJ}}]\label{thm:E} \ \begin{enumerate}[label =\rm (\alph*)] \item $\min\{\mathfrak{b},\non(\mathcal{N})\} \leq\non(\mathcal{E})\leq\min\{\non(\mathcal{M}),\non(\mathcal{N})\}$ \item $\max\{\cov(\mathcal{M}),\cov(\mathcal{N})\}\leq\cov(\mathcal{E}) \leq\max\{\mathfrak{d},\cov(\mathcal{N})\}$. \item $\add(\mathcal{E},\mathcal{N})=\cov(\mathcal{M})$ and $\cof(\mathcal{E},\mathcal{N})=\non(\mathcal{M})$. \item $\add(\mathcal{E})=\add(\mathcal{M})$ and $\cof(\mathcal{E})=\cof(\mathcal{M})$. \end{enumerate} \end{theorem} \begin{theorem}[{\cite[Lem.~7.4.3]{BJ}}] $\mathfrak{s}\leq \non(\mathcal{E})$ and $\cov(\mathcal{E})\leq \mathfrak{r}$. \end{theorem} Thanks to the previous results, and the fact that $\mathcal{E}\subseteq\NstaridealJ{J}%{\mathcal{J}}\subseteq\NidealJ{J}%{\mathcal{J}}\subseteq\mathcal{N}$, we obtain: \begin{theorem}\label{covnonvscichon} The covering numbers of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\cov(\mathcal{N})$ and $\cof(\mathcal{M})$ (in the upper part of Cicho\'n's diagram), and their uniformity numbers are between $\add(\mathcal{M})$ and $\non(\mathcal{N})$ (in the lower part of Cicho\'n's diagram). Concretely, \[\begin{array}{ccccccccc} \cov(\mathcal{N}) &\leq & \cov(\mathcal{N}_J) & \leq & \cov(\mathcal{N}^*_J) &\leq & \cov(\mathcal{E}) &\leq & \min\{\cof(\mathcal{M}),\mathfrak{r}\},\\[1ex] \max\{\add(\mathcal{M}), \mathfrak{s}\} &\leq & \non(\mathcal{E}) &\leq & \non(\mathcal{N}^*_J) &\leq & \non(\NidealJ{J}%{\mathcal{J}}) &\leq & \non(\mathcal{N}). \end{array}\] \end{theorem} We now turn to the additivity and cofinality numbers. In the case of $J=\mathrm{Fin}$, we can characterize $\add(\mathcal{N})$ and $\cof(\mathcal{N})$ using slaloms. \begin{definition} Let $b=\langle b(n):\, n<\omega\rangle$ be a sequence of non-empty sets, and let $h\in\omega^\omega$. Denote \[\begin{split} \prod b & := \prod_{n<\omega}b(n),\\ S}%{\mathcal{S}(b,h) & := \prod_{n<\omega}[b(n)]^{\leq h(n)}. \end{split}\] Define the relational system $\mathbf{Lc}(b,h):=\langle \prod b, S}%{\mathcal{S}(b,h), \in^*\rangle$ where \[x\in ^* y \text{ iff }\set{n<\omega}{x(n)\notin y(n)} \text{ is finite}.\] Denote $\mathfrak{b}^{\rm Lc}_{b,h}:=\mathfrak{b}(\mathbf{Lc}(b,h))$ and $\mathfrak{d}^{\rm Lc}_{b,h}:=\mathfrak{d}(\mathbf{Lc}(b,h))$. When $b$ is the constant sequence $\omega$, we use the notation $\mathbf{Lc}(\omega,h)$ and denote its associated cardinal characteristics by $\mathfrak{b}^{\rm Lc}_{\omega,h}$ and $\mathfrak{d}^{\rm Lc}_{\omega,h}$. \end{definition} \begin{theorem}[Bartoszy\'nski~{\cite{BA84,BartInv}}]\label{thmBart} Assume that $h\in\omega^\omega$ diverges to infinity. Then, $\mathbf{Lc}(b,h)\mathrel{\cong_{\mathrm{T}}} \mathcal{N}$. In particular, $\mathfrak{b}^{\rm Lc}_{\omega,h}=\add(\mathcal{N})$ and $\mathfrak{d}^{\rm Lc}_{\omega,h}=\cof(\mathcal{N})$. \end{theorem} We propose the following relational system, which is practical to find bounds for the additivity and cofinality of $\mathcal{N}_J$ and $\mathcal{N}^*_J$. 
\begin{definition}\label{def:Sbf} Let $J$ be an ideal on $\omega$. For $\bar c,\bar d\in{\overline{\Omega}}$, define the relation
\[\bar c\subseteq^J \bar d \text{ iff }\set{n<\omega}{c_n\nsubseteq d_n}\in J.\]
Define the relational system $\mathbf{S}_J:=\langle {\overline{\Omega}}, {\overline{\Omega}}, \subseteq^J\rangle$, and denote $\mathfrak{b}_J({\overline{\Omega}}):=\mathfrak{b}(\mathbf{S}_J)$ and $\mathfrak{d}_J({\overline{\Omega}}):=\mathfrak{d}(\mathbf{S}_J)$. \end{definition}
It is clear that $\mathfrak{b}_J({\overline{\Omega}})$ is regular and $\mathfrak{b}_J({\overline{\Omega}})\leq\cf(\mathfrak{d}_J({\overline{\Omega}}))\leq\mathfrak{d}_J({\overline{\Omega}})$.
\begin{theorem}\label{omegabarvsNJ} Let $J$ be an ideal on $\omega$. Then $\mathcal{N}_J\mathrel{\preceq_{\mathrm{T}}} \mathbf{S}_J$ and $\mathcal{N}^*_J\mathrel{\preceq_{\mathrm{T}}} \mathbf{S}_J$. In particular, the additivities and cofinalities of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})$ (see \autoref{fig:bomegabar}). \end{theorem}
\begin{figure}[h!]
\centering
\begin{tikzpicture} \small{ \node (bI) at (-2,0) {$\mathfrak{b}_J({\overline{\Omega}})$}; \node (adN*I) at (0,-1) {$\add(\mathcal{N}^*_J)$}; \node (cfN*I) at (2,-1) {$\cof(\mathcal{N}^*_J)$}; \node (adNI) at (0,1) {$\add(\mathcal{N}_J)$}; \node (cfNI) at (2,1) {$\cof(\mathcal{N}_J)$}; \node (dI) at (4,0) {$\mathfrak{d}_J({\overline{\Omega}})$}; \draw (bI) edge[->] (adN*I) (bI) edge[->] (adNI) (adN*I) edge[->] (cfN*I) (adNI) edge[->] (cfNI) (cfNI) edge [->] (dI) (cfN*I) edge [->] (dI); }
\end{tikzpicture}
\caption{Diagram of inequalities between the cardinal characteristics associated with $\mathbf{S}_J$, and the additivities and cofinalities of $\mathcal{N}_J$ and $\mathcal{N}^*_J$.}\label{fig:bomegabar}
\end{figure}
\begin{proof} Define $F\colon \mathcal{N}_J\to {\overline{\Omega}}$ such that $X\subseteq N_J(F(X))$ for any $X\in\mathcal{N}_J$, and define $G\colon {\overline{\Omega}} \to \mathcal{N}_J$ by $G(\bar d):=N_J(\bar d)$. It is clear that $(F,G)$ is a Tukey connection from $\mathcal{N}_J$ into $\mathbf{S}_J$, because $\bar c\subseteq^J \bar d$ implies $N_J(\bar c)\subseteq N_J(\bar d)$. Similarly, we obtain a Tukey connection from $\mathcal{N}^*_J$ into $\mathbf{S}_J$ via functions $F^*\colon \mathcal{N}^*_J\to {\overline{\Omega}}$ and $G^*\colon {\overline{\Omega}} \to \mathcal{N}^*_J$ such that $X\subseteq N^*_J(F^*(X))$ for $X\in\mathcal{N}^*_J$ and $G^*(\bar d):= N^*_J(\bar d)$, using that $\bar c\subseteq^J \bar d$ also implies $N^*_J(\bar c)\subseteq N^*_J(\bar d)$. \end{proof}
Analogous to \autoref{RBeq}, we have the following result about $\mathbf{S}_J$.
\begin{theorem}\label{KBomegabar} Let $J$ and $K$ be ideals on $\omega$.
\begin{enumerate}[label= \rm (\alph*)]
\item\label{it:KBomegabar} If $K\mathrel{\leq_{\mathrm{KB}}} J$ then $\mathbf{S}_J\mathrel{\preceq_{\mathrm{T}}} \mathbf{S}_K$, in particular $\mathfrak{b}_K({\overline{\Omega}})\leq\mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})\leq \mathfrak{d}_K({\overline{\Omega}})$.
\item\label{it:KB'omegabar} If $K \mathrel{\leq_{\overline{\mathrm{KB}}}} J$ then $\mathbf{S}_K\mathrel{\preceq_{\mathrm{T}}} \mathbf{S}_J$, in particular $\mathfrak{b}_J({\overline{\Omega}})\leq\mathfrak{b}_K({\overline{\Omega}})$ and $\mathfrak{d}_K({\overline{\Omega}})\leq \mathfrak{d}_J({\overline{\Omega}})$.
\item\label{it:RBomegabar} If $K \mathrel{\leq_{\mathrm{RB}}} J$ then $\mathbf{S}_K\mathrel{\cong_{\mathrm{T}}} \mathbf{S}_J$, in particular $\mathfrak{b}_K({\overline{\Omega}}) =\mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_K({\overline{\Omega}}) = \mathfrak{d}_J({\overline{\Omega}})$. \end{enumerate} \end{theorem} \begin{proof} It is clear that~\ref{it:RBomegabar} follows from~\ref{it:KBomegabar} and~\ref{it:KB'omegabar}. Let $f\colon\omega\to \omega$ be a finite-to-one function and denote $I_n:=f^{-1}[\{n\}]$. Like in the proof of \autoref{RBeq}, define $F'\colon {\overline{\Omega}}\to {\overline{\Omega}}$ by $F'(\bar c):=\bar c'$ where $c'_n:=\bigcup_{k\in I_n}c_k$, and define $F^-\colon {\overline{\Omega}}\to {\overline{\Omega}}$ by $F^-(\bar c):=\bar c^-$ where $c^-_k:=c_{f(k)}$. If $K\subseteq f^{\to}(J)$ then $(F',F^-)$ is a Tukey connection from $\mathbf{S}_J$ into $\mathbf{S}_K$, which shows~\ref{it:KBomegabar}. To prove this, assume $\bar c,\bar d\in{\overline{\Omega}}$ and $\bar c'\subseteq^K \bar d$, and we show $\bar c \subseteq^J \bar d^-$. The hypothesis indicates that $\set{n<\omega}{c'_n\subseteq d_n}\in K^d$, which implies that $\set{k<\omega}{c'_{f(k)}\subseteq d_{f(k)}}\in J^d$. Since $c_k\subseteq c'_{f(k)}$, the previous set is contained in $\set{k<\omega}{c_k\subseteq d^-_k}$, so $\bar c \subseteq^J \bar d^-$. To show~\ref{it:KB'omegabar}, we verify that, whenever $f^\to(J)\subseteq K$, $(F^-,F')$ is a Tukey connection from $\mathbf{S}_K$ into $\mathbf{S}_J$. Let $\bar c,\bar d\in{\overline{\Omega}}$ and assume that $\bar c^-\subseteq^J \bar d$, i.e.\ $\set{k<\omega}{c_{f(k)}\subseteq d_k}\in J^d$. Since $d_k\subseteq d'_{f(k)}$, this set is contained in $\set{k<\omega}{c_{f(k)}\subseteq d'_{f(k)}}$, so $\set{n<\omega}{c_n\subseteq d'_n}\in K^d$, i.e.\ $\bar c\subseteq^K \bar d'$. \end{proof} In the case $J=\mathrm{Fin}$, we obtain the following characterization of the additivity and cofinality of $\mathcal{N}$. \begin{theorem}\label{omegabarFin} $\mathfrak{b}_\mathrm{Fin}({\overline{\Omega}})=\add(\mathcal{N})$ and $\mathfrak{d}_\mathrm{Fin}({\overline{\Omega}})=\cof(\mathcal{N})$. \end{theorem} \begin{proof} Note that $\mathcal{N}=\mathcal{N}_\mathrm{Fin}\mathrel{\preceq_{\mathrm{T}}} \mathbf{S}_\mathrm{Fin}$ by \autoref{omegabarvsNJ}, so $\mathfrak{b}_\mathrm{Fin}({\overline{\Omega}})\leq\add(\mathcal{N})$ and $\cof(\mathcal{N})\leq \mathfrak{d}_\mathrm{Fin}({\overline{\Omega}})$. We show the converse inequality for $\add(\mathcal{N})$. It is enough to prove that, whenever $F\subseteq {\overline{\Omega}}$ has size ${<}\add(\mathcal{N})$, it has some upper $\subseteq^\mathrm{Fin}$-bound. For each $\bar c\in F$, find a function $f^{\bar c}\in \omega^\omega$ such that $\mu\big(\bigcup_{k\geq f^{\bar c}(n)}c_k\big)<\frac{1}{(n+1)2^n}$ for all $n<\omega$. Now $|F|<\add(\mathcal{N})\leq\mathfrak{b}$, so there is some increasing $f\in\omega^\omega$ with $f(0)=0$ dominating $\set{f^{\bar c}}{\bar c\in F}$, which means that, for any $\bar c\in F$, $\mu\big(\bigcup_{k\geq f(n)}c_k\big)<\frac{1}{(n+1)2^n}$ for all but finitely many $n<\omega$. By making finitely many modifications to each $\bar c\in F$, we can assume that the previous inequality is valid for all $n<\omega$. For each $n<\omega$, let $I_n:=[f(n),f(n+1))$. Consider the functions $b^f$ and $h$ with domain $\omega$ such that $b^f(n):=\set{s\in\Omega^{I_n}}{\mu\left( \bigcup_{k\in I_n}s_k\right)<\frac{1}{(n+1)2^n}}$ and $h(n):=n+1$. 
Since $\mathbf{Lc}(b^f,h)\mathrel{\cong_{\mathrm{T}}} \mathbf{Lc}(\omega,h)\mathrel{\cong_{\mathrm{T}}} \mathcal{N}$, we obtain that $\mathfrak{b}^{\rm Lc}_{b^f,h}=\add(\mathcal{N})$ by \autoref{thmBart}. Since $F$ can be seen as a subset of $\prod b^f$, there is some $\varphi\in S(b^f,h)$ such that, for any $\bar c\in F$, $\bar c{\upharpoonright} I_n\in \varphi(n)$ for all but finitely many $n<\omega$. Define $\bar d\in\Omega^\omega$ by $d_k:=\bigcup_{s\in \varphi(n)}s_k$ for $k\in I_n$. Note that \[\mu\left(\bigcup_{k\in I_n}d_k\right) = \mu\left( \bigcup_{s\in\varphi(n)} \bigcup_{k\in I_n}s_k\right)<\frac{1}{(n+1)2^n}(n+1)=\frac{1}{2^n},\] thus $\bar d\in{\overline{\Omega}}$. On the other hand, for any $\bar c\in F$, $c_k\subseteq d_k$ for all but finitely many $k<\omega$, i.e.\ $\bar c\subseteq^\mathrm{Fin} \bar d$. The proof of $\mathfrak{d}_\mathrm{Fin}({\overline{\Omega}})\leq \cof(\mathcal{N})$ is similar. Fix a dominating family $D$ of size $\mathfrak{d}$ formed by increasing functions $f$ such that $f(0)=0$. For each $f\in D$, note that $\mathfrak{d}^{\rm Lc}_{b^f,h}=\cof(\mathcal{N})$ by \autoref{thmBart}, so we can choose some witness $S^f\subseteq S(b^f,h)$ and, for each $\varphi\in S^f$, define $d^{f,\varphi}_k:=\bigcup_{s\in \varphi(n)}s_k$ for $k\in[f(n),f(n+1))$. Then, $E:=\set{\bar d^{f,\varphi}}{\varphi\in S^f,\ f\in D}$ has size ${\leq}\cof(\mathcal{N})$ and it is $\mathbf{S}_\mathrm{Fin}$-dominating, i.e.\ any $\bar c\in{\overline{\Omega}}$ is $\subseteq^\mathrm{Fin}$-bounded by some $\bar d\in E$. \end{proof} \begin{corollary}\label{addN-covM} The additivities of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\add(\mathcal{N})$ and $\cov(\mathcal{M})$ (at the bottom of Cicho\'n's diagram); the cofinalities of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are between $\non(\mathcal{M})$ and $\cof(\mathcal{N})$ (at the top of Cicho\'n's diagram). \end{corollary} \begin{proof} Since $\mathrm{Fin}\mathrel{\leq_{\mathrm{KB}}} J$, by \autoref{KBomegabar} we obtain that $\mathfrak{b}_\mathrm{Fin}({\overline{\Omega}})\leq \mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})\leq\mathfrak{d}_\mathrm{Fin}({\overline{\Omega}})$. Hence, by \autoref{omegabarvsNJ} and \autoref{omegabarFin}, we obtain \[\begin{split} \add(\mathcal{N})\leq &\ \mathfrak{b}_J({\overline{\Omega}})\leq \min\{\add(\mathcal{N}_J),\add(\mathcal{N}^*_J)\} \text{ and }\\ \max\{\cof(\mathcal{N}_J),\cof(\mathcal{N}^*_J)\}\leq &\ \mathfrak{d}_J({\overline{\Omega}})\leq \cof(\mathcal{N}). \end{split}\] On the other hand, by \autoref{cofIJrel} and \autoref{thm:E}, $\add(\mathcal{N}_J)=\add(\mathcal{N}_J,\mathcal{N}_J)\leq \add(\mathcal{E},\mathcal{N})=\cov(\mathcal{M})$ and $\non(\mathcal{M})= \cof(\mathcal{E},\mathcal{N}) \leq \cof(\mathcal{N}_J,\mathcal{N}_J)=\cof(\mathcal{N}_J)$, likewise for $\mathcal{N}^*_J$. \end{proof} \section{Consistency results}\label{sec:cons} We show the behaviour of the cardinal characteristics associated with $\mathcal{N}_J$ and $\mathcal{N}^*_J$ in different forcing models. As usual, we start with the Cohen model, where the behaviour of these cardinal characteristics is similar to that of $\mathfrak{b}_J$ and $\mathfrak{d}_J$, in the sense of~\cite{Canjar}. Inspired by this reference, we present the following effect of adding a single Cohen real.
\begin{lemma}\label{cohenlemma} Cohen forcing $\mathbb{C}$ adds a real $\bar e\in{\overline{\Omega}}$ such that, for any ideal $J$ on $\omega$ in the ground model, there is some ideal $J'\supseteq J$ on $\omega$ in the generic extension, such that $\bar c\subseteq^{J'} \bar e$ for any $\bar c\in{\overline{\Omega}}$ in the ground model. \end{lemma} \begin{proof} Consider Cohen forcing $\mathbb{C}$ as the poset formed by pairs of finite sequences $p=(n^p,c^p)$ such that $n^p=\langle n^p_k:\, k<m \rangle$ is an increasing sequence of natural numbers, $c^p= \langle c^p_i :\, i<n^p_{m-1}\rangle$ is a sequence of clopen subsets of $2^\omega$ and \begin{equation}\label{eq:cohencond} \mu\left(\bigcup_{i=n^p_k}^{n^p_{k+1}-1} c^p_i\right)<2^{-(k+1)} \end{equation} for any $k<m-1$. The order is $q\leq p$ iff $n^q$ end-extends $n^p$ and $c^q$ end-extends $c^p$. If $G$ is $\mathbb{C}$-generic over the ground model $V$, we define $\bar e$ by $e_k:=c^p_k$ for some $p\in G$ (this value does not depend on such a $p$). It is easy to show that $\bar e \in {\overline{\Omega}}$. It is enough to show that, in the Cohen extension, \[J\cup\set{\set{k<\omega}{c_k\nsubseteq e_k}}{\bar c\in {\overline{\Omega}}\cap V} \text{ has the finite union property,}\] i.e.\ $\omega$ cannot be covered by finitely many members of that collection. So, in the ground model, fix $a\in J$, $F\subseteq {\overline{\Omega}}$ finite and $p\in\mathbb{C}$ with $n^p=\langle n^p_k:\, k<m \rangle$ and $c^p= \langle c^p_i :\, i<n^p_{m-1}\rangle$. We have to show that there is some $q\leq p$ and some $k<\omega$ such that $q$ forces \[k\notin a\cup\bigcup_{\bar c\in F}\set{k<\omega}{c_k\nsubseteq e_k},\] that is, $k\notin a$ and $c_k\subseteq c^q_k$ for any $\bar c\in F$. To see this, find some $n'\in \omega\smallsetminus a$ larger than $n^p_{m-1}$ such that, for any $\bar c\in F$, \[\mu\left(\bigcup_{k\geq n'}c_k\right)<\frac{1}{(|F|+1)2^{m+1}}.\] Define $q\in\mathbb{C}$ such that $n^q$ has length $m+2$, it extends $n^p$, $n^q_m:=n'$ and $n^q_{m+1}:=n'+1$, and such that $c^q$ extends $c^p$, $c^q_i=\emptyset$ for all $n^p_{m-1}\leq i< n'$, and $c^q_{n'}:= \bigcup_{\bar c\in F}c_{n'}$. It is clear that $q\in\mathbb{C}$ is stronger than $p$, and that $c^q_{n'}$ contains $c_{n'}$ for all $\bar c\in F$. So $k:=n'$ works. \end{proof} Since FS (finite support) iterations of (non-trivial) posets adds Cohen reals at limit steps, we have the following general consequence of the previous lemma. \begin{theorem}\label{thm:FSit} Let $\pi$ be a limit ordinal with uncountable cofinality and let $\mathbb{P}=\langle \mathbb{P}_\alpha,\dot{\mathbb{Q}}_\alpha:\, \alpha<\pi\rangle$ be a FS iteration of non-trivial $\cf(\pi)$-cc posets. Then, $\mathbb{P}$ forces that there is some (maximal) ideal $J$ such that $\mathfrak{b}_{J}({\overline{\Omega}})=\add(\mathcal{N}_{J})=\add(\mathcal{N}^*_{J})=\cof(\mathcal{N}^*_{J})= \cof(\mathcal{N}_{J}) =\mathfrak{d}_{J}({\overline{\Omega}})=\cf(\pi)$. \end{theorem} \begin{proof} Let $\kappa:=\cf(\pi)$ and $L:=\{0\}\cup\set{\alpha<\pi}{\alpha \text{ limit}}$. For each $\alpha\in L$ let $\bar e_\alpha$ be a $\mathbb{P}_{\alpha+\omega}$-name of a Cohen real in ${\overline{\Omega}}$ (in the sense of the proof of \autoref{cohenlemma}) over the $\mathbb{P}_\alpha$-extension. We construct, by recursion, a sequence $\langle \dot J_\alpha:\, \alpha\in L\cup\{\pi\}\rangle$ such that $\dot J_\alpha$ is a $\mathbb{P}_\alpha$-name of an ideal on $\omega$ and $\mathbb{P}$ forces that $\dot J_\alpha\subseteq \dot J_\beta$ when $\alpha<\beta$. 
We let $\dot J_0$ be (the $\mathbb{P}_0$-name of) any ideal $J_0$ in the ground model.\footnote{Recall that $\mathbb{P}_0$ is the trivial poset, so its generic extension is the ground model itself.} For the successor step, assume we have constructed $\dot J_\alpha$ at a stage $\alpha\in L$. By \autoref{cohenlemma}, we obtain a $\mathbb{P}_{\alpha+\omega}$-name $\dot J_{\alpha+\omega}$ of an ideal extending $\dot J_{\alpha}$, such that $\bar e_\alpha$ $\subseteq^{\dot J_{\alpha+\omega}}$-dominates all the $\bar c\in{\overline{\Omega}}$ from the $\mathbb{P}_{\alpha}$-extension. For the limit step, when $\gamma$ is a limit point of $L$ (which includes the case $\gamma=\pi$), just let $\dot J_\gamma$ be the $\mathbb{P}_\gamma$-name of $\bigcup_{\alpha<\gamma}\dot J_\alpha$. This finishes the construction. We show that $\dot J_\pi$ is as required. In the final generic extension, since $\bar e_\alpha \subseteq^{J_{\beta+\omega}} \bar e_\beta$ for all $\alpha<\beta$ in $L$ and $J_{\beta+\omega}\subseteq J_\pi$, we obtain $\bar e_\alpha \subseteq^{J_\pi} \bar e_\beta$. To conclude $\mathfrak{b}_{J_\pi}({\overline{\Omega}})=\mathfrak{d}_{J_\pi}({\overline{\Omega}})=\cf(\pi)$, it remains to show that $\set{\bar e_\alpha}{\alpha\in L}$ is $\subseteq^{J_\pi}$-dominating (because $L$ is cofinal in $\pi$, and the rest follows by \autoref{omegabarvsNJ}). Indeed, if $\bar c\in{\overline{\Omega}}$ in the final extension, then $\bar c$ is in some intermediate extension at $\alpha\in L$, so $\bar c \subseteq^{J_{\alpha+\omega}} \bar e_\alpha$ and hence $\bar c \subseteq^{J_\pi} \bar e_\alpha$. Note that any (maximal) ideal extending $J_\pi$ also satisfies the conclusion (by \autoref{KBomegabar}). \end{proof} The previous results gives a lot of information about the effect of adding many Cohen reals. \begin{theorem}\label{cohenthm} Let $\lambda$ be an uncountable cardinal. Then $\mathbb{C}_\lambda$ forces that, for any regular $\aleph_1\leq\kappa\leq\lambda$, there is some (maximal) ideal $J^\kappa$ on $\omega$ such that $\mathfrak{b}_{J^\kappa}({\overline{\Omega}})=\add(\mathcal{N}_{J^\kappa})=\add(\mathcal{N}^*_{J^\kappa})=\cof(\mathcal{N}^*_{J^\kappa})= \cof(\mathcal{N}_{J^\kappa}) =\mathfrak{d}_{J^\kappa}({\overline{\Omega}})=\kappa$. \end{theorem} \begin{proof} Let $\kappa$ be a regular cardinal between $\aleph_1$ and $\lambda$. Recall that $\mathbb{C}_\lambda$ is forcing equivalent with $\mathbb{C}_{\lambda+\kappa}$, so we show that the latter adds the required $J^\kappa$. In fact, $\mathbb{C}_{\lambda+\kappa}$ can be seen as the FS iteration of $\mathbb{C}$ of length $\lambda+\kappa$. Since $\cf(\lambda+\kappa)=\kappa$, by \autoref{thm:FSit} we get that $\mathbb{C}_{\lambda+\kappa}$ adds the required $J^\kappa$. Note that any (maximal) ideal extending $J^\kappa$ also satisfies the conclusion (by \autoref{KBomegabar}). \end{proof} Using sums of ideals, we can obtain from the previous theorem that, after adding many Cohen reals, there are non-meager ideals $K$ satisfying $\non(\mathcal{N}^*_K)<\non(\mathcal{N}_K)$ and $\cov(\mathcal{N}_K)<\cov(\mathcal{N}^*_K)$ (\autoref{cor:CohenI+J}). Before proving this, we calculate the cardinal characteristics associated with measure zero modulo sum of ideals. \begin{lemma}\label{lem:intid} Let $\mathcal{I}$ and $\mathcal{J}$ be ideals on an infinite set $X$. 
Then: \begin{enumerate}[label= \rm (\alph*)] \item\label{it:addIcapJ} $\min\{\add(\mathcal{I}),\add(\mathcal{J})\} \leq \add(\mathcal{I}\cap\mathcal{J})$ and $\cof(\mathcal{I}\cap\mathcal{J}) \leq \max\{\cof(\mathcal{I}),\cof(\mathcal{J})\}$. \item $\non(\mathcal{I}\cap\mathcal{J})=\min\{\non(\mathcal{I}), \non(\mathcal{J})\}$ and $\cov(\mathcal{I}\cap\mathcal{J}) = \max\{\cov(\mathcal{I}),\cov(\mathcal{J})\}$. \end{enumerate} For the following items, assume that $\mathcal{I}\cup\mathcal{J}$ generates an ideal $\mathcal{K}$. \begin{enumerate}[resume*] \item\label{it:gencof} $\min\{\add(\mathcal{I}),\add(\mathcal{J})\} \leq \add(\mathcal{K})$ and $\cof(\mathcal{K}) \leq \max\{\cof(\mathcal{I}),\cof(\mathcal{J})\}$. \item\label{it:gencov} $\max\{\non(\mathcal{I}),\non(\mathcal{J})\} \leq \non(\mathcal{K})$ and $\cov(\mathcal{K}) \leq \min\{\cov(\mathcal{I}),\cov(\mathcal{J})\}$. \end{enumerate} \end{lemma} \begin{proof} We use the product of relational systems to shorten this proof. If $\mathbf{R}=\langle X,Y,R\rangle$ and $\mathbf{R}'=\langle X',Y',R'\rangle$ are relational systems, we define $\mathbf{R}\times\mathbf{R}':=\langle X\times X', Y\times Y', R^\times\rangle$ with the relation $(x,x')\, R^\times\, (y,y')$ iff $x\, R\, y$ and $x'\, R'\, y'$. Recall that $\mathfrak{b}(\mathbf{R}\times\mathbf{R}')=\min\{\mathfrak{b}(\mathbf{R}),\mathfrak{b}(\mathbf{R}')\}$ and $\max\{\mathfrak{d}(\mathbf{R}),\mathfrak{d}(\mathbf{R}')\}\leq \mathfrak{d}(\mathbf{R}\times\mathbf{R}')\leq \mathfrak{d}(\mathbf{R})\cdot \mathfrak{d}(\mathbf{R}')$ (so equality holds when some $\mathfrak{d}$-number is infinite), see e.g.~\cite[Sec.~4]{blassbook}. As relational systems, it is easy to show that $\mathcal{I}\cap\mathcal{J} \mathrel{\preceq_{\mathrm{T}}} \mathcal{I}\times\mathcal{J}$ and $\mathbf{C}_{\mathcal{I}\cap\mathcal{J}}\mathrel{\preceq_{\mathrm{T}}} \mathbf{C}_\mathcal{I}\times \mathbf{C}_\mathcal{J}$, which implies~\ref{it:addIcapJ}, $\min\{\non(\mathcal{I}), \non(\mathcal{J})\}\leq \non(\mathcal{I}\cap\mathcal{J})$ and $\cov(\mathcal{I}\cap\mathcal{J}) \leq \max\{\cov(\mathcal{I}),\cov(\mathcal{J})\}$. The converse inequality for the uniformity and the covering follows by \autoref{corcovsub}, as well as~\ref{it:gencov}. We can also show that $\mathcal{K}\mathrel{\preceq_{\mathrm{T}}} \mathcal{I}\times\mathcal{J}$, which implies~\ref{it:gencof}. \end{proof} \begin{lemma}\label{lem:cardplus} Let $J_1$ and $J_2$ be ideals on $\omega$. Then \begin{linenomath} \begin{align*} \add(\mathcal{N}^*_{J_1\oplus J_2}) & = \min\{\add(\mathcal{N}^*_{J_1}), \add(\mathcal{N}^*_{J_2})\}, & \non(\mathcal{N}^*_{J_1\oplus J_2}) & = \min\{\non(\mathcal{N}^*_{J_1}), \non(\mathcal{N}^*_{J_2})\},\\ \cof(\mathcal{N}^*_{J_1\oplus J_2}) & = \max\{\cof(\mathcal{N}^*_{J_1}), \cof(\mathcal{N}^*_{J_2})\}, & \cov(\mathcal{N}^*_{J_1\oplus J_2}) & = \max\{\cov(\mathcal{N}^*_{J_1}), \cov(\mathcal{N}^*_{J_2})\}, \end{align*} \end{linenomath} and \begin{linenomath} \begin{align*} \add(\mathcal{N}_{J_1\oplus J_2}) & = \min\{\add(\mathcal{N}_{J_1}), \add(\mathcal{N}_{J_2})\}, & \non(\mathcal{N}_{J_1\oplus J_2}) & = \max\{\non(\mathcal{N}_{J_1}), \non(\mathcal{N}_{J_2})\},\\ \cof(\mathcal{N}_{J_1\oplus J_2}) & = \max\{\cof(\mathcal{N}_{J_1}), \cof(\mathcal{N}_{J_2})\}, & \cov(\mathcal{N}_{J_1\oplus J_2}) & = \min\{\cov(\mathcal{N}_{J_1}), \cov(\mathcal{N}_{J_2})\}.
\end{align*} \end{linenomath} \end{lemma} \begin{proof} Wlog assume that $\omega={\mathbb N}_1\cup{\mathbb N}_2$ as a disjoint union of infinite sets and $J_e$ is an ideal on ${\mathbb N}_e$ for $e\in\{1,2\}$. Let $K_1:= J_1\oplus \pts({\mathbb N}_2)$, $K_2:= \pts({\mathbb N}_1)\oplus J_2$ and $K:= J_1\oplus J_2$. Recall from \autoref{exm:sum} and~\ref{exm:sum2} that $J_e$ is isomorphic with $K_e$, $\mathcal{N}^*_K=\mathcal{N}^*_{K_1}\cap \mathcal{N}^*_{K_2}$ and $\mathcal{N}_K$ is the ideal generated by $\mathcal{N}_{K_1} \cup \mathcal{N}_{K_2}$. Then, $\mathcal{N}^*_{J_e}$ and $\mathcal{N}^*_{K_e}$ have the same associated cardinal characteristics, likewise for $\mathcal{N}_{J_e}$ and $\mathcal{N}_{K_e}$, so we get some equalities and inequalities from \autoref{lem:intid}. We show the remaining converse inequalities by using Tukey connections. First, we show $\mathcal{N}_{K_1}\times \mathcal{N}_{K_2} \mathrel{\preceq_{\mathrm{T}}} \mathcal{N}_K$, which guarantees\footnote{Even more, the proof of \autoref{lem:intid}~\ref{it:gencof} allows us to conclude $\mathcal{N}_K \mathrel{\cong_{\mathrm{T}}} \mathcal{N}_{J_1} \times \mathcal{N}_{J_2}$.} \[\add(\mathcal{N}_K)\leq \min\{\add(\mathcal{N}_{J_1}), \add(\mathcal{N}_{J_2})\} \text{ and } \max\{\cof(\mathcal{N}_{J_1}), \cof(\mathcal{N}_{J_2})\} \leq \cof(\mathcal{N}_K).\] Define $F_1\colon \mathcal{N}_{K_1}\times \mathcal{N}_{K_2}\to \mathcal{N}_K$ as follows: If $X\in\mathcal{N}_{K_1}$ and $Y\in\mathcal{N}_{K_2}$ then $X\subseteq N_{K_1}(\bar c^X)= N_{J_1}(\bar c^X{\upharpoonright} {\mathbb N}_1)\times {}^{{\mathbb N}_2}2$ and $Y\subseteq N_{K_2}(\bar d^Y)= {}^{{\mathbb N}_1}2\times N_{J_2}(\bar d^Y{\upharpoonright} {\mathbb N}_2)$ for some $\bar c^X, \bar d^Y\in{\overline{\Omega}}$, so we let \[F_1(X,Y):= N_K(\bar c^{(X,Y)})= N_{K_1}(\bar c^{(X,Y)})\cup N_{K_2}(\bar c^{(X,Y)})\] where $\bar c^{(X,Y)} := (\bar c^X {\upharpoonright} {\mathbb N}_1)\cup (\bar d^Y{\upharpoonright} {\mathbb N}_2)$. To define $G_1\colon \mathcal{N}_K\to \mathcal{N}_{K_1}\times \mathcal{N}_{K_2}$, for $Z\in \mathcal{N}_K$ choose $\bar e^Z\in{\overline{\Omega}}$ such that $Z\subseteq N_K(\bar e^Z)= N_{K_1}(\bar e^Z)\cup N_{K_2}(\bar e^Z)$, so we let $G_1(Z):= (N_{K_1}(\bar e^Z), N_{K_2}(\bar e^Z))$. Then $(F_1,G_1)$ is the desired Tukey connection: Assume $X\in \mathcal{N}_{K_1}$, $Y\in \mathcal{N}_{K_2}$, $Z\in \mathcal{N}_K$ and $F_1(X,Y)\subseteq Z$. Then $N_{K_1}(\bar c^{(X,Y)})\cup N_{K_2}(\bar c^{(X,Y)}) \subseteq N_{K_1}(\bar e^Z)\cup N_{K_2}(\bar e^Z)$. Note that $N_{K_1}(\bar c^{(X,Y)}) = N_{K_1}(\bar c^X)$ and $N_{K_2}(\bar c^{(X,Y)})= N_{K_2}(\bar d^Y)$, so it is enough to show that $N_{K_1}(\bar c^X) \subseteq N_{K_1}(\bar e^Z)$ and $N_{K_2}(\bar d^Y) \subseteq N_{K_2}(\bar e^Z)$. We prove one of the inclusions (the other is analogous). Choose some $y'\in {}^{{\mathbb N}_2}2 \smallsetminus N_{J_2}(\bar e^Z {\upharpoonright} {\mathbb N}_2)$. If $(x,y)\in N_{K_1}(\bar c^X)$ then $(x,y')\in N_{K_1}(\bar c^X)$, so $(x,y')\in N_{K_1}(\bar e^Z)\cup N_{K_2}(\bar e^Z)$. Since $y'\notin N_{J_2}(\bar e^Z {\upharpoonright} {\mathbb N}_2)$, we get $(x,y')\notin N_{K_2}(\bar e^Z)$, so $(x,y')\in N_{K_1}(\bar e^Z)$ and hence $(x,y) \in N_{K_1}(\bar e^Z)$.
A similar argument guarantees $\mathcal{N}^*_{K_1}\times \mathcal{N}^*_{K_2} \mathrel{\preceq_{\mathrm{T}}} \mathcal{N}^*_K$.\footnote{And, by the proof of \autoref{lem:intid}~\ref{it:addIcapJ}, $\mathcal{N}^*_K\mathrel{\cong_{\mathrm{T}}} \mathcal{N}^*_{J_1}\times \mathcal{N}^*_{J_2}$.} Define $F_2\colon \mathcal{N}^*_{K_1}\times \mathcal{N}^*_{K_2}\to \mathcal{N}^*_K$ as follows: If $X\in\mathcal{N}^*_{K_1}$ and $Y\in\mathcal{N}^*_{K_2}$ then $X\subseteq N^*_{K_1}(\bar c_*^{X})= N^*_{J_1}(\bar c_*^{X}{\upharpoonright} {\mathbb N}_1)\times {}^{{\mathbb N}_2}2$ and $Y\subseteq N^*_{K_2}(\bar d_*^Y)= {}^{{\mathbb N}_1}2\times N^*_{J_2}(\bar d_*^Y{\upharpoonright} {\mathbb N}_2)$ for some $\bar c_*^X, \bar d_*^Y\in{\overline{\Omega}}$, which are chosen in such a way that $N^*_{K_1}(\bar c_*^{X})$ and $N^*_{K_2}(\bar d_*^Y)$ are non-empty. So we let \[F_2(X,Y):= N^*_K(\bar c_*^{(X,Y)})= N^*_{K_1}(\bar c_*^{(X,Y)})\cap N^*_{K_2}(\bar c_*^{(X,Y)}) = N^*_{J_1}(\bar c_*^X {\upharpoonright} {\mathbb N}_1)\times N^*_{J_2}(\bar d_*^Y{\upharpoonright} {\mathbb N}_2)\] where $\bar c_*^{(X,Y)} := (\bar c_*^X {\upharpoonright} {\mathbb N}_1)\cup (\bar d_*^Y{\upharpoonright} {\mathbb N}_2)$. To define $G_2\colon \mathcal{N}^*_K\to \mathcal{N}^*_{K_1}\times \mathcal{N}^*_{K_2}$, for $Z\in \mathcal{N}^*_K$ choose $\bar e_*^Z\in{\overline{\Omega}}$ such that $Z\subseteq N^*_K(\bar e_*^Z)= N^*_{K_1}(\bar e_*^Z)\cap N^*_{K_2}(\bar e_*^Z)$, so we let $G_2(Z):= (N^*_{K_1}(\bar e_*^Z), N^*_{K_2}(\bar e_*^Z))$. It is routine to check that $(F_2,G_2)$ is the required Tukey connection. To show the remaining inequalities (the converses corresponding to \autoref{lem:intid}~\ref{it:gencov}), we look at the following operation of relational systems. If $\mathbf{R}=\langle X,Y,R\rangle$ and $\mathbf{R}'=\langle X',Y',R'\rangle$ are relational systems, we define the \emph{dual product} \[\mathbf{R} \times^\perp \mathbf{R}':=\langle X\times X', Y\times Y', R^{\times,\perp} \rangle\] with the relation $(x,x') R^{\times,\perp} (y,y')$ iff either $x R y$ or $x' R' y'$. Note that \[\mathbf{R} \times^\perp \mathbf{R}'= (\mathbf{R}^\perp \times {\mathbf{R}'}^\perp)^\perp,\] therefore \[\mathfrak{d}(\mathbf{R}\times^\perp \mathbf{R}')=\min\{\mathfrak{d}(\mathbf{R}),\mathfrak{d}(\mathbf{R}')\} \text{ and } \max\{\mathfrak{b}(\mathbf{R}),\mathfrak{b}(\mathbf{R}')\}\leq \mathfrak{b}(\mathbf{R}\times^\perp\mathbf{R}')\leq \mathfrak{b}(\mathbf{R})\cdot \mathfrak{b}(\mathbf{R}')\] (so equality holds when some $\mathfrak{b}$-number is infinite). We show that $\mathbf{C}_{\mathcal{N}_{J_1}}\times^\perp \mathbf{C}_{\mathcal{N}_{J_2}}\mathrel{\preceq_{\mathrm{T}}} \mathbf{C}_{\mathcal{N}_K}$. We can think of ${}^\omega 2 = {}^{{\mathbb N}_1}2 \times {}^{{\mathbb N}_2}2$ by identifying $(x,y)\in {}^{{\mathbb N}_1}2 \times {}^{{\mathbb N}_2}2$ with the concatenation of $x$ and $y$. So consider $f\colon {}^{{\mathbb N}_1}2\times {}^{{\mathbb N}_2}2 \to {}^\omega 2$ as the identity function; and define $G\colon \mathcal{N}_K\to \mathcal{N}_{J_1}\times \mathcal{N}_{J_2}$ by $G(Z):=(N_{J_1}(\bar e^Z{\upharpoonright} {\mathbb N}_1),N_{J_2}(\bar e^Z{\upharpoonright} {\mathbb N}_2))$. Then $(f,G)$ is the desired Tukey connection: assume that $x\in {}^{{\mathbb N}_1}2$, $y\in {}^{{\mathbb N}_2}2$, $Z\in \mathcal{N}_K$ and $f(x,y) \in Z$.
Then \[f(x,y)\in N_K(\bar e^Z)= N_{K_1}(\bar e^Z) \cup N_{K_2}(\bar e^Z)=(N_{J_1}(\bar e^Z{\upharpoonright} {\mathbb N}_1)\times {}^{{\mathbb N}_2}2) \cup ({}^{{\mathbb N}_1}2\times N_{J_2}(\bar e^Z{\upharpoonright} {\mathbb N}_2)),\] so either $x\in N_{J_1}(\bar e^Z{\upharpoonright} {\mathbb N}_1)$ or $y\in N_{J_2}(\bar e^Z{\upharpoonright} {\mathbb N}_2)$. Although the lemma is verified up to here, we can even guarantee that \[\mathbf{C}_{\mathcal{N}_K}\mathrel{\cong_{\mathrm{T}}} \mathbf{C}_{\mathcal{N}_{J_1}}\times^\perp \mathbf{C}_{\mathcal{N}_{J_2}} \text{ and } \mathbf{C}_{\mathcal{N}^*_K}\mathrel{\cong_{\mathrm{T}}} \mathbf{C}_{\mathcal{N}^*_{J_1}}\times \mathbf{C}_{\mathcal{N}^*_{J_2}}.\qedhere\] \end{proof} \begin{corollary}[of \autoref{cohenthm}]\label{cor:CohenI+J} The poset $\mathbb{C}_\lambda$ forces that, for any regular $\aleph_1\leq\kappa_1\leq\kappa_2\leq\lambda$, there is some non-meager ideal $K$ (e.g.\ $J^{\kappa_1}\oplus J^{\kappa_2}$) such that \begin{linenomath} \begin{align*} \add(\mathcal{N}^*_K)= \add(\mathcal{N}_K) = \non(\mathcal{N}^*_K) = \cov(\mathcal{N}_K) & = \kappa_1,\\ \cof(\mathcal{N}^*_K)= \cof(\mathcal{N}_K) = \cov(\mathcal{N}^*_K) = \non(\mathcal{N}_K) & = \kappa_2. \end{align*} \end{linenomath} \end{corollary} \begin{proof} In the $\mathbb{C}_\lambda$-generic extension, let $J^\kappa$ be a maximal ideal as in \autoref{cohenthm}. Then, $J^{\kappa_1}\oplus J^{\kappa_2}$ is the required ideal by \autoref{lem:cardplus}. \end{proof} We do not know how to force that there is some non-meager ideal $K$ such that $\add(\mathcal{N}^*_K)\neq \add(\mathcal{N}_K)$, likewise for the cofinality. Using \autoref{cohenthm} and well-known forcing models, we can show that ZFC cannot prove any further inequalities between the cardinal characteristics associated with our new ideals and the classical cardinal characteristics of the continuum displayed in \autoref{fig:all20}, although a few open questions remain. We skip most of the details in the following items, but the reader can refer to the definition of the cardinal characteristics and their inequalities in~\cite{BJ,blassbook,BHH}, and learn the forcing techniques from e.g.~\cite{Br,BJ,blassbook,Mmatrix,mejia2}.
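Before going through the list of models, let us spell out, for the reader's convenience, the arithmetic behind \autoref{cor:CohenI+J} (this is only an unpacking of \autoref{lem:cardplus} and \autoref{cohenthm}, not a new claim). In the $\mathbb{C}_\lambda$-extension, all four cardinal characteristics associated with $\mathcal{N}_{J^{\kappa_i}}$, and likewise with $\mathcal{N}^*_{J^{\kappa_i}}$, equal $\kappa_i$, since their additivity and cofinality already do. Hence, for $K=J^{\kappa_1}\oplus J^{\kappa_2}$,
\begin{align*}
\non(\mathcal{N}^*_K) &=\min\{\non(\mathcal{N}^*_{J^{\kappa_1}}),\non(\mathcal{N}^*_{J^{\kappa_2}})\}=\kappa_1, &
\cov(\mathcal{N}^*_K) &=\max\{\cov(\mathcal{N}^*_{J^{\kappa_1}}),\cov(\mathcal{N}^*_{J^{\kappa_2}})\}=\kappa_2,\\
\non(\mathcal{N}_K) &=\max\{\non(\mathcal{N}_{J^{\kappa_1}}),\non(\mathcal{N}_{J^{\kappa_2}})\}=\kappa_2, &
\cov(\mathcal{N}_K) &=\min\{\cov(\mathcal{N}_{J^{\kappa_1}}),\cov(\mathcal{N}_{J^{\kappa_2}})\}=\kappa_1,
\end{align*}
and the additivities and cofinalities are computed analogously.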
\begin{figure} \begin{tikzpicture \small{ \node (aleph1) at (-1.5,-1) {$\aleph_1$}; \node (addn) at (0,0){$\add(\mathcal{N})$}; \node (adNJ) at (1,2) {$\add(\mathcal{N}_J)$}; \node (adNJ*) at (3,3) {$\add(\mathcal{N}^*_J)$}; \node (covn) at (0,10){$\cov(\mathcal{N})$}; \node (cvNJ) at (1,8) {$\cov(\mathcal{N}_J)$}; \node (cvNJ*) at (3,7) {$\cov(\mathcal{N}^*_J)$}; \node (cove) at (5,6) {$\cov(\mathcal{E})$}; \node (b) at (4,5) {$\mathfrak{b}$}; \node (addm) at (4,0) {$\add(\mathcal{M})$} ; \node (nonm) at (4,10) {$\non(\mathcal{M})$} ; \node (none) at (7,4) {$\non(\mathcal{E})$}; \node (d) at (8,5) {$\mathfrak{d}$}; \node (covm) at (8,0) {$\cov(\mathcal{M})$} ; \node (cfm) at (8,10) {$\cof(\mathcal{M})$} ; \node (nonn) at (12,0) {$\non(\mathcal{N})$} ; \node (nnNJ) at (11,2) {$\non(\mathcal{N}_J)$}; \node (nnNJ*) at (9,3) {$\non(\mathcal{N}^*_J)$}; \node (cfn) at (12,10) {$\cof(\mathcal{N})$} ; \node (cfNJ) at (11,8) {$\cof(\mathcal{N}_J)$}; \node (cfNJ*) at (9,7){$\cof(\mathcal{N}^*_J)$}; \node (s) at (-1,3) {$\mathfrak{s}$}; \node (r) at (13,7) {$\mathfrak{r}$}; \node (c) at (13.5,10) {$\mathfrak{c}$}; \node (m) at (-1.5,0) {$\mathfrak m$}; \node (p) at (-1.25,1) {$\mathfrak p$}; \node (h) at (-1.125,2) {$\mathfrak h$}; \node (g) at (-1.5,3.5) {$\mathfrak g$}; \node (e) at (3,1) {$\mathfrak e$}; \node (a) at (13.5,6.5) {$\mathfrak a$}; \node (u) at (13.25,8.5) {$\mathfrak u$}; \node (i) at (12.75,9) {$\mathfrak i$}; \foreach \from/\to in { m/addn, cfn/c, aleph1/m, m/p, p/h, h/s, h/g, r/u, u/c, h/b, p/addm, g/d, p/e, addn/e, e/none, e/covm, b/a, a/c, cfm/i, r/i, i/c, s/none, s/d, b/r, cove/r, adNJ/nnNJ, adNJ*/nnNJ*, cvNJ/cfNJ, cvNJ*/cfNJ*, addm/b, b/nonm, b/d, addm/covm, covm/d, nonm/cfm, d/cfm, addm/cove, addm/none, cove/cfm, none/cfm, covm/cove, none/nonm, addn/adNJ, addn/adNJ*, covn/cvNJ, cvNJ/cvNJ*, cvNJ*/cove, addn/covn, adNJ/cvNJ, adNJ*/cvNJ*, addn/addm, covn/nonm, covm/nonn, cfm/cfn, nonn/cfn, adNJ/covm, adNJ*/covm, none/nnNJ*, nnNJ*/nnNJ, nnNJ/nonn, nnNJ/cfNJ, nnNJ*/cfNJ*, cfNJ*/cfn, cfNJ/cfn, nonm/cfNJ*, nonm/cfNJ} { \path[-,draw=white,line width=3pt] (\from) edge (\to); \path[->,] (\from) edge (\to); } } \end{tikzpicture} \caption{Cicho\'n's diagram and the Blass diagram combined, also including the coverings and uniformities of our new ideals. An arrow $\mathfrak x\rightarrow \mathfrak y$ means that ZFC proves $\mathfrak x\le \mathfrak y$.}\label{fig:all20} \end{figure} \begin{enumerate}[label=\rm (M\arabic*)] \item\label{modelCohen} Using $\lambda>\aleph_2$, $\mathbb{C}_\lambda$ forces that there is some (maximal) ideal $J$ on $\omega$ such that $\non(\mathcal{M})=\mathfrak{g} =\mathfrak{a}=\aleph_1< \add(\mathcal{N}_J)=\cof(\mathcal{N}_J) =\add(\mathcal{N}^*_J) =\cof(\mathcal{N}^*_J)<\cov(\mathcal{M})$ (see \autoref{cohenthm}). \item We can iterate the Hechler poset, followed by a large random algebra, to force $\non(\mathcal{N})=\aleph_1<\mathfrak{b}=\mathfrak{d}=\mathfrak{a}<\cov(\mathcal{N})=\mathfrak{c}$. In this generic extension, we obtain $\add(\mathcal{N}_J)= \add(\mathcal{N}^*_J) = \non(\mathcal{N}_J)= \non(\mathcal{N}^*_J)=\mathfrak{g} =\aleph_1<\mathfrak{b}=\mathfrak{d}=\mathfrak{a}<\cov(\mathcal{N}_J)= \cov(\mathcal{N}^*_J) = \cof(\mathcal{N}_J)= \cof(\mathcal{N}^*_J)=\mathfrak{c}$ for any ideal $J$ on $\omega$. This idea to force with a Random algebra after some other FS iteration of ccc posets is original from Brendle, but some details can be found in~\cite[Sec.~5]{collapsing}. 
\item\label{M:Miller} In the Miller model, $\non(\mathcal{M})=\non(\mathcal{N})=\mathfrak{u}=\mathfrak{a}=\aleph_1<\mathfrak{g}=\mathfrak{c}=\aleph_2$, so \ref{NCF} follows. Hence $\add(\mathcal{N}_J)= \add(\mathcal{N}^*_J) = \non(\mathcal{N}_J)= \non(\mathcal{N}^*_J)= \cov(\mathcal{N}_J) = \cov(\mathcal{N}^*_J)=\aleph_1 < \mathfrak{g}=\mathfrak{c}=\aleph_2$ for any ideal $J$ on $\omega$, e.g., since ZFC proves $\cov(\mathcal{E})\leq \mathfrak{r}\leq \mathfrak{u}$, $\cov(\mathcal{E})=\aleph_1$ in the Miller model. Although $\cof(\mathcal{N}_J)$ and $\cof(\mathcal{N}^*_J)$ are $\aleph_2$ when $J$ has the Baire property, we do not know what happens when $J$ does not have the Baire property (or just in the case of maximal ideals). \item In the Mathias model, $\cov(\mathcal{E})=\aleph_1<\mathfrak{h}=\mathfrak{c}=\aleph_2$ (the first equality due to the Laver property). Hence $\add(\mathcal{N}_J)= \add(\mathcal{N}^*_J)= \cov(\mathcal{N}_J) = \cov(\mathcal{N}^*_J)=\aleph_1 < \non(\mathcal{N}_J)= \non(\mathcal{N}^*_J)= \cof(\mathcal{N}_J) = \cof(\mathcal{N}^*_J)= \aleph_2$. \item\label{C:Hechler} Assume $\kappa$ and $\lambda$ cardinals such that $\aleph_1\leq \kappa\leq\lambda=\lambda^{\aleph_0}$. The FS iteration of Hechler forcing of length $\lambda\kappa$ (ordinal product) forces $\mathfrak{e}= \mathfrak{g}=\mathfrak{s} =\aleph_1$, $\add(\mathcal{M})=\cof(\mathcal{M})=\kappa$ and $\non(\mathcal{N})=\mathfrak{r}=\mathfrak{c}=\lambda$. Thanks to \autoref{thm:FSit}, there is a (maximal) ideal $J^\kappa$ on $\omega$ such that $\add(\mathcal{N}_{J^\kappa})=\add(\mathcal{N}^*_{J^\kappa})=\cof(\mathcal{N}^*_{J^\kappa})= \cof(\mathcal{N}_{J^\kappa})=\kappa$. In general, we can just say that the additivities and coverings of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are below $\kappa$, and that their uniformities and cofinalities are above $\kappa$ for any ideal $J$ on $\omega$. Since the Hechler poset makes the set of reals from the ground model meager, we have that any ideal $J_0$ on $\omega$ with a set of generators in the ground model (or in any intermediate extension) is meager in the final extension, so $\cov(\mathcal{N}_{J_0})=\aleph_1$ and $\non(\mathcal{N}_{J_0})=\mathfrak{c}=\lambda$. We still do not know how to obtain a maximal (or just non-meager) ideal $J'$ in the final extension such that $\cov(\mathcal{N}_{J'})=\aleph_1$ and $\non(\mathcal{N}_{J'})=\lambda$. \item\label{M:sigmac} With $\kappa$ and $\lambda$ as in~\ref{C:Hechler}, there is a FS iteration of length $\lambda\kappa$ of $\sigma$-centered posets forcing $\cov(\mathcal{N})=\aleph_1$, $\mathfrak{p}=\mathfrak{u}=\mathfrak{a}=\mathfrak{i}=\kappa$ and $\non(\mathcal{N})=\mathfrak{c}=\lambda$. For example, by counting arguments, we make sure to force with all $\sigma$-centered posets of size ${<}\kappa$, while adding witnesses of $\mathfrak{u}$, $\mathfrak{a}$ and $\mathfrak{i}$ of size $\kappa$ using a cofinal subset of $\lambda\kappa$ of size $\kappa$ (for $\mathfrak{u}$, use Mathias-Prikry forcing with ultrafilters, and for $\mathfrak{a}$ and $\mathfrak{i}$ use Hechler-type posets as in \cite{Hechlermad}, all of which are $\sigma$-centered). We have exactly the same situation of the previous model for the cardinal characteristics associated with $\mathcal{N}_J$ and $\mathcal{N}^*_J$. \item Over a model of CH, the countable support iteration of length $\omega_2$ of the tree forcing from~\cite[Lem.~2]{brendle99} forces $\mathfrak{d}= \cov(\mathcal{E})=\aleph_1<\non(\mathcal{E})=\aleph_2$. 
Concretely, this forcing is proper, $\omega^\omega$-bounding, and does not add random reals (see also~\cite[Sec.~7.3B]{BJ}), hence $\cov(\mathcal{E})\leq\max\{\mathfrak{d},\cov(\mathcal{N})\}=\aleph_1$ (see \autoref{thm:E}). In this generic extension, all additivites and coverings of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are $\aleph_1$, while the uniformities and cofinalities are $\aleph_2$. \item Brendle's model~\cite{brendle02} of $\cof(\mathcal{N})<\mathfrak{a}$ clearly satisfies that the cardinal characteristics associated with $\mathcal{N}_J$ and $\mathcal{N}^*_J$ are strictly smaller than $\mathfrak{a}$ for every ideal $J$ on $\omega$. \end{enumerate} It is not clear from the above whether $\mathfrak{g} \leq \cof(\mathcal{N}_J)$ in ZFC for all $J$, and whether we can force $\cov(\mathcal{N}_J)<\mathfrak{p}$ and $\non(\mathcal{N}_J)>\max\{\mathfrak{u},\mathfrak{i}\}$ for some non-meager $J$. \section{Some variations}\label{sec:var} \subsection{Measure zero modulo ideals with respect to other Polish spaces}\label{subsec:Polish} Our notion of $\mathcal{N}_J$ and $\mathcal{N}^*_J$ has been studied for the Cantor space ${}^\omega 2$, but how about other Polish spaces (with a measure)? We look at the same notions for other classical spaces like the real line ${\mathbb R}$, the unit interval $[0,1]$, and product spaces of the form $\prod b$ for some sequence $b=\langle b(n):\, n<\omega\rangle$ of finite discrete spaces such that $\set{n<\omega}{|b(n)|>1}$ is infinite. We start with ${\mathbb R}$. Let $\Omega_{{\mathbb R}}$ be the collection of open subsets of ${\mathbb R}$ that can be expressed as a finite union of open intervals and, for any ideal $J$ on $\omega$ and $\bar c\in {}^\omega \pts({\mathbb R})$, define \[\begin{split} N^{\mathbb R}_J(\bar c) := \set{x\in {\mathbb R}}{ \set{n<\omega}{x\in c_n}\notin J},\\ N^{*{\mathbb R}}_J(\bar c) := \set{x\in {\mathbb R}}{ \set{n<\omega}{x\notin c_n}\in J}. \end{split}\] The upper index ${\mathbb R}$ can be omitted when we know we are working in ${\mathbb R}$. We can characterize, as in \autoref{chomegabar}, the $\bar c\in {}^\omega \Omega_{{\mathbb R}}$ such that $N_\mathrm{Fin}(\bar c)$ has Lebesgue measure zero. Denote the collection of such $\bar c$ by ${\overline{\Omega}}_{\mathbb R}$ and define $\mathcal{N}_J({\mathbb R})$ and $\mathcal{N}^*_J({\mathbb R})$ in the natural way. As in \autoref{basicNE}, we obtain that $\mathcal{N}_\mathrm{Fin}({\mathbb R})$ is $\mathcal{N}({\mathbb R})$, the collection of measure zero subsets of ${\mathbb R}$. It is possible to restrict ${\overline{\Omega}}_{\mathbb R}$ to any (countable) base $B$ of open intervals, without affecting the resulting $\mathcal{N}_J({\mathbb R})$ and $\mathcal{N}^*_J({\mathbb R})$. Let ${\overline{\Omega}}_B$ be the collection of $\bar d\in{\overline{\Omega}}_{\mathbb R}$ such that each $d_n$ is a finite union of members of $B$. If $\bar c\in{\overline{\Omega}}_{\mathbb R}$, then only finitely many $c_n$ are unbounded, for which we define $d_n:=\emptyset$; for each bounded $c_n$, we can find some $d_n$ such that $c_n\subseteq d_n$, $d_n\smallsetminus c_n$ has measure ${<}2^{-n}$, and $d_n$ is a finite union of members of $B$. Then $\bar d\in{\overline{\Omega}}_B$, and it is clear that $N_J(\bar c)\subseteq N_J(\bar d)$ and $N^*_J(\bar c)\subseteq N^*_J(\bar d)$. 
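To spell out the (routine) estimate behind the claim that $\bar d\in{\overline{\Omega}}_B$: for all but finitely many $n$ we have $c_n\subseteq d_n$ and $\mu(d_n\smallsetminus c_n)<2^{-n}$, while $d_n=\emptyset$ for the remaining $n$, so for every sufficiently large $m$,
\[\mu\Big(\bigcup_{n\geq m} d_n\Big)\leq \mu\Big(\bigcup_{n\geq m} c_n\Big)+\sum_{n\geq m}2^{-n}=\mu\Big(\bigcup_{n\geq m} c_n\Big)+2^{1-m},\]
which tends to $0$ as $m\to\infty$ because $\bar c\in{\overline{\Omega}}_{\mathbb R}$, by the characterization of ${\overline{\Omega}}_{\mathbb R}$ mentioned above.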
Also, as in \autoref{charE}, \begin{lemma}\label{charE:R} The collection $\mathcal{N}^*_\mathrm{Fin}({\mathbb R})$ coincides with $\mathcal{E}({\mathbb R})$, the ideal generated by the $F_\sigma$ measure zero subsets of ${\mathbb R}$. \end{lemma} \begin{proof} If $\bar c\in{\overline{\Omega}}_{\mathbb R}$ then $\bar c':=\langle \cl(c_n):\, n<\omega\rangle$ (where $\cl(A)$ denotes the topological closure of $A$) still satisfies that the measure of $\bigcup_{n\geq m}\cl(c_n)$ tends to $0$ as $m\to\infty$, so $N^{*{\mathbb R}}_\mathrm{Fin}(\bar c)\subseteq N^{*{\mathbb R}}_\mathrm{Fin}(\bar c')$ and, clearly, $N^{*{\mathbb R}}_\mathrm{Fin}(\bar c')\in \mathcal{E}({\mathbb R})$, so $N^{*{\mathbb R}}_\mathrm{Fin}(\bar c)\in \mathcal{E}({\mathbb R})$. This shows $\mathcal{N}^*_\mathrm{Fin}({\mathbb R})\subseteq\mathcal{E}({\mathbb R})$. For the converse inclusion, assume that $X=\bigcup_{n<\omega} F_n$ where each $F_n$ is a closed measure zero subset of ${\mathbb R}$. Without loss of generality, we can assume that the sequence $\langle F_n:\, n<\omega\rangle$ is increasing and that each $F_n$ is bounded (hence, compact). As in the proof of \autoref{basicE}, if we fix a sequence $\langle \varepsilon_n:\, n<\omega\rangle$ of positive real numbers such that $\sum_{i<\omega}\varepsilon_i<\infty$, we can cover each $F_n$ with some $c_n\in \Omega_{\mathbb R}$ of measure ${<}\varepsilon_n$. Then $\bar c\in{\overline{\Omega}}_{\mathbb R}$ and $X\subseteq N^*_\mathrm{Fin}(\bar c)$. \end{proof} As in \autoref{NJsigma2}, both $\mathcal{N}_J({\mathbb R})$ and $\mathcal{N}^*_J({\mathbb R})$ are $\sigma$-ideals on ${\mathbb R}$. Although all the results of this work can be verified for these ideals, they follow from the connection between these ideals and the corresponding ideals on the Cantor space. Before showing this connection, let us look at the case of other Polish spaces. Consider the closed interval $[a,b]$ with $a<b$ in ${\mathbb R}$. For any base $B$ of open intervals with endpoints in $[a,b]$ (including some intervals of the form $[a,x)$ and $(y,b]$, and even allowing $[a,b]$ itself), we define ${\overline{\Omega}}_B$ as before, as well as $N^{[a,b]}_J(\bar c)$ and $N^{*[a,b]}_J(\bar c)$ (for any $\bar c\in {}^\omega \pts([a,b])$). It is easy to check that \[ \begin{split} \set{X\subseteq [a,b]}{\exists\, \bar c\in {\overline{\Omega}}_B\colon X\subseteq N^{[a,b]}_J(\bar c)} & = \set{Y\cap [a,b]}{Y\in \mathcal{N}_J({\mathbb R})},\\ \set{X\subseteq [a,b]}{\exists\, \bar c\in {\overline{\Omega}}_B\colon X\subseteq N^{*[a,b]}_J(\bar c)} & = \set{Y\cap [a,b]}{Y\in \mathcal{N}^*_J({\mathbb R})}, \end{split} \] which we denote by $\mathcal{N}_J([a,b])$ and $\mathcal{N}^*_J([a,b])$, respectively. It is clear that these are $\sigma$-ideals. Now let $b=\langle b(n):\, n<\omega\rangle$ be a sequence of finite discrete spaces such that $|b(n)|>1$ for infinitely many $n$, and consider the perfect compact Polish space $\prod b$ (notice that the Cantor space is a particular case). The clopen sets $[s]_b:=\set{x\in\prod b}{s\subseteq x}$ for $s\in\sqw b:=\bigcup_{n<\omega}\prod_{i<n}b(i)$ form a base of $\prod b$, and every clopen subset of $\prod b$ is a finite union of basic clopen sets. The Lebesgue measure $\mu_b$ on $\prod b$ is the unique measure on (the completion of) the Borel $\sigma$-algebra such that $\mu_b([s]_b)=\prod_{i<|s|}\frac{1}{|b(i)|}$ for each $s\in\sqw b$.
Let $\Omega_b$ be the collection of clopen subsets of $\prod b$, and define $N^b_J(\bar c)$ and $N^{*b}_J(\bar c)$ as usual for $\bar c\in {}^\omega \pts(\prod b)$. Let ${\overline{\Omega}}_b$ be the collection of $\bar c\in {}^\omega \Omega_b$ such that $\mu_b(N_\mathrm{Fin}^b(\bar c))=0$, and define $\mathcal{N}_J(\prod b)$ and $\mathcal{N}^*_J(\prod b)$ in the natural way. As in the case of the Cantor space, we can verify that these are $\sigma$-ideals, $\mathcal{N}_\mathrm{Fin}(\prod b)$ is the ideal of measure zero subsets of $\prod b$, and $\mathcal{N}^*_\mathrm{Fin}(\prod b)$ is the ideal generated by the $F_\sigma$ measure zero subsets of $\prod b$. We show the connection between $\prod b$ and $[0,1]$. Without loss of generality, assume that each $b(n)$ is a natural number. Consider the map $F_b\colon \prod b\to[0,1]$ defined by \[F_b(x):=\sum_{i<\omega}\frac{x_i}{\prod_{j\leq i}b(j)}.\] This is a continuous onto map, and actually a homeomorphism modulo some countable sets. Concretely, let ${\mathbb Q}_b$ be the set of $\sum_{i<|s|}\frac{s_i}{\prod_{j\leq i}b(j)}$ for $s\in\sqw b$, which is a countable dense subset of $[0,1]$. Each member of ${\mathbb Q}_b\smallsetminus\{0,1\}$ has two pre-images under $F_b$, while each member of $\{0,1\}\cup((0,1)\smallsetminus {\mathbb Q}_b)$ has exactly one pre-image. Letting ${\mathbb Q}'_b:=F_b^{-1}[{\mathbb Q}_b]$, we obtain that $F_b{\upharpoonright}(\prod b\smallsetminus {\mathbb Q}'_b)$ is a homeomorphism from $\prod b\smallsetminus {\mathbb Q}'_b$ onto $[0,1]\smallsetminus {\mathbb Q}_b$. Note that ${\mathbb Q}'_b$ is the collection of sequences $x\in \prod b$ such that, for some $m<\omega$, either $(\forall\, i\geq m)\ x_i=0$ or $(\forall\, i\geq m)\ x_i=b(i)-1$. The map $F_b$ gives a connection between the ideals $\mathcal{N}_J$ and $\mathcal{N}^*_J$ for $\prod b$ and $[0,1]$. Using the base on $[0,1]$ of open intervals with extremes in ${\mathbb Q}_b$, we can show that, for any $X\subseteq \prod b$ and $Y\subseteq [0,1]$, \[ \begin{split} X \in \mathcal{N}_J\left(\prod b\right) & \text{ iff } F_b[X] \in \mathcal{N}_J([0,1]),\\ Y \in \mathcal{N}_J([0,1]) & \text{ iff } F_b^{-1}[Y] \in \mathcal{N}_J\left(\prod b\right), \end{split} \] and likewise for $\mathcal{N}^*_J$. Since the Cantor space is a particular case of $\prod b$, the map $F_{b_2}$ for $b_2(n)=2$ can be used to transfer all the results of this work from the Cantor space to $[0,1]$, and $F_b$ can be used to transfer the results from $[0,1]$ to $\prod b$. For example, the map $F_b$ allows us to get the Tukey equivalences $\mathcal{N}_J(\prod b)\mathrel{\cong_{\mathrm{T}}} \mathcal{N}_J([0,1])$ and $\mathbf{C}_{\mathcal{N}_J(\prod b)} \mathrel{\cong_{\mathrm{T}}} \mathbf{C}_{\mathcal{N}_J([0,1])}$, and likewise for $\mathcal{N}^*_J$, so the cardinal characteristics associated with $\mathcal{N}_J$ and $\mathcal{N}^*_J$ do not depend on the Polish space we are working on. Similarly, we get a connection between ${\mathbb R}$ and $[0,1]$ via the maps $\pi_{[0,1]}\colon {\mathbb R}\to[0,1]$ defined by $\pi_{[0,1]}(x):=x-\lfloor x\rfloor$, and $F^{\mathbb R}\colon \pts([0,1])\to \pts({\mathbb R})$ defined by $F^{\mathbb R}(X):=\bigcup_{n\in {\mathbb Z}}(n+X)$. \subsection{Alternative definition} We explore one alternative weaker definition of $\mathcal{N}_J$ and $\mathcal{N}^*_J$. This is developed in detail by the first author in~\cite{Vthesis}, which we summarize here and compare with the notions defined in this paper.
We work in the Cantor space ${}^\omega 2$ (but results can be transferred to other Polish spaces by using the connections presented in the previous subsection). Consider \[\lone{\Omega}:=\set{\bar c\in {}^\omega \Omega}{\sum_{n<\omega}\mu(c_n) < \infty}.\] We define our ideals with respect to $\lone{\Omega}$, that is, \begin{equation} \begin{split} \mathcal{N}^-_J & :=\set{X\subseteq{}^\omega 2}{\exists\bar c\in \lone{\Omega}\colon X\subseteq N_J(\bar c)}\\ \mathcal{N}^{-*}_J & :=\set{X\subseteq {}^\omega 2}{\exists\bar c\in \lone{\Omega}\colon X\subseteq N^*_J(\bar c)}. \end{split} \end{equation} It is clear that $\mathcal{N}^-_J\subseteq \mathcal{N}_J$ and $\mathcal{N}^{-*}_J\subseteq \mathcal{N}^*_J$, but we do not know whether they can be equal for non-meager ideals. Most of the proofs of this work can be easily modified to obtain similar results for these collections. By \autoref{basicN2} and \autoref{basicE}, we have that $\mathcal{N}^-_\mathrm{Fin}=\mathcal{N}$ and $\mathcal{N}^{-*}_\mathrm{Fin} = \mathcal{E}$, and the proof of \autoref{NJsigma2} can be modified to conclude that $\mathcal{N}^-_J$ and $\mathcal{N}^{-*}_J$ are $\sigma$-ideals. The behaviour of sums of ideals presented in \autoref{exm:sum} can be repeated in this new context. However, \autoref{RBeq} cannot be exactly repeated because, whenever $\bar c\in \lone{\Omega}$, although the $\bar c'$ defined in the proof is in $\lone{\Omega}$, we cannot guarantee that $\bar c^-$ is in $\lone{\Omega}$ ($\bar c^-$ could repeat some $c_n$'s too many times so that the sum of the measures can diverge). Therefore, we can only state the following: \begin{theorem}\label{RBeq-} Let $J$ and $K$ be ideals on $\omega$. \begin{enumerate}[label=\rm(\alph*)] \item\label{it:underKB-} If $K\mathrel{\leq_{\mathrm{KB}}} J$ then $\mathcal{N}^-_J\subseteq\mathcal{N}^-_K$. \item\label{it:underKB'-} If $K\mathrel{\leq_{\overline{\mathrm{KB}}}} J$ then $\mathcal{N}^{-*}_J\subseteq \mathcal{N}^{-*}_K$. \item\label{it:underRB-} If $K\mathrel{\leq_{\mathrm{RB}}} J$ then $\mathcal{N}^{-*}_J\subseteq\mathcal{N}^{-*}_K$ and $\mathcal{N}^-_J\subseteq \mathcal{N}^-_K$. \end{enumerate} \end{theorem} Thanks to this theorem, we still have that $J\subseteq K$ implies $\mathcal{N}^{-*}_J\subseteq \mathcal{N}^{-*}_K\subseteq \mathcal{N}^-_K\subseteq \mathcal{N}^-_J$. Therefore, when $J$ has the Baire property, we still have $\mathcal{N}^-_J=\mathcal{N}$ and $\mathcal{N}^{-*}_J=\mathcal{E}$. However, we lose some results due to the asymmetry in \autoref{RBeq-}. The results in \autoref{sec:nBP} are valid in this new context, but we cannot repeat everything from \autoref{sec:near} in this new context. We lose part of \autoref{lem:coh}, namely, we cannot guarantee that, whenever $J_0$ and $J_1$ are nearly coherent ideals on $\omega$, there is some ideal $K$ such that $\mathcal{N}^-_K\subseteq \mathcal{N}^-_{J_0}\cap \mathcal{N}^-_{J_1}$ (but we can still say that $\mathcal{N}^{-*}_{J_0}\cup \mathcal{N}^{-*}_{J_1}\subseteq \mathcal{N}^{-*}_K$). As a consequence, \autoref{cor:NCuf} and~\ref{cor:NCFNuf} cannot be guaranteed in this new context, e.g., we cannot claim anymore that \ref{NCF} implies that there is a single $\mathcal{N}^-_J$ for maximal $J$. The results using the nearly coherence game can be obtained in the new context, in particular, \autoref{notnear2} and~\autoref{cor:nnc-NE} still hold.
However, in \autoref{nc-char},~\ref{it:aida} is reduced to $\mathcal{N}^*_J\cup \mathcal{N}^*_K\subseteq \mathcal{N}^*_{K'}$, and it becomes unclear whether this reduced~\ref{it:aida} and~\ref{it:subset} are still equivalent to nearly coherence of $J$ and $K$ (we can just say that~\ref{it:subset} implies nearly coherence, and that nearly coherence implies the reduced~\ref{it:aida}). \autoref{noncoh:sum} is valid. For the associated cardinal characteristics as discussed in \autoref{sec:cardinalityNJ}, the discussion about the covering and the uniformity is exactly the same, and \autoref{addN-covM} still holds. However, the proof of the latter and the discussion about $\langle {\overline{\Omega}},\subseteq^J\rangle$ change considerably. In this case, we must look at the relational system $\lone{\Omega,J}:=\langle \lone{\Omega},\lone{\Omega},\subseteq^J\rangle$ and at its associated cardinal characteristics $\mathfrak{b}_J(\lone{\Omega}):=\mathfrak{b}(\lone{\Omega,J})$ and $\mathfrak{d}_J(\lone{\Omega}):=\mathfrak{d}(\lone{\Omega,J})$. Although $\mathcal{N}^-_J, \mathcal{N}^{-*}_J\mathrel{\preceq_{\mathrm{T}}} \lone{\Omega,J}$ can be shown as in \autoref{omegabarvsNJ}, we cannot guarantee monotonicity with the orders $\mathrel{\leq_{\mathrm{KB}}}$, $\mathrel{\leq_{\overline{\mathrm{KB}}}}$ and $\mathrel{\leq_{\mathrm{RB}}}$ as in \autoref{KBomegabar}. We only have that $\lone{\Omega,J}\mathrel{\preceq_{\mathrm{T}}} \lone{\Omega,K}$ whenever $K\subseteq J$. We can still prove that $\mathfrak{b}_\mathrm{Fin}(\lone{\Omega})=\add(\mathcal{N})$ and $\mathfrak{d}_\mathrm{Fin}(\lone{\Omega})=\cof(\mathcal{N})$ by some modification of the proof of \autoref{omegabarFin}, and thus derive the result in \autoref{addN-covM}, namely, the additivities of $\mathcal{N}^-_J$ and $\mathcal{N}^{-*}_J$ are between $\add(\mathcal{N})$ and $\cov(\mathcal{M})$, and their cofinalities are between $\non(\mathcal{M})$ and $\cof(\mathcal{N})$. The consistency results in \autoref{sec:cons} can be verified for $\mathcal{N}^-_J$, $\mathcal{N}^{-*}_J$, and for $\mathfrak{b}_J(\lone{\Omega})$ and $\mathfrak{d}_J(\lone{\Omega})$ in place of $\mathfrak{b}_J({\overline{\Omega}})$ and $\mathfrak{d}_J({\overline{\Omega}})$, respectively. The point is to replace the condition \eqref{eq:cohencond} for the Cohen poset with \[ \sum_{i=n^p_k}^{n^p_{k+1}-1}\mu(c^p_i)<2^{-k} \] to guarantee that the Cohen generic $\bar e$ is in $\lone{\Omega}$. \section{Open questions}\label{sec:Q} We address here several open questions of this work. \begin{question} Does some converse of the statements in \autoref{RBeq} hold in ZFC? \end{question} Concerning nearly coherence: \begin{question}\label{Q:NCF} Does \ref{NCF} imply that there is only one $\mathcal{N}_J$ for non-meager $J$? \end{question} A positive answer to the previous question implies (under \ref{NCF}) that $\mathcal{N}^*_J=\mathcal{N}_J$ for any non-meager $J$. In contrast, we ask: \begin{question}\label{Q:N*noN} Can we prove in $\mathrm{ZFC}$ that there is a non-meager ideal $J$ on $\omega$ such that $\mathcal{N}^*_J\neq \mathcal{N}_J$? \end{question} Recall that we produced such an example in \autoref{noncoh:sum} under the assumption that there is a pair of non-nearly coherent ideals on $\omega$. The answer to the previous questions would expand our knowledge about the difference between $\mathcal{N}_J$ and $\mathcal{N}_K$ for different $J$ and $K$.
Concerning the cardinal characteristics, we showed that it is possible to have $\non(\mathcal{N}^*_K)<\non(\mathcal{N}_K)$ and $\cov(\mathcal{N}_K)<\cov(\mathcal{N}^*_K)$ for some non-meager ideal $K$ (in the Cohen model, see \autoref{cor:CohenI+J}). However, we do not know what happens to the additivities and the cofinalities. \begin{question}\label{Q:addcof} Is it consistent that $\add(\mathcal{N}_J)$ and $\add(\mathcal{N}^*_J)$ are different for some non-meager ideal $J$ on $\omega$? The same is asked for the cofinalities. \end{question} \begin{question}\label{Q:addadd*} Does $\mathrm{ZFC}$ prove some inequality between $\add(\mathcal{N}_J)$ and $\add(\mathcal{N}^*_J)$? The same is asked for the cofinalities. \end{question} The second author~\cite{Mmatrix} has constructed a forcing model where the four cardinal characteristics associated with $\mathcal{N}$ are pairwise different. Cardona~\cite{Cfriendly} has produced a similar model for $\mathcal{E}$. In this context, we ask: \begin{question}\label{fourNJ} Is it consistent with $\mathrm{ZFC}$ that, for some non-meager (or maximal) ideal $J$ on $\omega$, the four cardinal characteristics associated with $\mathcal{N}_J$ are pairwise different? \end{question} In relation to the cardinal characteristics in \autoref{fig:all20}, in order to conclude that no further inequality can be proved for the cardinal characteristics associated with our ideals, it remains to solve: \begin{question}\label{Q:g} Does $\mathrm{ZFC}$ prove $\mathfrak{g}\leq\cof(\mathcal{N}_J)$ for any ideal $J$ on $\omega$? \end{question} \begin{question}\label{Q:cov} Is it consistent that $\cov(\mathcal{N}_J)<\mathfrak{p}$ for some maximal ideal $J$? \end{question} \begin{question}\label{Q:non} Is it consistent that $\max\{\mathfrak{u},\mathfrak{i}\}<\non(\mathcal{N}_J)$ for some maximal ideal $J$? \end{question} Concerning the structure of our ideals, we ask: \begin{question} What is the intersection of all $\mathcal{N}_J$? What is the union of all $\mathcal{N}^*_J$? \end{question} Note that this intersection is $\mathcal{N}_{\min}$ and the union is $\mathcal{N}_{\max}$, where \[\begin{split} \mathcal{N}_{\min} & :=\bigcap\set{\mathcal{N}_J}{J \text{ maximal ideal}},\\ \mathcal{N}_{\max} & :=\bigcup\set{\mathcal{N}_J}{J \text{ maximal ideal}}. \end{split} \] According to \autoref{cor:NCFNuf}, under~\ref{NCF}, $\mathcal{N}_{\min}=\mathcal{N}_{\max}$. It would be interesting to know what these families are when non-nearly coherent ideals are allowed. We finish this paper with a brief discussion about strong measure zero sets. Denote by $\mathcal{SN}$ the ideal of strong measure zero subsets of ${}^\omega 2$. We know that $\mathcal{SN}\subseteq \mathcal{N}$ and $\mathcal{E}\nsubseteq \mathcal{SN}$ (because perfect subsets of ${}^\omega 2$ cannot be in $\mathcal{SN}$). Under Borel's conjecture, we have $\mathcal{SN}\subseteq\mathcal{E}$; however, $\cov(\mathcal{SN})<\cov(\mathcal{E})$ holds in Cohen's model~\cite{P90} (see also~\cite[Sec.~8.4A]{BJ}), which implies $\mathcal{SN}\nsubseteq \mathcal{E}$. This motivates the following question: \begin{question} Does ZFC prove that there is some non-meager ideal $J$ such that $\mathcal{SN}\subseteq \mathcal{N}_J$, or even $\mathcal{SN}\subseteq \mathcal{N}^*_J$? \end{question}
\section{Introduction} A basic task in Fractal Geometry is to determine or estimate the various dimensions of fractal sets. Fractal dimensions are introduced to measure the sizes of fractal sets and are used in many different disciplines. Many results on fractal dimensions and measures have been obtained, among which the studies on self-similar sets are the richest and most thorough (cf. \cite{H, M, MM, MU, RW, S}). In this paper, we discuss the topological pressure, fractal dimensions and measures of a typical fractal known as a cookie-cutter-like set. Let $J$ be a bounded nonempty closed interval in $\D R$, and let $J_1, J_2, \cdots, J_N$, $N\geq 2$, be a collection of disjoint closed subintervals of $J$. Let $f : J_1\uu\cdots \uu J_N \to J$ be such that each $J_j$ is mapped bijectively onto $J$. We assume that $f$ has a continuous derivative and is expanding so that $|f'(x)|>1$ for all $x\in J_1\uu\cdots \uu J_N$. Let us write \[E=\set{x \in J : f^k(x) \te{ is defined and in } J_1\uu\cdots \uu J_N \te{ for all } k=0, 1, 2, \cdots},\] where $f^k$ is the $k$th iterate of $f$. Thus $E$ is the set of points that remain in $J_1\uu\cdots \uu J_N$ under the iteration of $f$. Since $E=\ii_{k=0}^\infty f^{-k}(J)$ is the intersection of a decreasing sequence of compact sets, the set $E$ is compact and nonempty. $E$ is invariant under $f$, in that \begin{equation} \label{eq111} f(E)=E=f^{-1}(E),\end{equation} since $x\in E$ if and only if $f(x) \in E$. Moreover, $E$ is a repeller, in the sense that the points not in $E$ are eventually mapped outside of $J_1\uu\cdots \uu J_N$ under iteration by $f$. Let $\vp_1, \vp_2, \cdots, \vp_N$ be the $N$ inverse branches of $f$, i.e., for all $1\leq j\leq N$, $\vp_j:=(f|_{J_j})^{-1}$, and so $\vp_j : J \to J$ is such that \[\vp_j (x)=f^{-1}(x) \II J_j,\] and thus $\vp_1, \vp_2, \cdots, \vp_N$ map $J$ bijectively onto $J_1, J_2, \cdots, J_N$ respectively. Since $f$ has a continuous derivative with $|f'(x)|>1$ on the compact set $J_1\uu\cdots \uu J_N$, there are numbers $0<c_{\min}\leq c_{\max}<1$ such that $1<c_{\max}^{-1} \leq |f'(x)|\leq c_{\min}^{-1}<\infty$ for all $x\in J_1\uu\cdots \uu J_N$. It follows that the inverse functions $\vp_j$ are differentiable with $0<c_{\min}\leq |\vp_j'(x)|\leq c_{\max}<1$ for all $x \in J$ and for all $1\leq j\leq N$. By the mean value theorem, for $1\leq j\leq N$, we have \[c_{\min} |x-y|\leq |\vp_j(x)-\vp_j(y)|\leq c_{\max}|x-y|\] for $x, y\in J$. By \eqref{eq111} the repeller $E$ of $f$ satisfies \[E=\UU_{j=1}^N \vp_j(E).\] Since each $\vp_j$ is a contraction on $J$, by the fundamental property of iterated function systems (cf. \cite[Theorem 2.6]{F2}), the repeller $E$ of $f$ is the attractor, i.e., the invariant set, of the family of contraction mappings $\set{\vp_1, \vp_2, \cdots, \vp_N}$. We say that the collection of contraction mappings $\set{\vp_1, \vp_2, \cdots, \vp_N}$ satisfies the open set condition (OSC) if there exists a bounded nonempty open set $U\sci \D R$ such that $\UU_{j=1}^N \vp_j(U)\sci U$ with the union disjoint. A dynamical system $f:J_1\uu \cdots \uu J_N \to J$ of this form is called a cookie-cutter mapping; the equivalent iterated function system $\set{\vp_1, \vp_2, \cdots, \vp_N}$ on $J$ is termed a cookie-cutter system, and the set $E$ is called a cookie-cutter set. In general, the mappings $\vp_1, \vp_2, \cdots, \vp_N$ are not similarity transformations and $E$ is a `distorted' Cantor set, which nevertheless is `approximately self-similar'.
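For orientation, here is the most familiar (linear) instance of this setup; it is a standard example and is not taken from the original discussion above. Take $J=[0,1]$, $N=2$, $J_1=[0,1/3]$, $J_2=[2/3,1]$ and \[f(x)=\begin{cases}3x, & x\in J_1,\\ 3x-2, & x\in J_2,\end{cases}\] so that $|f'(x)|=3>1$ and each branch maps its interval bijectively onto $J$. The inverse branches are the similarities $\vp_1(x)=x/3$ and $\vp_2(x)=(x+2)/3$, and the repeller $E$ is the middle-third Cantor set, with $\te{dim}_H E=\log 2/\log 3$; this value $s$ satisfies $2\cdot(1/3)^s=1$, in agreement with the classical formula for self-similar sets recalled below. The cookie-cutter framework allows the branches to be nonlinear (only expanding with H\"older continuous derivative), in which case $E$ is a distorted Cantor set as described above.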
If $E$ is a cookie-cutter set with $\te{dim}_H(E)=s$, then it is known that $\ul{\te{dim}}_B E=\ol{\te{dim}}_B E=\te{dim}_P E=s$ and $0<\C H^s(E)<\infty$. For details, see \cite{F2, B}. In this paper, we consider a sequence $\set{f_k}_{k=1}^\infty$ of cookie-cutter mappings and call the corresponding repeller $E$ a cookie-cutter-like set. We denote by $\C H^s(E)$, $\C P^s(E)$, $\te{dim}_H E$, $\te{dim}_P E$, $\ul{\te{dim}}_B E$ and $\ol{\te{dim}}_B E$ the $s$-dimensional Hausdorff measure, the $s$-dimensional packing measure, the Hausdorff dimension, the packing dimension, the lower box-counting and the upper box-counting dimension of the set $E$ respectively. For the cookie-cutter-like set $E$ we define the topological pressure function $P(t)$ and show that there exists a unique $h\in (0, +\infty)$ such that $P(h)=0$; then, using a Banach limit, we show that there exists a unique Borel probability measure $\mu_h$ supported by $E$. Using the topological pressure and the probability measure, we show that \[\te{dim}_H(E)=\te{dim}_P(E)=\ul{\te{dim}}_B(E)=\ol{\te{dim}}_B(E)=h, \te{ and } 0<\C H^h(E)\leq \C P^h(E) <\infty.\] The result in this paper is a nonlinear extension, as well as a generalization, of the classical result about self-similar sets given by the following theorem (cf. \cite{H}): \tbf{Theorem:} Let $E$ be the limit set generated by a finite system $\set{S_1, S_2, \cdots, S_N}$ of self-similar mappings satisfying the open set condition, where $c_j$ is the similarity ratio of $S_j$. Then \[\te{dim}_H(E)=\te{dim}_P(E)=\ul{\te{dim}}_B(E)=\ol{\te{dim}}_B(E)=s, \te{ and } 0<\C H^s(E)\leq \C P^s(E) <\infty,\] where $s$ is the unique positive real number given by $\sum_{j=1}^N c_j^s=1$. \section{Basic results, cookie-cutter-like sets and the topological pressure} In this section, first we adopt the following definitions and notations, which can be found in \cite{F1, F2}. Let $E$ be a nonempty bounded subset of $\D R^n$ where $n\geq 1$, and $s\geq 0$. We denote by $\C H^s(E)$, $\C P^s(E)$, $\te{dim}_H E$, $\te{dim}_P E$, $\ul{\te{dim}}_B E$ and $\ol{\te{dim}}_B E$ the $s$-dimensional Hausdorff measure, the $s$-dimensional packing measure, the Hausdorff dimension, the packing dimension, the lower box-counting and the upper box-counting dimension of the set $E$ respectively. There are some basic inequalities between these dimensions: \begin{equation} \label{eq0} \te{dim}_H E \leq \te{dim}_P E \leq \ol{\te{dim}}_B E, \te{ and } \te{dim}_H E \leq \ul{\te{dim}}_B E \leq \ol{\te{dim}}_B E.\end{equation} Moreover, it is well-known that $\C H^s(E)\leq \C P^s(E)$. Let $s\geq 0$, and let $\C U=\set{U_i}$ be a countable collection of subsets of $\D R^n$. We define \[\|\C U\|^s :=\sum_{U_i\in \C U} |U_i|^s,\] where $|A|$ denotes the diameter of a set $A$. \subsection{Cookie-cutter-like set:} Let $J$ be a bounded nonempty closed interval in $\D R$. For simplicity we take the diameter of the set $J$ to be one.
A mapping $f$ is called a cookie-cutter if there exists a finite collection of disjoint closed intervals $J_1, J_2, \cdots, J_N \sci J$ such that $(C1)$ $f$ is defined in a neighborhood of each $J_j$, $1\leq j\leq N$, the restriction of $f$ to each initial interval $J_j$ is 1-1 and onto, and the corresponding branch inverse is denoted by $\vp_j:=(f|_{J_j})^{-1} : J \to J_j$, i.e., $\vp_j(x)=f^{-1}(x) \ii J_j$ for all $ 1\leq j\leq N$; $(C2)$ $f$ is differentiable with H\"older continuous derivative $f'$, i.e., there exist constants $c_f>0$ and $\gg_f \in (0, 1]$ such that for $x, y\in J_j$, $1\leq j\leq N$, \[\left|f'(x)-f'(y)\right|\leq c_f|x-y|^{\gg_f};\] $(C3)$ $f$ is boundedly expanding in the sense that there exist constants $b_f$ and $B_f$ such that \[1<b_f:=\inf_x\set{|f'(x)|} \leq \sup_x\set{|f'(x)| }:=B_f<+\infty.\] $[\UU_{j=1}^N J_j; c_f, \gg_f, b_f, B_f]$ is called the defining data of the cookie-cutter mapping $f$. Let $N\geq 2$ be fixed, and consider a sequence of cookie-cutter mappings $\set{f_k}_{k\geq 1}$ with defining data $[\UU_{j=1}^{N} J_{k, j}; c_k, \gg_k, b_k, B_k].$ Let us write $\vp_{k,j}:=\left(f_k|_{J_{k,j}}\right)^{-1}$ to denote the corresponding branch inverse of $f_k$, where $k\geq 1$ and $1\leq j\leq N$. We always assume that \[1<\inf \set{b_k}\leq \sup \set{B_k} <\infty, \ 0<\inf\set{\gg_k } \leq \sup\set{\gg_k} \leq 1 \te{ and } 0<\inf \set{c_k} \leq \sup \set{c_k}<\infty.\] Let $\gO_0$ consist of the empty word alone. For $n\geq 1$, define \[\gO_{n}=\set{1, 2, \cdots, N}^n, \ \gO_\infty=\lim_{n\to \infty} \gO_n \te{ and } \gO=\UU_{k=0}^\infty \gO_k.\] Elements of $\gO$ are called words. For any $\gs \in \gO$ with $\gs=(\gs_1, \gs_2, \cdots, \gs_n) \in \gO_n$, we write $\gs^-=(\gs_1, \gs_2, \cdots, \gs_{n-1})$ to denote the word obtained by deleting the last letter of $\gs$, $|\gs|=n$ to denote the length of $\gs$, and $\gs|_k:=(\gs_1, \gs_2, \cdots, \gs_k)$, $k\leq n$, to denote the truncation of $\gs$ to length $k$. For any two words $\gs=(\gs_1, \gs_2, \cdots, \gs_k)$ and $\gt=(\gt_1, \gt_2, \cdots, \gt_m)$, we write $\gs\gt=\gs\ast \gt=(\gs_1, \cdots, \gs_k, \gt_1, \cdots, \gt_m)$ to denote the juxtaposition of $\gs, \gt \in \gO$. A word of length zero is called the empty word and is denoted by $\es$. For $\gs \in \gO$ and $\gt\in \gO\uu \gO_\infty$ we say $\gt$ is an extension of $\gs$, written as $\gs\prec \gt$, if $\gt|_{|\gs|}=\gs$. For $\gs \in \gO_k$, the cylinder set $C(\gs)$ is defined as $C(\gs)=\set{\gt \in \gO_\infty : \gt|_k =\gs}$. For $\gs=(\gs_1, \gs_2, \cdots, \gs_n) \in \gO_n$, let us write $\vp_\gs=\vp_{1,\gs_1} \circ \cdots \circ\vp_{n,\gs_n}$, and define the rank-$n$ basic interval corresponding to $\gs$ by \[J_\gs=J_{(\gs_1, \gs_2, \cdots, \gs_n)}=\vp_{\gs}(J),\] where $1\leq \gs_k \leq N$, $1\leq k\leq n$; by convention, $J_\es=J$. It is easy to see that the set of basic intervals $\set{J_\gs : \gs \in \gO}$ has the following net properties: $(i)$ $J_{\gs \ast j} \sci J_\gs$ for each $\gs \in \gO_n$ and $1\leq j\leq N$ for all $n\geq 1$; $(ii)$ $J_{\gs} \II J_{\gt}=\es$, if $\gs, \gt \in \gO_n$ for all $n\geq 1$ and $\gs \neq \gt$. Let $b=\inf\set{b_k}$ and $B=\sup\set{B_k}$. Then by our assumption, $1<b\leq B<\infty$. Moreover, by the chain rule and $(C3)$, as $\vp_{k,j}$ is the corresponding branch inverse of $f_k$, where $k\geq 1$ and $1\leq j\leq N$, for all $x \in J$ we have $\left|f_k'(\vp_{k,j}(x))\right|\left|\vp_{k,j}'(x)\right|=1$, and so \begin{equation} \label{eq1} B^{-1} \leq \left|\vp_{k,j}'(x)\right|\leq b^{-1}.
\end{equation} Now let \[E =\II_{n=0}^\infty \UU_{\gs \in \gO_n} J_\gs.\] Choose $x, y$ to be the end points of $J$, and then $\vp_\gs(x), \vp_\gs(y)$ are the end points of $J_\gs$ for each $\gs \in \gO$, and so by mean value theorem, \[|J_\gs|=|\vp_\gs(x)-\vp_\gs(y)|=|\vp_\gs'(w)||x-y|=|\vp_\gs'(w)|,\] for some $w\in J_\gs$. Thus $B^{-n} \leq |J_\gs| \leq b^{-n}$ for any $\gs \in \gO_n$, and so the diameter $\left|J_\gs\right| \to 0$ as $|\gs|\to \infty$. Since given $\gs =(\gs_i)_{i=1}^\infty \in \gO_\infty$ the diameters of the compact sets $J_{\gs|_k}, \, k\geq 1,$ converge to zero and since they form a descending family, the set \[\II_{k=0}^\infty J_{\gs|_k}\] is a singleton and therefore, if we denote its element by $\pi(\gs)$, this defines the coding map $\pi : \gO_\infty \to J$, and so it follows that \[E=\pi(\gO_\infty)=\UU_{\gs \in \gO_\infty} \II_{k=0}^\infty J_{\gs|_k}.\] Moreover, $\pi(C(\gs))=E\ii J_\gs$ for $\gs\in \gO$. With the net properties it follows that $E$ is a perfect, nowhere dense and totally disconnected subset of $J$. The set $E$ is called the \tit{cookie-cutter-like} (CC-like) set generated by the cookie-cutter sequence $\set{f_k}_{k=1}^\infty$. \begin{remark} For each $k\geq 1$, take $f_k=f$, where $f$ is a cookie-cutter mapping, then $E$ is the classical cookie-cutter set, it is the unique invariant set of $f$, i.e., $E=f^{-1} (E)$.\end{remark} \begin{defi} Let $E$ be a CC-like set and $r>0$. The family of basic intervals $\C U_r=\set{J_\gs : |J_\gs|\leq r<|J_{\gs^-}|}\sci \set{J_\gs : \gs \in \gO}$ is called the $r$-Moran covering of $E$ provided it is a covering of $E$, i.e., $E\ci \UU_{J_\gs \in \C U_r}J_\gs$. \end{defi} From the definition it follows that the elements of a Moran covering are disjoint, have almost equal sizes, and are often of different ranks. We need the following lemma that appeared in \cite{MRW}. For completeness, the proof is given here. \begin{lemma} (cf. \cite[Lemma 2.1]{MRW}) \label{lemma1} (\tbf{Bounded variation principle}) There exists a constant $1<\gx<+\infty$ such that for each $n\geq 1$, $\gs \in \gO_n$, and $x, y \in J_\gs$, we have \[\gx^{-1} \leq \frac{|F_n'(x)|}{|F_n'(y)|} \leq \gx\] where $F_n(x)=f_n\circ f_{n-1}\circ \cdots \circ f_1(x)$. \end{lemma} \begin{proof} Fix $n\geq 1$, $\gs=(\gs_1, \gs_2, \cdots, \gs_n)\in \gO_n$ and $x, y\in J_\gs$. Notice that for each $k\leq n$, $F_{k-1}$ maps $J_\gs$ diffeomorphically to the set $\vp_{k, \gs_k} \circ \vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(J)$, and so \[\left|F_{k-1}(x)-F_{k-1}(y)\right| \leq \te{diam}\left( \vp_{k, \gs_k} \circ \vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(J)\right)=|\vp_{k, \gs_k} \circ \vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(J)|.\] By mean value theorem, \begin{align*} &|\vp_{k, \gs_k} \circ \vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(J)|\\ &=\sup_{x, y\in J} \left|\vp_{k, \gs_k} \left(\vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(x)\right)-\vp_{k, \gs_k} \left(\vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(y)\right)\right|\\ &\leq b^{-1} \left|\vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(J)\right|. 
\end{align*} Thus proceeding inductively, \[\left |\vp_{k, \gs_k} \circ \vp_{k+1, \gs_{k+1}}\circ \cdots \circ \vp_{n, \gs_n}(J)\right |\leq b^{-(n-k+1)}.\] Then the H\"older continuity of $f_k'$ gives \begin{align*}\left |f_k'(F_{k-1}(x))-f_k'(F_{k-1}(y))\right | \leq c \left |F_{k-1}(x)-F_{k-1}(y)\right|^\gg\leq cb^{-(n-k+1)\gg},\end{align*} where $c=\sup\set{c_k}$ and $\gg=\inf\set{\gg_k}$ (the exponent may be lowered from $\gg_k$ to $\gg$ since $\left|F_{k-1}(x)-F_{k-1}(y)\right|\leq 1$), and so, by the mean value theorem and the assumption $|f_k'|>1$, we have \begin{align*} &\Big |\log |f_k'(F_{k-1}(x))|-\log |f_k'(F_{k-1}(y))| \Big |\\ &\leq \Big ||f_k'(F_{k-1}(x))| -|f_k'(F_{k-1}(y))| \Big |\\ & \leq cb^{-(n-k+1)\gg}. \end{align*} Therefore, by the above inequality and the chain rule, \begin{align*} &\Big |\log |F_n'(x)|-\log |F_n'(y)|\Big |\\ &=\Big |\sum_{k=1}^n \log |f_k'(F_{k-1}(x))|-\sum_{k=1}^n \log |f_k'(F_{k-1}(y))|\Big |\\ &\leq \sum_{k=1}^n \Big |\log |f_k'(F_{k-1}(x))|-\log |f_k'(F_{k-1}(y))|\Big |\\ &\leq \sum_{k=1}^n cb^{-(n-k+1)\gg}\leq \frac{cb^{-\gg}}{1-b^{-\gg}}. \end{align*} Take $\gx=\exp\left\{\frac{c}{b^{\gg}-1}\right\}$. Since $\frac{c}{b^{\gg}-1}>0$, we have $1< \gx<+\infty$, and thus the lemma follows. \end{proof} Let us now prove the following proposition. \begin{prop} \label{prop31} (\tbf{Bounded distortion principle}) For any $n\geq 1$, $\gs\in \gO_n$, $x \in J_\gs$, we have \[\gx^{-1} \leq |F_n'(x)|\cdot |J_\gs| \leq \gx,\] where $F_n(x)=f_n\circ f_{n-1}\circ \cdots \circ f_1(x)$. Moreover, for each $1\leq j\leq N$, we get $\left|J_{\gs\ast j}\right| \geq \gx^{-1} B^{-1}|J_\gs|$, where $\gx$ is the constant of Lemma~\ref{lemma1}. \end{prop} \begin{proof} Note that for $\gs \in \gO_n$, $F_n : J_\gs \to J$ is a differentiable bijection. So by the mean value theorem, if $y, z \in J_\gs$, there exists $w \in J_\gs$ such that \[F_n(y)-F_n(z)=F_n'(w) (y-z).\] Choose $y, z$ to be the end points of $J_\gs$, and then $F_n(y), F_n(z)$ are the end points of $J$, and so \begin{equation} \label{eq1111}|J|=|F_n'(w)|\cdot|J_\gs|, \te{ i.e. } |F_n'(w)|\cdot|J_\gs|=1.\end{equation} Hence, using the bounded variation principle, we have \begin{equation} \label{eq1112} \gx^{-1} \leq |F_n'(x)|\cdot |J_\gs| \leq \gx\end{equation} for all $x \in J_\gs$. Thus for $1\leq j\leq N$, using \eqref{eq1112}, we have \[\gx^{-1} \leq |F_{n+1}'(x)|\cdot |J_{\gs\ast j} |=|f_{n+1}'(F_n(x))|\cdot |F_n'(x)|\cdot |J_{\gs\ast j}| \leq B |F_n'(x)| \cdot |J_{\gs\ast j}|.\] Then \eqref{eq1111} implies \[ |J_{\gs\ast j}|\geq \gx^{-1} B^{-1}|J_\gs|, \] which completes the proof of the proposition. \end{proof} Let us now prove the following proposition. \begin{prop} \label{prop1} Let $\gs\in\gO_n$, $n\geq 1$, $x, y \in J$ and $\gx$ be the constant of Lemma~\ref{lemma1}. Then, \[\gx^{-1} |\vp_\gs'(y)| \leq |\vp_\gs'(x)| \leq \gx |\vp_\gs'(y)|.\] \end{prop} \begin{proof} For $\gs\in \gO_n$, $n\geq 1$, and $x \in J$, we know $F_n(\vp_\gs(x))=x$. Thus \[|F_n'(\vp_\gs(x))|\cdot|\vp_\gs'(x)|=1.\] Again for all $x \in J$, $\vp_\gs(x) \in J_\gs$. Hence, Lemma~\ref{lemma1} yields \[\gx^{-1} |\vp_\gs'(y)| \leq |\vp_\gs'(x)| \leq \gx |\vp_\gs'(y)|,\] and thus the proposition is obtained. \end{proof} For any $\gs \in \gO$, let us write $\|\vp_\gs'\|=\sup_{x\in J} |\vp_\gs'(x)|$. From the above proposition the following lemma easily follows. \begin{lemma}\label{lemma3} Let $\gs, \gt \in \gO$. Then \[\gx^{-1} \|\vp_\gs'\|\|\vp_{\gt}'\|\leq \|\vp_{\gs\gt}'\|\leq \|\vp_\gs'\|\|\vp_{\gt}'\|. \] \end{lemma} \subsection{Topological pressure:} For $t\in \D R$ and $n\geq 1$, let us write $Z_n(t)=\sum_{\gs \in \gO_n} \|\vp_\gs'\|^t$.
Then for $n, p\geq 1$, \[Z_{n+p}(t)=\sum_{\gs\in \gO_n} \sum_{\gt\in \gO_p}\|\vp_{\gs\gt}'\|^t.\] By Lemma~\ref{lemma3}, if $t\geq 0$ then \[ Z_{n+p}(t) \leq Z_n(t)Z_p(t),\] and if $t<0$ then \[ Z_{n+p}(t) \leq \gx^{-t}Z_n(t)Z_p(t).\] Hence, by the standard theory of subadditive sequences, $\lim_{k\to\infty} \frac 1 k \log Z_k(t)$ exists (cf. \cite[Corollary 1.2]{F2}). Let us denote it by $P(t)$, i.e., \begin{equation} \label{eq2} P(t)=\lim_{k\to\infty} \frac 1 k \log \sum_{\gs\in \gO_k}\|\vp_\gs'\|^t.\end{equation} The above function $P(t)$ is called the \tit{topological pressure} of the CC-like set $E$. Lemma~\ref{lemma11} and Lemma~\ref{lemma12} give some properties of the function $P(t)$. \begin{lemma} \label{lemma11} The function $P(t)$ is strictly decreasing, convex and hence continuous on $\D R$. \end{lemma} \begin{proof} To prove that $P(t)$ is strictly decreasing, let $\gd>0$. Then by Lemma~\ref{lemma3} and Inequality~\eqref{eq1}, \begin{align*} P(t+\gd)&=\lim_{k\to\infty} \frac 1 k \log \sum_{\gs\in \gO_k}\|\vp_\gs'\|^{t+\gd}\leq \lim_{k\to\infty} \frac 1 k \log \sum_{\gs\in \gO_k}\|\vp_\gs'\|^t b^{-k\gd}\\ &=P(t) -\gd \log b<P(t), \end{align*} i.e., $P(t)$ is strictly decreasing. For $t_1, t_2 \in \D R$ and $a_1, a_2 > 0$ with $a_1+a_2=1$, using H\"older's inequality, we have \begin{align*} P(a_1 t_1+a_2t_2)&= \lim_{k\to \infty}\frac 1 k \log\sum_{\gs \in \gO_k} \|\vp_\gs'\|^{a_1 t_1+a_2 t_2}=\lim_{k\to \infty}\frac 1k \log\sum_{\gs \in \gO_k}\Big ({\|\vp_\gs'\|}^{t_1}\Big)^{a_1}\Big( \|\vp_\gs'\|^{t_2}\Big)^{a_2}\\ &\leq \lim_{k\to \infty}\frac 1 k \log\Big(\sum_{\gs \in \gO_k}\|\vp_\gs'\|^{t_1} \Big)^{a_1}\Big(\sum_{\gs \in \gO_k}\|\vp_\gs'\|^{t_2}\Big)^{a_2}\\ &=a_1P(t_1)+a_2 P(t_2), \end{align*} i.e., $P(t)$ is convex and hence continuous on $\D R$. \end{proof} Let us now prove the following lemma. \begin{lemma} \label{lemma12} There exists a unique $h \in (0, +\infty)$ such that $P(h)=0$. \end{lemma} \begin{proof} By Lemma~\ref{lemma11}, the function $P(t)$ is strictly decreasing and continuous on $\D R$, so it has at most one zero. Note that \[ P(0)=\lim_{k\to\infty} \frac 1 k \log \sum_{\gs\in \gO_k}1=\lim_{k\to\infty} \frac 1 k \log N^k=\log N \geq \log 2>0. \] In order to conclude the proof it therefore suffices to show that $\lim_{t \to +\infty} P(t)=-\infty$, for then $P$ has a unique zero $h$, and $h>0$ since $P(0)>0$. For $t>0$, \[P(t)=\lim_{k\to\infty} \frac 1 k \log \sum_{\gs\in \gO_k}\|\vp_\gs'\|^t \leq \lim_{k\to\infty} \frac 1 k\log \sum_{\gs\in \gO_k} b^{-kt}=\lim_{k\to\infty} \frac 1 k\log N^{k} -t \log b=\log N-t\log b.\] Since $b>1$, the right-hand side tends to $-\infty$ as $t\to +\infty$, so $\lim_{t\to +\infty}P(t)=-\infty$, and hence the lemma follows. \end{proof} In the next section, we state and prove the main result of the paper. \section{Main result} The relationship between the unique zero $h$ of the pressure function $P(t)$ and the fractal dimensions of the cookie-cutter-like set $E$ is given by the following theorem. Moreover, it shows that the $h$-dimensional Hausdorff measure and the $h$-dimensional packing measure are finite and positive. \begin{theorem}\label{theorem} Let $E$ be the cookie-cutter-like set associated with the family $\set{f_k}_{k=1}^\infty$ of cookie-cutter mappings, and $h \in (0, +\infty)$ be such that $P(h)=0$. Then \[\te{dim}_H(E)=\te{dim}_P(E)=\ul{\te{dim}}_B(E)=\ol{\te{dim}}_B(E)=h,\] and \[0<\C H^h(E)\leq \C P^h(E) <\infty.\] \end{theorem} We need the following lemma. \begin{lemma} \label{lemma32} Let $n\geq 1$, $\gs\in \gO_n$ and $x \in J$.
Then \[\gx^{-1} |J_\gs| \leq |\vp_\gs'(x)| \leq \gx |J_\gs|,\] where $\gx$ is the constant of Lemma~\ref{lemma1}. \end{lemma} \begin{proof} Let $x \in J$, and then $\vp_\gs(x) \in J_\gs$, where $\gs \in \gO_n$ and $n\geq 1$. Moreover, $F_n(\vp_\gs(x))=x$, and so $|F_n'(\vp_\gs(x))|\cdot |\vp_\gs'(x)|=1$ where $F_n(x)=f_n\circ f_{n-1}\circ \cdots \circ f_1(x)$. Now use Proposition~\ref{prop31} to obtain the lemma. \end{proof} Let us now prove the following lemma. \begin{lemma}\label{lemma33} Let $\gs, \gt \in \gO$. Then \[\gx^{-3} |J_\gs||J_\gt| \leq |J_{\gs\gt}| \leq \gx^3 |J_\gs||J_\gt|,\] where $\gx$ is the constant of Lemma~\ref{lemma1}. \end{lemma} \begin{proof} For $\gs, \gt \in \gO$, we have $|\vp_{\gs\gt}'(x)|=|\vp_\gs'(y)||\vp_\gt'(x)|$ where $y=\vp_\gt(x)$ for $x \in J$. Again Proposition~\ref{prop1} gives that for any $x, y \in J$ \[\gx^{-1} |\vp_\gs'(y)| \leq |\vp_\gs'(x)| \leq \gx |\vp_\gs'(y)|.\] Hence, Lemma~\ref{lemma32} implies \[\gx^{-3} |J_\gs||J_\gt|\leq \gx^{-1} |\vp_\gs'(y)||\vp_\gt'(x)|=\gx^{-1} |\vp_{\gs\gt}'(x)|\leq |J_{\gs\gt}|\leq \gx |\vp_{\gs\gt}'(x)|\leq \gx^{3} |J_\gs||J_\gt|,\] and thus the lemma is obtained. \end{proof} \begin{note} Lemma~\ref{lemma32} implies \[\gx^{-1} |J_\gs| \leq \sup_{x \in J}|\vp_\gs'(x)|=\|\vp_\gs'\| \leq \gx |J_\gs|,\] and so the topological pressure $P(t)$ can be written as follows: \[P(t)=\lim_{k\to \infty} \frac 1 k \log \sum_{\gs\in \gO_k}|J_\gs|^t. \] \end{note} Let us now prove the following proposition, which plays a vital role in the paper. \begin{prop} \label{prop21} Let $h \in (0, +\infty)$ be unique such that $P(h)=0$, and let $s_\ast$ and $s^\ast$ be any two arbitrary real numbers with $0<s_\ast<h<s^\ast$. Then for all $n\geq 1$, \[\gx^{-3s_\ast}< \sum_{\gs \in \gO_n}|J_\gs|^{s_\ast} \te{ and } \sum_{\gs \in \gO_n}|J_\gs|^{s^\ast}<\gx^{3s^\ast},\] where $\gx$ is the constant of Lemma~\ref{lemma1}. \end{prop} \begin{proof} Let $s_\ast<h$. As the pressure function $P(t)$ is strictly decreasing, $P(s_\ast)>P(h)=0$. Then for any positive integer $n$, by Lemma~\ref{lemma33}, we have \begin{align*} 0&<P(s_\ast) =\lim_{p\to \infty} \frac 1{np} \log \sum_{\go\in \gO_{np}}|J_\go|^{s_\ast}\leq \lim_{p\to \infty} \frac 1{np} \log \gx^{3(p-1)s_\ast} \left(\sum_{\gs\in \gO_{n}}|J_\gs|^{s_\ast}\right)^p, \end{align*} which implies \[0< \frac 1n \log \left(\gx^{3s_\ast} \sum_{\gs\in \gO_{n}}|J_\gs|^{s_\ast}\right) \te{ and so } \sum_{\gs\in \gO_{n}}|J_\gs|^{s_\ast}>\gx^{-3s_\ast}. \] Now if $h<s^\ast$, then $P(s^\ast)<0$ as $P(t)$ is strictly decreasing. Then for any positive integer $n$, by Lemma~\ref{lemma33}, we have \begin{align*} 0>P(s^\ast) =\lim_{p\to \infty} \frac 1{np} \log \sum_{\go\in \gO_{np}}|J_\go|^{s^\ast}\geq \lim_{p\to \infty} \frac 1{np} \log \gx^{-3(p-1)s^\ast} \left(\sum_{\gs\in \gO_{n}}|J_\gs|^{s^\ast}\right)^p, \end{align*} which implies \[0>\frac 1n \log \left(\gx^{-3s^\ast}\sum_{\gs\in \gO_{n}}|J_\gs|^{s^\ast}\right) \te{ and so } \sum_{\gs\in \gO_{n}}|J_\gs|^{s^\ast}<\gx^{3s^\ast}. \] Thus the proposition is obtained. \end{proof} \begin{cor} \label{cor1} Since $s_\ast$ and $s^\ast$ be any two arbitrary real numbers with $0<s_\ast<h<s^\ast$, from the above proposition it follows that for all $n\geq 1$, \[\gx^{-3h}\leq \sum_{\gs \in \gO_n}|J_\gs|^h\leq \gx^{3h}.\] \end{cor} \begin{proof} Let us first prove $\sum_{\gs\in \gO_n} |J_\gs|^h\leq \gx^{3h}$. If not let $\sum_{\gs\in \gO_n} |J_\gs|^h>\gx^{3h}$. Define $f_n(t)=\gx^{3t}-\sum_{\gs \in \gO_n}|J_\gs|^{t}$. Then $f_n(t)$ is a real valued continuous function. 
Moreover, $f_n(s^\ast)>0$, and $f_n(h)<0$. Then by intermediate value theorem, there exists a real number $s$, where $h<s<s^\ast$, such that $f_n(s)=0$, which implies $\sum_{\gs \in \gO_n}|J_\gs|^s=\gx^{3s}$. But $h<s^\ast$ is arbitrary for which $\sum_{\gs \in \gO_n}|J_\gs|^{s^\ast}<\gx^{3s^\ast}$, and so a contradiction arises. Hence $\sum_{\gs\in \gO_n} |J_\gs|^h\leq \gx^{3h}$. Similarly, $\sum_{\gs\in \gO_n} |J_\gs|^h\geq \gx^{-3h}$. Thus the corollary follows. \end{proof} The following proposition plays an important role in the rest of the paper. \begin{prop} \label{prop32} Let $h\in (0, +\infty)$ be such that $P(h)=0$. Then there exists a constant $\gh>1$ and a probability measure $\mu_h$ supported by $E$ such that for any $\gs\in \gO$, \[\gh^{-1} |J_{\gs}|^h\leq \mu_h(J_{\gs}) \leq \gh |J_{\gs}|^h.\] \end{prop} \begin{proof} For $\gs\in \gO$, $n\geq 1$, define \[\gn_n(C(\gs))=\frac{\sum_{\gt\in \gO_n} (\te{diam} J_{\gs\gt})^h}{\sum_{\gt\in \gO_{|\gs|+n}} (\te{diam}J_{\gt})^h}.\] Then using Lemma~\ref{lemma33} and Corollary~\ref{cor1}, we have \[\gn_n(C(\gs))\leq \frac{\gx^{3h} (\te{diam}J_\gs)^h \sum_{\gt\in \gO_n}(\te{diam} J_\gt)^h}{\sum_{\gt\in \gO_{|\gs|+n}} (\te{diam}J_{\gt})^h}\leq \gx^{9h}(\te{diam}J_\gs)^h, \] and similarly, $\gn_n(C(\gs))\geq \gx^{-9h}(\te{diam}J_\gs)^h$. Thus for a given $\gs \in \gO$, $\set{\gn_n(C(\gs))}_{n=1}^\infty$ is a bounded sequence of real numbers, and so Banach limit, denoted by Lim, is defined. For $\gs \in \gO$, let \[\gn(C(\gs))=\te{Lim}_{n\to \infty} \gn_n(C(\gs)).\] Then \[\sum_{j=1}^N \gn(C(\gs j))=\te{Lim}_{n\to \infty} \sum_{j=1}^N \frac{\sum_{\gt\in \gO_n} (\te{diam} J_{\gs j \gt})^h}{\sum_{\gt\in \gO_{|\gs|+1+n}} (\te{diam}J_{\gt})^h}=\te{Lim}_{n\to \infty} \frac{\sum_{\gt\in \gO_{n+1}} (\te{diam} J_{\gs \gt})^h}{\sum_{\gt\in \gO_{|\gs|+n+1}} (\te{diam}J_{\gt})^h}, \] and so \[\sum_{j=1}^N \gn(C(\gs j))=\te{Lim}_{n\to \infty} \gn_{n+1}(C(\gs))=\te{Lim}_{n\to \infty} \gn_{n}(C(\gs))=\gn(C(\gs)).\] Thus by Kolmogorov's extension theorem, $\gn$ can be extended to a unique Borel probability measure $\gg$ on $\gO_\infty$. Let $\mu_h$ be the image measure of $\gg$ under the coding map $\pi$, i.e., $\mu_h=\gg\circ \pi^{-1}$. Then $\mu_h$ is a unique Borel probability measure supported by $E$. Moreover, for any $\gs\in \gO$, \[\mu_h(J_\gs)=\gg(C(\gs))=\te{Lim}_{n\to \infty} \gn_n(C(\gs)) \leq \te{Lim}_{n\to\infty} \gx^{9h}(\te{diam}J_\gs)^h= \gx^{9h}(\te{diam}J_\gs)^h,\] and similarly, \[\mu_h(J_\gs)\geq \gx^{-9h}(\te{diam}J_\gs)^h.\] Write $\gh=\gx^{9h}$, and then $\gh>1$, and thus the proof of the proposition is complete. \end{proof} The following proposition is useful. \begin{prop} \label{prop111} Let $r>0$, and let $\C U_r$ be the $r$-Moran covering of $E$. Then there exists a positive integer $M$ such that the ball $B(x, r)$ of radius $r$, where $x\in J$, intersects at most $M$ elements of $\C U_r$. \end{prop} \begin{proof} By Proposition~\ref{prop31}, for any $\gs\in \gO$, we get $\left|J_\gs\right|\geq \gx^{-1} B^{-1}|J_{\gs^-}|$. Let $\C U_r$ be the $r$-Moran covering of $E$. Fix any $x\in J$, and write $V=B(x, r)$. Define, \[Q_V=\set{J_\gs \in \C U_r : J_\gs\ii V \neq \es, \, \gs \in \gO}.\] Note that any interval $J_\gs$ in $Q_V$ contains a ball of radius $\frac 12 |J_\gs|$, and all such balls are disjoint. 
Moreover, all the balls are contained in a ball of radius $2r$ concentric with $V$, and so comparing the volumes (in fact lengths), \[2r\geq \left(\# Q_V\right)\frac 1 2 |J_\gs|\geq \left(\# Q_V\right) \frac 1 2\gx^{-1} B^{-1}|J_{\gs^-}|>\left(\# Q_V\right)\frac 12 \gx^{-1} B^{-1}r,\] which implies $\#Q_V< 4 \gx B$. Hence $M:=\lfloor 4 \gx B \rfloor$ fulfills the statement of the proposition. \end{proof} \begin{prop} \label{prop121} Let $h \in (0, +\infty)$ be such that $P(h)=0$. Then, \[0<\C H^h(E)<\infty \te{ and }\te{dim}_H(E)=h.\] \end{prop} \begin{proof} For any $n\geq 1$, the set $\set{J_\gs : \gs \in \gO_n}$ is a covering of $E$, and so by Corollary~\ref{cor1}, \[\C H^h(E) \leq \liminf_{n\to \infty} \sum_{\gs \in \gO_n}|J_\gs|^h\leq \gx^{3h}<\infty,\] which yields dim$_H(E)\leq h$. Let $\mu_h$ be the probability measure defined in Proposition~\ref{prop32}. Then $\mu_h(J_\gs)\leq \gh |J_\gs|^h$ for every $\gs \in \gO$. Let $r>0$ and let $\C U_r=\set{J_\go : |J_{\go}|\leq r<|J_{\go^{-}}|}$ be the $r$-Moran covering of $E$. Then by Proposition~\ref{prop111}, we get \[\mu_h(B(x, r))\leq \sum_{J_{\go} \ii B(x, r) \neq \es}\mu_h(J_{\go}) \leq \gh \sum_{J_{\go} \ii B(x, r) \neq \es}|J_{\go}|^h\leq \gh M r^h.\] Thus \[\limsup_{r\to 0} \frac{\mu_h(B(x, r))}{r^h}\leq \gh M,\] and so by Proposition~2.2 in \cite{F2}, $\C H^h(E) \geq \gh^{-1} M^{-1}>0$, which implies dim$_H(E) \geq h$. This proves the proposition. \end{proof} Let us now prove the following lemma. \begin{lemma} \label{lemma122} Let $h \in (0, +\infty)$ be such that $P(h)=0$. Then $\ol{\te{dim}}_B(E)\leq h$. \end{lemma} \begin{proof} Let $\mu_h$ be the probability measure defined in Proposition~\ref{prop32}, and for $r>0$ let $\C U_r=\set{J_\go : |J_\go|\leq r<|J_{\go^{-}}|}$ be the $r$-Moran covering of $E$. Then for any $J_\gs \in \C U_r$, we get $\mu_h(J_\gs)\geq \gh^{-1} |J_\gs|^h$. Thus it follows that \[\|\C U_r\|^h =\sum_{J_\gs \in \C U_r} |J_\gs|^h \leq \gh \sum_{J_\gs \in \C U_r}\mu_h(J_\gs)= \gh.\] Again for any $J_\gs \in \C U_r$, it follows that $|J_\gs|\geq \gx^{-1}B^{-1} |J_{\gs^-}| >\gx^{-1}B^{-1}r$. Hence, $\left(\gx B\right)^{-h}r^h N_r(E) \leq \|\C U_r\|^h\leq \gh$, where $N_r(E)$ is the smallest number of sets of diameter at most $r$ that can cover $E$, which implies $N_r(E)\leq \gh \left(\gx B\right)^h r^{-h}$ and so \[\log N_r(E) \leq \log \left[\gh \left(\gx B\right)^h\right] -h\log r,\] which yields \[\ol{\te{dim}}_B E=\limsup_{r\to 0} \frac{\log N_r(E)}{-\log r} \leq h,\] and thus the lemma is obtained. \end{proof} Let us now prove the following proposition. \begin{prop} \label{prop123} Let $h \in (0, +\infty)$ be such that $P(h)=0$. Then $\C P^h(E)<\infty$. \end{prop} \begin{proof} For the probability measure $\mu_h$, by Proposition~\ref{prop32}, there exists a constant $\gh\geq 1$ such that $\mu_h(J_\gs)\geq \gh^{-1} |J_\gs|^h$. Again Proposition~\ref{prop31} gives that \[\left|J_{\gs}\right| \geq \gx^{-1} B^{-1}|J_{\gs^-}|.\] Let $\C U_r=\set{J_\go : |J_\go|\leq r<|J_{\go^{-}}|}$ be the $r$-Moran covering of $E$ for some $r>0$. Let $x\in J_\gs$ for some $J_\gs \in \C U_r$, and then $J_\gs \sci B(x, r)$. Therefore, \[\mu_h(B(x, r))\geq \mu_h(J_\gs)\geq \gh^{-1} |J_\gs|^h>\gh^{-1}\gx^{-h} B^{-h} r^h,\] which implies \[\liminf_{r\to 0} \frac{\mu_h(B(x, r))}{r^h}\geq \gh^{-1}\gx^{-h} B^{-h},\] and so by Proposition~2.2 in \cite{F2}, \[\C P^h(E)\leq 2^h \gh\gx^{h} B^{h}<\infty,\] and thus the proposition is obtained.
\end{proof} \subsection*{Proof of Theorem~\ref{theorem}} Proposition~\ref{prop121} tells us that dim$_H(E)=h$, and Lemma~\ref{lemma122} gives that $\ol{\te{dim}}_B E\leq h$. Combining these with the inequalities in \eqref{eq0}, we have \[\te{dim}_H(E)=\te{dim}_P(E)=\ul{\te{dim}}_B(E)=\ol{\te{dim}}_B(E)=h.\] Again from Proposition~\ref{prop121} and Proposition~\ref{prop123} it follows that \[0<\C H^h(E)\leq \C P^h(E)<\infty.\] Thus the proof of the theorem is complete.
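For orientation, consider the self-similar specialization (an illustration only, consistent with the classical theorem quoted in the Introduction): if every $f_k$ equals one fixed linear cookie-cutter whose inverse branches are similarities with ratios $c_1, c_2, \cdots, c_N$, then $\|\vp_\gs'\|=c_{\gs_1}c_{\gs_2}\cdots c_{\gs_n}$ for $\gs\in \gO_n$, so that $Z_n(t)=\big(\sum_{j=1}^N c_j^t\big)^n$ and $P(t)=\log \sum_{j=1}^N c_j^t$. The unique zero $h$ of $P$ is then exactly the solution of Moran's equation $\sum_{j=1}^N c_j^h=1$; for instance, $N=2$ and $c_1=c_2=\frac 13$ give $h=\frac{\log 2}{\log 3}$.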
\subsection{Shapes3D} \begin{table*}[h] { \begin{center} \small \begin{tabular}{ccccc} Layer & Units & Kernel size & Activation & Stride \\ \hline \\ Conv2D & 32 & 3x3 & ReLU & 1 \\ Conv2D & 32 & 3x3 & ReLU & 1 \\ Conv2D & 64 & 3x3 & ReLU & 2 \\ Conv2D & 64 & 3x3 & ReLU & 1 \\ Conv2D & 128 & 3x3 & ReLU & 1 \\ Conv2D & 128 & 3x3 & ReLU & 2 \\ Flatten & -- & -- & -- & -- \\ Dense & 128 & -- & ReLU & --\\ Dense & Embedding dimension (64) & -- & Linear & --\\ \end{tabular} \end{center} } \caption{\textbf{Architecture used for Shapes3D experiments (Section 4.1).} Input shape is [64, 64, 3]. } \label{tab:shapes3d_arch} \end{table*} For the experiments of Figures~\ref{fig:shapes3d_scatter}\&3 we used the network architecture listed in Table~\ref{tab:shapes3d_arch}, and trained for 2000 steps with a learning rate of $3\times10^{-5}$. We used a set size of 32 and squared L2 distance as the embedding space metric, with a temperature of 1. To curate a set for training, we randomly sample from among the possible values for the inactive factor(s) and then filter the dataset according to it. This takes longer when there are more inactive factors, as more of the dataset must be sieved out to acquire each stack. \subsection{MNIST} \begin{table}[h] { \begin{center} \small \begin{tabular}{ccccc} Layer & Units & Kernel size & Activation & Stride \\ \hline \\ Conv2D & 32 & 3x3 & ReLU & 1 \\ Conv2D & 32 & 3x3 & ReLU & 1 \\ Conv2D & 32 & 3x3 & ReLU & 2 \\ Conv2D & 32 & 3x3 & ReLU & 1 \\ Conv2D & 32 & 3x3 & ReLU & 1 \\ Flatten & -- & -- & -- & -- \\ Dense & 128 & -- & ReLU & --\\ Dense & Embedding dimension (8) & -- & Linear & --\\ \end{tabular} \end{center} } \caption{\textbf{Architecture used for MNIST experiments (Section 4.2).} Input shape is [28, 28, 1]. } \label{tab:mnist_arch} \end{table} For the MNIST experiments we used the architecture specified in Table~\ref{tab:mnist_arch}. The set size was 64. We used a learning rate of $10^{-4}$ and trained for 500 steps. We used squared L2 distance as the embedding space metric and a temperature of 1. All instances of the digit 9 are held out at training time, and images of the other digits are formed into stacks before being randomly paired each training batch. This ran in under 30 seconds on an NVIDIA Tesla V100 GPU. \subsection{Pose estimation} \begin{table}[h] { \begin{center} \small \begin{tabular}{ccccc} Layer & Units & Kernel size & Activation & Stride \\ \hline \\ ResNet50, up to conv4\_block6 & -- & -- & -- & -- \\ Conv2D & 256 & 3x3 & ReLU & 1 \\ Global Average Pooling & -- & -- & -- & -- \\ Flatten & -- & -- & -- & -- \\ Dense & 128 & -- & tanh & --\\ Dense & Embedding dimension (64) & -- & Linear & --\\ \end{tabular} \end{center} } \caption{\textbf{Architecture used for pose estimation experiments (Section 4.3).} Input shape is [128, 128, 3].} \label{tab:pose_arch} \end{table} For both the pose estimation lookup (Table 1) and regression (Table 2) tasks, we use the same base network to embed the images, described in Table~\ref{tab:pose_arch}. In contrast to the Shapes3D and MNIST experiments, we train with mini-batches consisting of 4 pairs of image sets, each of size 32. We use cosine similarity and a temperature of 0.1 for lookup and 0.05 for regression. For the lookup task, the network trained for 40k steps with a learning rate that starts at $10^{-4}$ and decays by a factor of 2 every 10k steps. 
Training begins with purely synthetic images; the fraction of real images folded into the unconstrained stack then ramps up linearly to 10\%, stepping every 4k steps. For regression, the embeddings are then fed, separately for each Euler angle, as input to a 128-unit dense layer with tanh activation, which then splits into two dense layers with 2 and 4 units and linear activation for the angle magnitude and quadrant, respectively, as in \cite{s2reg}. To maintain consistency between how the embeddings are processed for the ABC loss and how they are fed into the regression sub-network, the embeddings are L2-normalized to lie on the 64-dimensional unit sphere before the regression. The angle magnitudes are passed through a spherical exponential activation function \cite{s2reg}, which is the square root of a softmax. The magnitudes are then compared with the ground truth $(|\text{sin} \phi_i|, |\text{cos} \phi_i|)$, with $i$ spanning the three Euler angles, through a cosine similarity loss. The quadrant outputs are trained as a classification task with categorical cross entropy against the ground truth angle quadrants, defined as $(\text{sign}(\text{sin} \phi_i), \text{sign}(\text{cos} \phi_i))$. Training proceeds for 60k steps with a learning rate that starts at $10^{-4}$ and decays by a factor of 2 every 20k steps. To more closely match the distribution of camera pose in real images, we filter the ShapeNet renderings by elevation: 0.5 radians and 1.3 radians for the max elevation for cars and chairs, respectively (for all of the baselines as well). \subsection{Factor isolation on real images only} \begin{table}[h] { \begin{center} \small \begin{tabular}{ccccc} Layer & Units & Kernel size & Activation & Stride \\ \hline \\ Conv2D & 64 & 3x3 & ReLU & 1 \\ Conv2D & 64 & 3x3 & ReLU & 2 \\ Conv2D & 128 & 3x3 & ReLU & 1 \\ Conv2D & 128 & 3x3 & ReLU & 2 \\ Conv2D & 256 & 3x3 & ReLU & 1 \\ Conv2D & 256 & 3x3 & ReLU & 2 \\ Flatten & -- & -- & -- & -- \\ Dense & 128 & -- & ReLU & --\\ Dense & Embedding dimension (64) & -- & Linear & --\\ \end{tabular} \end{center} } \caption{\textbf{Architecture used for the Extended YaleB Face and EPFL Multi-view Car experiments (Section 4.4).} Input shape is [64, 64, 1] for the face images and [64, 64, 3] for the car images. } \label{tab:yaleb_epfl_arch} \end{table} \subsubsection{Extended YaleB Face} Images of all 28 people were used for training. There are 9 poses per person and 64 illumination angles. With illumination as the only active factor, the full set of 64 images was used for each set; with illumination and pose active, 384 images were randomly sampled for each person (out of 576) every time a set was made for training. Each image was augmented twice with a random shift of contrast, and then a random crop (though the faces were not tight-cropped for this dataset). We used negative L2 distance as the similarity measure, and a learning rate of $3\times10^{-4}$ for 10,000 steps (illumination active) or 50,000 steps (illumination and pose active). \subsubsection{EPFL Multi-view Car} We used a set size of 81 images randomly sampled from the sequence of images for a particular car. All 20 car sequences were used for training. Each image was augmented twice with a random shift of contrast and saturation, and then a random crop (though the cars were not tight-cropped for this dataset). We used negative L2 distance as the similarity measure, and a learning rate of $3\times10^{-4}$ for 10,000 steps.
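For concreteness, the convolutional embedding network listed in the table above can be rendered as the following \texttt{tf.keras} sketch. This is our illustration rather than the authors' code: the function name is ours, and padding, initialization and optimizer settings are not specified in the text, so Keras defaults are assumed.
\begin{verbatim}
import tensorflow as tf

def build_embedding_network(input_shape=(64, 64, 3), embedding_dim=64):
    # Conv stack from the Section 4.4 table: 3x3 kernels, ReLU activations,
    # strides 1,2,1,2,1,2, then Flatten -> Dense(128, relu) -> linear embedding.
    # Use input_shape=(64, 64, 1) for the grayscale face images.
    L = tf.keras.layers
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        L.Conv2D(64, 3, strides=1, activation="relu"),
        L.Conv2D(64, 3, strides=2, activation="relu"),
        L.Conv2D(128, 3, strides=1, activation="relu"),
        L.Conv2D(128, 3, strides=2, activation="relu"),
        L.Conv2D(256, 3, strides=1, activation="relu"),
        L.Conv2D(256, 3, strides=2, activation="relu"),
        L.Flatten(),
        L.Dense(128, activation="relu"),
        L.Dense(embedding_dim),  # linear output = embedding u
    ])
\end{verbatim}
The same pattern (with the layer widths from the corresponding tables) applies to the Shapes3D and MNIST embedding networks described earlier in this supplement.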
\subsection{Baselines} Imagenet-pretrained ResNet: We use the same ResNet50V2 base as for the ABC embedding network, and compare representations for each image by cosine similarity (which performed better than comparing by L2 distance). Sanchez et al.~\cite{sanchez20eccv}: We used the colored-MNIST architecture specifications and hyperparameters described in the Supplemental Material for the MNIST experiments of Section 4.2. As the colored-MNIST factors of variation isolated by \cite{sanchez20eccv} are simpler in nature (color of foreground/background from specific digit, versus digit identity from style), we found better results by boosting the dimension of the exclusive representation to 64 (up from the original 8 for the color description). CCVAE~\cite{jha2018disentangling} and ML-VAE~\cite{bouchacourt18aaai}: We translated the publicly available pytorch code to tensorflow for training MNIST\footnote{\url{https://github.com/ananyahjha93/cycle-consistent-vae}},\footnote{\url{https://github.com/DianeBouchacourt/multi-level-vae}}. We were unable to find code for their experiments on larger image sizes, but we followed the encoder and decoder specifications for the 64x64 RGB images in the Supplemental for \cite{jha2018disentangling}, found here\footnote{\url{https://arxiv.org/pdf/1804.10469.pdf}}, for both methods. We optimized hyperparameters in a grid search around the published numbers, and used a group size for \cite{bouchacourt18aaai} which matched the stack size used for the ABC method. As with \cite{sanchez20eccv}, we downsized the ShapeNet renderings and Pascal3D+ tight crops to 64x64, after attempts to scale the encoder-decoder architecture up to 128x128 were unsuccessful. LORD~\cite{gabbay2020lord}: We ran the publicly available tensorflow code with the parameters for the paper's Cars3D experiments\footnote{\url{https://github.com/avivga/lord}}. Because a separate latent vector for each instance must be learned, memory became an issue when using the full training set (as used for all other methods). We used 200 instances each for cars and chairs to closely match the 183 Cars3D instances used in their work, with images downsized and tight-cropped to 64x64 to match this works' architecture. \subsection{Timing calculation on MNIST} \begin{table}[h] { \begin{center} \small \begin{tabular}{cccc} & ABC (ours) & Sanchez et al.~\cite{sanchez20eccv} & CC-VAE~\cite{jha2018disentangling} \\ Seconds/epoch & \cellcolor[HTML]{BBBBBB}{47.8} & \cellcolor[HTML]{DDDDDD}{70.2} & 150.2 \\ \end{tabular} \end{center} } \caption{\textbf{Training timing for style isolation on MNIST (Section 4.2).} These comparisons were run on an NVIDIA Tesla K80.} \label{tab:mnist_timing} \end{table} We compare measurements of training time in Table~\ref{tab:mnist_timing}, all run in Tensorflow on an NVIDIA Tesla K80. The discriminative approaches -- ABC and \cite{sanchez20eccv} -- are far faster to train than the generative approach of \cite{jha2018disentangling}. ABC is fastest by a wide margin due to its simplicity, requiring only one embedding network and a relatively simple loss calculation, in contrast to the seven networks and involved loss calculations required for \cite{sanchez20eccv}. Note that by having the fastest training time per epoch, ABC further widens the gulf to the baselines, which require orders of magnitude more epochs to yield representations which isolate digit style. 
\section{Introduction} \input{sections/intro} \section{Related work} \input{sections/related} \section{Methods} \label{sec:methods} \input{sections/methods} \section{Experiments} \input{sections/experiments} \section{Discussion} \input{sections/discussion} {\small \bibliographystyle{ieee_fullname} \subsection{Systematic evaluations on Shapes3D} \label{sec:shapes3d} Images from the Shapes3D dataset consist of a geometric primitive with a floor and background wall (Fig.~\ref{fig:factors}). There are six factors of variation in the dataset: three color factors (wall, object and floor hue) and three geometric factors (scale, shape and orientation). Images were grouped with certain generative factors held inactive for each of many different training scenarios in Fig.~\ref{fig:shapes3d_ensemble}; no augmentations were used. We probed ABC-learned representations through the mutual information $I(U;G)$ between representations $U$ and known latent factors $G$, estimated using MINE~\cite{mine2018} and averaged over ten runs each. Deterministic networks generally preserve all information between input and output, so noise was added for a meaningful quantity $I(U+\eta;G)$, with $\eta\sim\mathcal{N}(0,\,\sigma^{2})$~\cite{saxe2019,Elad2019}. In the case where $s(u,v)$ is negative Euclidean distance, $\tau$ serves as a natural length scale of the correspondence loss so we used $\sigma=\tau$ (Supp.). We discuss noteworthy aspects of learned representations below. \noindent \textit{All inactive factors are suppressed; a subset of active factors are isolated:} In Fig.~\ref{fig:shapes3d_ensemble} information with respect to all inactive factors was suppressed, and a subset of active factors---not necessarily all---were isolated. Only when all three hue factors were inactive (Fig.~\ref{fig:shapes3d_ensemble}c) were the geometric factors present in the learned representations. Presumably the hue factors are easier to learn and serve as shortcuts~\cite{tian2020info}, allowing the representations to ignore other factors. \noindent \textit{Semi-supervised ABC-X is equally effective:} Correspondence is found through active factors common to both sets, which means if one set consistently has additional active factors, they will not be useful for optimizing the ABC loss. In semi-supervised scenarios with one set-supervised set per mini-batch and the other consisting of random samples over the entire dataset (e.g., Fig.~\ref{fig:factors}c), ABC-X performed as well as ABC with full set supervision (Fig.~\ref{fig:shapes3d_ensemble}a-c). \noindent \textit{Increasing set size isolates more active factors:} Intuitively, finding a one-to-one correspondence between sets with more elements requires more discerning power. In Fig.~\ref{fig:shapes3d_ensemble}d, information in the learned representations about all active factors increased with the set size used during training. The set size effectively serves as the number of negative samples in the InfoNCE loss, and it has been found that more negative samples benefits contrastive learning \cite{hjelm2019DIM}. \subsection{Fast digit style isolation} \label{sec:mnist} Handwritten digits, such as from MNIST~\cite{mnist}, have a natural partitioning of factors of variation into digit class (e.g., 2 or 8) and style (stroke width, slant, shape, etc.). Our goal is to learn style information generalized across digit class, without access to style annotations or images grouped with matching style. 
Images were grouped by class into sets of size 64 and embedded to $\mathbb{R}^8$; no augmentations were used. \begin{figure}[h] \centering \includegraphics[width=0.85\columnwidth]{figures/mnist20220328.pdf} \caption{\emph{\textbf{Fast style isolation from MNIST digits.} With digit class as the inactive factor during training, ABC isolated style. % \textbf{(a)} Embeddings of the digit 9---withheld during training--- % fan out by thickness and slant, active factors common to all digit classes. \textbf{(b)} The boxed images along the diagonal were queries for retrieval from the test set; the other images in each row were the nearest embeddings per class. ABC isolated style information more than an order of magnitude faster than \textbf{(c)} the discriminative approach of \cite{sanchez20eccv} and \textbf{(d)} the VAE approach of~\cite{jha2018disentangling}.} } \label{fig:mnist} \end{figure} \begin{table*}[h] \resizebox{\linewidth}{!}{ \centering \small \begin{tabular}{lccccccccc} & & \multicolumn{4}{c}{Cars} & \multicolumn{4}{c}{Chairs}\\ \cmidrule(lr){3-6}\cmidrule(lr){7-10} & Dim ($\mathbb{R}^N$) & Med ($^\circ$) $\downarrow$ & \makecell{Acc.\ \\ @10$^\circ$ $\uparrow$} & \makecell{Acc.\ \\ @15$^\circ$ $\uparrow$} & \makecell{Acc.\ \\ @30$^\circ$ $\uparrow$} & Med ($^\circ$) $\downarrow$ & \makecell{Acc.\ \\ @10$^\circ$ $\uparrow$} & \makecell{Acc.\ \\ @15$^\circ$ $\uparrow$} & \makecell{Acc.\ \\ @30$^\circ$ $\uparrow$} \\ \cmidrule(lr){2-2}\cmidrule(lr){3-6}\cmidrule(lr){7-10} CCVAE\cite{jha2018disentangling} & 256 & 54.9 & 0.03 & 0.07 & 0.27 & 81.5 & 0.04 & 0.07 & 0.18 \\ ML-VAE\cite{bouchacourt18aaai} & 32 & 75.6 & 0.05 & 0.10 & 0.27 & 80.6 & 0.03 & 0.07 & 0.19 \\ LORD\cite{gabbay2020lord} & 128 & 71.3 & 0.09 & 0.15 & 0.32 & 89.8 & 0.03 & 0.05 & 0.15 \\ ResNet & 2048 & 85.3 & 0.07 & 0.14 & 0.28 & 80.7 & 0.04 & 0.07 & 0.19 \\ ResNet-Intermediate & 16,384 & 15.8 & 0.30 & 0.49 & 0.64 & 47.7 & 0.08 & 0.15 & 0.37 \\ \cmidrule(lr){1-1} Set supervision w/ TCC loss\cite{Dwibedi2019}& 64 & 23.1 & 0.14 & 0.29 & 0.59 & 58.3 & 0.09 & 0.16 & 0.40 \\ Augmentation alone (with \cite{chen2020simple}) & 64 & 80.2 & 0.16 & 0.24 & 0.33 & 84.4 & 0.04 & 0.09 & 0.21 \\ ABC & 64 & \cellcolor[HTML]{DDDDDD}{15.1} & \cellcolor[HTML]{DDDDDD}{0.34} & \cellcolor[HTML]{DDDDDD}{0.50} & \cellcolor[HTML]{DDDDDD}{0.65} & \cellcolor[HTML]{DDDDDD}{22.1} & \cellcolor[HTML]{DDDDDD}{0.17} & \cellcolor[HTML]{DDDDDD}{0.33} & \cellcolor[HTML]{DDDDDD}{0.60} \\ ABC-X & 64 & \cellcolor[HTML]{BBBBBB}{13.0} & \cellcolor[HTML]{BBBBBB}{0.37} & \cellcolor[HTML]{BBBBBB}{0.56} & \cellcolor[HTML]{BBBBBB}{0.73} & \cellcolor[HTML]{BBBBBB}{16.8} & \cellcolor[HTML]{BBBBBB}{0.27} & \cellcolor[HTML]{BBBBBB}{0.45} & \cellcolor[HTML]{BBBBBB}{0.74} \end{tabular} } \caption{\small\emph{\textbf{Pose estimation with no pose annotations at training, set supervision on synthetic images.} Median error and accuracies (the fraction of errors better than the threshold value) on the Pascal3D+ car and chair test sets. Pose estimates were obtained through nearest neighbor lookup into a `codebook' of 1800 synthetic images with associated GT pose; reported values are the average over ten random codebooks. 
The full ABC-X method---able to suppress augmentable nuisance factors of variation and to utilize unannotated real images during training---outperformed everything else, particularly in the difficult chair category.}} \label{tab:pose} \end{table*} \begin{table}[t] \centering \small \begin{tabular}{@{}lcccc} & \multicolumn{2}{c}{Cars} & \multicolumn{2}{c}{Chairs}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} & Med ($^\circ$) $\downarrow$ & \makecell{Acc.\ \\ @30$^\circ$ $\uparrow$} & Med ($^\circ$) $\downarrow$ & \makecell{Acc.\ \\@30$^\circ$ $\uparrow$} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} Liao et al.\cite{s2reg} & 12.3 & \cellcolor[HTML]{DDDDDD}{0.85} & 30.8 & 0.49 \\ \quad + ABC-S & \cellcolor[HTML]{DDDDDD}{11.0} & 0.79 & \cellcolor[HTML]{DDDDDD}{28.1} & \cellcolor[HTML]{DDDDDD}{0.52} \\ \quad \makecell[l]{ + ABC-SX } & \cellcolor[HTML]{BBBBBB}{9.3} & \cellcolor[HTML]{BBBBBB}{0.87} & \cellcolor[HTML]{BBBBBB}{26.0} & \ \cellcolor[HTML]{BBBBBB}{0.55} \end{tabular} \caption{\small\emph{\textbf{Leveraging pose annotations on synthetic images, wholly unsupervised real images.} ABC-X is effective as an additional loss term when the data consists of annotated synthetic images and unannotated real images. It provided a means to incorporate the latter which helped bridge the domain gap. }} \label{tab:pose_combo} \end{table} ABC-learned embeddings of the digit 9---withheld during training---organized according to stroke thickness and slant (Fig.~\ref{fig:mnist}a), demonstrating generalization of isolated style information across digit classes. In Fig.~\ref{fig:mnist}b-d we retrieved the most similar digits of each class to a set of test digits. Without having to learn a full description of the data, ABC yielded style-informative embeddings orders of magnitude faster than related approaches. \subsection{Pose transfer from synthetic to real images} \label{sec:pose} We next utilized ABC-X for object pose estimation in real images without pose annotations at training time. The goal was the effective isolation of pose information from set-supervised synthetic images, which generalizes to the category level and bridges the synthetic/real domain gap. The ability of ABC-X to handle extraneous active factors of variation in one set allowed the incorporation of unsupervised real images. This significantly extends ABC-X in Sec.~\ref{sec:shapes3d} by introducing active factors of variation which do not exist in the synthetic domain (e.g. lighting effects, occlusions). The learned representations isolated pose, as the only factor actively varying across both sets in each training pair, while suppressing the additional domain-specific factors. We used images of ShapeNet models~\cite{shapenet} from viewpoints randomly distributed over the upper hemisphere~\cite{Suwajanakorn2018}. Images were grouped in sets with their source 3D model inactive (as in set $\set{A}$, Fig.~\ref{fig:teaser}). We gradually incorporated unsupervised real images from the Comp\-Cars~\cite{compcars} and Cars196~\cite{cars196} datasets for the car category, and 1000 images from the Pascal3D+~\cite{pascal3d} training split for chairs. We evaluated on the test split of Pascal3D+. All images were tight-cropped. 
\begin{figure}[t] \centering \includegraphics[width=0.98\columnwidth]{figures/abcx_chars20220328_singlecol.pdf} \caption{\small\emph{\textbf{Retrieval from ABC-X and ResNet-Intermediate.} Given a query image from the Pascal3D+ test set, we display the nearest neighbors in embedding space, from 1800 ShapeNet images and the Pascal3D+ train split. The accuracy and visual diversity of the ABC-X retrievals illustrate effective isolation of pose information generalized across the category and the synthetic/real domain gap.}} \label{fig:synth_retrieval} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/rebuttal_retrieval20220328.pdf} \caption{\small\emph{\textbf{Active factor isolation with real images only.} We trained ABC on the Extended YaleB face dataset with only 28 different identities (\textbf{left}), and the EPFL Multi-View Car dataset with only 20 different car instances (\textbf{right}). We compare to ResNet feature vectors, and display the nearest three neighbors (per method) to the query image out of the remainder of the dataset. Orange outlines for YaleB indicate all active factors exactly match the query image.}} \label{fig:real_retrieval} \end{figure*} The augmentation loss (Sec.~\ref{sec:extensions}) helps bridge the domain gap by removing nuisance factors of variation which could shortcut the task of finding correspondence through pose~\cite{tian2020info}. Images were randomly augmented with cropping, recoloring, and painting the background with random crops from images of ImageNet-A \cite{hendrycks2019nae}, following augmentations used to bridge the synthetic/real domain gap in~\cite{SundermeyerAAE2018,SundermeyerAAE2020}. Images were embedded to $\mathbb{R}^{64}$ using a few layers on top of an ImageNet-pre-trained ResNet50~\cite{resnet}. We used cosine similarity with temperature $\tau=0.1$ (ablations in Supp.). \subsubsection{Mixed-domain pose isolation} \label{sec:pose1} In the first experiment there were no pose annotations, for real nor synthetic images. The learned representations had no sense of absolute pose, but if pose information was successfully isolated then similar representations would have similar pose, regardless of the instance-specific details or domain of the image. To assign a pose estimate to each image of the test set, we found the most similar synthetic image (in representation space) out of a pool of 1800, unseen at training, each with associated ground-truth pose. We compare ABC with the VAE-based approaches of \cite{jha2018disentangling} and \cite{bouchacourt18aaai}, the latent optimization method of~\cite{gabbay2020lord}, and to feature vectors of a pre-trained ResNet (Table~\ref{tab:pose}). We found that an intermediate output (ResNet-Intermediate), though impractical due to its high dimensionality, is a surprisingly effective baseline. While all methods were performant when tested in the synthetic domain (Supp.), most have no means of utilizing the unsupervised real images to bridge the domain gap and consequently performed poorly when tested on real images. Ablative comparisons illustrate the synergy of the components of ABC-X. Applying only the correspondence loss used in a limited setting of video alignment by~\cite{Dwibedi2019} (TCC), we found reasonable performance on the car category but a failure to isolate pose in chairs. Suppressing irrelevant factors from the representations via augmentation without seeking correspondence did not isolate pose for either category. 
The incorporation of real images in ABC-X, ramped linearly to an average of 10\% per set $\set{B}$ by the end of training, boosted performance over ABC. Retrieval examples (Fig.~\ref{fig:synth_retrieval}) qualitatively illustrate the generalization across instance and domain-specific factors of variation. \subsubsection{Boosting cross-domain pose regression} \label{sec:pose2} We next seek to regress the pose of objects in the real domain given pose annotations in the synthetic domain only, assuming synthetic images can be grouped by instance as in Sec.~\ref{sec:pose1}. Starting with the spherical regression framework of~\cite{s2reg} we incorporated ABC-SX to condition an intermediate representation space, as described in Sec.~\ref{sec:extensions}. We trained on a weighted sum of a regression loss on the pose annotations for the synthetic images, and the ABC-SX loss used for Sec.~\ref{sec:pose1} (though with a subset of the augmentations). In principle, any typical supervised pose regression network can be integrated with ABC-S. We specifically used \cite{s2reg} as it has shown superior performance on supervised pose benchmarks, and in particular training with synthetic data (created by RenderForCNN\cite{su15iccv}) mixed with real images. Even without real images during training, ABC-S improved performance by better conditioning the intermediate latent space (Table~\ref{tab:pose_combo}). A further boost for both categories resulted from a small amount of real images (2\%) folded in to ABC-SX gradually over training. Thus ABC-SX can be advantageous in scenarios where there is more supervision available than set supervision, here serving to help bridge the real/synthetic domain gap by encouraging the suppression of factors of variation irrelevant to pose estimation. \subsection{Set supervision in the wild} \label{sec:realimages} We conclude with experiments demonstrating active factor isolation with ABC from real images only. The data is more limited in quantity and plagued by nuisance factors (such as complex backgrounds) than when the training data can incorporate a wealth of synthetic examples. The Extended YaleB Face dataset~\cite{lee2005yaleb} has three dominant factors of variation: face identity (of which there are only 28), face pose, and illumination orientation. We trained the augmentation variant of ABC with illumination as the only active factor, and with both illumination and face pose as the active factors, and compare retrieval results with those from a ResNet feature vector in Fig.~\ref{fig:real_retrieval}. Because both factors have a discrete set of possible values, the retrieval results can match the query's active factors perfectly. We highlight perfect retrieval results in orange; ABC was remarkably successful because it can overlook person identity to find images which match in the active factor(s), something the ResNet representations failed to do. We also trained ABC on the EPFL Multi-View Car dataset~\cite{epfl_car_turntable}, consisting of images of 20 cars on turntables with different rotation speeds, backgrounds, camera focal lengths, and rotation ranges: the hypothetical example from the Introduction. The visual disparity of ABC-retrieval images in Fig.~\ref{fig:real_retrieval} compared to those of ResNet embeddings demonstrates the success of ABC at suppressing the many inactive factors of variation in this challenging dataset. 
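To recap the evaluation protocol of Sec.~\ref{sec:pose1} in code, the nearest-neighbor codebook lookup can be sketched as follows (our illustration, assuming embeddings are stored as NumPy arrays; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def nn_pose_lookup(query_emb, codebook_embs, codebook_poses):
    """Assign to the query the pose of its most similar codebook embedding.

    query_emb:      (D,)     embedding of a real test image
    codebook_embs:  (K, D)   embeddings of K synthetic images (K = 1800 here)
    codebook_poses: (K, ...) ground-truth poses of the codebook images
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = codebook_embs / np.linalg.norm(codebook_embs, axis=1, keepdims=True)
    return codebook_poses[int(np.argmax(c @ q))]  # cosine-similarity lookup
\end{verbatim}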
\subsection{The ABC Algorithm} \noindent \textbf{Setup and notation:} We follow the setup and notation from \cite{Locatello2021}, which uses a latent variable model for the theoretical modeling of self-supervised learning methods. Let us denote an input image as $x$ from the observation space $\set{X}$ and an associated latent code as $z$ from the representational space $\set{Z}$. As per the latent variable model, the observations can be generated from the latent code using an invertible function $x=f(z)$, with $z \sim p_z$. Without loss of generality, we assume that the latent vector $z$ can be partitioned into inactive $z_i$ and active $z_a$ components such that all elements within each set share identical $z_i$. Let $\phi: \set{X} \rightarrow \mathbb{R}^E$ be the function that maps an input image $x$ to an embedding $u=\phi(x)$ in $E$-dimensional space. Our goal is to learn this function so that $u$ may be informative with respect to the active partition $z_a$ of the true underlying latent code $z$. \noindent \textbf{Formation of pairs of sets for training:} We either leverage natural groupings of images or curate images into sets by controlling for certain factors of variation during mini-batch construction, where each mini-batch consists of two such sets. For example, Fig.~\ref{fig:factors} shows example sets with different active and inactive factors of variation curated from the Shapes3D dataset~\cite{shapes3d}. Values for the inactive factors are randomly sampled and held fixed for each set, with the active factors free to vary (Fig.~\ref{fig:factors}a,b). \noindent \textbf{Approach:} Let the pair of sets for a particular mini-batch be given by $\set{A}=\{a_1,\dots,a_n\}$ and $\set{B}=\{b_1,\dots,b_m\}$, respectively. Let us denote the associated embeddings as $\set{U}=\{u_1,\dots,u_n\}$ and $\set{V}=\{v_1,\dots,v_m\}$, where $u_i= \phi(a_i,w)$ and $v_i= \phi(b_i, w)$. Functionally, we parameterize $\phi$ with the same neural network (with weights $w$) for both $\set{A}$ and $\set{B}$. Let $s(u,v)$ denote a similarity metric between points in embedding space, with $s(u,v)=s(v,u)$. To create an end-to-end differentiable loss, we use the soft nearest neighbor~\cite{Goldberger2004,Movshovitz-Attias17,Rocco2018,Snell2017,Dwibedi2019} to establish correspondence. \begin{defn}[\emph{Soft nearest neighbor}] Given a point $u$ and a set of points $\set{V}=\{v_1,\dots,v_m\}$, the soft nearest neighbor of $u$ in the set $\set{V}$ is given by $\tilde{u} = \sum_{j=1}^m \alpha_j v_j$, where $\alpha_j = \frac{\textnormal{exp}(s(u,v_j)/\tau)}{\sum_{k=1}^m \textnormal{exp}(s(u,v_k)/\tau)}$ and $\tau$ is a temperature parameter. \end{defn} We first compute the soft nearest neighbor for each $u_i \in \set{U}$ as $\tilde{u}_i = \sum_{j=1}^m \alpha_j v_j$, with the weights $\alpha_j$ computed for $u=u_i$. A soft bijective correspondence between the two sets is quantified through an InfoNCE loss~\cite{oord2019representation}, averaged over every element in each of the sets. \begin{defn}[\emph{Approximate Bijective Correspondence loss}] The correspondence loss from $\set{U}$ to $\set{V}$ is given by $\mathcal{L}(\set{U}, \set{V})=-\frac{1}{n} \sum_{i=1}^n \textnormal{log} \frac{\textnormal{exp}(s(u_i,\tilde{u}_i)/\tau)}{\sum_{j=1}^n \textnormal{exp}(s(u_j,\tilde{u}_i)/\tau)}$. The full loss is the sum, $\mathcal{L}=\mathcal{L}(\set{U}, \set{V})+\mathcal{L}(\set{V}, \set{U})$. \end{defn} The temperature parameter $\tau$ sets a length scale in embedding space as the natural units for the loss.
It is generally unimportant when using an unbounded similarity metric such as negative Euclidean distance (Supp.). By contrast, a metric like cosine similarity benefits from tuning $\tau$. \noindent \textbf{Double augmentation:} We introduce a modification to the correspondence loss which allows suppression of factors of variation which can be augmented. We assume a group of transforms $H$ is known to leave desired factors of variation unchanged~\cite{Higgins2018,chenAugmentation2020,chen2020simple}. We randomly sample two transforms $h^{(1)},h^{(2)} \in H$ per image per training step. Let $u_i^{(1)}=\phi(h^{(1)}a_i, w)$ and similarly for $u_i^{(2)}$. The soft nearest neighbor is found using $u_i^{(1)}$, and then the correspondence is evaluated using $u_i^{(2)}$. The correspondence loss becomes $\mathcal{L}(\set{U}, \set{V})=-\frac{1}{n} \sum_i^n \textnormal{log} \frac{\textnormal{exp}(s(u_i^{(2)},\tilde{u}_i^{(1)})/\tau)}{\sum_j^n \textnormal{exp}(s(u_j^{(2)},\tilde{u}_i^{(1)})/\tau)}$. The effect is to make the representations $u_i^{(1)}$ and $u_i^{(2)}$ invariant to the augmented factors of variation. In summary, we sample pairs of sets for every mini-batch and learn an embedding network $\phi$ that produces embeddings which minimize the ABC loss through correspondence between elements in the sets. For every element in a set, the soft nearest neighbor serves as the correspondent point in the opposite set. The correspondence loss taken over both sets measures how close the correspondence is to being bijective. \begin{figure*} \centering \includegraphics[width=0.93\textwidth]{figures/shapes3d20220325.pdf} \caption{\small\emph{\textbf{Factor isolation even with one unsupervised set; more factors isolated with larger set sizes during training.} We estimate the mutual information $I(U;G)$ between the learned representations and each of the generative factors using MINE \cite{mine2018}. Error bars display the standard deviation across ten random seeds. The inactive factors during training are indicated by shading. \textbf{(a-c)} We find the isolation of active factors to be unchanged when training with one of the two sets unsupervised (ABC-X). \textbf{(d)} Increasing the set size isolates more of the active factors of variation because finding correspondence requires more discerning power.}} \label{fig:shapes3d_ensemble} \end{figure*} \subsection{Extensions} \label{sec:extensions} The ABC method can be extended to incorporate both fully unsupervised and supervised data. \noindent \textbf{\textit{ABC-X} for incorporating unsupervised data:} Only the active factors of variation common to \textit{both} sets are useful for establishing correspondence. Information about one set's inactive factor of variation cannot help distinguish between elements of that set and therefore cannot help form correspondence with elements of another, even if the factor actively varies in the second set. This has the powerful consequence that ABC can work just as well when one of the sets in each pair is completely unconstrained, as in Figs.~\ref{fig:teaser} and \ref{fig:factors}c. \textit{Wholly unsupervised, and even out-of-domain data with additional active factors, can be utilized.} We denote this version of the method ABC-Extraneous, or \textit{ABC-X}. 
\noindent \textbf{\textit{ABC-S} for incorporating annotated data:} ABC can be organically applied to an intermediate representation space in a network trained with full supervision on a particular factor of variation, by training on a weighted sum of ABC and other losses. If set supervision is available with the supervised factor active, ABC can condition the intermediate representation space by isolating certain factors and suppressing others, and it can also incorporate unsupervised data (see the sketch below). We denote this version of the method ABC-Supervised, or \textit{ABC-S}. \subsection{ABC in the context of contrastive learning} While both ABC and self-supervised learning (SSL) methods such as SimCLR~\cite{chen2020simple} use the InfoNCE loss on positive and negative pairs, a fundamental difference arises from how one acquires the pairs. In SSL a representation space is learned around explicitly provided positive pairs, obtained through augmentations known to affect certain factors while leaving others invariant. In ABC, a representation space is learned which also yields the positive pairs, as they are unknown \textit{a priori} and must be formed by matching nearby embeddings across sets for every evaluation of the loss. ABC finds representations that produce good positive pairs, and does so by isolating the active factors, i.e., style, which would be inaccessible to general SSL methods. ABC can thus be seen as complementary to common SSL methods.
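As a sketch of how the ABC-S combination described above might be set up: the names \texttt{model.embed}, \texttt{model.head}, \texttt{augment} and the weight \texttt{lambda\_abc} are hypothetical, since the text specifies only a weighted sum of ABC and other losses.
\begin{verbatim}
# Hypothetical ABC-S training step: supervised loss on a classification
# head plus a weighted ABC loss on an intermediate embedding.
# Reuses abc_loss and F from the sketch above; augment() is stochastic,
# so calling it twice yields two views of the same set.
def abc_s_step(model, set_a, set_b, labels_a, augment,
               lambda_abc=1.0, tau=1.0):
    u1, u2 = model.embed(augment(set_a)), model.embed(augment(set_a))
    v1, v2 = model.embed(augment(set_b)), model.embed(augment(set_b))
    supervised = F.cross_entropy(model.head(u1), labels_a)
    return supervised + lambda_abc * abc_loss(u1, u2, v1, v2, tau)
\end{verbatim}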
\section{Introduction} \subsection{} The present paper is a companion to \cite{arXiv:1703.05599} and is similarly inspired by the classification results of \cite{MR3600042}; however it can be read independently. We recall the setting. We have a non-archimedean locally compact field $F$ of residue characteristic $p$ and a connected reductive $F$-group $\mathbf G$. We fix a commutative ring $R$ and study the smooth $R$-representations of $G = \mathbf{G}(F)$. In \cite{MR3600042} the irreducible admissible $R$-representations of $G$ are classified in terms of supersingular ones when $R$ is an algebraically closed field of characteristic $p$. That classification is expressed in terms of representations $I_G(P,\sigma,Q)$, which make sense for any $R$. In that notation, $P$ is a parabolic subgroup of $G$ with a Levi decomposition $P=MN$ and $\sigma$ a smooth $R$-representation of the Levi subgroup $M$; there is a largest parabolic subgroup $P(\sigma)$ of $G$ containing $P$ to which $\sigma$, inflated to $P$, extends to a representation $e_{P(\sigma)}(\sigma)$, and $Q$ is a parabolic subgroup of $G$ with $P\subset Q \subset P(\sigma)$. Then $$I_G(P,\sigma,Q)=\Ind_{P(\sigma)}^G(e_{P(\sigma)}(\sigma)\otimes \St_Q^{P(\sigma)})$$ where $\Ind$ stands for parabolic induction and $\St_Q^{P(\sigma)}= \Ind_Q^{P(\sigma)}R/ \sum\Ind_{Q'}^{P(\sigma)}R$, the sum being over parabolic subgroups $Q'$ of $G$ with $Q\subsetneq Q'\subset P(\sigma)$. Alternatively, $I_G(P,\sigma,Q)$ is the quotient of $\Ind_{Q}^G(e_{Q}(\sigma))$ by $ \sum\Ind_{Q'}^{G}e_{Q'}(\sigma)$ with $Q'$ as above, where $e_{Q}(\sigma)$ is the restriction of $e_{P(\sigma)}(\sigma)$ to $Q$, similarly for $Q'$. In \cite{arXiv:1703.05599} we mainly studied what happens to $I_G(P,\sigma,Q)$ when we apply to it, for a parabolic subgroup $P_1$ of $G$, the left adjoint of $\Ind_{P_1}^G$, or its right adjoint. Here we tackle a different question. We fix a pro-$p$ parahoric subgroup $\mathcal{U}$ of $G$ in good position with respect to $P$, so that in particular $\mathcal{U}_M=\mathcal{U}\cap M$ is a pro-$p$ parahoric subgroup of $M$. One of our main goals is to identify the $R$-module $I_G(P,\sigma,Q)^\mathcal{U}$ of $\mathcal{U}$-invariants, as a right module over the Hecke algebra $\mathcal{H}=\mathcal{H}_G$ of $\mathcal{U}$ in $G$ - the convolution algebra on the double coset space $\mathcal{U} \backslash G/\mathcal{U}$ - in terms of the module $\sigma^{\mathcal{U}_M}$ over the Hecke algebra $ \mathcal{H}_M$ of $\mathcal{U}_M$ in $M$. That goal is achieved in section \ref{S:9.0}, Theorem~\ref{thm:main}. \subsection{} The initial work has been done in \cite[\S 4]{arXiv:1703.04921} where $(\Ind_{P }^G\sigma)^\mathcal{U}$ is identified. Precisely, writing $M^+$ for the monoid of elements $m\in M$ with $m(\mathcal{U} \cap N) m^{-1} \subset \mathcal{U} \cap N$, the subalgebra $\mathcal{H}_{M^+}$ of $\mathcal{H}_M$ with support in $M^+$ has a natural algebra embedding $\theta$ into $\mathcal{H}$ and \cite[Proposition 4.4]{arXiv:1703.04921} identifies $(\Ind_{P }^G\sigma)^\mathcal{U}$ with $\Ind_{\mathcal{H}_M}^{\mathcal{H}} (\sigma^{\mathcal{U}_M})=\sigma^{\mathcal{U}_M}\otimes_{\mathcal{H}_{M^+}}\mathcal{H}$. So in a sense, this paper is a sequel to \cite{arXiv:1703.04921} although some of our results here are used in \cite[\S 5]{arXiv:1703.04921}.
As $I_G(P,\sigma,Q)$ is only a subquotient of $\Ind_{P }^G\sigma$ and taking $\mathcal{U}$-invariants is only left exact, it is not straightforward to describe $I_G(P,\sigma,Q)^\mathcal{U}$ from the previous result. However, that takes care of the parabolic induction step, so in a first approach we may assume $P(\sigma)=G$ so that $I_G(P,\sigma,Q)=e_{G}(\sigma)\otimes \St_Q^{G}$. The crucial case is when moreover $\sigma$ is $e$-minimal, that is, not an extension $e_M(\tau)$ of a smooth $R$-representation $\tau$ of a proper Levi subgroup of $M$. That case is treated first, and the general case only in section \ref{S:9.0}. \subsection{}\label{S:1.3} To explain our results, we need more notation. We choose a maximal $F$-split torus $T$ in $G$, a minimal parabolic subgroup $B=ZU$ with Levi component $Z$ the $G$-centralizer of $T$. We assume that $P=MN$ contains $B$ and $M$ contains $Z$, and that $\mathcal{U}$ corresponds to an alcove in the apartment associated to $T$ in the adjoint building of $G$. It turns out that when $\sigma$ is $e$-minimal, the set $\Delta_M$ of simple roots of $T$ in $\Lie (M\cap U)$ is orthogonal to its complement in the set $\Delta$ of simple roots of $T$ in $\Lie U$. We assume until the end of this section \S \ref{S:1.3} that $\Delta_{M}$ and $\Delta_2=\Delta \setminus \Delta_M$ are orthogonal. If $M_2$ is the Levi subgroup - containing $Z$ - corresponding to $\Delta_2$, both $M$ and $M_2$ are normal in $G$, $M\cap M_2=Z$ and $G=MM_2$. Moreover the normal subgroup $M'_2$ of $G$ generated by $N$ is included in $M_2$ and $G=MM'_2$. We say that a right $\mathcal{H}_M$-module $\mathcal{V}$ is extensible to $\mathcal{H}$ if $T_z^M $ acts trivially on $\mathcal{V}$ for $z\in Z\cap M'_2$ (\S \ref{S:8.2}). In this case, we show that there is a natural structure of right $\mathcal{H}$-module $e_{\mathcal{H}}(\mathcal{V})$ on $\mathcal{V}$ such that $T_g\in \mathcal{H}$ corresponding to $\mathcal{U} g \mathcal{U} $ for $g\in M'_2$ acts as it does in the trivial character of $G$ (\S \ref{S:8.1}). We call $e_\mathcal{H}(\mathcal{V})$ the extension of $\mathcal{V}$ to $\mathcal{H}$, even though $\mathcal{H}_M$ is not a subalgebra of $\mathcal{H}$. That notion is already present in \cite{arXiv:1406.1003_accepted} in the case where $R$ has characteristic $p$. Here we extend the construction to any $R$ and prove some more properties. In particular we produce an $\mathcal{H}$-equivariant embedding of $e_\mathcal{H}(\mathcal{V})$ into $\Ind_{\mathcal{H}_M}^{\mathcal{H}} \mathcal{V}$ (Lemma \ref{lemma:sigmaemb}). If $Q$ is a parabolic subgroup of $G$ containing $P$, we go further and put on $e_\mathcal{H}(\mathcal{V})\otimes_R(\Ind_Q^GR)^\mathcal{U}$ and $e_\mathcal{H}(\mathcal{V})\otimes_R(\St_Q^G )^\mathcal{U}$ structures of $\mathcal{H}$-modules (Proposition \ref{lemma:tensor} and Corollary~\ref{cor:VS}) - note that $\mathcal{H}$ is not a group algebra and there is no obvious notion of tensor product of $\mathcal{H}$-modules. If $\sigma$ is an $R$-representation of $M$ extensible to $G$, then its extension $e_G(\sigma)$ is simply obtained by letting $M'_2$ act trivially on the space of $\sigma$; moreover it is clear that $\sigma^{\mathcal{U}_M}$ is extensible to $\mathcal{H}$, and one shows easily that $e_G(\sigma)^\mathcal{U}=e_\mathcal{H}(\sigma^{\mathcal{U}_M})$ as an $\mathcal{H}$-module (\S \ref{S:8.4}).
Moreover, the natural inclusion of $\sigma$ into $\Ind_P^G\sigma$ induces on taking pro-$p$ Iwahori invariants an embedding $e_\mathcal{H}(\sigma^{\mathcal{U}_M})\to (\Ind_{P }^G\sigma)^\mathcal{U}$ which, via the isomorphism of \cite{arXiv:1703.04921}, yields exactly the above embedding of $\mathcal{H}$-modules of $e_\mathcal{H}(\sigma^{\mathcal{U}_M})$ into $\Ind_{\mathcal{H}_M}^{\mathcal{H}} (\sigma^{\mathcal{U}_M})$. Then we show that the $\mathcal{H}$-modules $(e_G(\sigma)\otimes_R\Ind_Q^GR)^\mathcal{U}$ and $e_\mathcal{H}(\sigma^{\mathcal{U}_M})\otimes_R(\Ind_Q^GR)^\mathcal{U}$ are equal, and similarly $(e_G(\sigma)\otimes_R\St_Q^G)^\mathcal{U}$ and $e_\mathcal{H}(\sigma^{\mathcal{U}_M})\otimes_R(\St_Q^G)^\mathcal{U} $ are equal (Theorem~\ref{thm:main8}). \subsection{} \label{S:1.4}We turn back to the general case where we do not assume that $\Delta_{M}$ and $ \Delta \setminus \Delta_M$ are orthogonal. Nevertheless, given a right $\mathcal{H}_M$-module $\mathcal{V}$, there exists a largest Levi subgroup $M(\mathcal{V})$ of $G$ - containing $Z$ - corresponding to $\Delta_M \cup \Delta_1$ where $\Delta_1$ is a subset of $ \Delta \setminus \Delta_M$ orthogonal to $\Delta_M$, such that $\mathcal{V}$ extends to a right $\mathcal{H}_{M(\mathcal{V})}$-module $e_{M(\mathcal{V})}(\mathcal{V})$ with the notation of section \eqref{S:1.3}. For any parabolic subgroup $Q$ between $P$ and $P(\mathcal{V})=M(\mathcal{V})U$ we put (Definition \ref{def:Htriple}) $$I_\mathcal{H}(P,\mathcal{V},Q)= \Ind_{\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H} (e_{M(\mathcal{V})}(\mathcal{V})\otimes_R (\St^{M(\mathcal{V})}_{Q\cap M(\mathcal{V})})^{\mathcal{U}_{M(\mathcal{V})}}).$$ We refer to Theorem \ref{thm:main} for the description of the right $\mathcal{H}$-module $I_G(P,\sigma,Q)^\mathcal{U}$ for any smooth $R$-representation $\sigma$ of $M$. As a special case, it says that when $\sigma$ is $e$-minimal then $P(\sigma)\supset P(\sigma^{\mathcal{U}_M})$ and if moreover $P(\sigma)=P(\sigma^{\mathcal{U}_M})$ then $I_G(P,\sigma,Q)^\mathcal{U}$ is isomorphic to $I_\mathcal{H}(P,\sigma^{\mathcal{U}_M},Q)$. \begin{remark} In \cite{arXiv:1406.1003_accepted} are attached similar $\mathcal{H}$-modules to $(P,\mathcal{V},Q)$; here we write them $CI_\mathcal{H}(P,\mathcal{V},Q)$ because their definition uses, instead of $\Ind_{\mathcal{H}_M}^\mathcal{H}$, a different kind of induction, which we call coinduction. In loc.\ cit.\ those modules are used to give, when $R$ is an algebraically closed field of characteristic $p$, a classification of simple $\mathcal{H}$-modules in terms of supersingular modules - that classification is similar to the classification of irreducible admissible $R$-representations of $G$ in \cite{MR3600042}. Using the comparison between induced and coinduced modules established in \cite[4.3]{MR3437789} for any $R$, our corollary \ref{cor:isoCI} expresses $CI_\mathcal{H}(P,\mathcal{V},Q)$ as a module $I_\mathcal{H}(P_1,\mathcal{V}_1,Q_1)$; consequently we show in \S \ref{S:9.5} that the classification of \cite{arXiv:1406.1003_accepted} can also be expressed in terms of modules $I_\mathcal{H}(P,\mathcal{V},Q)$. \end{remark} \subsection{} In a reverse direction one can associate to a right $\mathcal{H}$-module $\mathcal{V}$ a smooth $R$-representation $\mathcal{V} \otimes_\mathcal{H} R[\mathcal{U}\backslash G]$ of $G$ (seeing $\mathcal{H}$ as the endomorphism ring of the $R[G]$-module $R[\mathcal{U}\backslash G]$).
If $\mathcal{V}$ is a right $\mathcal{H}_M$-module, we construct, again using \cite{arXiv:1703.04921}, a natural $R[G]$-map $$I_\mathcal{H}(P,\mathcal{V},Q)\otimes_\mathcal{H} R[\mathcal{U}\backslash G]\to \Ind_{P(\mathcal{V})}^G (e_{M(\mathcal{V})}(\mathcal{V})\otimes_R \St^{M(\mathcal{V})}_{Q\cap M(\mathcal{V})}), $$ with the notation of \eqref{S:1.4}. We show in \S \ref{S:10} that it is an isomorphism under a mild assumption on the $\mathbb Z$-torsion in $\mathcal{V}$; in particular it is an isomorphism if $p=0$ in $R$. \subsection{} In the final section \S \ref{S:9}, we turn back to the case where $R$ is an algebraically closed field of characteristic $p$. We prove that the smooth dual of an irreducible admissible $R$-representation $V$ of $G$ is $0$ unless $V$ is finite dimensional - that result is new if $F$ has positive characteristic, a case where the proof of Kohlhaase \cite{kohlhasse-smooth-duality} for $\Char (F)=0$ does not apply. Our proof first reduces to the case where $V$ is supercuspidal (by \cite{MR3600042}) then uses again the $\mathcal{H}$-module $V^\mathcal{U}$. \section{Notation, useful facts and preliminaries}\label{S:2} \subsection{The group $G$ and its standard parabolic subgroups $P=MN$}\label{S:2.1} In all that follows, $p$ is a prime number, $F$ is a local field with finite residue field $k$ of characteristic $p$; We denote an algebraic group over $F$ by a bold letter, like $\mathbf H$, and use the same ordinary letter for the group of $F$-points, $H=\mathbf H(F)$. We fix a connected reductive $F$-group $\mathbf G$. We fix a maximal $F$-split subtorus $\mathbf T$ and write $\mathbf Z$ for its $\mathbf G$-centralizer; we also fix a minimal parabolic subgroup $\mathbf B$ of $\mathbf G$ with Levi component $\mathbf Z$, so that $\mathbf B=\mathbf Z \mathbf U$ where $ \mathbf U$ is the unipotent radical of $\mathbf B$. Let $X^*( \mathbf T)$ be the group of $F$-rational characters of $\mathbf T$ and $\Phi$ the subset of roots of $\mathbf T$ in the Lie algebra of $\mathbf G$. Then $ \mathbf B$ determines a subset $\Phi^+$ of positive roots - the roots of $\mathbf T$ in the Lie algebra of $\mathbf U$- and a subset of simple roots $\Delta$. The $\mathbf G$-normalizer $\mathbf N_{\mathbf G}$ of $\mathbf T$ acts on $X^*( \mathbf T)$ and through that action, $\mathbf N_{\mathbf G}/ \mathbf Z$ identifies with the Weyl group of the root system $\Phi$. Set $\mathcal N:=\mathbf N_{\mathbf G}(F)$ and note that $\mathbf N_{\mathbf G}/ \mathbf Z \simeq \mathcal N/Z$; we write $\mathbb W $ for $\mathcal N/Z$. A standard parabolic subgroup of $\mathbf G$ is a parabolic $F$-subgroup containing $\mathbf B$. Such a parabolic subgroup $\mathbf P$ has a unique Levi subgroup $\mathbf M$ containing $\mathbf Z$, so that $\mathbf P=\mathbf M\mathbf N$ where $\mathbf N$ is the unipotent radical of $\mathbf P$ - we also call $\mathbf M$ standard. By a common abuse of language to describe the preceding situation, we simply say ``let $P=MN$ be a standard parabolic subgroup of $G$''; we sometimes write $N_P$ for $N$ and $M_P$ for $M$. The parabolic subgroup of $G$ opposite to $P$ will be written $\overline P$ and its unipotent radical $\overline N$, so that $\overline P=M\overline N$, but beware that $\overline P$ is not standard ! We write $\mathbb W_M$ for the Weyl group $(M\cap\mathcal{N})/Z$. If $\mathbf P=\mathbf M\mathbf N$ is a standard parabolic subgroup of $G$, then $\mathbf M\cap \mathbf B$ is a minimal parabolic subgroup of $\mathbf M$. 
If $\Phi_M$ denotes the set of roots of $\mathbf T$ in the Lie algebra of $\mathbf M$, with respect to $\mathbf M\cap \mathbf B$ we have $\Phi_M^+=\Phi_M\cap \Phi^+ $ and $\Delta_M=\Phi_M\cap \Delta$. We also write $\Delta_P$ for $\Delta_M$ as $P$ and $M$ determine each other, $P=MU$. Thus we obtain a bijection $P\mapsto \Delta_P$ from standard parabolic subgroups of $G$ to subsets of $\Delta$, with $B$ corresponding to $\emptyset$ and $G$ to $\Delta$. If $I$ is a subset of $\Delta$, we sometimes denote by $P_I=M_IN_I$ the corresponding standard parabolic subgroup of $G$. If $I=\{\alpha\}$ is a singleton, we write $P_\alpha=M_\alpha N_\alpha$. We note a few useful properties. If $P_1$ is another standard parabolic subgroup of $G$, then $P\subset P_1$ if and only if $\Delta_P\subset \Delta_{P_1}$; we have $\Delta_{P\cap P_1}= \Delta_P\cap \Delta_{P_1}$ and the parabolic subgroup corresponding to $\Delta_P\cup \Delta_{P_1}$ is the subgroup $\langle P,P_1\rangle$ of $G$ generated by $ P $ and $P_1$. The standard parabolic subgroup of $M$ associated to $\Delta_M \cap \Delta_{M_1}$ is $M\cap P_1=(M\cap M_1)(M\cap N_1)$ \cite[Proposition 2.8.9]{MR794307}. It is convenient to write $G'$ for the subgroup of $G$ generated by the unipotent radicals of the parabolic subgroups; it is also the normal subgroup of $G$ generated by $U$, and we have $G=ZG'$. For future reference, we now give a useful lemma extracted from \cite{MR3600042}: \begin{lemma}\label{lemma:ZG'}The group $Z\cap G'$ is generated by the $Z\cap M'_\alpha$, $\alpha $ running through $\Delta $. \end{lemma} \begin{proof} Take $I = \emptyset$ in \cite[II.6.Proposition]{MR3600042}. \end{proof} Let $v_F$ be the normalized valuation of $F$. For each $\alpha\in X^*(T)$, the homomorphism $x\mapsto v_F(\alpha(x)):T\to\mathbb Z$ extends uniquely to a homomorphism $Z\to\mathbb Q$ that we denote in the same way. This defines a homomorphism $Z\xrightarrow{v} X_*(T)\otimes \mathbb Q$ such that $\alpha(v(z))= v_F(\alpha(z))$ for $z\in Z, \alpha \in X^*(T)$. \bigskip An interesting situation occurs when $\Delta=I\sqcup J$ is the union of two orthogonal subsets $I$ and $J$. In that case, $G'=M'_IM'_J$, $M'_I$ and $M'_J$ commute with each other, and their intersection is finite and central in $G$ \cite[II.7 Remark 4]{MR3600042}. \subsection{$I_G(P,\sigma, Q)$ and minimality}\label{S:2.22} We recall from \cite{MR3600042} the construction of $I_G(P,\sigma,Q)$, our main object of study. Let $\sigma$ be an $R$-representation of $M$ and $P(\sigma)$ be the standard parabolic subgroup with \[ \Delta_{P(\sigma)}=\{\alpha \in \Delta\setminus \Delta_P\ | \ Z\cap M'_\alpha \ \text{acts trivially on } \ \sigma\}\cup \Delta_P. \] This is the largest parabolic subgroup $P(\sigma)$ containing $P$ to which $\sigma$ extends; here $N\subset P$ acts on $\sigma$ trivially. Clearly when $P\subset Q \subset P(\sigma)$, $\sigma$ extends to $Q$ and the extension is denoted by $e_Q(\sigma)$. The restriction of $e_{P(\sigma)}(\sigma)$ to $Q$ is $e_Q(\sigma)$. If there is no risk of ambiguity, we write \[ e(\sigma)=e_{P(\sigma)}(\sigma). \] \begin{definition}\label{def:Gtriple} An {\bf $R[G]$-triple} is a triple $(P,\sigma,Q)$ made out of a standard parabolic subgroup $P=MN$ of $G$, a smooth $R$-representation $\sigma$ of $M$, and a parabolic subgroup $Q$ of $G$ with $P\subset Q \subset P(\sigma)$.
To an $R[G]$-triple $(P,\sigma,Q)$ is associated a smooth $R$-representation of $G$: $$I_G(P,\sigma,Q) =\Ind_{P(\sigma)}^G(e(\sigma)\otimes \St_Q^{P(\sigma)})$$ where $\St_Q^{P(\sigma)}$ is the quotient of $ \Ind_Q^{P(\sigma)}\charone$, $\charone$ denoting the trivial $R$-representation of $Q$, by the sum of its subrepresentations $\Ind_{Q'}^{P(\sigma)}\charone$, the sum being over the set of parabolic subgroups $Q'$ of $G$ with $Q\subsetneq Q'\subset P(\sigma)$. \end{definition} Note that $I_G(P,\sigma,Q) $ is naturally isomorphic to the quotient of $\Ind_{Q}^G(e_Q(\sigma))$ by the sum of its subrepresentations $\Ind_{Q'}^G(e_{Q'}(\sigma))$ for $Q\subsetneq Q'\subset P(\sigma)$ by Lemma 2.5. \bigskip It might happen that $\sigma$ itself has the form $e_{P}(\sigma_1)$ for some standard parabolic subgroup $P_1=M_1N_1$ contained in $P$ and some $R$-representation $\sigma_1$ of $M_1$. In that case, $P(\sigma_1)=P(\sigma)$ and $e(\sigma)=e(\sigma_1)$. We say that $\sigma$ is {\bf $e$-minimal} if $\sigma=e_{P}(\sigma_1)$ implies $P_1=P, \sigma_1=\sigma$. \begin{lemma}[{\cite[Lemma~2.9]{arXiv:1703.05599}}] \label{lemma:min}Let $P=MN $ be a standard parabolic subgroup of $G$ and let $\sigma$ be an $R$-representation of $M$. There exists a unique standard parabolic subgroup $P_{\min,\sigma}=M_{\min,\sigma}N_{\min,\sigma}$ of $G$ and a unique $e$-minimal representation $\sigma_{\min}$ of $M_{\min,\sigma}$ with $\sigma= e_P(\sigma_{\min})$. Moreover $P(\sigma)=P(\sigma_{\min})$ and $e(\sigma)=e(\sigma_{\min})$. \end{lemma} \begin{lemma} \label{lemma:2.2}Let $P=MN $ be a standard parabolic subgroup of $G$ and $\sigma$ an $e$-minimal $R$-representation of $M$. Then $\Delta_P$ and $\Delta_{P(\sigma)}\setminus \Delta_P$ are orthogonal. \end{lemma} That comes from \cite[II.7 Corollary~2]{MR3600042}. That corollary of loc.\ cit.\ also shows that when $R$ is a field and $\sigma$ is supercuspidal, then $\sigma$ is $e$-minimal. Lemma \ref{lemma:2.2} shows that $\Delta_{P_{\min,\sigma}}$ and $\Delta_{P(\sigma_{\min})}\setminus\Delta_{P_{\min,\sigma}}$ are orthogonal. Note that when $\Delta_P$ and $\Delta_\sigma$ are orthogonal with union $\Delta=\Delta_P\sqcup \Delta_\sigma$, then $G=P(\sigma)= M M'_{ \sigma}$ and $e(\sigma) $ is the $R$-representation of $G$ simply obtained by extending $\sigma$ trivially on $M'_{ \sigma}$. \begin{lemma}[{\cite[Lemma~2.11]{arXiv:1703.05599}}] \label{lemma:2.3} Let $(P,\sigma,Q)$ be an $R[G]$-triple. Then $(P_{\min,\sigma},\sigma_{\min},Q)$ is an $R[G]$-triple and $I_G(P,\sigma,Q) =I_G(P_{\min,\sigma},\sigma_{\min},Q)$. \end{lemma} \subsection{Pro-$p$ Iwahori Hecke algebras}\label{S:2.3} We fix a special parahoric subgroup $\mathcal K$ of $G$ fixing a special vertex $x_0$ in the apartment $\mathcal A$ associated to $T$ in the Bruhat-Tits building of the adjoint group of $G$. We let $\mathcal B$ be the Iwahori subgroup fixing the alcove $\mathcal C$ in $\mathcal A$ with vertex $x_0$ contained in the Weyl chamber (of vertex $x_0$) associated to $B$. We let $\mathcal U$ be the pro-$p$ radical of $\mathcal B$ (the pro-$p$ Iwahori subgroup). The pro-$p$ Iwahori Hecke ring $\mathcal{H}=\mathcal{H}(G,\mathcal U) $ is the convolution ring of compactly supported functions $G\to \mathbb Z$ constant on the double classes of $G$ modulo $\mathcal U$. We denote by $T(g)$ the characteristic function of $\mathcal{U} g \mathcal{U}$ for $g\in G$, seen as an element of $\mathcal{H}$. Let $R$ be a commutative ring.
The pro-$p$ Iwahori Hecke $R$-algebra $\mathcal{H}_{R}$ is $R\otimes_{\mathbb Z}\mathcal{H}$. We will follow the custom to still denote by $h$ the natural image $1\otimes h$ of $h\in \mathcal{H}$ in $\mathcal{H}_R$. For $P=MN$ a standard parabolic subgroup of $G$, the similar objects for $M $ are indexed by $M$: we have $\mathcal K_M=\mathcal K\cap M, \mathcal B_M=\mathcal B\cap M, \mathcal U_M=\mathcal U\cap M$, the pro-$p$ Iwahori Hecke ring $\mathcal{H}_M=\mathcal{H}(M,\mathcal{U}_M) $, $T^M(m)$ the characteristic function of $\mathcal{U}_M m \mathcal{U}_M$ for $m\in M$, seen as an element of $\mathcal{H}_M $. The pro-$p$ Iwahori group $\mathcal{U}$ of $G$ satisfies the Iwahori decomposition with respect to $P$: $$\mathcal U= \mathcal{U}_N\mathcal U_M\mathcal U_{\overline N} ,$$ where $\mathcal{U}_N= \mathcal{U} \cap N, \mathcal U_{\overline N}=\mathcal U\cap \overline N$. The linear map \begin{equation}\label{eq:theta} \mathcal{H}_{M }\xrightarrow{\theta} \mathcal{H}, \quad \theta(T^{M}(m))=T(m) \quad (m\in M)\end{equation} does not respect the product. But if we introduce the monoid $M^+$ of elements $m\in M$ contracting $\mathcal U_N$, meaning $m\mathcal U_N m^{-1}\subset \mathcal U_N $, and the submodule $ \mathcal{H}_{M^+ }\subset \mathcal{H}_M$ of functions with support in $M^+$, we have \cite[Theorem~1.4]{MR3437789}: \bigskip {\sl $ \mathcal{H}_{M^+ }$ is a subring of $\mathcal{H}_M$ and $\mathcal{H}_M$ is the localization of $ \mathcal{H}_{M^+ }$ at an element $\tau^M\in \mathcal{H}_{M^+}$ central and invertible in $\mathcal{H}_M$, meaning $\mathcal{H}_M= \cup_{n\in \mathbb N} \mathcal{H}_{M^+}(\tau^M)^{-n}$. The map $ \mathcal{H}_{M }\xrightarrow{\theta} \mathcal{H}$ is injective and its restriction $\theta|_{ \mathcal{H}_{M^+ }}$ to $\mathcal{H}_{M^+ }$ respects the product. These properties are also true when $(M^+, \tau^M)$ is replaced by its inverse $(M^-, (\tau^M)^{-1})$ where $M^-=\{m^{-1}\in M \ | \ m\in M^+\}$.} \section{Pro-$p$ Iwahori invariants of $I_G(P,\sigma,Q)$}\label{S:8} \subsection{Pro-$p$ Iwahori Hecke algebras: structures} We supplement here the notations of \S \ref{S:2.1} and \S \ref{S:2.3}. The subgroups $Z^0=Z\cap \mathcal K= Z\cap \mathcal B$ and $Z^1=Z\cap \mathcal U$ are normal in $\mathcal N$ and we put $$W= \mathcal N/ Z^0, \ W(1)= \mathcal N/ Z^1,\ \Lambda= Z/ Z^0, \ \Lambda(1)= Z/ Z^1, \ Z_k=Z^0/Z^1.$$ We have $\mathcal N = (\mathcal N\cap \mathcal K) Z$ so that we see the finite Weyl group $\mathbb W=\mathcal N/Z$ as the subgroup $(\mathcal N\cap \mathcal K)/Z^0$ of $W$; in this way $W$ is the semi-direct product $\Lambda \rtimes \mathbb W$. The image $W_{G'}=W'$ of $\mathcal N\cap G'$ in $W$ is an affine Weyl group generated by the set $S^{\mathrm{aff}}$ of affine reflections determined by the walls of the alcove $\mathcal C$. The group $W'$ is normal in $W$ and $W$ is the semi-direct product $W'\rtimes \Omega$ where $\Omega$ is the image in $W$ of the normalizer $\mathcal N_{\mathcal C}$ of $\mathcal C$ in $\mathcal N$. The length function $\ell$ on the affine Weyl system $(W',S^{\mathrm{aff}})$ extends to a length function on $W$ such that $\Omega$ is the set of elements of length $0$. We also view $\ell$ as a function on $W(1)$ via the quotient map $W(1)\to W$. We write \begin{equation}\label{eq:hattildew}(\hat w,\tilde w, w)\in \mathcal N\times W(1) \times W \ \text{corresponding via the quotient maps } \ \mathcal N\to W(1)\to W.
\end{equation} When $w=s$ in $S^{\mathrm{aff}}$ or more generally $w$ in $W_{G'}$, we will most of the time choose $\hat w$ in $ \mathcal N \cap G' $ and $\tilde w$ in the image ${}_1W_{G'}$ of $\mathcal N\cap G'$ in $W(1)$. \bigskip We are now ready to describe the pro-$p$ Iwahori Hecke ring $\mathcal{H}=\mathcal{H}(G,\mathcal{U})$ \cite{MR3484112}. We have $G=\mathcal{U} \mathcal N \mathcal{U}$ and for $n,n'\in \mathcal N$ we have $\mathcal{U} n \mathcal{U}=\mathcal{U} n' \mathcal{U}$ if and only if $nZ^1=n'Z^1$. For $n\in \mathcal N$ of image $w\in W(1)$ and $g\in \mathcal{U} n \mathcal{U}$ we denote $T_w=T(n)=T(g)$ in $\mathcal{H}$. The relations among the basis elements $(T_w)_{w\in W(1)}$ of $\mathcal{H}$ are: \begin{enumerate} \item Braid relations : $T_w T_{w'}=T_{ww'}$ for $w,w'\in W(1)$ with $ \ell(ww')=\ell(w)+\ell(w')$. \item Quadratic relations : $T_{\tilde s}^2= q_sT_{\widetilde{s}^2} + c_{\tilde s} T_{\tilde s}$ \end{enumerate} for $\tilde s\in W(1)$ lifting $s\in S^{\mathrm{aff}}$, where $q_s=q_G(s)=|\mathcal{U}/\mathcal{U}\cap \hat s \mathcal{U} (\hat s)^{-1}|$ depends only on $s$, and $ c_{\tilde s}=\sum_{t\in Z_k} c_{\tilde s}(t) T_t$ for integers $c_{\tilde s}(t)\in \mathbb N$ summing to $q_s-1$. We shall need the basis elements $(T^*_w)_{w\in W(1)}$ of $\mathcal{H}$ defined by: \begin{enumerate} \item $T^*_w=T_w$ for $w \in W(1)$ of length $\ell(w)=0$. \item $T^*_{\tilde s}=T_{\tilde s}- c_{\tilde s}$ for $\tilde s\in W(1)$ lifting $s\in S^{\mathrm{aff}}$. \item $T^*_{ww'}=T^*_wT^*_{w'}$ for $w,w'\in W(1)$ with $ \ell(ww')=\ell(w)+\ell(w')$. \end{enumerate}\bigskip We need more notation for the definition of the admissible lifts of $S^{\mathrm{aff}}$ in $\mathcal N_G$. Let $s\in S^{\mathrm{aff}}$ fixing a face $\mathcal C_s$ of the alcove $\mathcal C$ and $\mathcal K_s$ the parahoric subgroup of $G$ fixing $\mathcal C_s$. The theory of Bruhat-Tits associates to $\mathcal C_s$ a certain root $\alpha_s\in \Phi^+$ \cite[\S 4.2]{MR3484112}. We consider the group $G'_s$ generated by $U_{\alpha_s}\cup U_{-\alpha_s}$ where $U_{\pm \alpha_s}$ the root subgroup of $\pm \alpha_s$ (if $2 \alpha_s\in \Phi$, then $U_{2 \alpha_s}\subset U_{ \alpha_s}$) and the group $\mathcal G'_s$ generated by $\mathcal{U}_{\alpha_s} \cup \,\mathcal{U}_{-\alpha_s}$ where $\mathcal{U}_{\pm \alpha_s}= U_{\pm \alpha_s}\cap \mathcal K_s$. When $u\in \mathcal{U}_{ \alpha_s}-\{1\}$, the intersection $\mathcal N_G \cap \mathcal{U}_{-\alpha_s}u \,\mathcal{U}_{-\alpha_s}$ (equal to $\mathcal N_G \cap U_{-\alpha_s}u U_{-\alpha_s}$ \cite[6.2.1 (V5)]{MR0327923} \cite[\S 3.3 (19)]{MR3484112}) possesses a single element $n_s(u)$. The group $Z'_s=Z\cap \mathcal G'_s$ is contained in $Z\cap \mathcal K_s=Z^0$; its image in $Z_k$ is denoted by $Z'_{k,s}$. The elements $n_s(u)$ for $u\in \mathcal{U}_{ \alpha_s}-\{1\}$ are the admissible lifts of $s$ in $\mathcal N_G$; their images in $W(1)$ are the admissible lifts of $s$ in $W(1)$. By \cite[Theorem 2.2, Proposition 4.4]{MR3484112}, when $\tilde s\in W(1)$ is an admissible lift of $s$, $c_{\tilde s}(t)=0$ if $t\in Z_k\setminus Z'_{k,s}$, and \begin{equation}\label{eq:cs} c_{\tilde s}\equiv (q_s-1)|Z'_{k,s}|^{-1}\sum _{t\in Z'_{k,s}} T_t \quad \mod p. \end{equation} The admissible lifts of $ S$ in $\mathcal N_G$ are contained in $\mathcal N_G \cap \mathcal K$ because $\mathcal K_s \subset \mathcal K$ when $s\in S$. 
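For orientation, we record the most familiar instance of these relations (a standard rank-one computation that we do not reproduce here and do not use in the sequel): for $G=\mathrm{SL}_2(F)$ with residue field $k$ of cardinality $q$, one has $q_s=q$ and $Z_k= Z'_{k,s}\simeq k^\times$ for each $s\in S^{\mathrm{aff}}$, and for an admissible lift $\tilde s$ the quadratic relation takes the form $$T_{\tilde s}^2= q\,T_{\widetilde{s}^2} + \big(\sum_{t\in Z_k} T_t\big)\, T_{\tilde s},$$ all coefficients $c_{\tilde s}(t)$ being equal to $1$; this is consistent with \eqref{eq:cs} since $q_s-1=|Z'_{k,s}|$.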
\begin{definition}\label{defn:admissible_lift} An admissible lift of the finite Weyl group $\mathbb W$ in $\mathcal N_G $ is a map $w\mapsto \hat w:\mathbb W\to \mathcal N_G \cap \mathcal K$ such that $\hat s$ is admissible for all $s\in S$ and $\hat w=\hat w_1 \hat w_2$ for $w_1,w_2\in \mathbb W$ such that $w=w_1 w_2$ and $\ell(w)=\ell(w_1)+\ell(w_2)$. \end{definition} Any choice of admissible lifts of $S $ in $\mathcal N_G\cap \mathcal K$ extends uniquely to an admissible lift of $\mathbb W$ (\cite[IV.6]{MR3600042}, \cite[Proposition 2.7]{arXiv:1703.04921}). \bigskip Let $P=MN$ be a standard parabolic subgroup of $G$. The groups $Z, Z^0=Z\cap \mathcal K_M=Z\cap \mathcal B_M, Z^1=Z\cap \mathcal U_M$ are the same for $G$ and $M$, but $\mathcal N _M= \mathcal N\cap M$ and $M\cap G'$ are subgroups of $ \mathcal N$ and $G'$. The monoid $M^+$ (\S \ref{S:2.3}) contains $(\mathcal N _M\cap \mathcal K)$ and is equal to $M^+=\mathcal{U}_M \mathcal N _{M^+}\mathcal{U}_M$ where $\mathcal N _{M^+}= \mathcal N\cap M^+$. An element $z\in Z$ belongs to $M^+$ if and only if $v_F(\alpha(z))\geq 0$ for all $\alpha \in \Phi^+\setminus \Phi_M^+$ (see \cite[Lemme 2.2]{MR3437789}). Put $W_M = \mathcal{N}_M/Z^0$ and $W_M(1) = \mathcal{N}_M/Z^1$. Let $\epsilon=+$ or $\epsilon=-$. We denote by $W_{M^\epsilon}, W_{M^\epsilon}(1)$ the images of $\mathcal N _{M^\epsilon}$ in $W_M, W_M(1)$. We see the groups $W_M, W_M(1), {}_1W_{M'}$ as subgroups of $W, W(1), {}_1W_{G'}$. As for $\theta$ (\S \ref{S:2.3}), the linear injective map \begin{equation}\label{eq:theta*} \mathcal{H}_{M }\xrightarrow{\theta^*} \mathcal{H}, \quad \theta^*(T_w^{M,*})= T^*_w, \quad (w\in W_M(1)), \end{equation} respects the product on the subring $ \mathcal{H}_{M^\epsilon } $. Note that $\theta$ and $\theta^*$ satisfy the obvious transitivity property with respect to a change of parabolic subgroups. \subsection{Orthogonal case}\label{S:8.0} Let us examine the case where $\Delta_M$ and $\Delta\setminus \Delta_M$ are orthogonal, writing $M_2=M_{\Delta\setminus \Delta_M}$ as in \S \ref{S:1.3}. From $M\cap M_2=Z$ we get $W_M\cap W_{M_2}=\Lambda, W_M(1)\cap W_{M_2}(1)= \Lambda(1)$, the semisimple building of $G$ is the product of those of $M$ and $M_2$ and $S^{\mathrm{aff}}$ is the disjoint union of $S_M^{\mathrm{aff}}$ and $S_{M_2}^{\mathrm{aff}}$, the group $W_{G'}$ is the direct product of $W_{M'}$ and $W_{M_2'}$. For $\tilde s\in W_M(1)$ lifting $s\in S_M^{\mathrm{aff}}$, the elements $T^M_{\tilde s}\in \mathcal{H}_M$ and $T_{\tilde s}\in \mathcal{H}$ satisfy the same quadratic relations. A word of caution: the lengths $\ell_M$ of $W_M $ and $\ell_{M_2}$ of $W_{M_2} $ differ from the restrictions of the length $\ell$ of $W$; for example $\ell_M(\lambda)=0$ for $\lambda \in \Lambda \cap W_{M_2'}$. \begin{lemma}We have $\Lambda =(W_{M^\epsilon} \cap \Lambda) (W_{M_2'}\cap \Lambda)$. \end{lemma} \begin{proof} We prove the lemma for $\epsilon=-$. The case $\epsilon=+$ is similar. The map $v:Z\to X_*(T)\otimes \mathbb Q$ defined in \S \ref{S:2.1} is trivial on $Z^0$ and we also write $v$ for the resulting homomorphism on $\Lambda$. For $\lambda \in \Lambda$ there exists $\lambda_2\in W_{M'_2}\cap \Lambda $ such that $\lambda \lambda_2\in W_{M^-}$, or equivalently $\alpha (v(\lambda \lambda_2))\leq 0$ for all $\alpha \in \Phi^+\setminus \Phi_M^+ =\Phi_{M_2}^+$. It suffices to have the inequality for $\alpha \in \Delta_{M_2}$.
The matrix $(\alpha(\beta^\vee))_{\alpha, \beta\in \Delta_{M_2}}$ is invertible, hence there exist $n_\beta \in \mathbb Z$ such that $\sum_{\beta \in \Delta_{M_2}} n_\beta \alpha(\beta^\vee) \leq - \alpha (v(\lambda))$ for all $\alpha \in \Delta_{M_2}$. As $v (W_{M'_2}\cap \Lambda)$ contains $ \oplus _{\alpha \in \Delta_{M_2}} \mathbb Z \alpha^\vee$ where $\alpha^\vee$ is the coroot of $\alpha$ \cite[after formula (71)]{MR3484112}, there exists $\lambda_2\in W_{M'_2}\cap \Lambda$ with $v(\lambda_2) = \sum_{\beta\in\Delta_{M_2}}n_\beta\beta^\vee$. \end{proof} The groups $\mathcal N\cap M'$ and $ \mathcal N\cap M_2'$ are normal in $\mathcal N$, and $\mathcal N= (\mathcal N\cap M') \mathcal N_{\mathcal C}( \mathcal N\cap M_2') =Z (\mathcal N\cap M')( \mathcal N\cap M_2')$, and $$W= W_{M'}\Omega W_{M_2'}=W_MW_{M_2'}=W_{M^+} W_{M_2'}= W_{M^-} W_{M_2'}$$ The first two equalities are clear, the equality $W_MW_{M_2'}=W_{M^\epsilon} W_{M_2'}$ follows from $W_M= \mathbb W_M \Lambda $, $\mathbb W_M \subset W_{M^\epsilon}$ and the lemma. The inverse image in $W(1)$ of these groups are \begin{equation}\label{Wdec}W(1)= {}_1W_{M'}\, \Omega(1)\, {}_1W_{M_2'}= W_M(1) \, {}_1W_{M_2'}= W_{M^+}(1) \, {}_1W_{M_2'}= W_{M^-}(1) \, {}_1W_{M_2'}. \end{equation} We recall the function $q_G(n)=q(n)= | \mathcal U /( \mathcal U \cap n^{-1}\mathcal U n)|$ on $\mathcal N$ \cite[Proposition 3.38]{MR3484112} and we extend to $\mathcal N$ the functions $ q_M$ on $\mathcal N\cap M$ and $q_{M_2}$ on $\mathcal N\cap M_2$: \begin{equation}\label{eq:q} q_{M}(n) = | \mathcal U_{M}/( \mathcal U_{M}\cap n^{-1}\mathcal U_{M} n)| ,\quad q_{M_2}(n) = | \mathcal U_{M_2}/( \mathcal U_{M_2}\cap n^{-1}\mathcal U_{M_2} n)| . \end{equation} The functions $q, q_M, q_{M_2}$ descend to functions on $W(1) $ and on $W$, also denoted by $q, q_M, q_{M_2}$. \begin{lemma} \label{lemma:0} Let $n\in \mathcal N$ of image $w\in W$. We have \begin{enumerate} \item $q(n)=q_M(n) q_{M_2}(n)$. \item $q_M(n)=q_M(n_M)$ if $n=n_Mn_2$, $n_M\in \mathcal N \cap M, n_2\in \mathcal N \cap M'_2$ and similarly when $M$ and $M_2$ are permuted. \item $ q( w)=1 \Leftrightarrow q_M(\lambda w_{M})=q_{M_2}(\lambda w_{M_2})=1 $, if $w= \lambda w_{M} w_{M_2}, (\lambda, w_{M}, w_{M_2})\in \Lambda\times \mathbb W_{M}\times \mathbb W_{M_2}$. \item On the coset $(\mathcal N \cap M'_2) \mathcal N_{\mathcal C} n$, $q_M$ is constant equal to $q_{M}(n_{M'})$ for any element $n_{M'}\in M' \cap (\mathcal N \cap M'_2) \mathcal N_{\mathcal C} n$. A similar result is true when $M$ and $M_2$ are permuted. \end{enumerate}\end{lemma} \begin{proof} The product map \begin{equation}\label{eq:U}Z^1 \prod_{\alpha \in \Phi_{M,red}} \mathcal U_{\alpha} \prod_{\alpha \in \Phi_{M_2,red}} \mathcal U_{\alpha}\to \mathcal U \end{equation} with $ \mathcal U_{\alpha}= U_{\alpha}\cap \mathcal U $, is a homeomorphism. We have $\mathcal U_{M }=Z^1 \mathcal Y_{M' }$, $\mathcal U_{M'}= (Z^1\cap M') \mathcal Y_{M' }$ where $\mathcal Y_{M'}=\prod_{\alpha \in \Phi_{M,red}} \mathcal U_{\alpha} $ and $\mathcal N \cap M'_2$ normalizes $ \mathcal Y_{M'} $. Similar results are true when $M$ and $M_2$ are permuted, and $\mathcal{U}=\mathcal{U}_{M'}\mathcal{U}_{M_2}= \mathcal{U}_{M}\mathcal{U}_{M'_2}$. Writing $\mathcal N=Z (\mathcal N\cap M')( \mathcal N\cap M_2')$ (in any order), we see that the product map \begin{equation}\label{eq:nUn} Z^1 ( \mathcal Y_{M'}\cap n^{-1}\mathcal Y_{M'} n )(\mathcal Y_{M'_2}\cap n^{-1}\mathcal Y_{M'_2} n)\to \mathcal U \cap n^{-1}\mathcal U n \end{equation} is an homeomorphism. 
The inclusions induce bijections \begin{equation}\label{eq:UY}\mathcal Y_{M'}/( \mathcal Y_{M'}\cap n^{-1}\mathcal Y_{M'} n)\simeq \mathcal U_{M' }/ (\mathcal U_{M' }\cap n^{-1}\mathcal U_{M'} n)\simeq \mathcal U_{M}/( \mathcal U_{M}\cap n^{-1}\mathcal U_{M} n) , \end{equation} similarly for $M_2$, and also a bijection \begin{align}\label{eq:UnU}\mathcal U / (\mathcal U \cap n^{-1}\mathcal U n)&\simeq \mathcal Y_{M'_2}/ (\mathcal Y_{M'_2}\cap n^{-1}\mathcal Y_{M'_2} n)\times (\mathcal Y_{M'}/( \mathcal Y_{M'}\cap n^{-1}\mathcal Y_{M'} n). \end{align} The assertion (1) in the lemma follows from \eqref{eq:UY}, \eqref{eq:UnU}. The assertion (2) follows from \eqref{eq:nUn}; it implies the assertion (3). A subgroup of $\mathcal N$ normalizes $\mathcal{U}_M$ if and only if it normalizes $\mathcal Y_{M'}$ by \eqref{eq:UY} if and only if $q_M=1$ on this group. The group $\mathcal N \cap M'_2 $ normalizes $\mathcal Y_{M'}$ because the elements of $M'_2$ commute with those of $M'$ and $q_M$ is trivial on $\mathcal N_{\mathcal C} $ by (2). Therefore the group $(\mathcal N \cap M'_2) \mathcal N_{\mathcal C} $ normalizes $\mathcal{U}_M$. The coset $ (\mathcal N \cap M'_2) \mathcal N_{\mathcal C} n$ contains an element $n_{M'}\in M'$. For $x\in (\mathcal N \cap M'_2) \mathcal N_{\mathcal C}$, $ (xn_{M'})^{-1} \mathcal{U} xn_{M'}= n_{M'}^{-1} \mathcal{U} n_{M'}$ hence $q_{M}(xn_{M'})=q_{M}(n_{M'})$. \end{proof} \subsection{Extension of an $\mathcal{H}_M$-module to $\mathcal{H}$}\label{S:8.2} This section is inspired by similar results for the pro-$p$ Iwahori Hecke algebras over an algebraically closed field field of characteristic $p$~\cite[Proposition 4.16]{arXiv:1406.1003_accepted}. We keep the setting of \S \ref{S:8.0} and we introduce ideals: \begin{itemize} \item $\mathcal J_\ell$ (resp.\ $\mathcal J_r$) the left (resp.\ right) ideal of $\mathcal{H}$ generated by $T_w^*-1_\mathcal{H}$ for all $w\in {}_1W_{M'_2}$, \item $\mathcal J_{M,\ell}$ (resp.\ $\mathcal J_{M,r}$) the left (resp.\ right) ideal of $\mathcal{H}_M$ generated by $T_\lambda^{M,*}-1_{\mathcal{H}_M}$ for all $\lambda$ in $ {}_1W_{M'_2}\cap W_M(1) = {}_1W_{M'_2}\cap \Lambda(1)$. \end{itemize} \noindent The next proposition shows that the ideals $ \mathcal J_\ell= \mathcal J_{r}$ are equal and similarly $ \mathcal J_{M, \ell}= \mathcal J_{M,r} $. After the proposition, we will drop the indices $\ell$ and $r$. \begin{proposition}\label{prop:ideal} The ideals $ \mathcal J_\ell$ and $\mathcal J_{r}$ are equal to the submodule $\mathcal J'$ of $\mathcal{H}$ generated by $T^*_w- T^*_{ww_2}$ for all $w\in W(1)$ and $w_2\in {}_1W_{M'_2}$. The ideals $ \mathcal J_{M, \ell}$ and $ \mathcal J_{M,r} $ are equal to the submodule $\mathcal J'_M$of $\mathcal{H}_M$ generated by $T^{M,*}_w- T^{M,*}_{w\lambda_2}$ for all $w\in W_M(1)$ and $\lambda_2\in \Lambda (1) \cap {}_1W_{M'_2} $. \end{proposition} \begin{proof} (1) We prove $\mathcal J_{\ell} = \mathcal J'$. Let $w\in W(1), w_2\in {}_1W_{M'_2}$. We prove by induction on the length of $w_2$ that $T^*_w(T^*_{w_2}-1)\in \mathcal J'$. This is obvious when $\ell(w_2)=0$ because $T^*_wT^*_{w_2}=T^*_ {ww_2}$. Assume that $\ell(w_2)=1$ and put $s=w_2$. If $\ell(ws)=\ell(w)+1$, as before $T^*_w(T^*_{s}-1)\in \mathcal J'$ because $T^*_wT^*_{s}=T^*_ {ws}$. Otherwise $\ell(ws)=\ell(w)-1$ and $T^*_w =T^*_{ws^{-1}}T_s^*$ hence $$T^*_w(T^*_{s}-1)=T^*_{ws^{-1}}(T^*_{s})^2-T^*_w=T^*_{ws^{-1}}(q_s T^*_{s^2} - T^*_{s} c_s)-T^*_w= q_s T^*_{ws } - T^*_w (c_s+1). 
$$ Recalling from \ref{S:2.3} that $c_s+1= \sum_{t\in Z'_k} c_s(t)T_t$ with $c_s(t)\in \mathbb N$ and $\sum_{t\in Z'_k} c_s(t)=q_s$, $$q_s T^*_{ws } - T^*_w (c_s+1)= \sum_{t\in Z'_k} c_s(t)(T^*_{ws } - T^*_wT^*_t)= \sum_{t\in Z'_k} c_s(t)(T^*_{ws } - T^*_{ws s^{-1}t}) \in \mathcal J'. $$ Assume now that $\ell(w_2)>1$. Then, we factorize $w_2=xy$ with $x,y \in {}_1W_{M_2}$ of length $\ell(x), \ell(y)< \ell(w_2)$ and $ \ell(w_2)=\ell(x)+\ell(y)$. The element $T^*_w(T^*_{w_2}-1)=T^*_w T^*_x(T^*_y-1)+T^*_w(T^*_x-1)$ lies in $\mathcal J'$ by induction. Conversely, we prove $T^*_{w w_2}-T^*_w\in \mathcal J_{\ell}$. We factorize $w=xy$ with $y\in {}_1W_{M_2}$ and $x\in {}_1W_{M'} \Omega(1)$. Then, we have $\ell(w)=\ell(x)+\ell(y)$ and $\ell(ww_2)=\ell(x)+\ell(yw_2)$. Hence $$T^*_{ww_2}-T^*_w=T_x^*(T^*_{yw_2}-T^*_y)=T_x^*(T^*_{yw_2}-1)-T_x^*(T^*_{y }-1) \in \mathcal J_{\ell}.$$ This ends the proof of $\mathcal J_{\ell} = \mathcal J'$. By the same argument, the right ideal $\mathcal J_{r}$ of $\mathcal{H}$ is equal to the submodule of $\mathcal{H}$ generated by $T^*_{ w_2w }-T^*_w$ for all $w\in W(1)$ and $ w_2\in {}_1W_{M'_2}$. But this latter submodule is equal to $\mathcal J'$ because ${}_1W_{M'_2}$ is normal in $W(1)$. Therefore we proved $\mathcal J'= \mathcal J_{r}=\mathcal J_{\ell} $. (2) Proof of the second assertion. We prove $ \mathcal J_{M,\ell}= \mathcal J'_M$. The proof is easier than in (1) because for $w\in W_M(1)$ and $\lambda_2\in {}_1W_{M'_2}\cap \Lambda(1)$, we have $\ell(w \lambda_2)=\ell(w)+\ell(\lambda_2)$ hence $T^{M,*}_w (T^{M,*} _{ \lambda_2}-1)= T^{M,*}_{w \lambda_2}-T^{M,*}_w$. We have also $\ell( \lambda_2 w)=\ell(\lambda_2)+\ell(w)$ hence $(T^{M,*} _{ \lambda_2}-1)T^{M,*}_w= T^{M,*}_{ \lambda_2 w}-T^{M,*}_w$ hence $ \mathcal J_{M,r}$ is equal to the submodule of $\mathcal{H}_M$ generated by $T^{M,*}_{ \lambda_2 w}-T^{M,*}_w$ for all $w\in W_M(1)$ and $\lambda_2\in {}_1W_{M'_2}\cap \Lambda(1)$. This latter submodule is $\mathcal J'_M$, as ${}_1W_{M'_2} \cap \Lambda (1)={}_1W_{M'_2} \cap W_M (1) $ is normal in $W_M(1)$. Therefore $\mathcal J'_M= \mathcal J_{M,r}=\mathcal J_{M,\ell}$. \end{proof} By Proposition \ref{prop:ideal}, a basis of $\mathcal J$ is $T^*_w- T^*_{ww_2}$ for $w $ in a system of representatives of $W(1)/ {}_1W_{M'_2}$, and $w_2\in {}_1W_{M'_2} \setminus \{1\}$. Similarly a basis of $\mathcal J_M$ is $T^{M,*}_w- T^{M,*}_{w\lambda_2}$ for $w$ in a system of representatives of $W_M(1)/ ( \Lambda (1) \cap {}_1W_{M'_2})$. and $\lambda_2\in (\Lambda (1) \cap {}_1W_{M'_2} )\setminus \{1\}$. \begin{proposition} \label{prop:ext} The natural ring inclusion of $\mathcal{H}_{M^- }$ in $\mathcal{H}_{M}$ and the ring inclusion of $\mathcal{H}_{M^- }$ in $\mathcal{H}$ via $\theta^*$ induce ring isomorphisms $$ \mathcal{H}_{M}/\mathcal J_M \xleftarrow{\sim} \mathcal{H}_{M^-}/(\mathcal J_M \cap \mathcal{H}_{M^-} )\xrightarrow{\sim} \mathcal{H}/\mathcal J.$$ \end{proposition} \begin{proof} (1) The left map is obviously injective. We prove the surjectivity. Let $ w\in W_M(1)$. Let $ \lambda_2 \in {}_1W_{M'_2}\cap \Lambda(1)$ such that $ w \lambda_2^{-1}\in W_{M^-}(1)$ (see \eqref{Wdec}). We have $T^{M,*}_{ w \lambda_2^{-1}}\in \mathcal{H}_{M^-}$ and $T^{M,*}_{ w}= T^{M,*}_{ w \lambda_2^{-1}}T^{M,*}_ {\lambda_2 }= T^{M,*}_{ w \lambda_2^{-1}}+T^{M,*}_{ w \lambda_2^{-1}}(T^{M,*}_{ \lambda_2 }-1)$. Therefore $T^{M,*}_{ w} \in \mathcal{H}_{M^-}+\mathcal J_M $. As $w$ is arbitrary, $\mathcal{H}_M= \mathcal{H}_{M^-}+\mathcal J_M $. 
(2) The right map is surjective: let $w\in W(1) $ and $w_2\in {}_1W_{M'_2} $ such that $ww_2^{-1} \in W_{M^-}(1)$ (see \eqref{Wdec}). Then $T^{*}_{\hat w}- T^{*}_{w w_2^{-1} } \in \mathcal J$ with the same arguments than in (1), using Proposition \ref{prop:ideal}. Therefore $\mathcal{H}= \theta^*(\mathcal{H}_{M^-}) + \mathcal J$. We prove the injectivity: $\theta^*(\mathcal{H}_{M^-}) \cap \mathcal J= \theta^* (\mathcal{H}_{M^-} \cap \mathcal J_M)$. Let $\sum_{w\in W_{M^-}(1)} c_w T_{w}^{M,*}$, with $c_w\in \mathbb Z$, be an element of $\mathcal{H}_{M^-}$. Its image by $\theta^*$ is $\sum_{w\in W (1)} c_w T_{w}^{*}$ where we have set $c_w=0$ for $w\in W(1)\setminus W_{M^-}(1)$. We have $\sum_{w\in W (1)} c_w T_{w}^{*} \in \mathcal J$ if and only if $\sum _{w_2\in {}_1W_{M'_2}} c_{ww_2}=0 $ for all $w\in W(1)$. If $c_{ww_2}\neq 0$ then $w_2 \in {}_1W_{M'_2}\cap W_{M}(1)$, that is, $w_2 \in {}_1W_{M'_2}\cap \Lambda(1)$. The sum $\sum _{w_2\in {}_1W_{M'_2}} c_{ww_2}$ is equal to $ \sum _{\lambda_2\in {}_1W_{M'_2}\cap \Lambda(1)} c_{w\lambda_2} $. By Proposition \ref{prop:ideal}, $\sum_{w\in W (1)} c_w T_{w}^{*} \in \mathcal J$ if and only if $\sum_{w\in W_{M^-}(1)} c_w T_{w}^{M,*}\in \mathcal J_M$. \end{proof} We construct a ring isomorphism $$e^*: \mathcal{H}_{M}/\mathcal J_M \xrightarrow{\sim} \mathcal{H}/\mathcal J $$ by using Proposition \ref{prop:ext}. For any $w\in W(1)$, $T^{ *}_w +\mathcal J= e^*(T^{M,*}_{w_{M^-}}+\mathcal J_M)$ where $w_{M^-}\in W_{M^-}(1)\cap w \ {}_1 W_{M'_2} $ (see \eqref{Wdec}), because by Proposition \ref{prop:ideal}, $T^{ *}_w +\mathcal J= T^{ *}_{w _{M^-}} +\mathcal J$ and $T^{ *}_{w_{M^-}} +\mathcal J= e^*(T^{M,*}_{w_{M^-}}+\mathcal J_M)$ by construction of $e^*$. We check that $e^*$ is induced by $\theta^*$: \begin{theorem} \label{thm:ext0} The linear map $\mathcal{H}_{M } \xrightarrow{\theta^*}\mathcal{H}$ induces a ring isomorphism $$e^*: \mathcal{H}_{M}/\mathcal J_M \xrightarrow{\sim} \mathcal{H}/\mathcal J.$$ \end{theorem} \begin{proof} Let $w\in W_M(1)$. We have to show that $T^{ *}_w +\mathcal J= e^*(T^{M,*}_{w}+\mathcal J_M)$. We saw above that $T^{ *}_w +\mathcal J= e^*(T^{M,*}_{w_{M^-}}+\mathcal J_M)$ with $w=w_{M^-}\lambda_2$ with $\lambda_2\in {}_1W_{M'_2} \cap W_M(1) $. As $\ell_M(\lambda_2)=0$, $T^{M,*}_{w}=T^{M,*}_{w_{M^-}} T^{M,*}_{\lambda_2} \in T^{M,*}_{w_{M^-}} +\mathcal J_M$. Therefore $T^{M,*}_{w_{M^-}}+\mathcal J_M= T^{M,*}_{w }+\mathcal J_M $, this ends the proof of the theorem. \end{proof} We wish now to compute $e^*$ in terms of the $T_w$ instead of the $T_w^*$. \begin{proposition}\label{prop:extT} Let $w\in W(1)$. Then, $ T_w+\mathcal J= e^*(T^M_{w_M} q_{M_2}(w)+\mathcal J_M),$ for any $w_M\in W_M(1) \cap w \ {}_1 W_{M'_2}$. \end{proposition} \begin{proof} The element $w_M$ is unique modulo right multiplication by an element $\lambda_2\in W_M(1)\cap {}_1 W_{M'_2} $ of length $\ell_M(\lambda_2)=0$ and $ T^M_{w_M} q_{M_2}(w)+\mathcal J_M $ does not depend on the choice of $w_M$. We choose a decomposition (see \eqref{Wdec}): $$w=\tilde s_1\ldots \tilde s_a u \tilde s_{a+1}\ldots \tilde s_{a+b},\quad \ell(w)=a+b,$$ for $u\in \Omega(1)$, $\tilde s_i \in {}_1W_{M'}$ lifting $s_i\in S_M ^{\mathrm{aff}}$ for $1\leq i \leq a$ and $\tilde s_i \in {}_1W_{M'_2}$ lifting $s_i\in S_{M_2}^{\mathrm{aff}}$ for $a+1\leq i \leq a+b$, and we choose $u_M\in W_M(1)$ such that $u\in u_M \, {}_1 W_{M'_2}$. 
Then $$w_M =\tilde s_1\ldots \tilde s_a u_M \in W_M(1) \cap w \ {}_1 W_{M'_2}$$ and $q_{M_2}(w)=q_{M_2}(\tilde s_{a+1} \ldots \tilde s_{a+b})$ (Lemma \ref{lemma:0} (4)). We check first the proposition in three simple cases: Case 1. Let $w=\tilde s \in {}_1W_{M'}$ lifting $s\in S_{M}^{\mathrm{aff}}$; we have $T_{\tilde s} +\mathcal J= e^*(T^M_{\tilde s} + \mathcal J_M)$ because $T_{\tilde s}^*-e^*(T_{\tilde s}^{M,*})\in \mathcal J$, $T_{\tilde s}= T_{\tilde s}^* + c_{\tilde s} $, $T^M_{\tilde s}= T_{\tilde s}^{M,*} + c_{\tilde s} $ and $1= q_{M_2}(\tilde s) $. Case 2. Let $w=u \in W(1)$ of length $\ell(u)=0$ and $u_M\in W_M(1)$ such that $u\in u_M \, {}_1 W_{M'_2}$. We have $\ell_M(u_M)=0$ and $q_{M_2}(u )=1$ (Lemma \ref{lemma:0}). We deduce $T_u+\mathcal J= e^*(T^M_{u_M} + \mathcal J_M)$ because $ T^*_u+\mathcal J=T^*_{u_M} + \mathcal J= e^*(T_{u_M}^{M,*}+ \mathcal J_M)$, and $ T_u = T^*_u$, $T^M_{u_M} = T^{M,*}_{u_M} $. Case 3. Let $w=\tilde s \in {}_1W_{M'_2}$ lifting $s\in S_{M_2}^{\mathrm{aff}}$; we have $T_{\tilde s} +\mathcal J= e^*(q_{M_2}(\tilde s) + \mathcal J_M)$ because $T_{\tilde s}^*-1, c_{\tilde s} - (q_s-1) \in \mathcal J$, $T_{\tilde s}= T_{\tilde s}^* + c_{\tilde s} \in q_s +\mathcal J$ and $q_s= q_{M_2}(\tilde s) $. In general, the braid relations $T_w =T_{\tilde s_1}\ldots T_{\tilde s_a}T_u T_{\tilde s_{a+1}}\ldots T_{\tilde s_{a+b}}$ give a similar product decomposition of $T_w+\mathcal J$, and the simple cases 1, 2, 3 imply that $T_w+\mathcal J$ is equal to \begin{align*} &e^*(T^M_{\tilde s_1}+\mathcal J_M)\ldots e^*(T^M_{\tilde s_a}+\mathcal J_M) e^*(T^M_{u_M} + \mathcal J_M)e^*(q_{M_2}(\tilde s_{a+1}) + \mathcal J_M)\ldots e^*(q_{M_2}(\tilde s_{a+b}) + \mathcal J_M)\\ &=e^*(T^M_{w_M}q_{M_2}(w)+ \mathcal J_M). \end{align*} The proposition is proved. \end{proof} Propositions \ref{prop:ideal}, \ref{prop:ext}, \ref{prop:extT}, and Theorem \ref{thm:ext0} are valid over any commutative ring $R $ (instead of $\mathbb Z$). The two-sided ideal of $\mathcal{H}_R$ generated by $T_w^*-1$ for all $w\in {}_1W_{M'_2}$ is $\mathcal J_R=\mathcal J \otimes_{\mathbb Z} R$, the two-sided ideal of $\mathcal{H}_{M,R}$ generated by $T_\lambda^{M,*}-1$ for all $\lambda\in {}_1W_{M'_2}\cap \Lambda(1)$ is $\mathcal J_{M,R}=\mathcal J_M \otimes_{\mathbb Z} R$, and we get as in Proposition \ref{prop:ext} isomorphisms $$ \mathcal{H}_{M,R}/\mathcal J_{M,R} \xleftarrow{\sim} \mathcal{H}_{M^-,R}/(\mathcal J_{M,R} \cap \mathcal{H}_{M^-,R} )\xrightarrow{\sim} \mathcal{H}_R /\mathcal J_R,$$ giving an isomorphism $ \mathcal{H}_{M,R}/\mathcal J_{M,R}\to \mathcal{H}_R /\mathcal J_R$ induced by $\theta^*$. Therefore, we have an isomorphism from the category of right $\mathcal{H}_{M,R}$-modules where $\mathcal J_M$ acts by $0$ onto the category of right $\mathcal{H}_R$-modules where $\mathcal J$ acts by $0$. \begin{definition}\label{def:ext} A right $\mathcal{H}_{M,R}$-module $\mathcal V$ where $\mathcal J_M$ acts by $0$ is called extensible to $\mathcal{H}$.
The corresponding $\mathcal{H}_R$-module where $\mathcal J$ acts by $0$ is called its extension to $\mathcal{H}$ and denoted by $e_{\mathcal{H}}(\mathcal V)$ or $e(\mathcal V)$.\end{definition} With the element basis $T^*_w$, $\mathcal V$ is extensible to $\mathcal{H}$ if and only if \begin{equation}\label{eq:condition} vT_{\lambda_2}^{M,*}= v \ \text{for all } \ v\in \mathcal V \ \text{and} \ \lambda_2\in {}_1W_{M'_2} \cap \Lambda(1).\end{equation} The $\mathcal{H}$-module structure on the $R$-module $e(\mathcal V)=\mathcal V$ is determined by \begin{equation}\label{eq:structure} vT_{w_2}^*= v, \quad vT_w^*= v T^{M,*}_w , \quad \text{for all } \ v\in \mathcal V, w_2\in {}_1W_{M'_2}, w\in W_M(1).\end{equation} It is also determined by the action of $T_w^*$ for $w\in {}_1W_{M'_2} \cup W_{M^+}(1)$ (or $w\in {}_1W_{M'_2} \cup W_{M^-}(1)$). Conversely, a right $\mathcal{H}$-module $\mathcal W$ over $R$ is extended from an $\mathcal{H}_M$-module if and only if \begin{equation}\label{eq:structure2} vT_{w_2}^*= v, \quad \text{for all } \ v\in \mathcal W, w_2\in {}_1W_{M'_2}.\end{equation} In terms of the basis elements $T_w$ instead of $T_w^*$, this says: \begin{corollary}\label{cor:extT} A right $\mathcal{H}_M$-module $\mathcal V$ over $R$ is extensible to $\mathcal{H}$ if and only if \begin{equation}\label{eq:conditionT} vT_{\lambda_2}^{M}= v \ \text{for all } \ v\in \mathcal V \ \text{and} \ \lambda_2\in {}_1W_{M'_2} \cap \Lambda(1).\end{equation} Then, the structure of $\mathcal{H}$-module on the $R$-module $e(\mathcal V)=\mathcal V$ is determined by \begin{equation}\label{eq:structureT} vT_{w_2}= v q_{w_2}, \quad vT_w= v T^{M}_w q_{M_2}(w), \quad \text{for all } \ v\in \mathcal{V}, w_2\in {}_1W_{M'_2}, w\in W_M(1).\end{equation} ($W_{M^+}(1)$ or $W_{M^-}(1)$ instead of $W_M(1)$ is enough.) A right $\mathcal{H}$-module $\mathcal W$ over $R$ is extended from an $\mathcal{H}_M$-module if and only if \begin{equation}\label{eq:structure2T} vT_{w_2}= vq_{w_2}, \quad \text{for all } \ v\in \mathcal W, w_2\in {}_1W_{M'_2}.\end{equation} \end{corollary} \subsection{$\sigma^{\mathcal U_M}$ is extensible to $\mathcal{H}$ of extension $e(\sigma^{\mathcal U_M})=e(\sigma)^{\mathcal U}$}\label{S:8.1} Let $P=MN$ be a standard parabolic subgroup of $G$ such that $\Delta_P $ and $ \Delta\setminus \Delta_P$ are orthogonal, and $\sigma$ a smooth $R$-representation of $M$ extensible to $G$. Let $P_2=M_2N_2$ denote the standard parabolic subgroup of $G$ with $\Delta_{P_2}= \Delta\setminus \Delta_P $. Recall that $G=MM'_2$, that $M\cap M'_2=Z\cap M'_2$ acts trivially on $\sigma $, $e(\sigma)$ is the representation of $G$ equal to $\sigma$ on $M$ and trivial on $M'_2$. We will describe the $\mathcal{H}$-module $e(\sigma)^{\mathcal U}$ in this section. We first consider $e(\sigma)$ as a subrepresentation of $\Ind_P^G\sigma$. For $v\in \sigma$, let $f_v\in (\Ind_P^G\sigma)^{M'_2}$ be the unique function with value $v$ on $M'_2$. Then, the map \begin{equation}\label{eq:HVt}v\mapsto f_{ v}:\sigma \to \Ind_P^G\sigma \end{equation} is the natural $G$-equivariant embedding of $e(\sigma)$ in $\Ind_P^G\sigma $. As $\sigma^{\mathcal U_M}=e(\sigma)^{\mathcal U }$ as $R$-modules, the image of $e(\sigma)^{\mathcal U }$ in $(\Ind_P^G\sigma)^\mathcal U$ is made out of the $f_v$ for $v\in \sigma^{\mathcal U_M}$. We now recall the explicit description of $(\Ind_P^G\sigma)^\mathcal U$. 
For each $d\in \mathbb W_{M_2}$, we fix a lift $\hat d\in {}_1W_{M'_2}$ and for $v\in \sigma^{\mathcal U_M}$ let $f_{P\hat d\mathcal U, v}\in (\Ind_P^G\sigma)^{\mathcal U}$ be the function with support contained in $P\hat d\mathcal U$ and value $v$ on $\hat d\mathcal U$. As $Z\cap M'_2$ acts trivially on $\sigma$, the function $f_{P\hat d\mathcal U, v}$ does not depend on the choice of the lift $\hat d\in {}_1W_{M'_2}$ of $d$. By \cite[Lemma 4.5]{arXiv:1703.04921}: \bigskip {\sl The map $ \oplus_{d\in \mathbb W_{M_2}}\sigma^{\mathcal U_M} \to (\Ind_P^G\sigma)^{\mathcal U}$ given on each $d$-component by $v\mapsto f_{P\hat d\mathcal U, v}$, is an $\mathcal{H}_{M^+}$-equivariant isomorphism where $\mathcal{H}_{M^+}$ is seen as a subring of $\mathcal{H}$ via $\theta$, and induces an $\mathcal{H}_R$-module isomorphism} \begin{equation}\label{eq:OV}v\otimes h \mapsto f_{P\mathcal U, v}h: \sigma^{\mathcal U_M}\otimes_{\mathcal{H}_{M^+}, \theta}\mathcal{H} \to (\Ind_P^G\sigma)^{\mathcal U}. \end{equation} In particular for $v\in \sigma^{\mathcal U _M}$, $v\otimes T(\hat d)$ does not depend on the choice of the lift $\hat d\in {}_1W_{M'_2}$ of $d$ and \begin{equation}\label{eq:ovvP}f_{P\hat d\mathcal U, v} =f_{P\mathcal U, v} T(\hat d). \end{equation} As $G$ is the disjoint union of $P \hat d \mathcal U$ for $d\in \mathbb W_{M_2}$, we have $f_v= \sum_{d\in \mathbb W_{M_2}}f_{P\hat d\mathcal U, v}$ and $f_v$ is the image of $v\otimes e _{M_2}$ in \eqref{eq:OV}, where \begin{equation}\label{eq:e} e _{M_2}= \sum_{d\in \mathbb W_{M_2}} T(\hat d). \end{equation} Recalling \eqref{eq:HVt} we get: \begin{lemma}\label{lemma:sigmaemb} The map $v\mapsto v\otimes e _{M_2}: e( \sigma)^{\mathcal U}\to \sigma^{\mathcal U_M}\otimes_{\mathcal{H}_{M^+}, \theta}\mathcal{H} $ is an $\mathcal{H}_R$-equivariant embedding. \end{lemma} \begin{remark} The trivial map $v\mapsto v\otimes 1_\mathcal{H}$ is not an $\mathcal{H}_R$-equivariant embedding. \end{remark} We describe the action of $T(n)$ on $e( \sigma)^{\mathcal U}$ for $n\in \mathcal N$. By definition for $v\in e(\sigma)^{\mathcal U}$, \begin{equation}\label{eq:vTn}vT(n)=\sum_{y\in \mathcal U / (\mathcal U \cap n^{-1}\mathcal U n)} yn^{-1}v.\end{equation} \begin{proposition}\label{prop:eu} We have $vT(n)= vT^M(n_M) q_{M_2}(n)$ for any $n_M\in \mathcal N \cap M$ such that $n\in n_M (\mathcal N \cap M'_2)$. \end{proposition} \begin{proof} The description \eqref{eq:UnU} of $ \mathcal U / (\mathcal U \cap n^{-1}\mathcal U n)$ gives $$vT(n)=\sum_{y_1\in \mathcal U_{M} / (\mathcal U_{M} \cap n^{-1}\mathcal U_{M} n)}y_1 \sum_{y_2\in \mathcal U_{M'_2} / (\mathcal U_{M'_2} \cap n^{-1}\mathcal U_{M'_2} n)} y_2n^{-1} v.$$ As $M'_2$ acts trivially on $e(\sigma)$, we obtain \begin{equation*} vT(n)=q_{M_2}(n) \sum_{y_1\in \mathcal U_{M} / (\mathcal U_{M} \cap n^{-1}\mathcal U_{M} n)}y_1n_{M}^{-1} v=q_{M_2}(n) \, v T^M(n_{M}). \end{equation*} \end{proof} \begin{theorem} \label{thm:ouf} Let $\sigma$ be a smooth $R$-representation of $M$. If $P(\sigma)=G$, then $\sigma^{\mathcal U_M}$ is extensible to $\mathcal{H}$ of extension $e (\sigma^{\mathcal U_M})= e(\sigma)^{\mathcal U}$. Conversely, if $\sigma^{\mathcal U_M}$ is extensible to $\mathcal{H}$ and generates $\sigma$, then $P(\sigma)=G$. \end{theorem} \begin{proof} (1) The $\mathcal{H}_M$-module $\sigma^{\mathcal U_M}$ is extensible to $\mathcal{H}$ if and only if $Z\cap M'_2 $ acts trivially on $ \sigma^{\mathcal U_M}$.
Indeed, for $ v\in \sigma^{\mathcal U_M}, z_2\in Z\cap M'_2 $, $$ vT^M(z_2)=\sum_{y\in \mathcal U_M / (\mathcal U_M \cap z_2^{-1}\mathcal U _Mz_2)} yz_2^{-1}v=\sum_{y\in \mathcal Y_M / (\mathcal Y_M \cap z_2^{-1}\mathcal Y _Mz_2)} yz_2^{-1}v =z_2^{-1}v,$$ by \eqref{eq:vTn}, then \eqref{eq:UnU}, then the fact that $z_2^{-1} $ commutes with the elements of $\mathcal Y_M$. (2) $P(\sigma)=G$ if and only if $Z \cap M'_2 $ acts trivially on $\sigma$ (the group $Z \cap M'_2 $ is generated by $Z \cap \mathcal M'_\alpha $ for $\alpha \in \Delta_{M_2}$ by Lemma~\ref{lemma:ZG'}). The $R$-submodule $\sigma^{Z \cap M'_2}$ of elements fixed by $Z \cap M'_2$ is stable by $M$, because $M=ZM'$, the elements of $M'$ commute with those of $Z \cap M'_2$ and $Z$ normalizes $Z \cap M'_2$. (3) Apply (1) and (2) to get the theorem except the equality $e (\sigma^{\mathcal U_M})= e(\sigma)^{\mathcal U}$ when $P(\sigma)=G$ which follows from Propositions \ref{prop:eu} and \ref{prop:extT}. \end{proof} Let ${\bf 1 }_M$ denote the trivial representation of $M$ over $R$ (or ${\bf 1}$ when there is no ambiguity on $M$). The right $\mathcal{H}_R$-module $({\bf 1 }_G)^{\mathcal U}={\bf 1 }_{\mathcal{H}}$ (or ${\bf 1 }$ if there is no ambiguity) is the trivial right $\mathcal{H}_R$-module: for $w\in W_M(1)$, $T_w=q_w \id $ and $T^{*}_w=\id$ on ${\bf 1 }_{\mathcal{H}}$. \begin{example}\label{ex:extrivial} The $\mathcal{H}$-module $(\Ind_{P}^G {\bf 1 }) ^{\mathcal U}$ is the extension of the $\mathcal{H}_{M_2}$-module $ (\Ind_{M_2\cap B}^{M_2} {\bf 1 }) ^{\mathcal U_{M_2}}$. Indeed, the representation $\Ind_{P}^G {\bf 1 }$ of $G$ is trivial on $N_2$, as $G=M M'_2$ and $N_2\subset M'$ (as $\Phi= \Phi_{M}\cup \Phi_{M_2}$). For $g=mm'_2 $ with $m\in M, m'_2\in M'_2$ and $n_2 \in N_2 $, we have $Pgn_2 =Pm'_2n_2 =Pn_2 m'_2=Pm'_2=Pg$. The group $M_2\cap B= M_2\cap P$ is the standard minimal parabolic subgroup of $M_2$ and $(\Ind_{P}^G {\bf 1 })|_{M_2}=\Ind_{M_2\cap B}^{M_2} {\bf 1 }$. Apply Theorem \ref{thm:ouf}: \end{example} \subsection{The $\mathcal{H}_R$-module $e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U $}\label{S:8.4} Let $P=MN$ be a standard parabolic subgroup of $G$ such that $\Delta_P$ and $\Delta\setminus \Delta_P$ are orthogonal, let $\mathcal V$ be a right $\mathcal{H}_{M,R}$-module which is extensible to $\mathcal{H}_R$ of extension $e(\mathcal{V})$ and let $Q $ be a parabolic subgroup of $G$ containing $P$. We define on the $R$-module $e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U $ a structure of right $\mathcal{H}_R$-module: \begin{proposition}\label{lemma:tensor} \begin{enumerate} \item The diagonal action of $T_w^*$ for $w\in W(1)$ on $e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U $ defines a structure of right $\mathcal{H}_R$-module. \item The action of the $T_w$ is also diagonal and satisfies: $$((v\otimes f)T_w, (v\otimes f)T^*_w)= ( vT_{uw_{M'}} \otimes fT_{uw_{M'_2}} , vT^*_{uw_{M'}} \otimes fT^*_{uw_{M'_2}}) ,$$ where $w=u w_{M'} w_{M'_2}$ with $u\in W(1), \ell(u)=0, w_{M'}\in {}_1W_{M'}, w_{M'_2}\in {}_1W_{M'_2}$.\end{enumerate} \end{proposition} \begin{proof} If the lemma is true for $P$ it is also true for $Q$, because the $R$-module $e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U $ naturally embedded in $e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U $ is stable by the action of $\mathcal{H}$ defined in the lemma. So, we suppose $Q=P$. Suppose that $T^*_w$ for $w\in W(1)$ acts on $e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U $ as in (1). 
The braid relations obviously hold. The quadratic relations hold because $T^*_s$ with $s\in {}_1S^{\mathrm{aff}}$, acts trivially either on $e(\mathcal V)$ or on $(\Ind_P^G {\bf 1})^\mathcal U $. Indeed, ${}_1S^{\mathrm{aff}}= {}_1S_M^{\mathrm{aff}} \cup {}_1S_{M_2}^{\mathrm{aff}}$, $T^*_s$ for $s\in {}_1S_M^{\mathrm{aff}}$, acts trivially on $(\Ind_P^G \bf 1)^\mathcal U$ which is extended from a $\mathcal{H}_{M_2}$-module (Example \ref{ex:extrivial}), and $T^*_s$ for $s\in {}_1S_{M'_2}^{\mathrm{aff}}$, acts trivially on $ e(\mathcal V)$ which is extended from a $\mathcal{H}_{M }$-module. This proves (1). We describe now the action of $T_w$ instead of $T_w^*$ on the $\mathcal{H}$-module $e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U $. Let $w\in W(1)$. We write $ w=u w_{M'} w_{M'_2}= u w_{M'_2} w_{M'}$ with $u\in W(1), \ell(u)=0, w_{M'}\in {}_1W_{M'}, w_{M'_2}\in {}_1W_{M'_2}$. We have $\ell (w) =\ell(w_{M'})+\ell(w_{M'_2})$ hence $T_w= T_u T_{w_{M'}}T_{w_{M'_2}}$. For $w=u$, we have $T_u=T^*_u$ and $ (v\otimes f)T _u= (v\otimes f)T^*_u= vT^*_u\otimes f T^*_u= vT _u\otimes f T _u$. For $ w=w_{M'}$, $ (v\otimes f)T^*_w= vT^*_w \otimes f$; in particular for $s\in {}_1S_M^{\mathrm{aff}}$, $c_s=\sum_{t\in Z_k\cap {}_1W_{M'}} c_s(t) T^*_t$, we have $ (v\otimes f)T_s= (v\otimes f)(T^*_s+c_s)=v(T^*_s+c_s)\otimes f=vT_s \otimes f$. Hence $ (v\otimes f)T_w= vT_w \otimes f$. For $w=w_{M'_2}$, we have similarly $ (v\otimes f)T^*_w= v \otimes fT^*_w$ and $ (v\otimes f)T_w= v \otimes fT_w$. \end{proof} \begin{example} \label{ex:toto} Let $\mathcal X$ be a right $\mathcal{H}_R$-module. Then ${\bf 1 }_{\mathcal{H}}\otimes_R \mathcal X$ where the $T^{*}_w $ acts diagonally is a $\mathcal{H}_R$-module isomorphic to $\mathcal X$. But the action of the $T_w$ on ${\bf 1 }_{\mathcal{H}}\otimes_R \mathcal X$ is not diagonal. \end{example} It is known \cite{MR3402357} that $(\Ind_{Q'}^G{\bf 1})^{\mathcal U}$ and $(\St_Q^G)^{\mathcal U}$ are free $R$-modules and that $ (\St_Q^G)^{\mathcal U}$ is the cokernel of the natural $\mathcal{H}_R$-map \begin{align} \label{eq:StQU}\oplus _{Q\subsetneq Q'}(\Ind_{Q'}^G{\bf 1})^{\mathcal U}\to (\Ind_Q^G{\bf 1} )^{\mathcal U} \end{align} although the invariant functor $(-) ^{\mathcal U}$ is only left exact. \begin{corollary}\label{cor:VS} The diagonal action of $T_w^*$ for $w\in W(1)$ on $e(\mathcal V)\otimes_R ( \St_Q^G)^\mathcal U $ defines a structure of right $\mathcal{H}_R$-module satisfying Proposition \ref{lemma:tensor} (2). \end{corollary} \section{Hecke module $I_\mathcal{H}(P,\mathcal V,Q)$ } \label{S:9.0} \subsection{Case $\mathcal{V}$ extensible to $\mathcal{H}$} \label{S:9.1} Let $P=MN$ be a standard parabolic subgroup of $G$ such that $\Delta_P$ and $\Delta\setminus \Delta_P$ are orthogonal, $\mathcal{V}$ a right $\mathcal{H}_{M,R}$-module extensible to $\mathcal{H}_R$ of extension $e(\mathcal{V})$, and $Q$ be a parabolic subgroup of $G$ containing $P$. As $Q$ and $M_Q$ determine each other: $Q=M_QU$, we denote also $\mathcal{H}_{M_Q}=\mathcal{H}_{Q}$ and $\mathcal{H}_{M_Q,R}=\mathcal{H}_{Q,R}$ when $Q\neq P, G$. When $Q=G$ we drop $G$ and we denote $e_\mathcal{H}(\mathcal{V})=e(\mathcal{V})$ when $Q=G$. \begin{lemma} \label{lemma:trans} $\mathcal V$ is extensible to an $\mathcal{H}_{Q,R}$-module $e_{\mathcal{H}_{Q}}(\mathcal V)$. \end{lemma} \begin{proof} This is straightforward. By Corollary \ref{cor:extT}, $\mathcal V$ extensible to $\mathcal{H}$ means that $T^{M,*}(z)$ acts trivially on $\mathcal V$ for all $z \in \mathcal N_{M'_{2}}\cap Z$. 
We have $M_Q=M M'_{2,Q} $ with $M'_{2,Q}\subset M_Q\cap M'_2 $ and $ \mathcal N_{M'_{2,Q}} \subset \mathcal N_{M'_{2}} $; hence $T^{M,*}(z)$ acts trivially on $\mathcal V$ for all $z \in \mathcal N_{M'_{2,Q}}\cap Z$, meaning that $\mathcal V$ is extensible to $\mathcal{H}_{Q}$. \end{proof} \begin{remark} We cannot say that $e_{\mathcal{H}_{Q}}(\mathcal V)$ is extensible to $\mathcal{H}$ of extension $e(\mathcal{V})$ when the sets of roots $\Delta_Q$ and $\Delta \setminus \Delta_Q$ are not orthogonal (Definition \ref{def:ext}). \end{remark} Let $Q'$ be an arbitrary parabolic subgroup of $G$ containing $Q$. We are going to define an $\mathcal{H}_R$-embedding $\Ind_{\mathcal{H}_{Q'}}^\mathcal{H}(e_{\mathcal{H}_{Q'}}(\mathcal V)) \xrightarrow{\iota(Q,Q') } \Ind_{\mathcal{H}_{Q}}^\mathcal{H}(e_{\mathcal{H}_{Q}}(\mathcal V))= e_{\mathcal{H}_{Q}}(\mathcal V)\otimes_{ \mathcal{H}_{M_Q^+},\theta}\mathcal{H}$ defining an $\mathcal{H}_R$-homomorphism $$\oplus_{Q\subsetneq Q'\subset G} \Ind_{\mathcal{H}_{Q'}}^\mathcal{H}( e_{\mathcal{H}_{Q'}} (\mathcal V))\to \Ind_{\mathcal{H}_Q}^\mathcal{H} ( e_{\mathcal{H}_Q}(\mathcal V))$$ of cokernel isomorphic to $e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal U$. In the extreme case $(Q,Q')=(P,G)$, the $\mathcal{H}_R$-embedding $e(\mathcal{V}) \xrightarrow{\iota(P,G)}\Ind_{\mathcal{H}_{M}}^\mathcal{H}(\mathcal V)$ is given in the following lemma, where $f_G$ and $f_{P\mathcal U}\in (\Ind_P^G {\bf 1})^{\mathcal U}$ denote the characteristic functions of $G$ and of $P\mathcal U$; note that $f_G=f_{P\mathcal U}e_{M_2}$ (see \eqref{eq:e}). \begin{lemma} \label{lemma:ovv} There is a natural $\mathcal{H}_R$-isomorphism $$v\otimes 1 _{\mathcal{H}}\mapsto v\otimes f_{P\mathcal U} :\Ind_{\mathcal{H}_{M}}^\mathcal{H}(\mathcal V)= \mathcal V\otimes_{ \mathcal{H}_{M^+},\theta}\mathcal{H}\xrightarrow{\kappa_P} e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U, $$ and compatible $\mathcal{H}_R$-embeddings \begin{align*}&v\mapsto v\otimes f_G : e(\mathcal V) \xrightarrow{} e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U,\\ &v\mapsto v\otimes e_{M_2}: e(\mathcal V) \xrightarrow{\iota(P,G)} \Ind_{\mathcal{H}_{M}}^\mathcal{H}(\mathcal V). \end{align*} \end{lemma} \begin{proof} We show first that the map \begin{align}\label{eq:f0}v\mapsto v\otimes f_{P\mathcal U}: \mathcal V \xrightarrow{} e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U \end{align} is $\mathcal{H}_{M^+}$-equivariant. Let $ w\in W_{M^+}(1)$. We write $w=u w_{M'} w_{M'_2} $ as in Proposition \ref{lemma:tensor} (2), so that $f_{P\mathcal U}T_w= f_{P\mathcal U} T_{uw_{M'_2}}$. We have $ f_{P\mathcal U} T_{uw_{M'_2}} =f_{P\mathcal U}$ because ${}_1 W_{M'}\subset W_{M^+}(1)\cap W_{M^-}(1)$ hence $uw_{M'_2}= w w_{M'}^{-1} \in W_{M^+}(1)$ and in $ {\bf 1}_{\mathcal{H}_M}\otimes _{\mathcal{H}_{M^+},\theta}\mathcal{H}$ we have $(1\otimes 1 _{\mathcal{H}}) T_{uw_{M'_2}} = 1T^M_{uw_{M'_2}} \otimes 1 _{\mathcal{H}}$, and $T^M_{uw_{M'_2}}$ acts trivially in $ {\bf 1}_{\mathcal{H}_M}$ because $\ell_M(uw_{M'_2})=0$. We deduce $(v\otimes f_{P\mathcal U})T_w= vT_w \otimes f_{P\mathcal U}T_w= v T^M_{w}\otimes f_{P\mathcal U}$. By adjunction \eqref{eq:f0} gives an $\mathcal{H}_R$-equivariant linear map \begin{equation}\label{eq:ovv1}v\otimes 1_{\mathcal{H}}\mapsto v \otimes f_{P\mathcal U}: \mathcal V\otimes_{ \mathcal{H}_{M^+},\theta}\mathcal{H}\xrightarrow{\kappa_P} e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U. \end{equation} We prove that $\kappa_P$ is an isomorphism.
Recalling that $\hat d \in \mathcal N \cap M'_2$ and $\tilde d \in {}_1 W_{M'_2}$ lift $d$, one knows that \begin{equation}\label{eq:ovv2}{\mathcal V}\otimes_{\mathcal{H}_{M^+},\theta}\mathcal{H}= \oplus _{d\in \mathbb W_{M_2}} {\mathcal V} \otimes T_{\tilde d}, \quad e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U=\oplus _{d\in \mathbb W_{M_2}} \mathcal V \otimes f_{P\hat d \mathcal U}, \end{equation} where each summand is isomorphic to $\mathcal V$. The left equality follows from \S 4.1 and Remark 3.7 in \cite{MR3437789} recalling that $w\in \mathbb W_{M_2} $ is of minimal length in its coset $\mathbb W_Mw=w\mathbb W_M$ as $\Delta_M$ and $\Delta_{M_2}$ are orthogonal; for the second equality see \S \ref{S:8.1} \eqref{eq:ovvP}. We have $\kappa_P(v\otimes T_{\tilde d})=(v \otimes f_{P\mathcal U})T_{\tilde d} = v \otimes f_{P\mathcal U} T_{\tilde d} $ (Proposition \ref{lemma:tensor}). Hence $\kappa_P$ is an isomorphism. We consider the composite map $$ v\mapsto v\otimes 1 \mapsto v\otimes f_{P \mathcal U}e_{M_2} :e(\mathcal V) \to e(\mathcal V)\otimes_R {\bf 1}_{\mathcal{H}} \to e(\mathcal V) \otimes_R (\Ind_P^G {\bf 1})^\mathcal U,$$ where the right map is the tensor product $e(\mathcal V)\otimes_R - $ of the $\mathcal{H}_R$-equivariant embedding ${\bf 1}_{\mathcal{H}}\to (\Ind_P^G {\bf 1})^\mathcal U$ sending $1_R$ to $ f_{P \mathcal U}e_{M_2} $ (Lemma \ref{lemma:sigmaemb}); this map is injective because $(\Ind_P^G\mathbf{1})^{\mathcal{U}}/\mathbf{1}$ is a free $R$-module; it is $\mathcal{H}_R$-equivariant for the diagonal action of the $T_w^*$ on the tensor products (Example \ref{ex:toto} for the first map). By compatibility with the isomorphism $\kappa_P$, we get the $\mathcal{H}_R$-equivariant embedding $v\mapsto v\otimes e_{M_2}: e(\mathcal V) \xrightarrow{\iota(P,G)} \Ind_{\mathcal{H}_{M}}^\mathcal{H}(\mathcal V)$. \end{proof} For a general $(Q,Q')$ the $\mathcal{H}_R$-embedding $\Ind_{\mathcal{H}_{Q'}}^\mathcal{H}(e_{\mathcal{H}_{Q'}}(\mathcal V)) \xrightarrow{\iota(Q,Q')} \Ind_{\mathcal{H}_{Q}}^\mathcal{H}(e_{\mathcal{H}_{Q}}(\mathcal V))$ is given in the next proposition generalizing Lemma \ref{lemma:ovv}. The element $e_{M_2}$ of $\mathcal{H}_R$ appearing in the definition of $\iota(P,G)$ is replaced in the definition of $\iota(Q,Q')$ by an element $\theta_{Q'}(e_{Q} ^{Q'})\in \mathcal{H}_R$ that we define first. Until the end of \S \ref{S:9.0}, we fix an admissible lift $w\mapsto \hat w: \mathbb W\to \mathcal N \cap \mathcal K$ (Definition \ref{defn:admissible_lift}) and $\tilde w$ denotes the image of $\hat w$ in $W(1)$. We denote $\mathbb W_{M_{Q}}= \mathbb W_Q$ and by ${}^{\mathbb W_Q}\mathbb W $ the set of $w\in\mathbb W $ of minimal length in their coset $\mathbb W_Qw$. The group $G$ is the disjoint union of $Q \hat d \mathcal{U}$ for $d$ running through $ {}^{\mathbb W_Q}\mathbb W $ \cite[Lemma 2.18 (2)]{arXiv:1703.04921}. Similarly, \begin{equation}\label{eq:QQ'U}Q'\mathcal{U}=\sqcup_{d\in {}^{\mathbb W_ Q}\mathbb W_{ Q'}} Q \hat d \mathcal{U}. \end{equation} Set \begin{equation}\label{eq:eQQ'} e_{Q} ^{Q'} =\sum_{d \in {}^{\mathbb W_ Q}\mathbb W_{ Q'}} T^{M_{Q'}}_{\tilde d}. \end{equation} We write $e_{Q}^{G} =e_Q$. We have $e_{P} ^{Q} =\sum_{d\in \mathbb W_{M_{2,Q}}} T_{\tilde d}^{M_{Q}}$. \begin{remark} Note that ${}^{\mathbb W_M}\mathbb W=\mathbb W_{M_2}$ and $e_P= e_{M_2}$, where $M_2$ is the standard Levi subgroup of $G$ with $\Delta_{M_2}=\Delta\setminus \Delta_{M}$, as $\Delta_M$ and $\Delta\setminus \Delta_{M}$ are orthogonal.
More generally, $ {}^{\mathbb W_ Q}\mathbb W_{M_{Q'}}= {}^{\mathbb W_{M_{2,Q}}}\mathbb W_{M_{2,Q'}} $ where $M_{2,Q'}=M_2\cap M_{Q'}$. \end{remark} Note that $e_{Q} ^{Q'} \in \mathcal{H}_{M^+}\cap \mathcal{H}_{M^-}$. We consider the linear map $$\theta_{Q}^{Q'}: \mathcal{H}_{Q }\to \mathcal \mathcal{H}_{Q'} \quad T_w^{M_Q }\mapsto T_w^{M_{Q '}} \quad (w\in W_{M_Q}(1)).$$ We write $\theta_{Q}^{G} =\theta_Q$ so $\theta_{Q} (T_w^{M_Q })=T_w $. When $Q=P $ this is the map $\theta$ defined earlier. Similarly we denote by $\theta_{Q}^{Q',*}$ the linear map sending the $T_w^{M_Q,*}$ to $T_w^{M_{Q '},*}$ and $\theta_{Q}^{G,*} =\theta_Q^*$. We have \begin{equation}\label{eq:eQQ'2} \theta_{Q'}(e_{Q} ^{Q'}) =\sum_{d \in {}^{\mathbb W_ Q}\mathbb W_{ Q'}} T _{\tilde d},\quad \theta_{Q'}(e_{P} ^{Q'} )=\theta_Q(e_{P} ^{Q}) \theta_{Q'} (e_{Q} ^{Q'}). \end{equation} \begin{proposition} \label{prop:ovvv} There exists an $\mathcal{H}_R$-isomorphism \begin{equation}\label{eq:cal} v\otimes 1_{\mathcal{H}} \mapsto v\otimes f_{Q\mathcal U}: \Ind_{\mathcal{H}_{Q}}^\mathcal{H}( e_{\mathcal{H}_{Q}}(\mathcal V))= e_{\mathcal{H}_{Q}}(\mathcal V) \otimes_{ \mathcal{H}_{M_Q^+},\theta} \mathcal{H} \xrightarrow{\kappa_Q} e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U, \end{equation} and compatible $\mathcal{H}_R$-embeddings \begin{align}\label{eq:canonical2} &v\otimes f_{Q'\mathcal U} \mapsto v\otimes f_{Q'\mathcal U}: e_{\mathcal{H}_{Q'}}(\mathcal V)\otimes_R (\Ind_{Q'}^G {\bf 1})^\mathcal U \to e_{\mathcal{H}_{Q}}(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U,\\ \label{eq:canonical} &v\otimes 1_{\mathcal{H}} \mapsto v\otimes \theta_{Q'}(e_{Q} ^{Q'}): \Ind_{\mathcal \mathcal{H}_{Q'}}^\mathcal{H}( e_{\mathcal{H}_{Q'}} (\mathcal V)) \xrightarrow{\iota(Q,Q') } \Ind_{ \mathcal{H}_{Q}}^\mathcal{H} ( e_{\mathcal{H}_Q}(\mathcal V)). \end{align} \end{proposition} \begin{proof} We have the $\mathcal{H}_{M_{Q},R}$-embedding $$v\mapsto v\otimes e_{P} ^{Q} : e_{\mathcal{H}_Q}(\mathcal V)\to \mathcal V \otimes_{ \mathcal{H}_{M ^+},\theta} \mathcal{H}_Q = \Ind_{\mathcal{H}_{M}}^{\mathcal{H}_Q} (\mathcal V)$$ by Lemma \ref{lemma:ovv} (2) as $\Delta_M$ is orthogonal to $\Delta_{M_{Q}}\setminus \Delta_M$. Applying the parabolic induction which is exact, we get the $\mathcal{H}$-embedding $$v\otimes 1_{\mathcal{H}}\mapsto v\otimes e_{P} ^{Q} \otimes 1_{\mathcal{H}}: \Ind_{\mathcal{H}_Q }^{\mathcal{H}}(e_{\mathcal{H}_Q }(\mathcal V))\to \Ind_{\mathcal{H}_Q }^{\mathcal{H}}( \Ind^{\mathcal{H}_Q }_{\mathcal{H}_M } (\mathcal V)).$$ Note that $T^{M_{Q}}_{\tilde d} \in \mathcal{H}_{M_{Q}^+} $ for $ d\in \mathbb W_{M_{Q}} $. By transitivity of the parabolic induction, it is equal to the $\mathcal{H}_R$-embedding \begin{equation}\label{eq:embe}v\otimes 1_{\mathcal{H}}\mapsto v\otimes \theta_{Q}(e_{P} ^{Q} ) :\Ind_{\mathcal{H}_Q}^{\mathcal{H}}(e_{\mathcal{H}_Q}(\mathcal V))\to \Ind_{\mathcal{H}_{M}}^{\mathcal{H}} (\mathcal V). 
\end{equation} On the other hand we have the $\mathcal{H}_R$-embedding \begin{equation}\label{eq:embee} v\otimes f_{Q\mathcal U} \mapsto v\otimes \theta_Q(e_P^Q):e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U\to \Ind_{\mathcal{H}_{M}}^{\mathcal{H}} (\mathcal V)\end{equation} given by the restriction to $e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal U$ of the $\mathcal{H}_R$-isomorphism given in Lemma \ref{lemma:ovv} (1), from $e(\mathcal V)\otimes_R (\Ind_P^G {\bf 1})^\mathcal U$ to $\mathcal V\otimes_{ \mathcal{H}_{M^+},\theta}\mathcal{H}$ sending $v\otimes f_{P \mathcal U}$ to $v\otimes 1_{\mathcal{H}}$, noting that $v\otimes f_{Q\mathcal U} = (v\otimes f_{P \mathcal U}) \theta_Q ( e_P^Q)$ by Proposition \ref{lemma:tensor}, $f_{Q \mathcal U}= f_{P \mathcal U}\theta_Q (e_ P^Q)$ and $\theta_Q (e_ P^Q)$ acts trivially on $e(\mathcal V)$ (this is true for $T_{\tilde d}$ for $\tilde d \in {}_1 W_{M'_2}$). Comparing the embeddings \eqref{eq:embe} and \eqref{eq:embee}, we get the $\mathcal{H}_R$-isomorphism \eqref{eq:cal}. We can replace $Q$ by $Q'$ in the $\mathcal{H}_R$-homomorphisms \eqref{eq:cal}, \eqref{eq:embe} and \eqref{eq:embee}. With \eqref{eq:embe} we see $\Ind_{\mathcal{H}_{Q'}}^{\mathcal{H}}(e_{\mathcal{H}_{Q'}}(\mathcal V))$ and $\Ind_{\mathcal{H}_Q}^{\mathcal{H}}(e_{\mathcal{H}_Q}(\mathcal V))$ as $\mathcal{H}_R$-submodules of $\Ind_{\mathcal{H}_{M}}^{\mathcal{H}} (\mathcal V)$. As seen in \eqref{eq:eQQ'2} we have $\theta_{Q'}(e_P^{Q'})=\theta_Q(e_{P} ^{Q}) \theta_{Q'} (e_{Q} ^{Q'})$. We deduce the $\mathcal{H}_R$-embedding \eqref{eq:canonical}. By \eqref{eq:ovvP} for $Q$ and \eqref{eq:QQ'U}, $$f_{Q'\mathcal U}=\sum_{d \in {}^{\mathbb W_ Q}\mathbb W_{ Q'}} f_{Q\mathcal U} T_{\tilde d}= f_{Q\mathcal U} \theta_{Q'} (e_{Q} ^{Q'})$$ in $(\Ind_Q^G {\bf 1})^\mathcal{U}$. We deduce that the $\mathcal{H}_R$-embedding corresponding to \eqref{eq:canonical} via $\kappa_Q$ and $\kappa_{Q'}$ is the $\mathcal{H}_R$-embedding \eqref{eq:canonical2}. \end{proof} We recall that $\Delta_P$ and $\Delta \setminus \Delta_P$ are orthogonal and that $\mathcal{V}$ is extensible to $\mathcal{H}$ of extension $e(\mathcal{V})$. \begin{corollary}\label{cor:IHPVQ} The cokernel of the $\mathcal{H}_R$-map $$\oplus_{Q\subsetneq Q'\subset G} \Ind_{\mathcal{H}_{Q'}}^\mathcal{H}( e_{\mathcal{H}_{Q'}} (\mathcal V))\to \Ind_{\mathcal{H}_{Q}}^\mathcal{H} ( e_{\mathcal{H}_{Q}}(\mathcal V))$$ defined by the $\iota(Q,Q')$ is isomorphic to $e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal U$ via $\kappa_Q$. \end{corollary} \subsection{Invariants in the tensor product} We return to the setting where $P=MN$ is a standard parabolic subgroup of $G$, $\sigma$ is a smooth $R$-representation of $M$ with $P(\sigma)=G$, of extension $e(\sigma)$ to $G$, and $Q$ a parabolic subgroup of $G$ containing $P$. We still assume that $\Delta_P$ and $\Delta\setminus \Delta_P$ are orthogonal. The $\mathcal{H}_R$-modules $e(\sigma^{\mathcal{U}_M})=e(\sigma)^\mathcal{U}$ are equal (Theorem \ref{thm:ouf}). We compute $I_G(P,\sigma,Q)^\mathcal{U}=(e (\sigma)\otimes_R \St_Q^G)^{\mathcal U}$. \begin{theorem}\label{thm:invariant} The natural linear maps $e(\sigma)^\mathcal{U}\otimes _R (\Ind_Q^G {\bf 1})^\mathcal{U}\to (e(\sigma)\otimes _R \Ind_Q^G {\bf 1})^\mathcal{U}$ and $e(\sigma)^\mathcal{U}\otimes _R (\St_Q^G)^\mathcal{U}\to (e(\sigma)\otimes _R \St_Q^G)^\mathcal{U}$ are isomorphisms. \end{theorem} \begin{proof} We need some preliminaries.
In \cite{MR3263032}, \cite{MR3402357}, is introduced a finite free $\mathbb Z$-module $\mathfrak M$ (depending on $\Delta_Q$) and a $\mathcal B$-equivariant embedding $\St_Q^G \mathbb Z \xrightarrow{\iota} C_c^\infty(\mathcal B, \mathfrak M)$ (we indicate the coefficient ring in the Steinberg representation) which induces an isomorphism $(\St_Q^G \mathbb Z)^{\mathcal B}\simeq C_c^\infty(\mathcal B, \mathfrak M) ^{\mathcal B}$. \begin{lemma} \label{lemma:facteur} \begin{enumerate} \item $(\Ind_Q^G \mathbb Z)^{\mathcal B}$ is a direct factor of $\Ind_Q^G \mathbb Z $. \item $(\St_Q^G \mathbb Z)^{\mathcal B}$ is a direct factor of $\St_Q^G \mathbb Z$. \end{enumerate}\end{lemma} \begin{proof} (1) \cite[Example~2.2]{arXiv:1703.05599}. (2) As $\mathfrak M$ is a free $\mathbb Z$-module, $C_c^\infty(\mathcal B, \mathfrak M) ^{\mathcal B}$ is a direct factor of $C_c^\infty(\mathcal B, \mathfrak M) $. Consequently, $\iota((\St_Q^G \mathbb Z)^{\mathcal B})= C_c^\infty(\mathcal B, \mathfrak M) ^{\mathcal B}$ is a direct factor of $\iota(\St_Q^G \mathbb Z)$. As $\iota$ is injective, we get (2). \end{proof} We prove now Theorem \ref{thm:invariant}. We may and do assume that $\sigma$ is $e$-minimal (because $P(\sigma)=P(\sigma_{\min}), e(\sigma)=e(\sigma_{\min})$) so that $\Delta_M$ and $ \Delta \setminus \Delta_M$ are orthogonal and we use the same notation as in \S \ref{S:8.0} in particular $M_2= M_{ \Delta \setminus \Delta_M}$. Let $V$ be the space of $e(\sigma)$ on which $M'_2$ acts trivially. The restriction of $\Ind_Q^G \mathbb Z $ to $M_2$ is $\Ind_{Q\cap M_2}^{M_2} \mathbb Z $, that of $\St_{Q }^{G} \mathbb Z $ is $\St_{Q\cap M_2}^{M_2} \mathbb Z $. As in \cite[Example~2.2]{arXiv:1703.05599},$((\Ind_{Q\cap M_2}^{M_2} \mathbb Z)\otimes V)^{\mathcal{U}_{M'_2}} \simeq (\Ind_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M'_2}} \otimes V$. We have $$(\Ind_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M'_2}} =(\Ind_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M_2}} =(\Ind_{Q}^{G} \mathbb Z)^{\mathcal{U} } .$$ The first equality follows from $M_2=(Q\cap M_2)\mathbb W_{M_2}\mathcal{U}_{M_2}$, $\mathcal{U}_{M_2}=Z^1\mathcal{U}_{M'_2}$ and $Z^1$ normalizes $\mathcal{U}_{M'_2}$ and is normalized by $\mathbb W_{M_2}$. The second equality follows from $\mathcal{U}=\mathcal{U}_{M'} \mathcal{U} _{M_2}$ and $\Ind_Q^G \mathbb Z$ is trivial on $M'$. Therefore $((\Ind_{Q }^{G} \mathbb Z)\otimes V)^{\mathcal{U}_{M'_2}} \simeq (\Ind_{Q }^{G} \mathbb Z)^{\mathcal{U} } \otimes V$. Taking now fixed points under $\mathcal{U}_M$, as $\mathcal{U}=\mathcal{U}_{M'_2}\mathcal{U}_M$, $$((\Ind_{Q }^{G} \mathbb Z)\otimes V)^{\mathcal{U} } \simeq ((\Ind_{Q }^{G} \mathbb Z)^{\mathcal{U} } \otimes V)^{\mathcal{U}_M}= (\Ind_{Q }^{G} \mathbb Z)^{\mathcal{U} } \otimes V ^{\mathcal{U}_M}$$ The equality uses that the $\mathbb Z$-module $\Ind_{Q }^{G} \mathbb Z$ is free. We get the first part of the theorem as $(\Ind_{Q }^{G} \mathbb Z)^{\mathcal{U} } \otimes V ^{\mathcal{U}_M}\simeq (\Ind_{Q }^{G} R)^{\mathcal{U} } \otimes _RV ^{\mathcal{U}_M}$. Tensoring with $R$ the usual exact sequence defining $\St_Q^G \mathbb Z$ gives an isomorphism $\St_Q^G \mathbb Z \otimes R \simeq \St_Q^G R$ and in loc.\ cit.\ it is proved that the resulting map $\St_Q^G R \xrightarrow{\iota_R} C^\infty(\mathcal B, \mathfrak M \otimes R)$ is also injective. 
Their proof in no way uses the ring structure of $R$, and for any $\mathbb Z$-module $V$, tensoring with $V$ gives a $\mathcal B$-equivariant embedding $\St_Q^G \mathbb Z \otimes V \xrightarrow{\iota_V} C_c^\infty(\mathcal B, \mathfrak M \otimes V)$. The natural map $(\St_Q^G \mathbb Z)^{\mathcal B}\otimes V \to \St_Q^G \mathbb Z\otimes V$ is also injective by Lemma \ref{lemma:facteur} (2). Taking $\mathcal B$-fixed points we get inclusions \begin{equation}\label{eq:|B}(\St_Q^G \mathbb Z)^{\mathcal B}\otimes V \to (\St_Q^G \mathbb Z\otimes V)^{\mathcal B} \to C_c^\infty(\mathcal B, \mathfrak M \otimes V)^{\mathcal B} \simeq \mathfrak M \otimes V. \end{equation} The composite map is surjective, so the inclusions are isomorphisms. The image of $\iota_V$ consists of functions which are left $Z^0$-invariant, and $\mathcal B=Z^0 \mathcal{U}'$ where $\mathcal{U}'=G'\cap \mathcal{U}$. It follows that $\iota$ yields an isomorphism $(\St_Q^G \mathbb Z)^{\mathcal{U}'}\simeq C_c^\infty(Z^0\backslash \mathcal B, \mathfrak M) ^{\mathcal{U}'}$ again consisting of the constant functions. So that in particular $(\St_Q^G \mathbb Z)^{\mathcal{U}'}=(\St_Q^G \mathbb Z)^{\mathcal B}$ and reasoning as previously we get isomorphisms \begin{equation}\label{eq:|U'}(\St_Q^G \mathbb Z)^{\mathcal{U}'}\otimes V \simeq (\St_Q^G \mathbb Z\otimes V)^{\mathcal{U}'} \simeq \mathfrak M \otimes V. \end{equation} The equality $(\St_Q^G \mathbb Z)^{\mathcal{U}'}=(\St_Q^G \mathbb Z)^{\mathcal B}$ and the isomorphisms remain true when we replace $\mathcal{U}'$ by any group between $\mathcal B$ and $\mathcal{U}' $. We apply these results to $\St_{Q\cap M_2}^{M_2} \mathbb Z \otimes V$ to get that the natural map $(\St_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M'_2}} \otimes V\to (\St_{Q\cap M_2}^{M_2} \mathbb Z \otimes V)^{\mathcal{U}_{M'_2}}$ is an isomorphism and also that $(\St_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M'_2}}=(\St_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M_2}}$. We have $\mathcal{U}=\mathcal{U}_{M'}\mathcal{U}_{M_2}$ so $(\St_{Q}^{G} \mathbb Z)^{\mathcal{U}}=(\St_{Q\cap M_2}^{M_2} \mathbb Z)^{\mathcal{U}_{M_2}}$ and the natural map $(\St_{Q}^{G} \mathbb Z)^{\mathcal{U} } \otimes V\to (\St_{Q }^{G} \mathbb Z \otimes V)^{\mathcal{U}_{M'_2}}$ is an isomorphism. The $\mathbb Z$-module $(\St_{Q}^{G} \mathbb Z)^{\mathcal{U} }$ is free and the $V^{\mathcal{U}_M}= V^{\mathcal{U}}$, so taking fixed points under $\mathcal{U}_M$, we get $(\St_{Q}^{G} \mathbb Z)^{\mathcal{U} } \otimes V^\mathcal{U}\simeq (\St_{Q }^{G} \mathbb Z \otimes V)^{\mathcal{U}}$. We have $\St_{Q }^{G} \mathbb Z \otimes V=\St_{Q }^{G} R \otimes_R V$ and $(\St_{Q}^{G} \mathbb Z)^{\mathcal{U} } \otimes V^\mathcal{U}= (\St_{Q}^{G} R)^{\mathcal{U} } \otimes_R V^\mathcal{U}$. This ends the proof of the theorem. \end{proof} \begin{theorem} \label{thm:main8} The $\mathcal{H}_R$-modules $(e(\sigma) \otimes_R \Ind_Q^G{\bf 1} )^{\mathcal U}= e(\sigma)^\mathcal{U}\otimes_R(\Ind_Q^G{\bf 1} )^{\mathcal U}$ are equal. The $\mathcal{H}_R$-modules $(e (\sigma)\otimes_R \St_Q^G)^{\mathcal U}=e(\sigma )^\mathcal{U}\otimes_R (\St_Q^G )^\mathcal{U}$ are also equal. \end{theorem} \begin{proof} We already know that the $R$-modules are equal (Theorem \ref{thm:invariant}). We show that they are equal as $\mathcal{H}$-modules. 
The $\mathcal{H}_R$-modules $ e(\sigma)^\mathcal{U}\otimes_R(\Ind_Q^G{\bf 1} )^{\mathcal U} = e_\mathcal{H}(\sigma^{\mathcal{U}_M})\otimes_R(\Ind_Q^G{\bf 1} )^{\mathcal U}$ are equal (Theorem~\ref{thm:ouf}), they are isomorphic to $\Ind_{\mathcal{H}_Q}^\mathcal{H}(e_{\mathcal{H}_Q}(\sigma^{\mathcal{U}_M}))$ (Proposition \ref{prop:ovvv}), to $(\Ind_Q^G (e_Q(\sigma)))^\mathcal{U}$ \cite[Proposition 4.4]{arXiv:1703.04921} and to $ (e(\sigma)\otimes_R \Ind_Q^G{\bf 1})^\mathcal{U}$ \cite[Lemma~2.5]{arXiv:1703.05599}. We deduce that the $\mathcal{H}_R$-modules $e (\sigma)^{\mathcal U}\otimes_R (\Ind_{Q} ^{G} {\bf 1})^\mathcal{U}= (e (\sigma)\otimes_R \Ind_{Q} ^{G} {\bf 1})^{\mathcal U}$ are equal. The same is true when $Q$ is replaced by a parabolic subgroup $Q'$ of $G$ containing $Q$. The representation $e (\sigma)\otimes_R \St_Q^G$ is the cokernel of the natural $R[G]$-map \begin{align*} \oplus _{Q\subsetneq Q'}e(\sigma) \otimes_R\Ind_{Q'}^G{\bf 1} \xrightarrow{\alpha_Q} e(\sigma) \otimes_R\Ind_Q^G{\bf 1} \end{align*} and the $\mathcal{H}_R$-module $e (\sigma)^{\mathcal U}\otimes_R (\St_Q^G)^{\mathcal U}$ is the cokernel of the natural $\mathcal{H}_R$-map \begin{align*} \oplus _{Q\subsetneq Q'}e(\sigma)^\mathcal{U} \otimes_R(\Ind_{Q'}^G{\bf 1})^{\mathcal U}\xrightarrow{\beta_Q} e(\sigma)^\mathcal{U}\otimes_R(\Ind_Q^G{\bf 1} )^{\mathcal U} \end{align*} obtained by tensoring \eqref{eq:StQU} by $e(\sigma)^\mathcal{U}$ over $R$, because the tensor product is right exact. The maps $\beta_Q=\alpha_Q^{\mathcal U}$ are equal and the $R$-modules $e(\sigma)^{\mathcal U}\otimes_R(\St_{Q} ^{G} )^\mathcal{U}= (e (\sigma)\otimes_R \St_{Q} ^{G})^{\mathcal{U} }$ are equal. This implies that the $\mathcal{H}_R$-modules $e(\sigma)^{\mathcal U}\otimes_R(\St_{Q} ^{G} )^\mathcal{U}= (e (\sigma)\otimes_R \St_{Q} ^{G})^{\mathcal{U} }$ are equal. \end{proof} \begin{remark}\label{rem:surj} The proof shows that the representations $e(\sigma)\otimes_R \Ind_Q^G {\bf 1}$ and $e(\sigma)\otimes_R \St_Q^G$ of $G$ are generated by their $\mathcal{U}$-fixed vectors if the representation $\sigma$ of $M$ is generated by its $\mathcal{U}_M$-fixed vectors. Indeed, the $R$-modules $e(\sigma)^\mathcal{U}=\sigma^{\mathcal{U}_M}, (\Ind_Q^G {\bf 1})^{\mathcal{U}_{M'_2}}=(\Ind_Q^G {\bf 1})^\mathcal{U}$ are equal. If $\sigma^{\mathcal{U}_M}$ generates $\sigma$, then $e(\sigma)$ is generated by $e(\sigma)^\mathcal{U}$. The representation $\Ind_Q^G {\bf 1}|_{M'_2}$ is generated by $(\Ind_Q^G {\bf 1})^{\mathcal{U}}$ (this follows from the lemma below); we have $G=MM'_2$ and $M'_2$ acts trivially on $e(\sigma)$. Therefore the $R[G]$-module generated by $\sigma^{\mathcal{U}} \otimes_R (\Ind_Q^G {\bf 1})^{\mathcal{U}}$ is $e(\sigma)\otimes_R \Ind_Q^G {\bf 1}$. As $e(\sigma)\otimes_R \St_Q^G$ is a quotient of $e(\sigma)\otimes_R \Ind_Q^G {\bf 1}$, the $R[G]$-module generated by $\sigma^{\mathcal{U}} \otimes_R (\St_Q^G)^{\mathcal{U}}$ is $e(\sigma)\otimes_R \St_Q^G$. \end{remark} \begin{lemma}For any standard parabolic subgroup $P$ of $G$, the representation $\Ind_P^G {\bf 1}|_{G'}$ is generated by its $\mathcal{U}$-fixed vectors. \end{lemma} \begin{proof} Because $G=PG'$ it suffices to prove that if $J$ is an open compact subgroup of $\overline N$ the characteristic function $1_{PJ}$ of $PJ$ is a finite sum of translates of $1_{P\mathcal{U}}= 1_{P\mathcal{U}_{\overline N}}$ by $G'$. For $t\in T$ we have $ P\mathcal{U} t=Pt^{-1}\mathcal{U}_{\overline N}t$ and we can choose $t\in T\cap G'$ such that $t^{-1}\mathcal{U}_{\overline N}t \subset J$.
\end{proof} \subsection{General triples} Let $P=MN$ be a standard parabolic subgroup of $G$. We now investigate situations where $\Delta_P$ and $\Delta\setminus \Delta_P$ are not necessarily orthogonal. Let $ \mathcal V $ a right $\mathcal{H}_{M,R}$-module. \begin{definition}\label{def:Htriple} Let $P(\mathcal{V})=M(\mathcal{V}) N(\mathcal{V})$ be the standard parabolic subgroup of $G$ with $\Delta_{P(\mathcal{V})}= \Delta_P \cup \Delta_{ \mathcal V}$ and $$ \Delta_{ \mathcal V}= \{\alpha \in \Delta \ \text{ orthogonal to $ \Delta_M$, $ T^{M,*}(z)$ acts trivially on $\mathcal V$ for all $z\in Z\cap M'_\alpha$}\} .$$ If $Q$ is a parabolic subgroup of $G$ between $P$ and $P(\mathcal{V})$, the triple $(P,\mathcal{V},Q)$ called an $\mathcal{H}_R$-triple, defines a right $\mathcal{H}_R$-module $I_\mathcal{H}(P,\mathcal{V},Q)$ equal to $$\Ind_ {\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H}( e(\mathcal V)\otimes_R (\St_{Q\cap M(\mathcal{V})}^{M(\mathcal{V})})^{\mathcal U_{{M(\mathcal{V})}}})= ( e(\mathcal V)\otimes_R (\St_{Q\cap M(\mathcal{V})}^{M(\mathcal{V})})^{\mathcal U_{{M(\mathcal{V})}}}) \otimes_{\mathcal{H}_{M(\mathcal{V})^+,R},\theta} \mathcal{H}_R$$ where $e(\mathcal V)$ is the extension of $\mathcal{V}$ to $\mathcal{H}_{M(\mathcal{V})}$. \end{definition} This definition is justified by the fact that $M(\mathcal{V})$ is the maximal standard Levi subgroup of $G$ such that the $\mathcal{H}_{M,R}$-module $\mathcal{V}$ is extensible to $\mathcal{H}_{M(\mathcal{V})}$: \begin{lemma}\label{lemma:ext} $\Delta_{\mathcal V}$ is the maximal subset of $\Delta \setminus \Delta_P$ orthogonal to $\Delta_P$ such that $T^{M,*}_\lambda$ acts trivially on $\mathcal V$ for all $\lambda \in \Lambda(1) \cap {}_1W_{M'_{\mathcal V}}$. \end{lemma} \begin{proof} For $J\subset \Delta$ let $M_J$ denote the standard Levi subgroup of $G$ with $\Delta_{M_J}=J$. The group $Z\cap M'_J$ is generated by the $Z\cap M'_\alpha$ for all $\alpha \in J$ (Lemma \ref{lemma:ZG'}). When $J$ is orthogonal to $\Delta_M$ and $\lambda\in \Lambda_{M'_J}(1)$, $\ell_M( \lambda)=0$ where $\ell_M$ is the length associated to $S_M^{\mathrm{aff}}$, and the map $\lambda \mapsto T_\lambda^{M,*}=T_\lambda^M: \Lambda_{M'_J}(1)\to \mathcal{H}_M$ is multiplicative. \end{proof} The following is the natural generalisation of Proposition \ref{prop:ovvv} and Corollary \ref{cor:IHPVQ}. Let $Q'$ be a parabolic subgroup of $G$ with $Q\subset Q'\subset P(\mathcal{V})$. Applying the results of \S \ref{S:9.1} to $M(\mathcal{V})$ and its standard parabolic subgroups $Q\cap M(\mathcal{V})\subset Q'\cap M(\mathcal{V})$, we have an $\mathcal{H}_{M(\mathcal{V}),R}$-isomorphism \begin{align*} \Ind_{\mathcal{H}_{Q}}^{\mathcal{H}_{M(\mathcal{V})}}( e_{\mathcal{H}_{Q}}(\mathcal V))= e_{\mathcal{H}_{Q}}(\mathcal V) \otimes_{ \mathcal{H}_{M_Q^+},\theta} \mathcal{H}_{M(\mathcal{V}),R}&\xrightarrow{\kappa_{Q\cap M(\mathcal{V})}} e(\mathcal V)\otimes_R (\Ind_{Q\cap M(\mathcal{V})}^{M(\mathcal{V})} {\bf 1})^{\mathcal U_{M(\mathcal{V})}}\\ v\otimes 1_{\mathcal{H}} &\mapsto v\otimes f_{Q\mathcal U\cap M(\mathcal{V})}: \end{align*} and an $\mathcal{H}_{M(\mathcal{V}),R}$-embedding \begin{align*}\Ind_{\mathcal{H}_{Q'}}^{\mathcal{H}_{M(\mathcal{V})}}( e_{\mathcal{H}_{Q'}} (\mathcal V))&\xrightarrow{\iota(Q\cap M(\mathcal{V}),Q'\cap M(\mathcal{V})) } \Ind_{\mathcal{H}_Q}^{\mathcal{H}_{M(\mathcal{V})}}( e_{\mathcal{H}_Q} (\mathcal V))\\ v \otimes 1_{\mathcal{H}_{M(\mathcal{V})}}&\mapsto v \otimes \theta_{Q' }^{P(\mathcal{V})} (e_{ Q }^{Q' }). 
\end{align*} Applying the parabolic induction $\Ind_ {\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H}$ which is exact and transitive, we obtain an $\mathcal{H}_R$-isomorphism $\kappa_Q=\Ind_ {\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H}(\kappa_{Q\cap M(\mathcal{V})})$, \begin{align}\label{eq:kQ} \Ind_{\mathcal{H}_{Q}}^\mathcal{H} (e_{\mathcal{H}_{Q}}(\mathcal{V}) )\xrightarrow{\kappa_Q} \Ind_ {\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H}( e(\mathcal V)\otimes_R (\Ind_{Q\cap M(\mathcal{V})}^{M(\mathcal{V})} {\bf 1}_{M_Q})^{\mathcal U_{{M(\mathcal{V})}}}) \end{align} $$ v \otimes 1_\mathcal{H}\mapsto v\otimes f_{Q\mathcal{U}_{M(\mathcal{V})}}\otimes 1_\mathcal{H} \quad \quad \quad \quad \quad \quad \quad $$ and an $\mathcal{H}_R$-embedding $\iota(Q,Q') = \Ind_{\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H} (\iota(Q,Q')^{M(\mathcal{V})})$ \begin{equation}\label{eq:iotaQQ'}v \otimes 1_{\mathcal{H} }\mapsto v \otimes \theta_{Q' } (e_{ Q }^{Q' }):\Ind_{\mathcal \mathcal{H}_{Q'}}^\mathcal{H}(e_{\mathcal{H}_{Q'}}(\mathcal V)) \xrightarrow{\iota(Q,Q') } \Ind_{\mathcal{H}_{Q}}^\mathcal{H}(e_{\mathcal{H}_{Q}}(\mathcal V)).\end{equation} Applying Corollary \ref{cor:IHPVQ} we obtain: \begin{theorem} \label{thm:IHP}Let $(P,\mathcal V,Q)$ be an $\mathcal{H}_R$-triple. Then, the cokernel of the $\mathcal{H}_R$-map $$\oplus_{Q\subsetneq Q'\subset P(\mathcal{V})} \Ind_{\mathcal{H}_{Q'}}^\mathcal{H}( e_{\mathcal{H}_{Q'}} (\mathcal V))\to \Ind_{\mathcal{H}_{Q}}^\mathcal{H} ( e_{\mathcal{H}_{Q}}(\mathcal V)),$$ defined by the $\iota(Q,Q') $ is isomorphic to $ I_\mathcal{H}(P,\mathcal{V},Q)$ via the $\mathcal{H}_R$-isomorphism $\kappa_Q$. \end{theorem} Let $\sigma$ be a smooth $R$-representation of $M$ and $Q$ a parabolic subgroup of $G$ with $P\subset Q\subset P(\sigma)$. \begin{remark} The $\mathcal{H}_R$-module $ I_\mathcal{H} (P ,\sigma^{\mathcal{U}_M} ,Q)$ is defined if $\Delta_Q \setminus \Delta_P$ and $\Delta_P$ are orthogonal because $Q\subset P(\sigma)\subset P(\sigma^{\mathcal{U}_M})$ (Theorem \ref{thm:ouf}). \end{remark} We denote here by $P_{\min}=M_{\min}N_{\min}$ the minimal standard parabolic subgroup of $G$ contained in $P$ such that $\sigma=e_P( \sigma|_{M_{\min}} )$ (Lemma \ref {lemma:min}, we drop the index $\sigma$). The sets of roots $\Delta_{P_{\min}}$ and $\Delta_{P(\sigma|_{M_{\min}})}\setminus \Delta_{P_{\min}}$ are orthogonal (Lemma \ref{lemma:2.2}). The groups $P(\sigma)=P( \sigma|_{M_{\min}})$, the representations $e(\sigma)=e(\sigma|_{M_{\min}})$ of $M(\sigma)$, the representations $I_G(P,\sigma,Q)=I_G(P_{\min},\sigma|_{M_{\min}} ,Q)=\Ind_{P(\sigma)}^G (e(\sigma ) \otimes_R \St_{Q }^{P(\sigma)})$ of $G$, and the $R$-modules $\sigma ^{\mathcal{U}_{M_{\min}}}=\sigma^{\mathcal{U}_M}$ are equal. From Theorem \ref{thm:ouf}, $$P(\sigma) \subset P(\sigma ^{\mathcal{U}_{M_{\min}}}), \quad e_{\mathcal{H}_{M(\sigma)}} (\sigma^{\mathcal U_{M_{\min}}})= e(\sigma)^{\mathcal U_{M(\sigma)}}, $$ and $P(\sigma^{\mathcal U_{M(\sigma)}})= P(\sigma)$ if $\sigma^{\mathcal U_{M(\sigma)}}$ generates the representation $\sigma|_{M_{\min}}$. 
The $\mathcal{H}_R$-module $$ I_\mathcal{H} (P_{\min},\sigma ^{\mathcal{U}_{M_{\min}}},Q)=\Ind_ {\mathcal{H}_{M(\sigma ^{\mathcal{U}_{M_{\min}}})}}^\mathcal{H}( e(\sigma ^{\mathcal{U}_{M_{\min}}})\otimes_R (\St_{Q } ^{P(\sigma ^{\mathcal{U}_{M_{\min}}})})^{\mathcal{U}_{M(\sigma ^{\mathcal{U}_{M_{\min}}})}})$$ is defined because $\Delta_{P_{\min}}$ and $\Delta_{P(\sigma ^{\mathcal{U}_{M_{\min}}})}\setminus \Delta_{P_{\min}}$ are orthogonal and $P\subset Q\subset P(\sigma) \subset P(\sigma ^{\mathcal{U}_{M_{\min}}})$. \begin{remark} If $\sigma^{\mathcal U_{M(\sigma)}}$ generates the representation $\sigma|_{M_{\min}}$ (in particular if $R=C$ and $\sigma$ is irreducible), then $P(\sigma)=P(\sigma ^{\mathcal{U}_{M_{\min}}})$ hence $$ I_\mathcal{H} (P_{\min},\sigma ^{\mathcal{U}_{M_{\min}}},Q)= \Ind_{\mathcal{H}_{M(\sigma)}}^\mathcal{H} (e_{\mathcal{H}_{M(\sigma)}} (\sigma^{\mathcal U_{M_{\min}}})\otimes_R (\St_{Q\cap M(\sigma)}^{M(\sigma)} )^{\mathcal{U}_{M(\sigma)}}).$$ \end{remark} Applying Theorem \ref{thm:main8} to $(P_{\min} \cap M(\sigma), \sigma|_{M_{\min}}, Q\cap M(\sigma))$, the $\mathcal{H}_{M(\sigma),R}$-modules \begin{equation}\label{eq:main8}e_{\mathcal{H}_{M(\sigma)}} (\sigma^{\mathcal U_{M_{\min}}})\otimes_R (\St_{Q\cap M(\sigma)}^{M(\sigma)})^{\mathcal{U}_{M(\sigma)}}=(e_{M(\sigma)}(\sigma ) \otimes_R \St_{Q\cap M(\sigma)}^{M(\sigma)})^{\mathcal{U}_{M(\sigma)}} \end{equation} are equal. We have the $\mathcal{H}_R$-isomorphism \cite[Proposition 4.4]{arXiv:1703.04921}: \begin{align*} I_G(P,\sigma,Q)^\mathcal{U}=(\Ind_{P(\sigma)}^G (e(\sigma ) \otimes_R \St_{Q }^{P(\sigma)}))^{\mathcal{U}}&\xrightarrow{ov} \Ind_{\mathcal{H}_{M(\sigma)}}^\mathcal{H} ((e(\sigma ) \otimes_R \St_{Q\cap M(\sigma)}^{M(\sigma)})^{\mathcal{U}_{M(\sigma)}})\\ f_{P(\sigma)\mathcal{U},x}&\mapsto x\otimes 1_{\mathcal{H} } \quad (x\in (e(\sigma ) \otimes_R \St_{Q\cap M(\sigma)}^{M(\sigma)})^{\mathcal{U}_{M(\sigma)}}).\end{align*} We deduce: \begin{theorem}\label{thm:main} Let $(P,\sigma,Q)$ be an $R[G]$-triple. Then, we have the $\mathcal{H}_R$-isomorphism $$ I_G(P,\sigma,Q)^\mathcal{U}\xrightarrow{ov}\Ind_{\mathcal{H}_{M(\sigma)}}^\mathcal{H} (e_{\mathcal{H}_{M(\sigma)}} (\sigma^{\mathcal U_{M_{\min}}})\otimes_R (\St_{Q\cap M(\sigma)}^{M(\sigma)} )^{\mathcal{U}_{M(\sigma)}}).$$ In particular, $$ I_G(P,\sigma,Q)^\mathcal{U}\simeq \begin{cases} I_\mathcal{H} (P_{\min},\sigma ^{\mathcal{U}_{M_{\min}}},Q) &\ \text{if} \ P(\sigma)=P(\sigma ^{\mathcal{U}_{M_{\min}}})\\ I_\mathcal{H} (P ,\sigma ^{\mathcal{U}_{M }},Q) &\ \text{if} \ P=P_{\min}, P(\sigma)=P(\sigma ^{\mathcal{U}_M})\end{cases}. $$ \end{theorem} \subsection{Comparison of the parabolic induction and coinduction}\label{S:comp} Let $P=MN$ be a standard parabolic subgroup of $G$, $\mathcal V$ a right $\mathcal{H}_{M,R}$-module and $Q $ a parabolic subgroup of $G$ with $P\subset Q \subset P(\mathcal V)$. When $R=C$, in \cite{arXiv:1406.1003_accepted}, we associated to $(P,\mathcal{V},Q)$ an $\mathcal{H}_R$-module using the parabolic coinduction $$\Coind_{\mathcal{H}_M}^\mathcal{H}(-)=\Hom_{\mathcal{H}_{ M ^-, \theta^*}}(\mathcal{H}, -):\Mod_R(\mathcal{H}_M)\to \Mod_R(\mathcal{H})$$ instead of the parabolic induction $\Ind_{\mathcal{H}_M}^\mathcal{H}(-)=- \otimes_{\mathcal{H}_{M^+}, \theta}\mathcal{H}$. The index $\theta^*$ in the parabolic coinduction means that $\mathcal{H}_{M_{Q}^-} $ embeds in $\mathcal{H}$ by $\theta_Q^*$. Our terminology is different from the one in \cite{arXiv:1406.1003_accepted} where the parabolic coinduction is called induction.
For a parabolic subgroup $Q'$ of $G$ with $ Q\subset Q' \subset P(\mathcal V)$, there is a natural inclusion of $\mathcal{H}_R$-modules \cite[Proposition 4.19]{arXiv:1406.1003_accepted} \begin{equation}\label{eq:canonicalH} \Hom_{\mathcal{H}_{ M_{Q'}^-, \theta^*}}(\mathcal{H}, e_{\mathcal{H}_{Q'} }(\mathcal V))\xrightarrow{i(Q,Q')} \mathcal \Hom_{\mathcal{H}_{M_{Q}^-, \theta^*}}(\mathcal{H},e_{\mathcal{H}_{Q}}(\mathcal V)). \end{equation} because $\theta^*(\mathcal{H}_{ M_{Q}^-})\subset \theta^*(\mathcal{H}_{ M_{Q'}^-})$ as $W_{M_Q^-}(1)\subset W_{M_{Q'}^-}(1)$, and $vT_w^{ M_{Q'}*}=v T^{M_Q *}_w$ for $w\in W_{M_Q^-}(1)$ and $v\in \mathcal V$. \begin{definition} \label{def:coind} Let $ CI_{\mathcal{H}}(P,\mathcal V,Q)$ denote the cokernel of the map $$\oplus_{Q \subsetneq Q'\subset P(\mathcal V)} \ \Hom_{\mathcal{H}_{ M_{Q'}^-, \theta^*}}(\mathcal{H}, e_{\mathcal{H}_{Q'} }(\mathcal V))\to \mathcal \Hom_{\mathcal{H}_{M_{Q}^-, \theta^*}}(\mathcal{H},e_{\mathcal{H}_{Q}}(\mathcal V)) $$ defined by the $\mathcal{H}_R$-embeddings $i(Q,Q')$. \end{definition} When $R=C$, we showed that the $\mathcal{H}_C$-module $\mathcal CI_{\mathcal{H}}(P,\mathcal V,Q)$ is simple when $ \mathcal V$ is simple and supersingular (Definition \ref{def:supersingularH}), and that any simple $\mathcal{H}_C$-module is of this form for a $\mathcal{H}_C$-triple $(P,\mathcal{V},Q)$ where $ \mathcal V$ is simple and supersingular, $P,Q$ and the isomorphism class of $\mathcal{V}$ are unique~\cite{arXiv:1406.1003_accepted}. The aim of this section is to compare the $\mathcal{H}_R$-modules $I_\mathcal{H}(P,\mathcal{V},Q)$ with the $\mathcal{H}_R$-modules $CI_\mathcal{H}(P,\mathcal{V},Q)$ and to show that the classification is also valid with the $\mathcal{H}_C$-modules $\mathcal I_{\mathcal{H}}(P,\mathcal V,Q)$. \bigskip It is already known that a parabolically coinduced module is a parabolically induced module and vice versa \cite{arXiv:1406.1003_accepted} \cite{MR3437789}. To make it more precise we need to introduce notations. We lift the elements $w$ of the finite Weyl group $\mathbb W$ to $\hat w \in \mathcal N_G\cap \mathcal K$ as in \cite[IV.6]{MR3600042}, \cite[Proposition 2.7]{arXiv:1703.04921}: they satisfy the braid relations $\hat w _1\hat w_2 =(w_1 w_2)\hat {} $ when $\ell(w_1)+\ell(w_2)=\ell(w_1w_2)$ and when $s\in S$, $\hat s$ is admissible, in particular lies in $ {}_1 W_{G'}$. Let $\mathbf w, \mathbf w_M, \mathbf w^M $ denote respectively the longest elements in $\mathbb W, \mathbb W_M$ and $\mathbf w \mathbf w_M$. We have $\mathbf w=\mathbf w^{-1} = \mathbf w^M \mathbf w_M, \mathbf w_M= \mathbf w_M^{-1}$, $\hat{ \mathbf w}= \hat{ \mathbf w}^M \hat{ \mathbf w}_M$, $$\mathbf w^M( \Delta_M)= - \mathbf w ( \Delta_M) \subset \Delta, \quad\mathbf w^M(\Phi^+ \setminus \Phi^+_M)=\mathbf w (\Phi^+ \setminus \Phi^+_M) .$$ Let $\mathbf w.M$ be the standard Levi subgroup of $G$ with $\Delta_{\mathbf w.M}=\mathbf w^M( \Delta_M)$ and $\mathbf w.P$ the standard parabolic subgroup of $G$ with Levi $\mathbf w.M$. 
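To fix ideas, here is a purely illustrative example of this notation; the choice of root system is hypothetical and the example is not used in the sequel. Assume the root system of $G$ is irreducible of type $A_2$, with $\Delta=\{\alpha_1,\alpha_2\}$ and corresponding simple reflections $s_1,s_2$, so that $\mathbb W\simeq S_3$, and take $\Delta_M=\{\alpha_1\}$. Then $$\mathbf w=s_1s_2s_1, \quad \mathbf w_M=s_1, \quad \mathbf w^M=\mathbf w\,\mathbf w_M=s_1s_2, \quad \mathbf w^M(\alpha_1)=-\mathbf w(\alpha_1)=\alpha_2,$$ so that $\Delta_{\mathbf w.M}=\{\alpha_2\}$ and $\mathbf w.P$ is the other maximal standard parabolic subgroup of $G$.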
We have $$\mathbf w.M= \hat{ \mathbf w}^M M ( \hat{ \mathbf w}^M)^{-1}= \hat{ \mathbf w} M ( \hat{ \mathbf w} )^{-1}, \quad \mathbf w^{\mathbf w.M}= \mathbf w_M\mathbf w = ( \mathbf w^M)^{-1}.$$The conjugation $w\mapsto \mathbf w^M w ( \mathbf w^M)^{-1}$ in $W$ gives a group isomorphism $ W_M\to W_{\mathbf w.M} $ sending $S^{\mathrm{aff}}_M$ onto $S^{\mathrm{aff}}_{\mathbf w.M}$, respecting the finite Weyl subgroups $ \mathbf w^M \mathbb W_M ( \mathbf w^M)^{-1}= \mathbb W_{\mathbf w.M}=\mathbf w \mathbb W_M\mathbf w^{-1}$, and echanging $W_{M^+}$ and $W_{(\mathbf w.M)^-} =\mathbf w W_{M^+}\mathbf w^{-1}$. The conjugation by $ \tilde{ \mathbf w}^M$ restricts to a group isomorphism $W_M(1)\to W_{\mathbf w.M}(1)$ sending $W_{M^+}(1)$ onto $ W_{(\mathbf w.M)^-}(1)$. The linear isomorphism \begin{equation}\label{eq:isoH} \mathcal{H}_{ M}\xrightarrow{\iota ( \tilde{ \mathbf w}^M) }\mathcal{H}_{\mathbf w. M} \quad T^M_{w} \mapsto T^{\mathbf w.M}_{ \tilde{ \mathbf w}^M w ( \tilde{ \mathbf w}^M)^{-1}} \ \text{for} \ w\in W_M(1), \end{equation} is a ring isomorphism between the pro-$p$-Iwahori Hecke rings of $M$ and $\mathbf w. M$. It sends the positive part $\mathcal{H}_{ M^+}$ of $\mathcal{H}_M$ onto the negative part $ \mathcal{H}_{(\mathbf w. M)^-}$ of $\mathcal{H}_{\mathbf w. M}$ \cite[Proposition 2.20]{MR3437789}. We have $\tilde{ \mathbf w}=\tilde{ \mathbf w}_M\tilde{ \mathbf w}^{\mathbf w.M}=\tilde{ \mathbf w}^{M}\tilde{ \mathbf w}_M$, $(\tilde{ \mathbf w}^M )^{-1}=\tilde{ \mathbf w}^{\mathbf w.M}t_M$ where $t_M= \tilde{ \mathbf w}^2\tilde{ \mathbf w}_M^{-2} \in Z_k$. \begin{definition} The {\bf twist $\tilde{ \mathbf w} ^M.\mathcal V$ of $\mathcal{V}$ by $\tilde{ \mathbf w} ^M$} is the right $\mathcal{H}_{\mathbf w. M}$-module deduced from the right $\mathcal{H}_{ M}$-module $\mathcal V$ by functoriality: as $R$-modules $\tilde{ \mathbf w} ^M.\mathcal V=\mathcal V $ and for $v\in \mathcal V, w\in W_M(1)$ we have $ vT^{\mathbf w.M}_{ \tilde{ \mathbf w}^M w ( \tilde{ \mathbf w}^M)^{-1}}=v T_w^M$. \end{definition} We can define the twist $\tilde{ \mathbf w} ^M.\mathcal V$ of $\mathcal{V}$ with the $T_w^{M,*}$ instead of $T_w^{M}$. \begin{lemma} For $v\in \mathcal V, w\in W_M(1)$ we have $ vT^{\mathbf w.M,*}_{ \tilde{ \mathbf w}^M w ( \tilde{ \mathbf w}^M)^{-1}}=v T_w^{M,*} $ in $\tilde{ \mathbf w} ^M.\mathcal V$. \end{lemma} \begin{proof} By the ring isomorphism $\mathcal{H}_{ M}\xrightarrow{\iota ( \tilde{ \mathbf w}^M) }\mathcal{H}_{\mathbf w. M}$, we have $ c^{\mathbf w.M}_{\tilde{ \mathbf w}^M \tilde s ( \tilde{ \mathbf w}^M)^{-1}}= c^{M}_{ \tilde s }$ when $\tilde s\in W_M(1)$ lifts $s\in S_M^{\mathrm{aff}} $. So the equality of the lemma is true for $w=\tilde s $. Apply the braid relations to get the equality for all $w\in W_M(1)$. \end{proof} We return to the $\mathcal{H}_R$-module $\Hom_{\mathcal{H}_{ M ^-, \theta^*}}(\mathcal{H}, V)$ parabolically coinduced from $\mathcal{V}$. It has a natural direct decomposition indexed by the set $ \mathbb W^{ \mathbb W _M}$ of elements $d$ in the finite Weyl group $\mathbb W $ of minimal length in the coset $d \mathbb W _M$. Indeed it is known that the linear map $$f\mapsto (f(T_{\tilde d}))_{d\in \mathbb W^{ \mathbb W _M}}: \Hom_{\mathcal{H}_{ M ^-},\theta^*}(\mathcal{H}, \mathcal V)\to \oplus _{d\in \mathbb W^{ \mathbb W _M}} \mathcal V$$ is an isomorphism. 
For $v\in \mathcal V$ and $d\in \mathbb W^{ \mathbb W _M}$, there is a unique element $$f_{\tilde d,v} \in \Hom_{\mathcal{H}_{ M ^-},\theta^*}(\mathcal{H}, \mathcal V) \ \text{ satisfying $f(T_{\tilde d})=v$ and $f(T_{\tilde d'} ) =0$ for $d'\in \mathbb W^{ \mathbb W _M}\setminus \{d\}$}.$$ It is known that the map $v \mapsto f_{ \tilde {\mathbf w}^M,v}: \tilde {\mathbf w}^M.\mathcal V \to \Hom_{\mathcal{H}_{ M ^-},\theta^*}(\mathcal{H}, \mathcal V)$ is $\mathcal{H}_{(\mathbf w.M)^+ }$-equivariant: $f_{ \tilde {\mathbf w}^M,vT^{\mathbf w. M} _ w } = f_{ \tilde {\mathbf w}^M,v} T_ w $ for all $v\in \mathcal{V}, w \in W_{\mathbf w .M^+}(1) $. By adjunction, this $\mathcal{H}_{(\mathbf w.M)^+ }$-equivariant map gives an $\mathcal{H}_R$-homomorphism from an induced module to a coinduced module: \begin{equation}\label{eq:Abe} v\otimes 1_{\mathcal{H}}\mapsto f_{ \tilde {\mathbf w}^M,v}: \tilde {\mathbf w}^M.\mathcal V \otimes_{\mathcal{H}_{(\mathbf w.M)^+}, \theta}\mathcal{H}\xrightarrow{\mu_P}\Hom_{\mathcal{H}_{ M ^-},\theta^*}(\mathcal{H},\mathcal V). \end{equation} This is an isomorphism \cite{arXiv:1406.1003_accepted}, \cite{MR3437789} . The naive guess that a variant $\mu_Q$ of $\mu_P$ induces an $\mathcal{H}_R$-isomorphism between the $\mathcal{H}_R$-modules $I_\mathcal{H}(\mathbf w.P,\tilde {\mathbf w}^M.\mathcal V,\mathbf w.Q)$ and $ CI_{\mathcal{H}}(P,\mathcal V,Q)$ turns out to be true. The proof is the aim of the rest of this section. \bigskip The $\mathcal{H}_R$-module $I_\mathcal{H}(\mathbf w.P,\tilde {\mathbf w}^M.\mathcal V,\mathbf w.Q)$ is well defined because the parabolic subgroups of $G$ containing $\mathbf w. P$ and contained in $P(\tilde {\mathbf w}^M.\mathcal V) $ are $\mathbf w. Q$ for $P\subset Q \subset P(\mathcal{V})$, as follows from: \begin{lemma}\label{lemma:delt} $\Delta_{\tilde{ \mathbf w}^M.\mathcal{V}}=- {\mathbf w} (\Delta_{\mathcal{V}})$. \end{lemma} \begin{proof} Recall that $\Delta_{\mathcal V}$ is the set of simple roots $\alpha \in \Delta \setminus \Delta_M$ orthogonal to $\Delta_M$ and $T^{M,*}(z)$ acts trivially on $\mathcal V$ for all $z\in Z\cap M'_\alpha$, and the corresponding standard parabolic subgroup $P_{\mathcal V}=M_{\mathcal V}N_{\mathcal V}$. The $Z\cap M'_\alpha$ for $\alpha \in \Delta_{\mathcal V}$ generate the group $ Z\cap M'_{\mathcal V}$. A root $\alpha \in \Delta \setminus \Delta_M$ orthogonal to $\Delta_M$ is fixed by ${ \mathbf w}_M$ so ${ \mathbf w}^M(\alpha)=\mathbf w (\alpha)$ and $${\hat { \mathbf w}}^M M_\mathcal{V} ( {\hat{ \mathbf w}}^M)^{-1}= {\hat { \mathbf w}} M_\mathcal{V} ( {\hat{ \mathbf w}})^{-1}.$$ The proof of Lemma \ref{lemma:delt} is straightforward as $\Delta= -{\mathbf w}(\Delta)$, $ \Delta_{ \mathbf w. M}= -{\mathbf w}(\Delta_M)$. \end{proof} Before going further, we check the commutativity of the extension with the twist. As $Q=M_QU$ and $M_Q$ determine each other we denote ${\mathbf w}_{M_Q}= { \mathbf w}_{ Q}, {\mathbf w}^{M_Q}= {\mathbf w}^{ Q}$ when $Q\neq P,G$. \begin{lemma} \label{lemma:eH} $e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V)= \tilde{ \mathbf w}^{ Q}. e_{\mathcal{H}_{Q}}(\mathcal V)$. \end{lemma} \begin{proof} As $R$-modules $\mathcal{V}=e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V)= \tilde{ \mathbf w}^{ Q}. e_{\mathcal{H}_{Q}}(\mathcal V)$. 
A direct computation shows that the Hecke element $T^{{ \mathbf w}.Q,*}_w$ acts in the $\mathcal{H}_R$-module $ e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V)$, by the identity if $w\in \tilde { \mathbf w}^Q {}_1W_{M'_2} (\tilde{ \mathbf w}^Q)^{-1}$ and by $T^{M,*}_{(\tilde { \mathbf w}^Q)^{-1} w \tilde { \mathbf w}^Q}$ if $w\in \tilde { \mathbf w}^Q W_{M}(1) (\tilde{ \mathbf w}^Q)^{-1}$, where $M_2$ denotes the standard Levi subgroup with $\Delta_{M_2}=\Delta_Q\setminus \Delta_P$. Whereas in the $\mathcal{H}_R$-module $\tilde{ \mathbf w}^{ Q}. e_{\mathcal{H}_{Q}}(\mathcal V)$, the Hecke element $T^{{ \mathbf w}.Q,*}_w$ acts by the identity if $w\in {}_1 W_{{\mathbf w}.M'_2}$ and by $T^{M,*}_{(\tilde { \mathbf w}^M)^{-1} w \tilde { \mathbf w}^M}$ if $w\in W_{{\mathbf w}.M}(1)$. So the lemma means that $${}_1 W_{{\mathbf w}.M'_2}= \tilde { \mathbf w}^Q {}_1W_{M'_2} (\tilde{ \mathbf w}^Q)^{-1}, \quad (\tilde { \mathbf w}^Q)^{-1} w \tilde { \mathbf w}^Q =(\tilde { \mathbf w}^M)^{-1} w \tilde { \mathbf w}^M \ \text{if} \ w\in W_{{\mathbf w}.M}(1).$$ These properties are easily proved using that ${}_1W_{G'}$ is normal in $W(1)$ and that the sets of roots $\Delta_P$ and $\Delta_Q\setminus \Delta_P$ are orthogonal: ${ \mathbf w}_{ Q}= {\mathbf w}_{M_2}{\mathbf w}_M $, the elements ${\mathbf w}_{M_2}$ and ${\mathbf w}_M$ normalise $ W_M $ and $W_{M_2}$, and the elements of $\mathbb W_{M_2}$ commute with the elements of $\mathbb W_M$. \end{proof} We return to our guess. The variant $\mu_Q$ of $\mu_P$ is obtained by combining the commutativity of the extension with the twist and the isomorphism \eqref{eq:Abe} applied to $(Q, e_{\mathcal{H}_{Q}}(\mathcal V))$ instead of $(P,\mathcal{V})$. The $\mathcal{H}_R$-isomorphism $\mu_Q$ is: \begin{equation}\label{eq:muQ} v\otimes 1_{\mathcal{H}}\mapsto f_{ \tilde {\mathbf w}^{M_Q},v}: \Ind_{\mathcal{H}_{ \mathbf w.M_Q }}^\mathcal{H}(e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V))\xrightarrow{\mu_Q}\Hom_{\mathcal{H}_{ M_Q ^-},\theta^*}(\mathcal{H},e_{\mathcal{H}_{Q}}( \mathcal V)). \end{equation} Our guess is that $\mu_Q$ induces an $\mathcal{H}_R$-isomorphism from the cokernel of the $\mathcal{H}_R$-map $$\oplus_{Q\subsetneq Q'\subset P( \mathcal{V})} \Ind_{\mathcal{H}_{\mathbf w.Q'}}^\mathcal{H}( e_{\mathcal{H}_{\mathbf w.Q'}} (\tilde {\mathbf w}^M.\mathcal{V}))\to \Ind_{\mathcal{H}_{\mathbf w.Q}}^\mathcal{H} ( e_{\mathcal{H}_{\mathbf w.Q}}(\tilde {\mathbf w}^M.\mathcal{V}))$$ defined by the $\mathcal{H}_R$-embeddings $\iota(\mathbf w.Q,\mathbf w.Q')$, isomorphic to $ I_{\mathcal{H}}(\mathbf w .P, \tilde { \mathbf w}^M.\mathcal V,\mathbf w. \overline Q)$ via $\kappa_{{\mathbf w}.Q}$ (Theorem \ref{thm:IHP}), onto the cokernel $ CI_{\mathcal{H}}(P,\mathcal V,Q)$ of the $\mathcal{H}_R$-map $$\oplus_{Q \subsetneq Q'\subset P(\mathcal V)} \ \Hom_{\mathcal{H}_{ M_{Q'}^-, \theta^*}}(\mathcal{H}, e_{\mathcal{H}_{Q'} }(\mathcal V))\to \Hom_{\mathcal{H}_{M_{Q}^-, \theta^*}}(\mathcal{H},e_{\mathcal{H}_{Q}}(\mathcal V)) $$ defined by the $\mathcal{H}_R$-embeddings $i(Q,Q')$. This is true if $i(Q,Q')$ corresponds to $\iota(\mathbf w.Q,\mathbf w.Q')$ via the isomorphisms $\mu_{Q'}$ and $\mu_Q$. This is the content of the next proposition. \begin{proposition} \label{prop:comp} For all $Q\subsetneq Q'\subset P(\mathcal V)$ we have $$i(Q,Q') \circ \mu_{Q'} = \mu_Q \circ \iota(\mathbf w.Q,\mathbf w.Q').$$ \end{proposition} We postpone to section \S \ref{S:comdia} the rather long proof of the proposition.
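As a sanity check (immediate from the definitions, and not needed below): if $\Delta_{\mathcal V}=\emptyset$, for instance if no simple root outside $\Delta_P$ is orthogonal to $\Delta_P$, then $Q=P(\mathcal V)=P$, the two direct sums above are empty, the cokernel on the induced side is $\Ind_{\mathcal{H}_{\mathbf w.M}}^\mathcal{H}(\tilde {\mathbf w}^M.\mathcal V)$ and $CI_{\mathcal{H}}(P,\mathcal V,P)=\Hom_{\mathcal{H}_{ M ^-},\theta^*}(\mathcal{H},\mathcal V)$, so that the guess reduces to the isomorphism $\mu_P$ of \eqref{eq:Abe}.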
\begin{corollary} \label{cor:isoCI}The $\mathcal{H}_R$-isomorphism $\mu_Q\circ \kappa_{{\mathbf w}.Q}^{-1}$ induces an $\mathcal{H}_R$-isomorphism $$ I_{\mathcal{H}}(\mathbf w .P, \tilde { \mathbf w}^M.\mathcal V,\mathbf w. \overline Q) \to CI_{\mathcal{H}}(P,\mathcal V,Q).$$ \end{corollary} \subsection{Supersingular $\mathcal{H}_R$-modules, classification of simple $\mathcal{H}_C$-modules}\label{S:9.5} We recall first the notion of supersingularity based on the action of the center of $\mathcal{H} $. The center of $\mathcal{H} $ \cite[Theorem 1.3]{MR3271250} contains a subalgebra $\mathcal Z_{T^{ +} }$ isomorphic to $\mathbb Z[T^{ +}/T_1]$ where $T^{ +} $ is the monoid of dominant elements of $T$ and $T_1$ is the pro-$p$-Sylow subgroup of the maximal compact subgroup of $T$. Let $t\in T$ with image $\mu_t\in W(1)$ and let $(E_o (w))_{w\in W (1)}$ denote the alcove walk basis of $\mathcal{H} $ associated to a closed Weyl chamber $o$ of $\mathbb W $. The element $$E_o ( C(\mu_t))= \sum_{\mu'} E_o(\mu') $$ is the sum over the elements $ \mu'$ in the conjugacy class $ C(\mu_t)$ of $\mu_t$ in $W(1)$. It is a central element of $\mathcal{H} $ and does not depend on the choice of $o$. We write also $z (t)=E_o ( C(\mu_t))$. \begin{definition} \label{def:supersingularH} A non-zero right $\mathcal{H}_{ R}$-module $\mathcal{V}$ is called supersingular when, for any $v\in \mathcal{V}$ and any non-invertible $t\in T^{ +}$, there exists a positive integer $n \in \mathbb N$ such that $ v(z (t))^n=0$. If one can choose $n $ independent of $(v,t)$, then $\mathcal{V}$ is called uniformly supersingular.\end{definition} \begin{remark} One can choose $n $ independent of $(v,t)$ when $\mathcal{V}$ is finitely generated as a right $\mathcal{H}_{ R}$-module. If $R$ is a field and $\mathcal{V}$ is simple we can take $n=1$. When $G$ is compact modulo the center, $T^+=T$, and any non-zero $\mathcal{H}_{ R}$-module is supersingular. \end{remark} The induction functor $\Ind_{\mathcal{H}_M}^\mathcal{H}:\Mod (\mathcal{H}_{M,R})\to \Mod (\mathcal{H}_R)$ has a left adjoint $\mathcal L_{\mathcal{H}_M}^\mathcal{H}$ and a right adjoint $\mathcal R_{\mathcal{H}_M}^\mathcal{H}$ \cite{MR3437789}: for $\mathcal{V}\in \Mod (\mathcal{H}_R)$, \begin{equation}\label{eq:LR}\mathcal L_{\mathcal{H}_M}^\mathcal{H}(\mathcal{V})= \tilde { \mathbf w}^{ \mathbf w.M} \circ (\mathcal{V}\otimes_{\mathcal{H}_{( \mathbf w.M)^-}, \theta^*}\mathcal{H}_{ \mathbf w.M}), \quad \mathcal R_{\mathcal{H}_M}^\mathcal{H}(\mathcal{V})=\Hom_{\mathcal{H}_{M^+},\theta}(\mathcal{H}_M,\mathcal{V}). \end{equation} In the left adjoint, $\mathcal{V}$ is seen as a right $\mathcal{H}_{( \mathbf w.M)^- }$-module via the ring homomorphism $\theta^*_{\mathbf w.M}\colon \mathcal{H}_{ (\mathbf w.M)^-}\to\mathcal{H}$; in the right adjoint, $\mathcal{V}$ is seen as a right $\mathcal{H}_{ M^+}$-module via the ring homomorphism $\theta_M \colon\mathcal{H}_{ M^+}\to\mathcal{H}$ (\S \ref{S:2.3}). \begin{proposition}\label{pro:ssLR} Assume that $\mathcal{V}$ is a supersingular right $\mathcal{H}_{R}$-module and that $p$ is nilpotent in $\mathcal{V}$.
Then $\mathcal L_{\mathcal{H}_M}^\mathcal{H} (\mathcal{V})= 0$, and if $\mathcal{V}$ is uniformly supersingular $\mathcal R_{\mathcal{H}_M}^\mathcal{H} (\mathcal{V})=0$.\end{proposition} \begin{proof} This is a consequence of three known properties: \begin{enumerate} \item $\mathcal{H}_M$ is the localisation of $\mathcal{H}_{M^+}$ (resp.\ $\mathcal{H}_{M^-}$) at $T^M_\mu$ for any element $\mu \in \Lambda_T(1)$, central in $W_M(1)$ and strictly $N$-positive (resp.\ $N$-negative), and $T^M_\mu=T^{M,*}_\mu$. See \cite[Theorem 1.4]{MR3437789}. \item When $o$ is anti-dominant, $E_o(\mu)=T_{\mu}$ if $\mu\in \Lambda^+(1)$ and $E_o(\mu)=T^*_{\mu}$ if $\mu\in \Lambda^-(1)$. \item Let an integer $n>0$ and $\mu\in \Lambda(1)$ such that the $\mathbb W$-orbit of $v(\mu)\in X_*(T)\otimes \mathbb Q$ (Definition in \S \ref{S:2.1}) and of $\mu$ have the same number of elements. Then $$(E_{o} ( C(\mu)))^{n} E_o (\mu) - E_o (\mu)^{n+1} \in p \mathcal{H}. $$ See \cite[Lemma 6.5]{Vigneras-prop-III}, where the hypotheses are given in the proof (but not written in the lemma). \end{enumerate} Let $\mu\in \Lambda_T^+(1)$ satisfying (1) for $M^+$ and (3), similarly let $\mathbf w. \mu\in \Lambda_T^-(1)$ satisfying (1) for $(\mathbf w.M)^-$ and (3). For $(R,\mathcal{V})$ as in the proposition, let $v\in \mathcal{V}$ and $n>0$ such that $vE_{o} ( C(\mu))^n= vE_{o} ( C(\mathbf w.\mu))^n=0$. Multiplying by $E_o (\mu)$ or $E_o(\mathbf w.\mu)$, and applying (3) and (2) for $o$ anti-dominant we get: $$vE_o(\mu^{n+1})= vT_\mu^{n+1} \in p \mathcal{V}, \quad vE_o((\mathbf w.\mu)^{n+1})= v(T^*_{\mathbf w.\mu})^{n+1}\in p \mathcal{V} .$$ The proposition follows from: $vT_\mu^{n+1}, v(T^*_{\mathbf w.\mu})^{n+1}$ in $ p \mathcal{V}$ (as explained in \cite[Proposition 5.17]{arXiv:1612.01312} when $p=0$ in $R$). From $ v(T^*_{\mathbf w.\mu})^{n+1}$ in $p \mathcal{V} $, we get $v \otimes (T^{\mathbf w.M,*}_{\mathbf w.\mu})^{n+1}=v(T^*_{\mathbf w.\mu})^{n+1}\otimes 1_{\mathcal{H}_{ \mathbf w.M}}$ in $p\mathcal{V}\otimes_{\mathcal{H}_{( \mathbf w.M)^-}, \theta^*}\mathcal{H}_{ \mathbf w.M}$. As $T^{\mathbf w.M,*}=T^{\mathbf w.M}$ is invertible in $\mathcal{H}_{ \mathbf w.M}$ we get $v \otimes 1_{\mathcal{H}_{ \mathbf w.M}}$ in $ p\mathcal{V}\otimes_{\mathcal{H}_{( \mathbf w.M)^-}, \theta^*}\mathcal{H}_{ \mathbf w.M}$. As $v$ was arbitrary, $\mathcal{V}\otimes_{\mathcal{H}_{( \mathbf w.M)^-}, \theta^*}\mathcal{H}_{ \mathbf w.M}\subset p \mathcal{V}\otimes_{\mathcal{H}_{( \mathbf w.M)^-}, \theta^*}\mathcal{H}_{ \mathbf w.M}$. If $p$ is nilpotent in $\mathcal{V}$, then $ \mathcal{V}\otimes_{\mathcal{H}_{( \mathbf w.M)^-}, \theta^*}\mathcal{H}_{ \mathbf w.M}=0$. Suppose now that there exists $n>0$ such that $\mathcal{V}(z (t))^n=0$ for any non-invertible $t\in T^{ +}$, then $\mathcal{V} T_\mu^{n+1} \subset p \mathcal{V} $ where $\mu = \mu_t$; hence $\varphi (h)=\varphi (h T^M_{\mu^{-n-1}} )T_\mu^{n+1}$ in $p\mathcal{V}$ for an arbitrary $\varphi\in \Hom_{\mathcal{H}_{M^+},\theta}(\mathcal{H}_M,\mathcal{V})$ and an arbitrary $h\in \mathcal{H}_M$. We deduce $ \Hom_{\mathcal{H}_{M^+},\theta}(\mathcal{H}_M,\mathcal{V}) \subset \Hom_{\mathcal{H}_{M^+},\theta}(\mathcal{H}_M,p\mathcal{V})$. If $p$ is nilpotent in $\mathcal{V}$, then $ \Hom_{\mathcal{H}_{M^+},\theta}(\mathcal{H}_M,\mathcal{V})=0$. 
\end{proof} Recalling that $\tilde { \mathbf w}^M.\mathcal V$ is obtained by functoriality from $\mathcal{V}$ and the ring isomorphism $\iota ( \tilde{ \mathbf w}^M)$ defined in \eqref{eq:isoH}, the equivalence between $\mathcal{V}$ supersingular and $\tilde { \mathbf w}^M\mathcal V$ supersingular follows from: \begin{lemma} \begin{enumerate} \item Let $t\in T$. Then $t$ is dominant for $U_M$ if and only if $\hat{ \mathbf w}^M t ( \hat{ \mathbf w}^M)^{-1} \in T $ is dominant for $U_{\mathbf w.M}$. \item The $R$-algebra isomorphism $\mathcal{H}_{M,R} \xrightarrow {\iota ( \tilde{ \mathbf w}^M)}\mathcal{H}_{ \mathbf w .M, R}$, $ T^M_{w}\mapsto T^{\mathbf w.M}_{ \tilde{ \mathbf w}^M w ( \tilde{ \mathbf w}^M)^{-1}}$ for $w\in W_M(1) $ sends $z^M(t)$ to $z^{\mathbf w.M}(\hat{ \mathbf w}^M t ( \hat{ \mathbf w}^M)^{-1})$ for $t\in T$ dominant for $U_M$.\end{enumerate} \end{lemma} \begin{proof} The conjugation by $\hat{ \mathbf w}^M $ stabilizes $T$, sends $U_M$ to $U_{\mathbf w.M}$ and sends the $\mathbb W_{M}$-orbit of $t\in T$ to the $\mathbb W_{{\mathbf w}.M}$-orbit of $\hat{ \mathbf w}^M t ( \hat{ \mathbf w}^M)^{-1}$, as ${\mathbf w}^M \mathbb W_M ({\mathbf w}^M)^{-1}=\mathbb W_{{\mathbf w}.M}$. It is known that $\iota ( \tilde{ \mathbf w}^M)$ respects the antidominant alcove walk bases \cite[Proposition~2.20]{MR3437789}: it sends $E^M(w)$ to $E^{\mathbf w.M} ( \tilde{ \mathbf w}^M w ( \tilde{ \mathbf w}^M)^{-1})$ for $w\in W_M(1) $. \end{proof} We deduce: \begin{corollary} \label{cor:9.29}Let $\mathcal{V}$ be a right $\mathcal{H}_{M,R}$-module. Then $\mathcal{V}$ is supersingular if and only if the right $\mathcal{H}_{{ \mathbf w}.M,R}$-module $\tilde { \mathbf w}^M\mathcal V$ is supersingular. \end{corollary} Assume $R=C$. The supersingular simple $\mathcal{H}_{M,C}$-modules are classified in \cite{Vigneras-prop-III}. By Corollaries \ref{cor:isoCI} and \ref{cor:9.29}, the classification of the simple $\mathcal{H}_C $-modules in \cite{arXiv:1406.1003_accepted} remains valid with the $\mathcal{H}_C$-modules $I_{\mathcal{H}}(P,\mathcal V,Q)$ instead of $ \mathcal CI_{\mathcal{H}}(P,\mathcal V,Q)$: \begin{corollary}[Classification of simple $\mathcal{H}_C$-modules] \label{cor:suH} Assume $R=C$. Let $(P,\mathcal V,Q)$ be a $\mathcal{H}_C$-triple where $ \mathcal V $ is simple and supersingular. Then, the $\mathcal{H}_C$-module $ \mathcal I_{\mathcal{H}}(P,\mathcal V,Q)$ is simple. A simple $\mathcal{H}_C$-module is isomorphic to $ \mathcal I_{\mathcal{H}}(P,\mathcal{V},Q)$ for a $\mathcal{H}_C$-triple $(P,\mathcal{V},Q)$ where $ \mathcal V $ is simple and supersingular, $P,Q$ and the isomorphism class of $\mathcal{V}$ are unique. \end{corollary} \subsection{A commutative diagram} \label{S:comdia} We prove in this section Proposition \ref{prop:comp}. For $Q\subset Q'\subset P(\mathcal{V})$ we show by an explicit computation that \begin{equation*} \mu_Q^{-1}\circ i(Q,Q') \circ \mu_{Q'}: \Ind_{\mathcal{H}_{ \mathbf w.Q '}}^\mathcal{H}(e_{\mathcal{H}_{\mathbf w.Q'}}( \tilde{ \mathbf w}^M.\mathcal V))\to\Ind_{\mathcal{H}_{ \mathbf w.Q }}^\mathcal{H}(e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V)). \end{equation*} is equal to $\iota(\mathbf w.Q,\mathbf w.Q')$. 
The $R$-module $e_{\mathcal{H}_{\mathbf w.Q'}}( \tilde{ \mathbf w}^M.\mathcal V)\otimes 1_\mathcal{H}$ generates the $\mathcal{H}_R$-module $e_{\mathcal{H}_{\mathbf w.Q'}}( \tilde{ \mathbf w}^M.\mathcal V)\otimes_{\mathcal{H}_{{ \mathbf w}.Q', R}, \theta^+} \mathcal{H}_R=\Ind_{\mathcal{H}_{ \mathbf w.Q '}}^\mathcal{H}(e_{\mathcal{H}_{\mathbf w.Q'}}( \tilde{ \mathbf w}^M.\mathcal V))$ and by \eqref{eq:iotaQQ'}) \begin{equation}\label{iota(v1)} \iota(\mathbf w.Q,\mathbf w.Q') (v \otimes 1_{\mathcal{H}} ) = v \otimes \sum_{d \in {}^{\mathbb W_{M_{\mathbf w. Q}}}\mathbb W_{M_{\mathbf w. Q'}}} T_{\tilde d}\end{equation} for $v\in \mathcal{V}$ seen as an element of $e_{\mathcal{H}_{\mathbf w.Q'}}( \tilde{ \mathbf w}^M.\mathcal V)$ in the LHS and an element of $e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V)$ in the RHS. \begin{lemma}\label{lemma:jeannette} $( \mu_Q^{-1}\circ i(Q,Q') \circ \mu_{Q'})(v \otimes 1_{\mathcal{H}} ) =v \otimes \sum_{d\in \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}} \, q_d \,T^*_{ \tilde {\mathbf w}^{Q} (\tilde{ \mathbf w}^{Q' }\tilde d)^{-1}}$. \end{lemma} \begin{proof} $\mu_{Q'}(v \otimes 1_{\mathcal{H}} )$ is the unique homomorphism $ f_{\tilde {\mathbf w}^{M_{Q'}}, v} \in \Hom_{\mathcal{H}_{ M_{Q'} ^-},\theta^*}(\mathcal{H}, e_{\mathcal{H}_{Q'}}( \mathcal V))$ sending $T_{\tilde {\mathbf w}^{Q'}}$ to $v$ and vanishing on $T_{\tilde d'}$ for $d'\in \mathbb W^{ \mathbb W _{M_{Q'}}}\setminus \{ {\mathbf w}^{Q'} \}$ by \eqref{eq:muQ}. By \eqref{eq:canonicalH}, $i(Q,Q')$ is the natural embedding of $ \Hom_{\mathcal{H}_{ M_{Q'}^-, \theta^*}}(\mathcal{H}, e_{\mathcal{H}_{Q'} }(\mathcal V)) $ in $ \mathcal \Hom_{\mathcal{H}_{M_{Q}^-, \theta^*}}(\mathcal{H},e_{\mathcal{H}_{Q}}(\mathcal V))$ therefore $i(Q,Q')( f_{\tilde {\mathbf w}^{M_{Q'}}, v})$ is the unique homomorphism $ \mathcal \Hom_{\mathcal{H}_{M_{Q}^-, \theta^*}}(\mathcal{H},e_{\mathcal{H}_{Q}}(\mathcal V))$ sending $T_{\tilde {\mathbf w}^{Q'}}$ to $v$ and vanishing on $T_{\tilde d'}$ for $d'\in \mathbb W^{ \mathbb W _{M_{Q'}}}\setminus \{ {\mathbf w}^{Q'} \}$. As $\mathbb W^{\mathbb W_{M_Q}}= \mathbb W^{\mathbb W_{ Q'}}\mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$, this homomorphism vanishes on $T_{\tilde w} $ for $w$ not in $ \mathbf w^{M_{Q'} }\mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$. By \cite[Lemma 2.22]{arXiv:1406.1003_accepted}, the inverse of $\mu_Q$ is the $\mathcal{H}_R$-isomorphism: \begin{equation}\label{eq:muQ-1} \Hom_{\mathcal{H}_{ M_Q ^-},\theta^*}(\mathcal{H},e_{\mathcal{H}_{Q}}( \mathcal V))\xrightarrow{\mu_Q^{-1}} \Ind_{\mathcal{H}_{ \mathbf w.M_Q }}^\mathcal{H}(e_{\mathcal{H}_{\mathbf w.Q}}( \tilde{ \mathbf w}^M.\mathcal V)) \end{equation} $$\quad \quad \quad \quad \quad \quad \quad \quad \quad f\mapsto \sum_{d\in \mathbb W^{\mathbb W_{M}}} f (T_{\tilde d})\otimes T^*_{ \tilde {\mathbf w}^M \tilde d^{-1}} , $$ where $ \mathbb W^{\mathbb W_{M}}$ is the set of $d\in \mathbb W$ with minimal length in the coset $d\mathbb W_{M}$. We deduce the explicit formula: $$( \mu_Q^{-1}\circ i(Q,Q') \circ \mu_{Q'})(v \otimes 1_{\mathcal{H}} ) = \sum_{w\in \mathbb W^{\mathbb W_{M_Q}}} i(Q,Q')(f^{Q'}_{\tilde {\mathbf w}^{M_{Q'}}, v})(T_{\tilde w})\otimes T^*_{ \tilde {\mathbf w}^{M_Q} \tilde w^{-1}} . $$ Some terms are zero: the terms for $w\in \mathbb W^{\mathbb W_{M_Q}}$ not in $\mathbf w^{M_{Q'} }\mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$. 
We analyse the other terms for $ w$ in $ \mathbb W^{\mathbb W_{M_Q}}\cap \mathbf w^{M_{Q'} }\mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$; this set is $\mathbf w^{M_{Q'} } \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$. Let $w=\mathbf w^{M_{Q'} } d, d\in \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$, and $\tilde w=\tilde{ \mathbf w}^{M_{Q'} }\tilde d$ with $ \tilde d\in {}_1 W_{G'}$ lifting $d $. By the braid relations $T_{\tilde w}=T_{\tilde {\mathbf w}^{M_{Q'}} }T_{\tilde d}$. We have $T_{ \tilde d} = \theta^*(T^{M_{Q'}}_{ \tilde d} )$ by the braid relations because $d\in \mathbb W_{ M_{Q'}}$, $S_{M_{Q'}} \subset S^{\mathrm{aff}}$ and $\theta^*( c^{M_{Q'}}_{ \tilde s}) =c _{ \tilde s}$ for $s\in S_{M_{Q'}} $. As $\mathbb W_{ M_{Q'}}\subset W_{ M_{Q'} ^-}\cap W_{ M_{Q'} ^+}$, we deduce: \begin{align*} i(Q,Q')(f^{Q'}_{\tilde {\mathbf w}^{M_{Q'}}, v})(T_{\tilde w})&= i(Q,Q')(f^{Q'}_{\tilde {\mathbf w}^{M_{Q'}}, v})(T_{\tilde {\mathbf w}^{M_{Q'}}}T_{ \tilde d}) = i(Q,Q')(f^{Q'}_{\tilde {\mathbf w}^{M_{Q'}}, v})(T_{\tilde {\mathbf w}^{M_{Q'}} }) T^{M_{Q'}}_{\tilde d}\\ &= vT^{M_{Q'}}_{\tilde d} = q_d v. \end{align*} Corollary \ref{cor:extT} gives the last equality. \end{proof} The formula for $( \mu_Q^{-1}\circ i(Q,Q') \circ \mu_{Q'})(v \otimes 1_{\mathcal{H}} )$ given in Lemma \ref{lemma:jeannette} is different from the formula \eqref{iota(v1)} for $\iota(\mathbf w.Q,\mathbf w.Q') (v \otimes 1_{\mathcal{H}} )$. It needs some work to prove that they are equal. A first reassuring remark is that $ {}^{\mathbb W_{M_{\mathbf w. Q}}}\mathbb W_{M_{\mathbf w. Q'}}= \{ \mathbf w d^{-1} \mathbf w \ | \ d\in \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}\}$, so the two summation sets have the same number of elements. But better, $$ {}^{\mathbb W_{M_{\mathbf w. Q}}}\mathbb W_{M_{\mathbf w. Q'}}= \{ \mathbf w^{Q} (\mathbf w^{Q'}d)^{-1} \ | \ d\in \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}\}$$ because $ \mathbf w_{Q'} \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}} \mathbf w_{Q} = \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}$. To prove the latter equality, we apply the criterion: $w\in \mathbb W_{M_{Q'}}$ lies in $ \mathbb W_{M_{Q'}} {}^{\mathbb W_{M_Q }}$ if and only if $w(\alpha)>0$ for all $\alpha \in \Delta_Q$ noticing that $ d\in \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}} $ implies $ \mathbf w_{Q} (\alpha)\in - \Delta_Q$, $d \mathbf w_{Q} (\alpha) \in -\Phi_{M_{Q'}}$, $\mathbf w_{Q'} d \mathbf w_{Q} (\alpha)>0$. Let $x_d = \mathbf{w}^Q (\mathbf{w}^{Q'} d)^{-1}$. We have $\tilde {\mathbf w}^{M_Q} (\tilde{ \mathbf w}^{M_{Q'} }\tilde d)^{-1}=\tilde x_d$ because the lifts $\tilde w$ of the elements $w\in \mathbb W$ satisfy the braid relations and $\ell(x_d)=\ell( \mathbf w_{Q} d^{-1} \mathbf w_{Q'} )= \ell (\mathbf w_{Q'} )- \ell ( \mathbf w_{Q} d^{-1})= \ell (\mathbf w_{Q'} )- \ell ( \mathbf w_{Q}) -\ell ( d^{-1}) = \ell (\mathbf w_{Q'} )- \ell ( \mathbf w_{Q}) -\ell ( d )= -\ell (\mathbf w^{Q'})+ \ell ( \mathbf w^{Q}) -\ell ( d )$. We have $q_d= q_{ \mathbf w_{\mathbf w.Q}x_d\mathbf w_{\mathbf w.Q'}} $ because $\mathbf w d^{-1} \mathbf w= \mathbf w_{\mathbf w.Q}x_d\mathbf w_{\mathbf w.Q'}$, and $q_d=q_{ d^{-1}}= q_{\mathbf w d^{-1} \mathbf w}$. So $$ \sum_{d\in \mathbb W_{M_{Q'}}^{\mathbb W_{M_Q}}} q_d T^*_{ \tilde {\mathbf w}^{Q} (\tilde{ \mathbf w}^{Q' }\tilde d)^{-1}} = \sum_{x_d \in {}^{\mathbb W_{M_{\mathbf w. Q}}}\mathbb W_{M_{\mathbf w. Q'}}} q_{ \mathbf w_{\mathbf w.Q}x_d\mathbf w_{\mathbf w.Q'}}T^*_{ \tilde x_d}.$$ In the RHS, only $ \tilde {\mathbf w}^M.\mathcal{V},{\mathbf w}.Q,{\mathbf w}.Q'$ appear. The same holds true in the formula \eqref{iota(v1)}. 
The map $(P,\mathcal{V},Q,Q') \mapsto ({\mathbf w}.P,\tilde {\mathbf w}^M.\mathcal{V},{\mathbf w}.Q,{\mathbf w}.Q')$ is a bijection of the set of quadruples $(P,\mathcal{V},Q,Q')$ where $P=MN,Q,Q'$ are standard parabolic subgroups of $G$, $\mathcal{V}$ a right $\mathcal{H}_R$-module, $Q\subset Q'\subset P(\mathcal{V})$ by Lemma \ref{lemma:delt}. So we can replace $({\mathbf w}.P,\tilde {\mathbf w}^M.\mathcal{V},{\mathbf w}.Q,{\mathbf w}.Q')$ by $(P,\mathcal{V},Q,Q')$. Our task is reduced to proving in $e_{\mathcal{H}_{Q}}(\mathcal{V}) \otimes_{\mathcal{H}_{M^+_Q}, \theta} \mathcal{H}_R$: \begin{equation}\label{eq:finalrs} v \otimes \sum_{d \in {}^{\mathbb W_{M_{ Q}}}\mathbb W_{M_{ Q'}}} T_{\tilde d}= v \otimes \sum_{d \in {}^{\mathbb W_{M_{ Q}}}\mathbb W_{M_{ Q'}}}q_{\mathbf w_{Q}d\mathbf w_{Q'}} T^*_{\tilde d}. \end{equation} A second simplification is possible: we can replace $Q\subset Q'$ by the standard parabolic subgroups $Q_2\subset Q'_2 $ of $G$ with $\Delta_{Q_2}= \Delta_{Q}\setminus \Delta_P$ and $\Delta_{Q'_2}= \Delta_{Q'}\setminus \Delta_P$, because $\Delta_P$ and $\Delta_{P(\mathcal{V})}\setminus \Delta_P$ are orthogonal. Indeed, $\mathbb W_{M_{ Q'}} = \mathbb W_{M} \times \mathbb W_{M_{Q'_2}}$ and $\mathbb W_{M_{ Q}}=\mathbb W_{M} \times \mathbb W_{M _{Q_2}}$ are direct products, the longest elements $\mathbf w_{Q'}=\mathbf w_{M}\mathbf w_{Q'_2}, \mathbf w_{Q}=\mathbf w_{M}\mathbf w_{Q_2}$ are direct products and $${}^{\mathbb W_{M_{ Q}}}\mathbb W_{M_{ Q'}}={}^{\mathbb W_{M_{ Q _2}}}\mathbb W_{M_{ Q' _2}}, \quad \mathbf w_{Q}d\mathbf w_{Q'}= \mathbf w_{Q_2 } d \mathbf w_{ Q'_2} .$$ Once this is done, we use the properties of $e_{\mathcal{H}_{Q}}(\mathcal{V})$: $vh\otimes 1_\mathcal{H}= v\otimes \theta_Q(h)$ for $h\in \mathcal{H}_{M^+_{Q_2}}$, and $T^{Q,*}_w$ acts trivially on $e_{\mathcal{H}_{Q}}(\mathcal{V})$ for $w\in {}_1W_{M_{Q_2}'} \cup (\Lambda(1)\cap {}_1 W_{M_{Q'_2}'})$. Set ${}_1\mathbb{W}_{M'_{Q'_2}} = \{w\in {}_1W_{M'_{Q'_2}}\mid \text{$w$ is a lift of some element in $\mathbb{W}_{M_{Q'_2}}$}\}$ and ${}_1\mathbb{W}_{M'_{Q_2}}$ similarly. Then $Z_k\cap {}_1\mathbb{W}_{M'_{Q'_2}}\subset (\Lambda(1)\cap {}_1W_{M'_{Q'_2}})\cap {}_1W_{M^+_{Q_2}}$ and ${}_1\mathbb{W}_{M'_{Q_2}}\subset {}_1W_{M'_{Q_2}}\cap {}_1W_{M^+_{Q_2}}$. This implies that \eqref{eq:finalrs}, where $Q\subset Q'$ has been replaced by $Q_2\subset Q'_2$, follows from a congruence \begin{equation}\label{eq:finalrss} \sum_{d \in {}^{\mathbb W_{M_{ Q_2}}}\mathbb W_{M_{ Q'_2}}} T_{\tilde d}\equiv \sum_{d \in {}^{\mathbb W_{M_{ Q_2}}}\mathbb W_{M_{ Q'_2}}}q_{\mathbf w_{Q_2}d\mathbf w_{Q'_2}} T^*_{\tilde d} \end{equation} in the finite subring $H({}_1\mathbb{W}_{M'_{ Q'_2}})$ of $\mathcal{H}$ generated by $\{T_w\mid w\in {}_1\mathbb{W}_{M'_{Q'_2}}\}$, modulo the right ideal $\mathcal J_2$ with generators $\{\theta_Q(T^{Q,*}_w)-1\mid w\in (Z_k\cap {}_1\mathbb{W}_{M'_{Q'_2}}) \cup {}_1 \mathbb{W}_{M'_{Q_2}} \}$. Another simplification concerns $T^*_{\tilde d}$ modulo $\mathcal J_2$ for $d\in \mathbb W_{M_{ Q'_2}}$. We recall that for any reduced decomposition $d=s_1\ldots s_n$ with $s_i \in S\cap \mathbb W_{M_{Q'_2}}$ we have $T^*_{\tilde d}= (T_{\tilde s_1}-c_{\tilde s_1} ) \ldots (T_{\tilde s_n}-c_{\tilde s_n} )$ where the $\tilde s_i$ are admissible. 
For $\tilde s$ admissible, by \eqref{eq:cs} $$ c_{\tilde s}\equiv q_{s}-1 .$$ Therefore $$ T^*_{\tilde d} \equiv (T_{\tilde s_1}-q_{s_1}+1 ) \dotsm (T_{\tilde s_n}-q_{ s_n}+1 ).$$ Let $\mathcal{J}'\subset \mathcal{J}_2$ be the ideal of $H({}_1\mathbb{W}_{M'_{Q'_2}})$ generated by $\{T_t - 1\mid t\in Z_k\cap {}_1W_{M'_{Q'_2}}\}$. Then the ring $H({}_1\mathbb{W}_{M'_{ Q'_2}})/\mathcal{J}'$ and its right ideal $\mathcal{J}_2/\mathcal{J}'$ are the specialisation of the generic finite ring $H(\mathbb W_{M_{ Q'_2}})^{g}$ over $\mathbb Z[(q_s)_{s\in S_{M_{ Q'_2}}}]$ where the $q_s$ for $s\in S_{M_{ Q'_2}}=S\cap \mathbb W_{M_{ Q'_2}} $ are indeterminates, and of its right ideal $\mathcal J_2^{g}$ with the same generators. The similar congruence modulo $\mathcal J_2^{g}$ in $H(\mathbb W_{M_{ Q'_2}})^{g}$ (the generic congruence) implies the congruence \eqref{eq:finalrss} by specialisation. We will prove the generic congruence in a more general setting where $H$ is the generic Hecke ring of a finite Coxeter system $( \mathbb W, S)$ and parameters $(q_s)_{s\in S}$ such that $q_s=q_{s'}$ when $s,s'$ are conjugate in $\mathbb W$. The Hecke ring $H $ is a $\mathbb Z[(q_s)_{s\in S}] $-free module of basis $(T_w)_{w\in \mathbb W}$ satisfying the braid relations and the quadratic relations $T_s^2=q_s+(q_s-1)T_s$ for $s\in S$. The other basis $(T^*_w)_{w\in \mathbb W}$ satisfies the braid relations and the quadratic relations $(T^*_s)^2=q_s-(q_s-1)T^*_s$ for $s\in S$, and is related to the first basis by $T_s^*= T_s-(q_s-1)$ for $s\in S$, and more generally $T_w T^*_{w^{-1}}= T^*_{w^{-1}} T_w= q_w$ for $w\in \mathbb W$ \cite[Proposition 4.13]{MR3484112}. Let $J\subset S$ and let $\mathcal J$ be the right ideal of $H$ with generators $T_w^*-1$ for all $w$ in the group $ \mathbb W_J$ generated by $J$. \begin{lemma}\label{lemma:ba} A basis of $\mathcal J$ is $(T^*_{w_1}-1)T^*_{w_2}$ for $w_1\in \mathbb W_J\setminus \{1\}, w_2\in {}^{\mathbb W_J}\mathbb W $, and adding $T^*_{w_2}$ for $w_2\in {}^{\mathbb W_J}\mathbb W$ gives a basis of $H$. In particular, $\mathcal J$ is a direct factor of $H$. \end{lemma} \begin{proof} The elements $(T^*_{w_1}-1)T^*_{w }$ for $w_1\in \mathbb W_J, w \in \mathbb W $ generate $\mathcal J$. We write $w=u_1w_2$ with unique elements $u_1\in \mathbb W_J, w_2\in {}^{\mathbb W_J}\mathbb W $, and $T^*_w=T^*_{u_1}T^*_{w_2}$. Therefore, $(T^*_{w_1}-1)T^*_{w}=(T^*_{w_1}-1)T^*_{u_1 }T^*_{w_2 }$. By induction on the length of $u_1$, one proves that $(T^*_{w_1}-1)T^*_{u_1 }$ is a linear combination of $(T^*_{v_1}-1)$ for $v_1\in \mathbb W_J$ as in the proof of Proposition \ref{prop:ideal}. It is clear that the elements $(T^*_{w_1}-1)T^*_{w_2}$ and $ T^*_{w_2}$ for $w_1\in \mathbb W_J\setminus \{1\}, w_2\in {}^{\mathbb W_J}\mathbb W $ form a basis of $H$. \end{proof} Let $\mathbf w_J$ denote the longest element of $ \mathbb W_J$ and $\mathbf w= \mathbf w_S$. \begin{lemma} In the generic Hecke ring $H$, the congruence modulo $\mathcal J$ $$ \sum_{d \in {}^{\mathbb W_J}\mathbb W } T_{d}\equiv \sum_{d \in {}^{\mathbb W_J}\mathbb W }q_{\mathbf w_Jd\mathbf w} T^*_{d} $$ holds true. \end{lemma} \begin{proof} Step 1. 
We show: $$ {}^{\mathbb W_J}\mathbb W={ \mathbf w_J } {}^{\mathbb W_J}\mathbb W \,\mathbf w,\quad q_{\mathbf w_J}q_{\mathbf w_Jd\mathbf w}T_d^*= T_{\mathbf w_J}T_{\mathbf w_J d{\mathbf w} } T^*_{\mathbf w }.$$ The equality between these two sets follows from the characterisation of $ {}^{\mathbb W_J}\mathbb W$ in $\mathbb W$: an element $d\in \mathbb W$ has minimal length in $\mathbb W_J d$ if and only if $\ell(u d)=\ell (u)+\ell(d)$ for all $u\in \mathbb W_J$. An easy computation shows that $\ell(u { \mathbf w_J }d\mathbf w)=\ell (u)+\ell({ \mathbf w_J }d\mathbf w)$ for all $u\in \mathbb W_J, d\in {}^{\mathbb W_J}\mathbb W$ (both sides are equal to $\ell (u)+\ell(\mathbf w)- \ell( \mathbf w_J )-\ell(d)$). The second equality follows from $q_{\mathbf w_J} q_{\mathbf w_J d\mathbf w}= q_{ d\mathbf w}$ because $(\mathbf w_J)^2=1 $ and $\ell (\mathbf w_J) +\ell (\mathbf w_J d\mathbf w )=\ell( d\mathbf w)$ (both sides are $\ell( \mathbf w) -\ell( d)$) and from $ q_{ d\mathbf w}T^*_d = T_{ d\mathbf w}T^*_{\mathbf w d^{-1}} T^*_d= T_{ d\mathbf w} T^*_{\mathbf w }$. We also have $T_{d\mathbf{w}} = T_{\mathbf{w}_J}T_{\mathbf{w}_Jd\mathbf{w}}$. Step 2. The multiplication by $q_{\mathbf w_J}$ on the quotient $H /\mathcal J $ is injective (Lemma \ref{lemma:ba}) and $q_{\mathbf w_J} \equiv T _{\mathbf w_J} $. By Step 1, $q_{\mathbf w_Jd\mathbf w}T_d^* \equiv T_{\mathbf w_J d{\mathbf w} } T^*_{\mathbf w }$ and $$ \sum_{d \in {}^{\mathbb W_J}\mathbb W }q_{\mathbf w_Jd\mathbf w} T^*_{d} \equiv \sum_{d \in {}^{\mathbb W_J}\mathbb W } T_{d}T^*_{\mathbf w }.$$ The congruence \begin{equation}\label{eq:congJ} \sum_{d \in {}^{\mathbb W_J}\mathbb W } T_{d} \equiv \sum_{d \in {}^{\mathbb W_J}\mathbb W } T_{d}T^*_{s } \end{equation} for all $s\in S$ implies the lemma because $T^*_{\mathbf w }=T^*_{s_1 }\ldots T^*_{s_n}$ for any reduced decomposition $\mathbf w=s_1\ldots s_n$ with $s_i\in S$. Step 3. When $J=\emptyset$, the congruence \eqref{eq:congJ} is an equality: \begin{equation}\label{eq:cong} \sum_{w \in \mathbb W } T_{w} = \sum_{w \in \mathbb W } T_{w}T^*_{s }. \end{equation} It holds true because $ \sum_{w\in \mathbb{W}}T_w = \sum_{w < ws}T_w(T_s + 1) $ and $ (T_s + 1)T_s^* = T_sT_s^* + T_s^* = q_s + T_s^* = T_s + 1 $. Step 4. Conversely the congruence \eqref{eq:congJ} follows from \eqref{eq:cong} because $$\sum_{w \in \mathbb W } T_{w}=(\sum_{u\in \mathbb W_J} T_u)\sum_{d \in {}^{\mathbb W_J}\mathbb W } T_{d} \equiv (\sum_{u\in \mathbb W_J} q_u)\sum_{d \in {}^{\mathbb W_J}\mathbb W } T_{d}$$ (recall $ q_u=T_{u^{-1}}^*T_u \equiv T_u$) and we can simplify by $\sum_{u\in \mathbb W_J} q_u$ in $H/\mathcal J$. \end{proof} This ends the proof of Proposition \ref{prop:comp}. \section{Universal representation $I_\mathcal{H}(P,\mathcal V,Q)\otimes_\mathcal{H} R[\mathcal U\backslash G] $ }\label{S:10} The functor $(-)^\mathcal{U}$ of invariants by the pro-$p$ Iwahori subgroup $\mathcal U$ of $G$ has a left adjoint $$- \otimes_{\mathcal{H}_R} R[\mathcal U\backslash G] : \Mod_R( \mathcal{H}) \to \Mod_R^\infty (G).$$ The smooth $R$-representation $\mathcal{V} \otimes_{\mathcal{H}_R}R[\mathcal U\backslash G] $ of $G$ constructed from the right $\mathcal{H}_R$-module $\mathcal{V}$ is called universal. We write $$R[\mathcal U\backslash G]=\mathbb X .$$ \begin{question}\label{que:non0} Does $\mathcal{V}\neq 0$ imply $\mathcal{V} \otimes_{\mathcal{H}_R} \mathbb X\neq 0$? Or does $ v\otimes 1_{\mathcal{U} }=0 $ for $v\in \mathcal{V}$ imply $v=0$? We have no counter-example. 
If $R$ is a field and the $\mathcal{H}_{R}$-module $\mathcal{V}$ is simple, the two questions are equivalent: $\mathcal{V} \otimes_{\mathcal{H}_R} \mathbb X\neq 0$ if and only if the map $v\mapsto v\otimes 1_{\mathcal{U}}$ is injective. When $R=C$, $\mathcal{V} \otimes_{\mathcal{H}_R} \mathbb X\neq 0$ for all simple $\mathcal{H}_{C}$-modules $\mathcal{V}$ if this is true for $\mathcal{V}$ simple supersingular (this is a consequence of Corollary \ref{cor:10.13}). \end{question} The functor $- \otimes_{\mathcal{H}_R} \mathbb X$ satisfies a few good properties: it has a right adjoint and is compatible with parabolic induction and with the left adjoint of parabolic induction. Let $P=MN$ be a standard parabolic subgroup and $\mathbb X_M=R[\mathcal{U}_M \backslash M]$. We have functor isomorphisms \begin{align}&(- \otimes_{\mathcal{H}_R} \mathbb X)\circ \mathcal Ind_{\mathcal{H}_M}^\mathcal{H} \to \Ind_P^G \circ (-\otimes_{\mathcal{H}_R} \mathbb X_M),\\ \label{eq:JT}&(-)_N\circ (- \otimes_{\mathcal{H}_R} \mathbb X) \to(-\otimes_{\mathcal{H}_R} \mathbb X_M) \circ \mathcal L_{\mathcal{H}_M}^\mathcal{H}. \end{align} The first one is \cite[formula 4.15]{arXiv:1703.04921}, the second one is obtained by left adjunction from the isomorphism $ \mathcal Ind_{\mathcal{H}_M}^\mathcal{H} \circ (-)^{\mathcal{U}_M}\to (-)^\mathcal{U}\circ \Ind_P^G $ \cite[formula (4.14)]{arXiv:1703.04921}. If $\mathcal{V}$ is a supersingular right $\mathcal{H}_R$-module and $p$ is nilpotent in $\mathcal{V}$, then $ \mathcal{L}_{\mathcal{H}_M}^\mathcal{H} (\mathcal{V})=0$ if $M\neq G$ (Proposition \ref{pro:ssLR}). Applying \eqref{eq:JT} we deduce: \begin{proposition}\label{pro:p0} If $p $ is nilpotent in $\mathcal{V}$ and $\mathcal{V}$ is supersingular, then $\mathcal{V} \otimes_{\mathcal{H}_R} \mathbb X$ is left cuspidal. \end{proposition} \begin{remark}\label{rem:or} For a non-zero smooth $R$-representation $\tau $ of $M$, $\Delta_\tau$ is orthogonal to $\Delta_P$ if $\tau$ is left cuspidal. Indeed, we recall from \cite[II.7 Corollary~2]{MR3600042} that $\Delta_{ \tau}$ is not orthogonal to $\Delta_P$ if and only if there exists a proper standard parabolic subgroup $X $ of $M$ such that $\tau$ is trivial on the unipotent radical of $X$; moreover $\tau$ is a subrepresentation of $\Ind_X^M (\tau|_X)$, so the image of $\tau$ by the left adjoint of $\Ind_X^M$ is not $0$. \end{remark} From now on, $\mathcal{V}$ is a non-zero right $\mathcal{H}_{M,R}$-module and $$\sigma=\mathcal{V} \otimes_{\mathcal{H}_{M,R}} \mathbb X_M.$$ In general, when $\sigma\neq 0$, let $P_{\perp}(\sigma)$ be the standard parabolic subgroup of $ G$ with $\Delta_{P_\perp(\sigma)}= \Delta_P\cup \Delta_{\perp,\sigma}$ where $ \Delta_{\perp,\sigma}$ is the set of simple roots $\alpha \in \Delta_\sigma$ orthogonal to $\Delta_P$. \begin{proposition}\label{prop:orthtriple} \begin{enumerate} \item $P(\mathcal{V})\subset P_{\perp}(\sigma)$ if $\sigma\neq 0$. \item $ P(\mathcal{V})= P_{\perp}(\sigma)$ if the map $v\mapsto v\otimes 1_{\mathcal{U}_M}$ is injective. \item $ P(\mathcal{V})=P(\sigma)$ if the map $v\mapsto v\otimes 1_{\mathcal{U}_M}$ is injective, $p $ is nilpotent in $\mathcal{V}$ and $\mathcal{V}$ is supersingular. \item $ P(\mathcal{V})=P(\sigma)$ if $\sigma\neq 0 $, $R$ is a field of characteristic $p$ and $\mathcal{V}$ is simple supersingular. 
\end{enumerate}\end{proposition} \begin{proof} (1) $ P(\mathcal{V})\subset P_{\perp}(\sigma)$ means that $ Z\cap M'_{\mathcal V} \ \text{ acts trivially on } \ \mathcal V\otimes 1_{\mathcal{U}_M} , $ where $M_{\mathcal{V}}$ is the standard Levi subgroup such that $\Delta_{M_{\mathcal{V}}} = \Delta_{\mathcal{V}}$. Let $z\in Z\cap M'_{\mathcal{V}}$ and $v\in \mathcal V$. As $\Delta_M$ and $\Delta_{\mathcal V}$ are orthogonal, we have $T ^{M,*}(z)= T ^{M}(z)$ and $\mathcal{U}_Mz\mathcal{U}_M=\mathcal{U}_M z$. We have $v\otimes 1_{\mathcal{U}_M}= v T ^{M}(z)\otimes 1_{\mathcal{U}_M}=v\otimes T ^{M}(z) 1_{\mathcal{U}_M}=v\otimes {\bf 1}_{\mathcal{U}_Mz}= v\otimes z^{-1} 1_{\mathcal{U}_M}= z^{-1}(v\otimes 1_{\mathcal{U}_M})$. (2) If $ v\otimes 1_{\mathcal{U}_M}=0 $ for $v\in \mathcal{V}$ implies $v=0$, then $\sigma\neq 0$ because $\mathcal{V}\neq 0$. By (1) $P(\mathcal V) \subset P_{\perp}(\sigma) $. As in the proof of (1), for $z\in Z\cap M'_{\perp, \sigma}$ we have $ v T ^{M,*}(z)\otimes 1_{\mathcal{U}_M}= v T ^{M}(z)\otimes 1_{\mathcal{U}_M}= v \otimes 1_{\mathcal{U}_M}$ and our hypothesis implies $v T ^{M,*}(z)=v$ hence $P(\mathcal V) \supset P_{\perp}(\sigma) $. (3) Proposition \ref{pro:p0}, Remark \ref{rem:or} and (2). (4) Question \ref{que:non0} and (3). \end{proof} Let $Q$ be a parabolic subgroup of $G$ with $P\subset Q\subset P(\mathcal{V})$. In this chapter we will compute $I_\mathcal{H}(P,\mathcal V,Q)\otimes_\mathcal{H} R[\mathcal{U}\backslash G]$ where $I_\mathcal{H}(P,\mathcal{V},Q)=\Ind_{\mathcal{H}_{M(\mathcal{V})}}^\mathcal{H} (e(\mathcal{V}) \otimes (\Ind_Q^{P(\mathcal{V})} {\bf 1})^{\mathcal{U}_{M(\mathcal{V})} } )$ (Theorem \ref{thm:10.10}). The smooth $R$-representation $I_G(P,\sigma,Q)$ of $G$ is well defined: it is $0$ if $\sigma=0$ and $\Ind_{P(\sigma)}^G (e(\sigma)\otimes \St_Q^{P(\sigma)})$ if $\sigma\neq 0$ because $(P,\sigma,Q)$ is an $R[G]$-triple by Proposition \ref{pro:p0}. We will show that the universal representation $I_\mathcal{H}(P,\mathcal V,Q)\otimes_\mathcal{H} R[\mathcal{U}\backslash G]$ is isomorphic to $I_G(P,\sigma,Q)$, if $P(\mathcal{V})=P(\sigma)$ and $p=0$, or if $\sigma=0$ (Corollary \ref{cor:10.12}). In particular, when $R=C$, $I_\mathcal{H}(P,\mathcal V,Q)\otimes_\mathcal{H} R[\mathcal{U}\backslash G] \simeq I_G(P,\sigma,Q)$ when $\mathcal{V}$ is supersingular. \subsection{$Q=G$} \label{S:Q=G} We consider first the case $Q=G$. We are in the simple situation where $\mathcal{V}$ is extensible to $\mathcal{H}$ and $P(\mathcal V)=P(\sigma)=G$, $I_\mathcal{H}(P,\mathcal V,G) = e(\mathcal V)$ and $I_G(P,\sigma,G) =e(\sigma)$. We recall that $\Delta \setminus \Delta_P$ is orthogonal to $\Delta_P$ and that $M_2$ denotes the standard Levi subgroup of $G$ with $\Delta_{M_2}=\Delta \setminus \Delta_P$. The $\mathcal{H}_{ R}$-morphism $e (\mathcal V)\to e ( \sigma)^\mathcal{U}=\sigma^{\mathcal{U}_M}$ sending $v $ to $v\otimes 1_{\mathcal{U}_M}$ for $v\in \mathcal V$ gives by adjunction an $R[G]$-homomorphism $$ v\otimes {\bf 1}_{\mathcal{U}} \mapsto v\otimes 1_{\mathcal{U}_M} : e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X \xrightarrow{\Phi^G} e ( \sigma).$$ If $\Phi^G$ is an isomorphism, then $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ is the extension to $G$ of $ (e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X )|_M$, meaning that $ M'_2 $ acts trivially on $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $. 
The converse is true: \begin{lemma} If $ M'_2 $ acts trivially on $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X,$ then $\Phi^G$ is an isomorphism.\end{lemma} \begin{proof} Suppose that $ M'_2 $ acts trivially on $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $. Then $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ is the extension to $G$ of $ (e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X )|_M$, and by Theorem \ref{thm:ouf}, $(e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X )^\mathcal{U}$ is the extension of $(e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X )^{\mathcal{U}_M}$. Therefore \begin{equation*} (v \otimes 1_{\mathcal{U}})T^{ *}_w= (v \otimes 1_{\mathcal{U}})T^{M, *}_w \quad \text{for all} \ v\in \mathcal V, w\in W_M(1). \end{equation*} As $\mathcal{V}$ is extensible to $\mathcal{H}$, the natural map $v\mapsto v\otimes 1_{\mathcal{U}} :\mathcal V\xrightarrow{\Psi} (e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X )^{\mathcal{U}_M} $ is $\mathcal{H}_M$-equivariant, i.e.: $$ v T^{M,*}_w\otimes 1_{\mathcal{U}}= (v \otimes 1_\mathcal{U})T^{M, *}_w \quad \text{for all} \ v\in \mathcal V, w\in W_M(1).$$ because (\eqref{eq:structure}) $v T^{M,*}_w\otimes 1_{\mathcal{U}}= v T^{ *}_w \otimes 1_{\mathcal{U}} = v \otimes T^{ *}_w = (v \otimes 1_{\mathcal{U}})T^{ *}_w$ in $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $. We recall that $- \otimes_{\mathcal{H}_{ M,R}} \mathbb X_M $ is the left adjoint of $(-)^{\mathcal{U}_M}$. The adjoint $R[ M]$-homomorphism $\sigma=\mathcal{V} \otimes_{\mathcal{H}_{ M,R}} \mathbb X_M \to e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ sends $ v\otimes 1_{\mathcal{U}_M}$ to $v\otimes {\bf 1}_{\mathcal{U}} $ for all $v\in \mathcal V$. The $R[M]$-module generated by the $v\otimes {\bf 1}_{\mathcal{U}} $ for all $v\in \mathcal V$ is equal to $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ because $M'_2$ acts trivially. Hence we obtained an inverse of $\Phi^G$. \end{proof} Our next move is to determine if $M'_2$ acts trivially on $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $. It is equivalent to see if $M'_2$ acts trivially on $e (\mathcal V) \otimes 1_{\mathcal{U}}$ as this set generates the representation $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ of $G$ and $M'_2$ is a normal subgroup of $G$ as $M'_2$ and $M$ commute and $G=ZM' M'_2$. Obviously, $\mathcal{U}\cap M'_2$ acts trivially on $e (\mathcal V) \otimes 1_{\mathcal{U}}$. The group of double classes $(\mathcal{U}\cap M'_2)\backslash M'_2/(\mathcal{U}\cap M'_2)$ is generated by the lifts $\hat s\in \mathcal N \cap M'_2$ of the simple affine roots $s $ of $W_{M'_2}$. Therefore, $M'_2$ acts trivially on $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ if and only if for any simple affine root $s \in S_{M'_2}^{\mathrm{aff}}$ of $W_{M'_2}$, any $\hat s\in \mathcal N \cap M'_2$ lifting $s $ acts trivially on $e (\mathcal V) \otimes 1_{\mathcal{U}}$. \begin{lemma}Let $v\in \mathcal V, s \in S_{M'_2}^{\mathrm{aff}}$ and $\hat s\in \mathcal N \cap M'_2$ lifting $s $. 
We have $$(q_s+1)( v\otimes 1_{\mathcal{U}} - \hat s ( v\otimes 1_{\mathcal{U}}) )= 0.$$ \end{lemma} \begin{proof} We compute: \begin{align*}T_{s} (\hat s {\bf 1}_{\mathcal{U}}) &= \hat s (T_{ s}{\bf 1}_{\mathcal{U}}) = 1_{\mathcal{U} \hat s\mathcal{U} (\hat s)^{-1}} =\sum _{u }\hat s u (\hat s)^{-1}{\bf 1}_{\mathcal{U}} = \sum _{u^{op} }u^{op} {\bf 1}_{\mathcal{U}} ,\\ T_{ s} (\hat s^2 {\bf 1}_{\mathcal{U}})&= \hat s^2 (T_{ s}{\bf 1}_{\mathcal{U}}) = {\bf 1}_{\mathcal{U} \hat s\mathcal{U} (\hat s)^{-2}}={\bf 1}_{\mathcal{U}( \hat s )^{-1}\mathcal{U}} =\sum _{u } u\hat s{\bf 1}_{\mathcal{U}}.\end{align*} for $u$ in the group $ \mathcal{U}/(\hat s^{-1}\mathcal{U} \hat s \cap \mathcal{U})$ and $u^{op}$ in the group $\hat s \mathcal{U} (\hat s)^{-1}/(\hat s \mathcal{U} (\hat s)^{-1} \cap \mathcal{U})$; the reason is that $\hat s^2$ normalizes $\mathcal{U}$, $\mathcal{U} \hat s\mathcal{U} \hat s^{-1}$ is the disjoint union of the sets $\mathcal{U} \hat s u^{-1} (\hat s)^{-1}$ and $\mathcal{U}( \hat s)^{-1}\mathcal{U} $ is the disjoint union of the sets $\mathcal{U} ( \hat s)^{-1} u^{-1} $. We introduce now a natural bijection \begin{equation}\label{eq:bij}u\to u^{op}: \mathcal{U}/(\hat s^{-1}\mathcal{U} \hat s \cap \mathcal{U})\to \hat s \mathcal{U} (\hat s)^{-1}/(\hat s \mathcal{U} (\hat s)^{-1} \cap \mathcal{U}) \end{equation} which is not a group homomorphism. We recall the finite reductive group $G_{k,s}$ quotient of the parahoric subgroup $\mathfrak K_s$ of $G$ fixing the face fixed by $s$ of the alcove $\mathcal C$. The Iwahori groups $Z^0\mathcal{U}$ and $Z^0\hat s\mathcal{U} (\hat s)^{-1} $ are contained in $ \mathfrak K_s$ and their images in $G_{s,k}$ are opposite Borel subgroups $Z_k U_{s,k}$ and $Z_kU_{s,k}^{op}$. Via the surjective maps $u\mapsto \overline u: \mathcal{U}\to U_{s,k}$ and $u^{op}\mapsto \overline u^{op}:\hat s\mathcal{U} (\hat s)^{-1} \to U_{s,k}^{op}$ we identify the groups $\mathcal{U}/(\hat s^{-1}\mathcal{U} \hat s \cap \mathcal{U}) \simeq U_{s,k}$ and similarly $\hat s \mathcal{U} (\hat s)^{-1}/ (\hat s \mathcal{U} (\hat s)^{-1}\cap \mathcal{U}) \simeq U_{s,k}^{op}$. Let $G'_{k,s}$ be the group generated by $U_{s,k}$ and $U_{s,k}^{op}$, and let $B'_{s,k}=G'_{k,s}\cap Z_k U_{s,k}=(G'_{k,s}\cap Z_k) U_{s,k} $. We suppose (as we can) that $\hat s\in \mathfrak K_s$ and that its image $\hat s_k$ in $G_{s,k}$ lies in $G'_{k,s}$. We have $\hat s_kU_{s,k} (\hat s_k) ^{-1} =U_{s,k}^{op}$ and the Bruhat decomposition $G'_{k,s}= B'_{k,s} \sqcup U_{k,s}\hat s_k B'_{k,s} $ implies the existence of a canonical bijection $\overline u^{op} \to \overline u : ( U_{k,s}^{op} - \{1\})\to (U_{k,s}- \{1\}) $ respecting the cosets $\overline u^{op}B'_{k,s}=\overline u \hat s_k B'_{k,s}$. Via the preceding identifications we get the wanted bijection \eqref{eq:bij}. For $v\in e(\mathcal{V})$ and $z\in Z^0\cap M'_2$ we have $vT_z=v$, $z 1_{\mathcal{U}}= T_z 1_{\mathcal{U}}$ and $v\otimes T_z 1_{\mathcal{U}}=vT_z\otimes 1_\mathcal{U}$ therefore $Z^0\cap M'_2$ acts trivially on $\mathcal V\otimes 1_{\mathcal{U}}$. The action of the group $(Z^0\cap M'_2)\mathcal{U}$ on $\mathcal V\otimes 1_{\mathcal{U}}$ is also trivial. As the image of $Z^0\cap M'_2$ in $G_{s,k}$ contains $Z_k\cap G'_{s,k}$, $$u\hat s (v\otimes 1_{\mathcal{U}})= u ^{op}(v\otimes 1_{\mathcal{U}})$$ when $u$ and $u^{op}$ are not units and correspond via the bijection \eqref{eq:bij}. 
So we have \begin{align}\label{eq:for}v \otimes T_{ s} (\hat s 1_{\mathcal{U}}) - (v\otimes 1_{\mathcal{U}}) = v\otimes T_{ s} (\hat s^2 1_{\mathcal{U}}) - v\otimes \hat s 1_{\mathcal{U}}. \end{align} We can move $T_s$ on the other side of $\otimes$ and as $vT_s=q_s v$ (Corollary \ref{cor:extT}), we can replace $T_s$ by $q_s$. We have $v \otimes \hat s^2 1_\mathcal{U}=v \otimes T_{\hat s^{-2}} 1_\mathcal{U}$ because $\hat s^2\in Z^0\cap M'_2$ normalizes $\mathcal{U}$; as we can move $T_{\hat s^{-2}}$ on the other side of $\otimes$ and as $vT_{\hat s^{-2}}=v$ we can forget $\hat s^2 $. So \eqref{eq:for} is equivalent to $(q_s+1)( v\otimes 1_\mathcal{U} - \hat s ( v\otimes 1_\mathcal{U}) )= 0.$ \end{proof} Combining the two lemmas we obtain: \begin{proposition} \label{prop:QG} When $ \mathcal V $ is extensible to $\mathcal{H}$ and has no $q_{s}+1$-torsion for any $s \in S_{M'_2}^{\mathrm{aff}}$, then $ M'_2 $ acts trivially on $e (\mathcal V) \otimes_{\mathcal{H}_{ R}} \mathbb X $ and $\Phi^G $ is an $R[G]$-isomorphism. \end{proposition} Proposition \ref{prop:QG} for the trivial character ${\bf 1}_\mathcal{H}$ says that ${\bf 1}_\mathcal{H}\otimes_{\mathcal{H}_{ R}} \mathbb X $ is the trivial representation $ {\bf 1}_G$ of $G$ when $R$ has no $ q_{s}+1$-torsion for all $s\in S^{\mathrm{aff}}$. This is proved in \cite[Lemma 2.28]{arXiv:1703.04921} by a different method. The following counter-example shows that this is not true for all $R$. \begin{example}\label{ex:contr} Let $G=GL(2,F)$ and let $R$ be an algebraically closed field where $q_{s_0}+1 = q_{s_1} + 1=0$ and $S^{\mathrm{aff}} = \{s_0,s_1\}$. (Note that $q_{s_0} = q_{s_1}$ is the order of the residue field of $F$.) Then the dimension of ${\bf 1}_\mathcal{H}\otimes_{\mathcal{H}_{ R}} \mathbb X $ is infinite, in particular ${\bf 1}_\mathcal{H}\otimes_{\mathcal{H}_{ R}} \mathbb X \neq {\bf 1}_G$. Indeed, the Steinberg representation $\St_G=(\Ind_B^G{\bf 1}_Z)/ {\bf 1}_G$ of $G$ is an indecomposable representation of length $2$ containing an irreducible infinite dimensional representation $\pi$ with $\pi^{\mathcal{U}}=0$, with quotient the character $(-1)^{\val \circ \det }$. This follows from the proof of Theorem 3 and from Proposition 24 in \cite{MR1026328}. The kernel of the quotient map $\St_G \otimes (-1)^{\val \circ \det }\to {\bf 1}_G $ is infinite dimensional without a non-zero $\mathcal{U}$-invariant vector. As the characteristic of $R$ is not $p$, the functor of $\mathcal{U}$-invariants is exact hence $(\St_G \otimes (-1)^{\val \circ \det })^\mathcal{U} = {\bf 1}_\mathcal{H}$. As $- \otimes_{\mathcal{H}_{ R}} R[\mathcal{U} \backslash G]$ is the left adjoint of $(-)^{\mathcal{U} }$ there is a non-zero homomorphism $${\bf 1}_\mathcal{H}\otimes_{\mathcal{H}_{ R}} \mathbb X \to \St_G \otimes (-1)^{\val \circ \det } $$ with image generated by its $\mathcal{U}$-invariants. The homomorphism is therefore surjective. \end{example} \subsection{$\mathcal{V}$ extensible to $\mathcal{H}$} Let $P=MN$ be a standard parabolic subgroup of $G$ with $\Delta_P$ and $\Delta\setminus \Delta_P$ orthogonal. We still suppose that the $\mathcal{H}_{M,R}$-module $\mathcal{V}$ is extensible to $\mathcal{H}$, but now $P\subset Q \subset G$. So we have $I_\mathcal{H}(P,\mathcal V,Q)= e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal{U} $ and $I_G(P,\sigma,Q)= e(\sigma)\otimes_R \St_Q^G $ where $\sigma=\mathcal V \otimes_{\mathcal{H}_{M,R}} \mathbb X_M$. 
We compare the images by $-\otimes_{\mathcal{H}_R} \mathbb X$ of the $\mathcal{H}_R$-modules $ e(\mathcal V)\otimes_R (\Ind_Q^G {\bf 1})^\mathcal{U}$ and $ e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal{U}$ with the smooth $R$-representations $e(\sigma)\otimes \Ind_Q^G {\bf 1}$ and $ e(\sigma)\otimes \St_Q^G$ of $G$. As $-\otimes_{\mathcal{H}_R} \mathbb X$ is the left adjoint of $(-)^\mathcal{U}$, the $\mathcal{H}_R$-homomorphism $v\otimes f \mapsto v\otimes 1_{\mathcal{U}_M}\otimes f: e(\mathcal{V})\otimes_R (\Ind_Q^G {\bf 1})^\mathcal{U}\to (e(\sigma)\otimes_R \Ind_Q^G {\bf 1})^\mathcal{U}$ gives by adjunction an $R[G]$-homomorphism $$v\otimes f\otimes 1_\mathcal{U} \mapsto v\otimes 1_{\mathcal{U}_M}\otimes f:(e(\mathcal{V})\otimes_R (\Ind_Q^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X \xrightarrow{\Phi_Q^G} e(\sigma)\otimes_R \Ind_Q^G {\bf 1}.$$ When $Q=G$ we have $\Phi_G^G=\Phi^G$. By Remark \ref{rem:surj}, $\Phi_Q^G$ is surjective. Proposition \ref{prop:QG} applies with $M_Q$ instead of $G$ and gives the $R[M_Q]$-homomorphism $$v\otimes 1_{\mathcal{U}_{M_Q}} \mapsto v\otimes 1_{\mathcal{U}_M}: e_{\mathcal{H}_Q}(\mathcal{V})\otimes_{\mathcal{H}_{Q, R}} \mathbb X_{M_Q} \xrightarrow{\Phi^Q}e_Q(\sigma).$$ \begin{proposition} \label{prop:fourhom} The $R[G]$-homomorphism $\Phi_Q^G$ is an isomorphism if $\Phi^Q$ is an isomorphism, in particular if $ \mathcal V $ has no $q_{s}+1$-torsion for any $s \in S_{M'_2 \cap M_Q}^{\mathrm{aff}}$. \end{proposition} \begin{proof} The proposition follows from another construction of $\Phi_Q^G$ that we now describe. Proposition \ref{prop:ovvv} gives the $\mathcal{H}_R$-module isomorphism $$v\otimes f_{Q\mathcal{U}}\mapsto v\otimes 1_{\mathcal{H}}:(e(\mathcal{V})\otimes_R (\Ind_Q^G {\bf 1})^\mathcal{U})\to \Ind_{\mathcal{H}_Q}^\mathcal{H} (e_{\mathcal{H}_Q}(\mathcal{V}))=e_{\mathcal{H}_Q}(\mathcal{V}) \otimes_{\mathcal{H}_{M_{Q,R}^+},\theta} \mathcal{H}.$$ We have the $R[G]$-isomorphism \cite[Corollary~4.7]{arXiv:1703.04921} $$v\otimes 1_{\mathcal{H}} \otimes1_{\mathcal{U}}\mapsto f_{Q\mathcal{U}, v\otimes 1_{\mathcal{U}_{M_Q}}}:\Ind_{\mathcal{H}_Q}^\mathcal{H} (e_{\mathcal{H}_Q}(\mathcal{V})\otimes_{\mathcal{H}_{ R}} \mathbb X ) \to \Ind_Q^G(e_{\mathcal{H}_Q}(\mathcal{V})\otimes_{\mathcal{H}_{Q, R}} \mathbb X_{M_Q} )$$ and the $R[G]$-isomorphism $$f_{Q\mathcal{U},v\otimes 1_{\mathcal{U}_M}} \mapsto v\otimes 1_{\mathcal{U}_M}\otimes f_{Q\mathcal{U}}:\Ind_Q^G(e_Q(\sigma))\to e(\sigma)\otimes \Ind_Q^G {\bf 1} .$$ From $\Phi^Q$ and these three homomorphisms, there exists a unique $R[G]$-homomorphism $$(e(\mathcal{V})\otimes_R (\Ind_Q^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X \to e(\sigma)\otimes_R \Ind_Q^G {\bf 1}$$ sending $v\otimes f_{Q\mathcal{U}} \otimes1_{\mathcal{U}}$ to $v\otimes 1_{\mathcal{U}_M}\otimes f_{Q\mathcal{U}}$. We deduce that this homomorphism is equal to $\Phi_Q^G$, that $\mathcal{V} \otimes 1_{Q\mathcal{U}} \otimes1_{\mathcal{U}}$ generates $(e(\mathcal{V})\otimes_R (\Ind_Q^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X $, and that if $\Phi^Q$ is an isomorphism then $\Phi_Q^G$ is an isomorphism. By Proposition \ref{prop:QG}, if $\mathcal{V}$ has no $q_{s}+1$-torsion for any $s \in S_{M'_2 \cap M_Q}^{\mathrm{aff}}$, then $\Phi^Q$ and $\Phi_Q^G$ are isomorphisms. \end{proof} We recall that the $\mathcal{H}_{M,R}$-module $ \mathcal V $ is extensible to $\mathcal{H}$. 
\begin{proposition}\label{prop:extX} The $R[G]$-homomorphism $\Phi_{Q}^G$ induces an $R[G]$-homomorphism $$(e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X \to e(\sigma)\otimes_R \St_Q^G.$$ It is an isomorphism if $\Phi_{Q'}^G$ is an $R[G]$-isomorphism for all parabolic subgroups $Q'$ of $G$ containing $Q$, in particular if $\mathcal{V}$ has no $q_{s}+1$-torsion for any $s \in S_{M'_2 }^{\mathrm{aff}}$.\end{proposition} \begin{proof} The proof is straightforward, with the arguments already developed for Proposition \ref{prop:ovvv} and Theorem \ref{thm:main8}. The representations $e(\sigma)\otimes_R \St_Q^G$ and $(e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X $ of $G$ are the cokernels of the natural $R[G]$-homomorphisms \begin{align*}\oplus_{Q\subsetneq Q' }e(\sigma)\otimes_R \Ind_{Q'}^G {\bf 1} &\xrightarrow {\id\otimes \alpha} e(\sigma)\otimes_R \Ind_Q^G {\bf 1} ,\\ \oplus_{Q\subsetneq Q' } (e(\mathcal{V})\otimes_R (\Ind_{Q'}^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X & \xrightarrow{\id\otimes \alpha^\mathcal{U} \otimes \id} (e(\mathcal{V})\otimes_R (\Ind_{Q}^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X . \end{align*} These $R[G]$-homomorphisms form a commutative diagram with the $R[G]$-homomorphisms $\oplus_{Q\subsetneq Q' }\Phi_{Q'}^G$ and $\Phi_{Q}^G$ going from the lower line to the upper line. Indeed, let $ v \otimes f_{Q'\mathcal{U}} \otimes 1_\mathcal{U} \in (e(\mathcal{V})\otimes_R (\Ind_{Q'}^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X $. On the one hand, it goes to $ v \otimes f_{Q\mathcal{U}} \theta_{Q'}(e_Q^{Q'}) \otimes 1_\mathcal{U}\in (e(\mathcal{V})\otimes_R (\Ind_{Q}^G {\bf 1})^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X $ by the horizontal map, and then to $ v \otimes 1_{\mathcal{U}_M}\otimes f_{Q\mathcal{U}} \theta_{Q'}(e_Q^{Q'})$ by the vertical map. On the other hand, it goes to $ v \otimes 1_{\mathcal{U}_M}\otimes f_{Q'\mathcal{U}}$ by the vertical map, and then to $ v \otimes 1_{\mathcal{U}_M}\otimes f_{Q\mathcal{U}} \theta_{Q'}(e_Q^{Q'})$ by the horizontal map. One deduces that $\Phi_{Q}^G$ induces an $R[G]$-homomorphism $( e(\mathcal V)\otimes_R (\St_Q^G)^\mathcal{U}) \otimes_{\mathcal{H}_{ R}} \mathbb X \to e(\sigma)\otimes_R \St_Q^G$, which is an isomorphism if $\Phi_{Q'}^G$ is an $R[G]$-isomorphism for all $Q\subset Q'$. \end{proof} \subsection{General} We consider now the general case: let $P=MN\subset Q$ be two standard parabolic subgroups of $G$ and $\mathcal{V}$ a non-zero right $\mathcal{H}_{M,R}$-module with $Q\subset P(\mathcal{V})$. We recall $ I_\mathcal{H}(P,\mathcal{V},Q)= \Ind_{\mathcal{H}_{M(\mathcal{V})}}^{\mathcal{H}}(e(\mathcal{V})\otimes_R (\St_{Q }^{P(\mathcal{V})})^{\mathcal{U}_{M(\mathcal{V})}}) $ and $\sigma=\mathcal V \otimes_{\mathcal{H}_{M,R}} \mathbb X_M$ (Proposition \ref{prop:orthtriple}). 
There is a natural $R[G]$-homomorphism $$ I_\mathcal{H}(P,\mathcal{V},Q)\otimes _{\mathcal{H}_R} \mathbb X \xrightarrow{\Phi_{I}^G} \Ind_{P(\mathcal{V})}^G(e_{M(\mathcal{V})}(\sigma)\otimes_R \St_{Q }^{P(\mathcal{V})} )$$ obtained by composition of the $R[G]$-isomorphism \cite[Corollary~4.7]{arXiv:1703.04921} (proof of Proposition \ref{prop:fourhom}): $$ I_\mathcal{H}(P,\mathcal{V},Q)\otimes _{\mathcal{H}_R} \mathbb X \to \Ind_{P(\mathcal{V})}^G((e(\mathcal{V})\otimes_R (\St_{Q\cap M(\mathcal{V}) }^{M(\mathcal{V})})^{\mathcal{U}_{M(\mathcal{V})}}) \otimes_{\mathcal{H}_{M(\mathcal{V}), R}} \mathbb X_{M(\mathcal{V})} ),$$ with the $R[G]$-homomorphism $$\Ind_{P(\mathcal{V})}^G((e(\mathcal{V})\otimes_R (\St_{Q }^{P(\mathcal{V})})^{\mathcal{U}_{M(\mathcal{V})}}) \otimes_{\mathcal{H}_{M(\mathcal{V}), R}} \mathbb X_{M(\mathcal{V})} ) \to \Ind_{P(\mathcal{V})}^G(e_{M(\mathcal{V})}(\sigma)\otimes_R \St_{Q }^{P(\mathcal{V})}),$$ image by the parabolic induction $\Ind_{P(\mathcal{V})}^G$ of the homomorphism $$(e(\mathcal{V})\otimes_R (\St_{Q}^{P(\mathcal{V})})^{\mathcal{U}_{M(\mathcal{V})}}) \otimes_{\mathcal{H}_{M(\mathcal{V}), R}} \mathbb X_{M(\mathcal{V})} \to e_{M(\mathcal{V})}(\sigma)\otimes_R \St_Q^{P(\mathcal{V})} .$$ induced by the $R[M(\mathcal{V})]$-homomorphism $\Phi_Q^{P(\mathcal{V})}=\Phi_{Q\cap M(\mathcal{V})}^{M(\mathcal{V})}$ of Proposition \ref{prop:extX} applied to $M(\mathcal{V})$ instead of $G$. This homomorphism $\Phi_{I}^G$ is an isomorphism if $\Phi_{Q}^{P(\mathcal{V})}$ is an isomorphism, in particular if $\mathcal{V}$ has no $q_{s}+1$-torsion for any $s \in S_{M'_2 }^{\mathrm{aff}}$ where $\Delta_{M_2}= \Delta_{M(\mathcal{V})}\setminus\Delta_M$ (Proposition \ref{prop:extX}). We get the main theorem of this section: \begin{theorem} \label{thm:10.10} Let $(P=MN,\mathcal{V},Q)$ be an $\mathcal{H}_R$-triple and $\sigma=\mathcal V \otimes_{\mathcal{H}_{M,R}}R[\mathcal{U}_M\backslash M]$. Then, $(P,\sigma,Q)$ is an $R[G]$-triple. The $R[G]$-homomorphism $$I_\mathcal{H}(P,\mathcal V,Q)\otimes_{\mathcal{H}_R}R[\mathcal{U} \backslash G] \xrightarrow{\Phi_{I}^G} \Ind_{P(\mathcal{V})}^G(e_{M(\mathcal{V})}(\sigma)\otimes_R \St_{Q }^{P(\mathcal{V})} )$$ is an isomorphism if $\Phi_{Q}^{P(\mathcal{V})}$ is an isomorphism. In particular $\Phi_{I}^G$ is an isomorphism if $\mathcal{V}$ has no $q_{s}+1$-torsion for any $s \in S_{M'_2 }^{\mathrm{aff}}$. \end{theorem} Recalling $I_G(P,\sigma,Q) = \Ind_{P(\sigma)}^G( e (\sigma)\otimes_R \St_Q^{P(\sigma)} ) $ when $\sigma\neq 0$, we deduce: \begin{corollary} \label{cor:10.12}We have: $I_\mathcal{H}(P,\mathcal V,Q)\otimes_{\mathcal{H}_R} R[\mathcal{U} \backslash G] \simeq I_G(P, \sigma,Q) $, if $\sigma\neq 0$, $P(\mathcal V)=P( \sigma)$ and $\mathcal{V}$ has no $q_{s}+1$-torsion for any $s \in S_{M'_2 }^{\mathrm{aff}}$. $I_\mathcal{H}(P,\mathcal V,Q)\otimes_{\mathcal{H}_R} R[\mathcal{U} \backslash G]=I_G(P,\sigma,Q) =0$, if $\sigma=0$. \end{corollary} Recalling $P(\mathcal V)=P( \sigma)$ if $\sigma\neq 0$, $R$ is a field of characteristic $p$ and $\mathcal{V}$ simple supersingular (Proposition \ref{prop:orthtriple} 4)), we deduce: \begin{corollary} \label{cor:10.13} $I_\mathcal{H}(P,\mathcal V,Q)\otimes_{\mathcal{H}_R} R[\mathcal{U} \backslash G] \simeq I_G(P, \sigma,Q) $ if $R$ is a field of characteristic $p$ and $\mathcal{V}$ simple supersingular. \end{corollary} \section{Vanishing of the smooth dual }\label{S:9} Let $V$ be an $R[G]$-module. 
The dual $\Hom _R (V,R)$ of $V$ is an $R[G]$-module for the contragredient action: $(gL) (gv)=L(v)$ if $g\in G$, $L\in \Hom _R (V,R)$ is a linear form and $v\in V$. When $V\in \Mod_R^\infty(G)$ is a smooth $R$-representation of $G$, the dual of $V$ is not necessarily smooth. A linear form $L$ is smooth if there exists an open subgroup $H\subset G$ such that $L(hv)=L(v)$ for all $h\in H, v\in V$; the space $\Hom _R (V,R)^\infty$ of smooth linear forms is a smooth $R$-representation of $G$, called the {\bf smooth dual} (or smooth contragredient) of $V$. The smooth dual of $V$ is contained in the dual of $V$. \begin{example} When $R$ is a field and the dimension of $V$ over $R$ is finite, the dual of $V$ is equal to the smooth dual of $V$ because the kernel of the action of $G$ on $V$ is an open normal subgroup $H\subset G$; the action of $G$ on the dual $\Hom _R (V,R)$ is trivial on $H$. \end{example} We assume in this section that $R$ is a field of characteristic $p$. Let $P=MN$ be a parabolic subgroup of $G$ and $V\in \Mod_R^\infty (M)$. Generalizing the proof given in \cite[8.1]{MR2392364} when $G=GL(2,F)$ and the dimension of $V$ is $1$, we show: \begin{proposition}\label{He} If $P\neq G$, the smooth dual of $\Ind_P^G (V)$ is $0$. \end{proposition} \begin{proof} Let $L$ be a smooth linear form on $\Ind_P^G (V)$ and $K$ an open pro-$p$-subgroup of $G$ which fixes $L$. Let $J$ be an arbitrary open subgroup of $K$, $g\in G$ and $f\in (\Ind_P^G (V))^J$ with support $PgJ$. We want to show that $L(f)=0$. Let $J' $ be any open normal subgroup of $ J$ and let $\varphi$ denote the function in $(\Ind_P^G (V))^{J'}$ with support $PgJ'$ and value $\varphi(g)=f(g)$ at $g$. For $j\in J$ we have $L(j\varphi)=L(\varphi)$, and the support of $ j\varphi$, $(j\varphi)(x)= \varphi (xj)$, is $PgJ'j^{-1}$. The function $f$ is the sum of translates $ j \varphi$, where $ j$ ranges through the left cosets of the image $X$ of $g^{-1}Pg\cap J$ in $J/J' $, so that $L(f)= r L(\varphi)$ where $r$ is the index of $X$ in $J/J'$. We can certainly find $J'$ such that $r\neq 1$, and then $r$ is a positive power of $p$. As the characteristic of $R$ is $p$, we have $L(f)=0$. \end{proof} The module $R[\mathcal{U}\backslash G]$ is contained in the module $R^{\mathcal{U}\backslash G}$ of functions $f: \mathcal{U}\backslash G\to R$. The actions of $\mathcal{H}$ and of $G$ on $R[\mathcal{U}\backslash G]$ extend to $R^{\mathcal{U}\backslash G}$ by the same formulas. The pairing $$(f,\varphi)\mapsto \langle f,\varphi\rangle=\sum_{g\in \mathcal{U}\backslash G} f(g) \varphi (g):R^{\mathcal{U}\backslash G}\times R[\mathcal{U}\backslash G]\to R$$ identifies $R^{\mathcal{U}\backslash G}$ with the dual of $R[\mathcal{U}\backslash G]$. Let $h\in \mathcal{H}$ and let $\check h\in \mathcal{H}$ be defined by $\check h (g)=h(g^{-1})$ for $g\in G$. We have $$\langle f,h\varphi\rangle= \langle \check hf,\varphi\rangle. $$ \begin{proposition}\label{arXiv:1406.1003_accepted} When $R$ is an algebraically closed field of characteristic $p$, $G$ is not compact modulo the center and $\mathcal V$ is a simple supersingular right $\mathcal{H}_R$-module, the smooth dual of $\mathcal{V}\otimes_{\mathcal{H}_R}R[{\mathcal{U}}\backslash G]$ is $0$. \end{proposition} \begin{proof} Let $\mathcal{H}_R^{\mathrm{aff}}$ be the subalgebra of $\mathcal{H}_R$ of basis $(T_w)_{w\in W'(1)}$ where $W'(1)$ is the inverse image of $W'$ in $W(1)$. 
The dual of $\mathcal{V}\otimes_{\mathcal{H}_R}R[{\mathcal{U}}\backslash G]$ is contained in the dual of $\mathcal{V}\otimes_{\mathcal{H}_R^{\mathrm{aff}}}R[{\mathcal{U}}\backslash G]$; the $\mathcal{H}_R^{\mathrm{aff}}$-module $\mathcal{V}|_{ \mathcal{H}_R^{\mathrm{aff}}}$ is a finite sum of supersingular characters \cite{Vigneras-prop-III}. Let $\chi: \mathcal{H}_R^{\mathrm{aff}}\to R$ be a supersingular character. The dual of $\chi\otimes_{\mathcal{H}_R^{\mathrm{aff}}} R[{\mathcal{U}}\backslash G]$ is contained in the dual of $R[{\mathcal{U}}\backslash G]$, isomorphic to $R^{\mathcal{U}\backslash G}$. It is the space of $f\in R^{\mathcal{U}\backslash G}$ with $\check h f =\chi(h)f$ for all $h\in \mathcal{H}_R^{\mathrm{aff}} $. The smooth dual of $\chi\otimes_{\mathcal{H}_R^{\mathrm{aff}}} R[{\mathcal{U}}\backslash G]$ is $0$ if the dual of $\chi\otimes_{\mathcal{H}_R^{\mathrm{aff}}} R[{\mathcal{U}}\backslash G]$ has no non-zero element fixed by $\mathcal{U}$. Let us take $f\in R^{\mathcal{U}\backslash G/\mathcal{U}}$ with $\check h f =\chi(h)f$ for all $h\in \mathcal{H}_R^{\mathrm{aff}} $. We shall prove that $f=0$. We have $\check T_w=T_{w^{-1}}$ for $w\in W(1)$. The elements $(T_t)_{t\in Z_k}$ and $(T_{\tilde s})_{s\in S^{\mathrm{aff}}}$, where $\tilde s$ is an admissible lift of $s$ in $ W^{\mathrm{aff}}(1)$, generate the algebra $ \mathcal{H}_R^{\mathrm{aff}} $ and $$T_t T_w=T_{tw},\quad T_{\tilde s}T_w=\begin{cases} T_{\tilde s w} & \tilde s w>w,\\ c_{\tilde s}T_w& \tilde s w<w. \end{cases} $$ with $c_{\tilde s}= - |Z'_{k,s}|^{-1}\sum _{t\in Z'_{k,s}}T_t$ because the characteristic of $R$ is $p$ \cite[Proposition 4.4]{MR3484112}. Writing $f$ as a (possibly infinite) sum $ f = \sum_{ w \in W(1)}a_{ w} T_{ w} $, $a_w\in R$, we have $$T_{t}f=\sum_{w \in W(1)}a_{t^{-1} w} T_{ w}, \quad T_{\tilde s}f=\sum_{w \in W(1), \tilde s w < w} (a_{(\tilde s)^{-1} w} + a_{ w} c_{\tilde s})T_{ w} ,$$ where $<$ denotes the Bruhat order of $W(1)$ associated to $S^{\mathrm{aff}}$ \cite{MR3484112} and \cite[Proposition 4.4]{MR3484112}. A character $\chi$ of $ \mathcal{H}_R^{\mathrm{aff}} $ is associated to a character $\chi_k:Z_k\to R^*$ and a subset $J $ of $$S_{\chi_k}^{\mathrm{aff}}=\{ s\in S ^{\mathrm{aff}} \ | \ (\chi_k)|_ {Z'_{k,s}} \ \text{ trivial} \ \} $$ \cite[Definition 2.7]{Vigneras-prop-III}. We have \begin{equation}\label{cf} \begin{cases} \chi(T_t )=\chi_k(t) \quad \quad \ t\in Z_k,\\ \chi(T_{\tilde s}) = \begin{cases} 0 & s\in S^{\mathrm{aff}}\setminus J , \\ -1 & s\in J. \end{cases} \end{cases} \quad \chi_k(c_{\tilde s})= \begin{cases} 0 & s\in S^{\mathrm{aff}}\setminus S_{\chi_k}^{\mathrm{aff}} , \\ -1 & s\in S_{\chi_k}^{\mathrm{aff}}. \end{cases} \end{equation} Therefore $ \chi_k(t)f= \check T_{t}f=T_{t^{-1}}f$ hence $\chi_k(t) a_w= a_{tw}$. We have $\chi(T_{\tilde s})f=\check T_{\tilde s}f= T_{(\tilde s)^{-1}}f = T_{\tilde s} T_{(\tilde s)^{-2}} f=\chi_k((\tilde s)^2) T_{\tilde s}f $; as $(\tilde s) ^2 \in Z'_{k,s}$ \cite[three lines before Proposition 4.4]{MR3484112} and $J\subset S_{\chi_k}^{\mathrm{aff}}$, we obtain \begin{equation}\label{eq:Tsf}T_{\tilde s}f=\begin{cases} 0 & s\in S^{\mathrm{aff}}\setminus J , \\ -f & s\in J. 
\end{cases} \end{equation} Introducing $\chi_k(t) a_w= a_{tw}$ in the formula for $T_{\tilde s}f $, we get \begin{align*}\sum_{w \in W(1), \tilde s w < w} a_{ w} c_{\tilde s} T_{ w} &=-|Z'_{k,s}|^{-1}\sum_{w \in W(1), \tilde s w < w, t\in Z'_{k,s}} a_{ w} T_{t w}\\ & =- |Z'_{k,s}|^{-1}\sum_{w \in W(1), \tilde s w < w, t\in Z'_{k,s}} a_{t^{-1} w} T_{w}\\ &=- |Z'_{k,s}|^{-1}\sum_{t\in Z'_{k,s}}\chi_k(t^{-1})\sum_{w \in W(1), \tilde s w < w} a_{ w} T_{w}\\ & =\chi_k(c_{\tilde s})\sum_{w \in W(1), \tilde s w < w} a_{ w} T_{w}. \end{align*} \begin{align*} T_{\tilde s}f & =\sum_{w \in W(1), \tilde s w < w} (a_{(\tilde s)^{-1} w}+ a_{ w} \chi_k(c_{\tilde s}))T_{ w}\\ &= \begin{cases}\sum_{w \in W(1), \tilde s w < w} a_{(\tilde s)^{-1} w}T_{ w} & s\in S^{\mathrm{aff}}\setminus S_{\chi_k}^{\mathrm{aff}}, \\ \sum_{w \in W(1), \tilde s w < w} (a_{(\tilde s)^{-1} w}- a_{ w})T_{ w} & s\in S_{\chi_k}^{\mathrm{aff}} . \end{cases} \end{align*} From the last equality and \eqref{eq:Tsf} for $T_{\tilde s}f $, we get: \begin{equation}\label{df} a_{\tilde s w} = \begin{cases}0 &s \in J \cup (S^{\mathrm{aff}}\setminus S_{\chi_k}^{\mathrm{aff}}), \tilde s w < w ,\\ a_{w} & s\in S_{\chi_k}^{\mathrm{aff}} \setminus J . \end{cases} \end{equation} Assume that $a_ {w}\ne 0$. By the first condition, we know that $w>\tilde s w$ for $s \in J \cup (S^{\mathrm{aff}}\setminus S_{\chi_k}^{\mathrm{aff}})$. The character $\chi$ is supersingular if for each irreducible component $X$ of $S^{\mathrm{aff}}$, the intersection $X\cap J$ is not empty and different from $X$ \cite[Definition 2.7, Theorem 6.18]{Vigneras-prop-III}. This implies that the group generated by the $s\in S_{\chi_k}^{\mathrm{aff}} \setminus J$ is finite. If $\chi$ is supersingular, by the second condition we can suppose $w>\tilde s w$ for any $s\in S^{\mathrm{aff}}$. But there is no such element if $S^{\mathrm{aff}}$ is not empty. \end{proof} \begin{theorem}\label{contra} Let $\pi$ be an irreducible admissible $R$-representation of $G$ with a non-zero smooth dual where $R$ is an algebraically closed field of characteristic $p$. Then $\pi$ is finite dimensional. \end{theorem} \begin{proof} Let $(P,\sigma,Q)$ be a $R[G]$-triple with $\sigma$ supercuspidal such that $\pi \simeq I_G(P,\sigma,Q)$. The representation $I_G(P,\sigma,Q)$ is a quotient of $\Ind_Q^G e_Q(\sigma)$ hence the smooth dual of $\Ind_Q^G e_Q(\sigma)$ is not zero. From Proposition \ref{He}, $Q=G$. We have $I_G(P,\sigma,G)=e(\sigma)$. The smooth dual of $\sigma$ contains the smooth linear dual of $e(\sigma)$ hence is not zero. As $\sigma$ is supercuspidal, the $\mathcal{H}_{M}$-module $\sigma^{\mathcal{U}_M}$ contains a simple supersingular submodule $\mathcal V$ \cite[Proposition 7.10, Corollary~7.11]{Vigneras-prop-III}. The functor $-\otimes_{\mathcal{H}_{M,R}} R[{\mathcal{U}_M}\backslash M]$ being the right adjoint of $(-)^{\mathcal{U}_M}$, the irreducible representation $\sigma$ is a quotient of $\mathcal V\otimes_{\mathcal{H}_{M,R}} R[{\mathcal{U}_M}\backslash M]$, hence the smooth dual of $\mathcal V\otimes_{\mathcal{H}_{M,R}} R[{\mathcal{U}_M}\backslash M]$ is not zero. By Proposition \ref{arXiv:1406.1003_accepted}, $M=Z $. Hence $\sigma$ is finite dimensional and the same is true for $e(\sigma)= I_G(B,\sigma,G) \simeq \pi$. \end{proof} \begin{remark} When the characteristic of $F$ is $0$, Theorem \ref{contra} was proved by Kohlhaase for a field $R$ of characteristic $p$. 
He gives two proofs \cite[Proposition 3.9, Remark 3.10]{kohlhasse-smooth-duality}, but none of them extends to $F$ of characteristic $p$. Our proof is valid without restriction on the characteristic of $F$ and does not use the results of Kohlhaase. Our assumption that $R$ is an algebraically closed field of characteristic $p$ comes from the classification theorem in \cite{MR3600042}. \end{remark}
\section{Introduction} \label{sec:introduction} Molecular dynamics simulation is a powerful method that allows simulating, at the particle scale, different materials such as colloidal systems \citep{boattini_unsupervised_2019}, glassy materials \citep{barbot_rejuvenation_2020,barbot_characterization_2020} or metallic nanocrystals \citep{amodeo_mechanical_2017}. To help in interpreting the simulation results, being able to determine the local structure at the particle scale is essential. To do so, several approaches were developed, mainly relying on local order parameters describing the surrounding environment of each particle (number of neighbours, angles formed with the neighbours, ...) to detect underlying substructures in the simulated atomistic sample. Among these methods, we can cite the Bond Orientational Order (BOO) parameter \citep{steinhardt_bond-orientational_1983}, the Common Neighbours Analysis (CNA) \citep{honeycutt_molecular_1987}, and the Bond-Angle Distribution (BAD) \citep{ackland_applications_2006}. Such methods were applied with success to study several phenomena such as crystal nucleation \citep{sanz_out--equilibrium_2008}, melting \citep{noori_study_2015} or plasticity \citep{ackland_applications_2006,stukowski_structure_2012,amodeo_atomistic_2014}. However, they mostly rely on hand-chosen criteria and thus only work for already known structures. Recently, different approaches using machine learning to detect substructures from local order parameters were developed. The first models, which relied on supervised learning \citep{geiger_neural_2013,dietz_machine-learning_2017,boattini_neural-network-based_2018}, suffered from the same problem as the hand-chosen techniques: they could only be trained to find an expected structure in the studied systems. However, a recent work by Boattini \textit{et al.} \citep{boattini_unsupervised_2019} successfully applied an unsupervised learning method using the BOO parameter to automatically detect different structures within a colloidal material without relying on a priori knowledge of the underlying structures. This method also has the advantage of being easy to implement and computationally fast. While the BOO parameter is able to discriminate between the different structures in crystalline materials, it does not perform well when the simulated system is under deformation \citep{lechner_accurate_2008}. Being able to automatically detect substructures within crystals under plastic deformation, including previously undetected and unexpected cases, would open the way to a better understanding of crystal plasticity at the atomic scale. This requires finding a local order parameter suitable for discriminating substructures at the atomistic level in elastically and plastically deformed systems. In this study, we present a method inspired by the work of Boattini \textit{et al.} \citep{boattini_unsupervised_2019} to automatically detect the different substructures appearing at the atomistic scale within a crystal under plastic deformation. This approach relies on the BAD parameters, designed for single crystals, to describe the environment around each atom, as highlighted in \citep{ackland_applications_2006}. This local parameter was already used in previous structure detection approaches relying on bond-angle distributions, showing the ability to discriminate structures associated with crystal plasticity \citep{ackland_applications_2006,amodeo_atomistic_2014}. 
Our method consists of extracting the most pertinent BAD parameters using an autoencoder neural network \citep{rumelhart_learning_1986,dietz_machine-learning_2017} before applying two clustering models: K-means \citep{lloyd_least_1982,macqueen_methods_1967} and DBSCAN \citep{ester_density-based_1996}. It has the advantage of being computationally fast and easy to implement. We finally test our approach on an FCC single crystal of nickel under plastic deformation. \section{Methods} \label{sec:methods} This section is divided into three main parts. We first describe the system on which we apply our structure detection method. We then depict the BAD parameters we use to capture the local environment of each particle. The last section focuses on the algorithms we use to detect the local atomistic substructures in the system from the BAD parameters. \subsection{System} \label{subsec:system} In this study, we perform Molecular Dynamics (MD) simulations using LAMMPS \citep{thompson_lammps_2022} to plastically deform a defect-free Ni FCC single crystal. These simulations are then used as a benchmark to apply our method, based on unsupervised learning, developed to detect the different structures emerging in the system during plastic deformation. We first create a cubic sample with edge lengths equal to 20 nm and free surfaces oriented along the $\langle 100 \rangle$ directions as shown on Fig \ref{fig:system} (a). The interatomic interaction linking the atoms is the EAM potential for Ni \citep{adams_self-diffusion_1989}. The system is then equilibrated for 5 ps at 5 K using the Nose-Hoover thermostat \citep{nose_unified_1984,hoover_canonical_1985}. A deformation is imposed using uniaxial compression with a flat indenter along the $[100]$ direction, with a strain rate of $10^8 s^{-1}$. The flat indenter is modeled as an infinite plane exerting a repulsive force on atoms defined by \begin{equation} F(r)=-K(r-R)^2, \end{equation} with $r$ being the atom position, $R$ the plane position and $K$ the force constant. Here we choose $K=1000$ eV/\AA$^3$. The temperature is maintained at 5 K during the compression through the Nose-Hoover thermostat. To determine when the first plastic event occurs during the deformation, we measure on the fly the compressive stress applied on the system by the indenter, using a method similar to the one used in \citep{amodeo_atomistic_2014}, and look for the first stress drop, which corresponds to the first plastic event. The stress-strain curve obtained from the compression is shown on Fig \ref{fig:system} (b), with the first plastic event represented by a vertical red line. In our simulations, we observe the nucleation of leading Shockley partial dislocations with Burgers vectors: $\{ \frac{1}{6}[-\!1 \; 1 \; 2],\frac{1}{6}[1 \; -\!1 \; 2],\frac{1}{6}[-\!1 \; -\!1 \; 2],\frac{1}{6}[-\!1 \; -\!1 \; -\!2] \}$. The dislocation lines are thus accompanied by a stacking fault with the HCP structure \citep{hull_chapter_2011}. \begin{figure}[h!] \centering \includegraphics[width=11cm]{SystemIntro.jpg} \caption{(a) Snapshot made with Ovito \citep{stukowski_visualization_2009} of the system used in this study. It consists of a cubic nanoparticle of nickel with edge length equal to 20 nm. The two arrows represent the uniaxial compression performed with a flat indenter along the $[100]$ direction. The stress-strain curve obtained from this compression is shown on (b).
The vertical red line corresponds to the occurrence of the first plastic event.} \label{fig:system} \end{figure} \subsection{Bond Angle Distribution Parameter} \label{subsec:BAD} To describe the local structural environment around each particle, we measure the Bond-Angle Distribution (BAD) parameters for each of them, based on the method described in \citep{ackland_applications_2006}. For each atom i, we first extract its nearest neighbors $N_b(i)$ using the adaptive cutoff method as described in \citep{stukowski_structure_2012}. This method has the advantage of being parameter-free, so it can be directly applied to any structure, while being just slightly slower than using a fixed cutoff \citep{stukowski_structure_2012}. We define the bond-angle $\theta_{jik}$ with $j,k\in \{1..N_b(i)\}$ being two nearest neighbors of i. We then estimate $\{cos(\theta_{jik})\}$ for all the $N_b(i)(N_b(i)-1)/2$ neighbor pairs of atom i. From these, we define nine ranges of $\{cos(\theta_{jik})\}$ over which the BAD parameters $\{\chi_l \}_{l \in \{0..8\}}$ are estimated as shown in Table \ref{Table:BAD}. These ranges are based on those from \citep{ackland_applications_2006} which were optimised to differentiate crystal structures. The idea here is to count the number of bond-angle cosines in each range to obtain the value of the corresponding $\{\chi_l \}_{l \in \{0..8\}}$. \begin{table}[h!] \caption{Definition of the nine bond-angle cosine ranges over which the BAD parameters $\chi$ are estimated. $cos(\theta_{jik})$ is the cosine of the angle between $\bold{r}_{ij}$ and $\bold{r}_{ik}$, with $j,k$ being two nearest neighbors of the atom i. The ranges were taken from \citep{ackland_applications_2006} and were optimised to differentiate crystal structures.} \vspace{0.2cm} \centering \begin{tabular}{c c c} \hline BAD Parameter & Minimum $cos(\theta_{jik})$ & Maximum $cos(\theta_{jik})$ \\ [0.5ex] \hline $\chi_0$ & -1.0 & -0.945 \\ $\chi_1$ & -0.945 & -0.915 \\ $\chi_2$ & -0.915 & -0.755 \\ $\chi_3$ & -0.755 & -0.705\\ $\chi_4$ & -0.705 & -0.195 \\ $\chi_5$ & -0.195 & 0.195 \\ $\chi_6$ & 0.195 & 0.245\\ $\chi_7$ & 0.245 & 0.795 \\ $\chi_8$ & 0.795 & 1.0 \\ [1ex] \hline \end{tabular} \label{Table:BAD} \end{table} For instance, let us consider a case where an atom i has six bond-angle cosines. If two of them are in the range $[-1.0,-0.945]$ and four of them are in the range $[-0.945,-0.915]$, then for this atom, we will have $\chi_0=2$, $\chi_1=4$ and $\{\chi_l \}_{l \in \{2..8\}}=0$. Note that in this paper, the BAD parameters were calculated using the package Pyscal \citep{menon_pyscal_2019}. \subsection{Unsupervised learning} \label{subsec:UnsupervisedLearning} From the knowledge of the BAD parameters for each atom, the aim of our method is to determine automatically the different structures present in the system. This part is done through unsupervised learning and can be decomposed into two main tasks. First we use an autoencoder to determine the relevant number of parameters needed to describe the local structures in our system. Then, we apply a perturbative method to the autoencoder to determine which parameters are the most effective. From this, we focus on the most effective parameters, applying clustering and classification methods to extract the different substructures of our system. Note that, as the surface atoms can be easily filtered out from their number of nearest neighbours as in \citep{amodeo_atomistic_2014}, i.e.
atoms for which $N_b<11$, we filter out the surface atoms using this criterion and focus on the structures within the inner atoms. \subsubsection{Autoencoder} \label{subsubsec:Autoencoder} As explained above, the aim of this section is to extract the number of pertinent parameters to determine the different structures of our system. Indeed, out of the nine BAD parameters, some could be irrelevant or redundant to describe the local structural environment around each atom. This task is here performed using an autoencoder-based neural network to perform dimensionality reduction \citep{goodfellow_deep_2016,chen_collective_2018,boattini_unsupervised_2019}. The autoencoder network can be divided into two parts: the encoder part and the decoder part as shown on Fig \ref{fig:autoencoderschematics}. The encoder aims at encoding the input into a lower dimension: the bottleneck. Then the decoder will try to reconstruct the input from the bottleneck. The output of the network will then be the reconstruction of the input done by the decoder from the bottleneck. \begin{figure}[h!] \centering \includegraphics[width=6cm]{ComposeFig7v4.png} \caption{Schematic representation of the autoencoder neural network architecture for dimensional reduction. The neural network is trained in order to reproduce the input at the output while passing through a bottleneck of lower dimensions. The encoder network aims at finding a low-dimension representation of the input. The decoder network is trained to reconstruct the original input from the lower-dimension bottleneck.} \label{fig:autoencoderschematics} \end{figure} In practice, the input of the encoder is $\bold{\chi}(i)= \{\chi_l \}_{l \in \{0..8\}}(i) \in \mathbb{N}^9$, with $i \in [1..N]$, N being the number of atoms over which the training is performed. We train the autoencoder to reduce the difference between the input and the output for bottleneck dimensions going from 1 to 9. To determine the pertinent dimension of the input, we measure for each bottleneck dimension $N_{bneck}$ the mean square error between the input $\bold{\chi}$ and the output $\bold{\hat{\chi}}$ as \begin{equation} E_{N_{bneck}}=\frac{1}{N} \sum^N_{i=1} \Vert \bold{\chi}(i)-\bold{\hat{\chi}}(i)\Vert. \end{equation} We then look for the dimension from which we have the minimum difference between the input and the output. For more details about the implementation of the autoencoder, please refer to the paper \citep{boattini_unsupervised_2019} which served as a reference for this section. \subsubsection{Relevant parameters identification} \label{subsubsec:RelevantParameters} After determining the relevant number of parameters to study the substructures in our system, we want to go beyond the black box of the neural network and determine which parameters are most pertinent for our study. To do so, we use the input perturbation method \citep{yao_forecasting_1998,scardi_developing_1999,gevrey_review_2003,olden_accurate_2004}. The principle of this method is to take the trained autoencoder neural network with the optimal bottleneck dimension, and to perturb one of the input parameters by increasing its value by a fixed amount. In this work, we increase the chosen input value by 10\% which was shown to give satisfactory results \citep{boattini_unsupervised_2019}.
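For illustration, the BAD binning of Table \ref{Table:BAD} and the bottleneck scan described above can be prototyped in a few lines of Python. The sketch below is not the implementation used in this work (which relies on Pyscal \citep{menon_pyscal_2019} for the BAD parameters and on a deeper multi-layer autoencoder); it uses NumPy histograms for the binning and scikit-learn's \texttt{MLPRegressor}, trained to reproduce its own input, as a single-hidden-layer stand-in for the autoencoder, with all parameter values being illustrative only.
\begin{verbatim}
# Minimal sketch (not the authors' implementation): bin bond-angle cosines into
# the nine BAD parameters of Table 1, then scan the autoencoder bottleneck size.
import numpy as np
from sklearn.neural_network import MLPRegressor

BAD_EDGES = np.array([-1.0, -0.945, -0.915, -0.755, -0.705,
                      -0.195, 0.195, 0.245, 0.795, 1.0])   # Table 1 bin edges

def bad_parameters(cos_thetas):
    """Count the bond-angle cosines of one atom falling in each of the 9 ranges."""
    chi, _ = np.histogram(cos_thetas, bins=BAD_EDGES)
    return chi                                   # (chi_0, ..., chi_8)

def bottleneck_scan(X, dims=range(1, 10), seed=0):
    """Reconstruction error E_Nbneck for each bottleneck dimension."""
    errors = {}
    for d in dims:
        ae = MLPRegressor(hidden_layer_sizes=(d,), activation="relu",
                          max_iter=5000, random_state=seed)
        ae.fit(X, X)                             # train to reproduce the input
        X_hat = ae.predict(X)
        errors[d] = np.mean(np.linalg.norm(X - X_hat, axis=1))
    return errors

# Toy usage with synthetic cosines (in practice X holds the per-atom BAD vectors,
# possibly normalised before training).
rng = np.random.default_rng(0)
X = np.array([bad_parameters(rng.uniform(-1, 1, 66)) for _ in range(500)], float)
print(bottleneck_scan(X))
\end{verbatim}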
After perturbing one of the inputs, we look at how much this perturbation influenced the output by calculating: \begin{equation} E^{pert}_k=\frac{1}{N} \sum^N_{i=1} \Vert \bold{\chi}(i)-\bold{\hat{\chi}}^{pert}_k(i)\Vert, \label{Eq:PerturbatedError} \end{equation} with $k \in [0..8]$ being the index of the perturbed input, $\bold{\hat{\chi}}^{pert}_k$ the output after the perturbation and $\bold{\chi}$ the input without perturbation. If the perturbed input $k$ was considered during the training as carrying no pertinent information to reconstruct the input at the output, the perturbation will not propagate to the output. This will lead to a very small value of $E^{pert}_k$. On the contrary, if the perturbed input k is necessary to obtain an output similar to the input, its perturbation will lead to a higher value of $E^{pert}_k$. From this, we determine the relative importance $RI_k$ of the input k as: \begin{equation} RI_k=\frac{\Delta E_k}{\sum_{j=0}^8 \Delta E_j}, \label{Eq:RelativeImportance} \end{equation} with $\Delta E_k=E^{pert}_k-E$ being the difference between the autoencoder mean square error after perturbation of the input k and without perturbation. From this relative importance index, we can determine the importance of each input during the autoencoder training. \subsubsection{Clustering and classification} \label{subsubsec:ClusteringClassification} In the previous section, we described how to isolate the most relevant BAD parameters to study the local structure. In our system most of the atoms will be in two structures, FCC and HCP (in the stacking fault), which correspond to close values of the BAD parameters \citep{ackland_applications_2006}. While most structure detection approaches focus on the already known main crystalline structures, i.e. FCC, BCC and HCP \citep{stukowski_structure_2012}, the goal of our method is also to be able to automatically detect substructures without having a priori knowledge about them. To this end, we chose to apply two clustering methods in the relevant BAD parameter space, one to detect the main structures containing most of the atoms, and the second one to focus on the substructures containing few atoms but located far from each other in the parameter space. The clusters extracted by these two clustering approaches will then be combined to identify the different substructures of the studied system. In the next sections, we first introduce the K-means clustering method \citep{macqueen_methods_1967,lloyd_least_1982} we use to detect the main structures. Then, we detail the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) \citep{ester_density-based_1996}, which allows us to efficiently detect more isolated clusters. While K-means is able to efficiently determine the cluster of a new data point without having to re-apply the clustering over the whole data set, this is not the case for DBSCAN. We thus need to apply a classification algorithm over the clusters detected by DBSCAN. We present in the last part the logistic regression classifier we use to perform this task. \paragraph{K-means} \label{paragraph:Kmeans} In order to detect the main substructures present in the system, we use the K-means clustering algorithm \citep{macqueen_methods_1967,lloyd_least_1982} as implemented in scikit-learn \citep{pedregosa_scikit-learn_2011}. The principle of the method is the following: let us consider a set of N data points in the parameter space that we want to separate into K clusters.
We name $\{x_i\}_{i\in[1..N]}$ the positions of the data points in the parameter space and $\{\mu_j\}_{j\in[1..K]}$ the center of each cluster, commonly called "centroid". The aim of the K-means algorithm is to choose optimal centroids in order to minimize the inertia defined by \begin{equation} IN(K)=\sum^N_{i=1} \min_{j\in[1..K]}(\Vert x_i-\mu_j \Vert^2). \label{Eq:Inertia} \end{equation} If the points are located in clusters, the inertia will be minimized by placing the centroids at the center of each cluster. While K-means requires the number of clusters as an input, the optimal number of clusters can be obtained with the following procedure: we first apply K-means to our data set for a varying number of clusters K and calculate the corresponding inertia $IN(K)$. In this paper, we did it for $K \in [1..10]$. Then we apply the elbow method to the function $IN(K)$ following the protocol described in \citep{salvador_determining_2004}. From this method, we can obtain the value of K from which the inertia stops decreasing with K, thus corresponding to the optimal number of clusters for K-means clustering. After training the K-means algorithm over the data set for the optimal cluster number K, we obtain the centroid locations $\{\mu_j\}_{j\in[1..K]}$ of the different clusters. From this information, we can determine to which cluster a new data point belongs by looking at its closest centroid. This allows us to classify new data depending on their cluster without having to retrain the algorithm each time. As the K-means method only looks at the distance between the data points and the centroids, it does not perform well for elongated clusters or for clusters with irregular shapes. However, it performs very well if most of the data points are located in the same position in space. As the main crystalline structures are each associated with a specific position in the BAD parameter space, the K-means algorithm is well suited to detect the clusters associated with these structures. \paragraph{DBSCAN} \label{paragraph:DBScan} To be able to detect clusters with a smaller number of points and non-isotropic shapes, we apply another clustering method: the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) \citep{ester_density-based_1996} as implemented in scikit-learn \citep{pedregosa_scikit-learn_2011}. The principle of the algorithm is the following: we input two parameters: (1) the maximum distance between two points $l_{max}$ and (2) the minimum number of points per cluster $N_{min}$. In this method, two data points located at a distance $l<l_{max}$ are considered as neighbors. From this, the algorithm will classify the data points into three categories: (i) if a data point has at least $N_{min}-1$ neighbors, it will then be considered as a "core point". (ii) if a data point has less than $N_{min}-1$ neighbors but at least one of them is a core point, it will be considered as a "border point". (iii) the remaining data points will be considered as "noise". The core points and the border points are then regrouped together to form clusters, located at a distance $l_{cluster}>l_{max}$ from each other. In our system, many atoms can be associated with the same values of the BAD parameters. As DBSCAN only looks at the distance between points, having many points located at the same spatial position will only slow down the clustering without improving the result. We thus apply DBSCAN over the list of the unique spatial positions of our data set.
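For concreteness, the two clustering passes can be sketched with scikit-learn as follows. This is only an illustrative outline, not the exact script used in this work: \texttt{X} stands for the array of per-atom BAD vectors restricted to the relevant parameters, and the DBSCAN parameters are placeholders whose values adopted here are given in the text below.
\begin{verbatim}
# Illustrative sketch of the two clustering passes (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

def kmeans_inertia_scan(X, k_range=range(1, 11), seed=0):
    """Inertia IN(K) for K = 1..10, used as input to the elbow method."""
    return {k: KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
            for k in k_range}

def dbscan_on_unique(X, eps, min_samples):
    """Run DBSCAN on the unique positions only, then map labels back to all points."""
    unique_X, inverse = np.unique(X, axis=0, return_inverse=True)
    labels_unique = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(unique_X)
    return labels_unique[inverse]        # label -1 marks points classified as noise

# Typical usage (K_opt picked at the elbow of IN(K); eps, min_samples: see text):
#   inertia = kmeans_inertia_scan(X)
#   kmeans_labels = KMeans(n_clusters=K_opt, n_init=10).fit_predict(X)
#   dbscan_labels = dbscan_on_unique(X, eps=l_max, min_samples=N_min)
\end{verbatim}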
Finally, the BAD parameters being integers, we use for the distance parameter the value $l_{max}=1$. We also determined that $N_{min}=5$ gives satisfactory results for our data set. As this method is not influenced by the number of points at a given spatial position but only by the distance between the unique spatial positions in the data set, it is able to detect clusters associated with structures containing fewer atoms but isolated from the clusters corresponding to the main crystalline structures. \paragraph{Logistic regression classifier} \label{paragraph:LogisticRegression} In contrast to K-means, if we want to know from DBSCAN to which cluster a new data point belongs, we would need to re-perform the clustering over the whole data set plus the new data point. But this solution would be very inefficient and time-consuming. One solution to this problem is to train a classification algorithm over the clusters detected by DBSCAN. The classification algorithm takes as an input the training data set and as the output the cluster found by DBSCAN. It will then decompose the parameter space into regions, each associated with a cluster. When a new data point is added, it will be assigned to a cluster depending on its location in the parameter space. In this study, we use the logistic regression classifier \citep{brzezinski_logistic_1999} implemented in scikit-learn as it is simple, fast and gives satisfactory results when applied to the clusters detected by DBSCAN in this system. More details about this classifier can be found in scikit-learn \citep{pedregosa_scikit-learn_2011}. \paragraph{Cluster combination} \label{paragraph:ClusterCombination} In the previous parts, we described two clustering methods: one to detect the main structures in the system, the K-means algorithm, and the other to detect the substructures containing few points, more isolated in the parameter space, DBSCAN. Both K-means and the association of DBSCAN with the logistic regression classifier will separate the parameter space into regions corresponding to detected clusters. Thus, each data point will be associated with two clusters: the one detected by K-means and the other detected by DBSCAN. In order to have a precise detection of the structures present in the system, we combine the results obtained from the two clustering methods. Finally, we obtain $N_{Kmeans} \cdot N_{DBSCAN}$ clusters, with $N_{Kmeans}$ and $N_{DBSCAN}$ corresponding to the number of clusters found by K-means and DBSCAN, respectively. \section{Results and discussions} \label{sec:Results} In this section, we present the results obtained by following the procedure described in the previous sections. First we analyse the data we get from the autoencoder: the relevant number of BAD order parameters necessary to detect the substructures in our system as well as which of the $\bold{\chi}(i)= \{\chi_l \}_{l \in \{0..8\}}(i) \in \mathbb{N}^9$ parameters are the most significant. We then apply the clustering methods on the extracted parameters, later analysing their performances. Finally, we examine the different substructures detected by this method, comparing the outcomes with other structure detection methods. \subsection{Autoencoder output analysis} \label{subsec:AutoencoderAnalysis} After compressing our Ni crystal until the first plastic event occurs, as described in Section \ref{subsec:system}, we record at a regular time step interval the atomic positions of the simulated nano-object.
We then measure the BAD parameters for each atom as detailed in Section \ref{subsec:BAD}, filtering out the surface atoms as explained in Section \ref{subsec:UnsupervisedLearning}. It turned out that computing the BAD parameters of a single time step is sufficient to train the autoencoder and to observe the appearance of different clusters. Here, the time step used to train our algorithm corresponds to an applied strain of $\Delta \epsilon=0.05\%$ after the first plastic event. Fig \ref{fig:bottleneck} shows the mean square error between the autoencoder input and output for different bottleneck dimensions. We can observe that the error reaches a plateau for a bottleneck dimension of $N=3$. We thus consider that the pertinent number of BAD parameters to study the structure in our system is three. \begin{figure}[h!] \centering \includegraphics[width=8cm]{Histo.png} \caption{Mean square error $E$ between the input and the output for different bottleneck dimensions. The error reaches a plateau for a bottleneck dimension of $N=3$ which corresponds to the relevant number of BAD parameters sufficient to reconstruct the input data.} \label{fig:bottleneck} \end{figure} Our next step is to determine which are the three most pertinent BAD parameters out of the nine we computed for each atom. To this end, we calculate the relative importance (RI) of each parameter by applying the perturbation method described in Section \ref{subsubsec:RelevantParameters}. The results are shown in Fig \ref{fig:RIindex}. From this figure, we can clearly observe the three most significant parameters: the triplet $\{ \chi_4,\chi_5,\chi_7\}$. From here on in this study, we will only focus on these three BAD parameters to study the substructures within the studied Ni nano-object. \begin{figure}[h!] \centering \includegraphics[width=8cm]{HistoRI.png} \caption{Relative importance index of the BAD parameters obtained from the input perturbation method. The index is calculated by perturbing each input parameter consecutively and measuring how much this perturbation affected the output. The parameters $\{ \chi_4,\chi_5,\chi_7\}$ are shown to be the most relevant ones to study the different structures in the system.} \label{fig:RIindex} \end{figure} \subsection{Clustering methods combination} \label{subsec:CompareClustering} We determined that the most pertinent BAD parameters to study the emerging substructures in our numerical sample are $\{ \chi_4,\chi_5,\chi_7\}$. Here, we apply two clustering algorithms on these parameters, K-means and DBSCAN, and then we combine the obtained clusters. The dataset used for the clustering processes is the same as the one we used to extract the BAD parameters with the autoencoder (we also verified that using different atomistic configurations extracted after the onset of plasticity gives similar results). On Fig \ref{fig:histograms} (a), we show a scatter plot representing for each atom its position in the $\{ \chi_4,\chi_5,\chi_7\}$ space, as well as the result of the K-means clustering. Note that as the BAD parameters are integers, one position in the $\{ \chi_4,\chi_5,\chi_7\}$ space can correspond to many atoms. Each point represented here corresponds to at least one atom. On this figure, we can see that the algorithm detected two clusters: one in dark blue and the other in yellow. \begin{figure}[h!]
\centering \includegraphics[width=13.5cm]{CombineCompare6.jpg} \caption{The upper part of the figure (a) represents a scatter plot of the BAD parameters projected on the most relevant parameters: $\{ \chi_4,\chi_5,\chi_7\}$. The colors correspond to the substructures detected with K-means clustering. The lower part shows histograms of the distribution of $\chi_4$ (b), $\chi_5$ (c), $\chi_7$ (d). On the $\chi_4$ histogram, we observe that most of the atoms are concentrated on two values, while on the $\chi_5$ and $\chi_7$ histograms, most atoms are concentrated on a single value. Thus, two combinations of $\{ \chi_4,\chi_5,\chi_7\}$ contain most of the atoms and correspond in practice to the FCC and the stacking fault (HCP) structures. } \label{fig:histograms} \end{figure} As detailed in Section \ref{paragraph:Kmeans}, the aim of using the K-means clustering is to detect the main crystalline structures, located close to each other in the BAD parameter space \citep{ackland_applications_2006}. While Fig \ref{fig:scatterBAD} (a) clearly shows three blocks, the two detected clusters seem to correspond to the two halves of the central block. To understand what was detected by the K-means algorithm, we look in Fig \ref{fig:histograms} at the distribution of $\chi_4$ (b), $\chi_5$ (c) and $\chi_7$ (d). From this figure, we can see that the large majority of the atoms are concentrated on two close positions in the $\{ \chi_4,\chi_5,\chi_7\}$ space: $\{21,13,24\}$ and $\{24,12,24\}$. In \citep{ackland_applications_2006}, these two configurations were defined as corresponding to HCP and FCC, respectively. As HCP corresponds to the stacking fault structure in an FCC crystal \citep{hull_chapter_2011}, we can expect that most of the atoms will be in these two structures. From what we can see, K-means seems to have only succeeded in separating these two configurations located within the central block. This will be confirmed in the next part focusing on analysing the structures detected by the clustering algorithms. While the K-means algorithm successfully separated the two locations in the $\{ \chi_4,\chi_5,\chi_7\}$ space, we can see on Fig \ref{fig:histograms} (a) that it totally fails in identifying two clusters, corresponding to different substructures, located further away from the central block. To detect them, we use another clustering method: DBSCAN combined with the logistic regression classifier detailed in \ref{paragraph:DBScan} and \ref{paragraph:LogisticRegression}, respectively. We show its results on Fig \ref{fig:scatterBAD} (b). From this figure, we can see that DBSCAN successfully managed to separate into three clusters the three blocks observed in the scatter plot. After measuring the proportion of data points contained in each cluster, it appears that the central cluster (in purple) contains $99.8\%$ of the points, while the two remaining clusters contain $0.1\%$ each. This confirms that most of the atoms are located in the main structures, corresponding to the central block, making it difficult for the K-means clustering to detect the two smaller blocks. \begin{figure}[h!] \centering \includegraphics[width=13.7cm]{ComposeFig6v4.jpg} \caption{Scatter plots of the BAD parameters projected on the most relevant parameters: $\{ \chi_4,\chi_5,\chi_7\}$.
The colors correspond to the clustering methods used to detect different structures with: (a) K-means clustering, (b) DBSCAN clustering combined with logistic regression classification, and (c) a combination of the clusters obtained through (a) and (b). Each point corresponds to at least one atom; the distribution for each parameter can be seen on Fig \ref{fig:histograms}. In total, six different structures were detected through this combination of two clustering methods: FCC (yellow), HCP in the stacking fault (blue), the upper and lower parts of the dislocation lines (green and red), the interaction between a dislocation line and a stacking fault (sky blue) and the interaction between two dislocation lines (orange).} \label{fig:scatterBAD} \end{figure} Finally, to detect the main crystalline structures while being able to discern other structures containing fewer atoms in the system, we combine the results obtained by the two clustering methods (K-means shown on Fig \ref{fig:scatterBAD} (a), DBSCAN shown on Fig \ref{fig:scatterBAD} (b)) as highlighted on Fig \ref{fig:scatterBAD} (c). From this figure, we can see that we managed to detect a total of six different clusters: the dark blue and yellow ones from the central block, containing $99.8\%$ of the atoms and corresponding to the main crystalline structures; the red and green clusters, corresponding to the two other aggregations of points clearly visible in the scatter plot of Fig \ref{fig:scatterBAD} (c); and finally the sky blue and the orange clusters, which only correspond to a few atoms and are detected only in a few time steps. \subsection{Analysis of detected structures} \label{subsec:DBScanclustering} In the previous section, by combining two clustering methods, we managed to extract six different clusters from our dataset by a projection in the $\{ \chi_4,\chi_5,\chi_7\}$ space. We now focus on analyzing the structures associated with each cluster. To this end, we show on Fig \ref{fig:snapshots} (a) a snapshot made with OVITO \citep{stukowski_visualization_2009} of the plastically deformed Ni nano-cube, with an applied strain of $\Delta \epsilon=0.02\%$ beyond the yield point. On this snapshot, the atoms are colored according to their clusters. For instance, the atoms contained in the yellow cluster on Fig \ref{fig:scatterBAD} (c) will be colored in yellow on Fig \ref{fig:snapshots}. On Fig \ref{fig:snapshots} (a), we can see that most of the atoms are contained in the yellow cluster, which we can then associate with the FCC structure as our system is a Ni FCC nano-crystal. \begin{figure}[h!] \centering \includegraphics[width=13cm]{ComposeFig4.jpg} \caption{Snapshot of the system. The atoms are colored depending on their cluster as shown in Fig \ref{fig:scatterBAD} c). On panel (a), we observe that most of the atoms belong to the yellow cluster which we associate with the FCC structure. On (b), we show the same snapshot after filtering out the FCC structure. We can observe that the combination of the two clustering methods is capable of separating the atoms linked with dislocation lines (green and red atoms) and those correlated with stacking faults (dark blue atoms).} \label{fig:snapshots} \end{figure} After filtering out the atoms in the FCC structure, we can now see on Fig \ref{fig:snapshots} (b) atoms in three different structures. Fig \ref{fig:snapshots} (c) corresponds to a zoom within Fig \ref{fig:snapshots} (b) to highlight more clearly the detected substructures.
From this figure, the atoms colored in green and red follow what looks like a dislocation line, while the atoms in dark blue seem to correspond to the stacking fault created after the glide of a partial dislocation. By using the algorithm DXA \citep{stukowski_automated_2012} implemented in OVITO \citep{stukowski_visualization_2009}, we could confirm our prediction: the green and the red atoms are surrounding a partial dislocation line. The relative position of the green and red atoms depends on the Burgers vector as well as on the direction of the dislocation line (see the appendix \ref{sec:Annexe:StructureAndDislocationLines} for more details). From this, we can deduce that the dark blue atoms correspond to HCP stacking fault structures generated by the gliding of partial dislocations (as mentioned above, in FCC crystals stacking faults have an HCP structure). The two remaining clusters detected in Fig \ref{fig:scatterBAD} c), the orange and the sky blue one, are very rarely observed during the time steps recorded during the deformation. The orange structure is observed when two dislocation lines are located close to each other while the sky blue one is detected during the interaction between a dislocation line and a stacking fault, as shown later. On Fig \ref{fig:snapshotsCompare}, we compare our structure detection method, shown in panel (a), with three other existing methods: (i) hand-chosen criteria using the BAD parameters from \citep{ackland_applications_2006} (Fig \ref{fig:snapshotsCompare} (b)), (ii) the CNA method \citep{honeycutt_molecular_1987,stukowski_structure_2012} implemented in OVITO, which is one of the most commonly used methods for structure detection (Fig \ref{fig:snapshotsCompare} (c)) and which consists of using hand-chosen criteria on a radial-distribution-based local order parameter, and (iii) the autoencoder approach using the BOO parameters from \citep{boattini_unsupervised_2019} which served as an inspiration for this paper (Fig \ref{fig:snapshotsCompare} (d)). For this comparison, we filtered out the surface atoms beforehand following the method explained in section \ref{subsec:UnsupervisedLearning}. On Fig \ref{fig:snapshotsCompare}, we only focus on one dislocation line and the associated stacking fault. From this figure, we can see that our method is able to capture in much more detail the different structures present within a plastically deformed FCC crystal. To further emphasize the efficiency of our method, on Fig \ref{fig:Annexe:SnapshotsCompare2} we show a snapshot of the system with two dislocation lines about to interact and on Fig \ref{fig:Annexe:SnapshotsCompare3} we show the interaction between a dislocation line and a stacking fault. \begin{figure}[h!] \centering \includegraphics[width=13.5cm]{CombineCompare5.jpg} \caption{Snapshot of the system. The atoms are colored depending on their surrounding structure obtained from (a) our combination of BAD and autoencoder, (b) hand-chosen criteria applied on BAD from \citep{ackland_applications_2006}, (c) the CNA method \citep{honeycutt_molecular_1987} from OVITO \citep{stukowski_visualization_2009}, and (d) the combination of BOO and autoencoder as detailed in \citep{boattini_unsupervised_2019}.
Note that for the four structure detection methods shown here, we removed the surface atoms beforehand.} \label{fig:snapshotsCompare} \end{figure} Out of the three methods used for comparison, two were mostly designed to detect the main crystalline structures (FCC, HCP or BCC): the hand-chosen criteria using the BAD parameters and CNA. We can still remark that CNA, which also relies on hand-chosen criteria, is able to capture the location of the dislocation line but with less accuracy than our method: the number of atoms close to the line defect is much larger than in our method. Also, this method is not able to capture sub-structures associated with plasticity. The atoms outside of the main crystalline structures, such as those in the line defect, are labelled as "other" (in red in Fig \ref{fig:snapshotsCompare} (c)). The hand-chosen method using the BAD also categorizes atoms from time to time in the category "other", like in Fig \ref{fig:Annexe:SnapshotsCompare2} (b), but it is not able to capture the line defect. \begin{figure}[h!] \centering \includegraphics[width=13cm]{CombineCompare2.jpg} \caption{Snapshot of two dislocations about to interact. These snapshots were obtained after filtering out the FCC structure. The atoms are colored depending on their surrounding structure obtained from (a) the combination of BAD and autoencoder, (b) hand-chosen criteria applied on BAD from \citep{ackland_applications_2006}, (c) the CNA method \citep{honeycutt_molecular_1987} from OVITO \citep{stukowski_visualization_2009}, and (d) the combination of BOO and autoencoder as detailed in \citep{boattini_unsupervised_2019}. We can remark on (a) two atoms belonging to the orange cluster from Fig \ref{fig:scatterBAD} (c) within the dislocation lines. Atoms from this cluster are observed when two dislocations are close to each other. We can also remark on (b) one atom in red corresponding to the unassigned category of the hand-chosen criteria applied to BAD from \citep{ackland_applications_2006}.} \label{fig:Annexe:SnapshotsCompare2} \end{figure} The last method, the autoencoder applied to the BOO parameters, is also designed to automatically detect the different structures present within the system. However, when applying the method to our system, only two structures were detected: one associated with FCC and the other associated with HCP (the stacking fault). Also, only two structures were visible in the lower-dimensional subspace over which the clustering was applied when following the method (see appendix \ref{sec:Annexe:ClusteringBOOP}). We thus conclude that the BOO parameters are not the most pertinent parameters to detect the structures within a plastically deformed crystal. \begin{figure}[h!] \centering \includegraphics[width=13cm]{CombineCompare3.jpg} \caption{Snapshot of the system focusing on the interaction between two dislocations, one of them crossing the stacking fault of the other. These snapshots were obtained after filtering out the FCC structure. The atoms are colored depending on their surrounding structure obtained from (a) the combination of BAD and autoencoder, (b) hand-chosen criteria applied on BAD from \citep{ackland_applications_2006}, (c) the CNA method \citep{honeycutt_molecular_1987} from OVITO \citep{stukowski_visualization_2009}, and (d) the combination of BOO and autoencoder as detailed in \citep{boattini_unsupervised_2019}. We can observe on (a) atoms belonging to the sky blue cluster from Fig \ref{fig:scatterBAD} (c).
Atoms from this cluster are observed after the interaction between a dislocation line and a stacking fault. } \label{fig:Annexe:SnapshotsCompare3} \end{figure} Overall, these comparisons show that our method is able to detect more structures than the other presented approaches. It is also able to capture the contours of the dislocation lines with a better precision than CNA, as we can see on Fig \ref{fig:Annexe:SnapshotsCompare3}. \section{Conclusion} \label{sec:Conclusion} In conclusion, we improved a method initially developed to automatically detect structures in colloidal materials. Through this method, we were able to finely detect the different structures present in a Ni FCC crystal under plastic deformation. This algorithm uses the BAD parameters to describe the local environment of the atoms. Thanks to an autoencoder associated with a perturbation method, the most relevant BAD parameters were extracted. Then, by applying to these selected parameters two clustering methods, K-means and DBSCAN, as well as the logistic regression classifier, different local structures have been successfully extracted. This procedure was able to detect the local structures present within the dislocation lines with a higher degree of precision compared to the commonly used CNA method. Our procedure is also more efficient than a method relying on hand-chosen criteria applied to BAD parameters or than one combining an autoencoder with the BOO parameters. Overall, this study shows that unsupervised learning applied to the BAD parameters is a promising approach to obtain more precise structure detection within crystalline materials. As the method detailed in this paper is based on parameters using hand-chosen ranges in their definition, it is not directly applicable to other crystalline systems, especially alloys. While modified BAD parameters adapted for specific systems exist \citep{amodeo_atomistic_2014}, developing a method to automatically find the optimal ranges defining the BAD parameters for any material would allow achieving a universal method for structural detection in crystalline materials. \newpage
\section{Introduction}\label{s-intro} In this talk two different but related subjects are considered:\\ {\it I. A mechanism of massive PBH formation, which is much different from the canonical ones.} According to this mechanism the foundation of PBH creation is built during inflation by making large isocurvature fluctuations at relatively small scales, with practically vanishing density perturbations. It is achieved by a simple modification of a popular scenario of baryogenesis. Density perturbations are generated rather late, after the QCD phase transition. The emerging universe looks like a piece of Swiss cheese, where (the black) holes are high baryonic density objects occupying a minor fraction of the universe volume.\\ {\it II. A brief review of new (and not only new) astronomical data which are in strong tension with the accepted standard cosmological model.} The data nicely fit the suggested scenario of PBH formation. More and more observational evidence in favor of massive and supermassive PBHs, and of other phenomena not yet explained in the standard way, appears practically every week. The usual or astrophysical black holes are the results of stellar evolution after a star exhausted its nuclear fuel and collapsed into a compact object, either into a neutron star or into a black hole. This collapse into a black hole happens if the mass of the star is, roughly speaking, larger than three solar masses. Such black holes can be created in the sufficiently old or even contemporary universe. There exist also the astrophysical supermassive black holes (SMBH) with huge masses, $M \sim(10^6 - 10^{9}) M_\odot$, where $M_\odot \approx 2 \times 10^{33} $ g is the solar mass. They are supposed to be the products of matter accretion to smaller BHs or matter accretion to a matter excess in galactic centers. Primordial black holes (PBH) formed in the very early universe if the density excess at the cosmological horizon was large, $\delta \rho /\rho \gtrsim 1$, as it was first suggested by Zeldovich and Novikov~\cite{ZN}. Normally the masses of PBHs created through this mechanism were supposed to be rather low and the mass spectrum was quite sharp, close to the delta-function. An alternative mechanism of formation of very massive PBHs with a wide-spread mass spectrum was proposed in ref.~\cite{ADJS}, see also the subsequent paper~\cite{DKK}, where the model was further elaborated. The heretic declaration of 1993 is now turning into the acknowledged faith, with more and more astronomical data proving its truth. The conclusion that the observed mysterious phenomena are induced by {\it massive primordial} black holes allows one to cure an avalanche of inconsistencies with the standard ${\Lambda}$CDM cosmology and astrophysics. In its most extreme form the scenario suggested in 1993 gives rise to:\\ $\bullet$ Cosmological Dark Matter fully (or predominantly) made of PBHs.\\ $\bullet$ Primordial creation of the majority of black holes in the universe.\\ $\bullet$ Very early QSO formation.\\ $\bullet$ Early creation of metals and dust.\\ $\bullet$ Seeding of large galaxies by supermassive PBHs.\\ $\bullet$ Seeding of globular clusters by $10^3 - 10^4\, M_\odot$ PBHs and dwarf galaxies by $10^4 - 10^5\, M_\odot$ PBHs~\cite{AD-KP}.\\ $\bullet$ Clouds of matter with high baryon-to-photon ratio.\\ $\bullet$ A possible by-product: plenty of (compact) anti-stars, even in the Galaxy. Due to lack of space many references are omitted here. They can be found in the reviews~\cite{monsters,AD-UFN},
together with a more detailed discussion. \section{Brief review of old and recent observations and discoveries \label{s-rev-obs}} \subsection{Data from the $z\sim 10$ universe \label{ss-z-hi}} The observations of the last several years indicate that the young universe at $z \sim 10$ is grossly overpopulated with an unexpectedly high amount of: \\ 1. Bright QSOs, alias supermassive BHs, up to $M \sim 10^{10} M_\odot$,\\ 2. Superluminous young galaxies,\\ 3. Supernovae, gamma-bursters,\\ 4. Dust and heavy elements.\\ These facts are in good agreement with the predictions listed above, but in conflict with the expectations of the Standard Cosmological Model (SCM) at least by an order of magnitude. {\it 1. Supermassive BH, or QSO.}\\ About 40 quasars with $z> 6$ are known, with BHs of $10^9 M_\odot$ and $L \sim 10^{13-14} L_\odot$, where $L_\odot \approx 4\times 10^{33} $ erg/sec is the solar luminosity. Such black holes, created when the Universe was less than one billion years old, present substantial challenges to theories of the formation and growth of black holes and the coevolution of black holes and galaxies. Non-standard accretion physics and the formation of massive seeds seem to be necessary. Neither of them is observed in the present-day universe. Two years ago another huge QSO was discovered~\cite{QSO-10}. There is already a serious problem with the formation of lighter and less luminous quasars, which is deepened many-fold by this newly found ``baby''. The new one with $M \approx 10^{10} M_\odot$ makes the formation absolutely impossible in the standard approach. In ref.~\cite{accr-rate} the rate of accretion to the central black hole with $M = 10^5 M_\odot$ in the galaxy with the mass $3\times 10^{10} M_\odot$ at the redshift $z = 7.5$ was estimated as a few times $10^{-6} M_\odot $ per year. So the total accreted mass was about $2000 M_\odot$ during the universe life time, which at this $z$ was 320 Myr. This is by far too little to make any of the observed billion-solar-mass black holes (quasars). Several more puzzling examples appeared already after the conference. Among them is a black hole with the mass $0.8\cdot 10^9 M_\odot$ in the {\it neutral} universe at $z=7.5$~\cite{neutral}. Neutrality of the surrounding medium means that accretion is absent or very weak, so this BH is surely primordial. Another recently discovered faint QSO, at $z=6$ with $M=10^9 M_\odot$, needs for its formation either super-Eddington accretion or a $10^5 M_\odot$ seed \cite{super-accr}. {\it 2. Galaxies observed at $z \sim 10$:}\\ There are quite a few striking examples: a galaxy at $z \approx 9.6$, which was created when the universe was younger than $t_U < 0.5$ Gyr; a galaxy at $z \approx 11$, born at $t_U \sim 0.4$ Gyr, three times more luminous in UV than other galaxies at $z = 6-8$. Recently many others have been discovered, which were also created unexpectedly early. The discovery of a not so young but extremely luminous galaxy with the luminosity $L= 3\cdot 10^{14} L_\odot$ and the age $t_U \sim 1.3$ Gyr was reported in paper~\cite{gal-hi-L}. Quoting the authors: ``The galactic seeds, or embryonic black holes, might be bigger than thought possible. Or another way to grow this big is to have gone on a sustained binge, consuming food faster than typically thought possible''. An essential feature of the seed is that it must have a low spin.
According to paper~\cite{gal-dens} the density of galaxies at $z \approx 11$ is $10^{-6}$ Mpc$^{-3}$, an order of magnitude higher than estimated from the data at lower $z$. The origin of these galaxies is unclear. {\it 3. Dust, supernovae, gamma-bursters...} Abundant dust, much denser than expected, is observed in several early galaxies, e.g. in HFLS3 at $z=6.34$ and in A1689-zD1 at $z = 7.55$. The catalogue of the observed dusty sources indicates that their number is an order of magnitude larger than predicted by the canonical theory~\cite{dust-ctlg}. Hence, prior to or simultaneously with the QSO formation a rapid star formation should take place. These stars should evolve to a large number of supernovae, enriching the interstellar space with metals through their explosions, which later make molecules and dust. (We all are dust from SN explosions, but probably from a much later time.) Another possibility for the evolved chemistry at high redshift is a non-standard big bang nucleosynthesis in the bubbles with very high baryonic density, B, which allows for the formation of heavy elements beyond lithium. Such high B bubbles could be created in the framework of the mechanism of refs.~\cite{ADJS,DKK}. Observations of high redshift gamma ray bursters (GRB) also indicate a large abundance of supernovae at redshifts close to ten. The highest redshift of the observed GRBs is 9.4 and there are a few more GRBs with smaller but still high redshifts.\\ The necessary star formation rate for the explanation of these early GRBs is at odds with the canonical star formation theory. \subsection{Problems of the contemporary universe \label{ss-contempi}} 1. In the center of every large galaxy a supermassive black hole (SMBH) is observed, with a mass close to a billion solar masses in elliptical and lenticular galaxies and a few million solar masses in spiral galaxies, like our Milky Way. The common faith is that these SMBHs are created by matter accretion on the density excess in the galactic center (either a small BH, or a density cusp). However, the usual accretion is not effective enough to make such huge BHs even during the whole universe age, $t_U \sim 15 $ Gyr. The inverted picture looks more attractive, in which a supermassive seed (PBH) was formed first and a galaxy was later collected by accretion of the usual matter surrounding it. It is tempting, but rather far-fetched, to conclude that the type of the galaxy is determined by the properties of the seed.\\ \noindent 2. There are strikingly surprising data indicating that SMBHs are observed not only in large galaxies but also in very small ones and even in almost {\bf empty} space, e.g. a black hole with $M\sim 10^9 M_\odot$~\cite{empty-BH}. Seemingly these poor guys happened to be in the lower density regions of the usual matter and were unable to attract enough matter to build their own galaxies.\\ \noindent 3. A new, more precise nuclear chronology with several nuclear chronometers based on long-lived isotopes revealed unexpectedly old stars in the Galaxy. Many of them have ages larger than the age of the Galaxy, and one is even ``older than the universe''~\cite{H-Bond} with the age $14.46 \pm 0.31$ Gyr.
The central age value exceeds the universe age by two standard deviations if $H= 67.3$ km/sec/Mpc, as determined from the angular fluctuations of the cosmic microwave background radiation, and hence $t_U =13.8$ Gyr, but if $H= 74$ km/sec/Mpc, as determined by the canonical astronomical methods, and so $t_U = 12.5$ Gyr, the excess is by 9 standard deviations. It is noteworthy that this stellar Methuselah has a large velocity, which may indicate its pregalactic origin. In the model of refs.~\cite{ADJS,DKK} the long age found from the nuclear chronology may be mimicked by the unusual initial chemical content of the star, enriched by heavy elements.\\ \noindent 4. MACHOs are low-luminosity (in fact invisible) half-solar-mass objects discovered by gravitational microlensing; for a brief review see ref.~\cite{BDK}. The origin of these objects is mysterious. In principle they could be brown dwarfs or dead stars, but their density is too high to be created by the normal stellar evolution. The only remaining possibility seems to be primordial massive black holes. \\ \noindent 5. The mass spectrum of the black holes in the Galaxy shows an unexpected maximum at $(7.8 \pm 1.2) M_\odot$~\cite{M-BH-narrow}. This result agrees with other observations~\cite{kreidberg} demonstrating a maximum at $M\sim 8M_\odot$, with a sharp decrease above $10M_\odot$ and a practical absence of BHs below $5M_\odot$. Such behavior is unexplainable if BHs are formed through stellar collapse, as is usually assumed, but nicely fits the hypothesis of abundant primordial BHs in the Galaxy.\\ \noindent 6. During the last year several gravitational wave (GW) events were registered at the LIGO and Virgo interferometers, which originated from the coalescence of massive black hole binaries. The analysis of these events demonstrates beautiful agreement with General Relativity, in particular, for strong gravitational fields near the horizon of the Schwarzschild black hole. However, the origin and properties of the GW sources remain mysterious. In the majority of events the masses of the progenitor black holes are about $(20-30) M_\odot$, their spins are compatible with zero, and the spin of the baby BH created as a result of the coalescence is high, as is normally expected. The following three problems of the standard theory emerged together with the GW discovery:\\ a) What is the origin of such heavy BHs, with $\sim 30 M_\odot$?\\ b) How can the low spin values of the progenitors be explained?\\ c) How can a binary system of BHs come from, presumably, a stellar binary?\\ These problems are addressed in refs.~\cite{BDPP,talks-KAP,talk-AD}. They are all neatly solved if the initial black holes are primordial.\\ \noindent 7. Intermediate mass BHs: $M \sim 10^3 M_\odot$ in globular clusters and $M\sim 10^{4-5} M_\odot$ in dwarf galaxies. Though the observation of intermediate mass BHs in globular clusters (GC) is not an easy task, there are signatures that indeed IMBHs with masses up to a few $10^4 M_\odot$ can be present in GC cores~\cite{gc-bh-1,gc-bh-2}. The impact of such BHs on the GC formation was studied in our paper~\cite{AD-KP}, where it was argued that PBHs with masses of about a thousand solar masses may successfully describe GC formation in the observed amount. We also mentioned that dwarf galaxies may be seeded by somewhat heavier PBHs. A close point of view was also expressed in ref.~\cite{silk-dwarf}.
Recently an avalanche of black holes with masses in the range $(10^4 - 10^5) M_\odot$ appeared in the sky; almost a thousand of them have been discovered~\cite{imbh-1,imbh-2,imbh-3}. It is difficult to find any explanation except for them being primordial. \\ \noindent 8. There are accumulating data indicating that SMBHs in close vicinity to each other are not as rare as naively expected: \\ a) Four binaries of SMBHs have been observed~\cite{bh-bin-1,bh-bin-2, bh-bin-3,bh-bin-4}.\\ b) Recently an observation of four quasars in a narrow range in the sky (a quasar quartet) was reported~\cite{quartet}. According to the authors the probability of formation of such a system by the conventional mechanism is $10^{-7}$. \\ c) In ref.~\cite{triple} a kiloparsec-scale {\it triple} supermassive black hole system at $z=0.256$ was discovered.\\ The probability of an accidental formation of several supermassive black holes in close vicinity by the usual astrophysical processes is quite low, seemingly much lower than the accidental formation of close primordial black holes. \\ This list does not exhaust all mysteries in the sky. There are plenty of other data leading to the same conclusion of an abundant population of the universe with massive primordial black holes. New data in favor of it appear practically every week, and with new, much more sensitive astronomical instruments the flux of new discoveries will increase many-fold. It will allow one either to finally prove or to disprove the picture discussed here. \section{The mechanism of massive PBH creation} All these problems are uniquely and simply solved by the creation in the early universe of massive PBHs and compact stellar-like objects. Such a mechanism was put forward a quarter of a century ago~\cite{ADJS}. In contrast to the previously considered mechanisms of PBH formation, the suggested mechanism leads to a rather wide-spread mass spectrum of PBHs. The spectrum is practically model independent and has the simple log-normal form: \begin{eqnarray} \frac{dN}{dM} = \mu^2 \exp\left[-\gamma \ln^2 (M/M_0)\right], \label{dn-dM} \end{eqnarray} with only 3 constant parameters: $\mu$, $\gamma$, $M_0$. During recent years such a mass distribution of PBHs was rediscovered in several works and became quite popular. The mechanism of ref.~\cite{ADJS} (see also the development in ref.~\cite{DKK}) is based on a slightly modified Affleck-Dine (AD) \cite{Aff-Dine} scenario of baryogenesis. In the original version the scenario was based on supersymmetry, but the full supersymmetry (SUSY) is not necessary; we need only a few simple and natural features of it. SUSY predicts the existence of scalar fields $\chi$ with nonzero baryonic number, $B_\chi \neq 0$. A generic feature of the SUSY models is the existence of the so-called flat directions in the potential $U_\chi \equiv U(\chi, \chi^*)$, i.e. the directions in the field space along which the potential does not rise. The field $\chi$ can condense along such flat directions and acquire quite a large value. This indeed happened during inflation due to the rising quantum fluctuations of light scalars with masses smaller than the Hubble parameter, $H_I$. After the end of inflation this large value of $\chi$ is transformed into a large baryonic number of other particles, say, quarks.
We consider the toy model with the potential: \begin{eqnarray} U(\chi,\chi^*) = \lambda |\chi|^4 + \frac{\lambda}{2} \left( \chi^4 + \chi^{*4}\right) + |m|^2 |\chi|^2 + \frac{1}{2}\left( m^2 \chi^2 + m^{*\,2}\chi^{*\,2}\right) = \nonumber \\ \lambda |\chi|^4 \left( 1+ \cos 4\theta \right) + |m|^2 |\chi|^2 \left[ 1+ \cos (2\theta+2\alpha)\right], \label{U-of-chi} \end{eqnarray} where $\chi = |\chi| \exp (i\theta)$ and $m=|m|e^{i\alpha}$. If $\alpha \neq 0$, C and CP are broken. In supersymmetric Grand Unified Theories (SUSY GUT) the baryonic number of $\chi$ is naturally non-conserved. It is reflected in expression~(\ref{U-of-chi}) as the absence of invariance with respect to the phase transformation of $\chi$. Initially (after inflation) the homogeneous field $\chi$ is away from the origin and, when inflation is over, it starts to evolve down to the equilibrium point, $\chi =0$, according to the equation of Newtonian mechanics: \begin{eqnarray} \ddot \chi +3H\dot \chi +U' (\chi) = 0. \end{eqnarray} The baryonic number of $\chi$ is the mechanical angular momentum in the two-dimensional complex plane of $[\chi,\chi^*]$: \begin{eqnarray} B_\chi =\dot\theta |\chi|^2 . \end{eqnarray} The decay of $\chi$ transferred its baryonic number to that of quarks in a B-conserving process. Due to the possibly large initial amplitude of $\chi$, the AD baryogenesis could lead to a cosmological baryon asymmetry of order unity, much larger than the observed value $\beta = n_B/n_\gamma \approx 6\times 10^{-10}$. If $m\neq 0$, the angular momentum or, what is the same, B, is generated because of the different directions of the quartic and quadratic valleys at low $\chi$; see more below. In the papers~\cite{ADJS,DKK} the potential of $\chi$ was modified by the addition of the general renormalizable coupling to the inflaton field $\Phi$, the first term in the equation below: \begin{eqnarray} U = g|\chi|^2 (\Phi -\Phi_1)^2 + \lambda |\chi|^4 \,\ln \left[ \frac{|\chi|^2 }{\sigma^2 }\right] +\lambda_1 \left(\chi^4 + h.c.\right) + (m^2 \chi^2 + h.c.). \end{eqnarray} It is assumed that the inflaton field $\Phi$ takes the value $\Phi_1$ sufficiently long before the end of inflation such that about 30-40 e-foldings still remain. The coupling to the inflaton ensures that the gates to the flat directions are open only for a short period of time, during which $\Phi$ is sufficiently close to $\Phi_1$. So we expect the following picture. If the window to the flat direction, when $\Phi \approx \Phi_1$, is open only for a sufficiently short period, then with a small probability bubbles with a high baryonic density could be formed. They would occupy a minor fraction of the universe volume, while the rest of the universe would have the normal small value of the baryon asymmetry, $\beta \approx 6\cdot 10^{-10}$, created by small $\chi$, which was not able to penetrate through the gate to a large distance from the origin. These bubbles could be called High B Bubbles or HBBs, not to be confused with the nickname of the American coffee, Hot Brown Beverage. Though cosmologically small, HBBs can be astronomically large, having masses of the order of the solar mass or even much higher. As already mentioned, the universe would look like Swiss cheese, where the holes are the HBBs, which may have a much larger total mass than the mass of the rest of matter in the universe. This process of HBB formation can be called a phase transition of the 3/2 order.
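As a qualitative illustration of this charge generation, the evolution equation above can be integrated numerically for the toy potential~(\ref{U-of-chi}). The short Python sketch below is only a toy illustration with arbitrary parameter values and an assumed radiation-dominated Hubble rate $H=1/(2t)$; it is not the computation of refs.~\cite{ADJS,DKK}.
\begin{verbatim}
# Toy integration of  chi'' + 3 H chi' + dU/dchi* = 0  for the potential
# U = lam |chi|^4 (1 + cos 4 theta) + |m|^2 |chi|^2 (1 + cos(2 theta + 2 alpha)).
# Arbitrary parameter values; H = 1/(2t) mimics a radiation-dominated expansion.
import numpy as np

lam, m_abs, alpha = 0.1, 1.0, 0.3          # alpha != 0 breaks C and CP
m2 = (m_abs * np.exp(1j * alpha)) ** 2     # m^2 = |m|^2 e^{2 i alpha}

def dU_dchistar(chi):
    return (2 * lam * abs(chi) ** 2 * chi + 2 * lam * np.conj(chi) ** 3
            + m_abs ** 2 * chi + np.conj(m2) * np.conj(chi))

def rhs(t, y):                             # y = (chi, dchi/dt), both complex
    chi, chid = y
    H = 1.0 / (2.0 * t)
    return np.array([chid, -3.0 * H * chid - dU_dchistar(chi)])

# Fixed-step fourth-order Runge-Kutta integration.
y = np.array([5.0 * np.exp(1j * 0.7), 0.0 + 0.0j])   # displaced field, at rest
t, dt = 1.0, 1e-3
for _ in range(200000):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt * k1 / 2)
    k3 = rhs(t + dt / 2, y + dt * k2 / 2)
    k4 = rhs(t + dt, y + dt * k3)
    y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

chi, chid = y
B = np.imag(np.conj(chi) * chid)           # B_chi = theta_dot |chi|^2
print("baryonic charge density of chi at t = %.0f : %.3e" % (t, B))
\end{verbatim}
Scanning the initial phase and $\alpha$ in such a toy setting shows that both signs of the generated charge can be obtained, in line with the discussion of baryonic and antibaryonic bubbles below.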
At the high temperature cosmological epoch, when the electroweak (EW) symmetry was unbroken, quarks were massless and a large contrast in the value of the baryonic number density between the HBB and the bulk of the universe did not lead to noticeable density perturbations. These are the so-called isocurvature perturbations. Even after the EW symmetry breaking the density contrast remained small, and only after the QCD phase transition at the temperature $T = 100 - 200 $ MeV, when quarks formed heavy baryons, would a high density contrast between the HBBs and the bulk appear. As a result, either primordial black holes or compact stellar-like objects would be created. It is interesting that the sign of $\beta$ may be either negative or positive. This means that both baryonic and antibaryonic bubbles can be formed. As a result, PBHs and anti-PBHs could be created, but they are indistinguishable (if there are no long-range forces associated with the baryon number). However, compact stars and anti-stars might be abundantly produced and could be present in large numbers even in the Galaxy. The relative density of these anti-objects is model dependent and can vary from zero to almost 50\%. Cosmologically large matter and antimatter domains may exist, but not necessarily in equal amounts, so most probably the global baryonic number of the universe is non-zero. The analysis of the observational bounds on the compact objects in the Galaxy was performed in the papers~\cite{BDK,DB,BD}. It is a challenging problem to find anti-stars in the Galaxy. \section{Conclusion \label{s-concl}} The described mechanism~\cite{ADJS,DKK} nicely fits the basic trend of the surprising astronomical data reviewed in Sec.~\ref{s-rev-obs}. In a sense, this picture of the universe was predicted a quarter of a century ago. There are still many problems demanding deeper research:\\ $\bullet $ Quantitative study of the galaxy formation seeded by massive PBHs. In particular, it would be interesting to find how the mass of the seed influences the emerging galaxy type. Such an analysis is worth doing for large galaxies, dwarfs, and globular clusters. \\ $\bullet $ Formation of galaxies in low density regions of the universe. Such a study could explain the existence of small galaxies with superheavy black holes inside, or even the existence of superheavy BHs in almost empty space.\\ $\bullet $ The problem of high velocity stars in galaxies. What is the source of their acceleration? Are they primordial stars, or are they accelerated by the abundant population of PBHs with intermediate masses, $(10^3-10^5) M_\odot$?\\ $\bullet $ A possible outcome of the mechanism of massive PBH creation described here is an accompanying creation of antimatter objects, which may be abundant in the Galaxy. Their discovery would prove the validity of the mechanism. \section*{Acknowledgments} This work was supported by the RNF Grant N 16-12-10037. I thank Harald Fritzsch for the invitation to the Conference and also Kok Khoo Phua for the hospitality at NTU, Singapore.
\section{Introduction} The reaction $K^-p\rightarrow \Sigma^0\pi^0$ is of particular interest in the study of baryon resonances and $\bar{K}N$ interaction since there are no isospin-1 baryons contributing here and it gives us a rather clean channel to study the $\Lambda$ resonances, such as $\Lambda(1405)S_{01}$, $\Lambda(1670) S_{01}$, $\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$. In the literatures, many experimental \cite{Manweiler:2008zz,armen:1970zh,London:1975av,Baxter:1974zs, Mast:1974sx,Berley:1996zh,Bangerter:1980px,Starostin:2001zz,Prakhov:2004an, Zychor:2007gf,Niiyama:2008rt,Zychor:2008ct} and theoretical efforts \cite{Borasoy:2006sr,Dalitz:1959dn,Dalitz:1960du,Kaiser:1996js,Oset:1997it,Oller:2000fj,Pakvasa:1999zv, Oset:2001cn,Jido:2003cb,GarciaRecio:2002td,Hyodo:2002pk,GarciaRecio:2003ks, Magas:2005vu,Roca:2006kr,Roca:2006pu,Roca:2008kr,Sarkar:2005ap,Oller:2006jw,Oller:2006hx,Fink:1989uk, Hyodo:2003qa,Borasoy:2005ie,Oller:2005ig,Hyodo:2003jw,Geng:2007hz,Geng:2007vm,Geng:2008er} have been devoted to understanding the nature of the low-lying $\Lambda$ resonances. However, their properties still bare a lot of controversies. For example, in the naive quark model the $\Lambda(1405)$ is classified as the lowest $L=1$ orbital excited $qqq$ state as an $SU(3)$ flavor singlet \cite{Isgur78,Capstick:1986bm,Loring:2001ky}. Meanwhile, it is also proposed to be a dynamically generated resonance emerging from the interaction of the $\bar{K}N$ and $\pi \Sigma$ with a multi-quark structure \cite{Dalitz:1959dn,Dalitz:1960du,Kaiser:1996js,Oset:1997it,Oller:2000fj, Oset:2001cn,Jido:2003cb,GarciaRecio:2002td,Hyodo:2002pk,GarciaRecio:2003ks, Magas:2005vu,Roca:2008kr}. Most of those studies are based on the unitary chiral perturbation theory (U$\chi$PT). Such a scenario is developed further which proposes that the $\Lambda(1405)$ could be a superposition of two resonances \cite{Fink:1989uk,Hyodo:2003qa,Borasoy:2005ie,Oller:2005ig,Oller:2000fj,Jido:2003cb,Magas:2005vu}. Similar mechanisms are studied in various processes \cite{Hyodo:2003jw,Geng:2007hz,Geng:2007vm}, such as $K^-p\rightarrow \pi^0\pi^0\Sigma^0$, $\pi^-p\rightarrow K^0\pi \Sigma$ and $pp\rightarrow pK^+\Lambda$, as a support of the dynamically generated states. How to clarify these issues and make a contact with experimental observables are still an open question~\cite{Zychor:2007gf,Zychor:2008ct,Zou:2007mk,Xie:2007qt,Sibirtsev:2005mv}. On the other hand, it is of great importance to understand the excitation of those low-lying $\Lambda$ states in a quark model framework. Quark model somehow provides a guidance for the underlying effective degrees of freedom within hadrons. In order to probe exotic configurations such as multiquarks and hybrids, one should also have a good understanding of where the non-relativistic constituent quark model (NRCQM) breaks down. Particularly in the sector of hyperon states, there are still a lot of ambiguities to be clarified. Apart from the $\Lambda(1405)$, the $\Lambda(1520)$ and $\Lambda(1670)$ are also suggested to be quasibound states of a meson and a baryon, which are dynamically generated resonances based on the U$\chi$PT~\cite{Sarkar:2005ap,Roca:2008kr,Roca:2006kr}. While in the quark model these two states are classified as the lowest $L=1$ orbital excited states with $J^P=3/2^-$ and $J^P=1/2^-$, respectively. To clarify the nature of those low-lying $\Lambda$ resonances and their internal effective quark degrees of freedom, more theoretical and experimental studies are needed. 
Recently, higher precision data for the reaction $K^-p\rightarrow \Sigma^0\pi^0$ at eight beam momenta between 514 and 750 MeV/c were reported \cite{Manweiler:2008zz}, which provide us with a good opportunity to study the properties of these low-lying $\Lambda$ resonances. In this work, we make an investigation of the $K^-p\rightarrow \Sigma^0\pi^0$ reaction in a chiral quark model. In this model an effective chiral Lagrangian is introduced to account for the quark-pseudoscalar-meson coupling. Since the quark-meson coupling is invariant under the chiral transformation, some of the low-energy properties of QCD are retained. The chiral quark model has been well developed and widely applied to meson photoproduction reactions~\cite{qk1,qk2,qkk,Li:1997gda,zhao-kstar,qk3,qk4,qk5,He:2008ty}. Its recent extension to describe the process of $\pi N$ scattering~\cite{Zhong:2007fx} and investigate the strong decays of charmed hadrons~\cite{Zhong:2007gp,Zhong:2008kd} also turns out to be successful and inspiring. In the literature, the $\bar{K}N$ scattering has been studied using different approaches, such as the $K$-matrix methods \cite{Martin:1969ud}, dispersion relations \cite{Gensini:1997fp,Martin:1980qe}, meson-exchange models \cite{Buttgen:1985yz,Buettgen:1990yw,MuellerGroeling:1990cw}, coupled-channel approaches \cite{Borasoy:2006sr,Oller:2000fj, Oset:1997it,Oset:2001cn,GarciaRecio:2002td,Hyodo:2003qa,Hyodo:2002pk}, and quark models \cite{Hamaie:1995wy}. Compared with these models, our model has several distinctive features. One is that only a limited number of parameters appear in the formalism. In particular, only one parameter is needed for the resonances to be coupled to the pseudoscalar meson. This distinguishes it from hadronic models, where each resonance requires one additional coupling constant as a free parameter. The second is that all the resonances can be treated consistently at the quark level. Thus, the model has predictive power when confronted with experimental data, and information about the resonance structures and form factors can be extracted. In the $K^-p\rightarrow \Sigma^0\pi^0$ reaction, for the $s$-channel, the $K^-$- and the $\pi^0$-mesons cannot couple to the same quark in a baryon, which leads to a strong suppression in the $s$-channel amplitudes. As shown in Fig.~\ref{fig-1}, the amplitude $M_2^s$ is suppressed relative to $M_3^s$ by a factor of $(-1/2)^n$, with $n$ the principal quantum number of the NRCQM harmonic oscillator potential~\cite{qk1,qk2,qkk,Li:1997gda,zhao-kstar,qk3,qk4,qk5,Zhong:2007fx,He:2008ty}. In contrast, in the $u$-channel the kaon and pion are allowed to couple to the same quark. Thus, the $u$-channel gives a large background in the cross section, and has significant destructive interferences with $\Lambda(1405)S_{01}$ at the forward angles. The $t$-channel, dominated by the $K^*$ exchange, also plays an important role in the reactions. It visibly suppresses the cross section at the backward angles, while enhancing it at the forward angles. We also consider the $t$-channel scalar meson exchange, i.e. $\kappa$, but find its contributions are negligibly small. The $\Lambda(1405)$ governs the reaction in the whole energy region near threshold, which is similar to the $S_{11}(1535)$ dominance in $\pi^- p\to \eta n$~\cite{Zhong:2007fx}. Around $P_K= 400$ MeV/c, the $\Lambda(1520)$ is responsible for the sharp resonant peak in the total cross section. The contributions of $\Lambda(1670)$ turn out to be important at $P_K\simeq 750$ MeV/c. 
The paper is organized as follows. In the subsequent section, the amplitudes of $s$- and $u$-channels are obtained. Then, amplitudes of $t$-channel are given in Sec.\ \ref{tc}. The resonance contributions are separated in Sec.\ \ref{ss}. We present our calculations and discussions in Sec.\ \ref{cy}. Finally, a summary is given in Sec.\ \ref{sum}. \section{amplitudes of the $s$- and $u$-channel transitions}\label{suc} \subsection{The interactions }\label{qmc} The effective quark-pseudoscalar-meson coupling in the chiral quark model has been discussed in detail in Refs.~\cite{Li:1997gda,qk3,Zhong:2007fx}. Here, we only outline the main formulae to keep the self-consistence of this work. The low energy quark-meson interactions are described by the effective Lagrangian \cite{Li:1997gda,qk3} \begin{eqnarray} \label{lg} \mathcal{L}=\bar{\psi}[\gamma_{\mu}(i\partial^{\mu}+V^{\mu}+\gamma_5A^{\mu})-m]\psi +\cdot\cdot\cdot, \end{eqnarray} where $V^{\mu}$ and $A^{\mu}$ correspond to vector and axial currents, respectively. They are given by \begin{eqnarray} V^{\mu} &=& \frac{1}{2}(\xi\partial^{\mu}\xi^{\dag}+\xi^{\dag}\partial^{\mu}\xi), \nonumber\\ A^{\mu} &=& \frac{1}{2i}(\xi\partial^{\mu}\xi^{\dag}-\xi^{\dag}\partial^{\mu}\xi), \end{eqnarray} under the chiral transformation $\xi=\exp{(i \phi_m/f_m)}$, where $f_m$ is the meson's decay constant. For the $SU(3)$ case, the pseudoscalar-meson octet $\phi_m$ can be expressed as \begin{eqnarray} \phi_m=\pmatrix{ \frac{1}{\sqrt{2}}\pi^0+\frac{1}{\sqrt{6}}\eta & \pi^+ & K^+ \cr \pi^- & -\frac{1}{\sqrt{2}}\pi^0+\frac{1}{\sqrt{6}}\eta & K^0 \cr K^- & \bar{K}^0 & -\sqrt{\frac{2}{3}}\eta}, \end{eqnarray} and the quark field $\psi$ is given by \begin{eqnarray}\label{qf} \psi=\pmatrix{\psi(u)\cr \psi(d) \cr \psi(s) }. \end{eqnarray} At the leading order of the Lagrangian [Eq.(\ref{lg})], the quark-meson pseudovector coupling is \begin{eqnarray}\label{coup} H_m=\sum_j \frac{1}{f_m}\bar{\psi}_j\gamma^{j}_{\mu}\gamma^{j}_{5}\psi_j\vec{\tau}\cdot\partial^{\mu}\vec{\phi}_m. \end{eqnarray} where $\psi_j$ represents the $j$-th quark field in a hadron. The non-relativistic form of Eq. (\ref{coup}) can be written as \cite{Zhong:2007fx,Li:1997gda,qk3} \begin{eqnarray}\label{ccpk} H^{nr}_{m}=\sum_j\Big\{\frac{\omega_m}{E_f+M_f}\mbox{\boldmath$\sigma$\unboldmath}_j\cdot \textbf{P}_f+ \frac{\omega_m}{E_i+M_i}\mbox{\boldmath$\sigma$\unboldmath}_j \cdot \textbf{P}_i -\mbox{\boldmath$\sigma$\unboldmath}_j \cdot \textbf{q} +\frac{\omega_m}{2\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_j\cdot \textbf{p}'_j\Big\}I_j \varphi_m, \end{eqnarray} where $\mbox{\boldmath$\sigma$\unboldmath}_j$ corresponds to the Pauli spin vector of the $j$-th quark in a hadron, and $\mu_q$ is a reduced mass given by $1/\mu_q=1/m_j+1/m'_j$, where $m_j$ and $m'_j$ stand for the masses of the $j$-th quark in the initial and final hadrons, respectively. For emitting a meson, we have $\varphi_m=\exp({-i\textbf{q}\cdot \textbf{r}_j})$, and for absorbing a meson we have $\varphi_m=\exp({i\textbf{q}\cdot \textbf{r}_j})$. In the above non-relativistic expansions, $\textbf{p}'_j$ $(=\textbf{p}_j-\frac{m_j}{M}\mathbf{P}_{c.m.})$ is the internal momentum for the $j$-th quark in the initial meson rest frame. $\omega_m$ and $\textbf{q}$ are the energy and three-vector momentum of the light meson, respectively. The isospin operator $I_j$ in Eq. 
(\ref{ccpk}) is expressed as \begin{eqnarray} I_j=\cases{ a^{\dagger}_j(u)a_j(s) & for $K^+$, \cr a^{\dagger}_j(s)a_j(u) & for $K^-$,\cr a^{\dagger}_j(d)a_j(s) & for $K^0$, \cr a^{\dagger}_j(s)a_j(d) & for $\bar{K^0}$,\cr a^{\dagger}_j(u)a_j(d) & for $\pi^+$,\cr a^{\dagger}_j(d)a_j(u) & for $\pi^-$,\cr \frac{1}{\sqrt{2}}[a^{\dagger}_j(u)a_j(u)-a^{\dagger}_j(d)a_j(d)] & for $\pi^0$,} \end{eqnarray} where $a^{\dagger}_j(u,d,s)$ and $a_j(u,d,s)$ are the creation and annihilation operators for the $u$, $d$ and $s$ quarks. \begin{center} \begin{figure}[ht] \centering \epsfxsize=9 cm \epsfbox{fig-1.eps} \caption{ Transition channels labeled by the Mandelstem variables, i.e. $s$, $u$, and $t$- channels. $M^s_3$ and $M^u_3$ ($M^s_2$, $M^u_2$ ) correspond to the amplitudes of the $s$- and $u$-channels with the incoming meson and outgoing meson absorbed and emitted by the same quark (different quarks), respectively. Note that in reaction $K^- p\to \Sigma^0\pi^0$ the amplitude $M^s_3$ vanishes.}\label{abc} \end{figure} \end{center} \subsection{The $s$-channel amplitudes} \label{s} The $s$-channel transition amplitudes as shown in Fig. \ref{abc} can be expressed as \begin{eqnarray} \mathcal{M}_{s}=\sum_j\langle N_f |H_{\pi} |N_j\rangle\langle N_j |\frac{1}{E_i+\omega_K-E_j}H_{K }|N_i\rangle, \label{sc} \end{eqnarray} where $\omega_K$ is the energy of the incoming $K^-$-meson. $H_K$ and $H_\pi$ are the standard quark-meson couplings at tree level described by Eq.(\ref{coup}). $|N_i\rangle$, $|N_j\rangle$ and $|N_f\rangle$ stand for the initial, intermediate and final states, respectively, and their corresponding energies are $E_i$, $E_j$ and $E_f$, which are the eigenvalues of the NRCQM Hamiltonian $\hat{H}$~\cite{Isgur78, Isgur:1977ky}. Following the procedures developed in Refs. \cite{qkk,Li:1997gda,qk3,Zhong:2007fx}, one can then express the $s$-channel amplitudes by operator expansions: \begin{eqnarray} \mathcal{M}_{s}=\sum_j\langle N_f |H_{\pi} |N_j\rangle\langle N_j| \sum_n \frac{1}{\omega_K^{n+1}}(\hat{H}-E_i)^n H_{K }|N_i\rangle \ , \end{eqnarray} where $n$ is the principle harmonic oscillator quantum number. Note that for any operator ${\hat{\cal O}}$, one has \begin{eqnarray} (\hat{H}-E_i){\hat{\cal O}}|N_i\rangle = [\hat{H}, \ {\hat{\cal O}}]|N_i\rangle , \end{eqnarray} a systematic expansion of the commutator between the NRCQM Hamiltonian $\hat{H}$ and the vertex coupling $H_K$ and $H_\pi$ can thus be carried out. Details of this treatment can be found in Refs. \cite{qkk,Li:1997gda,qk3}, but we note that in this study only the spin-independent potential in $\hat{H}$ is considered as a feasible leading order calculation. Finally, we can obtain the $s$-channel amplitude in the harmonic oscillator basis, which is expressed as \cite{Zhong:2007fx} \begin{eqnarray} \mathcal{M}^{s}=\sum_n (\mathcal{M}^{s}_{3}+\mathcal{M}^{s}_{2})e^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}, \end{eqnarray} where $\alpha$ is the oscillator strength, and $e^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}$ is a form factor in the harmonic oscillator basis. $\mathcal{M}^{s}_{3}$ ($\mathcal{M}^{s}_{2}$) corresponds to the amplitudes for the outgoing meson and incoming meson absorbed and emitted by the same quark (different quarks) (see Fig. \ref{abc}). Because of the isospin selection rule, the $\pi$ and $K^-$ can not couple to the same quark. Thus, the contribution of $\mathcal{M}^{s}_{3}$ vanishes and only $\mathcal{M}^{s}_{2}$ contributes to the $s$-channel, i.e. 
\begin{eqnarray}\label{ms2} \mathcal{M}^{s}_{2}&=&\langle N_f |6I_1\Big\{\mbox{\boldmath$\sigma$\unboldmath}_{1}\cdot \textbf{A}_{out}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{A}_{in}\sum_{n=0} \frac{F_s(n)}{n !}\frac{\mathcal{X}^n}{(-2)^n} +\Big[- \mbox{\boldmath$\sigma$\unboldmath}_{1}\cdot \textbf{A}_{out}\frac{\omega_{K}}{6\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{q}-\frac{\omega_{\pi}}{3m_q}\mbox{\boldmath$\sigma$\unboldmath}_1 \cdot \textbf{k}\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{A}_{in} \nonumber\\ &&+ \frac{\omega_{\pi}}{m_q}\frac{\omega_{K}}{2\mu_q} \frac{\alpha^2}{3}\mbox{\boldmath$\sigma$\unboldmath}_1\cdot\mbox{\boldmath$\sigma$\unboldmath}_3\Big]\times\sum_{n=1} \frac{F_s(n)}{(n-1) !}\frac{\mathcal{X}^{n-1}}{(-2)^n}+ \frac{\omega_{\pi}}{3m_q}\frac{\omega_{K}}{6\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_1 \cdot \textbf{q}\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{k}\sum_{n=2} \frac{F_s(n)}{(n-2) !}\frac{\mathcal{X}^{n-2}}{(-2)^{n}}\Big\}|N_i\rangle, \end{eqnarray} with \begin{eqnarray} \mathbf{A}_{in}=-\left(1+ \omega_{K} \mathcal{K}_i-\frac{\omega_K}{6\mu_q}\right)\textbf{k},\\ \mathbf{A}_{out}=-\left(1+\omega_{\pi}\mathcal{K}_f-\frac{\omega_\pi}{3m_q}\right)\textbf{q}, \end{eqnarray} where $\mathcal{K}_i=1/(E_i+M_i)$, $\mathcal{K}_f=1/(E_f+M_f)$ and $m_q$ is the light quark mass. In Eq. (\ref{ms2}), the subscriptions of the spin operator $\mbox{\boldmath$\sigma$\unboldmath}$ denote that it either operates on quark 3 or quark 1. The $\mathcal{X}$ is defined as $\mathcal{X}\equiv\frac{\textbf{k}\cdot \textbf{q}}{3 \alpha^2}$, and the factor $F_s(n)$ is given by expanding the energy propagator in Eq. (\ref{sc}) which leads to \begin{eqnarray}\label{s-prop} F_s(n)=\frac{M_n}{P_i \cdot k-nM_n \omega_h}, \end{eqnarray} where $M_n$ denotes the mass of the excited state in the $n$-th shell, while $\omega_h$ is the typical energy of the harmonic oscillator; $P_i$ and $k$ are the four momenta of the initial state nucleon and incoming $K^-$ meson in the c.m. system. The $F_s(n)$ has clear physical meaning that recovers the hadronic level propagators. We will come back to this in the next section. 
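For orientation, the expansion in $n$ employed above can be read as a formal geometric series of the $s$-channel energy propagator (assuming that $\omega_K$ exceeds the excitation energies $E_j-E_i$ of the contributing intermediate states, so that the series makes sense term by term):
\begin{eqnarray}
\frac{1}{E_i+\omega_K-\hat{H}}=\frac{1}{\omega_K-(\hat{H}-E_i)}
=\sum_{n\geq 0}\frac{1}{\omega_K^{n+1}}(\hat{H}-E_i)^{n},
\end{eqnarray}
which, sandwiched between $\langle N_j|$ and $H_K|N_i\rangle$ and evaluated in the harmonic oscillator basis, is the origin of the $n$-dependent factors collected into $F_s(n)$.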
The above transition amplitude can be written coherently in terms of a number of $g$-factors, which will allow us to relate the quark-level amplitudes to those at hadronic level \begin{eqnarray} \label{sac} \mathcal{M}_{s}&=&\Big\{g_{s2}\textbf{A}_{out}\cdot\textbf{A}_{in}\sum_{n=0} (-2)^{-n} \frac{F_s(n)}{n !}\mathcal{X}^n +g_{s2}\left(-\frac{\omega_{K}}{6\mu_q}\textbf{A}_{out}\cdot \textbf{q}-\frac{\omega_{\pi}}{3m_q}\textbf{A}_{in}\cdot \textbf{k}+\frac{\omega_{\pi}}{m_q}\frac{\omega_{K}}{2\mu_q} \frac{\alpha^2}{3}\right)\nonumber\\ &&\times\sum_{n=1}(-2)^{-n}\frac{F_s(n)}{(n-1) !}\mathcal{X}^{n-1}+ g_{s2}\frac{\omega_{\pi}\omega_{K}}{18m_q\mu_q}\textbf{k}\cdot\textbf{q} \sum_{n=2}\frac{F_s(n)}{(n-2)!}(-2)^{-n}\mathcal{X}^{n-2}\nonumber\\ &&+g_{v2}i\mbox{\boldmath$\sigma$\unboldmath} \cdot(\textbf{A}_{out}\times\textbf{A}_{in})\sum_{n=0}(-2)^{-n} \frac{F_s(n)}{n !}\mathcal{X}^n +g_{v2}\frac{\omega_{\pi}\omega_{K}}{18m_q\mu_q}i\mbox{\boldmath$\sigma$\unboldmath}\cdot(\textbf{q}\times\textbf{k}) \nonumber\\ && \times\sum_{n=2}(-2)^{-n}\frac{F_s(n)}{(n-2)!}\mathcal{X}^{n-2}\Big\}e^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}, \end{eqnarray} where the $g$-factors, $g_{s2}$ and $g_{v2}$, in the $s$-channel are defined as \begin{eqnarray} g_{s2}&\equiv&\langle N_f |\sum_{i\neq j} I^{\pi}_i I^{K}_j\mbox{\boldmath$\sigma$\unboldmath}_i\cdot \mbox{\boldmath$\sigma$\unboldmath}_j |N_i \rangle/3,\\ g_{v2}&\equiv&\langle N_f |\sum_{i\neq j} I^{\pi}_i I^{K}_j(\mbox{\boldmath$\sigma$\unboldmath}_i\times \mbox{\boldmath$\sigma$\unboldmath}_j)_z |N_i\rangle/2, \end{eqnarray} which can be derived from the quark model in the $SU(6)\otimes O(3)$ limit. \subsection{The $u$-channel amplitudes} The $u$-channel transition amplitudes (see Fig. \ref{abc}) are given by \begin{eqnarray} \mathcal{M}_{u}=\sum_j\langle N_f |H_{K } \frac{1}{E_i-\omega_\pi-E_j}|N_j\rangle\langle N_j | H_{\pi } |N_i\rangle, \label{uc} \end{eqnarray} Following the same procedure in \ref{s}, when the outgoing and incoming mesons couple to the same quark, we obtain the amplitude \begin{eqnarray}\label{mu3} \mathcal{M}^{u}_{3}&=&-\langle N_f |3I^K_3 I^{\pi}_3\Big\{\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{B}_{in}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{B}_{out}\sum_{n=0} F_u(n)\frac{1}{n !}\mathcal{X}^n + \Big[-\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{B}_{in}\frac{\omega_{\pi}}{3m_q}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{k} - \frac{\omega_{K}}{6\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{q}\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{B}_{out}+ \frac{\omega_{\pi}}{m_q}\frac{\omega_{K}}{2\mu_q} \frac{\alpha^2}{3}\Big]\nonumber\\ && \times\sum_{n=1} F_u(n)\frac{\mathcal{X}^{n-1}}{(n-1) !}+ \frac{\omega_{\pi}}{3m_q}\frac{\omega_{K}}{6\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{k}\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{q}\sum_{n=2} F_u(n)\frac{\mathcal{X}^{n-2}}{(n-2) !}\Big\}|N_i\rangle. 
\end{eqnarray} While the outgoing and incoming mesons couple to two different quarks, the transition amplitude is given by \begin{eqnarray}\label{mu2} \mathcal{M}^{u}_{2}&=&-\langle N_f |6I^K_1I^{\pi}_3\Big\{\mbox{\boldmath$\sigma$\unboldmath}_{1}\cdot \textbf{B}_{in}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{B}_{out}\sum_{n=0} \frac{F_u(n)}{n !}\frac{\mathcal{X}^n }{(-2)^n}+\Big[ -\mbox{\boldmath$\sigma$\unboldmath}_{1}\cdot \textbf{B}_{in}\frac{\omega_{\pi}}{3m_q}\mbox{\boldmath$\sigma$\unboldmath}_3 \cdot \textbf{k}-\frac{\omega_{K}}{6\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_1 \cdot \textbf{q}\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{B}_{out}\nonumber\\&&+ \frac{\omega_{\pi}}{m_q} \frac{\omega_{K}}{2\mu_q} \frac{\alpha^2}{3}\mbox{\boldmath$\sigma$\unboldmath}_1 \cdot \mbox{\boldmath$\sigma$\unboldmath}_3\Big] \sum_{n=1} \frac{F_u(n)}{(n-1) !}\frac{\mathcal{X}^{n-1}}{(-2)^n}+ \frac{\omega_{\pi}}{3m_q}\frac{\omega_{K}}{6\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_1 \cdot \textbf{k}\mbox{\boldmath$\sigma$\unboldmath}_{3}\cdot \textbf{q}\sum_{n=2} \frac{F_u(n)}{(n-2) !}\frac{\mathcal{X}^{n-2}}{(-2)^{n}}\Big\}|N_i\rangle . \end{eqnarray} In the above equations, we have defined \begin{eqnarray} \mathbf{B}_{in}\equiv-\omega_K\left(\mathcal{K}_f+\mathcal{K}_j-\frac{1}{6\mu_q}\right)\mathbf{q}-(1+\omega_K\mathcal{K}_j)\mathbf{k},\\ \mathbf{B}_{out}\equiv-\omega_\pi\left(\mathcal{K}_i+\mathcal{K}_j-\frac{1}{3m_q}\right)\mathbf{k}-(1+\omega_\pi \mathcal{K}_j)\mathbf{q}, \end{eqnarray} where $\mathcal{K}_j=1/(E_j+M_j)$. In Eqs.(\ref{mu3}) and (\ref{mu2}), the factor $F_u(n)$ is written as \begin{eqnarray} F_u(n)=\frac{M_n}{P_i \cdot q+nM_n \omega_h}, \end{eqnarray} where $q$ is the four momentum of the outgoing $\pi$ meson in the c.m. system. 
The total amplitude for the $u$-channel is expressed as \begin{eqnarray}\label{uac} \mathcal{M}_{u}&=&-\Big\{\textbf{B}_{in}\cdot\textbf{B}_{out}\sum_{n=0} \left[g^u_{s1}+(-2)^{-n}g^u_{s2}\right] \frac{F_u(n)}{n !}\mathcal{X}^n +\left(-\frac{\omega_{\pi}}{3m_q}\textbf{B}_{in}\cdot \textbf{k}-\frac{\omega_{K}}{3m_q}\textbf{B}_{out}\cdot \textbf{q}+\frac{\omega_{K}}{2\mu_q}\frac{\omega_{\pi}}{m_q} \frac{\alpha^2}{3}\right)\nonumber\\ &&\times\sum_{n=1}[g^u_{s1}+(-2)^{-n}g^u_{s2}] \frac{F_u(n)}{(n-1) !}\mathcal{X}^{n-1}+ \frac{\omega_{\pi}\omega_{K}}{18m_q\mu_q}\textbf{k}\cdot\textbf{q} \sum_{n=2}\frac{F_u(n)}{(n-2)!}[g^u_{s1}+(-2)^{-n}g^u_{s2}] \mathcal{X}^{n-2} \nonumber\\ && + i\mbox{\boldmath$\sigma$\unboldmath} \cdot(\textbf{B}_{in}\times\textbf{B}_{out})\sum_{n=0}\left[g^u_{v1}+(-2)^{-n}g^u_{v2}\right] \frac{F_u(n)}{n !}\mathcal{X}^n -\frac{\omega_{\pi}\omega_{K}}{18m_q\mu_q}i\mbox{\boldmath$\sigma$\unboldmath}\cdot(\textbf{q}\times\textbf{k}) \sum_{n=2}[g^u_{v1}+(-2)^{-n}g^u_{v2}]\nonumber\\ &&\times\frac{F_u(n)}{(n-2)!}\mathcal{X}^{n-2} +i\mbox{\boldmath$\sigma$\unboldmath} \cdot \left[-\frac{\omega_{\pi}}{3m_q}(\textbf{B}_{in}\times \textbf{k})-\frac{\omega_{K}}{6\mu_q}(\textbf{q}\times\textbf{B}_{out} )\right ]\sum_{n=1}\left[g^u_{v1}+(-2)^{-n}g^u_{v2}\right] \mathcal{X}^{n-1} \frac{F_u(n)}{(n-1) !} \Big\}\nonumber\\ &&\times e^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}, \end{eqnarray} where the $g$ factors in the $u$-channel are determined by \begin{eqnarray} g^u_{s1}&\equiv&\langle N_f |\sum_{j}I^{K}_j I^{\pi}_j |N_i \rangle,\\ g^u_{s2}&\equiv&\langle N_f |\sum_{i\neq j} I^{K}_i I^{\pi}_j\mbox{\boldmath$\sigma$\unboldmath}_i\cdot \mbox{\boldmath$\sigma$\unboldmath}_j |N_i \rangle/3,\\ g^u_{v1}&\equiv&\langle N_f |\sum_{j} I^{K}_j I^{\pi}_j \sigma^z_j |N_i\rangle,\\ g^u_{v2}&\equiv&\langle N_f |\sum_{i\neq j} I^{K}_i I^{\pi}_j (\mbox{\boldmath$\sigma$\unboldmath}_i\times \mbox{\boldmath$\sigma$\unboldmath}_j)_z |N_i\rangle/2. \end{eqnarray} The numerical values of these factors can be derived in the $SU(6)\otimes O(3)$ symmetry limit. The first term in Eqs. (\ref{ms2}), (\ref{mu3}) and (\ref{mu2}) comes from the correlation between the c.m. motion of the $K^-$ meson transition operator and the c.m. motion of $\pi$-meson transition operator; the second and the third terms are the correlation among the internal and the c.m. motions of the $K^-$ and $\pi$ transition operators, and their contributions begin with the $n\geq 1$ exited states in the harmonic oscillator basis. The last two terms in these equations correspond to the correlation of the internal motions between the $K^-$ and $\pi$ transition operators, and their contributions begin with either $n\geq 1$ or $n\geq 2$ exited states. \section{Amplitudes of the $t$-channel transitions}\label{tc} \subsection{The interactions} The light meson exchange in the $t$-channel at low energies will generally have larger contributions than the heavy ones. In $K^- p\to \Sigma^0\pi^0$, we consider the $t$-channel vector meson $K^*(892)$ and scalar meson $\kappa(800)$ exchanges which are found dominantly coupled to $K\pi$~\cite{PDG}. 
For the $K^*K\pi$ and $\kappa K\pi$ couplings, we introduce the following effective interactions \begin{eqnarray} H_{K^*K\pi}&=&iG_v\{[(\partial_\mu \bar{K})K^*-\bar{K}^*(\partial_\mu K)]\vec{\tau}\cdot\vec{\pi} -[\bar{K}K^*-\bar{K}^*K]\vec{\tau}\cdot (\partial_\mu \vec{\pi})\},\\ H_{\kappa K\pi}&=&\frac{g_{\kappa K\pi}}{2m_\pi}\partial_\mu K\partial^\mu \pi \kappa, \end{eqnarray} where $G_v$ and $g_{\kappa K\pi}$ are the coupling constants to be determined by experimental data~\cite{PDG}. Similar to the quark-pseudoscalar-meson coupling, we introduce the $K^*NN$ and $\kappa NN$ couplings at the quark level by the effective $K^*qq$ and $\kappa qq$ Lagrangians: \begin{eqnarray} H_{K^*qq}&=& \bar{\psi}_j(a\gamma^{\nu}+\frac{i b\sigma^{\nu\lambda}q_{\lambda}}{2m_q})K^{*}_{\nu} \psi_j,\\ H_{\kappa qq}&=& g_{\kappa qq}\bar{\psi}_j\psi_j \kappa\label{acoup}, \end{eqnarray} where the constants $a$, $b$ and $g_{\kappa qq}$ are the vector, tensor and scalar coupling constants, which are treated as free parameters in this work. \subsection{The amplitudes} For the vector meson $K^*$-exchange, the $t$-channel amplitude can be written as \begin{eqnarray} \label{tchannel} \mathcal{M}^V_t= G_v(q^{\mu}+k^\mu)G_{\mu\nu}\sum_j \bar{\psi}_j(a\gamma^{\nu}+\frac{i b\sigma^{\nu\lambda}q_{\lambda}}{2m_q})\phi^{m}_{\nu} \psi_j, \end{eqnarray} where $q^{\mu}, k^\mu$ are the four momenta of the $\pi^0$ and $K^-$ mesons, respectively. In (\ref{tchannel}), the propagator $G_{\mu\nu}$ is defined by \begin{eqnarray} G_{\mu\nu}=(-g_{\mu\nu}+\frac{Q_\mu Q_\nu}{t})/(t-M_{K^*}^{2}), \end{eqnarray} where $t\equiv Q^2$. The Feynman diagram is shown in Fig. \ref{abc}. The $t$-channel amplitude in the quark model is given by \begin{eqnarray} \mathcal{M}^V_t=\mathcal{O}^t_V\frac{1}{t-M_{K^*}^{2}}e^{-(\mathbf{q}-\mathbf{k})^2/6\alpha^2},\label{t-v} \end{eqnarray} where $e^{-(\mathbf{q}-\mathbf{k})^2/6\alpha^2}$ is a quark model form factor, $M_{K^*}$ is the mass of the vector meson $K^*$, and the amplitude $\mathcal{O}^t_V$ is given by \begin{eqnarray} \mathcal{O}^t_V=G_v a[g^s_{t}(\mathcal{H}_0+\mathcal{H}_1 \mathbf{q}\cdot \mathbf{k})+g^v_t\mathcal{H}_2 i\mbox{\boldmath$\sigma$\unboldmath}\cdot(\mathbf{q}\times \mathbf{k})]+\mathrm{tensor\ term},\label{tt-v} \end{eqnarray} where in Eq. (\ref{tt-v}) we have defined \begin{eqnarray} \mathcal{H}_0&\equiv&E_0-\left[E_0 \mathcal{K}_{is}+\left(\mathcal{K}_i+\frac{1}{6\mu_q}\right)(1+\mathcal{D})\right]k^2+\left[E_0\mathcal{K}_{fq}-\left(\mathcal{K}_f-\frac{1}{6\mu_q}\right)(1-\mathcal{T})\right]q^2,\\ \mathcal{H}_1&\equiv&E_0[\mathcal{K}_i\mathcal{K}_f-(\mathcal{K}_{fq}-\mathcal{K}_{is})]-(\mathcal{K}_i+\mathcal{K}_f)-(\mathcal{K}_i-\mathcal{K}_f)\mathcal{T}-\frac{1}{3\mu_q}\mathcal{T},\\ \mathcal{H}_2&\equiv&E_0[\mathcal{K}_f\mathcal{K}_i-(\mathcal{K}_{fq}-\mathcal{K}_{is})]-(\mathcal{K}_i+\mathcal{K}_f)-(\mathcal{K}_i -\mathcal{K}_f)\mathcal{T}+\frac{1}{3}\left(\frac{1}{m_q}-\frac{1}{m_s}\right)\mathcal{T}, \end{eqnarray} with \begin{eqnarray} \mathcal{K}_{fq}&=&\frac{1}{6m_q}\mathcal{K}_f,\ \mathcal{K}_{is}=\frac{1}{6m_s}\mathcal{K}_i,\\ \mathcal{T}&=&\frac{m^2_{\pi}-m^2_K}{t},\\ E_0&=&-(\omega_K+\omega_\pi)+(\omega_\pi-\omega_K)\mathcal{T}. \end{eqnarray} The $K^*$ exchange couplings, i.e. vector and tensor, can in principle be determined by $K^*$ meson photoproduction. However, the present experimental results from JLab and ELSA favor quite different values of the tensor coupling. In $K^- p\to \Sigma^0\pi^0$, the $K^*$ exchange is not a predominant transition mechanism. 
We hence only consider the $t$-channel vector exchange, but neglect the tensor term for simplicity. In Eq. (\ref{t-v}), we have defined $g^s_t\equiv \langle N_f|\sum^3_{j=1}I^{K^-}_j|N_i\rangle$, and $g^v_t\equiv \langle N_f|\sum^3_{j=1}\sigma_j I^{K^-}_j|N_i\rangle$, which can be deduced from the quark model. Their values are listed in Tab. \ref{factor}. Similarly, for the scalar meson $\kappa$-exchange, the $t$-channel amplitude in the quark model is written as \begin{eqnarray} \mathcal{M}^S_t=\mathcal{O}^t_S\frac{1}{t-m^2_{\kappa} }e^{-(\mathbf{q}-\mathbf{k})^2/6\alpha^2}, \end{eqnarray} where $m_{\kappa}$ is the $\kappa$-meson mass, and $\mathcal{O}^t_S$ is given by \begin{eqnarray} \mathcal{O}^t_S\simeq\frac{g_{\kappa K \pi} g_{\kappa qq}}{2m_\pi}(\omega_K\omega_\pi-\mathbf{q}\cdot \mathbf{k})[g^s_{t}(\mathcal{A}_0+\mathcal{A}_1\mathbf{q}\cdot \mathbf{k})+g^v_t \mathcal{A}_1 i\mbox{\boldmath$\sigma$\unboldmath}\cdot(\mathbf{q}\times \mathbf{k}) ],\label{t-s} \end{eqnarray} with \begin{eqnarray} \mathcal{A}_0\equiv1+\frac{1}{2m_q}\mathcal{K}_f\mathbf{q}^2-\frac{1}{2m_s}\mathcal{K}_i\mathbf{k}^2,\\ \mathcal{A}_1\equiv \mathcal{K}_i\mathcal{K}_f-\frac{1}{2m_q}\mathcal{K}_f+\frac{1}{2m_s}\mathcal{K}_i. \end{eqnarray} In Eq. (\ref{t-s}), we have neglected higher order terms. \section{SEPARATION OF THE SINGLE RESONANCE CONTRIBUTIONS}\label{ss} Note that, so far, we have separated out the amplitudes in terms of the harmonic oscillator principal quantum number $n$; these amplitudes are sums over sets of $SU(6)$ multiplets with the same $n$. To see the contributions of individual resonances, we need to further separate out the single-resonance-excitation amplitudes within each $n$ in the $s$-channel. Since the resonances in the $u$-channel contribute virtually and are generally suppressed by the kinematics, we treat them as degenerate for each $n$. The function $F_s(n)$ in Eq.~(\ref{s-prop}) can be related to the $s$-channel propagator in the infinitely-narrow-width limit: \begin{eqnarray} F_s(n)&=&\frac{2M_n}{s-(M_i^2+M_K^2+2nM_i\omega_h)}\equiv \frac{2M_n}{s-M_n^2} \ , \end{eqnarray} where it has been assumed that $M_n^2\equiv M_i^2+M_K^2+2nM_i\omega_h$, which is not a bad assumption for the masses of an excited $n$-shell state. $M_i$ denotes the initial baryon mass. Taking into account the width effects of the resonances, the resonance transition amplitudes of the $s$-channel can be generally expressed as \cite{qk3,Zhong:2007fx} \begin{eqnarray} \mathcal{M}^s_R=\frac{2M_R}{s-M^2_R+iM_R \Gamma_R}\mathcal{O}_Re^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}, \label{stt} \end{eqnarray} and the $u$-channel as \begin{eqnarray} \mathcal{M}^u_n=-\frac{2M_n}{u-M^2_n}\mathcal{O}_n e^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}.\label{utt} \end{eqnarray} In Eqs. (\ref{stt}) and (\ref{utt}), $\mathcal{O}_R$ is the separated operator for an individual resonance in the $s$-channel, while $\mathcal{O}_n$ is the operator for a set of degenerate states with the same $n$. In the $s$-channel of $K^-p\rightarrow \Sigma^0\pi^0$, only the $\Lambda$ resonances are involved. Our effort in the following subsections is to extract $\mathcal{O}_R$ for each $s$-channel resonance with $n<2$. 
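As a rough numerical illustration of this degenerate-mass prescription (the value of $\omega_h$ below is an assumed representative oscillator energy, not a fitted quantity, and all numbers are in GeV), the shell masses $M_n$ and the Breit--Wigner factor of Eq.~(\ref{stt}) can be evaluated as follows:
\begin{verbatim}
import numpy as np

# Degenerate n-shell masses implied by F_s(n) = 2 M_n/(s - M_n^2); all numbers in GeV.
# omega_h is an assumed representative oscillator energy, not a fitted quantity.
M_i, M_K, omega_h = 0.938, 0.494, 0.57

def M_n(n):
    return np.sqrt(M_i**2 + M_K**2 + 2.0 * n * M_i * omega_h)

def breit_wigner(s, M, Gamma=0.0):
    # 2M/(s - M^2 + i M Gamma), i.e. the s-channel factor once a width is restored
    return 2.0 * M / (s - M**2 + 1j * M * Gamma)

for n in range(3):
    print("n = %d : M_n = %.3f GeV" % (n, M_n(n)))
print("s-channel factor at sqrt(s) = 1.52 GeV, n = 1:",
      breit_wigner(1.52**2, M_n(1), Gamma=0.05))
\end{verbatim}
With this assumed $\omega_h$, the degenerate $n=1$ and $n=2$ masses come out near 1.5 and 1.8 GeV, i.e. in the neighborhood of the resonance masses used below.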
\subsection{$n=0$ shell resonances} With $n=0$ the $\Lambda$-hyperon is the only state contributing to the $s$-channel, and the amplitude can be written as \begin{eqnarray} \mathcal{M}_\Lambda^{s}&=&\mathcal{O}_\Lambda \frac{2M_\Lambda}{s-M^2_\Lambda}e^{-(\textbf{k}^2+\textbf{q}^2)/6\alpha^2}, \end{eqnarray} with \begin{eqnarray} \mathcal{O}_\Lambda=g_{s2}\textbf{A}_{out}\cdot\textbf{A}_{in} +g_{v2}i\mbox{\boldmath$\sigma$\unboldmath} \cdot(\textbf{A}_{out}\times\textbf{A}_{in}),\nonumber\\ \end{eqnarray} where $M_\Lambda$ is the $\Lambda$-hyperon mass. \subsection{$n=1$ shell resonances} Both $S$- and $D$-wave resonances contribute to the $s$-channel amplitude with $n=1$. Note that the spin-independent amplitude for $D$-waves is proportional to the Legendre function $P^0_2(\cos\theta)$, and the spin-dependent amplitude is in proportion to $\frac{\partial}{\partial \theta}P^0_2(\cos\theta)$. Moreover, the $S$-wave amplitude is independent of the scattering angle. Thus, the $S$- and $D$-wave amplitudes can be separated out easily as follows, \begin{eqnarray} \mathcal{O}_S&=&-\frac{1}{2}g_{s2}\left(|\mathbf{A}_{out}|\cdot|\mathbf{A}_{in}|\frac{|\mathbf{k}||\mathbf{q}|}{9\alpha^2} -\frac{\omega_K}{6\mu_q}\mathbf{A}_{out}\cdot \mathbf{q}-\frac{\omega_\pi}{3m_q}\mathbf{A}_{in}\cdot \mathbf{k}+\frac{\omega_\pi\omega_K}{2m_q\mu_q}\frac{\alpha^2}{3}\right),\\ \mathcal{O}_D&=&-\frac{1}{2}g_{s2}|\mathbf{A}_{out}|\cdot|\mathbf{A}_{in}|(3\cos^2\theta-1) \frac{|\mathbf{k}||\mathbf{q}|}{9\alpha^2}-\frac{1}{2}g_{v2}i\mbox{\boldmath$\sigma$\unboldmath} \cdot(\textbf{A}_{out}\times\textbf{A}_{in})\frac{\mathbf{k}\cdot\mathbf{q}}{3\alpha^2}. \end{eqnarray} In the NRCQM, the $n=1$ shell contains three different representations, i.e. $[\textbf{70},^2\textbf{1}]$, $[\textbf{70},^2\textbf{8}]$ and $[\textbf{70},^4\textbf{8}]$. The two low-lying $\Lambda$-resonances, $\Lambda(1405)S_{01}$ and $\Lambda(1520)D_{03}$, are classified to be flavor singlet states of $[\textbf{70},^2\textbf{1}]$, and they have no counterparts in the nucleon spectrum. The $\Lambda(1670)S_{01}$ and $\Lambda(1690)D_{03}$ are interpreted as multiplets of $[\textbf{70},^2\textbf{8}]$, which are octet partners of the nucleon resonances $S_{11}(1535)$ and $D_{13}(1520)$. Usually the $\Lambda(1800)S_{01}$ and $\Lambda(1830)D_{05}$ are classified as multiplets of $[\textbf{70},^4\textbf{8}]$, among which the $D_{03}$ state has not yet been found in experiment. In the $SU(6)\otimes O(3)$ quark model, the contributions of $[\textbf{70},^4\textbf{8}]$ are forbidden in $K^-p\rightarrow \Sigma^0 \pi^0$ due to the so-called ``$\Lambda$-selection rule" \cite{Zhao:2006an,Isgur:1978xb,Hey:1974nc}. Thus, for the $S$-wave, only the resonances, $\Lambda(1405)S_{01}$ and $\Lambda(1670)S_{01}$, contribute to the reactions, and for the $D$-waves, $\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$. The separated amplitudes for the $S$- and $D$-wave can thus be re-written as \begin{eqnarray} \mathcal{O}_S&=&[g_{S_{01}(1405)}+g_{S_{01}(1670)}]\mathcal{O}_S,\\ \mathcal{O}_D&=&[g_{D_{03}(1520)}+g_{D_{03}(1690)}]\mathcal{O}_D, \end{eqnarray} where the factor $g_R$ ($R=S_{01}(1405)$, etc) represents the resonance transition strengths in the spin-flavor space, and is determined by the matrix element $\langle N_f|H_\pi|N_j\rangle\langle N_j|H_K|N_i\rangle$. 
Their relative strengths can be explicitly determined by the following relations \begin{eqnarray} \frac{g_{S_{01}(1405)}}{g_{S_{01}(1670)}}=\frac{\langle N_f|I^{\pi}_3\mbox{\boldmath$\sigma$\unboldmath}_3|S_{01}(1405)\rangle\langle S_{01}(1405)|I^K_3\mbox{\boldmath$\sigma$\unboldmath}_3|N_i\rangle}{\langle N_f|I^{\pi}_3\mbox{\boldmath$\sigma$\unboldmath}_3|S_{01}(1670)\rangle\langle S_{01}(1670)|I^K_3\mbox{\boldmath$\sigma$\unboldmath}_3|N_i\rangle},\label{abcd}\\ \frac{g_{D_{03}(1520)}}{g_{D_{03}(1690)}}=\frac{\langle N_f|I^{\pi}_3\mbox{\boldmath$\sigma$\unboldmath}_3|D_{03}(1520)\rangle\langle D_{03}(1520)|I^K_3\mbox{\boldmath$\sigma$\unboldmath}_3|N_i\rangle}{\langle N_f|I^{\pi}_3\mbox{\boldmath$\sigma$\unboldmath}_3|D_{03}(1690)\rangle\langle D_{03}(1690)|I^K_3\mbox{\boldmath$\sigma$\unboldmath}_3|N_i\rangle}. \end{eqnarray} In the absence of configuration mixing among these states, we have $g_{S_{01}(1405)}/g_{S_{01}(1670)}=g_{D_{03}(1520)}/g_{D_{03}(1690)}=-3$. However, admixtures of different configurations usually occur in physical states with the same quantum numbers due to spin-dependent forces~\cite{Isgur78,Loring:2001ky}. We shall see in Sec.\ref{cy} that configuration mixing may exist between the $S$-wave $S_{01}(1405)$ and $S_{01}(1670)$ in this reaction. By allowing the data to constrain the relative partial strengths, i.e. $g_{S_{01}(1405)}/g_{S_{01}(1670)}$, we can extract the mixing angle as a leading order result. With the same method, we can separate the amplitudes in the $n=2$ shell as well; the details can be found in our previous work \cite{Zhong:2007fx}. In this work, the higher resonances (i.e. $n\geq 2$) are treated as degenerate since they are less important in the beam momentum region $P_K\lesssim 800$ MeV/c where high precision data are available. \begin{table}[ht] \caption{Various $g$ and $g_R$ factors defined in this work and extracted in the symmetric quark model. } \label{factor} \begin{tabular}{|c|c|c|c|}\hline\hline factor & value & factor & value \\ \hline $g^u_{s1}$ & 1/2 & $g^s_{t}$ & $\sqrt{2}/2 $ \\ $g^u_{s2}$ & 2/3 & $g^v_{t}$ & $-\sqrt{2}/6$ \\ $g^u_{v1}$ & -1/6 & $g_{S_{01}(1405)}$ & 3/2 \\ $g^u_{v2}$ & -1 & $g_{S_{01}(1670)}$ & -1/2 \\ $g_{s2}$ & 2/3 & $g_{D_{03}(1520)}$ & 3/2 \\ $g_{v2}$ & 1 & $g_{D_{03}(1690)}$ & -1/2\\ \hline \end{tabular} \end{table} \section{calculation and analysis} \label{cy} \subsection{Parameters} With the transition amplitudes derived in the previous sections, the differential cross section can be calculated as \begin{eqnarray} \frac{d\sigma}{d\Omega}=\frac{(E_i+M_i)(E_f+M_f)}{64\pi^2 s}\frac{|\textbf{q}|}{|\textbf{k}|}\frac{1}{2} \sum_{\lambda_i,\lambda_f}\left|\left[\frac{\delta^2}{f_\pi f_K}(\mathcal{M}_s+\mathcal{M}_u)+\mathcal{M}^V_t+\mathcal{M}^S_t\right]_{\lambda_f,\lambda_i}\right|^2 , \end{eqnarray} where $\lambda_i=\pm 1/2$ and $\lambda_f=\pm 1/2$ are the helicities of the initial and final state baryons, respectively; $\delta$ is a global parameter accounting for the flavor symmetry breaking effects arising from the quark-meson couplings, and will be determined by experimental data; $f_\pi$ and $f_K$ are the $\pi$- and $K$-meson decay constants, respectively. To take into account the relativistic effects, we introduce Lorentz boost factors in the spatial part of the amplitudes as done in Refs. \cite{qkk,Zhong:2007fx}, i.e. 
\begin{eqnarray} \mathcal{O}_i(\textbf{k},\textbf{q})\rightarrow \gamma_k \gamma_q \mathcal{O}_i(\textbf{k}\gamma_k, \textbf{q} \gamma_q), \end{eqnarray} where $\gamma_k=M_i/E_i$ and $\gamma_q=M_f/E_f$. We also introduce an energy-dependent width for the resonances in order to take into account the off-mass-shell effects in the reaction~\cite{qkk,qk3,qk4}: \begin{eqnarray} \Gamma(\textbf{q})=\Gamma_R\frac{\sqrt{s}}{M_R}\sum_i x_i \left(\frac{|\textbf{q}_i|}{|\textbf{q}^R_i|}\right)^{2l+1} \frac{D(\textbf{q}_i)}{D(\textbf{q}^R_i)}, \end{eqnarray} where $|\textbf{q}^R_i|=((M_R^2-M_b^2+m_i^2)^2/4M_R^2-m_i^2)^{1/2}$, and $|\textbf{q}_i|=((s-M_b^2+m_i^2)^2/4s-m_i^2)^{1/2}$; $x_i$ is the branching ratio of the resonance decaying into a meson with mass $m_i$ and a baryon with mass $M_b$, and $\Gamma_R$ is the total decay width of the $s$-channel resonance with mass $M_R$. $D(\textbf{q})=e^{-\textbf{q}^2/3\alpha^2}$ is a fission barrier function. In the calculation, the universal value of the harmonic oscillator parameter $\alpha=0.4$ GeV is adopted. The masses of the $u$, $d$, and $s$ constituent quarks are set as $m_u=m_d=330$ MeV, and $m_s=450$ MeV, respectively. The decay constants for $\pi$ and $K$ are $f_\pi=132$ MeV and $f_K=160$ MeV, respectively. Coupling constants in the $t$-channel transitions, i.e. $G_v$, $a$, $g_{\kappa K\pi}$ and $g_{\kappa qq}$, can be determined by other experimental data. For instance, $G_v$ can be determined by $K^*\to K\pi$~\cite{PDG}, while the vector coupling $a$ can be extracted from $K^*$ photoproduction~\cite{JLab-Kstar,elsa-kstar,zhao-kstar}. As shown by Refs.~\cite{JLab-Kstar,elsa-kstar}, the coupling $a$ has a value of about 3, but with quite significant uncertainties. As $G_v$ and $a$ appear only through the product $G_v a$, we find that $G_v a=38$ is a reasonable value for the $K^*$ exchange. Note that within the uncertainties of the $K^*N\Sigma$ coupling, this value can be regarded as acceptable. The value of $g_{\kappa K\pi }$ predicted by QCD sum rules is $g_{\kappa K\pi }\simeq 4$, which is compatible with the value extracted from the data \cite{Brito:2004tv}. This implies that the $\kappa qq$ coupling constant is $g_{\kappa qq}\simeq 5$, which also turns out to be reasonable. Parameters in the $s$- and $u$-channel will be determined by fitting the cross section data. So far, there are 63 data points of the differential cross section at seven beam momenta between 514 and 687 MeV/c available~\cite{Manweiler:2008zz}. By fitting this data set, we find $\delta\simeq 1.55$ accounting for flavor symmetry breaking effects, and the resonance parameters are also determined and listed in Tab. \ref{parameter}. From the table, we see that all the resonance parameters roughly agree with the PDG values. The preferred Breit-Wigner mass of the $\Lambda(1405)S_{01}$ is $1420$ MeV, which is about 10 MeV larger than the upper limit of the PDG suggestion~\cite{PDG}. To fit the total cross section, we find that the $\Lambda(1520)D_{03}$ should have a narrower width, $\Gamma\simeq 8$ MeV, which is only half of the PDG value. The fitted mass and width for $\Lambda(1670)S_{01}$ are $M=1697$ MeV and $\Gamma=65$ MeV, respectively, which are also slightly larger than the PDG suggestions. For the $n=2$ shell we take a degenerate mass and width as $M=1850$ MeV and $\Gamma=100$ MeV since in the low energy region contributions from the $n=2$ shell are not significant. 
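A minimal numerical sketch of the energy-dependent width is given below for the $\Lambda(1520)D_{03}$ ($l=2$), using the fitted $\Gamma_R\simeq 8$ MeV; the branching ratios to $\bar{K}N$ and $\pi\Sigma$ and the rounded particle masses are indicative values only. The beam momentum is first converted to the invariant mass $\sqrt{s}$.
\begin{verbatim}
import numpy as np

# All masses/energies in GeV; branching ratios are rough, for illustration only.
m_p, m_K, m_pi, m_Sig = 0.938, 0.494, 0.135, 1.193
alpha = 0.4                          # harmonic oscillator parameter used in the text
M_R, Gamma_R, l = 1.520, 0.008, 2    # Lambda(1520) mass, fitted width, D-wave
channels = [(m_K, m_p, 0.45), (m_pi, m_Sig, 0.42)]   # (meson, baryon, x_i)

def p_cm(W, m1, m2):
    """Two-body momentum at total energy W."""
    return np.sqrt((W**2 - (m1 + m2)**2) * (W**2 - (m1 - m2)**2)) / (2.0 * W)

def D(q):                            # fission barrier function
    return np.exp(-q**2 / (3.0 * alpha**2))

def width(W):
    G = 0.0
    for m_i, M_b, x_i in channels:
        if W > m_i + M_b:
            q, qR = p_cm(W, m_i, M_b), p_cm(M_R, m_i, M_b)
            G += x_i * (q / qR)**(2*l + 1) * D(q) / D(qR)
    return Gamma_R * (W / M_R) * G

P_K = 0.436                          # lab beam momentum in GeV/c
W = np.sqrt(m_p**2 + m_K**2 + 2.0*m_p*np.sqrt(P_K**2 + m_K**2))
print("W = %.3f GeV, Gamma(W) = %.1f MeV" % (W, 1e3 * width(W)))
\end{verbatim}
The same conversion reproduces, for example, $W\simeq 1536$ MeV at $P_K=436$ MeV/c, the lowest momentum of the data set discussed below.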
\begin{table}[ht] \caption{Breit-Wigner masses $M_R$ (in MeV) and widths $\Gamma_R$ (in MeV) for the resonances in the $s$-channel. States in the $n=2$ shell are treated as degenerate. } \label{parameter} \begin{tabular}{|c|c|c||c|c|}\hline\hline resonance &\ \ $M_R$ \ \ & \ \ $\Gamma_R$ \ \ &\ \ $M_R$ (PDG)\ \ & \ \ $\Gamma_R$ (PDG)\ \ \\ \hline $S_{01}(1405)$& 1420 &48 & $1406\pm 4 $ &$50\pm 2$ \\ $S_{01}(1670)$& 1697 &65 & $1670\pm 10$ & $25\sim50$ \\ $D_{03}(1520)$& 1520 &8 & $1520\pm 1$ &$16\pm 1$ \\ $D_{03}(1690)$& 1685 &63 & $1690\pm 5$ &$60\pm 10$ \\ n=2 & 1850 &100 & & \\ \hline \end{tabular} \end{table} In the $u$-channel, the intermediate states are the nucleon and its resonances. We find that contributions from the $n\geq 1$ shells are negligibly small, and are insensitive to the degenerate masses and widths for these shells. In this work, we take $M_1=1650$ MeV ($M_2=1750$ MeV), $\Gamma_1=230$ MeV ($\Gamma_2=300$ MeV) for the degenerate mass and width of the $n=1$ ($n=2$) shell nucleon resonances, respectively. The last parameter we consider is the relative strength $g_{S_{01}(1405)}/g_{S_{01}(1670)}$. The data favor a much larger value for $g_{S_{01}(1405)}$ relative to $g_{S_{01}(1670)}$. In other words, a much stronger $S$-wave contribution is needed to explain the experimental data. We thus empirically adjust the relative strength between $S_{01}(1405)$ and $S_{01}(1670)$ by a mixing angle (see Sec.\ref{cm}). This could be evidence that the single quark interaction picture fails in the description of the dominant $S$-wave amplitude. \subsection{Configuration mixing}\label{cm} \begin{center} \begin{figure}[ht] \centering \epsfxsize=10 cm \epsfbox{fig-2.eps} \caption{(Color online) Comparison of the differential cross sections with (solid curves) and without (dashed curves) configuration mixing for the $\Lambda(1405)$ and $\Lambda(1670)$, respectively. }\label{cmf} \end{figure} \end{center} In the calculations, we find that the relative strength $g_{S_{01}(1405)}/g_{S_{01}(1670)}$ is crucial for reproducing the angular distributions in the differential cross sections. With no configuration mixing, i.e. $g_{S_{01}(1405)}/g_{S_{01}(1670)}=-3$, the data cannot be well explained, as shown by the dashed curves in Fig. \ref{cmf}. Since configuration mixing will change this value, we determine it by fitting the data. When we take $g_{S_{01}(1405)}/g_{S_{01}(1670)}\simeq -9$, the data can be reasonably reproduced (see the solid curves in Fig. \ref{cmf}), which indicates that configuration mixing between $S_{01}(1405)$ and $S_{01}(1670)$ is needed. If we take $g_{D_{03}(1520)}/g_{D_{03}(1690)}$ as a free parameter, the fitted value does not change appreciably compared with the value without configuration mixing. Thus, in the calculations, we do not consider configuration mixing between $D_{03}(1520)$ and $D_{03}(1690)$. We empirically introduce a mixing angle between $[\textbf{70},^2\textbf{1}]$ and $[\textbf{70},^2\textbf{8}]$ within the physical states $S_{01}(1405)$ and $S_{01}(1670)$, i.e. \begin{eqnarray} |S_{01}(1405)\rangle=\cos(\theta)|\textbf{70},^2\textbf{1} \rangle-\sin(\theta)|\textbf{70},^2\textbf{8} \rangle,\\ |S_{01}(1670)\rangle=\sin(\theta)|\textbf{70},^2\textbf{1} \rangle+\cos(\theta)|\textbf{70},^2\textbf{8} \rangle. 
\end{eqnarray} Inserting these wave functions into Eq.(\ref{abcd}), we have \begin{eqnarray}\label{mix-coup} \frac{g_{S_{01}(1405)}}{ g_{S_{01}(1670)}}=\frac{[3\cos(\theta)-\sin(\theta)][\cos(\theta)+\sin(\theta)]} {[3\sin(\theta)+\cos(\theta)][\sin(\theta)-\cos(\theta)]}\label{rt}, \end{eqnarray} which is a function of the mixing angle $\theta$. In order to study the relation between the relative coupling strength $g_{S_{01}(1405)}/g_{S_{01}(1670)}$ and mixing angle $\theta$, we define a function of $\theta$ as \begin{eqnarray} f(\theta)=[3\cos(\theta)-\sin(\theta)][\cos(\theta)+\sin(\theta)] -\frac{g_{S_{01}(1405)}}{ g_{S_{01}(1670)}}[3\sin(\theta)+\cos(\theta)][\sin(\theta)-\cos(\theta)]. \end{eqnarray} For a given ratio $g_{S_{01}(1405)}/g_{S_{01}(1670)}$, the mixing angle $\theta$ can be determined at $f(\theta)=0$. One can easily check that the ratio $g_{S_{01}(1405)}/g_{S_{01}(1670)}=-3$ leads to $\theta=0^\circ$, i.e. no configuration mixing between $[\textbf{70},^2\textbf{1}]$ and $[\textbf{70},^2\textbf{8}]$. With the fitted value $g_{S_{01}(1405)}/ g_{S_{01}(1670)}=-9$, the $f(\theta)$ as a function of $\theta$ is shown in Fig. \ref{figq}. The mixing angle can then be extracted at $f(\theta)=0$. From the figure, we find that two mixing angles, $\theta\simeq41^\circ$ and $165^\circ$, satisfy the condition $f(\theta)=0$ with $g_{S_{01}(1405)}/ g_{S_{01}(1670)}=-9$. With $\theta=41^\circ$, the admixtures of flavor singlet $[\textbf{70},^2\textbf{1}]$ and flavor octet $[\textbf{70},^2\textbf{8}]$ in the $\Lambda(1405)$ amount to $57\%$ and $43\%$, respectively. With $\theta=165^\circ$, the $\Lambda(1405)$ is dominantly $[\textbf{70},^2\textbf{1}]$ with a wave function density of $\sim 93\%$, while admixture of $[\textbf{70},^2\textbf{1}]$ in $\Lambda(1670)$ is only $\sim 7\%$. The recent relativistic quark model study suggests that for the $\Lambda(1405)$ the admixtures of singlet $[\textbf{70},^2\textbf{1}]$ and octet $[\textbf{70},^2\textbf{8}]$ are $\sim 70\%$ and $\sim 30\%$, respectively, and for the $\Lambda(1670)$, the admixture of $[\textbf{70},^2\textbf{8}]$ is $\sim 62\%$ and that of $[\textbf{70},^2\textbf{1}]$ is $\sim 26\%$ \cite{Loring:2001ky}, which is compatible with the results with $\theta=41^\circ$. It is interesting to note that this feature that the $\Lambda(1405)$ and $\Lambda(1670)$ as mixed states dominated by the singlet and octet, respectively, is also obtained by the coupled channel studies based on U$\chi$PT~\cite{Jido:2003cb}. Furthermore, Eq. (\ref{mix-coup}) allows us to investigate the ratios of the couplings to the $\bar{K}N$ and $\pi \Sigma$ channel for the states $S_{01}(1405)$ and $S_{01}(1670)$ with the following relations: \begin{eqnarray} \frac{g_{S_{01}(1405)\bar{K}N}}{ g_{S_{01}(1670)\bar{K}N}}=\frac{\cos(\theta)+\sin(\theta)} {\sin(\theta)-\cos(\theta)},\\ \frac{g_{S_{01}(1405)\pi \Sigma}}{ g_{S_{01}(1670)\pi\Sigma}}=\frac{ 3\cos(\theta)-\sin(\theta) } { 3\sin(\theta)+\cos(\theta) }. \end{eqnarray} If we take the mixing angle $\theta=41^\circ$ we have \begin{eqnarray} \left|\frac{g_{S_{01}(1405)\bar{K}N}}{ g_{S_{01}(1670)\bar{K}N}}\right|\simeq 14,\ \left|\frac{g_{S_{01}(1405)\pi \Sigma}}{ g_{S_{01}(1670)\pi\Sigma}}\right|\simeq 0.6. \end{eqnarray} This solution is in agreement with the UChPT model prediction \cite{Oset:2001cn}, which also prefers a much stronger coupling of the $S_{01}(1405)$ to the $\bar{K} N$ channel than the $S_{01}(1670)$. 
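As a numerical cross-check of these statements (and of the $\theta=165^\circ$ solution discussed next), a minimal sketch that locates the roots of $f(\theta)$ for the fitted ratio $-9$ and evaluates the corresponding singlet content and coupling ratios could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

ratio = -9.0      # fitted g_{S01(1405)} / g_{S01(1670)}

def f(theta):
    c, s = np.cos(theta), np.sin(theta)
    return (3*c - s)*(c + s) - ratio*(3*s + c)*(s - c)

# bracket the two roots of f(theta) = 0 on (0, pi)
roots = [brentq(f, np.radians(10), np.radians(80)),
         brentq(f, np.radians(100), np.radians(175))]

for th in roots:
    c, s = np.cos(th), np.sin(th)
    r_KN  = (c + s) / (s - c)        # g_{1405, KbarN} / g_{1670, KbarN}
    r_piS = (3*c - s) / (3*s + c)    # g_{1405, piSigma} / g_{1670, piSigma}
    print("theta = %5.1f deg : singlet fraction of Lambda(1405) = %.2f, "
          "|r_KN| = %4.1f, |r_piSigma| = %.2f"
          % (np.degrees(th), c**2, abs(r_KN), abs(r_piS)))
\end{verbatim}
The two roots come out close to $41^\circ$ and $165^\circ$, with singlet fractions of roughly $0.57$ and $0.93$, consistent with the numbers quoted above.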
On the other hand, if the mixing angle is taken as $\theta=165^\circ$, it gives \begin{eqnarray} \left|\frac{g_{S_{01}(1405)\bar{K}N}}{ g_{S_{01}(1670)\bar{K}N}}\right|\simeq 0.58,\ \left|\frac{g_{S_{01}(1405)\pi \Sigma}}{ g_{S_{01}(1670)\pi\Sigma}}\right|\simeq 17. \end{eqnarray} In order to determine the mixing angle and the couplings for these two $S_{01}$ states, a coherent study of the photoproduction $\gamma p\to K^+ \Lambda(1405)$ and $\gamma p\to K^+ \Lambda(1670)$ would be needed. \begin{center} \begin{figure}[ht] \centering \epsfxsize=8 cm \epsfbox{fig-3.eps} \caption{ The evolution of the function $f(\theta)$ in terms of the mixing angle $\theta$ is shown. The values of $\theta$ corresponding to $f(\theta)=0$ are the mixing angles for $g_{S_{01}(1405)}/ g_{S_{01}(1670)}=-9$, which are found to be $\theta\simeq 41^\circ$ and $\theta\simeq 165^\circ$.}\label{figq} \end{figure} \end{center} \subsection{Differential cross section}\label{dc} In Fig. \ref{fig-1}, the differential cross sections are shown at different center-of-mass energies (beam momenta) from $W=1536$ MeV ($P_K=436$ MeV/c) to $W=1687$ MeV ($P_K=773$ MeV/c). The experimental data~\cite{Manweiler:2008zz,armen:1970zh,London:1975av,Baxter:1974zs,Berley:1996zh,Mast:1974sx} are also included for comparison. As shown by the solid curves, the overall agreement with the experimental data is rather good. However, we also note that the theoretical results seem to slightly underestimate the differential cross sections at forward angles at $W=1536\sim 1552$ MeV, which is just around the $\Lambda(1520)D_{03}$ production threshold. Since the experimental data possess quite large uncertainties, improved measurements in this energy region are needed to clarify the discrepancies. In the low energy region, i.e. $W=1457\sim 1532$ MeV (or $P_K=200\sim 425$ MeV/c), there are no experimental data for the differential cross sections available. This is the region where the low-lying $\Lambda(1405)S_{01}$ dominates. Therefore, we plot in Fig. \ref{fig-4} the cross sections given by our model together with the exclusive cross sections of single resonance excitations or transitions. We also carry out such a decomposition for the differential cross sections in the region of $W=1569 \sim 1676$ MeV in Fig. \ref{fig-2}. \begin{center} \begin{figure}[ht] \centering \epsfxsize=12 cm \epsfbox{fig-4.eps} \caption{ (Color online) Differential cross sections for $P_K=475\sim 775$ MeV/c (i.e. $W=1536\sim 1687$ MeV). Data are from \cite{Manweiler:2008zz} (open squares), \cite{Baxter:1974zs} (open up-triangles) and \cite{armen:1970zh} (open circles). }\label{fig-1} \end{figure} \end{center} In Fig. \ref{fig-4}, the solid curves are the full calculations of the model. The thin horizontal lines denote the contributions from the $\Lambda(1405)S_{01}$. Interestingly, the $\Lambda(1405)S_{01}$ appears to be predominant and even larger than the full results. This implies that large cancellations exist between the $\Lambda(1405)S_{01}$ amplitude and other transitions. The dotted curves in Fig. \ref{fig-4} are contributions from the $u$-channel transition. The $u$-channel presents an enhancement at forward angles even though the $u$-channel propagator will generally suppress the forward-angle cross sections. The reason for this enhancement is the cancellations that occur within the term ${\bf B}_{in}\cdot {\bf B}_{out}$ at backward angles. 
Meanwhile, the $u$-channel will provide an important destructive interference with the $\Lambda(1405)S_{01}$, and lower the differential cross sections in the forward direction. The dash-dotted curves in Fig.~\ref{fig-4} represent contributions from the $t$-channel $K^*$ exchange, which are also forward-angle enhanced. This contribution decreases with energy and provides an essential interference in the amplitudes. As shown in Fig. \ref{fig-2}(a) by the dash-dot-dotted curves, its interference with the remaining mechanisms enhances the forward-angle cross sections, but suppresses the backward ones. In contrast, the overall effects from the $t$-channel $\kappa$ exchange are rather small. At $W=1522$ MeV (i.e. $P_K \sim 400$ MeV/c), the contributions from the on-shell $D_{03}(1520)$ can be seen clearly through its interference, which significantly changes the shape of the differential cross section. However, at energies away from its mass, the $D$-wave effects die out quickly. \begin{center} \begin{figure}[ht] \centering \epsfxsize=9 cm \epsfbox{fig-5.eps} \caption{(Color online) Differential cross sections at six energies in a range of $P_K=200\sim 425$ MeV/c (i.e. $W=1457\sim 1532$ MeV). The bold solid curves are given by the full model calculations. The thin lines, dashed, dash-dotted and dash-dot-dotted curves stand for the exclusive cross sections for the $S_{01}(1405)$, $u$-channel, $t$-channel $K^*$-exchange, and the $D_{03}(1520)$, respectively. }\label{fig-4} \end{figure} \end{center} \begin{center} \begin{figure}[ht] \centering \epsfxsize=9 cm \epsfbox{fig-6.eps} \caption{ (Color online) Cross sections of exclusive channels or individual resonances are shown at $P_K=514$, 629 and 750 MeV/c, respectively. The bold solid curves are for the full model calculations. In the left panel, i.e. (a1)-(a3), the dashed, dash-dotted, dash-dot-dotted, dotted, and thin solid curves are for the results given by switching off the contributions from the $\Lambda$ pole, $t$-channel $\kappa$-exchange, $t$-channel $K^*$-exchange, $\Lambda(1670)S_{01}$, and $\Lambda(1405)S_{01}$, respectively. In the right panel, i.e. (b1)-(b3), the dashed, dash-dot-dotted and dotted curves correspond to the interferences of the $\Lambda(1690)D_{03}$, $\Lambda(1520)D_{03}$ and $u$-channel with the $S$-wave amplitudes, respectively. The thin solid lines and the dash-dotted lines stand for the exclusive cross sections of the $\Lambda(1405)S_{01}$ and $\Lambda(1670)S_{01}$, respectively.}\label{fig-2} \end{figure} \end{center} A further study of the individual transitions is presented in Fig. \ref{fig-2}, where in the left panel the cross sections are given by removing one of the transition amplitudes from contributing, while in the right panel cross sections are given by single transitions interfering with the $S$-waves, i.e. $\Lambda(1405)S_{01}$ and $\Lambda(1670)S_{01}$. First, in the right panel, the two horizontal lines, thin solid and dash-dotted, are exclusive cross sections for the $\Lambda(1405)S_{01}$ and $\Lambda(1670)S_{01}$, respectively. In the energy region of $W=1569\sim 1676$ MeV, the $\Lambda(1405)S_{01}$ is no longer a dominant amplitude though its contribution is still significant. By adding the $\Lambda(1690)D_{03}$, $\Lambda(1520)D_{03}$, and the $u$-channel to the $S$-waves, their effects are shown by the dashed, dash-dot-dotted, and dotted curves, respectively. 
It is interesting to see the role played by the $u$-channel, whose interference contributes to the creation of the backward enhancement. In the left panel, the thin lines show the effects without the $\Lambda(1405)S_{01}$, which are strongly forward peaked. In other words, this shows how important the $\Lambda(1405)S_{01}$ is in this reaction. The other drastic effects are illustrated by the dash-dot-dotted curves, which are generated by removing the $t$-channel $K^*$ exchange. As discussed earlier, it contributes to the forward enhancement and suppresses the backward cross sections. As shown by the dashed, dotted, and dash-dotted curves, interfering effects from the $\Lambda$ pole, the $\Lambda(1670)S_{01}$, and the $t$-channel $\kappa$ can also be identified. In particular, the $\kappa$ exchange interferes with the other amplitudes in the opposite way to the $K^*$: it suppresses the forward-angle cross sections but enhances the backward ones. It is interesting to compare this study with $\pi^-p\rightarrow \eta n$ \cite{Zhong:2007fx}, where the cross section is also dominated by the $S$-wave near threshold, but the angular distribution is mainly controlled by the $S$- and $D$-wave interferences. In $K^-p\rightarrow \pi^0\Sigma^0$, we find that the interferences between the $S$-wave and the $u$-channel are more important in the energy region $P_K\gtrsim 520$ MeV/c. The $D$-wave interferences are restricted to a relatively narrow energy region due to the narrow widths of the $\Lambda$ states. It should also be recognized that, since only the amplitude ${\cal M}_2^s$ can contribute, the $s$-channel interference from the $\Lambda$ pole is not as significant as that of the $u$-channel. \subsection{Total cross section} The total cross section as a function of the beam momentum is plotted in Fig. \ref{fig-3} and compared with the experimental data~\cite{Baxter:1974zs,Mast:1974sx,armen:1970zh,Berley:1996zh,London:1975av,Manweiler:2008zz}. To see the contributions of exclusive transitions, their cross sections are also plotted. Our theoretical calculations agree well with the experimental data for $P_K<800$ MeV/c. Towards the low-energy limit, the total cross section exhibits a steep enhancement, which is due to the dominant $\Lambda(1405)S_{01}$. The dashed curve shows the exclusive cross section of the $\Lambda(1405)S_{01}$, which is larger than the total cross section of the full calculations. The $u$-channel also turns out to be a major contributor to the cross sections, and is the main background over the whole momentum region. It becomes even larger than the other transitions for $P_K>500$ MeV/c, and its interference with the $S$-wave amplitudes governs the momentum-dependent behavior of the cross section, except for the resonance excitation by the $\Lambda(1520)D_{03}$, which produces a sharp peak in the total cross section. The importance of the $u$-channel contributions is also stressed in the U$\chi$PT calculations \cite{Oller:2006jw,Oller:2005ig}. It is found there that by switching off the $I=1$ resonances, the results change quite significantly near threshold. To reproduce this peak, the $\Lambda(1520)D_{03}$ is required to have a narrow width $\Gamma\simeq 8$ MeV, which is about a factor of 2 smaller than the PDG value. The contributions of the $\Lambda(1670)S_{01}$ are also visible around $P_K=0.8\sim 1$ GeV/c. 
When the beam momentum $P_K\gtrsim 800$ MeV/c, the model predictions start to become worse, which indicates that treating the $n=2$ shell resonances as degenerate is no longer applicable, and a more realistic approach should be introduced. Because of the lack of accurate data in this momentum region, we do not discuss the higher resonances in the $n\geq 2$ shells in this work. The $t$-channel $K^*$ and $\kappa$ exchanges are also shown, and they both decrease with increasing beam momentum. Furthermore, the $K^*$ exchange is much larger than the $\kappa$ exchange. \begin{figure}[ht] \centering \epsfxsize=8 cm \epsfbox{fig-7.eps} \caption{(Color online) Total cross section as a function of the beam momentum $P_K$. The solid curves are the full model calculations. Data are from Refs. \cite{Baxter:1974zs} (open circles), \cite{Mast:1974sx} (up-triangles), \cite{armen:1970zh} (open diamonds), \cite{Berley:1996zh} (left-triangles), \cite{London:1975av} (down-triangles), and \cite{Manweiler:2008zz} (squares). In (A), the exclusive cross sections for the $\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$, $\Lambda(1520)D_{03}$, $t$-channel, and $u$-channel are indicated by different lines. In (B), the dotted and dashed curves correspond to the exclusive cross sections for the $t$-channel $\kappa$ and $K^*$-exchange, respectively. }\label{fig-3} \end{figure} \section{Summary and discussion}\label{sum} In this work we have studied the reaction $K^-p\rightarrow \Sigma^0\pi^0$ at low energies within a chiral quark model. With a limited number of parameters, we can describe the differential and total cross sections in good agreement with the data. In the low-energy region, i.e., $P_K< 800$ MeV/c, the $n=1$ shell resonances $\Lambda(1405)S_{01}$, $\Lambda(1520)D_{03}$ and $\Lambda(1670)S_{01}$ are found to play important roles in the reaction, while the $n\geq 2$ shell resonance contributions are negligibly small. The $\Lambda(1405)S_{01}$ is crucial in this reaction. It is the major contributor to the $S$-wave amplitude in the low-energy region. In particular, in the region of $P_K\lesssim 300$ MeV/c, the $\Lambda(1405)S_{01}$ dominates the amplitudes, and the contributions of the other resonances are nearly invisible in the total cross section. Around $P_K= 400$ MeV/c, the $\Lambda(1520)D_{03}$ is responsible for the strong resonant peak in the total cross section. Around $P_K=800$ MeV/c, the differential cross sections are sensitive to the $\Lambda(1670)S_{01}$. In this energy region the role of the $\Lambda(1690)D_{03}$ is visible, but less important than that of the $\Lambda(1670)S_{01}$. The non-resonant backgrounds, the $u$- and $t$-channels, also play important roles in the reaction. In the $t$-channel, the $K^*$-exchange has larger cross sections than the $\kappa$-exchange. It visibly enhances the cross section at forward angles, and has some destructive interference at backward angles. A small contribution of the $s$-channel $\Lambda$-pole can also be seen, which slightly enhances the cross section. The $u$-channel significantly suppresses the differential cross section at forward angles, and produces the characteristic backward enhancement. The significant contributions of the $u$-channel agree with the results of U$\chi$PT \cite{Oller:2006jw,Oller:2005ig}. 
In the quark model framework, the $u$-channel allows transitions in which the initial- and final-state mesons couple either to the same quark or to different quarks, while the $s$-channel can only proceed via transitions in which the initial- and final-state mesons couple to different quarks. This explains the importance of the $u$-channel contributions. In comparison with the U$\chi$PT, the agreement implies some similarity of the coupling structure at leading order. For instance, the meson-quark couplings in our model can be related to the meson-baryon couplings via current conservation, as embodied in the Goldberger-Treiman relation~\cite{gt-relation}. Our analysis suggests that there exist configuration mixings within the $\Lambda(1405)S_{01}$ and $\Lambda(1670)S_{01}$ as admixtures of the $[\textbf{70},^2\textbf{1},1/2]$ and $[\textbf{70},^2\textbf{8},1/2]$ configurations. The $\Lambda(1405)S_{01}$ is dominated by $[\textbf{70},^2\textbf{1},1/2]$ (93\% or 57\%), and the $\Lambda(1670)S_{01}$ by $[\textbf{70},^2\textbf{8},1/2]$ (93\% or 57\%), which is in agreement with the U$\chi$PT results \cite{Jido:2003cb}. The $\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$ are assigned as the $[\textbf{70},^2\textbf{1},3/2]$ and $[\textbf{70},^2\textbf{8},3/2]$, respectively. This prescription indicates that the $\Lambda(1405)S_{01}$, $\Lambda(1520)D_{03}$ and $\Lambda(1670)S_{01}$ still possess features of the traditional 3-quark states, though they may also have some exotic properties to which the cross-section measurements are not sensitive. Experimental measurements of polarization observables may be more selective for exposing their nature, especially for the $\Lambda(1405)S_{01}$. Nevertheless, more accurate differential cross sections in the low beam momentum region, e.g. $P_K=200\sim 500$ MeV/c, should also be useful. For higher resonances, we expect that more accurate data in the region of $P_K=750\sim 900$ MeV/c will be useful for clarifying their contributions and properties. With such data available, we can then further study the role of the $\Lambda(1670)S_{01}$, $\Lambda(1690)D_{03}$ and the other higher $P$- and $F$-wave resonances in the $n=2$ shell. The J-PARC facility, which has recently started running, will provide great opportunities for the theoretical study of the hyperon spectrum. In comparison with approaches at the hadronic level, we have so far not included coupled-channel dynamics. It would be interesting and extremely useful to develop a coupled-channel calculation in our framework for baryon resonance excitations in meson-nucleon scattering and meson photoproduction. This would be a natural way of restoring the unitarity of the theory, and would provide a microscopic description of the meson-baryon couplings. Furthermore, with the coupled-channel effects included, one should be able to compare the quark model form factors with those extracted from the hadronic models. We plan to report progress on this in the near future. \section*{ Acknowledgements } This work is supported, in part, by the National Natural Science Foundation of China (Grants 10675131 and 10775145), the Chinese Academy of Sciences (KJCX3-SYW-N2), and the U.K. EPSRC (Grant No. GR/S99433/01). The authors would like to thank A. Sibirtsev, T.-S.H. Lee, and B.-S. Zou for very useful discussions on relevant issues.
\section{Introduction} Finsler geometry is a natural generalization of the Riemannian one. Roughly speaking, it is based on the notion of an interval without the quadratic restriction, i.e. on the interval that locally does not look like the one given by the Pythagoras theorem. In fact, this possibility was already discussed by Riemann in his habilitation lecture in 1854. The general study was undertaken only 50 years later by P. Finsler. Since then, thanks to many contributions by various mathematicians, Finsler geometry has developed into a separate branch of mathematics that includes the standard Riemannian geometry as a special case {\cite{Anastasiei,Antonelli,Bao,Chern,Mo}}. At the same time, Finsler geometry has been found to be useful in many applications ranging from biology {\cite{Antonelli1993OnYC,ANTONELLI2005899}} and cosmology {\cite{Huang:2007en,Chang:2008yv,Silva:2016qkj}} to Lorentz violating models \cite{Alan_Kosteleck_2012} and non-linear optics {\cite{Roman,Miron}}, among others. Our interest in Finsler geometry comes from its natural appearance in the bi-metric formulation of the recent models for massive gravity {\cite{deRham:2014zqa}}. Namely, if one considers two different metric tensors on the same manifold, $\alpha_{ij}$ and $\beta_{ij}$, then the natural action for a ``minimally'' coupled point-like particle is given by {\cite{Akrami:2014lja}} \begin{eqnarray}\label{action_massive} S = \int ds_\alpha + \int ds_\beta \ , \end{eqnarray} where $ds_\alpha = \sqrt{\alpha_{ij}\dot{x}^i \dot{x}^j} dt$ and $ds_\beta = \sqrt{\beta_{ij}\dot{x}^i \dot{x}^j} dt$ are the usual intervals for the corresponding metrics (we included possible dimensionless coefficients into the definitions of the metric tensors). It is obvious that unless $\alpha$ is proportional to $\beta$, (\ref{action_massive}) will not lead to geodesics for some Riemannian geometry. Instead, defining \begin{eqnarray}\label{FFF1} \mathrm{F}_\alpha(x,\dot{x}) := \sqrt{\alpha_{ij}\dot{x}^i \dot{x}^j}\ ,\ \mathrm{F}_\beta(x,\dot{x}) := \sqrt{\beta_{ij}\dot{x}^i \dot{x}^j}\ \mathrm{and}\ \mathrm{F}(x,\dot{x}):= \mathrm{F}_\alpha(x,\dot{x}) + \mathrm{F}_\beta(x,\dot{x})\ , \end{eqnarray} we can trivially write (\ref{action_massive}) as \begin{eqnarray}\label{action_Finsler} S = \int \mathrm{F}(x,\dot{x}) dt \ , \end{eqnarray} which is exactly the functional, controlling the geodesic distance in the Finsler geometry defined by the Finsler function $\mathrm{F}(x,\dot{x})$ given in (\ref{FFF1}). Then a very natural question arises: Can one have a purely geometric description of the dynamics of the bi-metric gravity? By this we mean the following point. The Einstein-Hilbert action is the simplest and the most natural action describing the dynamics of Riemannian structure on a manifold. Can we interpret the bi-gravity action \cite{deRham:2014zqa} \begin{eqnarray}\label{bigrav} S_{2-gr} &=& \frac{M_{\alpha}^2}{2} \int \mathrm{d}^4 x\sqrt{|\alpha |} R[\alpha] + \frac{M_{\beta}^2}{2} \int \mathrm{d}^4 x\sqrt{|\beta |} R[\beta] + \nonumber \\ &&+\frac{M_{\alpha}^2 m^2}{4} \int \mathrm{d}^4 x\sqrt{|\alpha |} \sum\limits_{n=0}^4 c_n e_n (\mathds{1} - \sqrt{\alpha^{-1} \beta}) \end{eqnarray} as an action for some type of Finslerian gravity? (In (\ref{bigrav}), $M_{\alpha ,\beta}$ and $m$ are some characteristic scales, $e_n (\mathds{X})$ are the symmetric polynomials of a matrix $\mathds{X}$ and $c_n$ are some coupling constants.) In regard to this question, two comments are in order. 
Firstly, the formulation of Finslerian gravity is far from being settled (see \cite{Vacaru2007,Vacaru2008,Pfeifer2013,Pfeifer2019} for some efforts in this direction). Partly, this is due to the fact that Finsler geometry has much more freedom in constructing invariants, even after one imposes some natural conditions, like metric compatibility. As a consequence, only one (if any) formulation of Finslerian gravity could lead to (\ref{bigrav}). Secondly, what is typically called Finsler geometry corresponds to a generalization of Riemannian geometry. Its Lorentzian counterpart is much less understood (see \cite{Pfeifer2013} for a discussion). This means that, if successful, we should expect to arrive at a Euclidean version of the bi-metric gravity. We do not see this as a problem. A similar situation happens in another approach to gravity, the spectral action approach \cite{Chamseddine:1996zu,Chamseddine:2008zj}. This approach is naturally formulated in the Euclidean setting, but because the final answer is given in terms of geometrical objects, at the end one can treat it as being defined on a space-time with the Lorentzian signature. For example, the spectral action formulation of the so-called Ho\v{r}ava-Lifshitz gravity \cite{Horava:2009uw} was done in \cite{Pinzul:2010ct,Pinzul:2014mva,Lopes:2015bra,Pinzul:2016dwy}, where it was treated as a gravity on some generalized geometry. To address the question of the geometric formulation of the bi-metric gravity, first one has to study the geometry given by the Finsler structure (\ref{FFF1}). This paper is intended as the initial effort in this direction. We study the geometry defined by the multimetric generalization of (\ref{FFF1}). We obtain some general properties of this geometry as well as study in more detail the example of the 2-dimensional multimetric Finsler geometry. The plan of the paper is as follows. In Section \ref{Preliminaries} we give a minimalistic review of Finsler geometry and its structures used in the main part. Section \ref{Multimetric general} introduces the multimetric Finsler structure and studies some of its general properties. Next, in Section \ref{Multimetric 2d}, we specify to the 2-dimensional case. This allows us to get some further results, unavailable at the moment for the general case, e.g. we find a closed expression for a natural measure on the multimetric Finsler space. We conclude with a discussion of what should be done next, as well as of some possible applications. The extensive Appendix provides a more geometric approach to some elliptic integrals used in our study of the measure. \section{Preliminaries}\label{Preliminaries} Here we collect some necessary facts and definitions in Finsler geometry. The discussion will be sketchy and not very rigorous. For a detailed account, see {\cite{Anastasiei,Antonelli,Bao,Chern,Mo}}. For an $n$-dimensional manifold $\mathcal{M}$, Finsler geometry is defined, first, by specifying on the tangent bundle, $T\mathcal{M}$, a Finsler structure and, second, by choosing the non-linear connection. 
More specifically, a Finsler structure is a map, $\mathrm{F}: T\mathcal{M}\rightarrow [0,\infty )$, that is $\mathcal{C}^{\infty}$ on the slit tangent bundle, $T\mathcal{M}\!\setminus\!\{0\}$, satisfying\label{definition_F} i) positive 1-homogeneity: $\mathrm{F}(x,\lambda y)=\lambda\mathrm{F}(x, y)$ for any $\lambda >0$ (here $(x,y)$ are coordinates of some trivialization of $T\mathcal{M}$, $x$'s being the coordinates on $\mathcal{M}$, while $y$'s are the coordinates along the fibers); ii) strong convexity on $T\mathcal{M}\!\setminus\!\{0\}$: the matrix $[\py_i \py_j \mathrm{F}^2]$ is positive definite (here we defined $\py_i := \frac{\partial}{\partial y^i}$). The latter condition allows us to define a metric that depends on both $x$ and $y$, \begin{eqnarray}\label{metric_def} g_{ij}:=\frac{1}{2}\py_i\py_j \mathrm{F}^2 \ . \end{eqnarray} Using the homogeneity of $\mathrm{F}$, it is trivial to see that $\mathrm{F} = \sqrt{g_{ij}y^i y^j}$. Introducing \begin{eqnarray}\label{l_def} l_i := \py_i \mathrm{F}\equiv \frac{g_{ij}y^j}{\mathrm{F}} \ , \end{eqnarray} the metric (\ref{metric_def}) has a convenient ADM-like decomposition \begin{eqnarray}\label{ADM} g_{ij} = l_i l_j + \mathrm{F}\py_i \py_j \mathrm{F} =:l_i l_j + h_{ij} \ , \end{eqnarray} where $\mathrm{rank} [h]=n-1$ with $h_{ij}y^j = 0$. The vectors $l_i$ satisfy $l^i l_i =1$ (where the raising/lowering of the indices is done with the help of the metric (\ref{metric_def}), so $l^i = \frac{y^i}{\mathrm{F}}$) and are related to $h_{ij}$ via \begin{eqnarray}\label{lhrelation} \dot{\partial}_i l_j = \dot{\partial}_i \frac{g_{jk}y^k}{\mathrm{F}} = -\frac{1}{\mathrm{F}}l_i l_j + \frac{1}{\mathrm{F}}g_{ij} \equiv \frac{1}{\mathrm{F}}h_{ij} \ , \end{eqnarray} where we used that $g_{jk}$ is 0-homogeneous with respect to $y$ and $\py_i g_{jk} = \py_k g_{ji}$, i.e. $y^k\py_i g_{jk} = 0$. The choice of a non-linear connection on $T\mathcal{M}$ is done by specifying the horizontal subspace of $T\mathcal{M}$. In this way, while the vertical subspace is naturally spanned by $\{\py_i\}$, the basis for the horizontal one is given by $\{\delta_i := \partial_i - N^j_{\ i} \py_j\}$, where $\partial_i := \frac{\partial}{\partial x^i}$, as usual. To further specify the non-linear connection $N^j_{\ i}$, one requires that it respects the Finsler structure, i.e. \begin{eqnarray}\label{CartanN} \delta_i \mathrm{F} = 0\ \ \mathrm{or}\ \ \partial_i \mathrm{F} = N^j_{\ i} \py_j \mathrm{F}\equiv l_j N^j_{\ i} \ . \end{eqnarray} For this choice, the connection is called the Cartan non-linear connection and it is explicitly given by \begin{eqnarray}\label{Cartan_def} N^i_{\ j} = \Gamma^i_{\ jk}y^k - C^i_{\ jk}\Gamma^k_{\ rs}y^r y^s \ , \end{eqnarray} where $\Gamma^i_{\ jk}$ are the standard (in general, $y$-dependent) Christoffel symbols for the metric (\ref{metric_def}) and \begin{eqnarray}\label{Cartan_tens} C_{ijk} := \frac{1}{2}\py_i g_{jk}\equiv \frac{1}{4}\py_i\py_j\py_k \mathrm{F}^2 \end{eqnarray} is the Cartan tensor. A Finsler geometry will actually be a Riemannian one, i.e. given by $\mathrm{F} = \sqrt{g_{ij}y^i y^j}$ with $g_{ij}$ depending only on $x$, if and only if $C_{ijk}=0$. Riemannian spaces are not the only special cases of the general Finsler geometry. Other interesting classes are Berwald and Landsberg spaces. 
With the help of the horizontal derivative, which is defined for a tensor $T_{i_1\cdots i_k}$ as \begin{eqnarray}\label{hor} T_{i_1\cdots i_k|j}:=\delta_{j}T_{i_1\cdots i_k} - \sum\limits_{s=1}^{s=k} T_{i_1\cdots j_s \cdots i_k}{}^C\Gamma^{j_s}{}_{i_s j} \ , \end{eqnarray} where the Chern connection, ${}^C\Gamma^{k}{}_{ij}$, is given by \begin{eqnarray}\label{Chern_connection} {}^C\Gamma^{k}{}_{ij}=\frac{g^{ks}}{2}\left(\delta_{i}g_{sj}+\delta_{j}g_{si}-\delta_{s}g_{ij}\right)\ , \end{eqnarray} these spaces are defined by the following conditions: \begin{eqnarray}\label{BerLan} C_{ijk|s}=0\ &-& \ \mathrm{Berwald\ space}\ ,\label{Ber} \\ \dot{C}_{ijk}:=C_{ijk|s}y^s=0\ &-& \ \mathrm{Landsberg\ space}\label{Lan}\ . \end{eqnarray} The importance of such spaces is due to their similarities with Riemannian ones: Berwald space preserves the auto-parallel curves, i.e. geodesics \cite{Anastasiei}, while Landsberg one, the holonomy invariance, at least in the indicatrix (i.e., the region where $F=1$)\cite{Antonelli}. Note, that one has the following (proper) inclusions \begin{eqnarray} Riemannian \subset Berwaldian \subset Landsbergian \subset general\ Finsler \ .\nonumber \end{eqnarray} In the 2-dimensional case, $n=2$, the rank of $h$ is 1, so one can write $h_{ij}=m_i m_j$, for some vector $m_i$. Then (\ref{ADM}) takes the form \begin{eqnarray}\label{2dmetric} g_{ij} = l_i l_j + m_i m_j\ , \end{eqnarray} where \begin{eqnarray}\label{lm} l^i m_i = 0 \ ,\ m^i m_i = 1 \ . \end{eqnarray} (As usual, the raising/lowering of the indices is done with the metric (\ref{2dmetric}).) The pair $(l, m)$ is called the Berwald zweinbeins. It is easy to see that \begin{eqnarray}\label{mi} m_i = \epsilon_{ij}l^j\ \ \mathrm{and}\ \ m^i = \epsilon^{ij}l_j \ , \end{eqnarray} where \begin{eqnarray}\label{epsilon} \epsilon_{ij} = \sqrt{{g}} \varepsilon_{ij}\ \ ,\ \ \epsilon^{ij} = \frac{1}{\sqrt{{g}}} \varepsilon^{ij} \ , \end{eqnarray} with $\varepsilon_{ij}$ being the usual ``flat'' Levi-Civita symbol, $\varepsilon_{12}=1$, and ${g}=\det(g_{ij})$. The relation (\ref{lhrelation}) takes the form of the relation between $l$ and $m$: \begin{eqnarray}\label{lmrelation} \py_i l_j = \frac{1}{\mathrm{F}}m_i m_j \ . \end{eqnarray} Any 2-dimensional Finsler geometry is completely characterized by three (pseudo)scalars: $I$, $J$ and $K$. Roughly speaking, while $I$ controls the non-Riemannianity of the geometry, $J$ tells whether it is Landsberg or not. $K$ is just a scalar curvature of a natural covariant derivative. These scalars enter the Cartan equations on $S\mathcal{M}^*$ - the dual projective sphere bundle (for the extensive discussion, see \cite{Antonelli},\cite{Bao}, also see section \ref{Cartan_equations} for some notations and further discussion): \begin{eqnarray}\label{Cartan_eqs} d\omega^1 = - I \omega^1 \wedge \omega^3 + \omega^2 \wedge \omega^3 \ ,\ d\omega^2 = - \omega^1 \wedge \omega^3\ ,\ d\omega^3 = K \omega^1 \wedge \omega^2 - J \omega^1 \wedge \omega^3 \ . \end{eqnarray} \section{Multimetric geometry: general case}\label{Multimetric general} Here we want to introduce and study some properties of the multimetric Finsler geometry, which we define by the following Finsler structure \begin{eqnarray}\label{N_multimetric} \mathrm{F} = \sum\limits_{\mu=1}^{N} \mathrm{F}^{(\mu)} \ , \end{eqnarray} where each $\mathrm{F}^{(\mu)}$ is just a Riemannian Finsler structure, i.e. $\mathrm{F}^{(\mu)}(x,y)=\sqrt{\alpha^{(\mu)}_{ij}(x)y^i y^j}$ with $\alpha^{(\mu)}_{ij}(x)$ being positively definite. 
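For later use, let us spell out the sector-wise quantities that follow directly from the definitions of Section \ref{Preliminaries} applied to each Riemannian ingredient (this is nothing but the Riemannian special case, recorded here for convenience): for every $\mu$, $$ g^{(\mu)}_{ij} = \alpha^{(\mu)}_{ij}(x)\ ,\qquad l^{(\mu)}_i = \frac{\alpha^{(\mu)}_{ij}(x)y^j}{\mathrm{F}^{(\mu)}}\ ,\qquad h^{(\mu)}_{ij} = \alpha^{(\mu)}_{ij}(x) - l^{(\mu)}_i l^{(\mu)}_j\ ,\qquad C^{(\mu)}_{ijk}=0\ , $$ and the corresponding non-linear connection (\ref{Cartan_def}) reduces to the usual one, $N^{(\mu)i}_{\ \ \ \ j} = \Gamma^{(\mu)i}_{\ \ \ \ jk}(x)y^k$, built from the Christoffel symbols of $\alpha^{(\mu)}$. These per-sector objects will appear repeatedly in what follows. 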
At the moment, no restriction on the dimension of the manifold $\mathcal{M}$ is made. In the next section, we will specify to the 2-dimensional case, $n=2$. First of all, we must verify that (\ref{N_multimetric}) really defines a Finsler structure in the sense of the discussion on page \pageref{definition_F}. The only non-trivial check is to see if (\ref{N_multimetric}) satisfies the strong convexity condition. \begin{proposition}\label{PropA} Let $\mathrm{F}^{(1)}$ and $\mathrm{F}^{(2)}$ be arbitrary (not necessarily Riemannian) Finsler structures. Then \begin{eqnarray} \mathrm{F}:=\mathrm{F}^{(1)}+\mathrm{F}^{(2)} \end{eqnarray} is also a Finsler structure. \end{proposition} \begin{proof} Clearly $\mathrm{F}: T\mathcal{M}\rightarrow [0,\infty )$, and it is $\mathcal{C}^\infty$ on the slit tangent bundle and positively 1-homogeneous in $y$ if $\mathrm{F}^{(1,2)}$ are. Using \begin{eqnarray}\label{lh2} l_i = \py_i \mathrm{F} \equiv \sum\limits^2_{\mu=1} l^{(\mu)}_i \ ,\ \ h_{ij} = \mathrm{F}\py_i\py_j\mathrm{F}\equiv \sum\limits^2_{\mu=1} \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}} h^{(\mu)}_{ij} \end{eqnarray} and plugging this into (\ref{ADM}), we have \begin{eqnarray}\label{positive_def} g_{ij} = \left( l^{(1)}_i + l^{(2)}_i\right)\left( l^{(1)}_j + l^{(2)}_j\right) + \sum\limits^2_{\mu=1} \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}} h^{(\mu)}_{ij} \ . \end{eqnarray} Because each $h^{(\mu)}_{ij}$ is positive definite (and non-degenerate) on the orthogonal complement of $y$ (in the sense of $\mathbb{R}^n$) and $\left( l^{(1)}_i + l^{(2)}_i\right)y^i \equiv \mathrm{F}^{(1)}+\mathrm{F}^{(2)} >0$, we have our proof. \end{proof} Clearly, by induction, a sum of any number of Finsler structures is again a Finsler structure. In particular, this guarantees that (\ref{N_multimetric}) is well-defined. Let us trivially rewrite (\ref{N_multimetric}) in a very suggestive form: \begin{eqnarray}\label{N_multimetric1} \mathrm{F} = \left( 2\sum\limits_{\mu<\nu}^{N} \mathrm{F}^{(\mu)}\mathrm{F}^{(\nu)} + \sum\limits_{\mu=1}^{N} \mathrm{F}^{(\mu)2}\right)^{\frac{1}{2}} \ . \end{eqnarray} In the bi-metric case this becomes (with $\mathrm{F}^{(1)}= \sqrt{\alpha_{ij}(x)y^i y^j}\ ,\ \mathrm{F}^{(2)}= \sqrt{\beta_{ij}(x)y^i y^j}$, cf. (\ref{FFF1})) \begin{eqnarray}\label{bimetric} \mathrm{F} = \left( \sqrt{4\alpha_{ij}(x)\beta_{km}(x)y^i y^j y^k y^m} + (\alpha_{ij}(x)+\beta_{ij}(x))y^i y^j\right)^{\frac{1}{2}} \ . \end{eqnarray} This is nothing but a special case of the so-called generalized $4^{th}$-root Finsler structure, which in the general $m^{th}$-root case is given by \cite{Tayebi2011,Tayebi2012} \begin{eqnarray}\label{m_root} \mathrm{F}_m = \left( \left(A_{i_1 ... i_m}(x)y^{i_1}... y^{i_m} \right)^{\frac{2}{m}} + B_{ij}(x)y^i y^j\right)^{\frac{1}{2}} \ . \end{eqnarray} So, we can think of (\ref{N_multimetric}) as a natural, physically motivated, multimetric generalization of the generalized $4^{th}$-root Finsler geometry. The representation (\ref{N_multimetric1}) is useful in answering the following natural questions: 1) What are the conditions on the Finsler structure (\ref{N_multimetric}) (or, equivalently, (\ref{N_multimetric1})) for the geometry to be of a special type, Riemannian or Landsbergian? 2) Why don't we take as a multimetric Finsler structure the following, seemingly equally natural, structure: \begin{eqnarray}\label{N_multimetric2} \tilde{\mathrm{F}}^2 = \sum\limits_{\mu<\nu}^{N} \mathrm{F}^{(\mu)}\mathrm{F}^{(\nu)}\ ? 
\end{eqnarray} To answer the first question, let us note that $\sum\limits_{\mu=1}^{N} \mathrm{F}^{(\mu)2}$ in (\ref{N_multimetric1}) is a homogeneous polynomial in $y$'s, while each $\mathrm{F}^{(\mu)}\mathrm{F}^{(\nu)} \equiv \sqrt{\alpha^{(\mu)}_{ij}(x)\alpha^{(\nu)}_{km}(x)y^i y^j y^k y^m}$ is, in general, an irrational function of $y$'s. To correspond to the Riemannian case, each such function should also be a homogeneous polynomial in $y$'s, which is possible if and only if it is reducible, i.e. $\alpha^{(\mu)}_{ij}(x)\alpha^{(\nu)}_{km}(x)y^i y^j y^k y^m = (c^{(\mu\nu)}_{ij}(x)y^i y^j )^2$ for some $c^{(\mu\nu)}_{ij}$ and for each $\mu , \nu$. By the uniqueness of the factorization of a polynomial, this is possible if and only if $\alpha^{(\mu)}_{ij}(x) \sim\alpha^{(\nu)}_{ij}(x) \sim c^{(\mu\nu)}_{ij}(x)$. So, we have established the following \begin{proposition}\label{PropB} The multimetric Finsler structure (\ref{N_multimetric}) will define a Riemannian geometry if and only if there exist positive functions of $x$ only, $\phi^{(\mu)}(x)$, with $\phi^{(1)}(x) \equiv 1$, such that for all $\mu$ \begin{eqnarray} \alpha^{(\mu)}_{ij}(x) = \phi^{(\mu)}(x)\alpha^{(1)}_{ij}(x)\ . \end{eqnarray} In this case the corresponding Riemannian metric is given by \begin{eqnarray} g_{ij}(x) = \left(\sum\limits_{\mu =1}^N \sqrt{\phi^{(\mu)}(x)}\right)^{2}\alpha^{(1)}_{ij}(x)\ . \end{eqnarray} \end{proposition} To analyze a criterion for the multimetric Finsler geometry to be of the Landsbergian type, the representation (\ref{N_multimetric1}) again proves to be very useful. As we mentioned above, due to this representation, we can think of the multimetric geometry as a generalization of the generalized $4^{th}$-root Finsler structure and one can repeat almost verbatim the analysis of \cite{Tabata} to obtain the following result: Given a Finsler structure, $\mathrm{F}$, as in (\ref{N_multimetric1}), it is Landsbergian if and only if \begin{eqnarray}\label{Lansb1} \left( \frac{1}{\mathrm{F}} \py_r \py_s \py_t \left( \mathrm{F}^{(\mu)} \mathrm{F}^{(\nu)} \right) \right)\!{}_{|j} y^j =0 \ \ \mathrm{for\ all}\ \mu ,\nu \ . \end{eqnarray} Here the horizontal derivative is defined as in (\ref{hor}). Carrying out the differentiations with respect to the $y$'s and using that $\mathrm{F}$ is horizontally constant, i.e. $\delta_i \mathrm{F}=0$, we arrive at \begin{eqnarray} \left(\alpha^{(\mu)}_{i\{r}\alpha^{(\nu)}_{st\}}y^i+\alpha^{(\nu)}_{i\{r}\alpha^{(\mu)}_{st\}}y^i\right)\!{}_{|j} y^j=0 \end{eqnarray} as the Landsbergian condition in terms of the Riemannian metrics (here $\{\ldots\}$ stands for the cyclic permutations). The answer to the second question, namely why we don't use (\ref{N_multimetric2}) as a candidate for the multimetric Finsler structure, is less precise. The point is that this would require more conditions on the Riemannian ingredients, $\alpha^{(\mu)}_{ij}(x)$, to have a well-defined Finsler geometry. For example, in the bi-metric case, (\ref{N_multimetric2}) will correspond to the $4^{th}$-root Finsler geometry introduced in {\cite{Shimada}} and it is known that it does not always lead to a positive-definite metric. The role of the ``deformation'' by $B_{ij}(x)$ in (\ref{m_root}) is exactly to guarantee that the resulting structure is a Finsler one. 
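To illustrate Proposition \ref{PropB}, consider the simplest bi-metric example (the function $\phi$ below is introduced purely for illustration): $N=2$ and $\alpha^{(2)}_{ij}(x) = \phi(x)\,\alpha^{(1)}_{ij}(x)$ with $\phi >0$. Then $$ \mathrm{F} = \big(1+\sqrt{\phi(x)}\big)\sqrt{\alpha^{(1)}_{ij}(x)y^i y^j}\ ,\qquad g_{ij} = \frac{1}{2}\py_i\py_j \mathrm{F}^2 = \big(1+\sqrt{\phi(x)}\big)^2\alpha^{(1)}_{ij}(x)\ , $$ so the metric depends on $x$ only and the Cartan tensor (\ref{Cartan_tens}) vanishes, in agreement with the proposition. 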
Because the Finsler structure (\ref{N_multimetric}) is given in terms of more familiar Riemannian structures, corresponding to $\mathrm{F}^{(\mu)}$ in (\ref{N_multimetric}), it would be very desirable to express all the relevant Finsler-geometric quantities in terms of the Riemannian ones (plus possible ``interaction'' terms). We postpone the detailed analysis for the future study and here only make some preliminary considerations. Trivially repeating the analysis from Proposition \ref{PropA} and using the definitions, we find the following factorized relations \begin{eqnarray}\label{relations} l_i = \sum\limits_{\mu =1}^{N} l^{(\mu)}_i,\ \ l^i = \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}} l^{(\mu)i},\ \ \frac{1}{\mathrm{F}} h_{ij} = \sum\limits_{\mu =1}^{N} \frac{1}{\mathrm{F}^{(\mu)}} h^{(\mu)}_{ij},\ \ {\mathrm{F}} h^i_{\ j} = \sum\limits_{\mu =1}^{N} {\mathrm{F}^{(\mu)}} h^{(\mu)i}_{\ \ \ \ j}\ . \end{eqnarray} Note that in the last relation the index on the left hand side is raised by $g^{ij}$, while on the right hand side $\alpha^{(\mu)ij}$ is used. Then for the full metric we immediately have\footnote{Of course, this also could be directly obtained using $g_{ij} = \py_i (\mathrm{F}l_j)$, which is easily verified.\label{footnote1}} \begin{eqnarray}\label{metric_relations} g_{ij}=\sum\limits_{\mu,\nu =1}^{N} l^{(\mu)}_i l^{(\nu)}_j + \sum\limits_{\mu =1}^{N} \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}} h^{(\mu)}_{ij} \equiv \sum\limits_{\mu =1}^{N} \alpha^{(\mu)}_{ij} + \sum\limits_{\mu \ne \nu}^{N} \left(l^{(\mu)}_i l^{(\nu)}_j +\frac{\mathrm{F}^{(\nu)}}{\mathrm{F}^{(\mu)}} h^{(\mu)}_{ij}\right) \ . \end{eqnarray} The latter form in (\ref{metric_relations}) allows us to interpret the multimetric case as the ``interaction'' between different Riemannian structures, $\alpha^{(\mu)}_{ij}$, put on the same manifold $\mathcal{M}$: while the first term describes a ``non-interacting'' direct sum of the Riemannian structures, the second term is exactly the ``interaction''. More rigorously, if we calculate the Cartan tensor (\ref{Cartan_tens}) corresponding to the multimetric geometry, we see that only the second term leads to a non-trivial contribution. Because, as it was discussed above, this tensor is, in a sense, a measure of the non-Riemannianity of geometry, we can treat the second term in (\ref{metric_relations}) as interaction. The expression for the nonlinear connection (\ref{Cartan_def}) is easily obtained with the help of the defining relation (\ref{CartanN}). From (\ref{CartanN}) we have \begin{eqnarray}\label{rel_nonlin1} l_j N^j_{\ i} = \sum\limits_{\mu=1}^{N} l^{(\mu)}_j N^{(\mu)j}_{\ \ \ \,\ i} \ , \end{eqnarray} which immediately gives (see the footnote \ref{footnote1}) \begin{eqnarray}\label{rel_nonlin2} g_{kj} N^j_{\ i} + \mathrm{F} l_j \py_k N^j_{\ i} = \sum\limits_{\mu=1}^{N} \left( \py_k (\mathrm{F} l^{(\mu)}_j ) N^{(\mu)j}_{\ \ \ \,\ i} + \mathrm{F} l^{(\mu)}_j \py_k N^{(\mu)j}_{\ \ \ \,\ i}\right) \ . \end{eqnarray} This could be written more conveniently in terms of the so-called spray coefficients $G^i = N^i_{\ j} y^j$ or, inverting, $N^{i}_{\ j} = \frac{1}{2}\dot{\partial}_j G^{i}$. This is equivalent to the following identity \begin{eqnarray}\label{ydN} y^j \dot{\partial}_k N^{i}_{\ j} = N^{i}_{\ k} \ . \end{eqnarray} (Proof: Just apply the Euler's theorem for homogeneous functions to $N^{i}_{\ j} = \frac{1}{2}\dot{\partial}_j G^{i}$, where $G^{i}$ is 2-homogeneous (with respect to $y$).) 
Using (\ref{ydN}) in (\ref{rel_nonlin2}) and applying (\ref{rel_nonlin1}) one more time, we get \begin{eqnarray}\label{rel_nonlin3} g_{kj} G^j = \sum\limits_{\mu=1}^{N} \py_k (\mathrm{F} l^{(\mu)}_j ) G^{(\mu)j} \ \ \mathrm{or} \ \ G^i = \sum\limits_{\mu=1}^{N} g^{ik}\py_k (\mathrm{F} l^{(\mu)}_j ) G^{(\mu)j} \ . \end{eqnarray} Using (\ref{metric_relations}) (and the footnote \ref{footnote1}) we can split this into diagonal, ``free'', and the off-diagonal, ``interacting'', parts \begin{eqnarray}\label{rel_nonlin4} G_i = \sum\limits_{\mu=1}^{N} G^{(\mu)}_{\ \ i} + \sum\limits_{\mu \ne \nu}^{N} \left(l^{(\nu)}_i l^{(\mu)}_j +\frac{\mathrm{F}^{(\nu)}}{\mathrm{F}^{(\mu)}} h^{(\mu)}_{ij}\right) G^{(\mu)j} \ . \end{eqnarray} Now, (\ref{rel_nonlin3}) or (\ref{rel_nonlin4}) can be used to calculate the relation between the non-linear connections as well as the other important geometrical quantities (various curvatures, torsion, etc.). For example, a straightforward calculation leads to \begin{eqnarray}\label{Nonlinear_relation} N^{i}_{\ k} = \sum\limits_{\mu=1}^{N} \left[G^{(\mu)j} \left( -C_k^{\ ir} + \frac{1}{2}g^{ir}\py_k\right)+ g^{ir}N^{(\mu)j}_{\ \ \ \ k} \right]\py_r (\mathrm{F} l^{(\mu)}_j ) \ , \end{eqnarray} where we used $N^{i}_{\ j} = \frac{1}{2}\dot{\partial}_j G^{i}$ (see (\ref{ydN}) and the discussion above it) as well as the definition of the Cartan tensor (\ref{Cartan_tens}). Because at the moment we do not have good control over the structure of such objects in the general case, this is the subject of ongoing research and the results will be reported elsewhere. In the next section, we will have more to say about this for the specific 2-dimensional case. \section{Multimetric geometry: 2d example}\label{Multimetric 2d} In this section, we would like to specify to the 2-dimensional case. The motivation for doing this is two-fold: firstly, it will serve as a nice (and more detailed) demonstration of some general results obtained in the previous section; secondly, for this case we will be able to explicitly study a very important object, a measure. \subsection{Cartan equations}\label{Cartan_equations} Because the main object defining Finsler geometry, the Finsler structure $\mathrm{F}$, is 1-homogeneous, it is often natural to work with objects defined not on the tangent bundle, but rather on the projective sphere bundle $S\mathcal{M}$. In terms of the local coordinates, $S\mathcal{M}$ is defined by identifying a positive ray, $\{(x,\lambda y),\ \lambda>0\}$, with a single point of the projective sphere bundle. Then any object that is 0-homogeneous will be well defined on $S\mathcal{M}$. Applying this to the case of two dimensions proves to be particularly useful. The advantage of being in two dimensions is, as was briefly discussed at the end of Section \ref{Preliminaries}, that the rank of $h_{ij}$ now equals 1 and this guarantees that the metric takes the form (\ref{2dmetric}) for some vector $m$. Then the basis on $S\mathcal{M}$ is given by \begin{eqnarray}\label{vectors} e_1 = m^i \delta_i\ ,\ e_2 = l^i \delta_i\ ,\ e_3 = \mathrm{F}m^i \dot{\partial}_i\ , \end{eqnarray} where $\delta_i = \partial_i - N^j_{\ i}\dot{\partial}_j$, as defined earlier, and $l^i$ and $m^i$ are the Berwald zweinbeins defined in (\ref{2dmetric}-\ref{mi}). Note that the $e_i$'s are indeed 0-homogeneous (this is the reason for the appearance of the factor of $\mathrm{F}$ in $e_3$). 
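For orientation, it may be useful to keep in mind the simplest illustrative case of a single flat metric, $\mathrm{F}=\sqrt{(y^1)^2+(y^2)^2}$, for which $g_{ij}=\delta_{ij}$ and $N^i_{\ j}=0$. Writing $y^i = \mathrm{F}(\cos\varphi ,\sin\varphi )$, one has $$ l_i = (\cos\varphi ,\sin\varphi )\ ,\qquad m_i = (\sin\varphi ,-\cos\varphi )\ , $$ which indeed satisfy (\ref{lm}) and (\ref{lmrelation}), while the basis (\ref{vectors}) reduces to $e_1 = m^i\partial_i$, $e_2 = l^i\partial_i$, $e_3 = \mathrm{F}m^i\dot{\partial}_i$, all of which are manifestly 0-homogeneous in $y$. 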
In the same way, we introduce the dual projective sphere bundle, $S\mathcal{M}^*$, with the dual basis of 1-forms \begin{eqnarray}\label{1forms} \omega^1 = m_i dx^i\ ,\ \omega^2 = l_i dx^i\ ,\ \omega^3 = \frac{1}{\mathrm{F}}m_i \delta y^i\ , \end{eqnarray} where $\delta y^i = dy^i + N^i_{\ j}dx^j$ and, again, the factor of $\frac{1}{\mathrm{F}}$ is introduced to guarantee that $\omega^3$ is well defined on $S\mathcal{M}^*$. These 1-forms satisfy the usual Cartan equations on $S\mathcal{M}^*$ (\ref{Cartan_eqs}) and can be used as the definitions of the scalars $I$, $J$ and $K$, briefly discussed at the end of the section \ref{Preliminaries}. For the convenience we will write these equations one more time: \begin{eqnarray}\label{Cartan_eqs_1} d\omega^1 = - I \omega^1 \wedge \omega^3 + \omega^2 \wedge \omega^3 \ ,\ d\omega^2 = - \omega^1 \wedge \omega^3\ ,\ d\omega^3 = K \omega^1 \wedge \omega^2 - J \omega^1 \wedge \omega^3 \ . \end{eqnarray} We start our analysis of the Cartan equations (\ref{Cartan_eqs_1}) for the 2-dimensional multimetric geometry (\ref{N_multimetric}) with the discussion of the invariant \textit{I}. In doing so, we will obtain some other results that will be useful in the following sections, in particular, in the study of the measure. While the relations between $l-$ and $l^{(\mu)}-$zweinbeins are the same as in the general case (\ref{relations}) \begin{eqnarray} l_i = \dot{\partial}_i \mathrm{F} = \dot{\partial}_i \sum\limits_{\mu=1}^{N} \mathrm{F}^{(\mu)}\equiv \sum\limits_{\mu=1}^{N} l^{(\mu)}_i \ ,\label{l_i}\\ l^i = \frac{y^i}{\mathrm{F}} = \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}}\frac{y^i}{\mathrm{F}^{(\mu)}} \equiv \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}} l^{(\mu)i} \label{l^i}\ , \end{eqnarray} for the $m$-zweinbein, we easily get, cf. (\ref{mi}) and (\ref{epsilon}): \begin{eqnarray} m_i &=& \sqrt{{g}} \varepsilon_{ij}l^j = \frac{\sqrt{{g}}}{\sqrt{\alpha^{({\mu})}}} \epsilon^{({\mu})}_{ij}\frac{\mathrm{F}^{(\mu)}}{\mathrm{F}} l^{({\mu})j} \equiv \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}} \sqrt\frac{{g}}{\alpha^{({\mu})}} m^{({\mu})}_i \ ,\label{m_i}\\ m^i &=& \frac{1}{\sqrt{{g}}} \varepsilon^{ij}l_j = \sum\limits_{{\mu}=1}^{N}\sqrt\frac{\alpha^{({\mu})}}{{g}} \epsilon^{({\mu})ij} l^{({\mu})}_j \equiv \sum\limits_{{\mu}=1}^{N}\sqrt\frac{\alpha^{({\mu})}}{{g}} m^{({\mu})i} \label{m^i}\ , \end{eqnarray} where $\epsilon^{({\mu})}_{ij}:=\sqrt{\alpha^{({\mu})}}\varepsilon_{ij}$ and analogously for $\epsilon^{({\mu})ij}$ and $\alpha^{({\mu})}:=\det(\alpha^{({\mu})}_{ij})$. Using (\ref{l_i}) and (\ref{m_i}) in (\ref{lmrelation}) we get \begin{eqnarray} m_i m_j = \mathrm{F}\dot{\partial}_i l_j = \mathrm{F}\dot{\partial}_i \sum\limits_{{\mu}=1}^{N} l^{({\mu})}_j =\sum\limits_{{\mu}=1}^{N} \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}} m^{({\mu})}_i m^{({\mu})}_j = m_i m_j\sum\limits_{{\mu}=1}^{N}\left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{({\mu})}}{{g}}\ . \end{eqnarray} This leads to a useful relation between the determinants of the metrics that is valid only in 2d (at least, we were not able to find its analogue in the general case) \begin{eqnarray}\label{metric_relation} \frac{{g}}{\mathrm{F}^3} = \sum\limits_{{\mu}=1}^{N}\frac{\alpha^{({\mu})}}{\mathrm{F}^{{(\mu)}3}}\ . \end{eqnarray} There are various equivalent expressions $I$-invariant {\cite{Bao,Antonelli}}. We will use the following one (which, of course, can be immediately derived from the Cartan equations (\ref{Cartan_eqs_1})) \begin{eqnarray}\label{Idef} I = \frac{\mathrm{F}}{2{g}}m^i \dot{\partial}_i {g} \ . 
\end{eqnarray} From here it is not hard to see that using the definition of the Cartan tensor (\ref{Cartan_tens}), we have in 2-dimensional case \begin{eqnarray}\label{Cartan_tens_2d} \mathrm{F}C_{ijk} = I m_i m_j m_k \ , \end{eqnarray} i.e. we have a genuinely non-Riemannian geometry if and only if $I\ne 0$, as was commented above. Using (\ref{metric_relation}), (\ref{m^i}) and recalling that $m^i l_i =0$ we immediately obtain \begin{eqnarray}\label{I} I = -\frac{3}{2}\frac{\mathrm{F}^4}{{g}}\sum\limits_{{\mu}, {\nu}}^{N} \sqrt{\frac{\alpha^{({\mu})}}{{g}}}\frac{\alpha^{({\nu})}}{\mathrm{F}^{({\nu})4}} m^{({\mu})i}l^{(\nu)}_i \ . \end{eqnarray} We see that the ``non-Riemannianity'' is controlled by the ``cross-terms'' $m^{({\mu})i}l^{(\nu)}_i$. With the help of (\ref{l_i}) and (\ref{m^i}) these terms are easily expressed in terms of the corresponding metrics \begin{eqnarray}\label{Aml} m^{({\mu})i}l^{({\nu})}_i = \frac{1}{\sqrt{\alpha^{({\mu})}}}\varepsilon^{ij}\frac{\alpha^{({\mu})}_{jr}y^r}{\mathrm{F}^{(\mu)}} \frac{\alpha^{({\nu})}_{in}y^n}{\mathrm{F}^{(\nu)}} =: -\sqrt{\frac{{g}}{\alpha^{({\mu})}}}\mathcal{A}^{({\mu}{\nu})} \ , \end{eqnarray} where we introduced $\mathcal{A}^{({\mu}{\nu})} = \frac{\epsilon^{ji}\alpha^{({\mu})}_{jr}\alpha^{({\nu})}_{in}y^r y^n}{\mathrm{F}^{(\mu)} \mathrm{F}^{(\nu)}}$. With this notation, (\ref{I}) takes a very compact form \begin{eqnarray}\label{I1} I = \frac{3}{2}\frac{\mathrm{F}^4}{{g}}\sum\limits_{{\mu},{\nu}}^{N} \frac{\alpha^{({\nu})}}{\mathrm{F}^{{(\nu)}4}} \mathcal{A}^{({\mu}{\nu})}\ . \end{eqnarray} $\mathcal{A}^{({\mu}{\nu})}$ satisfies a very nice relation, which will be useful later: \begin{eqnarray}\label{A_rel} \frac{1}{\mathrm{F}^{(\nu)}} \sum\limits_{{\mu}=1}^N \mathcal{A}^{({\mu}{\nu})} m_j \stackrel{(\ref{m^i})}{=} -\frac{1}{\mathrm{F}^{(\nu)}} l^{(\nu)}_i m^i m_j \stackrel{(\ref{lmrelation})}{=} -\frac{\mathrm{F}}{\mathrm{F}^{(\nu)}} \py_j \left( \frac{\mathrm{F}^{(\nu)}}{\mathrm{F}}l^{(\nu)i}\right)l^{(\nu)}_i \stackrel{(\ref{lm})}{\equiv} \py_j \ln\frac{\mathrm{F}}{\mathrm{F}^{(\nu)}} \ . \end{eqnarray} It is trivial to see that $\mathcal{A}^{({\mu}{\nu})} = 0$ if and only if $\alpha^{({\mu})}_{ij} = \phi^{(\mu\nu)} \alpha^{({\nu})}_{ij}$ for some $\phi^{(\mu\nu)}(x)$, in full agreement with the general result found in Proposition \ref{PropB}. So, as we commented above, $\mathcal{A}^{({\mu}{\nu})}$ measures ``non-Riemannianity'' in $({\mu}{\nu})$ sector. For the future analysis of the Cartan equations (\ref{Cartan_eqs_1}), it will be useful to specify the general analysis of the spray coefficient (\ref{rel_nonlin3}) and the non-linear connection (\ref{Nonlinear_relation}) to the 2-dimensional case. With the help of (\ref{l_i}-\ref{m^i}) this could be re-written as \begin{eqnarray}\label{rel_nonlin_2d} G^i = \sum\limits_{\mu=1}^{N} \left[ l^i l^{(\mu)}_j + \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} m^i m_j \right] G^{(\mu)j} = : \sum\limits_{\mu=1}^{N} \delta^{(\mu) i}_{\ \ \ \ j} G^{(\mu) j}\ , \end{eqnarray} where we defined $\delta^{(\mu) i}_{\ \ \ \ j}$ satisfying the relation \begin{eqnarray}\label{delta} \sum\limits_{\mu=1}^{N} \delta^{(\mu) i}_{\ \ \ \ j} = \delta^{i}_{\ j}\ . 
\end{eqnarray} Then either taking derivative of (\ref{rel_nonlin_2d}) with respect to $y$ or directly specifying (\ref{Nonlinear_relation}) to the 2-dimensional case, we obtain the explicit relation between the Finslerian and Riemannian (in each sector) non-linear connections \begin{eqnarray}\label{NN} N^i_{\ j} &=& \frac{1}{2} \sum\limits_{\mu =1}^N \left(\dot{\partial}_j\delta^{(\mu) i}_{\ \ \ \ k}\right) G^{(\mu) k} + \sum\limits_{\mu=1}^{N} \delta^{(\mu) i}_{\ \ \ \ k} N^{(\mu) k}_{\ \ \ \ j} \equiv \nonumber \\ &\equiv & m^i m_j \sum\limits_{\mu=1}^{N} \left[ \frac{1}{2\mathrm{F}} l^{(\mu)}_k + \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} \left( \frac{3}{2\mathrm{F}^{(\mu)}}\left( \sum\limits_{\nu=1}^{N}\mathcal{A}^{(\nu\mu)} \right)m_k - \frac{I}{\mathrm{F}}m_k - \frac{1}{2\mathrm{F}} l_k \right) \right] G^{(\mu)k} + \nonumber \\ &+& \sum\limits_{\mu=1}^{N} \left[ l^i l^{(\mu)}_k + \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} m^i m_k \right] N^{(\mu)k}_{\ \ \ \ \ j}\ , \end{eqnarray} where we also used (\ref{lmrelation}) and (\ref{Cartan_tens}). We will also need the ``matrix'' elements of the non-linear connection, $N^i_{\ j}$, with respect to the $(l,m)$ basis. Contracting (\ref{NN}) with $l_i$, we, of course, get back (\ref{rel_nonlin1}) \begin{eqnarray}\label{rel_nonlin11} l_i N^i_{\ j} = \sum\limits_{\mu=1}^{N} l^{(\mu)}_i N^{(\mu)i}_{\ \ \ \,\ j} \ . \end{eqnarray} Introducing $\Delta N^{(\mu) i}_{\ \ \ \ j} := N^i_{\ j} - N^{(\mu) i}_{\ \ \ \ j}$, this can be written as \begin{eqnarray}\label{rel_nonlin12} \sum\limits_{\mu=1}^{N} l^{(\mu)}_i \Delta N^{(\mu)i}_{\ \ \ \,\ j} =0 \ . \end{eqnarray} The contraction of (\ref{NN}) with $m_i$ has the less trivial form \begin{eqnarray}\label{mNlm} m_i N^i_{\ j} l^j &=& \sum\limits_{\mu=1}^{N} \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} m_i N^{(\mu)i}_{\ \ \ \,\ j} l^j \ ,\\ m_i N^i_{\ j} m^j &=& \sum\limits_{\mu=1}^{N} \left[ \frac{1}{2\mathrm{F}} l^{(\mu)}_k + \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} \left( \frac{3}{2\mathrm{F}^{(\mu)}}\left( \sum\limits_{\nu=1}^{N}\mathcal{A}^{(\nu\mu)} \right)m_k - \frac{I}{\mathrm{F}}m_k - \frac{1}{2\mathrm{F}} l_k \right) \right] G^{(\mu)k} + \nonumber \\ &+& \sum\limits_{\mu=1}^{N} \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} m_k N^{(\mu)k}_{\ \ \ \ \ j} m^j\ , \end{eqnarray} which could be rewritten with the help of (\ref{metric_relation}) as \begin{eqnarray} &&\sum\limits_{\mu=1}^{N} \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} m_i \Delta N^{(\mu)i}_{\ \ \ \,\ j} l^j = 0\ , \label{mNl1}\\ &&\sum\limits_{\mu=1}^{N} \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} m_k \Delta N^{(\mu)k}_{\ \ \ \ \ j} m^j= \nonumber\\ &&=\sum\limits_{\mu=1}^{N} \left[ \frac{1}{2\mathrm{F}} l^{(\mu)}_k + \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} \left( \frac{3}{2\mathrm{F}^{(\mu)}}\left( \sum\limits_{\nu=1}^{N}\mathcal{A}^{(\nu\mu)} \right)m_k - \frac{I}{\mathrm{F}}m_k - \frac{1}{2\mathrm{F}} l_k \right) \right] N^{(\mu)k}_{\ \ \ \ \ j} y^j \ . 
\label{mNm1} \end{eqnarray} With the help of (\ref{A_rel}) we can establish one more relation (which will prove useful for the analysis of the Cartan equations, see below) for the matrix elements of $N^i_{\ j}$ \begin{eqnarray}\label{AN} \sum\limits_{\mu , \nu}^{N} \frac{\mathrm{F}^3}{\mathrm{F}^{(\mu)4}}\frac{\alpha^{(\mu)}}{g}\mathcal{A}^{(\nu\mu)} m_i \Delta N^{(\mu)i}_{\ \ \ \,\ j} l^j = \sum\limits_{\mu=1}^{N} \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 \frac{\alpha^{(\mu)}}{g} \left( \frac{l_i}{\mathrm{F}} - \frac{l^{(\mu)}_i}{\mathrm{F}^{(\mu)}} \right) \Delta N^{(\mu)i}_{\ \ \ \,\ j} l^j \ . \end{eqnarray} Now, we are ready to study in details the Cartan equations (\ref{Cartan_eqs_1}). In this way, we will re-derive some of the general results as well as get a better control over the 2-dimensional case. We start with the first of the Cartan equations, $d\omega^1 = - I \omega^1 \wedge \omega^3 + \omega^2 \wedge \omega^3$. To do so, we need to establish the relations between the Finsler 1-forms, $\omega^i$ (\ref{1forms}) and their analogues in each Riemannian sector, $\omega^{(\mu)i}$. These can be easily found using the analogous relations between the Berwald zweinbeins (\ref{l_i}-\ref{m^i}), the result being \begin{eqnarray} \omega^1 &=& \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}}\sqrt{\frac{g}{\alpha^{(\mu)}}}\omega^{(\mu)1} = \sum\limits_{\mu=1}^{N}\left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^2\sqrt{\frac{\alpha^{(\mu)}}{g}}\omega^{(\mu)1}\ ,\label{1forms_relation1}\\ \omega^2 &=& \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\omega^{(\mu)2} + \left(\sum\limits_{\nu=1}^{N} \mathcal{A}^{(\nu\mu)}\right)\sqrt{\frac{g}{\alpha^{(\mu)}}}\omega^{(\mu)1} = \sum\limits_{\mu=1}^{N}\omega^{(\mu)2}\ ,\label{1forms_relation2}\\ \omega^3 &=& \left(\frac{\mathrm{F}^{(\mu)}}{\mathrm{F}}\right)^2\sqrt{\frac{g}{\alpha^{(\mu)}}}\omega^{(\mu)3} + \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}^2}\sqrt{\frac{g}{\alpha^{(\mu)}}}m^{(\mu)}_i \Delta N^{(\mu) i}_{\ \ \ \ j} dx^j = \nonumber \\ &=& \sum\limits_{\mu=1}^{N}\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\sqrt{\frac{\alpha^{(\mu)}}{g}}\left[\omega^{(\mu)3} + \frac{1}{\mathrm{F}^{(\mu)}}m^{(\mu)}_i \Delta N^{(\mu) i}_{\ \ \ \ j} dx^j \right] \ ,\label{1forms_relation3} \end{eqnarray} where $dx^i = m^i \omega^1 + l^i \omega^2$. (To get the relation between $\omega^i$ and $\omega^{(\mu)i}$ for a fixed sector $\mu$, i.e. the first equality in each relation, we trivially used the fact that due to the bi-dimensionality, any of the pairs, $(l,m)$ or $(l^{(\mu)},m^{(\mu)})$, can be used as a basis.) 
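Before inverting these relations, let us note a property that will be used implicitly below: relabeling the dummy indices in the definition (\ref{Aml}) shows that $\mathcal{A}^{(\mu\nu)}$ is antisymmetric, $$ \mathcal{A}^{(\mu\nu)} = -\mathcal{A}^{(\nu\mu)}\ ,\qquad \mathcal{A}^{(\mu\mu)} = 0\ , $$ so that $\sum_{\mu ,\nu}\mathcal{A}^{(\nu\mu)}=0$. In particular, this is what guarantees that summing the inverted relation (\ref{1forms_relation2inverse}) below over $\mu$ reproduces $\omega^2 = \sum_\mu \omega^{(\mu)2}$, in accordance with (\ref{1forms_relation2}). 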
These relations can be easily inverted \begin{eqnarray} \omega^{(\mu)1} &=& \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\sqrt{\frac{\alpha^{(\mu)}}{g}}\omega^1\ ,\label{1forms_relation1inverse}\\ \omega^{(\mu)2} &=& \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}}\omega^2 - \left(\sum\limits_{\nu=1}^{N} \mathcal{A}^{(\nu\mu)}\right)\omega^{1}\ ,\label{1forms_relation2inverse}\\ \omega^{(\mu)3} &=& \left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^2\sqrt{\frac{\alpha^{(\mu)}}{g}}\left[\omega^3 - \frac{1}{\mathrm{F}}m_i \Delta N^{(\mu) i}_{\ \ \ \ j} dx^j \right]\ .\label{1forms_relation3inverse} \end{eqnarray} Using (\ref{1forms_relation1}-\ref{1forms_relation3inverse}), after some straightforward algebra, the first Cartan equation takes the form \begin{eqnarray} d\omega^1 = A \omega^1 \wedge \omega^3 + B \omega^2 \wedge \omega^3 + C \omega^1 \wedge \omega^2 \ , \end{eqnarray} where \begin{eqnarray} A&=& -\frac{3}{2}\frac{\mathrm{F}^4}{{g}}\sum\limits_{\mu, \nu}^{N} \frac{\alpha^{(\mu)}}{\mathrm{F}^{(\mu)4}} \mathcal{A}^{(\nu\mu)}\stackrel{(\ref{I1})}{\equiv} -I \ ,\label{1-3} \\ B&=& \sum\limits_{\mu=1}^{N}\left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 {\frac{\alpha^{(\mu)}}{g}}\stackrel{(\ref{metric_relation})}{\equiv} 1 \ , \label{2-3} \\ C&=& -\frac{1}{2}\sum\limits_{\mu=1}^{N} \frac{\mathrm{F}^2}{\mathrm{F}^{(\mu)3}} {\frac{\alpha^{(\mu)}}{g}} l_i\Delta N^{(\mu) i}_{\ \ \ \ j}l^j + \nonumber\\ && +\frac{3}{2}\sum\limits_{\mu, \nu}^{N} \frac{\mathrm{F}^3}{\mathrm{F}^{(\mu)4}} {\frac{\alpha^{(\mu)}}{g}} \mathcal{A}^{(\nu\mu)} m_i\Delta N^{(\mu) i}_{\ \ \ \ j}l^j + \sum\limits_{\mu=1}^{N} \frac{\mathrm{F}^2}{\mathrm{F}^{(\mu)3}} {\frac{\alpha^{(\mu)}}{g}} m_i\Delta N^{(\mu) i}_{\ \ \ \ j}m^j\ . \label{1-2} \end{eqnarray} We see that while $A$ agrees with the result for $I$ obtained above (\ref{I1}), the coefficient $B$ provides another way to derive the determinant relation (\ref{metric_relation}). It is less trivial to see that the last coefficient is actually zero, as it should be in general. To show this, we plug into (\ref{1-2}) the relations for the matrix elements that we found above, (\ref{mNm1}) and (\ref{AN}), and after some straightforward algebra, we see that (\ref{1-2}) is indeed identically zero. Proceeding in the same way as above we can analyze the second Cartan equation, $d\omega^2 = - \omega^1 \wedge \omega^3$. Then we have from (\ref{1forms_relation1}-\ref{1forms_relation3inverse}) \begin{eqnarray}\label{d2} d\omega^2 &=& A \omega^1 \wedge \omega^3 + B \omega^2 \wedge \omega^3 + C \omega^1 \wedge \omega^2 \ , \end{eqnarray} where \begin{eqnarray} A &=& -\sum\limits_{\mu=1}^{N}\left(\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\right)^3 {\frac{\alpha^{(\mu)}}{g}}\stackrel{(\ref{metric_relation})}{\equiv} -1\ ,\label{1-3next} \\ B &\equiv& 0\ ,\\ C &=& \sum\limits_{\mu=1}^{N} \frac{\mathrm{F}^2}{\mathrm{F}^{(\mu)3}} {\frac{\alpha^{(\mu)}}{g}} m_i\Delta N^{(\mu) i}_{\ \ \ \ j}l^j \ . \label{1-2next} \end{eqnarray} As before, (\ref{1-3next}) gives the trivially satisfied expected result (again, due to the determinant relation (\ref{metric_relation})) and (\ref{1-2next}) is nothing but the relation found above (\ref{mNl1}). The third Cartan equation, $d\omega^3 = K \omega^1 \wedge \omega^2 - J \omega^1 \wedge \omega^3$ is the least trivial one. 
Proceeding as before and using the results for the first two Cartan equations, we get (we omit the somewhat tedious but otherwise trivial manipulations) \begin{eqnarray}\label{d3} d\omega^3 &=& A \omega^1 \wedge \omega^3 + B \omega^2 \wedge \omega^3 + C \omega^1 \wedge \omega^2 \ , \end{eqnarray} where \begin{eqnarray} A &=& \sum\limits_{\mu=1}^{N}\frac{\mathrm{F}^{3}}{\mathrm{F}^{(\mu)3}}\frac{\alpha^{(\mu)}}{g}\left[\left( \frac{3}{2} \frac{l^{(\mu)}_i}{\mathrm{F}^{(\mu)}} - \frac{l_i}{\mathrm{F}} \right)\Delta N^{(\mu) i}_{\ \ \ \ j} m^j - m_i m^j m^r \py_r \left(\Delta N^{(\mu) i}_{\ \ \ \ j}\right) \right]\ ,\label{1-3nextnext} \\ B &=& \sum\limits_{\mu=1}^{N}\left[\frac{\mathrm{F}^{3}}{\mathrm{F}^{(\mu)4}}\frac{\alpha^{(\mu)}}{g}\left(-\frac{1}{2} \frac{\mathrm{F}^{(\mu)}}{\mathrm{F}}l_i + \frac{3}{2}\left(\sum\limits_{\nu=1}^{N} \mathcal{A}^{(\nu\mu)}\right) m_i\right)\Delta N^{(\mu) i}_{\ \ \ \ j} l^j + \right. \nonumber\\ && \left. +\frac{\mathrm{F}^2}{\mathrm{F}^{(\mu)3}}{\frac{\alpha^{(\mu)}}{g}}m_i \Delta N^{(\mu) i}_{\ \ \ \ j} m^j \right] , \label{2-3nextnext} \\ C &=& \sum\limits_{\mu=1}^{N}\left[ K^{(\mu)}\frac{\mathrm{F}}{\mathrm{F}^{(\mu)}} {\frac{\alpha^{(\mu)}}{g}} - \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\sqrt{\frac{\alpha^{(\mu)}}{g}}e_2 \left( \frac{\mathrm{F}}{\mathrm{F}^{(\mu)2}}\sqrt{\frac{\alpha^{(\mu)}}{g}}m_i \Delta N^{(\mu) i}_{\ \ \ \ j} m^j \right) -\right. \nonumber\\ && \left. - \frac{\mathrm{F}}{\mathrm{F}^{(\mu)2}}\sqrt{\frac{\alpha^{(\mu)}}{g}}m_i \Delta N^{(\mu) i}_{\ \ \ \ j} l^j e_1 \left( \frac{\mathrm{F}}{\mathrm{F}^{(\mu)}}\sqrt{\frac{\alpha^{(\mu)}}{g}} \right)\right]\ . \label{1-2nextnext} \end{eqnarray} While $B=0$ due to the relation between the non-linear connections (it is actually the same as (\ref{1-2})), we should have $A = -J$ and $C =K$. Let us briefly discuss this result, postponing a more detailed analysis for the future. Taking the exterior derivative of the first Cartan equation and using the rest of them, one can immediately see that the scalar $J$ can be expressed in terms of $I$ as follows \begin{eqnarray}\label{J1} J = e_2(I)\ , \end{eqnarray} where $e_2$ is given in (\ref{vectors}). This result is general and should be valid for any 2-dimensional Finsler geometry. It is indeed possible to verify that the coefficient $A$ given in (\ref{1-3nextnext}) satisfies this, i.e. $A = - e_2(I)$. The calculation is not very illuminating and we omit it. Still, this provides a very strong consistency check. The importance of the scalar $J$ is that in the same way as the scalar $I$ controls the Riemannianity of a space, the scalar $J$ tells us whether it is of a Landsberg type or not. This can be easily seen from the fact that in 2d the Cartan tensor has only one non-trivial component (\ref{Cartan_tens_2d}). Then the Landsbergian condition (\ref{Lan}) takes the form $\dot{C}_{111}\equiv J=0$, see e.g. \cite{Bao}. So, we can ask the same question that we addressed in the general case: what is the condition for our space to be of the Landsberg type? In the 2-dimensional bi-metric case the answer is very concrete: \begin{proposition}\label{Prop3} The 2-dimensional Finsler space defined by the Finsler structure (\ref{bimetric}) is of a Landsberg type, $J=0$, if and only if $I=0$, i.e. if the space is Riemannian. \end{proposition} \begin{proof} The proof is essentially due to Szab$\acute{\mathrm{o}}$'s rigidity theorem \cite{Szabo}, which states that there is no non-trivial Berwald space in 2d. 
Combining this with the fact that in the general n-dimensional case of a generalized $4^{th}$-root Finsler structure, Landsberg space is always of the Berwald type (\!\cite{Tabata}, Theorem 1.2), implies that in two dimensions $J=0$ if and only if $I=0$.\footnote{Though we have not checked it explicitly, we believe that the proof in \cite{Tabata} should be also true in the generalized case, i.e. for the Finsler structure given by (\ref{N_multimetric1}). Then the proposition should be valid for a general 2-dimensional multimetric Finsler geometry.} \end{proof} This result tells us that as soon as a 2d bi-metric Finsler geometry is non-Riemannian, it will be of a general type. E.g., we will not have a non-trivial bi-metric space, for which the generalized Gauss-Bonnet theorem would still hold, because for that one needs $J=0 $\cite{Bao}. Finally, we briefly discuss the last coefficient, $C$, (\ref{1-2nextnext}) in the third Cartan equation. As we mentioned, this coefficient should be equal to some scalar curvature. This is actually a scalar curvature of the Chern connection \cite{Bao}. This can be seen from the explicit evaluation of the relation $K = - \omega^3 [e_1 , e_2]$, which is an immediate consequence of the Cartan equations (\ref{Cartan_eqs_1}). Though one can observe some factorized structure in (\ref{1-2nextnext}), we do not have a complete control over this result, meaning that we cannot yet present it explicitly in the form: factorization of the sectors plus "cross-terms" as in the case of a metric (\ref{metric_relations}) or spray coefficients (\ref{rel_nonlin4}). We are planning to report on this elsewhere. \subsection{Measure} In this subsection, we would like to study a very important geometrical object - a measure on a Finsler manifold. There are two common choices, Holmes-Thompson \cite{Holmes-Thompson} and Busemann-Hausdorff \cite{Shen2001} measures. We will treat in detail the general case for the former one, though at the end, we give the Busemann-Hausdorff measure for the bi-metric case, too. We start with some preliminary technical considerations. Consider a general Finsler function $\mathrm{F}$ (not necessarily 2-dimensional) with the corresponding metric $g_{ij}=\frac{1}{2}\py_i \py_j \mathrm{F}^2$, $i,j=\overline{1,n}$. Let, as before, ${g}=\det(g_{ij})$. We want to analyze the following integral \begin{eqnarray} I_f = \int\limits_{\mathrm{F}\leq 1} f({g}) \omega \ , \end{eqnarray} where $f$ is some arbitrary nice function, $\omega = \frac{1}{n!}\epsilon_{i_1\ldots i_n}dy^{i_1}\wedge\cdots\wedge dy^{i_n}$ and the integration region, $\mathrm{F}\leq 1$, is for each fixed $x$. \begin{proposition}\label{Prop1} The volume integral $I_f$ can be reduced to a surface integral over a unit sphere in $\mathbb{R}^n$ $$ I_f = \int\limits_{\|y\|=1}\frac{f({g})}{\mathrm{F}^n}\eta\ , $$ where $\omega = d\eta$ and $\|\cdot\|$ denotes the usual norm in $\mathbb{R}^n$. \end{proposition} \begin{proof} First we show that $d(f({g})\eta) = f({g})\omega$. (All the differentials are with respect to $y$ variable only.) 
Indeed, this follows immediately from $d{g}={g}g^{ij}(\py_k g_{ij})dy^k$: \begin{eqnarray} df({g})\wedge\eta = f'({g})d{g}\wedge\eta = \frac{1}{n!}f'({g}){g}g^{ij}(\py_k g_{ij})\epsilon_{i_1\ldots i_n}y^{i_1} dy^k \wedge dy^{i_2}\wedge\cdots\wedge dy^{i_n}\nonumber\\ = \frac{1}{n!}f'({g}){g}g^{ij}(y^{i_1}\py_k g_{ij}) \epsilon_{i_1\ldots i_n}\epsilon^{k i_2\ldots i_n}\omega = \frac{1}{n}f'({g}){g}g^{ij}(y^k\py_k g_{ij})\omega \equiv 0\ ,\nonumber \end{eqnarray} where in the last equality we used that $g_{ij}$ is 0-homogeneous with respect to $y$. Using this result, we have $$ d(f({g})\eta)=df({g})\wedge\eta +f({g})\omega = f({g})\omega\ . $$ Now, by Stokes' theorem we have $$ \int\limits_{\mathrm{F}\leq 1} f({g}) \omega = \int\limits_{\mathrm{F}= 1} f({g}) \eta \equiv \int\limits_{\mathrm{F}= 1} \frac{f({g})}{\mathrm{F}^n} \eta\ , $$ where we trivially included the factor of $1/\mathrm{F}^n$, which equals 1 on the indicatrix, $\mathrm{F}=1$. The next step is to show that this new form under the integral is actually closed, $d\left(\frac{f({g})}{\mathrm{F}^n} \eta\right)=0$. Indeed, using the above, we can write: \begin{eqnarray} d\left(\frac{f({g})}{\mathrm{F}^n} \eta\right) =f({g})d\left(\frac{1}{\mathrm{F}^n} \eta\right)=f({g})\left(d\left(\frac{1}{\mathrm{F}^n}\right)\wedge \eta + \frac{1}{\mathrm{F}^n}\omega\right) \nonumber \\ =f({g})\left(-\frac{n}{\mathrm{F}^{n+1}}\py_k \mathrm{F}\, dy^k\wedge \eta + \frac{1}{\mathrm{F}^n}\omega\right) = f({g})\left(-\frac{1}{\mathrm{F}^{n+1}}(y^k\py_k \mathrm{F})\omega + \frac{1}{\mathrm{F}^n}\omega\right)\equiv 0\ , \nonumber \end{eqnarray} where we used exactly the same manipulations as above and the fact that $y^k\py_k \mathrm{F} = \mathrm{F}$ by 1-homogeneity of $\mathrm{F}$. The last step is to note that, as a consequence, integrals of the form $\frac{f({g})}{\mathrm{F}^n} \eta$ over any two closed surfaces around the origin, $y=0$ (each enclosing it once), are equal. Take as the first surface the indicatrix, $\mathrm{F}=1$, and as the second one the unit sphere, $\|y\|=1$. This completes the proof: $$ \int\limits_{\mathrm{F}\leq 1} f({g}) \omega = \int\limits_{\mathrm{F}= 1} \frac{f({g})}{\mathrm{F}^n} \eta = \int\limits_{\|y\|= 1} \frac{f({g})}{\mathrm{F}^n} \eta\ . \nonumber $$ \end{proof} Now, we apply the result of Proposition \ref{Prop1} to determine the Holmes-Thompson (HT) volume form for the case of the 2-dimensional multimetric Finsler space with the Finsler function (\ref{N_multimetric}). The Holmes-Thompson volume form is based on the preferred Riemannian metric on $T\mathcal{M}\!\setminus \!\{0\}$ associated with the Finsler structure - the Sasaki metric \cite{Chern} (we will not need the explicit form of this metric). This volume form is given by \begin{eqnarray}\label{HT} \mu_{HT}(x)= \frac{\int\limits_{\mathrm{F}\leq 1} {g}\omega}{Vol_{\mathbb{R}^n}(\|y\|\leq 1)} \ , \end{eqnarray} where the denominator is just the usual volume of a unit ball, $Vol_{\mathbb{R}^n}(\|y\|\leq 1) = \frac{\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2} + 1\right)}$. Using the result of Proposition \ref{Prop1}, we know that we can reduce the integral in (\ref{HT}) to an integral over a unit circle: \begin{eqnarray} \int\limits_{\mathrm{F}\leq 1} {g}\omega = \int\limits_{\|y\| = 1} \frac{{g}}{\mathrm{F}^2}\eta \ .
\end{eqnarray} Then, for the Holmes-Thompson measure, we have the following result. \begin{proposition}\label{Holmes-Thomson} The Holmes-Thompson measure for the 2-dimensional Finsler space defined by (\ref{N_multimetric}) is given by \begin{eqnarray}\label{HTfinal} \mu_{HT}(x)= \sum\limits_{\mu=1}^{N} \sqrt{\alpha^{(\mu)}} + \frac{2}{\pi}\sum\limits_{\mu\ne\nu}^{N} \sqrt{{\alpha^{(\mu)}}{\lambda^{(\mu\nu)}_+}}\,E\left(\sqrt{1-\frac{\lambda^{(\mu\nu)}_-}{\lambda^{(\mu\nu)}_+}}\right)\ , \end{eqnarray} where $E(k)$ is the complete elliptic integral of the second kind (of modulus $k$) and $\lambda^{(\mu\nu)}_\pm$ are given by \begin{eqnarray} \lambda^{(\mu\nu)}_{\pm} = \frac{e_1(\alpha^{(\mu)-1}\alpha^{(\nu)})\pm \sqrt{(e_1(\alpha^{(\mu)-1}\alpha^{(\nu)}))^2-4e_2(\alpha^{(\mu)-1}\alpha^{(\nu)})}}{2} \nonumber \end{eqnarray} with $e_1(\mathds{X})=\Tr \mathds{X}$ and $e_2(\mathds{X})=\frac{1}{2}\left((\Tr \mathds{X})^2 - \Tr \mathds{X}^2\right)$ being the symmetric polynomials associated with the matrix $\mathds{X}$; in other words, $\lambda^{(\mu\nu)}_\pm$ are the two eigenvalues of $\alpha^{(\mu)-1}\alpha^{(\nu)}$. \end{proposition} \begin{proof} We will use the relation between ${g}$, $\alpha^{(\mu)}$ found above (\ref{metric_relation}) \begin{eqnarray} \frac{{g}}{\mathrm{F}^3} = \sum\limits_{{\mu}=1}^{N}\frac{\alpha^{({\mu})}}{\mathrm{F}^{{(\mu)}3}} \ . \end{eqnarray} From here we immediately have (recall that $\alpha^{(\mu)}$ does not depend on $y$) \begin{eqnarray}\label{HT1} \int\limits_{\|y\| = 1} \frac{{g}}{\mathrm{F}^2}\eta = \sum\limits_{{\mu}=1}^{N} \alpha^{(\mu)}\!\!\! \int\limits_{\|y\| = 1} \frac{\mathrm{F}}{\mathrm{F}^{(\mu)3}}\eta =\sum\limits_{{\mu}=1}^{N} \alpha^{(\mu)}\!\!\! \int\limits_{\|y\| = 1} \frac{1}{\mathrm{F}^{(\mu)2}}\eta + \sum\limits_{\mu\ne\nu}^{N} \alpha^{(\mu)}\!\!\! \int\limits_{\|y\| = 1} \frac{\mathrm{F}^{(\nu)}}{\mathrm{F}^{(\mu)3}}\eta \ . \end{eqnarray} To calculate the second integral in (\ref{HT1}) (the first one is then obtained by setting $\nu = \mu$ in the result), we note that this is an integral over a unit circle. Then, after the change of variables $t=\cot \phi$, where $\phi$ is the polar angle,\footnote{I.e., $\mathrm{F}^{(\mu)2}|_{\|y\|=1}=\alpha^{(\mu)}_{11} \cos(\phi)^2 + 2\alpha^{(\mu)}_{12}\cos(\phi)\sin(\phi)+\alpha^{(\mu)}_{22}\sin(\phi)^2$.} we get \begin{eqnarray}\label{HTint} \int\limits_{\|y\| = 1} \frac{\mathrm{F}^{(\nu)}}{\mathrm{F}^{(\mu)3}} \eta = \int\limits_{-\infty}^{\infty} dt\frac{\sqrt{\alpha^{(\nu)}_{11} t^2 + 2\alpha^{(\nu)}_{12}t+\alpha^{(\nu)}_{22}}}{(\alpha^{(\mu)}_{11} t^2 + 2\alpha^{(\mu)}_{12}t+\alpha^{(\mu)}_{22})^{3/2}} \ . \end{eqnarray} Integrals of this type are treated in Appendix \ref{appendix} with the result being \begin{eqnarray}\label{HTint1} \int\limits_{\|y\| = 1} \frac{\mathrm{F}^{(\nu)}}{\mathrm{F}^{(\mu)3}} \eta = 2\lambda^{(\mu\nu)}_+\sqrt{\frac{\lambda^{(\mu\nu)}_-}{\alpha^{(\nu)}}}\,E\left(\sqrt{1-\frac{\lambda^{(\mu\nu)}_-}{\lambda^{(\mu\nu)}_+}}\right) \equiv 2\sqrt{\frac{\lambda^{(\mu\nu)}_+}{\alpha^{(\mu)}}}\,E\left(\sqrt{1-\frac{\lambda^{(\mu\nu)}_-}{\lambda^{(\mu\nu)}_+}}\right)\ , \end{eqnarray} where $E(k)$ is the complete elliptic integral of the second kind (of the argument $k$) and $\lambda^{(\mu\nu)}_\pm$ are as above (cf. (\ref{lambda2})). For the detailed discussion and all the steps of the calculation, see the Appendix, where we re-derive some known results in a more geometric form.
Setting $\mu = \nu$ in (\ref{HTint1}), we immediately get \begin{eqnarray}\label{FHB} \int\limits_{\|y\| = 1} \frac{1}{\mathrm{F}^{(\mu)2}}\eta = \frac{\pi}{\sqrt{\alpha^{(\mu)}}} \ , \end{eqnarray} where we used that $\lambda^{(\mu\mu)}_\pm = 1$ and $E(0)=\pi /2$. Plugging (\ref{HTint1}) and (\ref{FHB}) into (\ref{HT1}) (and dividing by $Vol_{\mathbb{R}^2}(\|y\|\leq 1)=\pi$) we get the result (\ref{HTfinal}). \end{proof} Note the anticipated factorized structure of (\ref{HTfinal}): while the first term is just a sum of the individual Riemannian measures, the second one describes the ``interaction'' between the geometries. The appearance of the symmetric polynomials of the matrices $\left(\alpha^{(\mu)}\right)^{-1}\alpha^{(\nu)}$ in those ``interaction'' terms is very suggestive, as exactly these are the building blocks of the interaction part of bi-metric gravity \cite{Heisenberg} (also see (\ref{action_massive})). This requires further study. The second choice for the volume form is called the Busemann-Hausdorff measure and is defined as \begin{eqnarray}\label{HB} \mu_{BH}(x)=\frac{Vol_{\mathbb{R}^n}(\|y\|\leq 1)}{Vol_{\mathbb{R}^n}(\mathrm{F}\leq 1)}\ . \end{eqnarray} Because it seems to be much less relevant for possible physical applications and because the analytic result lacks the factorized structure of the HT measure (\ref{HTfinal}), we will briefly consider just the bi-metric case. So, consider a two-dimensional Finsler space with the Finsler function $\mathrm{F}=\mathrm{F}_{\alpha} + \mathrm{F}_{\beta}$, where $\mathrm{F}_{\alpha} = \sqrt{\alpha(x)_{ij}y^i y^j}$ and $\mathrm{F}_{\beta} = \sqrt{\beta(x)_{ij}y^i y^j}$ are the two underlying Riemannian structures. Then we have the following \begin{proposition} For a two-dimensional Finsler space with the Finsler function $\mathrm{F}=\mathrm{F}_{\alpha} + \mathrm{F}_{\beta}$ (notations as above), the Busemann-Hausdorff volume form is given by $$ \mu_{BH}(x)= \frac{1}{\mathcal{A}+\mathcal{B}}\ , $$ where $\mathcal{A}=\frac{1}{2}\frac{\Tr h_+ h_-^{-1}}{\sqrt{\det h_-}}$ with $h_\pm := \alpha \pm \beta$, and $\mathcal{B}$ is $-\frac{2}{\pi}$ times the elliptic-type integral (\ref{B}) computed below. \end{proposition} \begin{proof} First, let us manipulate the expression for $\frac{1}{\mathrm{F}^2}$: \begin{eqnarray} \frac{1}{\mathrm{F}^2} = \frac{1}{\left(\mathrm{F}_{\alpha} + \mathrm{F}_{\beta}\right)^2} = \frac{\left(\mathrm{F}_{\alpha} - \mathrm{F}_{\beta}\right)^2}{\left(\mathrm{F}_{\alpha} + \mathrm{F}_{\beta}\right)^2 \left(\mathrm{F}_{\alpha} - \mathrm{F}_{\beta}\right)^2} = \frac{\mathrm{F}_{\alpha}^2 + \mathrm{F}_{\beta}^2}{\left(\mathrm{F}_{\alpha}^2 - \mathrm{F}_{\beta}^2\right)^2} - 2\frac{\mathrm{F}_{\alpha} \mathrm{F}_{\beta}}{\left(\mathrm{F}_{\alpha}^2 - \mathrm{F}_{\beta}^2\right)^2}\ . \nonumber \end{eqnarray} Now, we define \begin{eqnarray} &&\mathrm{F}_{\alpha}^2 \pm \mathrm{F}_{\beta}^2 = \alpha(x)_{ij}y^i y^j \pm \beta(x)_{ij}y^i y^j = (\alpha(x)_{ij} \pm \beta(x)_{ij})y^i y^j =: h_{\pm}(x)_{ij}y^i y^j \ . \nonumber \end{eqnarray} Using this definition, we can rewrite \begin{eqnarray}\label{FFF} \frac{1}{\mathrm{F}^2} = \frac{\mathrm{F}_+^2}{\mathrm{F}_-^4} - 2\frac{\mathrm{F}_{\alpha} \mathrm{F}_{\beta}}{\mathrm{F}_-^4}\ , \end{eqnarray} where, formally, $\mathrm{F}_{\pm}=\sqrt{h_{\pm}(x)_{ij}y^i y^j}$ are the Finsler functions corresponding to the Riemannian metrics $h_{\pm}(x)$. (Note that $\mathrm{F}_{-}$ always appears to an even power, so there is never a problem with the possible non-positivity of $h_{-}$.)
Next we note that we can write (suppressing the irrelevant variable $x$) \begin{eqnarray} h_{+ij}\frac{\partial}{\partial h_{-ij}} \mathrm{F}_-^2 = h_{+ij}\frac{\partial}{\partial h_{-ij}} \left( h_{-kl}y^k y^l \right) = h_{+ij}y^i y^j \equiv \mathrm{F}_+^2 \ . \nonumber \end{eqnarray} Using this we have for the first term in (\ref{FFF}) \begin{eqnarray} \frac{\mathrm{F}_+^2}{\mathrm{F}_-^4} = - h_{+ij}\frac{\partial}{\partial h_{-ij}} \frac{1}{\mathrm{F}_-^2} \ . \end{eqnarray} The integral of $\frac{1}{\mathrm{F}_-^2}$ over a circle, $\|y\|=1$ immediately gives as in (\ref{FHB}) \begin{eqnarray}\label{Riemann} \int\limits_{\|y\|= 1} \frac{1}{\mathrm{F}_-^2} \eta = \frac{\pi}{\sqrt{\det h_-}}\ . \end{eqnarray} Then we have for the integral of the first term in (\ref{FFF}) \begin{eqnarray}\label{A} \int\limits_{\|y\|= 1} \frac{\mathrm{F}_+^2}{\mathrm{F}_-^4} \eta = - h_{+ij}\frac{\partial}{\partial h_{-ij}} \frac{\pi}{\sqrt{\det h_-}} = \frac{\pi}{2}\frac{\Tr h_+ h_-^{-1}}{\sqrt{\det h_-}} \ , \end{eqnarray} where we used $\delta \sqrt{\det h_-} = \frac{1}{2}\sqrt{\det h_-} \Tr (h_-^{-1}\delta h_-)$. The equation (\ref{A}) gives us the value of $\mathcal{A}$ (after using $Vol_{\mathbb{R}^2}(\|y\|\leq 1)=\pi$). Now we will deal with the second term in (\ref{FFF}). After the same change of variables as in (\ref{HTint}), we get \begin{eqnarray}\label{B} \int\limits_{\|y\|= 1} \frac{\mathrm{F}_{\alpha} \mathrm{F}_{\beta}}{\mathrm{F}_-^4} \eta = 2 \int\limits_{-\infty}^{\infty} \frac{\sqrt{(\alpha_{11}t^2 + 2\alpha_{12}t+\alpha_{22})(\beta_{11}t^2 + 2\beta_{12}t+\beta_{22})}}{(h_{-11}t^2 + 2h_{-12}t+h_{-22})^2}dt \ . \end{eqnarray} Equation (\ref{B}) is an integral related to the elliptic integral of the second kind. This proves the statement about $\mathcal{B}$ and we finally have \begin{eqnarray}\label{HB1} \int\limits_{\|y\|= 1} \frac{1}{\mathrm{F}^2} \eta = \frac{\pi}{2}\frac{\Tr h_+ h_-^{-1}}{\sqrt{\det h_-}} - 4 \int\limits_{-\infty}^{\infty} \frac{\sqrt{(\alpha_{11} + 2\alpha_{12}t+\alpha_{22}t^2)(\beta_{11} + 2\beta_{12}t+\beta_{22}t^2)}}{(h_{-11} + 2h_{-12}t+h_{-22}t^2)^2}dt \ . \end{eqnarray} \end{proof} \section{Conclusion}\label{Conclusion} In this work, we have initiated the study of the multimetric Finsler geometry. Though our motivation comes from the bi-metric formulation of massive gravity \cite{deRham:2014zqa}, the main focus of the paper is on the study of general properties of the geometry. In particular, whenever it was possible, we tried to highlight the factorized structure of the important geometric objects (spray coefficient, non-linear connection, measure among others). For the general case, we have established some relations between the Finslerian geometric objects and the Riemannian ingredients. The emergent structure has a factorized form: the direct sum over the Riemannian sectors is supplemented by the terms describing the ``interaction'' between the structures in Riemannian geometries. We further study this for the special case of two dimensions, where we manage to find a closed form of the Holmes-Thompson measure. This measure also reveals the same factorization as well as leads to the natural appearance of the symmetric polynomials relevant for the bi-metric formulation of massive gravity. In spite of the significant initial results, the further study is necessary to get a more complete understanding of the multimetric geometry before it would be possible to apply it to physical problems. 
One of the most urgent tasks is to study the different curvature tensors as the building blocks for constructing Finslerian gravities. As we saw, this problem proves to be quite technically involved even in the simplest 2-dimensional case. The general $n$-dimensional case is much more complicated. We hope to address this problem in the near future. Another important question to be addressed is the construction of the measure in the general case. Though in 2d we have succeeded in finding the measure relevant for possible physical applications, the Holmes-Thompson one, we do not expect that the same approach would work in an arbitrary number of dimensions. This requires further study. In the introduction we mentioned the spectral action approach, which allows one to construct generalizations of Einstein gravity starting from generalized geometries. It would be interesting to see whether gravity based on the multimetric Finsler geometry could be derived from the spectral action principle. For this, one needs to study natural Dirac operators for this geometry \cite{Flaherty1996,Flaherty1998}. This would also allow one to address some geometrical/metrical questions from the point of view of spectral geometry \cite{Connes:1994yd}. As another potential physical application of the multimetric Finsler geometry, we would like to mention its possible use as a natural model for quantum gravitational fluctuations. For this, one would need a further generalization: instead of a discrete family of Riemannian spaces, one should introduce a continuous family. Then the continuous generalization of (\ref{N_multimetric}) (for which one would need to introduce a measure on the space of metrics) would correspond to a fluctuating geometry, with (\ref{action_Finsler}) providing a natural action for a point particle coupled to this fluctuating geometry. \section*{Acknowledgement} AP acknowledges the partial support of CNPq under grant no.~312842/2021-0. The research of CL is supported by a CNPq PhD fellowship, and that of PC and RM by CAPES fellowships.
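As a final addendum, here is a minimal numerical sanity check of the closed form (\ref{HTfinal}); it is our own sketch (the test matrices and all names are ours, not part of the original text). It compares the formula, with $\lambda^{(\mu\nu)}_\pm$ taken as the eigenvalues of $\alpha^{(\mu)-1}\alpha^{(\nu)}$, against a direct numerical evaluation of $\mu_{HT}=\frac{1}{\pi}\int_{\|y\|=1}\frac{g}{\mathrm{F}^2}\eta$, using $\eta=\frac{1}{2}d\phi$ on the unit circle (consistent with (\ref{FHB})).

```python
# Numerical sanity check of (HTfinal) for a bi-metric 2d example -- a sketch, not part of the paper.
import numpy as np
import sympy as sp
from scipy.integrate import quad
from scipy.special import ellipe            # ellipe(m) = E(k) with parameter m = k**2

# two arbitrary positive-definite 2x2 matrices (our test data)
alphas = [np.array([[2.0, 0.3], [0.3, 1.0]]),
          np.array([[1.5, -0.4], [-0.4, 3.0]])]

# --- direct evaluation: mu_HT = (1/(2*pi)) * int_0^{2pi} det(g)/F^2 dphi   (eta = dphi/2) ---
y1, y2, phi = sp.symbols('y1 y2 phi', real=True)
y = sp.Matrix([y1, y2])
F = sum(sp.sqrt((y.T * sp.Matrix(a.tolist()) * y)[0]) for a in alphas)
gmat = sp.Matrix(2, 2, lambda i, j: sp.diff(F**2, y[i], y[j]) / 2)   # g_ij = (1/2) d_i d_j F^2
integrand = sp.lambdify(phi, (gmat.det() / F**2).subs({y1: sp.cos(phi), y2: sp.sin(phi)}), 'numpy')
mu_direct = quad(integrand, 0, 2 * np.pi)[0] / (2 * np.pi)

# --- closed form (HTfinal), with lambda_+/- the eigenvalues of (alpha^mu)^{-1} alpha^nu ---
mu_closed = sum(np.sqrt(np.linalg.det(a)) for a in alphas)
for i, a in enumerate(alphas):
    for j, b in enumerate(alphas):
        if i != j:
            lm, lp = np.sort(np.linalg.eigvals(np.linalg.inv(a) @ b).real)
            mu_closed += (2 / np.pi) * np.sqrt(np.linalg.det(a) * lp) * ellipe(1 - lm / lp)

print(mu_direct, mu_closed)   # the two values should agree to numerical precision
```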
\part*{Commentaire mathématique} \addcontentsline{toc}{part}{Commentaire mathématique} \label{comm_debut} \paragraph{Théorème d'Apollonius, lemme de `Ur\d{d}{\=\i} et couple de \d{T}\=us{\=\i}} On démontre ici trois propositions concernant les rotations affines du plan. Elles nous seront utiles pour comparer entre eux les modèles planétaires. \paragraph{Proposition 1} Soient $P_1$, $P_2$, $P_3$, $P_4$ quatre points alignés et $\alpha\in\lbrack 0,2\pi\rbrack$. Si $\overrightarrow{P_1P_3}=\overrightarrow{P_2P_4}$, alors $R_{P_2,\alpha}=R_{P_1,\alpha}R_{P_3,-\alpha}R_{P_4,\alpha}$. \label{apollonius} \emph{Démonstration}. $R_{P_2,\alpha}R_{P_4,-\alpha}$ est une transformation affine directe dont l'application linéaire associée est l'identité, et c'est donc une translation. C'est la translation de vecteur~: $$\overrightarrow{P_4R_{P_2,\alpha}(P_4)}=R_\alpha(\overrightarrow{P_2P_4})-\overrightarrow{P_2P_4},$$ où $R_\alpha$ désigne la rotation \emph{vectorielle} d'angle $\alpha$. De même $R_{P_1,\alpha}R_{P_3,-\alpha}$ est une translation de vecteur~: $$\overrightarrow{P_3R_{P_1,\alpha}(P_3)}=R_\alpha(\overrightarrow{P_1P_3})-\overrightarrow{P_1P_3},$$ mais $\overrightarrow{P_1P_3}=\overrightarrow{P_2P_4}$, donc $R_{P_1,\alpha}R_{P_3,-\alpha}=R_{P_2,\alpha}R_{P_4,-\alpha}$, \textit{q. e. d.} \paragraph{Remarque} Si $P_2$ est situé entre $P_1$ et $P_3$, une conséquence de cette proposition était bien connue de Ptolémée qui l'attribuait à Apollonius. Si $P_1$ est le centre du Monde, $P_2$ le centre d'un excentrique, et $P_4$ un astre à son apogée sur l'excentrique, on a~: $$R_{P_2,\alpha}(P_4)=R_{P_1,\alpha}R_{P_3,-\alpha}(P_4).$$ Ceci montre que l'excentrique peut être remplacé par un modèle équivalent constitué d'un déférent de centre $P_1$ et d'un épicycle de centre $P_3$. \paragraph{Proposition 2} Soient $P_1$, $P_2$, $P_3$, $P_4$, $P_5$ cinq points alignés, et $\alpha\in\lbrack 0,2\pi\rbrack$. Si $\overrightarrow{P_3P_4}=-\overrightarrow{P_1P_2}$ et $\overrightarrow{P_3P_5}=\overrightarrow{P_1P_2}$, alors $$R_{P_1,\alpha}R_{P_3,\alpha}R_{P_4,-\alpha}=R_{P_2,\alpha}R_{P_5,-\alpha}R_{P_3,2\alpha}R_{P_4,-\alpha}.$$ \emph{Démonstration}. Comme $\overrightarrow{P_2P_5}=\overrightarrow{P_1P_3}$, d'après la proposition 1, on a~: $$R_{P_1,\alpha}=R_{P_2,\alpha}R_{P_5,-\alpha}R_{P_3,\alpha},$$ d'où le résultat escompté si l'on compose à droite avec $R_{P_3,\alpha}R_{P_4,-\alpha}$. \paragraph{Remarque} $R_{P_5,-\alpha}R_{P_3,2\alpha}R_{P_4,-\alpha}$ est une translation parallèlement à la droite $(P_1P_2)$. En effet~:\label{couple_transl} $$(R_{P_5,-\alpha}R_{P_3,\alpha})(R_{P_3,\alpha}R_{P_4,-\alpha})=t_{\overrightarrow{P_3R_{P_5,-\alpha}(P_3)}}\circ t_{\overrightarrow{P_4R_{P_3,\alpha(P_4)}}},$$ mais $$\overrightarrow{P_3R_{P_5,-\alpha(P_3)}}+\overrightarrow{P_4R_{P_3,\alpha(P_4)}}=R_{-\alpha}(\overrightarrow{P_3P_4})+R_\alpha(\overrightarrow{P_3P_4})-2\overrightarrow{P_3P_4},$$ or $$(R_{-\alpha}(\overrightarrow{P_3P_4})+R_\alpha(\overrightarrow{P_3P_4})-2\overrightarrow{P_3P_4})\wedge\overrightarrow{P_3P_4}=\Vert\overrightarrow{P_3P_4}\Vert^2(\sin\alpha+\sin(-\alpha))=0.$$ Cette composée de rotations apparaît dans les modèles planétaires utilisant un ``couple de \d{T}\=us{\=\i}''. \paragraph{Proposition 3} Soit $P_1$, $P_2$, $P_3$ trois points alignés avec $\overrightarrow{P_1P_3}=-\overrightarrow{P_1P_2}$, alors $R_{P_3,-\alpha}R_{P_1,2\alpha}R_{P_2,-\alpha}=R_{P_3,\alpha}R_{P_1,-2\alpha}R_{P_2,\alpha}$. En particulier, le sens de rotation des orbes d'un couple de \d{T}\=us{\=\i} n'importe pas. 
\emph{Démonstration}. $$R_{P_3,2\alpha}R_{P_1,-2\alpha}=t_{\overrightarrow{P_1R_{P_3,2\alpha}(P_1)}}=t_{\overrightarrow{P_2R_{P_1,2\alpha}(P_2)}}=R_{P_1,2\alpha}R_{P_2,-2\alpha},$$ d'où le résultat. \paragraph{Préliminaires} \label{preliminaires} Nous n'analyserons pas en détail ce qu'est un orbe pour Ibn al-\v{S}\=a\d{t}ir. Qu'il suffise de le concevoir comme un corps à symétrie sphérique. Chaque orbe est doté d'un centre. Le centre de certains orbes est aussi centre du Monde. Enfin, et surtout, Ibn al-\v{S}\=a\d{t}ir utilise chaque orbe comme un référentiel solide en mouvement. Pour qu'un corps solide à symétrie sphérique puisse faire office de référentiel spatial, il faut distinguer un plan attaché à l'orbe et passant par son centre, et une direction dans ce plan. Dans tout ce qui suit, on représentera plus commodément chaque orbe par une sphère, un cercle et un point~: à savoir, une sphère dont le centre est le centre de l'orbe, un grand cercle section de cette sphère par son plan, et un point de ce cercle situé dans la direction distinguée par rapport au centre. Chaque orbe est mobile par rapport à l'orbe qui le porte (c'est-à-dire au sein du référentiel constitué par l'orbe qui le porte). Un seul mouvement est admis pour chaque orbe~: un mouvement de rotation uniforme autour d'un axe passant par son centre. Autrement dit, le centre de chaque orbe est immobile au sein de l'orbe qui le porte. Il n'y a aucune restriction quant à l'inclinaison de l'axe par rapport au plan de l'orbe~: tel orbe a son axe perpendiculaire à son plan, tel autre a son axe perpendiculaire au plan de l'orbe qui le porte. Mais tous ces mouvements sont relatifs, et chaque orbe hérite aussi <<~par accident~>> (au sens aristotélicien du terme) du mouvement de l'orbe qui le porte, qui hérite lui-même du mouvement de l'orbe qui le porte, et ainsi de suite. Le mouvement de chaque orbe est donc composé d'un mouvement propre de rotation uniforme par rapport à l'orbe qui le porte, et du mouvement absolu de l'orbe qui le porte. Seul le neuvième orbe, porté par aucun autre, n'est animé que de son mouvement propre de rotation uniforme. Ce cadre conceptuel donne lieu à trois combinaisons possibles, toutes présentes dans le texte d'Ibn al-\v{S}\=a\d{t}ir. Nous les décrirons au moyen des trois figures (i), (ii) et (iii) page~\pageref{fig2}. Dans chacune de ces trois figures il y a deux orbes representés chacun par une sphère, un grand cercle de cette sphère et un point sur ce cercle. Le premier orbe dont le point est $P$ porte le second dont le point est $Q$. Dans les figures (i) et (ii), le point $P$ de l'orbe portant est choisi par commodité au centre de l'orbe porté. Au contraire, dans la figure (iii), les deux orbes ayant même centre $O$, il a fallu choisir un autre point~: le point $P$ est alors un point à l'intersection des plans des deux orbes. Dans tout ce paragraphe, on désignera chaque orbe par son point. L'orbe de $P$ porte donc l'orbe de $Q$. Ainsi, le lieu occupé par l'orbe de $Q$ est constant au sein du référentiel solide constitué par l'orbe de $P$. Mais l'orbe de $Q$ est animé d'un mouvement de rotation uniforme sur lui-même, autour d'un axe représenté par un vecteur sur chaque figure. Peu importe ici le sens de cette rotation~; seule importe la direction de son axe, perpendiculaire au plan de l'orbe de $P$ dans la figure (i), mais perpendiculaire au plan de l'orbe de $Q$ dans les figures (ii) et (iii). 
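Les propositions 1 à 3 ci-dessus se prêtent à un contrôle numérique élémentaire ; en voici une esquisse (Python), où les points, l'angle et les noms de variables sont choisis par nous à titre d'illustration et ne figurent pas dans le texte.

```python
# Vérification numérique des propositions 1 à 3 (rotations affines du plan) -- esquisse.
import numpy as np

def rot(P, a):
    """Matrice 3x3 (coordonnées homogènes) de la rotation R_{P,a} de centre P et d'angle a."""
    c, s = np.cos(a), np.sin(a)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T, Tinv = np.eye(3), np.eye(3)
    T[:2, 2], Tinv[:2, 2] = P, -P
    return T @ R @ Tinv                       # R_{P,a} = t_P o R_a o t_{-P}

u, a = np.array([1.0, 0.0]), 0.7              # direction de la droite des centres, angle quelconque
# Proposition 1 : points alignés avec P1P3 = P2P4
P1, P2, P3 = np.zeros(2), 2.3 * u, -1.1 * u
P4 = P2 + (P3 - P1)
print(np.allclose(rot(P2, a), rot(P1, a) @ rot(P3, -a) @ rot(P4, a)))
# Proposition 2 : P3P4 = -P1P2 et P3P5 = P1P2
Q2, Q3 = 0.8 * u, 1.9 * u
Q4, Q5 = Q3 - (Q2 - P1), Q3 + (Q2 - P1)
lhs = rot(P1, a) @ rot(Q3, a) @ rot(Q4, -a)
rhs = rot(Q2, a) @ rot(Q5, -a) @ rot(Q3, 2 * a) @ rot(Q4, -a)
print(np.allclose(lhs, rhs))
# Remarque : R_{P5,-a} R_{P3,2a} R_{P4,-a} est une translation parallèle à (P1P2)
tr = rot(Q5, -a) @ rot(Q3, 2 * a) @ rot(Q4, -a)
print(np.allclose(tr[:2, :2], np.eye(2)), abs(tr[1, 2]) < 1e-12)
# Proposition 3 (couple de Tusi) : P1P3 = -P1P2, le sens de rotation n'importe pas
R2 = 1.4 * u
R3 = -R2
print(np.allclose(rot(R3, -a) @ rot(P1, 2 * a) @ rot(R2, -a),
                  rot(R3, a) @ rot(P1, -2 * a) @ rot(R2, a)))
```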
La figure (i) présente cette particularité que la trajectoire du point $Q$ au sein du référentiel constitué par l'orbe de $P$ n'est pas dans le plan de l'orbe de $Q$~: le point $Q$ décrit un petit cercle parallèle au plan de l'orbe de $P$. Enfin, dans chacune des trois figures, l'orbe de $P$ est lui-même animé d'un mouvement de rotation uniforme autour d'un axe représenté par un vecteur d'origine $O$. Peu importe que ce vecteur soit perpendiculaire au plan de l'orbe de $P$ -- comme il l'est dans ces figures -- ou non. Ce dernier mouvement entraîne le point $P$ et aussi l'orbe de $Q$. Chacune des planètes supérieures (Mars, Jupiter, Saturne) est portée par un système d'orbes: un <<~orbe parécliptique~>>, un <<~orbe incliné~>>, un <<~orbe déférent~>>, un <<~orbe rotateur~>> et un <<~orbe de l'épicycle~>>. La figure (i) représente alors la relation entre l'orbe incliné (orbe de $P$) et l'orbe déférent (orbe de $Q$), ainsi que la relation entre l'orbe déférent (orbe de $P$) et l'orbe rotateur (orbe de $Q$). Pour Vénus, les orbes portent les mêmes noms, mais la figure (ii) représente alors la relation entre l'orbe déférent (orbe de $P$) et l'orbe rotateur (orbe de $Q$), ainsi que la relation entre l'orbe rotateur (orbe de $P$) et l'orbe de l'épicycle (orbe de $Q$). Mercure compte davantage d'orbes sans toutefois introduire de combinaison nouvelle. Pour chaque planète, la figure (iii) représente la relation entre l'orbe parécliptique (orbe de $P$) et l'orbe incliné (orbe de $Q$)~: le point $P$ est alors l'un des n{\oe}uds. Mais cette figure représente aussi la relation entre le neuvième orbe (orbe de $P$) et le huitième, dit de l'écliptique (orbe de $Q$)~: le point $P$ est alors le point vernal. \begin{figure} \begin{center} \input{fig001.pdf_tex} \caption{\label{fig2}Un orbe est un référentiel solide en mouvement} \end{center} \end{figure} \paragraph{Précession des équinoxes} Observons à nouveau la figure (iii). Chez Ptolémée, les étoiles fixes sont attachées au dernier orbe et le mouvement relatif de cet orbe par rapport aux orbes inférieurs rend compte du mouvement de précession des équinoxes. Pour les astronomes arabes, depuis le \emph{Traité sur l'année solaire} certainement dû aux Banu Musa (IXème siècle)\footnote{\textit{cf.} \cite{qurra1987} p.~\textit{xlvi}-\textit{lxxv} et 27-67.}, il existe un neuvième orbe au delà des étoiles fixes situées sur le huitième orbe, et c'est le mouvement relatif du huitième orbe par rapport au neuvième qui rend alors compte de la précession. Tous les orbes inférieurs des planètes, du Soleil et de la Lune héritent de ce mouvement comme les étoiles fixes (tandis que chez Ptolémée, le Soleil en était exempt\footnote{\textit{cf.} \cite{pedersen1974} p.~147.}). Chez Ibn al-\v{S}\=a\d{t}ir, l'orbe parécliptique de chaque planète est un orbe dont le centre coïncide avec le centre du Monde, comme l'écliptique. L'orbe parécliptique reproduit, en plus petit, l'orbe de l'écliptique et y est attaché en son centre. Le parécliptique est animé d'un petit mouvement de rotation sur lui-même, par rapport au référentiel constitué par l'orbe de l'écliptique~; mais comme l'axe de rotation du parécliptique coïncide avec l'axe de l'écliptique, et que la composée de deux rotations de même axe en est encore une, le parécliptique est aussi en rotation uniforme autour de son axe au sein du référentiel constitué par le neuvième orbe. Le mouvement de l'orbe de l'écliptique par rapport au neuvième est d'un degré toutes les soixante-dix années persanes. 
Le mouvement de l'orbe parécliptique par rapport au neuvième orbe est un peu plus rapide, comme l'avait vérifié al-Zarqalluh (XIème siècle)~: il est d'un degré toutes les soixante années persanes, c'est-à-dire $0°1'$ par année persane\footnote{Al-Zarqalluh (Azarquiel) avait observé que l'apogée solaire se meut d'un degré tous les 279 ans par rapport aux étoiles fixes, elles-mêmes étant animées du mouvement de précession (\textit{cf.} \cite{rashed1997} p.~285-287). Or $\dfrac{1}{70}+\dfrac{1}{279}\simeq\dfrac{1}{60}$.}. \paragraph{Unité de temps} Le temps absolu est mesuré par le mouvement du neuvième orbe, seul orbe dont le mouvement propre est absolu (les mouvements propres des autres orbes sont relatifs à l'orbe qui les porte). En un lieu donné, la durée entre deux culminations du Soleil (de midi à midi du lendemain) est approximativement constante. Considérons deux unités possibles pour la mesure du temps~: \begin{itemize} \item le \emph{jour sidéral}~: durée d'une rotation complète du ciel visible autour du pôle nord (temps de retour d'une étoile fixe en son lieu par rapport au pôle nord et à l'horizon) \item le \emph{jour solaire}~: durée entre deux culminations du Soleil (de midi à midi du lendemain) \end{itemize} Le neuvième orbe étant supposé mû d'un mouvement uniforme, le jour sidéral est de durée constante (sauf à tenir compte du mouvement de précession, le mouvement propre du huitième orbe qui est infime). En revanche, la durée du jour solaire varie au fil des saisons, ce n'est donc pas une bonne unité. En effet, soit midi au Soleil. Un jour sidéral plus tard, le Soleil aura presque effectué une rotation complète d'est en ouest à cause du mouvement du neuvième orbe~; pas tout à fait cependant, à cause du mouvement du parécliptique du Soleil qui l'aura légèrement entraîné vers l'est, d'un peu moins qu'un degré. Il ne sera donc pas encore midi au Soleil~: le jour sidéral est plus court que le jour solaire. Cet effet se fera sentir d'autant plus que l'angle entre le parécliptique et la trajectoire diurne du Soleil sera faible (ainsi la différence sera plus importante lors des solstices que lors des équinoxes). De toute façon, le mouvement du Soleil le long de son parécliptique n'est pas uniforme. Tout calendrier civil adoptant pourtant le jour solaire comme unité, il est commode de choisir comme unité astronomique un <<~jour solaire moyen~>>~: durée entre deux culminations, non plus du Soleil, mais d'un point imaginaire mû d'un mouvement uniforme le long de l'équateur et faisant le tour du ciel exactement en même temps que le Soleil\footnote{c'est-à-dire en une année tropique}. C'est l'unité adoptée par Ibn al-\v{S}\=a\d{t}ir~: \textit{al-yawm bi-laylatihi}. Quand une date est donnée en temps civil (c'est-à-dire un certain nombre d'heures à compter de midi au Soleil), il faut la corriger en lui ajoutant une certaine <<~équation du temps~>>, \textit{ta`d{\=\i}l al-'ay\=am}, qui s'annule lors de l'équinoxe de printemps quand le Soleil et le point imaginaire coïncident sur l'équateur\footnote{En fait, comme on va le voir, le point imaginaire suit le Soleil \emph{moyen} et il est donc situé un peu à l'Ouest du Soleil au moment de l'équinoxe.}. L'heure est un vingt-quatrième de jour solaire moyen. Soit $t_0$ la date d'un équinoxe de printemps, et $t$ un instant donné quelconque, dates exprimées en jours solaires moyens. 
Les mêmes dates exprimées en temps civil (jours solaires, et heures comptées à partir de midi au soleil) seront notées $\tau_0$ et $\tau$, et en jours sidéraux $T_0$ et $T$. Le point imaginaire se déplace le long de l'équateur à la vitesse du Soleil moyen~; son ascension droite varie donc comme la longitude $\overline{\lambda}$ du Soleil moyen. \`A l'instant $t$, elle vaut $\overline{\lambda}(t)-\overline{\lambda}(t_0)$. De combien le Soleil s'est-il déplacé vers l'Est depuis le dernier équinoxe de printemps~? C'est précisément son ascension droite $\alpha(t)$. Le jour sidéral est plus court que le jour solaire, d'autant qu'il faut au mouvement diurne pour récupérer le déplacement du Soleil vers l'Est. La vitesse du mouvement diurne est d'environ 15° par heure (elle est de 360° par jour sidéral, mais le jour sidéral compte un peu moins que 24 heures). On a donc~: $$(T-T_0)-(\tau-\tau_0)\simeq\frac{\alpha(t)}{15°}\text{ heures}$$ Il ne s'agit que d'une approximation, puisque pendant cette durée, le Soleil se déplace légèrement davantage vers l'Est... De même, le jour sidéral est plus court que le jour solaire moyen, d'autant qu'il faut au mouvement diurne pour récupérer le déplacement du point imaginaire vers l'Est le long de l'équateur~: $$(T-T_0)-(t-t_0)\simeq\frac{\overline{\lambda}(t)-\overline{\lambda}(t_0)}{15°}\text{ heures}$$ Ibn al-\v{S}\=a\d{t}ir donne $\overline{\lambda}(t_0)=-2°1'7''$. L'équation du temps est donc\footnote{On adopte ici la convention française quant au signe de l'équation du temps $E$.}~: $$E=(t-t_0)-(\tau-\tau_0) \simeq\frac{-\overline{\lambda}(t)-2°1'7''+\alpha(t)}{15°}\text{ heures}$$ Chez Ibn al-\v{S}\=a\d{t}ir, l'unité de temps est le jour solaire moyen, ou l'année persane de 365 jours solaires moyens. L'<<~époque~>> (c'est-à-dire l'instant $t=0$) est à midi du 24 décembre 1331. Il s'agit probablement d'une date en temps solaire moyen (les valeurs des paramètres à cette date ayant été obtenus par le calcul à partir d'une observation antérieure\footnote{Une remarque de Kennedy et Roberts (\cite{roberts1959} p.~232) au sujet du \textit{z{\=\i}j} d'Ibn al-\v{S}\=a\d{t}ir le confirme.}, un jour d'équinoxe de printemps). Le temps GMT (Greenwich Mean Time) est un temps solaire moyen mesurant comme précédemment le déplacement uniforme d'un point imaginaire le long de l'équateur, mais ce point ne coïncide pas avec le Soleil lors de l'équinoxe de printemps~: l'équation du temps n'a donc pas la même origine que chez Ibn al-\v{S}\=a\d{t}ir, elle est d'environ 8 min à l'équinoxe de printemps et elle s'annule vers le 15 avril. Il faut aussi tenir compte du décalage horaire à la longitude de Damas, 36°18'23''. L'époque d'Ibn al-\v{S}\=a\d{t}ir est donc, en temps GMT, le 24 décembre 1331 à 12 h 8 min $-\dfrac{36°18'23''}{15°}\simeq$ 9 h 43 min. Pourquoi {Ibn al-\v{S}\=a\d{t}ir} choisit-il le 24 décembre 1331 comme époque ? Le premier jour de la première année de l'ère de Yazdgard est le 12 juin 632 (date julienne). Sept cents années persanes plus tard (soit $700\times 365=255500$ jours), on est le premier jour de l'année 701 de l'ère de Yazdgard, un mardi (\textit{yawm al-thal\=ath\=a'}), le 23 \textit{rab{\=\i}` al-'awwal} 732 de l'hégire (date julienne~: 24 décembre 1331). {Ibn al-\v{S}\=a\d{t}ir } avait alors 24 ans. \paragraph{Le Soleil : figure initiale} Soit $(\mathbf{i},\mathbf{j},\mathbf{k})$ la base canonique de $\mathbb{R}^3$ qu'on identifiera à l'espace physique. 
Selon {Ibn al-\v{S}\=a\d{t}ir}, la trajectoire du Soleil est contenue dans le plan de l'écliptique $(\mathbf{i},\mathbf{j})$. Dans les modèles d'{Ibn al-\v{S}\=a\d{t}ir}, on chercherait en vain une description mathématique du mouvement en termes de transformations ou de différences entre deux instants. L'objet de la description mathématique est ici l'opération transformant une figure initiale (qui n'a d'existence qu'imaginaire) en une autre figure représentant la configuration des astres à un instant donné. Mais pour décrire le mouvement des astres, {Ibn al-\v{S}\=a\d{t}ir} n'admet que des mouvements circulaires uniformes ou bien des mouvements composés de mouvements circulaires uniformes~: les seules opérations autorisées seront donc des rotations spatiales dont les paramètres dépendent linéairement de la variable temps. Dans la figure initiale, les centres des orbes du Soleil sont tous alignés dans la direction du vecteur $\mathbf{j}$ elle-même confondue avec la direction du point vernal. Le point $O$ est le centre du Monde, $P_1$ est le centre de l'orbe total (\textit{al-falak al-\v{s}\=amil}\footnote{La dénomination des orbes du Soleil peut prêter à confusion. Pour décrire le mouvement en longitude des planètes, chacune possède deux orbes concentriques, parécliptique et orbe incliné. Le Soleil n'a pas d'orbe incliné ; son ``orbe total'' joue à peu près le rôle du parécliptique des autres planètes, et son ``parécliptique'', celui de l'orbe incliné. Mais pour le Soleil, ces deux orbes ayant même centre et même axe de rotation, on pourrait aussi bien les confondre en un orbe unique ; {Ibn al-\v{S}\=a\d{t}ir} le savait, et il laisse planer une certaine ambigu\"ité, par exemple dans la figure p. \pageref{soleil_orbes_solides} où le parécliptique semble contenir l'orbe total, tandis que son texte semble affirmer le contraire.}), et $P_2$ le centre du parécliptique du Soleil. Ces trois points sont confondus $O=P_1=P_2$. Le point $P_3$ est le centre de l'orbe déférent, $P_4$ le centre de l'orbe rotateur, et $P$ est le centre du Soleil, cet astre étant lui-même un corps sphérique. La figure \ref{fig003} précise la position de ces points \footnote{Attention, notre figure \ref{fig003} n'est pas présente dans le traité d'{Ibn al-\v{S}\=a\d{t}ir}. Il a certes représenté les centres des orbes du Soleil dans cinq positions distinctes, page \pageref{soleil_trajectoires} ; mais le Soleil y atteint son apogée au premier point du Cancer (c'était à peu près le cas à l'époque d'{Ibn al-\v{S}\=a\d{t}ir}), et non au point vernal. Dans notre commentaire, nous préférons suivre l'usage adopté par {Ibn al-\v{S}\=a\d{t}ir} pour d'autres planètes, car cet usage colle davantage avec l'architecture conceptuelle.}. Elle n'est pas à l'échelle. Nous donnons une représentation à l'échelle et en perspective p.~\pageref{soleil_blender}. Posons $\overrightarrow{OP_3}=60\ \mathbf{j}$, alors $$\begin{array}{l} \overrightarrow{P_3P_4}=4;37\ \mathbf{j}\\ \overrightarrow{P_4P}=2;30\ \mathbf{j} \end{array}$$ où le point-virgule sépare partie entière et partie fractionnaire\footnote{Toutes les valeurs numériques seront données en sexagésimal, et la virgule sert alors à séparer les rangs. Par exemple, $359;45,40=359+\dfrac{45}{60}+\dfrac{40}{60^2}$.}. 
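La note qui précède fixe la notation sexagésimale employée dans tout le commentaire. À titre d'aide de lecture, en voici une petite fonction de conversion (esquisse Python de notre fait, qui ne figure pas dans le texte).

```python
# Conversion de la notation sexagésimale « d;m,s,... » en degrés décimaux -- esquisse.
from fractions import Fraction

def sexa(chaine):
    """« 359;45,40 » -> 359 + 45/60 + 40/60**2 (en Fraction de degrés)."""
    entier, _, frac = chaine.partition(';')
    chiffres = [int(entier)] + ([int(c) for c in frac.split(',')] if frac else [])
    return sum(Fraction(c, 60**k) for k, c in enumerate(chiffres))

print(float(sexa("4;37")))        # 4.61666...   (P3P4)
print(float(sexa("2;30")))        # 2.5          (P4P)
print(float(sexa("359;45,40")))   # 359.76111... (exemple de la note)
```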
\begin{figure} \begin{center} \input{fig003.pdf_tex} \end{center} \caption{\label{fig003}Les orbes du Soleil : figure initiale} \end{figure} Insistons encore sur ce point déjà soulevé~: \textit{a priori} la figure initiale ne représente la position des orbes à aucun instant de l'histoire passée ou future du Monde. La situation représentée peut se produire dans la réalité, s'il arrive que le Soleil soit à l'apogée de sa trajectoire à la traversée du point vernal, c'est-à-dire à l'équinoxe de printemps ; mais {Ibn al-\v{S}\=a\d{t}ir} ne se pose même pas la question de savoir si le Soleil a jamais été à l'apogée de sa trajectoire à l'instant précis de l'équinoxe de printemps. Peu importe. Comme nous l'avons dit, il faut appliquer au point $P$ une suite de transformations géométriques pour obtenir la position du Soleil prédite ou observée à un instant donné. \paragraph{Le Soleil : transformations géométriques} Les transformations appliquées au Soleil $P$ sont des rotations dans le plan $(\mathbf{i},\mathbf{j})$ paramétrées par deux angles $\lambda_A$, $\overline{\alpha}$ appelés l'\textit{apogée} et le \textit{centre}\footnote{Ici comme ailleurs, notre choix de notations est dicté par la commodité dans les comparaisons futures avec les modèles d'autres auteurs.}. Voici la liste de ces transformations~: $$R(P_4,\mathbf{k},2\overline{\alpha}),\quad R(P_3,\mathbf{k},-\overline{\alpha}), \quad R(P_2,\mathbf{k},\overline{\alpha}),\quad R(P_1,\mathbf{k},\lambda_A)$$ où $R(Q,\mathbf{k},\theta)$ désigne la rotation de centre $Q$ et d'angle $\theta$, le vecteur $\mathbf{k}$ désignant la direction de l'axe de rotation, et le sens de rotation. La description d'Ibn al-\v{S}\=a\d{t}ir ne laisse aucun doute quant à l'ordre dans lequel appliquer ces quatres transformations. Pour obtenir la configuration des orbes à un instant donné, il faut donc appliquer toutes ces rotations aux points $P_3$, $P_4$, $P$. L'image du point $P_3$ entraîné par les mouvements de l'orbe total et de l'orbe parécliptique est~: $$P_3'=R(P_1,\mathbf{k},\lambda_A)\circ R(P_2,\mathbf{k},\overline{\alpha})\ (P_3)$$ Quant au point $P_4$, il est aussi entraîné par le mouvement de l'orbe déférent et devient~: $$P_4'=R(P_1,\mathbf{k},\lambda_A)\circ R(P_2,\mathbf{k},\overline{\alpha}) \circ R(P_3,\mathbf{k},-\overline{\alpha})\ (P_4)$$ Enfin le point $P$, aussi entraîné par le mouvement de l'orbe rotateur, devient~: $$P'=R(P_1,\mathbf{k},\lambda_A) \circ R(P_2,\mathbf{k},\overline{\alpha}) \circ R(P_3,\mathbf{k},-\overline{\alpha}) \circ R(P_4,\mathbf{k},2\overline{\alpha})\ (P)$$ \paragraph{Le Soleil : trajectoire} Dans le cadre théorique que l'on vient de décrire, l'on ne peut guère parler de trajectoire dans l'espace. Il y a bien toutefois une trajectoire dans l'ensemble des valeurs des paramètres~: $$\lambda_A=\dot{\lambda}_At+\lambda_A(0)$$ $$\overline{\alpha}=(\dot{\overline{\lambda}}-\dot{\lambda}_A)t+(\overline{\lambda}(0)-\lambda_A(0))$$ On a~: $$\overline{\lambda}(0)=280;9,0,\quad\dot{\overline{\lambda}}=359;45,40 \text{ par année persane}$$ $$\lambda_{A}(0)=89;52,3,1,\quad \dot{\lambda}_A=0;1\text{ par année persane}$$ Si l'observation et la tradition délivrent une vitesse moyenne par année persane de $359;45,40$ degrés, une simple division permet d'en déduire la vitesse par jour solaire moyen, par mois de trente jours, ou bien par heure égale. 
C'est dans cet ordre qu'a dû procéder {Ibn al-\v{S}\=a\d{t}ir}, comme le révèle la précision absurde des valeurs données~: \begin{align*} \dot{\overline{\lambda}}&=0;59,8,19,43,33,41,55,4,6\text{ par jour solaire moyen,}\\ &=29;34,9,51,46,50,57,32\text{ par mois de trente jours,}\\ &=0;2,27,50,49,18,54,14,47\text{ par heure égale.} \end{align*} On en déduit aussi~: $$\dot{\overline{\lambda}}-\dot{\lambda}_A=0;59,8,9,51,47\text{ par jour.}$$ \paragraph{L'équation du Soleil} Le calcul de la position du point $P'$ revient à résoudre deux triangles rectangles (\textit{cf}. fig.~\ref{fig004} p.~\pageref{fig004}) pour calculer l'angle $c(\overline{\alpha})$ traditionnellement appelé <<~équation du Soleil~>>~: $$c(\overline{\alpha})=(\overrightarrow{OP_3'},\overrightarrow{OP'})= -\arcsin\left(\frac{P_3P_4\sin\overline{\alpha}-P_4P\sin\overline{\alpha}}{OP'}\right)$$ où $$OP'=\sqrt{(P_3P_4\sin\overline{\alpha}-P_4P\sin\overline{\alpha})^2+ (OP_3+P_3P_4\cos\overline{\alpha}+P_4P\cos\overline{\alpha})^2}$$ La longitude de $P'$, depuis le point vernal, est alors~: $$\left(\mathbf{j},\overrightarrow{OP'}\right)=\lambda_A+\overline{\alpha}+c(\overline{\alpha})$$ On remarque que~: $$c(360°-\overline{\alpha})=-c(\overline{\alpha})$$ On a tracé le graphe de l'équation du Soleil en fonction de $\overline{\alpha}$, \textit{cf.} figure \ref{fig005}, où l'on a aussi représenté à des fins de comparaison l'équation calculée suivant la méthode de Ptolémée\footnote{\textit{cf.} \cite{pedersen1974} p.~150.}~: $$c_{\text{Ptolémée}}(\overline{\alpha})=-\arcsin\left(\frac{2;29,30\times\sin\overline{\alpha}}{\sqrt{(2;29,30\times\sin\overline{\alpha})^2+(60+2;29,30\times\cos\overline{\alpha})^2}}\right)$$ La figure \ref{fig005} montre aussi l'équation telle qu'aurait pu l'observer {Ibn al-\v{S}\=a\d{t}ir} en 1331, en retranchant à la longitude vraie observée le soleil moyen calculé avec les paramètres $\overline{\lambda}_0,\dot{\overline{\lambda}}$ mentionnés ci-dessus. \begin{figure} \begin{center} \input{fig004.pdf_tex} \caption{\label{fig004}Le Soleil : transformations planes} \end{center} \end{figure} \paragraph{Application} On a vu que {Ibn al-\v{S}\=a\d{t}ir } donnait la longitude du Soleil moyen à $t=0$ : c'est $\overline{\lambda}_0=280°9'0''$. Pour avoir la position exacte du Soleil le long de l'écliptique à cette date, il faut corriger cette valeur par l'équation du Soleil comme on l'explique dans la section précédente. On trouve $\lambda=280°33'31''$. \`A titre de comparaison, l'Institut de Mécanique Céleste et de Calcul des \'Ephémérides de l'Observatoire de Paris donne $\lambda=280°29'$ le 24 décembre 1331 à 9 h 43 min. \paragraph{Distances extrémales et dimensions des orbes} Tandis que la figure \ref{fig003} représentait, pour chaque orbe, la trajectoire circulaire de son centre au sein de l'orbe le portant, la figure \ref{fig006} est une section des corps à symétrie sphérique constituant les orbes : chaque cercle y représente le bord d'un orbe. Cette figure n'est pas à l'échelle, mais on y a écrit les dimensions des orbes, en unités telles que $OP_3=60$. Il est alors facile de calculer la distance maximale du Soleil à la Terre, $OP=67;7$, et sa distance minimale, $52;53$. Soit $r$ le rayon du globe solaire lui-même. On a~: \begin{align*} \text{rayon du rotateur} &=2;30+r\\ \text{rayon du déférent} &=4;37+2;30+r=7;7+r\\ \text{rayon extérieur du parécliptique} &=60+7;7+r=67;7+r\\ \text{rayon intérieur du parécliptique} &=60-(7;7+r)=52;53-r \end{align*} L'épaisseur du parécliptique serait donc $14;14+2r$. 
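Le résultat de l'<<~Application~>> ci-dessus se prête à une vérification directe. L'esquisse suivante (Python, de notre fait) applique les formules du paragraphe <<~L'équation du Soleil~>> aux valeurs de l'époque et retrouve la longitude vraie à une seconde d'arc près de la valeur $280°33'31''$ donnée par le texte.

```python
# Longitude vraie du Soleil à l'époque (t = 0), d'après les formules ci-dessus -- esquisse.
import numpy as np

def sexa(*chiffres):                           # d;m,s,...  ->  degrés décimaux
    return sum(x / 60**k for k, x in enumerate(chiffres))

OP3, P3P4, P4P = 60.0, sexa(4, 37), sexa(2, 30)
lam_moyen = sexa(280, 9, 0)                    # Soleil moyen à t = 0
lam_A = sexa(89, 52, 3, 1)                     # apogée à t = 0
a = np.radians(lam_moyen - lam_A)              # le centre (alpha barre)

transv = (P3P4 - P4P) * np.sin(a)
radial = OP3 + (P3P4 + P4P) * np.cos(a)
c = -np.degrees(np.arcsin(transv / np.hypot(transv, radial)))   # équation du Soleil c(alpha)

lam = (lam_moyen + c) % 360                    # longitude vraie = Soleil moyen + équation
d, reste = divmod(lam * 3600, 3600)
m, s = divmod(reste, 60)
print(int(d), int(m), round(s, 1))             # environ 280 33 30.5 ; le texte donne 280°33'31''
```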
Or {Ibn al-\v{S}\=a\d{t}ir} nous dit, p.~\pageref{complement_sol}~: \begin{quote} ``Son épaisseur est quatorze degrés et trente-quatre minutes, plus un complément. Pour que [les orbes] soient mutuellement contigus quand on y plonge l'orbe déférent, nous prenons un complément d'un tiers de degré à l'extérieur et à l'extérieur, en parts telles que son rayon extérieur est soixante-sept parts et dix-sept minutes.'' \end{quote} Une épaisseur de $14;34$ inclut donc déjà un ``complément'' de $0;20$ égal au diamètre supposé du globe solaire dans le chapitre 7~; mais cette valeur est provisoire, comme on le voit dans la conclusion de la première partie de la \textit{Nihaya}. D'une part, l'observation permet de déterminer le diamètre du globe solaire (\textit{cf.} p.~\pageref{obs_diam_sol}). D'autre part, il faudra prendre un ``complément'' suffisant pour que la partie concave de l'orbe du Soleil touche exactement la partie convexe du système d'orbes de Vénus (\textit{cf.} p.~\pageref{contiguite_venus_sol}). Enfin, l'épaisseur de $0;43$ choisie pour l'orbe total dans le chapitre 7 a aussi un caractère provisoire~; elle n'influe en rien sur la trajectoire décrite par le Soleil dans ce modèle~; et {Ibn al-\v{S}\=a\d{t}ir} l'oubliera complètement dans sa conclusion. Pour chaque astre, {Ibn al-\v{S}\=a\d{t}ir} trace toujours deux figures, analogues des figures \ref{fig003} et \ref{fig006}. Celle représentant les trajectoires des centres des orbes est très utile à la compréhension du modèle mathématique : elle permet de situer la position des centres des orbes et donc aussi la position de l'astre. L'autre figure, représentant les ``orbes solides'', relève davantage de l'astronomie physique et de la nature des orbes ; elle nous offre une image du Monde par ses sections planes, et elle permet aussi d'en calculer les dimensions sous l'hypothèse que les systèmes d'orbes appartenant aux différents astres s'emboitent les uns dans les autres sans qu'il n'y ait de vide. \paragraph{Diamètre apparent} Le diamètre apparent du Soleil quand il est à distance moyenne ($OP'=60$) est, selon {Ibn al-\v{S}\=a\d{t}ir}, $0;32,32$ degrés. \`A partir de cette observation et des distances extrémales calculées précédemment, on trouve facilement le diamètre apparent à l'apogée~: $$\frac{0;32,32\times 60}{67;7}\simeq 0;29,5$$ et au périgée~: $$\frac{0;32,32\times 60}{52;53}\simeq 0;36,55$$ D'après Saliba\footnote{\textit{cf.} \cite{rashed1997} p.~103.}, {Ibn al-\v{S}\=a\d{t}ir} est le premier auteur étudiant sérieusement la variation du diamètre apparent du Soleil, en lien avec la possibilité des éclipses annulaires que n'avait pas envisagées Ptolémée ; il y avait là motivation à offrir un nouveau modèle du Soleil. Ces observations suffisent en effet à déterminer $OP_3+P_3P_4+P_4P$ et $OP_3-P_3P_4-P_4P$, d'où l'on tire $P_3P_4+P_4P$, à un facteur d'échelle près permettant de choisir $OP_3=60$. Pour régler entièrement les paramètres du modèle, il suffirait enfin de déterminer $P_3P_4-P_4P$. Saliba suggère qu'{Ibn al-\v{S}\=a\d{t}ir} a pu le faire en observant l'équation maximale. {Ibn al-\v{S}\=a\d{t}ir} mentionne des observations concernant le diamètre apparent du Soleil, p.~\pageref{diam_sol} et \pageref{diam_sol2}. Sans remettre en cause l'analyse de Saliba, nous aimerions suggérer que ce scénario n'est pas le seul possible. Un premier ensemble d'observations aurait pu permettre de déterminer l'équation maximale ou bien l'équation dans les quadratures. 
On montre facilement que l'équation dans les quadratures vaut : $$c(90°)=-\arctan\left(\frac{P_3P_4-P_4P}{OP_3}\right)$$ Quant à l'équation maximale $c_{\max}$, on montre facilement qu'elle est atteinte lorsque $$\cos\overline{\alpha}=-\frac{P_3P_4+P_4P}{OP_3},$$ et on a alors : $$(OP_3^2-(P_3P_4+P_4P)^2)\tan^2c_{\max}=(P_3P_4-P_4P)^2$$ {Ibn al-\v{S}\=a\d{t}ir} aurait ensuite pu utiliser des observations concernant l'équation du Soleil dans les octants : il mentionne lui-même ces observations comme étant un argument en faveur de ses modèles\footnote{\textit{cf. supra} l'introduction d'{Ibn al-\v{S}\=a\d{t}ir} p.~\pageref{doutes_excentrique}.}. On montre facilement que : $$\frac{P_3P_4-P_4P}{OP_3}=-\left(\sqrt{2}+\frac{P_3P_4+P_4P}{OP_3}\right)\tan c(45°)$$ Finalement, les observations concernant le diamètre apparent auraient très bien pu n'être utilisées que comme test d'un modèle déjà achevé. Dans tous les cas, il reste à étudier \emph{comment} de telles observations -- diamètre apparent, équation maximale, équation dans les quadratures, dans les octants -- ont pu être menées, et à quelle précision, avec les moyens de l'époque. \begin{figure} \begin{center} \input{fig005.pdf_tex} \caption{\label{fig005}L'équation du Soleil} \end{center} \end{figure} \begin{figure} \begin{center} \input{fig006.pdf_tex} \caption{\label{fig006}Les orbes solides du Soleil} \end{center} \end{figure} \paragraph{La Lune} La trajectoire de la Lune est contenue dans un plan incliné par rapport au plan de l'écliptique. {Ibn al-\v{S}\=a\d{t}ir} conçoit donc un \emph{orbe incliné}, dont le plan est immobile au sein du référentiel constitué par l'orbe parécliptique de la Lune. Ce plan incliné coupe le plan du parécliptique suivant une droite, la direction des <<~n{\oe}uds~>>. Les n{\oe}uds sont deux points du parécliptique, diamètralement opposés, dans cette direction. Comme l'orbe parécliptique de la Lune entraîne l'orbe incliné dans son mouvement, alors les n{\oe}uds se déplacent de manière uniforme le long de l'écliptique, par rapport au point vernal. Pour décrire le mouvement des points situés sur le plan de l'orbe incliné, {Ibn al-\v{S}\=a\d{t}ir} utilise comme point de référence un point situé <<~en face du point vernal~>>~: l'arc de l'orbe incliné compris entre ce point et le n{\oe}ud ascendant est égal à l'arc de l'écliptique compris entre le point vernal et le n{\oe}ud ascendant (voir figure \ref{latitude}). {Ibn al-\v{S}\=a\d{t}ir} devait en effet rabattre le plan de l'orbe incliné dans le plan de l'écliptique au moyen d'une rotation dont l'axe est la direction des n{\oe}uds. Le point <<~en face du point vernal~>> devient alors une origine naturelle pour les mesures angulaires. \begin{figure} \begin{center} \input{fig007.pdf_tex} \caption{\label{latitude}Mouvement en latitude de la Lune} \end{center} \end{figure} \paragraph{La Lune : figure initiale} Soit $(\mathbf{i},\mathbf{j},\mathbf{k})$ la base canonique de $\mathbb{R}^3$. Dans la figure initiale, les plans des orbes sont tous rabattus dans le plan de l'écliptique $(\mathbf{i},\mathbf{j})$ par des rotations, les centres des orbes sont tous alignés dans la direction du vecteur $\mathbf{j}$ elle-même confondue avec la direction du point vernal~; enfin, la ligne des n{\oe}uds, à l'intersection des plans de l'orbe incliné et du parécliptique, est orientée dans la direction du vecteur $\mathbf{j}$, le n{\oe}ud ascendant étant du même côté que $\mathbf{j}$. 
Le point $O$ est le centre du Monde, $P_1$ le centre du parécliptique de la Lune, et $P_2$ le centre de son orbe incliné. Ces trois points sont confondus $O=P_1=P_2$. Le point $P_3$ est le centre de l'orbe de l'épicycle, $P_4$ le centre de l'orbe rotateur, et $P$ le centre de la Lune, cet astre étant lui-même un corps sphérique. La figure \ref{fig008} précise la position de ces points, mais elle n'est pas à l'échelle. Bien qu'ils soient alignés, on remarque que les rayons vecteurs $\overrightarrow{P_nP_{n+1}}$ ne sont pas tous dans le même sens. On pose~: $\overrightarrow{OP_3}=60\ \mathbf{j}$, alors $$\begin{array}{l} \overrightarrow{P_3P_4}=6;35\ \mathbf{j}\\ \overrightarrow{P_4P}=-1;25\ \mathbf{j} \end{array}$$ Il faut appliquer au point $P$ une suite de transformations géométriques pour obtenir la position de la Lune prédite ou observée à un instant donné. \begin{figure} \begin{center} \input{fig008.pdf_tex} \caption{\label{fig008}Les orbes de la Lune : figure initiale} \end{center} \end{figure} \paragraph{La Lune : transformations géométriques} Les transformations appliquées à la Lune $P$ sont des rotations paramétrées par quatre angles~: le \emph{n{\oe}ud} $\lambda_{\ascnode}$, la \emph{Lune propre} $\overline{\alpha}$, le \emph{centre de la Lune} $2\overline{\eta}$, et l'argument de latitude\footnote{Cette grandeur mesure un arc le long de l'orbe incliné entre le n{\oe}ud ascendant et le centre de l'épicycle~; il ne faut donc pas la confondre avec une latitude.} moyen $(\overline{\lambda}+\lambda_{\ascnode})$. Enfin, $\overline{\lambda}$ désigne la \emph{Lune moyenne}. Voici la liste des transformations géométriques utilisées\footnote{Il règne une légère ambiguïté concernant $\dot{\lambda}_{\tiny\ascnode}$ dans le texte d'{Ibn al-\v{S}\=a\d{t}ir}. Quant il décrit le mouvement du parécliptique de la Lune, qu'il dit être $0;3,10,38,27$ par jour, il ajoute <<~en vérité, à proprement parler, ce mouvement est l'excédent du mouvement des n{\oe}uds sur le mouvement des étoiles fixes~>> (p.~\pageref{lab001} \textit{supra}). Est-ce à dire que la valeur donnée par {Ibn al-\v{S}\=a\d{t}ir} est la différence entre le mouvement du parécliptique de la Lune et le mouvement des fixes (précession), tous deux relatifs au référentiel solide du neuvième orbe~? Ce serait donc leur somme algébrique puisque ces deux mouvements vont en sens contraires~; c'est l'interprétation que nous avons retenue dans notre commentaire mathématique. Dans ce cas, la rotation $R(P_1,\mathbf{k},-\lambda_{\tiny\ascnode})$ rend en fait compte du mouvement des fixes \textit{et} du mouvement du parécliptique~; le <<~n{\oe}ud moyen~>> et le <<~mouvement du n{\oe}ud moyen~>> désignent alors des grandeurs mesurées par rapport au référentiel solide du neuvième orbe et décrivant le mouvement du parécliptique au sein de ce référentiel. {Ibn al-\v{S}\=a\d{t}ir} dit plus loin (p.~\pageref{lab002}) que le n{\oe}ud moyen est l'arc <<~entre le commencement du Bélier et le n{\oe}ud ascendant, dans l'orbe parécliptique~>>. Si notre interprétation est juste, il faut donc penser qu'il commet un abus de langage en désignant le point vernal par la locution <<~commencement du Bélier~>>, puisque le commencement du Bélier désigne un point de l'écliptique qui subit le mouvement de précession, et non un point du neuvième orbe. Mais notre interprétation est confortée par le sort analogue dévolu au mouvement des apogées pour les autres astres. 
Par exemple, dans le modèle du Soleil vu ci-dessus, la grandeur $\dot{\lambda}_A$ du mouvement de l'apogée (un degré toutes les soixante années persanes) est bien la somme du mouvement de précession (un degré toutes les soixante-dix années persanes) et du mouvement de l'orbe total.}~: $$R(P_4,\mathbf{k},2\overline{\eta}),\quad R(P_3,\mathbf{k},-\overline{\alpha}),\quad R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode}),\quad R(P_2,\mathbf{j},5°), \quad R(P_1,\mathbf{k},-\lambda_{\ascnode}).$$ La rotation dont l'axe est dans la direction du vecteur $\mathbf{j}$ a pour effet d'incliner le plan de l'orbe incliné par rapport au plan du parécliptique. La description d'Ibn al-\v{S}\=a\d{t}ir ne laisse aucun doute quant à l'ordre dans lequel appliquer ces quatres transformations. Pour obtenir la configuration des orbes à un instant donné, il faut donc appliquer toutes ces rotations aux points $P_3$, $P_4$, $P$. L'image du point $P_3$ entraîné par les mouvements du parécliptique et de l'orbe incliné est~:\label{composees_lune} $$R(P_1,\mathbf{k},-\lambda_{\ascnode})\circ R(P_2,\mathbf{j},5°) \circ R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode})\ (P_3)$$ Quant au point $P_4$, il est aussi entraîné par le mouvement de l'orbe de l'épicycle et devient~: $$R(P_1,\mathbf{k},-\lambda_{\ascnode})\circ R(P_2,\mathbf{j},5°) \circ R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode})\circ R(P_3,\mathbf{k},-\overline{\alpha})\ (P_4)$$ Enfin le point $P$, aussi entraîné par le mouvement de l'orbe rotateur, devient~: $$R(P_1,\mathbf{k},-\lambda_{\ascnode})\circ R(P_2,\mathbf{j},5°) \circ R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode})\circ R(P_3,\mathbf{k},-\overline{\alpha}) \circ R(P_4,\mathbf{k},2\overline{\eta})\ (P)$$ \paragraph{La Lune : trajectoire paramétrée} Le modèle de la Lune est couplé au mouvement du Soleil moyen dont on notera désormais $\dot{\overline{\lambda}}_{\astrosun}$ et $\overline{\lambda}_{\astrosun}(0)$ les paramètres notés précédemment $\dot{\overline{\lambda}}$ et $\overline{\lambda}(0)$, puisque ces derniers symboles concernent désormais la Lune moyenne. Pour la Lune, la trajectoire dans l'ensemble des valeurs des paramètres est~: $$\lambda_{\ascnode}=\dot{\lambda}_{\ascnode}t+\lambda_{\ascnode}(0)$$ $$\overline{\lambda}+\lambda_{\ascnode}=(\dot{\overline{\lambda}}+\dot{\lambda}_{\ascnode}) t+\overline{\lambda}(0)+\lambda_{\ascnode}(0)$$ $$\overline{\alpha}=\dot{\overline{\alpha}}t+\overline{\alpha}(0)$$ $$2\overline{\eta}=2(\dot{\overline{\lambda}}-\dot{\overline{\lambda}}_{\astrosun})t+2(\overline{\lambda}(0)-\overline{\lambda}_{\astrosun}(0))$$ On a, d'après {Ibn al-\v{S}\=a\d{t}ir}~: $$\overline{\lambda}_{\astrosun}(0)=280;9,0,\quad\dot{\overline{\lambda}}_{\astrosun}=0;59,8,19,43,33,41,55,4,6\text{ par jour solaire moyen}$$ $$\overline{\lambda}(0)=213;35,50,\quad\dot{\overline{\lambda}}=13;10,35,1,13,53\text{ par jour solaire moyen}$$ $$\overline{\alpha}(0)=138;32,27,\quad\dot{\overline{\alpha}}=13;3,53,56\text{ par jour solaire moyen}$$ $$\lambda_{\ascnode}(0)=275;7,35,\quad\dot{\lambda}_{\ascnode}=0;3,10,38,27\text{ par jour solaire moyen}$$ d'où $\dot{\overline{\lambda}}+\dot{\lambda}_{\ascnode}=13;13,45,39,40$ par jour solaire moyen. On appelle <<~élongation moyenne~>> la grandeur $\overline{\eta}=\overline{\lambda}-\overline{\lambda}_{\astrosun}$. 
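Un rapide contrôle d'arithmétique sexagésimale (esquisse Python de notre fait) retrouve la somme $\dot{\overline{\lambda}}+\dot{\lambda}_{\ascnode}$ donnée ci-dessus et fournit au passage le mouvement journalier du <<~centre de la Lune~>> $2\overline{\eta}$.

```python
# Contrôle des moyens mouvements journaliers de la Lune, en notation sexagésimale -- esquisse.
from fractions import Fraction

def sexa(*chiffres):
    return sum(Fraction(x, 60**k) for k, x in enumerate(chiffres))

def vers_sexa(x, rangs=5):
    """Réécrit une Fraction positive en chiffres sexagésimaux [entier, m, s, ...]."""
    sortie, reste = [int(x)], x - int(x)
    for _ in range(rangs):
        reste *= 60
        sortie.append(int(reste))
        reste -= int(reste)
    return sortie

lune_moyenne = sexa(13, 10, 35, 1, 13, 53)      # mouvement journalier de la Lune moyenne
noeud        = sexa(0, 3, 10, 38, 27)           # mouvement journalier du noeud
soleil_moyen = sexa(0, 59, 8, 19, 43, 33, 41, 55, 4, 6)

print(vers_sexa(lune_moyenne + noeud))              # [13, 13, 45, 39, 40, 53] : le texte tronque à 13;13,45,39,40
print(vers_sexa(2 * (lune_moyenne - soleil_moyen))) # mouvement journalier du centre de la Lune (2 eta barre)
```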
\paragraph{La Lune : transformations planes} Si $R$ et $S$ sont deux rotations dans l'espace, il existe une rotation $T$ telle que $R\circ S=T\circ R$, à savoir, la rotation de même angle que $S$ et dont l'axe est l'image par $R$ de l'axe de $S$. Appliquons cette relation de commutation à toutes les composées de rotations décrites p.~\pageref{composees_lune} ci-dessus, de sorte à réécrire à droite toutes les rotations dont l'axe est dans la direction du vecteur $\mathbf{k}$. Il suffit de remarquer que~: $$R(P_1,\mathbf{k},-\lambda_{\ascnode})\circ R(P_2,\mathbf{j},5°)= R(P_2,\mathbf{u},5°)\circ R(P_1,\mathbf{k},-\lambda_{\ascnode})$$ où le vecteur $\mathbf{u}$ est l'image de $\mathbf{j}$ par $R(P_1,\mathbf{k},-\lambda_{\ascnode})$. L'image de $P$ est donc~: $$R(P_1,\mathbf{u},5°)(P')$$ où $$P'=R(P_1,\mathbf{k},-\lambda_{\ascnode})\circ R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode}) \circ R(P_3,\mathbf{k},-\overline{\alpha})\circ R(P_4,\mathbf{k},2\overline{\eta})\ (P).$$ On introduit de même les points $P_3'$ et $P_4'$ suivants~: $$P_3'=R(P_1,\mathbf{k},-\lambda_{\ascnode})\circ R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode})\ (P_3)$$ $$P_4'=R(P_1,\mathbf{k},-\lambda_{\ascnode}) \circ R(P_2,\mathbf{k},\overline{\lambda}+\lambda_{\ascnode})\circ R(P_3,\mathbf{k},-\overline{\alpha})\ (P_4)$$ Tous ces points sont dans le plan de la figure initiale~: calculer leurs positions relève entièrement de la géométrie plane. \begin{figure} \begin{center} \input{fig009.pdf_tex} \caption{\label{fig009}La Lune : transformations planes} \end{center} \end{figure} \paragraph{Equations de la Lune}\label{equ_lune} Le calcul de la position du point $P'$ revient à résoudre les triangles rectangles représentés sur la figure \ref{fig009}. On calcule d'abord l'angle $c_1=-(\overrightarrow{P_3'P_4'},\overrightarrow{P_3'P'})$ appelé <<~équation de la Lune propre~>> (cf.~p.~\pageref{c1}). C'est une fonction de $2\overline{\eta}$~: $$c_1=\arcsin\left(\frac{P_4P\sin2\overline{\eta}}{P_3'P'}\right)$$ où $P_3'P'$ est le <<~rayon de l'épicycle apparent~>>~: $$P_3'P'=\sqrt{(P_3P_4-P_4P\cos2\overline{\eta})^2+(P_4P\sin2\overline{\eta})^2}$$ On calcule ensuite une autre <<~équation~>> $c_2=(\overrightarrow{OP_3'},\overrightarrow{OP'})$~: $$c_2=\arcsin\left(\frac{P_3'P'\sin(-\overline{\alpha}-c_1)}{OP'}\right)$$ où $$OP'=\sqrt{(OP_3+P_3'P'\cos(\overline{\alpha}+c_1))^2+(P_3'P'\sin(\overline{\alpha}+c_1))^2}$$ La position de $P'$ est alors donnée par $OP'$ et par l'angle suivant~: $$\left(\mathbf{j},\overrightarrow{OP'}\right)=\overline{\lambda}+c_2$$ La fonction $c_1$ est une fonction d'une seule variable, et $c_2$ fonction des deux variables $\overline{\alpha}$ et $2\overline{\eta}$. Dans ce qui suit, $c_2$ sera plutôt conçue comme une fonction de $\alpha=\overline{\alpha}+c_1$ et de $2\overline{\eta}$, notée $c_2(\alpha,2\overline{\eta})$. Plus précisément, $$c_2(\alpha,2\overline{\eta})=\arcsin\left(\frac{P_3'P'\sin(-\alpha)}{\sqrt{(OP_3+P_3'P'\cos\alpha)^2+(P_3'P'\sin\alpha)^2}}\right)$$ où $P_3'P'$ est la fonction de $2\overline{\eta}$ vue ci-dessus. 
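
Ces formules se prêtent telles quelles au calcul numérique~; en voici une esquisse en Python (noms et découpage de notre fait), avec $OP_3=60$, $P_3P_4=6;35$ et $P_4P=1;25$~:
\begin{verbatim}
# Esquisse : les equations c1 et c2 de la Lune, d'apres les formules ci-dessus.
from math import radians, degrees, sin, cos, asin, sqrt

OP3  = 60.0
P3P4 = 6 + 35/60          # 6;35
P4P  = 1 + 25/60          # 1;25

def rayon_epicycle_apparent(deux_eta):
    """P3'P' en fonction de 2*eta_barre (en degres)."""
    e = radians(deux_eta)
    return sqrt((P3P4 - P4P*cos(e))**2 + (P4P*sin(e))**2)

def c1(deux_eta):
    """Equation de la Lune propre, en degres."""
    e = radians(deux_eta)
    return degrees(asin(P4P*sin(e) / rayon_epicycle_apparent(deux_eta)))

def c2(alpha, deux_eta):
    """Deuxieme equation, fonction de alpha = alpha_barre + c1 et de 2*eta_barre."""
    r = rayon_epicycle_apparent(deux_eta)
    a = radians(alpha)
    OPprime = sqrt((OP3 + r*cos(a))**2 + (r*sin(a))**2)
    return degrees(asin(r*sin(-a) / OPprime))
\end{verbatim}

On vérifie par exemple que $\max_\alpha\vert c_2(\alpha,0)\vert\simeq 4;56$ et $\max_\alpha\vert c_2(\alpha,180°)\vert\simeq 7;40$, valeurs qui reviendront plus loin à propos du calibrage du modèle.
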
Ces fonctions sont bien sûr $2\pi$-périodiques par rapport à chacune des variables, mais on remarque aussi que~: $$c_1(360°-2\overline{\eta})=-c_1(2\overline{\eta})$$ $$c_2(360°-\alpha,\ 360°-2\overline{\eta})=-c_2(\alpha,2\overline{\eta})$$ \paragraph{Fonction de deux variables et interpolation} Les valeurs de $c_1$ et $c_2$ devront être reportées dans des tables ; mais $c_2$ est fonction de deux variables, et il faudrait donc construire une table pour chaque valeur de $2\overline{\eta}$. Les astronomes, à partir de Ptolémée\footnote{\textit{cf.} \cite{pedersen1974}, p.~84-89.}, utilisaient dans ce contexte une méthode d'interpolation visant à calculer une valeur approchée de $c_2$ au moyen d'un produit d'une fonction de $\alpha$ par une fonction de $2\overline{\eta}$. \`A $\alpha$ donné, {Ibn al-\v{S}\=a\d{t}ir} interpole les valeurs de $c_2$ entre $c_2(\alpha,0)$ et $c_2(\alpha,180°)$ au moyen de la formule suivante (qui \emph{n'est pas} linéaire en $2\overline{\eta}$)~: $$c_2(\alpha,2\overline{\eta})\simeq c_2(\alpha,0) +\chi(2\overline{\eta})(c_2(\alpha,180°)-c_2(\alpha,0)).$$ Le coefficient d'interpolation $\chi$ est défini par~: $$\chi(2\overline{\eta}) =\frac{\max\vert c_2(\cdot,2\overline{\eta})\vert-\max\vert c_2(\cdot,0)\vert} {\max\vert c_2(\cdot,180°)\vert-\max\vert c_2(\cdot,0)\vert}$$ Ce coefficient est fonction d'une seule variable, on peut donc le calculer pour une série de valeurs de $2\overline{\eta}$ et en faire une table. Pour ce faire, on remarque que le maximum $\max\vert c_2(\cdot,2\overline{\eta})\vert$ est atteint quand $(\tan(c_2(\alpha,2\overline{\eta})))^2$ est maximum. En posant $z^{-1}=OP_3+P_3'P'\cos\alpha$, on montre que $(\tan(c_2(\alpha,2\overline{\eta})))^2$ est un polynôme de degré 2 en $z$. On calcule facilement son maximum\footnote{{Ibn al-\v{S}\=a\d{t}ir} procède géomètriquement, en considérant une droite tangente à l'<<~épicycle apparent~>> qui est un cercle de rayon $P_3'P'$.}, il est atteint quand $$\cos\alpha=-\frac{P_3'P'}{OP_3}$$ $$\vert\sin\alpha\vert=\sqrt{1-\left(\frac{P_3'P'}{OP_3}\right)^2}.$$ On reporte ces valeurs dans $\vert c_2(\alpha,2\overline{\eta})\vert$, et on trouve~: $$\max\vert c_2(\cdot,2\overline{\eta})\vert=\arcsin\frac{P_3'P'}{OP_3}.$$ \paragraph{Trigonométrie sphérique} On va à présent calculer les coordonnées sphériques du point $R(P_1,\mathbf{u},5°)(P')$ par rapport à l'écliptique. L'angle formé entre le vecteur $\mathbf{u}$ et la direction du point $P'$ vaut (modulo 360°)~: $$(\mathbf{u},\overrightarrow{OP'})=(\mathbf{j},\overrightarrow{OP'})-(\mathbf{j},\mathbf{u})=\overline{\lambda}+c_2+\lambda_{\ascnode}$$ Les égalités suivantes seront aussi prises modulo 360°. Sur la figure \ref{fig010}, on a représenté sur la sphère de l'écliptique le point $B$ dans la direction du point $R(P_1,\mathbf{u},5°)(P')$, et le point $C$ dans la direction du vecteur $\mathbf{u}$. 
En particulier, $$(\overrightarrow{OC},\overrightarrow{OB})=\overline{\lambda}+\lambda_{\ascnode}+c_2.$$ L'étude du triangle sphérique $ABC$ donne~: $$\tan(\overrightarrow{OC},\overrightarrow{OA}) =\cos(5°)\times\tan(\overline{\lambda}+\lambda_{\ascnode}+c_2)$$ La longitude du point $R(P_1,\mathbf{u},5°)(P')$ par rapport à l'écliptique, en prenant la direction du point vernal, c'est-à-dire $\mathbf{j}$, comme origine, est donc, quand $\overline{\lambda}+\lambda_{\ascnode}+c_2\in\ ]-90°,\ 90°[$~: $$(\mathbf{j},\overrightarrow{OA})=(\overrightarrow{OC},\overrightarrow{OA})-(\overrightarrow{OC},\mathbf{j})=\arctan(\cos(5°)\times\tan(\overline{\lambda}+\lambda_{\ascnode}+c_2))-\lambda_{\ascnode}$$ Quand au contraire $\overline{\lambda}+\lambda_{\ascnode}+c_2\in\ ]90°,\ 270°[$, on a~: $$(\mathbf{j},\overrightarrow{OA})=180°+\arctan(\cos(5°)\times\tan(\overline{\lambda}+\lambda_{\ascnode}+c_2))-\lambda_{\ascnode}.$$ On rassemble ces deux cas dans la formule suivante, valable pour toutes les valeurs des paramètres : $$(\mathbf{j},\overrightarrow{OA})=\overline{\lambda}+c_2+e_n(\overline{\lambda}+\lambda_{\ascnode}+c_2)$$ où l'<<~équation du déplacement~>> $e_n(x)$ est définie comme suit sur l'intervalle $[-90°,270°]$, et ailleurs par périodicité : $$e_n(x)=\left\lbrace\begin{array}{l} \arctan(\cos(5°)\times\tan(x))-x,\text{ si }x\in\ ]-90°,\ 90°[\\ 180°+\arctan(\cos(5°)\times\tan(x))-x,\text{ si }x\in\ ]90°,\ 270°[\\ 0°,\text{ si }x=\pm 90° \end{array}\right.$$ Enfin, la latitude de la Lune est~: $$(\overrightarrow{OA},\overrightarrow{OB})= \arcsin(\sin(5°)\times\sin(\overline{\lambda}+\lambda_{\ascnode}+c_2)).$$ On peut aussi l'obtenir ainsi (c'est la solution retenue par {Ibn al-\v{S}\=a\d{t}ir} au chapitre 11)~: $$(\overrightarrow{OA},\overrightarrow{OB})= \arctan(\tan(5°)\times\sin(\overline{\lambda}+\lambda_{\ascnode}+c_2+e_n(\overline{\lambda}+\lambda_{\ascnode}+c_2))).$$ \begin{figure} \begin{center} \input{fig010.pdf_tex} \caption{\label{fig010}La Lune : coordonnées de $R(P_1,\mathbf{u},5°)(P')$} \end{center} \end{figure} \paragraph{La Lune : critique des modèles antérieurs} Les critiques adressées par {Ibn al-\v{S}\=a\d{t}ir} à ses prédécesseurs p.~\pageref{doutes_lune} concernent~: \begin{itemize} \item l'usage d'un orbe \emph{excentrique} \item la question de l'uniformité du mouvement de rotation d'un orbe excentrique par rapport à un axe passant par son centre \item ``le fait que le rayon de l'épicycle suive un point autre que le centre de l'orbe qui le porte'' (en particulier le point de \textit{prosneuse} dans le troisième modèle de Ptolémée) \item la variation du \emph{diamètre apparent} de la Lune, vu de la Terre \item l'équation de la Lune dans les \emph{octants} \end{itemize} Pour mieux évaluer les critiques adressées par {Ibn al-\v{S}\=a\d{t}ir} à ses prédécesseurs, et la précision de ses propres modèles et des observations dont il a pu les déduire, on procédera, pour chaque astre, au choix d'une période de référence commençant à l'époque choisie par {Ibn al-\v{S}\=a\d{t}ir}. Pour la Lune, on a choisi trente jours à partir du 24 décembre 1331. \`A des fins de comparaison, il faut remarquer que les paramètres des modèles planétaires exprimés en ``mouvements moyens'' ont toujours une valeur phénoménologique indépendante du modèle géométrique considéré.
Par exemple pour la Lune, $\overline{\lambda}$ est la direction d'un point se mouvant uniformément sur l'écliptique et dont la vitesse est la vitesse \emph{moyenne} de la Lune en longitude\footnote{à ne pas confondre avec la vitesse instantanée $\dot{\lambda}$~; d'où notre notation $\dot{\overline{\lambda}}$ pour la vitesse moyenne, qui est une constante.}. La longitude vraie $\lambda$ de la Lune oscille autour de $\overline{\lambda}$ ; en théorie, on peut donc déterminer $\overline{\lambda}$ statistiquement, indépendamment d'un quelconque modèle géométrique. Il est encore plus facile de calculer la vitesse moyenne $\dot{\overline{\lambda}}$~: c'est $360°$ divisé par une période de retour aux mêmes longitudes. Les Babyloniens savaient déjà la calculer en moyennant sur un grand nombre de périodes. Pour comparer les modèles géométriques d'{Ibn al-\v{S}\=a\d{t}ir} et ceux de ses prédécesseurs, on pourra donc les utiliser pour calculer la trajectoire d'un \emph{même} astre sur une \emph{même} période de référence, en utilisant un \emph{même} jeu de paramètres concernant les ``mouvements moyens''. Les résultats d'une telle comparaison seront nécessairement partiels car aucun astre -- pas même le Soleil si l'on tient compte du mouvement de l'Apogée -- n'a un mouvement strictement périodique dans les modèles les plus élaborés. Ainsi pour la Lune, la figure \ref{fig011} compare le modèle d'{Ibn al-\v{S}\=a\d{t}ir} et les deuxième et troisième modèles proposés par Ptolémée. On a tracé plusieurs graphes. Le premier représente l'équation de la Lune $c_2+e_n(\overline{\lambda}+\lambda_{\ascnode}+c_2)$ calculée au moyen du modèle d'{Ibn al-\v{S}\=a\d{t}ir} ci-dessus, pendant trente jours à partir du 24 décembre 1331. Sur le même repère, on a aussi représenté l'équation telle que pouvait l'\emph{observer} {Ibn al-\v{S}\=a\d{t}ir}. Pour la connaître, à une date donnée, on calcule la longitude précise de la Lune au moyen d'outils modernes --~éphémérides de l'IMCCE~-- puis on en retranche la Lune moyenne $\overline{\lambda}=\overline{\lambda}(0)+\dot{\overline{\lambda}}t$ calculée avec les paramètres d'{Ibn al-\v{S}\=a\d{t}ir} pour cette date. Enfin, toujours sur le même repère, on a représenté l'équation de la Lune calculée au moyen des deuxième et troisième modèles de Ptolémée dans l'\textit{Almageste}. Remarquons que le modèle décrit par Ptolémée dans les \textit{Hypothèses planétaires} est identique au deuxième modèle de l'\textit{Almageste}. \`A des fins de référence, la figure \ref{fig015} rappelle les caractéristiques du troisième modèle de la Lune dans l'\textit{Almageste}. Rappelons que le premier modèle de la Lune dans l'\textit{Almageste} est le modèle ``naïf'' avec un épicycle pour rendre compte de la première anomalie lunaire. Le deuxième modèle de la Lune dans l'\textit{Almageste} fait intervenir un excentrique dont le centre n'est pas fixe. Cet excentrique porte un point, le centre de l'épicycle, qui tourne à vitesse angulaire constante par rapport au centre du Monde. Le troisième modèle de la Lune précise que la direction à partir de laquelle est compté le mouvement propre de rotation de l'épicycle, $\overline{\alpha}$ dans la figure \ref{fig015}, passe par le centre $P_4'$ de l'épicycle et par un point de ``prosneuse'' qui est distinct du centre $P_3'$ de l'excentrique et du centre du Monde $O$. La figure \ref{fig011} montre que tous les modèles donnent des résultats corrects aux syzygies, \textit{i.e.}, pleine lune et nouvelle lune, indiquées en abscisse.
D'ailleurs, c'est aussi le cas du premier modèle de Ptolémée --- non représenté ici --- puisqu'il est déduit des observations aux syzygies. Le deuxième modèle de Ptolémée corrige le premier en tenant compte d'observations dans les quadratures, \textit{i.e.}, à mi-chemin entre les syzygies. Ce faisant, il ouvre le flanc à la première critique d'{Ibn al-\v{S}\=a\d{t}ir}, puisqu'il utilise un excentrique. Mais il encourt aussi son troisième reproche, puisque la direction de l'``apogée'' de l'épicycle, c'est-à-dire la direction à partir de laquelle est mesuré le mouvement propre de rotation de l'épicycle, passe par le centre de l'épicycle et le centre du monde, et non par le centre de l'orbe excentrique portant le centre de l'épicycle. De plus, sur la figure \ref{fig011}, le deuxième modèle de Ptolémée présente un défaut évident aux octants. Dans son troisième modèle, Ptolémée corrige le deuxième modèle en supposant que la direction de l'``apogée'' de l'épicycle passe par le centre de l'épicycle et par un autre point, qu'on désignera par la locution ``point de prosneuse''. Ce point est distinct du centre de l'excentrique et du centre du monde. Le modèle se prête donc toujours aux critiques mentionnées ci-dessus, sauf concernant les octants. Etrangement, p.~\pageref{doutes_lune}, {Ibn al-\v{S}\=a\d{t}ir} semble ignorer que Ptolémée a précisément utilisé des observations dans les octants pour déduire son troisième modèle\footnote{\textit{Cf.} \textit{Almageste} V.5, \textit{in} \cite{ptolemy1952} p.~149.}. Comment est-ce possible ? Peut-être n'avait-il qu'une connaissance indirecte de l'\textit{Almageste}, par le biais d'un des nombreux commentaires qui circulaient à l'époque, peut-être même seulement par l'intermédiaire de la \textit{Ta\b{d}kirat} \cite{altusi1993}, qu'il connaissait, et où \d{T}\=us{\=\i} expose le troisième modèle de Ptolémée en mentionnant des observations dans les \textit{sextants} (et non les \textit{octants})\footnote{En revanche, `Ur\d{d}{\=\i}, dans son \textit{Kit\=ab al-hay'a} \cite{saliba1990}, commente les observations de Ptolémée dans les octants. Mais {Ibn al-\v{S}\=a\d{t}ir} a pu négliger cette partie du traité de `Ur\d{d}{\=\i} après avoir compris le défaut du modèle lunaire de `Ur\d{d}{\=\i} -- \textit{cf.} notre analyse ci-dessous.}. On a aussi représenté les latitudes de Lune prédites par le modèle d'{Ibn al-\v{S}\=a\d{t}ir} pour les mêmes dates, \textit{cf.} fig. \ref{fig012}. {Ibn al-\v{S}\=a\d{t}ir} n'adresse aucune objection à ses prédécesseurs concernant les latitudes de la Lune, et son modèle reproduit de très près les prédictions en latitude du troisième modèle de Ptolémée~: à l'échelle de notre tracé, les deux courbes seraient indiscernables. Enfin, on a représenté la distance Terre-Lune à la figure \ref{fig016}, où l'on voit le progrès radical accompli par {Ibn al-\v{S}\=a\d{t}ir}. Le troisième modèle de Ptolémée\footnote{Son deuxième modèle présente le même défaut.} prédit en effet des valeurs aberrantes pour la variation de la distance Terre-Lune, et donc aussi pour la variation du diamètre apparent de la Lune -- il semble varier du simple au double selon ce modèle. Le modèle d'{Ibn al-\v{S}\=a\d{t}ir} offre des prédictions correctes en ordre de grandeur. 
\begin{figure} \begin{center} \input{fig011.pdf_tex} \caption{\label{fig011}L'équation de la Lune en 1331, sur 30 jours} \end{center} \end{figure} \begin{figure} \begin{center} \input{fig015.pdf_tex} \caption{\label{fig015}Le troisième modèle de la Lune dans l'\textit{Almageste}} \end{center} \end{figure} \begin{figure} \begin{center} \input{fig012.pdf_tex} \caption{\label{fig012}Latitudes de la Lune en 1331, sur 30 jours} \end{center} \end{figure} \begin{figure} \begin{center} \input{fig016.pdf_tex} \caption{\label{fig016}Distance Terre-Lune en 1331, sur 30 jours} \end{center} \end{figure} \paragraph{La Lune chez al-`Ur\d{d}{\=\i}} {Ibn al-\v{S}\=a\d{t}ir} adresse une critique plus circonstanciée à son prédécesseur Mu'ayyad al-Din al-`Ur\d{d}{\=\i} (?-1266). Il écrit en effet~: \begin{quote} La position conjecturée par al-Mu'ayyad al-`Ur\d{d}{\=\i} dans la configuration des orbes de la Lune, concernant son inversion du sens du mouvement de l'excentrique et la variation de l'apogée de l'épicycle qui suit un point autre que le centre de l'orbe portant l'épicycle, est impossible.\footnote{\textit{Cf. supra} p.~\pageref{urdi_lune}.} \end{quote} Pour une fois, {Ibn al-\v{S}\=a\d{t}ir} ne critique pas l'usage même d'un excentrique~; le problème est ailleurs. On peut penser que le modèle en question est celui décrit par `Ur\d{d}{\=\i} dans son \textit{Kit\=ab al-Hay'a}\footnote{\textit{Cf.} l'édition critique du texte arabe par Saliba \cite{saliba1990}, accompagnée d'une introduction en anglais où Saliba décrit le modèle de `Ur\d{d}{\=\i}, p.~50-55.}. Suivons la description donnée par al-`Ur\d{d}{\=\i} : \begin{quote} Supposons que l'apogée de la Lune, le centre de son épicycle et le Soleil coïncident en un point donné de l'écliptique, à un instant donné. Qu'ils se meuvent, chacun selon son mouvement.\footnote{ \textit{Cf.} \cite{saliba1990} p.~ 118.} \end{quote} On reconnaît la figure initiale que nous avons dessiné fig.~\ref{fig017}, à gauche, où $P_1$ est le centre de l'``orbe des n{\oe}uds'', $P_2$ le centre de l'orbe incliné, $P_3$ le centre du déférent, $P_4$ le centre de l'épicycle, et $P$ la Lune. On a~: $$P_2P_3=10;19\qquad P_3P_4=49;41\qquad P_4P=5;15$$ Al-`Ur\d{d}{\=\i} continue~: \begin{quote} Imaginons l'orbe des n{\oe}uds se mouvoir autour du centre du Monde, sur ses pôles et sur son axe qui est l'axe de l'écliptique, en sens contraire aux signes, d'un mouvement uniforme égal à l'excès du mouvement en latitude sur le mouvement en longitude. Avec soi, il déplace aussi l'orbe incliné et tout ce qu'il contient [...] Imaginons l'apogée se mouvoir, uniformément, autour de pôles situés à la surface de l'orbe incliné. L'axe de cet orbe rencontre l'axe de l'écliptique au centre du Monde. Son mouvement par jour, dans le sens des signes, est $37;36,39,2$ [$=2\overline{\eta}+\overline{\lambda}+\lambda_{\ascnode}$] [...] L'orbe déférent se meut aussi et emporte l'épicycle en sens contraire aux signes, avec un mouvement par jour égal au double de l'élongation. Si l'on soustrait du mouvement de l'apogée (dans le sens des signes) les mouvements du n{\oe}ud et du déférent (en sens contraire), ce qui reste est le mouvement en longitude par jour.\footnote{\textit{Cf.} \cite{saliba1990} p.~118.} \end{quote} \begin{figure} \begin{center} \input{fig017.pdf_tex} \caption{\label{fig017}La Lune dans le \textit{Kitab al-Hay'at} de `Ur\d{d}{\=\i}} \end{center} \end{figure} Autrement dit, on passe de la figure initiale à la figure à droite en appliquant une composée de rotations. 
Si l'on néglige l'inclinaison de l'orbe incliné, on obtient ainsi~: $$P'=R_{O,3\overline{\eta}+\overline{\lambda}_{\mbox{\tiny\astrosun}}}\,R_{P_3,-2\overline{\eta}}\,R_{P_4,-\overline{\alpha}}(P)$$ et en tenant compte de l'inclinaison de l'orbe incliné : $$P'=R_{P_1,-\lambda_{\mbox{\tiny\ascnode}}} \,R_{P_1,\mathbf{j},5°} \,R_{P_2,\lambda_{\mbox{\tiny\ascnode}}+3\overline{\eta}+\overline{\lambda}_{\mbox{\tiny\astrosun}}} \,R_{P_3,-2\overline{\eta}} \,R_{P_4,-\overline{\alpha}}(P)$$ En résolvant des triangles rectangles, il est alors facile de calculer la position du point $P'$. En particulier~: $$OP_4'=\sqrt{(P_3P_4\sin(2\overline{\eta}))^2+(P_3P_4\cos(2\overline{\eta})+OP_3)^2}$$ $$q=\arcsin\left(\frac{P_3'P_4'\sin(2\overline{\eta})}{OP_4'}\times\frac{OP_3}{P_3'P_4'}\right)=\arcsin\left(\frac{OP_3\sin(2\overline{\eta})}{OP_4'}\right)$$ $$OP'=\sqrt{(P_4P\sin(-(\overline{\alpha}+q)))^2+(OP_4'+P_4P\cos(-(\overline{\alpha}+q)))^2}$$ Si l'équation de la Lune était l'angle $\widehat{P_4'OP'}$, elle serait donc égale à~: $$\arcsin\left(\dfrac{P_4P\sin(-(\overline{\alpha}+q))}{OP'}\right)$$ Al-`Ur\d{d}{\=\i} s'estimait heureux du fait que cet arc est presque égal à l'équation de la Lune dans le troisième modèle de Ptolémée, et l'avait vérifié, à une bonne précision, en particulier aux octants. Hélas, \emph{$P_4'$ n'est pas dans la direction de la Lune moyenne}. Comme on voit sur la figure, l'équation de la Lune vaut donc~: $$\arcsin\left(\dfrac{P_4P\sin(-(\overline{\alpha}+q))}{OP'}\right)+q$$ \'Etrangement, comme l'a remarqué Saliba, al-`Ur\d{d}{\=\i} se défend par anticipation contre d'éventuels contradicteurs~: \begin{quote} Qu'on ne dise pas que ces grandeurs ne sont pas prises dans le même cercle~; parce qu'on les entend comme des mouvements moyens, ce sont des arcs de l'écliptique, semblables aux arcs de l'excentrique coupés par le centre de l'épicycle. C'est comme on a coutume de faire avec le mouvement du Soleil le long de son excentrique, où l'élongation de centre par rapport à l'Apogée est \emph{approximativement} égale à l'élongation par jour.\footnote{Nos italiques. \textit{Cf.} \cite{saliba1990} p. 119, et la remarque de Saliba qui évite cependant de juger ce modèle, p.~54.} \end{quote} On a donc le choix entre deux interprétations possibles~: \begin{itemize} \item Soit le modèle de `Ur\d{d}{\=\i} n'est pas un modèle cinématique, mais seulement un dispositif de calcul visant à déterminer l'équation du troisième modèle de Ptolémée en n'utilisant que des rotations, en évitant donc les difficultés propres au point de prosneuse. Ce serait alors une sorte d'équatoire. \item Soit il s'agit d'une erreur de `Ur\d{d}{\=\i}, car l'équation de la Lune dans ce modèle est très loin de reproduire celle du modèle de Ptolémée (\textit{cf.} notre figure \ref{fig018}). \end{itemize} Comme `Ur\d{d}{\=\i} exprime par ailleurs clairement que son modèle doit remplacer celui de Ptolémée, la première interprétation nous semble exclue. \begin{figure} \begin{center} \input{fig018.pdf_tex} \caption{\label{fig018}L'équation de la Lune selon al-`Ur\d{d}{\=\i}.} \end{center} \end{figure} {Ibn al-\v{S}\=a\d{t}ir} avait sûrement vu l'erreur de `Ur\d{d}{\=\i}.
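
Pour chiffrer cet écart, on peut programmer directement les formules ci-dessus~; l'esquisse suivante (Python, de notre fait) calcule l'équation du modèle de `Ur\d{d}{\=\i} avec et sans le terme $q$, en négligeant comme ci-dessus l'inclinaison de l'orbe incliné.
\begin{verbatim}
# Esquisse : equation de la Lune dans le modele de 'Urdi, avec et sans le terme q.
from math import radians, degrees, sin, cos, asin, sqrt

OP3, P3P4, P4P = 10 + 19/60, 49 + 41/60, 5 + 15/60    # 10;19, 49;41, 5;15

def equation_urdi(alpha_barre, deux_eta, avec_q=True):
    """Equation (degres) ; avec_q=False donne seulement l'angle P4'-O-P'."""
    e = radians(deux_eta)
    OP4p = sqrt((P3P4*sin(e))**2 + (P3P4*cos(e) + OP3)**2)
    q = degrees(asin(OP3*sin(e) / OP4p))
    a = radians(-(alpha_barre + q))
    OPp = sqrt((P4P*sin(a))**2 + (OP4p + P4P*cos(a))**2)
    eq = degrees(asin(P4P*sin(a) / OPp))
    return eq + q if avec_q else eq
\end{verbatim}

La différence entre les deux n'est autre que $q$, qui peut dépasser $10°$~: c'est ce qui explique que l'équation obtenue soit très loin de celle du troisième modèle de Ptolémée.
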
\`A la fin du paragraphe énumérant les critiques générales sur la Lune qu'{Ibn al-\v{S}\=a\d{t}ir} adresse à ses prédécesseurs, celui-ci émet une mise en garde qui fait curieusement écho à la défense par anticipation de `Ur\d{d}{\=\i}~: \begin{quote} La localisation de l'astre moyen et du centre de l'astre dans un cercle semblable à l'orbe de l'écliptique est impossible car ils ne sont pas pris dans le même cercle, ni par rapport au même point.\footnote{\textit{cf. supra} p.~\pageref{astre_moyen}.} \end{quote} Cette remarque était probablement adressée à `Ur\d{d}{\=\i} ou à un autre auteur, avant ou après `Ur\d{d}{\=\i}, ayant commis une erreur analogue. \paragraph{La Lune chez \d{T}\=us{\=\i} et Sh{\=\i}r\=az{\=\i}} Décrivons à présent le modèle exposé par \d{T}\=us{\=\i} dans sa \emph{Ta\b{d}kirat f{\=\i} `ilm al-hay'at}. On a seulement tracé la figure initiale, \textit{cf.} figure \ref{fig021}, à gauche, où $P_1$ est le centre du parécliptique, $P_2$ le centre de l'orbe incliné, $P_3$ le centre d'un orbe déférent, $P_4$ le centre d'une ``grande sphère'', $P_5$ le centre d'une ``petite sphère'', $P_6$ le centre d'une ``sphère englobante'', $P_7$ le centre de l'``épicycle'', et $P$ la Lune\footnote{Comme le dit \d{T}\=us{\=\i}, on pourrait remplacer le couple ``orbe incliné'' et ``orbe déférent'' par un orbe unique puisqu'ils ont même centre et même axe, mais il préfère garder deux orbes distincts. De même pour le couple ``grande sphère'' et ``épicycle''.}. On a~: $$P_3P_4=49;41\qquad P_4P_5=P_5P_6=\frac{10;19}{2}\qquad P_6P=5;15$$ Le couple ``petite sphère'' et ``grande sphère'' forme un dispositif que les historiens ont appelé ``couple de \d{T}\=us{\=\i}''. Que les orbes tournent, et la position du point $P$ devient~: $$P'=R_{P_1,-\lambda_{\mbox{\fontencoding {U}\fontfamily {wasy}\fontsize{6pt}{8pt}\selectfont \char 19}}} \circ R_{P_1,\mathbf{j},5°} \circ R_{P_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \circ R_{P_3,2\overline{\eta}} \circ R_{P_4,2\overline{\eta}} \circ R_{P_5,-4\overline{\eta}} \circ R_{P_6,2\overline{\eta}} \circ R_{P_7,-\overline{\alpha}}(P)$$ \begin{figure} \begin{center} \input{fig021.pdf_tex}\input{fig022.pdf_tex} \caption{\label{fig021}La Lune dans la \textit{Ta\b{d}kirat} de \d{T}\=us{\=\i}} \end{center} \end{figure} Dans le référentiel de l'orbe déférent centré en $P_3$, la trajectoire du point $P_6$ \emph{ressemble} à un cercle de rayon $49;41$. On a représenté cette trajectoire ovale sur la figure \ref{fig021}, en pointillés à droite, en adoptant pour $P_3P_4$, $P_4P_5$ et $P_5P_6$ les longueurs représentées sur la figure. Comme cette figure n'est pas à l'échelle -- le rapport $P_4P_5:P_3P_4$ est bien supérieur à $\dfrac{10;19}{2}:49;41$ pour la rendre plus lisible -- alors notre ovale est beaucoup plus applati que dans le modèle véritable. \d{T}\=us{\=\i} trace et étudie la ressemblance de cette trajectoire avec la ceinture de l'orbe excentrique du modèle de Ptolémée. Dans ses effets, ce modèle reproduit d'assez près le deuxième modèle de Ptolémée. Afin de copier l'effet de la prosneuse, \d{T}\=us{\=\i} propose d'intercaler un autre couple d'orbes entre la ``sphère englobante'' et l'épicycle. Les historiens ont appelé ce dispositif un ``couple de \d{T}\=us{\=\i} curviligne''. Il s'agit de deux sphères homocentriques, centrées en $P_7$. Leurs rayons n'importent pas à la description du mouvement. 
Dans la composée de rotations qu'on fait subir au point $P$, \d{T}\=us{\=\i} introduit donc de nouvelles rotations agissant sur $R_{P_7,-\overline{\alpha}}(P)$. Les axes de ces rotations ne sont pas orthogonaux au plan de la figure initiale, et il faut ajouter des rotations pour les rabattre dans ce plan. La position du point $P'$ devient~: \begin{align*} P'=&R_{P_1,-\lambda_{\mbox{\fontencoding {U}\fontfamily {wasy}\fontsize{6pt}{8pt}\selectfont \char 19}}} \circ R_{P_1,\mathbf{j},5°} \circ R_{P_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \circ R_{P_3,2\overline{\eta}} \circ R_{P_4,2\overline{\eta}} \circ R_{P_5,-4\overline{\eta}} \circ R_{P_6,2\overline{\eta}}\\ &\circ R_{P_7,\mathbf{i},\frac{c_{3\max}}{2}} \circ R_{P_7,\mathbf{j},-4\overline{\eta}} \circ R_{P_7,\mathbf{i},-\frac{c_{3\max}}{2}} \circ R_{P_7,\mathbf{j},2\overline{\eta}} \circ R_{P_7,-\overline{\alpha}}(P) \end{align*} On expliquera les rotations autour d'axes non orthogonaux au plan de la figure, de manière plus détaillée, quand on commentera l'usage qu'en fait {Ibn al-\v{S}\=a\d{t}ir} pour les mouvements en latitude des planètes. Ici $c_{3\max}=13;9°$ est l'inclinaison maximale de l'apogée moyen de l'épicycle due à la prosneuse, dans le troisième modèle de Ptolémée. La solution de \d{T}\=us{\=\i} est très réussie, en celà qu'elle reproduit de très près la trajectoire prédite par le troisième modèle de Ptolémée, sans plus utiliser ni excentrique, ni point de prosneuse ; voir notre figure \ref{fig023}. La critique que lui adresse {Ibn al-\v{S}\=a\d{t}ir} peut donc nous surprendre~: \begin{quote} La position mentionnée par Na\d{s}{\=\i}r al-\d{T}\=us{\=\i} dans la \emph{Ta\b{d}kira} sur l'élimination des doutes des orbes de la Lune est impossible parce qu'on y trouve l'excentrique et que le diamètre de l'épicycle suit un point autre que le centre de l'orbe portant l'épicycle.\footnote{\textit{Cf. supra} p.~\pageref{doute_tusi_lune}.} \end{quote} Nous y reviendrons. \begin{figure} \begin{center} \input{fig023.pdf_tex} \caption{\label{fig023}L'équation de la Lune selon \d{T}\=us{\=\i}} \end{center} \end{figure} Décrivons à présent le modèle de la Lune proposé par Qu.tb al-D{\=\i}n al-Sh{\=\i}r\=az{\=\i} dans sa \textit{Tu\d{h}fat al-Sh\=ah{\=\i}yat}\footnote{Nous avons consulté le manuscrit de la \emph{Tu\d{h}fat} à la Bibliothèque Nationale (BN~Arabe MS~2516, livre 2, chapitres 10 f.~33r-44v, et 12 f.~50r-56r) en nous aidant de la description faite par Saliba dans \cite{rashed1997}.} Voici ce que dit {Ibn al-\v{S}\=a\d{t}ir}~: \begin{quote} La position mentionnée par Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i} dans la réforme de la configuration des orbes de la Lune est impossible car y demeurent l'excentrique et le point de prosneuse. L'argument qu'il a avancé (Dieu ait pitié de lui) pour permettre le point de prosneuse de la Lune est la plus impossible des impossibilités et c'est une image fausse. Il en est revenu dans la \emph{Tu\d{h}fa} ; il y indique une autre manière, mais elle est aussi impossible.\footnote{\textit{Cf. supra} p.~\pageref{shirazi_lune}.} \end{quote} Saliba \cite{rashed1997}, suivant peut-être l'avis d'{Ibn al-\v{S}\=a\d{t}ir}, pense que le modèle décrit dans la \emph{Tu\d{h}fat} est différent des modèles décrits dans deux autres textes de Sh{\=\i}r\=az{\=\i}, les \textit{Ikht{\=\i}y\=ar\=at-i Muzaffar{\=\i}} et la \textit{Nih\=ayat al-Idr\=ak}, qui exposeraient seulement des modèles anciens. 
Niazi, l'auteur d'une thèse \cite{niazi2011} sur Sh{\=\i}r\=az{\=\i}, a fait une comparaison textuelle des trois ouvrages ; il affirme qu'un modèle semblable au modèle de la \textit{Tu\d{h}fat} est bien présent dans chacun des deux autres ouvrages, mais peut-être pas là où on l'attendrait\footnote{\textit{Cf.} \cite{niazi2011} p.~138-142.}. Ceci nous a aussi été confirmé par Gamini (communication personnelle) qui prépare une édition critique des \textit{Ikht{\=\i}y\=ar\=at}. \begin{figure} \begin{center} \input{fig026.pdf_tex} \caption{\label{fig026}La Lune dans la \textit{Tu\d{h}fat} de Sh{\=\i}r\=az{\=\i}} \end{center} \end{figure} La figure initiale du modèle de la \textit{Tu\d{h}fat} est notre figure \ref{fig026}. On y voit le centre $P_1$ d'un parécliptique appelé orbe des n{\oe}uds, le centre $P_2$ d'un orbe incliné, le centre $P_3$ d'un orbe excentrique, le centre $P_4$ d'un ``orbe englobant'', le centre $P_5$ d'un épicycle, et la Lune en $P$. Les distances sont~: $$P_2P_3=\frac{10;19}{2}\qquad P_3P_4=49;41$$ $$P_4P_5=\frac{10;19}{2}\qquad P_5P=5;15$$ La position de la Lune est donnée par la composée de rotations suivante~: $$P'=R_{P_1,-\lambda_{\mbox{\tiny\ascnode}}} \,R_{P_1,\mathbf{j},5°} \,R_{P_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \,R_{P_3,2\overline{\eta}} \,R_{P_4,2\overline{\eta}} \,R_{P_5,-2\overline{\eta}-\overline{\alpha}}(P)$$ Ce modèle est inspiré des modèles de `Ur\d{d}{\=\i} pour les planètes supérieures, mais il diffère substantiellement de son modèle pour la Lune. Au contraire, il est strictement équivalent au modèle de \d{T}\=us{\=\i} sans couple curviligne~: les trajectoires prédites sont égales. \emph{Démonstration}. Notons $P_i$ ($1\leq i\leq 5$) les centres des orbes de Sh{\=\i}r\=az{\=\i}, et $\tilde{P}_i$ ($1\leq i\leq 7$) les centres des orbes de \d{T}\=us{\=\i}, avec $P_1=P_2=\tilde{P}_1=\tilde{P}_2=\tilde{P}_3$ et $P_4=\tilde{P}_5$. On a alors~: $$\overrightarrow{P_2P_3}=\overrightarrow{\tilde{P}_4\tilde{P}_5}=\frac{10;19}{2}\mathbf{j}$$ $$\overrightarrow{P_3P_4}=\overrightarrow{\tilde{P}_3\tilde{P}_4}=49;41\mathbf{j}$$ $$P_5=\tilde{P}_6=\tilde{P}_7,\quad\overrightarrow{P_3\tilde{P}_3}=\overrightarrow{P_4\tilde{P}_4}=-\overrightarrow{\tilde{P}_5\tilde{P}_6}$$ Cette dernière égalité permet d'appliquer la proposition 2, et on a alors~: \begin{align*} &R_{P_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \,R_{P_3,2\overline{\eta}} \,R_{P_4,2\overline{\eta}} \,R_{P_5,-2\overline{\eta}-\overline{\alpha}}\\ =&R_{\tilde{P}_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \,\left( R_{P_3,2\overline{\eta}} \,R_{P_4,2\overline{\eta}} \,R_{\tilde{P}_6,-2\overline{\eta}} \right) \,R_{\tilde{P}_7,-\overline{\alpha}}\\ =&R_{\tilde{P}_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \,R_{\tilde{P}_3,2\overline{\eta}} \,\left( R_{\tilde{P}_4,-2\overline{\eta}} \,R_{\tilde{P}_5,4\overline{\eta}} \,R_{\tilde{P}_6,-2\overline{\eta}} \right) \,R_{\tilde{P}_7,-\overline{\alpha}} \end{align*} En vertu de la proposition 3, ceci est aussi égal à~: $$R_{\tilde{P}_2,-(\overline{\eta}-\overline{\lambda}_{\mbox\tiny\astrosun}-\lambda_{\mbox{\tiny\ascnode}})} \,R_{\tilde{P}_3,2\overline{\eta}} \,\left( R_{\tilde{P}_4,2\overline{\eta}} \,R_{\tilde{P}_5,-4\overline{\eta}} \,R_{\tilde{P}_6,2\overline{\eta}} \right) \,R_{\tilde{P}_7,-\overline{\alpha}},\quad\text{\textit{q. e. 
d.}}$$ Ce modèle produit des trajectoires très proches du deuxième modèle de Ptolémée, et il ne reproduit donc pas les effets du point de prosneuse.\footnote{Ce modèle achevé décrit par Sh{\=\i}r\=az{\=\i} ne tient donc pas compte de la troisième anomalie lunaire. Saliba affirme que, même dans la \textit{Tu\d{h}fat}, ``Qu\d{t}b al-D{\=\i}n ne semble pas avoir réussi à répondre à la seconde objection'' (celle du point de prosneuse), \textit{cf.} \cite{rashed1997},~p.~113. Il resterait cependant à identifier clairement, par une étude complète des trois ouvrages, ``l'argument que [Sh{\=\i}r\=az{\=\i}] a avancé pour permettre le point de prosneuse'' selon {Ibn al-\v{S}\=a\d{t}ir}.} Dallal et Saliba ont indiqué qu'un contemporain d'{Ibn al-\v{S}\=a\d{t}ir}, Sadr al-Shar{\=\i}`a (?-1347), a proposé un amendement au modèle de Sh{\=\i}r\=az{\=\i} pour reproduire les effets du point de prosneuse, en ajoutant une petite sphère de rayon $\overrightarrow{P_6P}=0;52\,\mathbf{j}$ et de centre $P_6$ remplaçant le point $P$ dans la figure initiale \ref{fig026}~; il faut alors ajouter une rotation à droite\footnote{\textit{cf.} \cite{dallal1995} p.~373-379 et \cite{rashed1997} p.~112-113. Dans \cite{rashed1997}, le sens de rotation de la petite sphère semble erroné. Dans \cite{dallal1995}, Dallal donne deux interprétations possibles du texte de Sadr al-Shar{\=\i}`a : avec nos notations, soit $\overrightarrow{P_6P}=+0;52\,\mathbf{j}$ et l'angle de rotation est $-2\overline{\eta}$, soit $\overrightarrow{P_6P}=-0;52\,\mathbf{j}$ et l'angle de rotation est $+2\overline{\eta}$. La deuxième interprétation, bien qu'elle ressemble qualitativement au modèle d'{Ibn al-\v{S}\=a\d{t}ir}, produit un résultat numériquement peu satisfaisant. C'est la première interprétation que nous avons donc retenue dans notre comparaison.}~: $$P'=R_{P_1,-\lambda_{\mbox{\tiny\ascnode}}} \,R_{P_1,\mathbf{j},5°} \,R_{P_2,-(\overline{\eta}-\overline{\lambda}_{\mbox{\tiny\astrosun}}-\lambda_{\mbox{\tiny\ascnode}})} \,R_{P_3,2\overline{\eta}} \,R_{P_4,2\overline{\eta}} \,R_{P_5,-2\overline{\eta}-\overline{\alpha}} \,R_{P_6,-2\overline{\eta}}(P).$$ Le modèle de Sadr al-Shar{\=\i}`a a bien l'effet escompté, comme on peut le voir dans les octants sur notre figure \ref{fig028}. \begin{figure} \begin{center} \input{fig028.pdf_tex} \caption{\label{fig028}La Lune selon Sh{\=\i}r\=az{\=\i} et Sadr al-Shar{\=\i}`a} \end{center} \end{figure} \paragraph{Comment expliquer le reproche qu'{Ibn al-\v{S}\=a\d{t}ir} adresse à \d{T}\=us{\=\i} concernant la Lune~?} Tout d'abord, \d{T}\=us{\=\i} rejette lui-même son propre modèle, à la fin du chapitre II.11 de la \textit{Ta\b{d}kirat}, par une intéressante analyse du taux de variation de l'anomalie due à la prosneuse chez Ptolémée et au couple curviligne chez lui : il montre que son modèle n'a pas exactement les mêmes effets que le troisième modèle de Ptolémée. Est-il donc possible que la critique d'{Ibn al-\v{S}\=a\d{t}ir} ne s'adresse pas au modèle présenté dans II.11, mais plutôt à la description du troisième modèle de Ptolémée faite par \d{T}\=us{\=\i} dans II.7~? Mais c'est à \d{T}\=us{\=\i} qu'{Ibn al-\v{S}\=a\d{t}ir} adresse un reproche ; n'aurait-il pas compris qu'il s'agissait là d'un modèle dû à Ptolémée\footnote{ceci pourrait aussi expliquer qu'il ignorait les observations dues à Ptolémée dans les octants, comme on l'a vu plus haut.}~? 
Il nous semble plus probable qu'{Ibn al-\v{S}\=a\d{t}ir} ajoute à l'auto-critique de \d{T}\=us{\=\i} sur la prosneuse un second reproche plus général concernant le modèle présent dans II.11. Mais quel reproche~? Bien que \d{T}\=us{\=\i} n'utilise pas d'orbe excentrique, il conçoit une combinaison de mouvements épicycliques afin de donner au point $P_6$ une trajectoire \emph{proche du cercle excentrique de Ptolémée dans le plan incliné} ; il prend d'ailleurs soin de le démontrer dans II.11. \d{T}\=us{\=\i} substitue donc à l'excentrique de Ptolémée un dispositif reproduisant la même trajectoire, par le truchement d'un artifice, le couple de \d{T}\=us{\=\i}, produisant un mouvement rectiligne oscillatoire. En tant qu'artifice produisant un mouvement \textit{par essence} interdit dans les cieux, ce dispositif ne devrait-il pas être rejeté, au même titre qu'un point glissant à la circonférence d'un excentrique~? Hélas, {Ibn al-\v{S}\=a\d{t}ir} lui-même utilise le couple de \d{T}\=us{\=\i} dans certains modèles planétaires... Donc l'argument ne tient pas. Voici donc la seule explication probable. Le refus des excentriques par {Ibn al-\v{S}\=a\d{t}ir} n'est pas une objection de principe concernant seulement l'essence physique de l'excentrique ou les mouvements permis par l'astronomie physique. C'est un choix méthodologique dans l'analyse des données de l'observation. Ptolémée aurait commis une erreur dès son \textit{deuxième} modèle lunaire, en changeant la configuration globale des orbes du premier et en intercalant un \textit{grand} excentrique pour rendre compte d'une variation de l'amplitude maximale d'une \textit{petite} anomalie lunaire aux quadratures. Et en effet, l'utilisation d'un excentrique a deux effets joints : \begin{itemize} \item produire cette variation d'amplitude de la petite anomalie lunaire \item induire un changement global dans la configuration des orbes, et en particulier dans la distance Terre-Lune. \end{itemize} Seul le premier effet était désirable. {Ibn al-\v{S}\=a\d{t}ir} concevait certainement ses modèles en suivant une saine méthodologie : partir d'une estimation préalable des mouvements moyens et des distances, pour ensuite ajouter de \textit{petites} corrections en adjoignant des épicycles \textit{de plus en plus petits} perturbant peu les autres caractéristiques des modèles préalables. Il doit avoir expliqué cette méthode dans son ouvrage perdu sur les observations, \textit{Ta`l{\=\i}q al-'ar\d{s}\=ad}. Pour la Lune, la faible variation de la distance Terre-Lune doit compter parmi les faits primitifs que les corrections successives ne peuvent remettre en cause. Nos figures \ref{fig027} et \ref{fig029} comparent entre eux les <<~meilleurs~>> modèles lunaires à partir du troisième modèle de Ptolémée~: celui de Qu\d{t}b al-D{\=\i}n al-Sh\={\i}r\=az{\=\i}, celui d'al-\d{T}\=us{\=\i} avec le couple curviligne, et celui d'{Ibn al-\v{S}\=a\d{t}ir}. Pour ne pas alourdir ces figures, nous n'avons pas représenté le troisième modèle de Ptolémée~: on a vu que celui d'al-\d{T}\=us{\=\i} en est très proche. \`A partir du deuxième modèle de Ptolémée, tous les modèles lunaires des prédécesseurs d'{Ibn al-\v{S}\=a\d{t}ir} produisaient des valeurs aberrantes de la distance Terre-Lune, y compris celui de \d{T}\=us{\=\i}, alors même que celui-ci s'était débarrassé de l'excentrique de Ptolémée vis-à-vis de l'astronomie physique. C'est qu'il fallait en fait repartir du \textit{premier} modèle de Ptolémée, et tout reconstruire à neuf. 
\begin{figure} \begin{center} \input{fig027.pdf_tex} \caption{\label{fig027}L'équation de la Lune~: meilleurs modèles jusqu'à {Ibn al-\v{S}\=a\d{t}ir}} \end{center} \end{figure} \begin{figure} \begin{center} \input{fig029.pdf_tex} \caption{\label{fig029}La distance Terre-Lune dans les modèles les meilleurs pour les longitudes} \end{center} \end{figure} \paragraph{Calibrage du modèle lunaire d'{Ibn al-\v{S}\=a\d{t}ir}} Le chemin qu'il a pu suivre pour déterminer les paramètres $P_2P_3$, $P_3P_4$ et $P_4P$ est facile à reconstituer~: \begin{enumerate}[1)] \item On utilise trois éclipses pour calculer l'équation maximale $c$ aux syzygies comme l'avait fait Ptolémée. Cela donne $$\frac{P_3P_4-P_4P}{P_2P_3}.$$ {Ibn al-\v{S}\=a\d{t}ir} utilise $c_2=4;56$ comme équation maximale, valeur connue depuis longtemps. \item On utilise des observations précises donnant une équation maximale maximale dans les quadratures égale à $c_2=7;40$, déjà connue de Ptolémée. On en déduit $$\frac{P_3P_4+P_4P}{P_2P_3}$$ \item Puis on vérifie que le modèle est satisfaisant dans les octants. Pour cela, {Ibn al-\v{S}\=a\d{t}ir} a dû faire ses propres observations puisqu'il n'avait pas conscience de celles émises par Ptolémée. Enfin, on utilise le modèle pour calculer la distance Terre-Lune et on compare avec des observations anciennes ou récentes qu'{Ibn al-\v{S}\=a\d{t}ir} semblait bien connaître\footnote{\textit{cf. supra} p.~\pageref{obs_diam_lune}.}. \end{enumerate} \begin{figure} \begin{center} \input{fig014.pdf_tex} \caption{\label{fig014}Les orbes solides de la Lune} \end{center} \end{figure} \paragraph{Saturne : figure initiale}\label{begin_saturne} Dans la figure initiale, les plans des orbes de Saturne sont tous rabattus dans le plan de l'écliptique $(\mathbf{i},\mathbf{j})$ par des rotations, les centres des orbes sont tous alignés dans la direction du vecteur $\mathbf{j}$ elle-même confondue avec la direction du point vernal~; enfin, la ligne des n{\oe}uds, à l'intersection des plans de l'orbe incliné et du parécliptique, est orientée dans la direction du vecteur $\mathbf{u}=\cos(50°)\mathbf{i}-\sin(50°)\mathbf{j}$ qui pointe vers le n{\oe}ud ascendant. Le point $O$ est le centre du Monde, $P_1$ le centre du parécliptique de Saturne, et $P_2$ le centre de son orbe incliné. Ces trois points sont confondus $O=P_1=P_2$. Le point $P_3$ est le centre d'un orbe \emph{déférent} (\textit{falak \d{h}\=amil}), $P_4$ le centre de l'orbe rotateur, $P_5$ le centre de l'orbe de l'épicycle, et $P$ est le centre du corps sphérique de la planète. La figure \ref{fig040} précise la position de ces points, mais elle n'est pas à l'échelle. Posons~: $\overrightarrow{OP_3}=60\ \mathbf{j}$, alors $$\begin{array}{l} \overrightarrow{P_3P_4}=5;7,30\ \mathbf{j}\\ \overrightarrow{P_4P_5}=-1;42,30\ \mathbf{j}\\ \overrightarrow{P_5P}=6;30\ \mathbf{j} \end{array}$$ Il faut appliquer au point $P$ une suite de transformations géométriques pour obtenir la position de Saturne prédite ou observée à un instant donné. \begin{figure} \begin{center} \input{fig040.pdf_tex} \end{center} \caption{\label{fig040}\label{fig_term}Les orbes de Saturne : figure initiale et terminologie} \end{figure} \paragraph{Saturne~: transformations géométriques} Les transformations appliquées à Saturne $P$ sont des rotations paramétrées par trois angles~: l'\emph{Apogée} $\lambda_A$, le \emph{centre moyen} $\overline{\kappa}$, et \emph{l'astre propre} $\overline{\alpha}$. 
L'\emph{astre moyen} est $\overline{\lambda}=\lambda_A+\overline{\kappa}$, le \emph{n{\oe}ud ascendant} est $\lambda_{\ascnode}=\lambda_A-140°$, et l'\emph{argument de latitude} est $\overline{\lambda}-\lambda_{\ascnode}$. Voici la liste des transformations géométriques utilisées pour produire les mouvements en longitude~: $$R_{P_5,\overline{\alpha}-\overline{\kappa}},\quad R_{P_4,2\overline{\kappa}},\quad R_{P_3,-\overline{\kappa}},\quad R_{P_2,\overline{\kappa}}, \quad R_{P_1,\lambda_A}.$$ Comme d'habitude, quand on omet la mention de l'axe de rotation, il s'agit par défaut de la direction vecteur $\mathbf{k}$. Mais il faut se souvenir que la figure \ref{fig2} p.~\pageref{fig2} cas (i) représente pour Saturne la relation entre l'orbe incliné et l'orbe déférent, ainsi que la relation entre l'orbe déférent et l'orbe rotateur. Il faut donc adjoindre des rotations visant à incliner les plans des orbes par rapport aux plans des orbes qui les portent~: $$R_{P_4,\mathbf{v},-1°},\quad R_{P_3,\mathbf{u},-3°30'},\quad R_{P_2,\mathbf{u},2°30'}.$$ où $\mathbf{v}$ est entraîné par l'orbe rotateur de sorte à ce que sa direction coïncide avec la ligne des n{\oe}uds lorsque $\overline{\kappa}=(\mathbf{j},\mathbf{u})+90°$. Autrement dit~: $$R_{2((\mathbf{j},\mathbf{u})+90°)}(\mathbf{v})=\mathbf{u}.$$ Pour Saturne, on en déduit que~: $$\mathbf{v}=R_{2\times(140°-90°)}(\mathbf{u})=\cos(50°)\mathbf{i}+\sin(50°)\mathbf{j}.$$ La théorie des longitudes des chapitres 12 à 14 et la description donnée par {Ibn al-\v{S}\=a\d{t}ir} dans le chapitre 24 sur les latitudes des planètes supérieures ne laissent aucun doute quant à l'ordre dans lequel appliquer toutes ces transformations. Pour obtenir la configuration des orbes à un instant donné, il faut donc appliquer toutes ces rotations aux points $P_3$, $P_4$, $P_5$, $P$. L'image du point $P_3$ entraîné par les mouvements de l'orbe parécliptique et de l'orbe incliné est~: $$R_{P_1,\lambda_A}\,R_{P_2,\mathbf{u},2°30'}\,R_{P_2,\overline{\kappa}}(P_3)$$ Quant au point $P_4$, il est aussi entraîné par le mouvement de l'orbe déférent et il devient~: $$R_{P_1,\lambda_A}\, R_{P_2,\mathbf{u},2°30'}\, R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\, R_{P_3,\mathbf{u},-3°30'}(P_4)$$ Le point $P_5$ est aussi entraîné par le mouvement de l'orbe rotateur et il devient~: $$R_{P_1,\lambda_A}\, R_{P_2,\mathbf{u},2°30'}\, R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\, R_{P_3,\mathbf{u},-3°30'}\, R_{P_4,2\overline{\kappa}}\, R_{P_4,\mathbf{v},-1°}(P_5)$$ Enfin le point $P$ est aussi entraîné par l'orbe de l'épicycle dont le plan est confondu avec le plan de l'orbe rotateur, et il devient~: $$R_{P_1,\lambda_A}\, R_{P_2,\mathbf{u},2°30'}\, R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\, R_{P_3,\mathbf{u},-3°30'}\, R_{P_4,2\overline{\kappa}}\, R_{P_4,\mathbf{v},-1°}\, R_{P_5,\overline{\alpha}-\overline{\kappa}}(P)$$ La figure \ref{fig041} montre l'effet des rotations inclinant les plans des orbes, pour trois positions. On a projeté orthogonalement sur le plan du parécliptique (figure du bas) et sur un plan orthogonal à la ligne des n{\oe}uds (figure du haut). On a seulement représenté les ceintures du déférent et du rotateur puisque la ceinture de l'épicycle est dans le plan de la ceinture du rotateur. 
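
Pour qui souhaite reconstituer numériquement cette configuration, voici une esquisse en Python (entièrement de notre fait~; on suppose des rotations de sens direct autour de l'axe indiqué, et la rotation écrite la plus à droite agit la première, comme dans les composées ci-dessus)~:
\begin{verbatim}
# Esquisse : composition des rotations de Saturne (longitude et inclinaisons).
import numpy as np

def rotation(point, axe, angle_deg):
    """Rotation d'angle donne (sens direct) autour de l'axe passant par 'point'."""
    k = np.asarray(axe, float); k = k / np.linalg.norm(k)
    p = np.asarray(point, float)
    t = np.radians(angle_deg)
    def appliquer(x):
        v = np.asarray(x, float) - p
        v_tourne = (v*np.cos(t) + np.cross(k, v)*np.sin(t)
                    + k*np.dot(k, v)*(1 - np.cos(t)))
        return p + v_tourne
    return appliquer

def composer(*rotations):
    """Compose des applications ; la plus a droite s'applique la premiere."""
    def f(x):
        for r in reversed(rotations):
            x = r(x)
        return x
    return f

i, j, k = np.eye(3)
u = np.cos(np.radians(50))*i - np.sin(np.radians(50))*j    # ligne des noeuds
v = np.cos(np.radians(50))*i + np.sin(np.radians(50))*j

def position_saturne(lam_A, kappa, alpha):
    """Image du point P par la composee des huit rotations (angles en degres)."""
    O  = np.zeros(3)
    P3 = 60.0*j
    P4 = P3 + (5 + 7/60 + 30/3600)*j      # P3P4 = 5;7,30
    P5 = P4 - (1 + 42/60 + 30/3600)*j     # P4P5 = -1;42,30
    P  = P5 + 6.5*j                       # P5P  = 6;30
    f = composer(
        rotation(O,  k, lam_A),
        rotation(O,  u, 2.5),             # inclinaison de l'orbe incline
        rotation(O,  k, kappa),
        rotation(P3, k, -kappa),
        rotation(P3, u, -3.5),            # inclinaison du deferent
        rotation(P4, k, 2*kappa),
        rotation(P4, v, -1.0),            # inclinaison du rotateur
        rotation(P5, k, alpha - kappa),
    )
    return f(P)
\end{verbatim}

Les mêmes briques conviennent, mutatis mutandis, aux autres astres traités dans ce commentaire.
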
\begin{figure} \begin{center} \input{fig041.pdf_tex} \end{center} \caption{\label{fig041}Saturne, trois positions : projection orthogonale} \end{figure} \paragraph{Saturne : trajectoire paramétrée} Le modèle de Saturne est couplé au modèle du Soleil en vertu de la relation suivante~: $$\overline{\alpha}=\overline{\lambda}_{\astrosun}-\lambda_A-\overline{\kappa}$$ La trajectoire dans l'ensemble des valeurs des paramètres est~: $$\lambda_A=\dot{\lambda}_At+\lambda_A(0)$$ $$\overline{\kappa}=\dot{\overline{\kappa}}t+\overline{\kappa}(0)$$ $$\overline{\lambda}_{\astrosun}=\dot{\overline{\lambda}}_{\astrosun}t+\overline{\lambda}_{\astrosun}(0)$$ On rappelle que $\overline{\kappa}=\overline{\lambda}-\lambda_A$. On a, d'après {Ibn al-\v{S}\=a\d{t}ir}~: $$\overline{\lambda}_{\astrosun}(0)=280;9,0,\quad\dot{\overline{\lambda}}_{\astrosun}=359;45,40\text{ par année persane},$$ $$\overline{\lambda}(0)=157;58,20,\quad\dot{\overline{\lambda}}=12;13,40\text{ par année persane},$$ $$\lambda_A(0)=254;52,\quad\dot{\lambda}_A=0;1\text{ par année persane}.$$ \paragraph{Transformations planes} Si $R$ et $S$ sont deux rotations, il existe une rotation $T$ telle que $R\circ S=T\circ R$, à savoir, la rotation de même angle que $S$ et dont l'axe est l'image par $R$ de l'axe de $S$. Appliquons cette relation de commutation à toutes les composées de rotations décrites ci-dessus, de sorte à réécrire à droite toutes les rotations dont l'axe est dans la direction du vecteur $\mathbf{k}$. Par exemple~: $$R_{P_1,\lambda_A}R_{P_2,\mathbf{u},2°30'}=R_{P_2,\mathbf{a},2°30'}R_{P_1,\lambda_A}$$ où le vecteur $\mathbf{a}$ est l'image de $\mathbf{u}$ par $R_{P_1,\lambda_A}$. On fait de même avec les inclinaisons des plans des petits orbes. Il existe donc une composée de rotations $M$ telle que l'image du point $P$ soit~: $$M\,R_{P_2,\mathbf{a},2°30'}(P')$$ où $$P'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}\,R_{P_3,-\overline{\kappa}}\, R_{P_4,2\overline{\kappa}}\,R_{P_5,\overline{\alpha}-\overline{\kappa}}(P)$$ On introduit de même les points $P_3'$, $P_4'$, $P_5'$ (voir figure \ref{fig040})~: $$P_3'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}(P_3)$$ $$P_4'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}\,R_{P_3,-\overline{\kappa}}(P_4)$$ $$P_5'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\,R_{P_4,2\overline{\kappa}}(P_5)$$ Tous ces points sont dans le plan de la figure initiale~: calculer leurs positions relève entièrement de la géométrie plane. \paragraph{Les équations de Saturne}\label{equ_sat} Le calcul de la position du point $P'$ revient à résoudre quatre triangles rectangles. On calcule d'abord la \emph{première équation} dite aussi ``équation du centre''~; c'est l'angle $c_1=(\overrightarrow{OP_3'},\overrightarrow{OP_5'})$. 
On a~: $$c_1=-\arcsin\left(\frac{P_3P_4\sin\overline{\kappa}+P_4P_5\sin\overline{\kappa}}{OP_5'}\right)$$ où $$OP_5'=\sqrt{(P_3P_4\sin\overline{\kappa}+P_4P_5\sin\overline{\kappa})^2+(OP_3+P_3P_4\cos\overline{\kappa}-P_4P_5\cos\overline{\kappa})^2}.$$ On calcule ensuite la \emph{deuxième équation} $c_2=(\overrightarrow{OP_5'},\overrightarrow{OP'})$~: $$c_2=\arcsin\left(\frac{P_5P\sin(\overline{\alpha}-c_1)}{OP'}\right)$$ où $$OP'=\sqrt{(P_5P\sin(\overline{\alpha}-c_1))^2+(OP_5'+P_5P\cos(\overline{\alpha}-c_1))^2}$$ La position du point $P'$ est alors donnée par $OP'$ et par l'angle suivant~: $$(\mathbf{j},\overrightarrow{OP'})=\lambda_A+\overline{\kappa}+c_1+c_2$$ Dans ce qui suit, $c_1$ sera conçue comme une fonction de $\overline{\kappa}$ notée $c_1(\overline{\kappa})$, et $c_2$ comme une fonction de deux variables $\overline{\kappa}$ et $\alpha=\overline{\alpha}-c_1$, notée $c_2(\overline{\kappa},\alpha)$. Plus précisément~: $$c_2(\overline{\kappa},\alpha)=\arcsin\left(\frac{P_5P\sin\alpha}{\sqrt{(P_5P\sin\alpha)^2+(OP_5'+P_5P\cos\alpha)^2}}\right)$$ où $OP_5'$ est la fonction de $\overline{\kappa}$ vue ci-dessus. On remarque que~: $$c_1(360°-\overline{\kappa})=-c_1(\overline{\kappa}),$$ $$c_2(360°-\overline{\kappa},360°-\alpha)=-c_2(\overline{\kappa},\alpha).$$ \paragraph{Fonction de deux variables et interpolation} Les valeurs de $c_1$ et $c_2$ devront être reportées dans des tables. Comme pour la Lune, {Ibn al-\v{S}\=a\d{t}ir} utilise à cet effet\footnote{\textit{Cf.} chap. 23 de la première partie de la \textit{Nih\=aya}.} la méthode ptoléméenne d'interpolation visant à calculer une valeur approchée de $c_2$ au moyen d'un produit d'une fonction de $\overline{\kappa}$ par une fonction de $\alpha$. \`A $\alpha$ donné, il interpole les valeurs de $c_2$ entre $c_2(0,\alpha)$ et $c_2(180°,\alpha)$ au moyen de la formule suivante~: $$c_2(\overline{\kappa},\alpha)\simeq c_2(0,\alpha)+\chi(\overline{\kappa})(c_2(180°,\alpha)-c_2(0,\alpha)).$$ Le coefficient d'interpolation $\chi$ est défini par~: $$\chi(\overline{\kappa})=\frac{\max\vert c_2(\overline{\kappa},\cdot)\vert-\max\vert c_2(0,\cdot)\vert}{\max\vert c_2(180°,\cdot)\vert-\max\vert c_2(0,\cdot)\vert}.$$ Ce coefficient est fonction d'une seule variable~; on peut donc le calculer pour une série de valeurs de $\overline{\kappa}$ et en faire une table. Pour ce faire, on remarque que le maximum $\max\vert c_2(\overline{\kappa},\cdot)\vert$ est atteint quand la droite $(O,P')$ est tangente au cercle de l'orbe de l'épicycle. On a alors\footnote{On le montre aisément par le calcul en posant $z^{-1}=OP_5'+P_5P\cos\alpha$ et en remarquant que la quantité $\tan^2c_2$ à maximiser est alors un polynôme de degré 2 en $z$.}~: $$\max\vert c_2(\overline{\kappa},\cdot)\vert=\arcsin\frac{P_5P}{OP'_5}.$$ \begin{figure} \begin{center} \input{fig042.pdf_tex} \end{center} \caption{\label{fig042}Saturne~: inclinaison de l'orbe incliné} \end{figure} \paragraph{Trigonométrie sphérique} On va à présent calculer les coordonnées sphériques du point $R_{P_2,\mathbf{a},2°30'}(P')$ par rapport à l'écliptique. L'angle formé entre le vecteur $\mathbf{a}$ et la direction du point $P'$ vaut (modulo $360°$)~: $$(\mathbf{a},\overrightarrow{OP'})=\overline{\kappa}+c_1+c_2+140°=\overline{\lambda}-\lambda_{\ascnode}+c_1+c_2=\lambda-\lambda_{\ascnode}$$ où $\lambda=\overline{\lambda}+c_1+c_2$. Les égalités suivantes seront prises modulo $360°$. 
Sur la figure \ref{fig042}, on représente sur la sphère de l'écliptique le point $B$ dans la direction du point $R_{P_2,\mathbf{a},2°30'}(P')$ et le point $C$ dans la direction du vecteur $\mathbf{a}$. En particulier, $$(\overrightarrow{OC},\overrightarrow{OB})=\lambda-\lambda_{\ascnode}.$$ L'étude du triangle sphérique $ABC$ donne~: $$\tan(\overrightarrow{OC},\overrightarrow{OA})=\cos(2°30')\times\tan(\lambda-\lambda_{\ascnode}).$$ La longitude du point $R_{P_2,\mathbf{a},2°30'}(P')$ par rapport à l'écliptique, en prenant la direction du point vernal $\mathbf{j}$ comme origine, est donc, quand $\lambda-\lambda_{\ascnode}\in\rbrack -90°,90°\lbrack$~: $$(\mathbf{j},\overrightarrow{OA})=(\overrightarrow{OC},\overrightarrow{OA})-(\overrightarrow{OC},\mathbf{j})=\arctan(\cos(2°30')\times\tan(\lambda-\lambda_{\ascnode}))+\lambda_{\ascnode}$$ Quand au contraire $\lambda-\lambda_{\ascnode}\in\rbrack 90°,270°\lbrack$, on a~: $$(\mathbf{j},\overrightarrow{OA})=180°+\arctan(\cos(2°30')\times\tan(\lambda-\lambda_{\ascnode}))+\lambda_{\ascnode}.$$ On rassemble ces deux cas dans la formule suivante, valable pour toutes les valeurs des paramètres~: $$(\mathbf{j},\overrightarrow{OA})=\lambda+e_n(\lambda-\lambda_{\ascnode})$$ où l'``équation du déplacement'' $e_n(x)$ est définie comme suit sur l'intervalle $\lbrack -90°,270°\rbrack$, et ailleurs par périodicité~: $$e_n(x)=\left\lbrace\begin{array}{l} \arctan(\cos(2°30')\times\tan x)-x,\text{ si }x\in\rbrack -90°,90°\lbrack\\ 180°+\arctan(\cos(2°30')\times\tan x)-x,\text{ si }x\in\rbrack 90°,270°\lbrack\\ 0°,\text{ si }x=\pm 90° \end{array}\right.$$ Enfin, la latitude du point $R_{P_2,\mathbf{a},2°30'}(P')$ est~: $$(\overrightarrow{OA},\overrightarrow{OB})=\arcsin(\sin(2°30')\times\sin(\lambda-\lambda_{\ascnode})).$$ \paragraph{Les inclinaisons des petits orbes} Les rotations qui composent $M$ ont leurs axes contenus dans le plan de l'orbe incliné $OCB$ et sont d'angles petits ($3°30'$ au plus). Elles auront peu d'effet sur la longitude de $R_{P_2,\mathbf{a},2°30'}(P')$. En revanche, l'effet de $M$ et les inclinaisons des plans des petits orbes ont justement été introduits dans ce modèle afin de rendre compte des variations en latitude de Saturne. Hélas, les axes des rotations qui composent $M$ ne passent pas par le point $O$, et de telles rotations sont difficiles à étudier en coordonnées sphériques\footnote{Ibn al-Hayth\=am lui-même, dans un ouvrage inégalé sur l'astronomie mathématique, avait renoncé à une tâche semblable~; \textit{cf.} \cite{rashed2006} p.~444-447.}. Ibn al-\v{S}\=a\d{t}ir se contente ici d'une description qualitative de l'effet en latitude de $M$ (voir chapitre 24). \paragraph{Comparaisons} \`A la figure \ref{fig044} nous donnons les latitudes de Saturne pendant quinze ans, à partir de l'\'Epoque choisie par {Ibn al-\v{S}\=a\d{t}ir}, calculées de trois manières différentes~: (1) par l'IMCCE de l'Observatoire de Paris, (2) en suivant la méthode strictement ptoléméenne de l'\textit{Almageste} mais avec les paramètres d'{Ibn al-\v{S}\=a\d{t}ir} à l'\'Epoque et ses mouvements moyens\footnote{On a suivi l'interprétation des méthodes de Ptolémée donnée dans \cite{pedersen1974} et \cite{swerdlow2005}.}, (3) au moyen du modèle complet d'{Ibn al-\v{S}\=a\d{t}ir} décrit par la composée de huit rotations appliquée au point $P$ comme ci-dessus. 
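
La méthode (3) peut se mener à partir de l'esquisse donnée plus haut pour la composée des huit rotations~: il suffit de convertir le point obtenu en coordonnées écliptiques, la longitude étant comptée depuis $\mathbf{j}$ dans le sens des signes. En voici le fragment correspondant (toujours à titre indicatif, avec nos conventions d'orientation)~:
\begin{verbatim}
# Esquisse : coordonnees ecliptiques du point renvoye par position_saturne.
import numpy as np

def coordonnees_ecliptiques(point):
    """Longitude (depuis la direction j du point vernal) et latitude, en degres."""
    x, y, z = point
    longitude = np.degrees(np.arctan2(-x, y)) % 360
    latitude  = np.degrees(np.arcsin(z / np.linalg.norm(point)))
    return longitude, latitude
\end{verbatim}
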
\begin{figure} \begin{center} \input{fig044.pdf_tex} \end{center} \caption{\label{fig044}Latitudes de Saturne sur quinze ans, à partir de l'\'Epoque} \end{figure} Quand {Ibn al-\v{S}\=a\d{t}ir} indique que le n{\oe}ud ascendant de Saturne est ``à cinq parts du Lion'', il faut peut-être comprendre qu'il est à $5°$ \emph{avant} le début du Lion, à $115°$ du point vernal, en 1339 ap. J.-C. \textit{i. e.} à $t=8$ ans de l'\'Epoque, quand l'Apogée de Saturne est à $254;52+8\times 0;1=255°$ du point vernal\footnote{\textit{Cf.} p.~\pageref{noeud_sup1} \textit{supra}.}~; on aurait alors $\lambda_{\ascnode}=\lambda_A-140$ comme l'affirmait déjà Ptolémée $1200$ ans auparavant. C'est la valeur que nous avons adoptée dans le commentaire ci-dessus. Hélas, l'élongation n{\oe}ud-Apogée n'est pas constante, et ceci explique le décalage important entre le tracé des latitudes selon {Ibn al-\v{S}\=a\d{t}ir} et la courbe de l'IMCCE sur la figure \ref{fig044}, près du n{\oe}ud ascendant. Nous en concluons qu'{Ibn al-\v{S}\=a\d{t}ir} n'a pas cherché à corriger la valeur donnée par Ptolémée en confrontant son modèle en latitude avec l'observation directe pour Saturne\footnote{Lire $5°$ \emph{après} le début du Lion, \textit{i. e.} à $125°$ du point vernal, conduirait à un désaccord encore plus grand avec d'hypothétiques observations du temps d'{Ibn al-\v{S}\=a\d{t}ir}.}. Il semble néanmoins l'avoir fait pour Jupiter~: au même endroit, il affirme que, pour Jupiter, $\lambda_{\ascnode}=\lambda_A-70=181-70=111$, valeur coïncidant avec celle donnée par Ptolémée, mais il ajoute plus loin\footnote{\textit{Cf.} p.~\pageref{noeud_sup2} \textit{supra}.} que ``d'après ses propres observations'' $\lambda_{\ascnode}=\lambda_A-62$. Enfin, pour Mars, il ne mentionne pas la valeur donnée par Ptolémée\footnote{Selon Ptolémée, pour Mars, $\lambda_{\ascnode}=\lambda_A-85°30'$.} et il affirme simplement que $\lambda_{\ascnode}=\lambda_A-90=138-90=48$. Comme l'a montré Swerdlow\footnote{\textit{Cf.} \cite{swerdlow2005} p.~47.}, la théorie des latitudes des planètes supérieures dans l'\textit{Almageste} n'est guère satisfaisante pour l'observateur moderne~; elle commet souvent des erreurs de l'ordre de $20'$. Ceci tient à trois causes~: -- Dans le référentiel héliocentrique, le diamètre de l'épicycle indique la direction Soleil-Terre. Le plan de l'épicycle devrait donc toujours rester parallèle à l'écliptique\footnote{à peu près, car Soleil moyen $\simeq$ Soleil vrai.}, contrairement à ce que pense Ptolémée. -- Les observations transmises par Ptolémée sont peu précises~; elles sont arrondies au degré près~! De plus ces observations des latitudes maximales, lors des oppositions, ou près des conjonctions, devaient être rares pour Saturne dont la période moyenne est de 30 ans environ. -- Les calculs pour déduire les inclinaisons de l'excentrique et de l'épicycle sont sensibles à la précision de ces observations. Comme {Ibn al-\v{S}\=a\d{t}ir} s'appuie sur les observations de Ptolémée et sur ses inférences concernant les inclinaisons maximales des plans des orbes, son modèle présente alors à peu près les mêmes défauts en latitude que celui de Ptolémée. 
\begin{table} \begin{center}\begin{tabular}{c|c|c|c} & Saturne & Jupiter & Mars\\ $\overrightarrow{P_3P_4}$ & $5;7,30\,\mathbf{j}$ & $4;7,30\,\mathbf{j}$ & $9\,\mathbf{j}$\\ $\overrightarrow{P_4P_5}$ & $-1;42,30\,\mathbf{j}$ & $-1;22,30\,\mathbf{j}$ & $-3\,\mathbf{j}$\\ $\overrightarrow{P_5P}$ & $6;30\,\mathbf{j}$ & $11;30\,\mathbf{j}$ & $39;30\,\mathbf{j}$\\ $(\mathbf{j},\mathbf{u})$ & $-140°$ & $-62°$ & $-90°$\\ $\overline{\lambda}(0)$ & $157;58,20$ & $272;6,10$ & $292;0,0$\\ $\dot{\overline{\lambda}}$, par année persane & $12;13,40$ & $30;20,33$ & $191;17,11$\\ $\lambda_A(0)$ & $254;52$ & $180;52$ & $137;52$ \\ $\dot{\lambda}_A$, par année persane & $0;1$ & $0;1$ & $0;1$ \\ inclinaison de l'orbe incliné & $R_{P_2,\mathbf{u},2°30'}$ & $R_{P_2,\mathbf{u},1°30'}$ & $R_{P_2,\mathbf{u},1°}$ \\ inclinaison du déférent & $R_{P_3,\mathbf{u},-3°30'}$ & $R_{P_3,\mathbf{u},-2°}$ & $R_{P_3,\mathbf{u},-1°37'30''}$ \\ inclinaison du rotateur & $R_{P_4,\mathbf{v},-1°}$ & $R_{P_4,\mathbf{v},-0°30'}$ & $R_{P_4,\mathbf{v},-0°37'30''}$ \end{tabular}\end{center} \caption{Synopsis des paramètres utilisés par {Ibn al-\v{S}\=a\d{t}ir} pour les planètes supérieures} \end{table} \paragraph{Les longitudes des planètes supérieures chez \d{T}\=us{\=\i}, `Ur\d{d}{\=\i} et Sh{\=\i}r\=az{\=\i}} Le modèle de \d{T}\=us{\=\i} pour les planètes supérieures comprend sept orbes\footnote{\textit{Cf.} \cite{altusi1993} II.11[10], p.~208.}. Sur la figure initiale (fig.~\ref{fig046}), $P_1$ est le centre du parécliptique, $P_2$ le centre de l'orbe incliné, $P_3$ le centre d'un orbe déférent excentrique, $P_4$ le centre d'une ``grande sphère'', $P_5$ le centre d'une ``petite sphère'', $P_6$ le centre d'un orbe englobant, $P_7$ le centre de l'épicycle, et $P$ le centre du globe planétaire. On a $O=P_1=P_2$, et $P_6=P_7$. Pour Saturne, on note $e=3;25$ et on a~:\footnote{Pour Mars et Jupiter, on a le même modèle que Saturne, seuls $e$ et $P_7P$ diffèrent.} \begin{align*} &\overrightarrow{P_2P_3}=2e\,\mathbf{j}\\ &\overrightarrow{P_3P_4}=60\,\mathbf{j}\\ &\overrightarrow{P_4P_5}=\overrightarrow{P_5P_6}=-\dfrac{e}{2}\,\mathbf{j}\\ &\overrightarrow{P_7P}=6;30\,\mathbf{j} \end{align*} \d{T}\=us{\=\i} adjoint trois orbes supplémentaires pour rendre compte des mouvements en latitude au moyen d'un ``couple de \d{T}\=us{\=\i} curviligne''\footnote{\textit{Cf.} \cite{altusi1993} II.11[19], p.~218-220.}~; on les négligera ici. Les rotations décrivant le mouvement en longitude de chaque planète supérieure selon \d{T}\=us{\=\i} sont alors~: $$R_{P_1,\lambda_A} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,-2\overline{\kappa}} \,R_{P_6,\overline{\kappa}} \,R_{P_7,\overline{\alpha}}.$$ On aura remarqué que la ``petite'' et la ``grande'' sphère constituent un couple de \d{T}\=us{\=\i} dont l'effet est décrit par $R_{P_4,\overline{\kappa}}\,R_{P_5,-2\overline{\kappa}}\,R_{P_6,\overline{\kappa}}$. \begin{figure} \begin{center} \input{fig046.pdf_tex} \end{center} \caption{\label{fig046}Saturne, al-\d{T}\=us{\=\i}, figure initiale} \end{figure} Le modèle de `Ur\d{d}{\=\i} pour les planètes supérieures comprend cinq orbes dont les centres sont~:\footnote{\textit{Cf.} \cite{saliba1990}, et \cite{rashed1997} p.~119-122.} $O=P_1=P_2$ le centre du Monde, $P_3$ le centre du déférent que Sh{\=\i}r\=az{\=\i} appellera ``déférent corporel'' (car il est bien distinct du déférent de Ptolémée), $P_4$ le centre de l'orbe rotateur, $P_5$ le centre de l'épicycle, et $P$ le centre du globe planétaire. 
Notant toujours, pour Saturne, $e=3;25$, on pose~: \begin{align*} &\overrightarrow{P_2P_3}=\dfrac{3e}{2}\,\mathbf{j}=5;7,30\,\mathbf{j}\\ &\overrightarrow{P_3P_4}=60\,\mathbf{j}\\ &\overrightarrow{P_4P_5}=-\dfrac{e}{2}\,\mathbf{j}\\ &\overrightarrow{P_5P}=6;30\,\mathbf{j} \end{align*} Le mouvement en longitude est alors décrit par les rotations suivantes~: $$R_{P_1,\lambda_A} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}.$$ \begin{figure} \begin{center} \input{fig047.pdf_tex} \end{center} \caption{\label{fig047}Saturne, al-`Ur\d{d}{\=\i}, figure initiale} \end{figure} Pour la Lune, on a vu Sh{\=\i}r\=az{\=\i} proposer un modèle strictement équivalent au modèle de \d{T}\=us{\=\i} sans couple curviligne, assez différent du modèle erroné qu'offrait `Ur\d{d}{\=\i} dans son \textit{Kit\=ab al-hay'a}. Pour les planètes supérieures, Sh{\=\i}r\=az{\=\i} semble avoir hésité, dans les années 1281-1285, entre un modèle fondé sur un ``principe conjectural'' et le modèle de `Ur\d{d}{\=\i} (ou un modèle équivalent) fondé sur un ``principe déductif''. Ma connaissance des trois ouvrages écrits par Sh{\=\i}r\=az{\=\i} dans ces années-là\footnote{Les \textit{Ikht{\=\i}y\=ar\=at-i muzaffar{\=\i}}, la \textit{Nih\=aya al-idr\=ak} et la \textit{Tu\d{h}fa al-sh\=ah{\=\i}ya}. \textit{Cf.} \cite{rashed1997}, \cite{masoumi2013}, \cite{niazi2011}.} s'appuie sur les travaux de Saliba, Gamini-Masoumi et Niazi~; il me faudra peut-être réviser mes jugements quand les textes originaux seront plus accessibles. Niazi voit un peu la \textit{Nih\=aya al-idr\=ak} comme un commentaire de la \textit{Tadhkira} de \d{T}\=us{\=\i} et les \textit{Ikht{\=\i}y\=ar\=at-i muzaffar{\=\i}} comme un auto-commentaire de la \textit{Nih\=aya al-idr\=ak}, bien que les deux ouvrages aient été rédigés presque simultanément\footnote{\textit{C.} \cite{niazi2011} p. 213.}. Il n'est donc pas surprenant que Sh{\=\i}r\=az{\=\i} reprenne la structure du traité de \d{T}\=us{\=\i} dans sa \textit{Nih\=aya al-idr\=ak} et y décrive d'abord le modèle de Ptolémée pour les planètes supérieures\footnote{\textit{Nih\=aya al-idr\=ak}, livre 2, chapitre 8. \textit{Cf.} \cite{niazi2011} p. 150 et appendices où Niazi donne un résumé analytique du chapitre 8 et une édition du début de ce chapitre. Une édition critique de la fin du chapitre 8 figure dans \cite{masoumi2013}.}. Puis, dans son chapitre sur les latitudes, comme l'avait fait \d{T}\=us{\=\i} avec son couple de \d{T}\=us{\=\i}, Sh{\=\i}r\=az{\=\i} propose un nouveau modèle\footnote{\textit{Cf.} \cite{niazi2011} p. 151. Il serait intéressant d'étudier le comportement du modèle de Sh{\=\i}r\=az{\=\i} pour les latitudes.}. Ce modèle est aussi décrit dans les \textit{Ikht{\=\i}y\=ar\=at} où Sh{\=\i}r\=az{\=\i} explique pourquoi il le préfère au modèle de `Ur\d{d}{\=\i}\footnote{\textit{Cf.} \cite{niazi2011} p.~203-204.}~: dans le modèle de `Ur\d{d}{\=\i}, la trajectoire du centre de l'épicycle dans le plan de l'orbe incliné n'est pas exactement un cercle\footnote{On retrouve évidemment le même défaut dans le modèle de \d{T}\=us{\=\i}, comme nous l'avions déjà remarqué pour la Lune (\textit{cf.} la courbe en pointillés, fig. \ref{fig021} p.~\pageref{fig021} \textit{supra}).}, mais c'en est bien un dans le nouveau modèle de Sh{\=\i}r\=az{\=\i}. 
Dans les \textit{Ikht{\=\i}y\=ar\=at}, Sh{\=\i}r\=az{\=\i} utilise les termes ``déductif'' et ``conjectural'' pour marquer l'opposition entre les deux modèles\footnote{Le ``principe déductif'', \textit{istinb\=a\d{t}{\=\i}}, désigne le dispositif à l'{\oe}uvre dans le modèle de `Ur\d{d}{\=\i}, et le ``principe conjectural'', \textit{\d{h}ads{\=\i}}, désigne celui à l'{\oe}uvre dans le nouveau modèle de Sh{\=\i}r\=z{\=\i}. \`A cette occasion, il utilise aussi un troisième terme, ``innovant'', \textit{ibd\=a`{\=\i}}, pour désigner un principe que Niazi n'a pas su expliquer (\textit{cf.} \cite{niazi2011} note 52 p.~138). Ce terme se rapporte peut-être aux dispositifs utilisés pour décrire les mouvements en latitude, puisque {Ibn al-\v{S}\=a\d{t}ir} en parle ainsi dans la \textit{Nih\=aya al-S\=ul} (p.~\pageref{ibdaai} \textit{supra})~: ``Le principe qu'il a nommé l'invention (\textit{ibd\=a`{\=\i}}) concernant le désordre des latitudes des astres est impossible.'' Selon Niazi, les termes \textit{istinb\=a\d{t}{\=\i}} et \textit{\d{h}ads{\=\i}} sont aussi utilisé dans la \textit{Nih\=aya al-idr\=ak}, mais ils seront absents de la \textit{Tu\d{h}fa} (\cite{niazi2011} p.~159).}. Pourtant, Niazi montre qu'après de nombreuses révisions, l'état final du chapitre sur les latitudes de la \textit{Nih\=aya al-idr\=ak} semble préférer une solution fondée sur le ``principe déductif'', c'est-à-dire à la manière de `Ur\d{d}{\=\i}~! La \textit{Tu\d{h}fa}, au dire de Niazi, reflète sûrement son choix définitif, et on y trouve le modèle suivant. C'est le seul que nous retiendrons ici~: \begin{quote} ``Le \textit{premier orbe} est le parécliptique~; la partie convexe [du parécliptique] de Saturne touche la partie concave du huitième orbe, sa partie concave touche la partie convexe du parécliptique de Jupiter, la partie concave du parécliptique de Jupiter touche la partie convexe du parécliptique de Mars, et la partie concave du parécliptique de Mars touche la partie convexe du parécliptique de Vénus. Le \textit{deuxième orbe} est le déférent excentrique portant le centre de [l'orbe] englobant~; il est dans la partie supérieure du parécliptique~; la distance de son centre au centre du déférent imaginaire est la moitié de [la distance] entre les centres du Monde et du déférent imaginaire de l'astre~; et sa ceinture est inclinée par rapport à la trajectoire du Soleil, de la grandeur de l'inclinaison de [l'orbe] incliné de l'astre, inclinaison constante. Le \textit{troisième orbe} est [l'orbe] englobant~; il est dans la partie supérieure de l'excentrique~; son axe est perpendiculaire au plan de la ceinture de l'excentrique, et sa ceinture est dans son plan c'est-à-dire dans le plan de [l'orbe] incliné. Le \textit{quatrième orbe} est le fléchisseur\footnote{Nous traduisons ainsi \textit{mumayyil}~; cet orbe est responsable de l'inclinaison variable de la ceinture de l'épicycle.}~; il est dans la partie inférieure de l'orbe englobant~; son axe est parallèle à l'axe de l'orbe englobant et perpendiculaire au plan de l'orbe incliné, et sa ceinture est aussi dans son plan~; et la distance de son centre au centre de l'orbe englobant est égale à celle entre les centres de l'excentrique et du déférent imaginaire de l'astre [...]. Le \textit{cinquième orbe}~: l'astre tourne autour du centre du fléchisseur et d'un axe coupant l'axe du fléchisseur en le centre commun. 
La ceinture [du cinquième orbe] est inclinée par rapport à la ceinture [du fléchisseur] vers le Nord et vers le Sud, de la grandeur de l'inclinaison, pour l'astre en question, par rapport au plan de l'orbe incliné, et c'est une inclinaison constante.''\footnote{Notre traduction. \textit{Cf.} manuscrit Arabe 2516 à la Bibliothèque Nationale de France~; chapitre 11, f. 45v.} \end{quote} Les centres des orbes dans le modèle de la \textit{Tu\d{h}fa} sont~: $O=P_1$ centre du parécliptique, $P_3$ centre d'un ``déférent corporel'', $P_4$ centre de l'orbe ``englobant'', $P_5$ centre d'un orbe ``fléchisseur'', $P_6=P_5$ centre de l'épicycle, et $P$ centre du globe planétaire. On notera $D$ le centre du ``déférent imaginaire'' qui n'est autre que le centre de la trajectoire approximativement circulaire du point $P_5$ au sein du parécliptique. On a~: \begin{align*} &\overrightarrow{P_1P_3}=\dfrac{3e}{2}\,\mathbf{j}=5;7,30\,\mathbf{j}\\ &\overrightarrow{P_3P_4}=60\,\mathbf{j}\\ &\overrightarrow{P_4P_5}=-\dfrac{e}{2}\,\mathbf{j}=-1;42,30\,\mathbf{j}\\ &\overrightarrow{P_5P}=6;30\,\mathbf{j} \end{align*} Si on néglige les inclinaisons, la composée de rotations entraînant le point $P$ dans son mouvement en longitude est alors~: $$R_{P_1,\lambda_A} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,-\overline{\kappa}} \,R_{P_6,\overline{\alpha}}.$$ Pour les longitudes, on voit que ce modèle est équivalent à celui de `Ur\d{d}{\=\i}. Si Sh{\=\i}r\=az{\=\i} semble avoir finalement renoncé à proposer un modèle différent, il a cependant offert une contribution originale à la fin du chapitre 8 du livre 2 de sa \textit{Nih\=aya}. Pour répondre à d'éventuels contradicteurs qui reprocheraient aux astronomes de Maragha d'avoir déplacé le centre du déférent concentrique que Ptolémée avait posé au milieu entre le centre du Monde et le point équant (``bissection de l'excentricité''), Sh{\=\i}r\=az{\=\i} tente d'abord de justifier par des arguments issus de l'observation le choix fait par Ptolémée. Gamini et Masoumi ont récemment analysé cette intéressante réflexion sur les ``arcs rétrogrades''\footnote{Le passage correspondant de la \textit{Nih\=aya al-idr\=ak} et des \textit{Ikht{\=\i}y\=ar\=at} est aussi étudié dans \cite{niazi2011} p.~195-200, mais Niazi a une interprétation légèrement différente de celle de \cite{masoumi2013}. \`A ses yeux il faut voir dans tout ce passage une tentative de justifier la préférence pour \textit{les deux} modèles de `Ur\d{d}{\=\i} et de Sh{\=\i}r\=az{\=\i} vis-à-vis d'éventuels contradicteurs qui souhaiteraient revenir à Ptolémée~; aux yeux de Gamini-Masoumi (\cite{masoumi2013} p.~51) il s'agit plutôt de justifier le refus du modèle de `Ur\d{d}{\=\i}.}. Ce faisant, Sh{\=\i}r\=az{\=\i} caractérise le centre du déférent comme étant le centre de la trajectoire, circulaire chez Ptolémée, du centre de l'épicycle dans le plan de l'orbe incliné, et il caractérise le point équant comme étant le point autour duquel le mouvement du centre de l'épicycle a une vitesse angulaire constante. Dans tous les modèles des astronomes de Maragha ainsi qu'{Ibn al-\v{S}\=a\d{t}ir} pour les planètes supérieures, il existe bien un tel ``point équant'' et un point proche du centre de la trajectoire circulaire ou presque circulaire du centre de l'épicycle -- même si ces points ne sont pas toujours mentionnés dans les descriptions des modèles. Pour les modèles de `Ur\d{d}{\=\i} et de Sh{\=\i}r\=az{\=\i}, ce sont les points $E$ et $D$ sur nos figures \ref{fig047} et \ref{fig048}. 
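
Ces deux caractérisations se prêtent à une vérification numérique. L'esquisse suivante (Python~; plan complexe, axe réel porté par $\mathbf{j}$~; on place, à la manière ptoléméenne, $D$ à la distance $e$ du centre du Monde et le point équant $E$ à $2e$ sur la ligne des apsides) calcule la position du centre de l'épicycle dans le modèle de `Ur\d{d}{\=\i} pour Saturne~: vu de $E$, le mouvement est exactement uniforme, tandis que la distance à $D$ varie légèrement -- la trajectoire n'est pas tout à fait un cercle, comme le relevait Sh{\=\i}r\=az{\=\i}.

\begin{verbatim}
# Esquisse : centre de l'epicycle dans le modele de 'Urdi (Saturne),
#   P5'(kappa) = P3 + 60 e^{i kappa} - (e/2) e^{2 i kappa},  avec P3 = 3e/2.
from cmath import exp, phase, pi
from math import degrees

e = 3 + 25/60                  # excentricite de Saturne : 3;25
D, E = e, 2 * e                # centre du deferent imaginaire et point equant
dists = []
for pas in range(3600):
    k = pas / 10.0
    w = exp(1j * pi * k / 180.0)
    P5p = 1.5 * e + 60 * w - 0.5 * e * w * w
    dists.append(abs(P5p - D))
    # vu du point equant E, l'argument de P5' - E vaut exactement kappa :
    assert abs((degrees(phase(P5p - E)) - k + 180) % 360 - 180) < 1e-9
print(min(dists), max(dists))  # ~60.000 et ~60.097 : presque, mais pas, un cercle
\end{verbatim}
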
\begin{figure} \begin{center} \input{fig048.pdf_tex} \end{center} \caption{\label{fig048}Saturne, Sh{\=\i}r\=az{\=\i}, figure initiale} \end{figure} En fait le modèle de `Ur\d{d}{\=\i} (et donc aussi celui de Sh{\=\i}r\=az{\=\i}) est strictement équivalent au modèle de \d{T}\=us{\=\i}. Partons en effet du modèle de `Ur\d{d}{\=\i} décrit comme ci-dessus par la composée de rotations $R_{P_1,\lambda_A} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}$ appliquée au point $P$ de la fig. \ref{fig047}. $P_1$, $P_2$, $P_3$, $P_4$, $P_5$ désigneront les centres des orbes dans la figure initiale de `Ur\d{d}{\=\i}, et $\tilde{P}_1$, $\tilde{P}_2$, $\tilde{P}_3$, $\tilde{P}_4$, $\tilde{P}_5$, $\tilde{P}_6$, $\tilde{P}_7$ désigneront les centres des orbes dans la figure initiale d'al-\d{T}\=us{\=\i}. On a~: $$O=P_1=P_2=\tilde{P}_1=\tilde{P}_2,\quad\overrightarrow{P_3\tilde{P}_3}=\frac{e}{2}\,\mathbf{j},\quad P_4=\tilde{P}_5,$$ $$P_5=\tilde{P}_6=\tilde{P}_7,\quad \overrightarrow{P_4\tilde{P}_4}=\frac{e}{2}\,\mathbf{j}.$$ Alors~: \begin{align*} R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}} &= \left(R_{P_3,\overline{\kappa}}\,R_{P_4,-\overline{\kappa}}\right) R_{P_4,2\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}\\ &= R_{\tilde{P}_3,\overline{\kappa}} \left(R_{\tilde{P}_4,-\overline{\kappa}} \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}\right)\quad\text{(proposition 1)}\\ &=R_{\tilde{P}_3,\overline{\kappa}} \,R_{\tilde{P}_4,\overline{\kappa}} \,R_{P_4,-2\overline{\kappa}} \,R_{P_5,\overline{\alpha}+\overline{\kappa}}\quad\text{(proposition 3)}\\ &=R_{\tilde{P}_3,\overline{\kappa}} \,R_{\tilde{P}_4,\overline{\kappa}} \,R_{\tilde{P}_5,-2\overline{\kappa}} \,R_{\tilde{P}_6,\overline{\kappa}} \,R_{\tilde{P}_7,\overline{\alpha}}. \end{align*} \label{equiv_sup} Enfin, pour les longitudes des planètes supérieures, le modèle d'{Ibn al-\v{S}\=a\d{t}ir} est équivalent aux modèles des trois savants de Maragha. En effet, notons toujours $P_i$ les centres des orbes dans la figure initiale de `Ur\d{d}{\=\i}, mais notons à présent $\tilde{P}_i$ les centres des orbes d'{Ibn al-\v{S}\=a\d{t}ir}. On a~: $$O=P_1=P_2=\tilde{P}_1=\tilde{P}_2,\quad P_4=\tilde{P}_4,\quad P_5=\tilde{P}_5,\quad P=\tilde{P}.$$ Comme $\overrightarrow{\tilde{P}_2\tilde{P}_3}=\overrightarrow{P_3P_4}$, en vertu de la proposition 1~: \begin{align*} R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}} &=\left(R_{P_3,\overline{\kappa}}\,R_{P_4,-\overline{\kappa}}\right) \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}\\ &=R_{\tilde{P}_2,\overline{\kappa}} \,R_{\tilde{P}_3,-\overline{\kappa}} \,R_{\tilde{P}_4,2\overline{\kappa}} \,R_{\tilde{P}_5,\overline{\alpha}-\overline{\kappa}} \end{align*} Une démonstration plus intuitive, au moyen d'une figure géométrique, circule beaucoup dans la littérature secondaire depuis Kennedy\footnote{\textit{Cf.} \cite{kennedy1966} p.~367.}. Nous l'avons reproduite, fig.~\ref{fig051} p.~\pageref{fig051}. 
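
Cette équivalence se vérifie aussi numériquement. En voici une esquisse en Python (conventions de notre fait~: plan complexe, axe réel porté par $\mathbf{j}$, la rotation la plus à droite agit la première, et l'on omet la rotation commune $R_{P_1,\lambda_A}$)~: pour des valeurs quelconques de $\overline{\kappa}$ et $\overline{\alpha}$, les trois composées envoient le point $P$ au même endroit.

\begin{verbatim}
# Esquisse : equivalence numerique des modeles de 'Urdi, d'al-Tusi et
# d'Ibn al-Shatir pour les longitudes de Saturne.
from cmath import exp, pi
from random import uniform

def rot(c, a):                     # rotation plane de centre c, d'angle a (degres)
    w = exp(1j * pi * a / 180.0)
    return lambda z: c + w * (z - c)

def compose(*rotations):           # la rotation la plus a droite agit la premiere
    def f(z):
        for r in reversed(rotations):
            z = r(z)
        return z
    return f

e, r_ep = 3 + 25/60, 6.5           # excentricite 3;25 et rayon de l'epicycle 6;30
P = 60 + e + r_ep                  # position initiale commune de Saturne

for _ in range(5):
    k, a = uniform(0, 360), uniform(0, 360)
    urdi   = compose(rot(1.5*e, k), rot(60 + 1.5*e, k), rot(60 + e, a - k))
    tusi   = compose(rot(2*e, k), rot(60 + 2*e, k), rot(60 + 1.5*e, -2*k),
                     rot(60 + e, k), rot(60 + e, a))
    shatir = compose(rot(0, k), rot(60, -k), rot(60 + 1.5*e, 2*k), rot(60 + e, a - k))
    assert abs(urdi(P) - tusi(P)) < 1e-9 and abs(urdi(P) - shatir(P)) < 1e-9
print("equivalence verifiee")
\end{verbatim}
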
\begin{figure} \begin{center} \includegraphics{fig051.jpg} \end{center} \caption{\label{fig051}Démonstration d'équivalence entre les modèles en longitude de divers savants, selon Kennedy \cite{kennedy1966} p.~367} \end{figure} \paragraph{Sections planes des orbes solides de Saturne} Les sections planes des orbes solides de Saturne sont représentées à la figure \ref{fig049} qui respecte à peu près les ordres de grandeurs sauf pour $OP_3$. Les rayons sont~: \begin{align*} \text{rayon du globe planétaire} &= r\\ \text{rayon de l'orbe de l'épicycle} &=6;30+r\\ \text{rayon du rotateur} &=1;42,30+6;30+r=8;12,30+r\\ \text{rayon du déférent} &=5;7,30+1;42,30+6;30+r=13;20+r\\ \text{rayon extérieur du l'orbe incliné}&=60+5;7,30+1;42,30+6;30+r=73;20+r\\ \text{rayon intérieur de l'orbe incliné}&=60-(5;7,30+1;42,30+6;30+r)=46;40-r\\ \text{épaisseur du parécliptique}&=0;40 \end{align*} Ici comme pour l'orbe total du Soleil, le choix de l'épaisseur $0;40$ du parécliptique est arbitraire, et {Ibn al-\v{S}\=a\d{t}ir} propose encore d'ajouter un complément à l'épaisseur de l'orbe incliné. Mais pour Saturne, il néglige $r$ quand il calcule les rayons des orbes solides. Il n'en tient compte qu'à la fin de son raisonnement, dans le ``complément'' qu'il ajoute à l'épaisseur de l'orbe incliné, quand il dit~: \begin{quote} ``La distance minimale est à la surface intérieure de l'orbe incliné de Saturne, en parts, quarante-six et deux tiers~; en plus de la réunion des orbes, il faut aussi compter le rayon de l'astre donc cela fait quarante-six parts.''\footnote{\textit{Cf.} p.~\pageref{contiguite_sat} \textit{supra}. Bien que le rayon intérieur de l'orbe incliné passe à $46$, il semble qu'{Ibn al-\v{S}\=a\d{t}ir} maintienne son rayon extérieur à $73;20$~: c'est l'épaisseur \textit{du parécliptique} qui fait passer le rayon extérieur \textit{du système d'orbes} à $74$. Dans l'orbe incliné, si l'on tient compte du rayon $r$ de la planète, il faut alors supposer que le cercle médian n'est plus un cercle de rayon $60$ mais un cercle de rayon légèrement inférieur (entre $59;40$ et $60$ si $2r<0;40$). En toute rigueur, il faudrait donc réduire proportionnellement le système entier des orbes de Saturne par rapport aux valeurs rapportées ci-dessus.} \end{quote} Il vise large en arrondissant le rayon intérieur de l'orbe incliné à $46$. Le rayon intérieur du système d'orbes de Saturne est donc $46$, et son rayon extérieur est, en tenant compte de l'épaisseur du parécliptique, $73;20+0;40=74$. Il ne remettra pas ces valeurs en question dans la conclusion de la \textit{Nihaya}, et il suivra le même procédé pour les quatre autres planètes. 
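
Pour qui souhaite refaire ces additions, voici une petite aide au calcul sexagésimal (esquisse en Python, noms de notre fait)~; le rayon $r$ du globe planétaire est laissé de côté, comme le fait ici {Ibn al-\v{S}\=a\d{t}ir}.

\begin{verbatim}
# Esquisse : verification des rayons des orbes solides de Saturne.
def sexa(*chiffres):                  # sexa(5, 7, 30) -> 5;7,30 en decimal
    return sum(c / 60**i for i, c in enumerate(chiffres))

def vers_sexa(x, places=2):           # decimal -> chaine "d;m,s"
    sortie = []
    for _ in range(places + 1):
        d = int(x + 1e-9)             # garde-fou contre les arrondis binaires
        sortie.append(d)
        x = (x - d) * 60
    return str(sortie[0]) + ";" + ",".join(str(d) for d in sortie[1:])

rotateur = sexa(1, 42, 30) + sexa(6, 30)
deferent = sexa(5, 7, 30) + rotateur
print(vers_sexa(rotateur), vers_sexa(deferent))            # 8;12,30   13;20,0
print(vers_sexa(60 + deferent), vers_sexa(60 - deferent))  # 73;20,0   46;40,0
\end{verbatim}
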
\begin{figure} \begin{center} \input{fig049.pdf_tex} \end{center} \caption{\label{fig049}Saturne, orbes solides} \end{figure} \paragraph{Les orbes solides de Jupiter} \begin{align*} \text{rayon du globe planétaire}&=r\\ \text{rayon de l'orbe de l'épicycle}&=11;30+r\\ \text{rayon du rotateur}&=1;22,30+11;30+r=12;52,30+r\\ \text{rayon de déférent}&=4;7,30+1;22,30+11;30+r=17+r\\ \text{rayon extérieur de l'orbe incliné}&=60+4;7,30+1;22,30+11;30+r=77+r\\ \text{rayon intérieur de l'orbe incliné}&=60-(4;7,30+1;22,30+11;30+r)=43-r\\ \text{épaisseur du parécliptique}&=1 \end{align*} \paragraph{Les orbes solides de Mars} \begin{align*} \text{rayon du globe planétaire}&=r\\ \text{rayon de l'orbe de l'épicycle}&=39;30+r\\ \text{rayon du rotateur}&=3+39;30+r=42;30+r\\ \text{rayon du déférent}&=9+3+39;30+r=51;30+r\\ \text{rayon extérieur de l'orbe incliné}&=60+9+3+39;30+r=111;30+r\\ \text{rayon intérieur de l'orbe incliné}&=60-(9+3+39;30+r)=8;30-r\\ \text{épaisseur du parécliptique}&=0;30 \end{align*} \paragraph{Genèse du modèle en latitude pour les planètes supérieures} Concernant les latitudes des planètes supérieures, {Ibn al-\v{S}\=a\d{t}ir} s'estime satisfait du progrès qu'il a accompli. Voici la critique qu'il adresse à ses prédécesseurs~: \begin{quote} ``Le rapprochement de la ceinture de l'épicycle et de la ceinture de l'orbe incliné, sans un mobile qui ne perturbe les mouvements en longitude, dans l'astronomie classique de l'\textit{Almageste} et des \emph{Hypothèses} concernant les corps mûs en latitude, est impossible. Ce dont se sont efforcés Ptolémée et ses successeurs comme Ibn al-Haytham dans son \textit{\'Epître}, Na\d{s}{\=\i}r al-\d{T}\=us{\=\i} dans sa \textit{Tadhkira}, Mu'ayyad al-`Ur\d{d}{\=\i} dans son \textit{Astronomie}, Qu\d{t}b al-Sh{\=\i}r\=az{\=\i} dans sa \uwave{\textit{Muntaha 'adw\=ar}}, et ce que quiconque a repris de leurs propos dans son livre [...], rien de tout cela ne suffit au but visé pour la longitude et la latitude, et quand cela convient pour l'une, cela pêche pour l'autre. Que Dieu pardonne à ceux qui ont ici admis leur faiblesse.''\footnote{\textit{Cf.} p.~\pageref{critique_lat1} \textit{supra}.} \end{quote} \`A cette critique s'en ajoutera une seconde~; mais il faut déjà comprendre la première. Pour expliquer l'oscillation du plan de l'épicycle\footnote{Pour Saturne, selon Ptolémée, l'oscillation du plan de l'épicycle autour du plan de l'excentrique est d'amplitude $i_1+i_2=4°30'$. Il oscille autour d'un plan parallèle à l'écliptique avec une amplitude de $i_2=2°$ puisque l'excentrique est lui-même incliné de $i_1=2°30'$. \textit{Cf.} \cite{swerdlow2005} p.~43.}, Ptolémée avait décrit une sorte de modèle mécanique dans l'\textit{Almageste} en termes équivoques~: \begin{quote} ``En résumé général, suivant les hypothèses, les cercles excentriques des cinq planètes se trouvent inclinés sur le plan du cercle milieu du zodiaque autour du centre du zodiaque, et cette inclinaison est constante dans Saturne, Jupiter et Mars [...]'' ``Les diamètres apogées des épicycles qui, à \emph{certain point de départ}, étoient dans le plan de l'excentrique, sont transportés par de petits cercles fixés, pour ainsi dire, à leurs extrémités périgées, et d'un rayon propre à représenter les inégalités observées dans les latitudes. Mais ces petits cercles sont \emph{perpendiculaires aux plans des excentriques}~; ils ont leurs centres dans ces plans [...]'' ``Les diamètres qui coupent à angles droits les diamètres apogées [...] 
demeurent constamment parallèles au plan de l'écliptique [...]'' ``Au sujet de ces petits cercles qui font ainsi varier la position des épicycles, il faut d'abord remarquer ce qui suit. Ils sont partagés en deux également \emph{par les plans sur lesquels nous disions que se fait la variation de l'inclinaison} [...]'' ``Qu'on objecte pas à ces hypothèses, qu'elles sont trop difficiles à saisir [...]''\footnote{\textit{Cf.} \cite{ptolemy1952}, chap. XIII.2. Nos italiques~; nous adoptons ici la traduction française de Halma.} \end{quote} Peut-être à la recherche d'une \emph{description exacte} du mouvement du plan de l'épicycle, alors même qu'un \emph{calcul exact} des latitudes résultant de l'inclinaison des épicycles semblait si difficile\footnote{Ibn al-Haytham renonce à ce calcul dans sa \textit{Configuration des mouvements}, \textit{cf.} \cite{rashed2006} p.~444-447.}, al-\d{H}asan Ibn al-\d{H}asan Ibn al-Haytham a précisé le modèle de Ptolémée dans un texte aujourd'hui perdu sur \textit{Le mouvement d'enroulement}, probablement antérieur à son traité sur \textit{La configuration des mouvements}. C'est sûrement à ce texte qu'{Ibn al-\v{S}\=a\d{t}ir} fait allusion. Heureusement pour nous, al-\d{T}\=us{\=\i} a décrit dans la \textit{Ta\b{d}kira} le dispositif conçu par Ibn al-Haytham\footnote{\textit{Cf.} \cite{altusi1993} II.11 [16].}~: il s'agit d'adjoindre à l'épicycle deux orbes ou sphères homocentriques comme sur la figure \ref{fig052}, où l'axe de l'épicycle est perpendiculaire au plan de la section représentée sur la figure. Le mouvement propre de l'épicycle devant être rapporté au référentiel solide constitué par la seconde sphère, c'est un diamètre de la seconde sphère qui jouera le rôle de l'``apogée moyen'' de l'épicycle. Que ce diamètre soit l'axe $\mathbf{j}$ de la seconde sphère. Le mouvement de rotation de la première sphère autour de son axe $\mathbf{t}$ aura pour effet de faire ``osciller'' la direction du vecteur $\mathbf{j}$ autour du vecteur $\mathbf{t}$. Le rôle du mouvement de rotation de la seconde sphère autour de $\mathbf{j}$ est, comme pour la troisième sphère d'un couple de \d{T}\=us{\=\i}, d'annuler le moment angulaire imprimé à l'épicycle par le mouvement de la première sphère. Le plan de l'épicycle oscillera alors autour d'un plan contenant le vecteur $\mathbf{t}$. Reste à comprendre où fixer les pôles de la première sphère dans le système d'orbes assurant les mouvements en longitude. \d{T}\=us{\=\i} reste vague~; il nous faut donc deviner. \begin{figure} \begin{center} \input{fig052.pdf_tex} \end{center} \caption{\label{fig052}Ibn al-Haytham's \emph{Iltif\=af}, according to al-\d{T}\=us{\=\i}} \end{figure} Naïvement, plaçons l'axe de la première sphère, c'est-à-dire le vecteur $\mathbf{t}$, dans le plan de la figure initiale (notre fig. \ref{fig046} p.~\pageref{fig046}), et fixons ses pôles dans la ``sphère englobante''. Le modèle de \d{T}\=us{\=\i} auquel on adjoint les deux orbes d'Ibn al-Haytham contient maintenant neuf orbes\footnote{Voir notre fig.~\ref{fig053} p.~\pageref{fig053}, figure plane qui, bien sûr, n'est plus suffisante pour décrire la configuration initiale des orbes.}~: -- le parécliptique de centre $P_1$, -- l'orbe incliné en $P_2$, -- l'orbe déférent en $P_3$, -- une ``grande sphère'' en $P_4$, une ``petite sphère'' en $P_5$ et une ``sphère englobante'' en $P_6$, formant le couple de \d{T}\=us{\=\i}, -- les deux sphères d'Ibn al-Haytham en $P_7$ et $P_8$, où $P_8=P_7=P_6$, -- l'épicycle en $P_9=P_8$, -- Saturne en $P$.
\begin{figure} \begin{center} \input{fig053.pdf_tex} \end{center} \caption{\label{fig053}Saturne d'après la \textit{Ta\b{d}kira} II.11[16]} \end{figure} Les rotations décrivant le mouvement de l'astre sont~: $$R_{P_1,\lambda_A} \,R_{P_2,\mathbf{u},2°30'} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,-2\overline{\kappa}} \,R_{P_6,\overline{\kappa}} \,R_{P_7,\mathbf{t},-(\overline{\kappa}+140°)} \,R_{P_8,\mathbf{j},(\overline{\kappa}+140°)} \,R_{P_9,\overline{\alpha}}$$ où $\mathbf{u}=\cos(50°)\mathbf{i}-\sin(50°)\mathbf{j}$, et $\mathbf{t}=\sin(4°30')\mathbf{i}+\cos(4°30')\mathbf{j}$. Al-\d{T}\=us{\=\i} juge cette solution peu satisfaisante, car~: \begin{quote} ``Les petits cercles mentionnés, en entraînant des inclinaisons latitudinales, causent aussi des inclinaisons en longitude, et les positions des apogées et des périgées sont donc différents de ce qu'elles devraient être quant aux points avec lesquels ils devraient être alignés.''\footnote{\textit{Cf.} \cite{altusi1993} II.11 [15].} \end{quote} Dans sa première critique, {Ibn al-\v{S}\=a\d{t}ir} ne fait que répéter celle qu'al-\d{T}\=us{\=\i} adresse au modèle de Ptolémée précisé par Ibn al-Haytham, et il ajoute que ni al-\d{T}\=us{\=\i}, ni `Ur\d{d}{\=\i}, ni Sh{\=\i}r\=az{\=\i} n'ont su résoudre ce problème~; pourtant al-\d{T}\=us{\=\i} propose une autre solution dans la \textit{Ta\b{d}kira}~: il propose de remplacer la première sphère d'Ibn al-Haytham par \emph{deux} sphères homocentriques (\textit{cf.} figure \ref{fig054}). L'épicycle est toujours contenu dans la seconde sphère d'Ibn al-Haytham, mais elle-même est contenue dans un \emph{petite sphère}, elle-même contenue dans une \emph{grande sphère}, elle-même contenue dans la \emph{sphère englobante} centrée en $P_6$. Leurs centres $P_6=P_7=P_8=P_9=P_{10}$ sont tous confondus. La vitesse de rotation de la petite sphère est double et en sens inverse de celle de la grande sphère~: les axes des trois sphères additionnelles étant presque parallèles, elles agiront presque comme un couple de \d{T}\=us{\=\i}.\label{notations_couple_curviligne} \begin{figure} \begin{center} \input{fig054.pdf_tex} \end{center} \caption{\label{fig054}Le couple curviligne de \d{T}\=us{\=\i} pour les latitudes} \end{figure} Quel est le mouvement de l'apogée moyen produit par ce dispositif~? \`A la surface de la sphère englobante, la trace de l'axe de la seconde sphère d'Ibn al-Haytham est une courbe fermée qu'al-\d{T}\=us{\=\i} pensait pouvoir assimiler à un petit arc d'un grand cercle de la sphère englobante, parcouru alternativement dans un sens puis dans l'autre~: l'apogée moyen décrit bien un mouvement oscillatoire\footnote{Ragep l'explique très bien, \textit{cf.} \cite{altusi1993} fig. C26 et note 54 p.~455.}. Comment insérer ce dispositif dans le modèle d'al-\d{T}\=us{\=\i} pour Saturne~? Comme on l'a fait pour le dispositif d'Ibn al-Haytham, on choisira naïvement d'insérer l'axe de la grande sphère dans le plan de la figure initiale, donc dans le plan de la sphère englobante, dans la direction du vecteur $\mathbf{j}$. On notera~: \begin{align*} \mathbf{u}&=\cos(50°)\mathbf{i}-\sin(50°)\mathbf{j}\\ \mathbf{v}&=\cos(4°30')\mathbf{j}-\sin(4°30')\mathbf{k}\\ \mathbf{w}&=\cos(2°15')\mathbf{j}-\sin(2°15')\mathbf{k} \end{align*} où $\mathbf{v}$ est l'axe de la seconde sphère d'Ibn al-Haytham et $\mathbf{w}$ est l'axe de la ``petite sphère''. 
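
Avant d'écrire la composée complète, on peut illustrer numériquement le comportement du couple curviligne, pris sous la forme $R_{\mathbf{j},\theta}\,R_{\mathbf{w},-2\theta}\,R_{\mathbf{v},\theta}$ explicitée ci-après. L'esquisse suivante (Python, avec \texttt{numpy}~; formule de Rodrigues) applique les deux rotations extérieures à l'axe $\mathbf{v}$ -- la rotation intérieure, d'axe $\mathbf{v}$, le laisse fixe -- et mesure sa position le long de l'arc $(\mathbf{j},\mathbf{k})$ ainsi que son écart au grand cercle~: on retrouve une oscillation de $\pm 4°30'$ et un écart transverse de quelques secondes d'arc seulement, ce qui éclaire l'assimilation faite par al-\d{T}\=us{\=\i}.

\begin{verbatim}
# Esquisse : trace de l'axe v sous l'action du couple curviligne
# (rotations homocentriques, formule de Rodrigues).
import numpy as np

def R(axe, a_deg):
    u = np.asarray(axe, float) / np.linalg.norm(axe)
    a = np.radians(a_deg)
    K = np.array([[0., -u[2], u[1]], [u[2], 0., -u[0]], [-u[1], u[0], 0.]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

i, j, k = np.eye(3)
v = np.cos(np.radians(4.5)) * j - np.sin(np.radians(4.5)) * k    # incline de 4;30
w = np.cos(np.radians(2.25)) * j - np.sin(np.radians(2.25)) * k  # incline de 2;15

sur_l_arc, transverse = [], []
for theta in range(0, 361, 5):
    d = R(j, theta) @ R(w, -2 * theta) @ v
    sur_l_arc.append(np.degrees(np.arcsin(-d @ k)))    # position sur l'arc (j,k)
    transverse.append(abs(np.degrees(np.arcsin(d @ i))))  # ecart au grand cercle

print(max(sur_l_arc), min(sur_l_arc))   # ~ +4.5 et -4.5 degres
print(max(transverse) * 3600)           # quelques secondes d'arc seulement
\end{verbatim}
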
La composée de rotations décrivant le mouvement de Saturne est alors~: $$R_{P_1,\lambda_A} \,R_{P_2,\mathbf{u},2°30'} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,-2\overline{\kappa}} \,R_{P_6,\overline{\kappa}} \,T\,R_{P_{10},\mathbf{i},-4°30'}\,R_{P_{10},\overline{\alpha}}$$ où $T=R_{P_7,\mathbf{j},\overline{\kappa}+50°}\,R_{P_8,\mathbf{w},-2(\overline{\kappa}+50°)}\,R_{P_9,\mathbf{v},\overline{\kappa}+50°}$ est le ``couple curviligne'' des trois sphères additionnelles. \begin{figure} \begin{center} \input{fig055.pdf_tex} \end{center} \caption{\label{fig055}Interprétation naïve de la solution d'al-\d{T}\=us{\=\i} utilisant son ``couple curviligne'' pour les latitudes de Saturne} \end{figure} Pour Saturne dont l'épicycle est petit, la solution d'Ibn al-Haytham et celle du couple curviligne d'al-\d{T}\=us{\=\i} se valent y compris pour les longitudes. Pourtant elles ont toutes deux un défaut notable~: l'épicycle garde une inclinaison par rapport à l'écliptique près des n{\oe}uds, causant les larges oscillations en latitude entre $t=10$ et $t=12$ sur le graphe de la figure \ref{fig055}. Et pour cause~: nous avons, ``naïvement'', fixé les pôles de la plus grande des deux ou trois sphères additionnelles dans le plan de l'orbe englobant, lui-même coïncidant avec le plan de l'excentrique de Ptolémée. L'épicycle va donc osciller autour de l'excentrique~: aux n{\oe}uds, il sera confondu avec le plan de l'excentrique, dont l'inclinaison est de $2°30'$ par rapport à l'écliptique. Il y a donc erreur. Mais où devrait-on attacher les pôles de la grande sphère dans la sphère englobante, si ce n'est dans le plan de cet orbe~? Al-\d{T}\=us{\=\i} est plutôt vague. Selon lui, les pôles de la première sphère d'Ibn al-Haytham sont à une distance des pôles de sa seconde sphère ``égale à la déviation maximale du diamètre [de l'apogée moyen] de la planète par rapport \emph{au plan dans lequel elle n'a pas de latitude}''\footnote{\textit{Ta\b{d}kira} \cite{altusi1993} II.11[16]. On remarque qu'en décrivant le modèle mécanique de Ptolémée, al-\d{T}\=us{\=\i} est, sur ce point, à peu près aussi vague que Ptolémée~: le modèle résulte selon al-\d{T}\=us{\=\i} en un ``déplacement des extrémités des diamètres de l'épicycle \emph{hors des plans dans lesquels ils n'ont pas d'inclinaison}'' (\textit{Ta\b{d}kira} II.11[14]).}. Il faut donc penser que les axes des deux sphères d'Ibn al-Haytham devraient être inclinés de $2°$ l'un par rapport à l'autre, puisque l'épicycle est incliné de $2°$ au plus par rapport à l'écliptique. Si l'on pouvait alors fixer les pôles de la première sphère dans un plan parallèle à l'écliptique, le modèle produirait une oscillation du plan de l'épicycle de $\pm 2°$ autour d'un plan parallèle à l'écliptique, et il serait confondu avec le plan de l'écliptique aux n{\oe}uds. Hélas, dans le solide de la sphère englobante, il n'existe aucun plan qui reste constamment parallèle à l'écliptique quand les orbes se meuvent~! Une rotation spatiale n'agit pas seulement sur les directions, elle agit aussi sur les plans affines. {Ibn al-\v{S}\=a\d{t}ir} l'avait bien compris, car c'est là l'objet de sa seconde critique. Il s'est posé la question de savoir autour de quel plan l'épicycle doit osciller. Selon lui~: \begin{quote} ``L'observation confirme que c'est le plan de l'écliptique et non le plan de l'orbe incliné. 
La plupart des Modernes -- Na\d{s}{\=\i}r al-\d{T}\=us{\=\i}, Mu'ayyad al-`Ur\d{d}{\=\i}, Qu\d{t}b al-Sh{\=\i}r\=az{\=\i} -- ont pensé que la latitude s'annulait quand c'est le plan de l'orbe incliné: c'est impossible [...]''\footnote{\textit{Cf.} p.~\pageref{critique_lat2} \textit{supra}.} \end{quote} En effet, la solution dont nous avons tracé le graphe fig. \ref{fig055}, bien que naïve, est la seule possible, à moins de modifier radicalement le modèle d'al-\d{T}\=us{\=\i}. Pour rendre possible l'oscillation autour d'un plan parallèle à l'écliptique, il faudrait modifier les autres orbes pour garantir l'existence d'un tel plan dans le solide de l'orbe englobant. Il faudrait par exemple incliner, une fois pour toutes, le plan et l'axe de l'orbe englobant par rapport à ceux du déférent. On aurait alors la composée de rotations suivante, toujours en accord avec les notations de la p.~\pageref{notations_couple_curviligne} ci-dessus~: $$R_{P_1,\lambda_A} \,R_{P_2,\mathbf{u},2°30'} \,R_{P_3,\overline{\kappa}} \,R_{P_4,\overline{\kappa}} \,R_{P_5,-2\overline{\kappa}} \,R_{P_6,\mathbf{u},-2°30'} \,R_{P_6,\overline{\kappa}} \,T\,R_{P_{10},\mathbf{i},-4°30'}\,R_{P_{10},\overline{\alpha}}$$ où, comme ci-dessus, $$T=R_{P_7,\mathbf{j},\overline{\kappa}+50°} \,R_{P_8,\mathbf{w},-2(\overline{\kappa}+50°)} \,R_{P_9,\mathbf{v},\overline{\kappa}+50°},$$ mais ici $\mathbf{v}=\cos(2°)\mathbf{j}-\sin(2°)\mathbf{k}$ et $\mathbf{w}=\cos(1°)\mathbf{j}-\sin(1°)\mathbf{k}$. Une telle modification rend l'adéquation avec les prédictions ptoléméennes presque parfaite\footnote{Une correction semblable s'appliquerait aussi aisément au dispositif d'Ibn al-Haytham de la fig.~\ref{fig052}.}. Comme on l'a vu plus haut, {Ibn al-\v{S}\=a\d{t}ir} propose une autre solution, adaptée à son propre modèle en longitude, et consistant simplement à incliner de manière judicieuse les plans et les axes des orbes déjà utilisés. Il n'est besoin d'adjoindre aucun orbe additionnel pour le mouvement en latitude~; cinq orbes suffisent là où al-\d{T}\=us{\=\i} en utilise dix. Son modèle n'est pas tout à fait équivalent à celui d'al-\d{T}\=us{\=\i}, mais il est aussi très proche des prédictions ptoléméennes~; il est donc plus intéressant de comparer les deux modèles aux éphémérides modernes comme nous l'avons fait à la figure \ref{fig056}. Qu'en conclure~? Au moins dès le XIème siècle avec Ibn al-Haytham, les astronomes avaient engagé l'étude des composées de rotations spatiales. {Ibn al-\v{S}\=a\d{t}ir} a assimilé les travaux concernant les composées de rotations homocentriques à axes non parallèles (les sphères homocentriques utilisées pour décrire les latitudes), aussi bien que les travaux de l'école de Maragha décrivant les mouvements en longitude par des composées de rotations à axes parallèles. C'est peut-être une meilleure appréhension du concept d'orbe solide, et une compréhension plus fine de l'action des rotations spatiales sur les plans affines, qui lui permettent d'élucider les défauts des solutions d'Ibn al-Haytham et d'al-\d{T}\=us{\=\i}, et d'y remédier. \begin{figure} \begin{center} \input{fig056.pdf_tex} \end{center} \caption{\label{fig056}Comparaison des modèles en latitude pour Saturne} \end{figure} Un indice montre que la recherche d'{Ibn al-\v{S}\=a\d{t}ir} s'appuie bien sur les travaux de ses prédécesseurs. 
Après avoir décrit son modèle pour les latitudes des planètes supérieures, {Ibn al-\v{S}\=a\d{t}ir} discute qualitativement les effets produits par les inclinaisons des axes des orbes dans la figure initiale, sur les inclinaisons aux n{\oe}uds et aux limites des latitudes. Pour ce faire, il est conduit à identifier $P_3$ et $P_4$ dans ses explications, et sur une de ses figures. On pourra comparer la figure originale p.~\pageref{lat_inf_profil} à notre fig.~\ref{fig041} p.~\pageref{fig041} où nous distinguons $P_3$ et $P_4$. Eu égard au fait que $P_3P_4$ est petit devant les rayons de l'épicycle et de l'excentrique, cette approximation peut se justifier. Mais il s'agit sans doute d'un choix didactique ou d'un résidu d'une réflexion antérieure sur un modèle plus simple. L'usage de l'approximation $P_3\simeq P_4$ montre que les composées de rotations homocentriques avaient déjà valeur de paradigme dans les théories planétaires. C'est en inclinant les axes de plusieurs orbes non homocentriques qu'{Ibn al-\v{S}\=a\d{t}ir} semble accomplir un geste nouveau. \label{end_saturne} \paragraph{Vénus~: figure initiale} La figure initiale pour Vénus est analogue à celle de Saturne (\textit{cf.} fig.~\ref{fig040} p.~\pageref{fig040}), sauf quant aux rayons des orbes et à la position de la ligne des n{\oe}uds. La ligne des n{\oe}uds, à l'intersection des plans de l'orbe incliné et du parécliptique, est ici orientée dans la direction du vecteur $\mathbf{i}$, le n{\oe}ud ascendant étant du même côté que $\mathbf{i}$. Posons $\overrightarrow{OP_3}=60\,\mathbf{j}$, alors $$\overrightarrow{P_3P_4}=1;41\,\mathbf{j},\qquad \overrightarrow{P_4P_5}=-0;26\,\mathbf{j},\qquad \overrightarrow{P_5P}=43;33\,\mathbf{j}.$$ \paragraph{Vénus~: transformations géométriques} Les rotations utilisées pour modéliser le mouvement de Vénus en longitude sont analogues à celles des planètes supérieures. En revanche la théorie des latitudes exposée dans le chapitre 25 de la \textit{Nih\=aya} est assez différente. 
Voici la liste complète des rotations utilisées dans les chapitres 19 (pour les longitudes) et 25 (pour les latitudes)~: $$R_{P_5,\overline{\alpha}-\overline{\kappa}},\quad R_{P_5,\mathbf{j},0°30'}, \quad R_{P_5,\mathbf{i},0°5'},$$ $$R_{P_4,2\overline{\kappa}},\quad R_{P_4,\mathbf{j},3°}, \quad R_{P_4,\mathbf{i},-0°5'},$$ $$R_{P_3,-\overline{\kappa}},\quad R_{P_2,\overline{\kappa}},\quad R_{P_2,\mathbf{i},0°10'},\quad R_{P_1,\lambda_A}.$$ L'image du point $P_3$ entraîné par les mouvements de l'orbe parécliptique et de l'orbe incliné est~: $$R_{P_1,\lambda_A}\,R_{P_2,\mathbf{i},0°10'} \,R_{P_2,\overline{\kappa}}(P_3).$$ Quant au point $P_4$, il est aussi entraîné par le mouvement de l'orbe déférent et devient~: $$R_{P_1,\lambda_A}\,R_{P_2,\mathbf{i},0°10'} \,R_{P_2,\overline{\kappa}}\,R_{P_3,-\overline{\kappa}}(P_4).$$ Le point $P_5$ est aussi entraîné par le mouvement de l'orbe rotateur et devient~: $$R_{P_1,\lambda_A} \,R_{P_2,\mathbf{i},0°10'} \,R_{P_2,\overline{\kappa}} \,R_{P_3,-\overline{\kappa}} \,R_{P_4,\mathbf{i},-0°5'} \,R_{P_4,\mathbf{j},3°} \,R_{P_4,2\overline{\kappa}}(P_5).$$ Enfin le point $P$, aussi entraîné par l'orbe de l'épicycle, devient~: $$R_{P_1,\lambda_A} \,R_{P_2,\mathbf{i},0°10'} \,R_{P_2,\overline{\kappa}} \,R_{P_3,-\overline{\kappa}} \,R_{P_4,\mathbf{i},-0°5'} \,R_{P_4,\mathbf{j},3°} \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\mathbf{i},0°5'} \,R_{P_5,\mathbf{j},0°30'} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}(P).$$ La figure \ref{fig057} montre l'effet des rotations inclinant les plans des orbes, pour trois positions. On a projeté orthogonalement sur le plan du parécliptique (figure du bas), sur un plan orthogonal à la ligne des n{\oe}uds (figure du haut), et sur un plan orthogonal au plan du parécliptique mais parallèle à la ligne des n{\oe}uds (figure de droite). Sur la figure du bas, on a seulement représenté la ceinture de l'épicycle~; dans les deux autres figures, le petit segment en trait plein de centre $P_4$ est la ceinture du rotateur. Sur la figure de droite, on a confondu $P_4$ et $P_5$, par approximation. La ceinture du déférent reste toujours dans le plan de l'orbe incliné. \begin{figure} \begin{center} \footnotesize\input{fig057.pdf_tex} \end{center} \caption{\label{fig057}Vénus, trois positions~: projections orthogonales} \end{figure} \paragraph{Vénus~: trajectoire paramétrée} Le modèle de Vénus est couplé au modèle du Soleil en vertu de la relation suivante~: $$\overline{\kappa}=\overline{\lambda}_{\astrosun}-\lambda_A.$$ La trajectoire dans l'ensemble des valeurs des paramètres est~: $$\lambda_A=\dot{\lambda}_At+\lambda_A(0),$$ $$\overline{\lambda}_{\astrosun}=\dot{\overline{\lambda}}_{\astrosun}t+\overline{\lambda}_{\astrosun}(0),$$ $$\overline{\alpha}=\dot{\overline{\alpha}}t+\overline{\alpha}(0).$$ On a, d'après {Ibn al-\v{S}\=a\d{t}ir}~: $$\overline{\lambda}_{\astrosun}(0)=280;9,0,\quad\dot{\overline{\lambda}}_{\astrosun}=359;45,40\text{ par année persane},$$ $$\lambda_A(0)=77;52,\quad\dot{\lambda}_A=0;1\text{ par année persane},$$ $$\overline{\alpha}(0)=320;50,19,\quad\dot{\overline{\alpha}}=225;1,48,41\text{ par année persane}.$$ \paragraph{Transformations planes et équations de Vénus} Comme nous l'avons expliqué pour Saturne, on peut réécrire à droite toutes les rotations dont l'axe est dans la direction du vecteur $\mathbf{k}$, en appliquant des relations de commutation.
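
Au passage, ces valeurs se traduisent immédiatement en une petite fonction de calcul des paramètres moyens~; esquisse en Python (noms de notre fait, temps compté en années persanes depuis l'\'Epoque).

\begin{verbatim}
# Esquisse : parametres moyens de Venus a l'instant t (annees persanes),
# avec kappa_barre = lambda_barre_Soleil - lambda_A.
def sexa(*chiffres):
    return sum(c / 60**i for i, c in enumerate(chiffres))

def venus_moyens(t):
    lam_soleil = (sexa(280, 9, 0) + t * sexa(359, 45, 40)) % 360
    lam_A      = (sexa(77, 52) + t * sexa(0, 1)) % 360
    alpha      = (sexa(320, 50, 19) + t * sexa(225, 1, 48, 41)) % 360
    return {"lambda_A": lam_A, "kappa_barre": (lam_soleil - lam_A) % 360,
            "alpha_barre": alpha}

print(venus_moyens(0))   # a l'Epoque : kappa_barre ~ 202;17
\end{verbatim}

Revenons aux transformations planes.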
Il existe donc une composée de rotations $M$ telle que l'image du point $P$ soit~: $$M\,R_{P_2,\mathbf{a},0°10'}(P')$$ où $\mathbf{a}$ est l'image de $\mathbf{i}$ par la rotation $R_{P_1,\lambda_A}$, et où $$P'=R_{P_1,\lambda_A} \,R_{P_2,\overline{\kappa}} \,R_{P_3,-\overline{\kappa}} \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}(P).$$ On introduit de même les points $P'_3$, $P'_4$, $P'_5$ suivants (\textit{cf.} fig. \ref{fig058}) exactement comme pour Saturne~: $$P_3'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}(P_3),$$ $$P_4'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}\,R_{P_3,-\overline{\kappa}}(P_4),$$ $$P_5'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\,R_{P_4,2\overline{\kappa}}(P_5).$$ Les équations pour le mouvement en longitude de Vénus sont donc identiques à celles des planètes supérieures (\textit{cf.} p.~\pageref{equ_sat} ci-dessus). \begin{figure} \begin{center} \footnotesize\input{fig058.pdf_tex} \end{center} \caption{\label{fig058}Les orbes de Vénus à un instant $t$} \end{figure} \paragraph{Vénus~: trigonométrie sphérique} On va à présent calculer les coordonnées du point $R_{P_2,\mathbf{a},0°10'}(P')$ par rapport à l'écliptique. L'angle formé entre le vecteur $\mathbf{a}$ et la direction du point $P'$ vaut (modulo $360°$)~: $$(\mathbf{a},\overrightarrow{OP'})=\overline{\kappa}+c_1+c_2+90°=\overline{\lambda}-\lambda_{\ascnode}+c_1+c_2=\lambda-\lambda_{\ascnode}$$ où $\lambda=\overline{\lambda}+c_1+c_2$ et $\overline{\lambda}=\overline{\kappa}+\lambda_A$, et où $\lambda_{\ascnode}=\lambda_A-90°$ est la longitude du n{\oe}ud ascendant sur l'écliptique, toujours situé $90°$ avant l'Apogée. Comme on l'a montré pour Saturne, on trouve que la longitude du point $R_{P_2,\mathbf{a},0°10'}(P')$ par rapport à l'écliptique, en prenant la direction du point vernal, c'est-à-dire $\mathbf{j}$, comme origine, vaut~: $$\lambda+e_n(\lambda-\lambda_{\ascnode})$$ où l'``équation du déplacement'' $e_n(x)$ est définie comme suit sur l'intervalle $\lbrack -90°,270°\rbrack$, et ailleurs par périodicité~: $$e_n(x)=\left\lbrace\begin{array}{l} \arctan(\cos(0°10')\times\tan x)-x,\text{ si }x\in\rbrack -90°,90°\lbrack\\ 180°+\arctan(\cos(0°10')\times\tan x)-x,\text{ si }x\in\rbrack 90°,270°\lbrack\\ 0°,\text{ si }x=\pm 90°. \end{array}\right.$$ Sa latitude est~: $$\arcsin(\sin(0°10')\times\sin(\lambda-\lambda_{\ascnode})).$$ \paragraph{Vénus~: les inclinaisons des petits orbes} Les rotations qui composent $M$ ont leurs axes contenus dans le plan de l'orbe incliné et sont d'angles petits ($3°$ au plus). Elles auront peu d'effet\footnote{\textit{Cf.} \cite{penchevre2016} où nous montrons que l'erreur commise en les négligeant est inférieure à $0°5'$.}. Dans le chapitre 25, {Ibn al-\v{S}\=a\d{t}ir} se contente d'une description qualitative de l'effet en latitude de $M$. \paragraph{Vénus~: second modèle} Le modèle que nous avons restitué ci-dessus est le premier modèle décrit par {Ibn al-\v{S}\=a\d{t}ir} dans le chapitre 25. On voit que les inclinaisons des plans des orbes sont conçues dans l'idée de reproduire les effets décrits dans l'\textit{Almageste}. L'inclinaison de l'``orbe incliné'', variable dans l'\textit{Almageste} mais constante chez {Ibn al-\v{S}\=a\d{t}ir}, minime, n'a que peu d'influence sur le résultat final. {Ibn al-\v{S}\=a\d{t}ir} envisage ensuite un second modèle plus proche de la théorie exposée par Ptolémée dans ses \textit{Hypothèses planétaires}. 
La composée de rotations utilisée est alors la suivante~: $$R_{P_1,\lambda_A} \,R_{P_2,\mathbf{i},0°10'} \,R_{P_2,\overline{\kappa}} \,R_{P_3,-\overline{\kappa}} \,R_{P_4,\mathbf{i},-0°5'} \,R_{P_4,\mathbf{j},3°30'} \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\mathbf{i},0°5'} \,R_{P_5,\overline{\alpha}-\overline{\kappa}}.$$ \paragraph{Comparaison} \`A la figure \ref{fig059}, nous donnons les latitudes de Vénus pendant deux ans, à partir de l'\'Epoque choisie par {Ibn al-\v{S}\=a\d{t}ir}, calculées de quatre manières différentes~: (1) par l'IMCCE de l'Observatoire de Paris, (2) en suivant la méthode strictement ptoléméenne de l'\textit{Almageste} mais avec les paramètres d'{Ibn al-\v{S}\=a\d{t}ir} à l'\'Epoque et ses mouvements moyens, (3) au moyen du premier modèle d'{Ibn al-\v{S}\=a\d{t}ir} censé reproduire les valeurs de l'\textit{Almageste}, et (4) au moyen du second modèle d'{Ibn al-\v{S}\=a\d{t}ir} imitant les \textit{Hypothèses planétaires}. \begin{figure} \begin{center} \input{fig059.pdf_tex} \end{center} \caption{\label{fig059}Vénus~: comparaison des modèles pour les latitudes} \end{figure} \paragraph{Les orbes solides de Vénus} La figure est la même que pour les planètes supérieures (figure \ref{fig049}). \begin{align*} \text{rayon du globe planétaire}&=r\\ \text{rayon de l'orbe de l'épicycle}&=43;33+r\\ \text{rayon du rotateur}&=0;26+43;33+r=43;59+r\\ \text{rayon du déférent}&=1;41+0;26+43;33+r=45;40+r\\ \text{rayon extérieur de l'orbe incliné}&=60+1;41+0;26+43;33+r=105;40+r\\ \text{rayon intérieur de l'orbe incliné}&=60-(1;41+0;26+43;33+r)=14;20-r\\ \text{épaisseur du parécliptique}&=0;20 \end{align*} \paragraph{Le mouvement en longitude de Vénus selon al-\d{T}\=us{\=\i}} Le modèle de Na\d{s}ir al-D{\=\i}n al-\d{T}\=us{\=\i} pour les longitudes de Vénus est aussi analogue à son modèle pour les planètes supérieures\footnote{\textit{Cf.} ci-dessus fig.~\ref{fig046}. Quitte à bien choisir les paramètres, l'équivalence cinématique entre les modèles d'al-\d{T}\=us{\=\i}, al-`Ur\d{d}{\=\i}, al-Sh{\=\i}r\=az{\=\i} et {Ibn al-\v{S}\=a\d{t}ir}, démontrée p.~\pageref{equiv_sup} \textit{supra}, vaut aussi pour Vénus.}~: sept orbes, dont les centres sont déterminés par les relations suivantes\footnote{\textit{Cf.} \cite{altusi1993} p.~457.}, $$\overrightarrow{P_7P}=43;10\,\mathbf{j},\quad\overrightarrow{P_5P_6}=\overrightarrow{P_4P_5}=-0;37,30\,\mathbf{j},\text{ d'où }\overrightarrow{P_4P_6}=-1;15\,\mathbf{j},$$ $$\overrightarrow{P_3P_4}=60\,\mathbf{j},\qquad\overrightarrow{P_2P_3}=2;30\,\mathbf{j}.$$ Al-\d{T}\=us{\=\i} choisit ces paramètres de façon à reproduire la trajectoire prédite par le modèle de l'\textit{Almageste}. Ainsi $\vert P_2P_3\vert=2e$ et $\vert P_4P_6\vert=e$ où $e=1;15$ est l'excentricité chez Ptolémée, et al-\d{T}\=us{\=\i} garde le rayon de l'épicycle $43;10$ utilisé par Ptolémée. 
Au contraire, le modèle d'al-\d{T}\=us{\=\i} serait équivalent au modèle d'{Ibn al-\v{S}\=a\d{t}ir} si l'on avait~: $$\overrightarrow{P_7P}=43;33\,\mathbf{j},\quad\overrightarrow{P_5P_6}=\overrightarrow{P_4P_5}=-0;26\,\mathbf{j},\text{ d'où }\overrightarrow{P_4P_6}=-0;52\,\mathbf{j},$$ $$\overrightarrow{P_3P_4}=60\,\mathbf{j},\qquad\overrightarrow{P_2P_3}=2;7\,\mathbf{j}.$$ Sous cette hypothèse, $\vert P_2P_3\vert-\vert P_4P_6\vert=1;15$ serait bien égal à l'excentricité de Vénus selon Ptolémée, mais $\dfrac{\vert P_2P_3\vert}{2}=1;3,30$ serait égal à la demi-excentricité du Soleil \emph{selon {Ibn al-\v{S}\=a\d{t}ir}}\footnote{Rappelons en effet que dans le modèle du Soleil d'{Ibn al-\v{S}\=a\d{t}ir}, si l'on suit les notations de la fig.~\ref{fig003} p.~\pageref{fig003} \textit{supra}, le rayon de l'épicycle apparent minimal est $\vert P_3P_4\vert-\vert P_4P\vert=2;7$.}. Al-\d{T}\=us{\=\i} lui-même indiquait que l'excentricité de Vénus est la moitié de celle du Soleil, et que l'excentricité du Soleil valait $2;30$ pour Ptolémée mais environ $2;5$ pour des observateurs plus récents\footnote{\textit{Cf.} \cite{altusi1993} p.~146 et 182. Dans \cite{ghanem1976} p.~64, Kennedy donne les valeurs adoptées par trois autres savants pour l'excentrité de Vénus~: al-B{\=\i}r\=un{\=\i} $1;2,30$, al-Zarqulla $1;3,22$, al-K\=ash{\=\i} $1;3$. Elles sont toutes proches des $1;3,30$ d'{Ibn al-\v{S}\=a\d{t}ir}.}. \begin{figure} \begin{center} \input{fig060.pdf_tex} \end{center} \caption{\label{fig060}Erreur en longitude des modèles d'{Ibn al-\v{S}\=a\d{t}ir} et d'al-\d{T}\=us{\=\i} pour Vénus} \end{figure} \paragraph{Comparaison} Pour mieux juger des paramètres adoptés par {Ibn al-\v{S}\=a\d{t}ir} et al-\d{T}\=us{\=\i}, on donne fig.~\ref{fig060} les longitudes de Vénus pendant cinq ans, à partir de l'\'Epoque choisie par {Ibn al-\v{S}\=a\d{t}ir}~; comme l'équation de Vénus a une amplitude de l'ordre de $50°$ en coordonnées géocentriques, on a préféré tracer l'\emph{erreur} en degrés de longitude par rapport à la longitude prédite par le serveur d'éphémérides de l'IMCCE~: (1) en adoptant le modèle d'al-\d{T}\=us{\=\i} avec $\vert P_2P_3\vert=2;30$ et $\vert P_7P\vert=43;10$ et les mouvements moyens d'{Ibn al-\v{S}\=a\d{t}ir}, (2) en adoptant le modèle d'{Ibn al-\v{S}\=a\d{t}ir}. Dans l'absolu, le choix d'{Ibn al-\v{S}\=a\d{t}ir} est un peu meilleur. Celui d'al-\d{T}\=us{\=\i} reproduit de très près les prédictions ptoléméennes de l'\textit{Almageste}\footnote{Nous avons omis la courbe représentant la théorie de l'\textit{Almageste}, car à cette échelle elle est indiscernable de la courbe représentant le modèle d'al-\d{T}\=us{\=\i}.}. \paragraph{Mercure selon {Ibn al-\v{S}\=a\d{t}ir}}\label{begin_mercure} La théorie d'{Ibn al-\v{S}\=a\d{t}ir} pour Mercure ressemble à celle de Vénus, à trois différences près~: -- Dans la figure initiale, la position relative de l'épicycle au sein du rotateur n'est pas la même~: pour Mercure, $\overrightarrow{P_4P_5}$ est orienté dans le même sens que $\mathbf{j}$. Cette particularité n'est d'ailleurs pas indiquée explicitement dans le texte du chapitre 21~; il faut regarder les figures pour le voir, et bien des lecteurs ont dû s'y méprendre. -- Il y a deux orbes additionnels, l'\emph{orbe englobant} et l'\emph{orbe protecteur}, portés par l'orbe de l'épicycle, agissant comme un couple de \d{T}\=us{\=\i}, et dont le rôle est de faire varier la distance entre l'astre et le centre de l'épicycle (on notera $P_6$ et $P_7$ les centres de ces deux orbes additionnels). 
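
Les deux identités sexagésimales utilisées ci-dessus se vérifient immédiatement~; esquisse~:

\begin{verbatim}
# Esquisse : 2;7 - 0;52 = 1;15  et  (2;7)/2 = 1;3,30.
def sexa(*chiffres):
    return sum(c / 60**i for i, c in enumerate(chiffres))

assert abs(sexa(2, 7) - sexa(0, 52) - sexa(1, 15)) < 1e-12
assert abs(sexa(2, 7) / 2 - sexa(1, 3, 30)) < 1e-12
print("identites verifiees")
\end{verbatim}
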
-- Dans le chapitre 25 sur les latitudes, les inclinaisons des plans des orbes sont de signes contraires à ceux de Vénus. \begin{figure} \begin{center} \input{fig061.pdf_tex} \end{center} \caption{\label{fig061}Mercure~: figure initiale} \end{figure} Sur la figure initiale fig.~\ref{fig061}, les centres des orbes sont définis par~: $$O=P_1=P_2,\quad \overrightarrow{P_2P_3}=60\,\mathbf{j},\quad \overrightarrow{P_3P_4}=4;5\,\mathbf{j},$$ $$\overrightarrow{P_4P_5}=0;55\,\mathbf{j},\quad \overrightarrow{P_5P_6}=22;46\,\mathbf{j},\quad \overrightarrow{P_6P_7}=\overrightarrow{P_7P}=-0;33\,\mathbf{j}.$$ Comme pour Vénus, {Ibn al-\v{S}\=a\d{t}ir} présente deux modèles différents pour Mercure dans le chapitre 25. Le premier, censé reproduire les mouvements en latitude décrits dans l'\textit{Almageste}, consiste en la composée de rotations suivante~: $$\hspace{-2cm}R_{P_1,\lambda_A} \,R_{P_2,\mathbf{i},-0°10'} \,R_{P_2,\overline{\kappa}} \,R_{P_3,-\overline{\kappa}} \,R_{P_4,\mathbf{i},0°5'} \,R_{P_4,\mathbf{j},-6°36'30''} \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\mathbf{i},-0°5'} \,R_{P_5,\mathbf{j},-0°22'30''} \,R_{P_5,\overline{\alpha}-\overline{\kappa}} \,R_{P_6,2\overline{\kappa}} \,R_{P_7,-4\overline{\kappa}}.$$ On a, d'après {Ibn al-\v{S}\=a\d{t}ir}~: $$\overline{\lambda}_{\astrosun}(0)=280;9,0,\quad\dot{\overline{\lambda}}_{\astrosun}=359;45,40\text{ par année persane},$$ $$\lambda_A(0)=212;52,\quad\dot{\lambda}_A=0;1\text{ par année persane},$$ $$\overline{\alpha}(0)=154;2,\quad\dot{\overline{\alpha}}=1133;57,1\text{ par année persane}.$$ Kennedy\footnote{\textit{Cf.} \cite{ghanem1976} p.~65.} a indiqué deux incohérences, concernant les rayons et les distances, que nous croyons pouvoir résoudre au vu de l'ensemble~: -- Le ``rayon de l'épicycle apparent'' minimal est $23;52-4\times 0;33=21;40$, alors que les manuscrits indiquent $21;20$ dans le chapitre 21. La ressemblance entre \textit{thulth} ($0;20$) et \textit{thulthay} ($0;40$) a probablement induit un copiste en erreur. De plus, au chapitre 23, {Ibn al-\v{S}\=a\d{t}ir} écrit bien $21;40$. Nous avons donc opté pour $21;40$, et nous avons laissé $21;20$ dans l'apparat critique. -- Le texte donne $\vert P_4P_5\vert=0;50$ et non $0;55$. Les valeurs annoncées par {Ibn al-\v{S}\=a\d{t}ir} pour les distances minimale et maximale de Mercure au centre du Monde quand $\overline{\kappa}=0°$ ou $180°$ présentent une erreur de $0;5$. Kennedy indique que cette erreur disparaît si l'on suppose $\vert P_4P_5\vert=0;55$ ou bien $\vert P_3P_4\vert=4;10$. En fait, les calculs faits par {Ibn al-\v{S}\=a\d{t}ir} concernant les rayons des orbes solides révèlent qu'il avait lui-même adopté $\vert P_4P_5\vert=0;55$. \paragraph{\'Equations de Mercure et formule d'interpolation}\label{equ_mercure} On déduit les équations de Mercure comme on l'a fait pour les planètes supérieures et Vénus~; il faut seulement penser au sens de $\overrightarrow{P_4P_5}$, et surtout remplacer $P_5P$ par le ``rayon de l'épicycle apparent'' $P'_5P'$ que nous calculerons ensuite~: $$c_1=-\arcsin\left(\frac{P_3P_4\sin\overline{\kappa}-P_4P_5\sin\overline{\kappa}}{OP_5'}\right)$$ où $OP_5'=\sqrt{(P_3P_4\sin\overline{\kappa}-P_4P_5\sin\overline{\kappa})^2+(OP_3+P_3P_4\cos\overline{\kappa}+P_4P_5\cos\overline{\kappa})^2}$. $$c_2=\arcsin\left(\frac{P_5'P'\sin\alpha}{OP'}\right)$$ où $OP'=\sqrt{(P_5'P'\sin\alpha)^2+(OP_5'+P_5'P'\cos\alpha)^2}$. 
La formule d'interpolation est~: $$c_2(\overline{\kappa},\alpha)\simeq c_2(0,\alpha)+\chi(\overline{\kappa})(c_2(180°,\alpha)-c_2(0,\alpha)),$$ $$\chi(\overline{\kappa})=\frac{\max\vert c_2(\overline{\kappa},\cdot)\vert-\max\vert c_2(0,\cdot)\vert}{\max\vert c_2(180°,\cdot)\vert-\max\vert c_2(0,\cdot)\vert},$$ $$\max\vert c_2(\overline{\kappa},\cdot)\vert=\arcsin\frac{P_5'P'}{OP_5'}.$$ Pour calculer $P_5'P'$, on rappelle que les points $P_5'$ et $P'$ sont définis par les transformations planes à appliquer aux points $P_5$ et $P$~: $$P_5'=R_{P_1,\lambda_A}\,R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\,R_{P_4,2\overline{\kappa}}(P_5),$$ $$P'=R_{P_1,\lambda_A} \,R_{P_2,\overline{\kappa}} \,R_{P_3,-\overline{\kappa}} \,R_{P_4,2\overline{\kappa}} \,R_{P_5,\overline{\alpha}-\overline{\kappa}} \,R_{P_6,2\overline{\kappa}} \,R_{P_7,-4\overline{\kappa}}(P).$$ On a donc \begin{align*} \overrightarrow{P_5'P'}&=\overrightarrow{P_5R_{P_5,\overline{\alpha}-\overline{\kappa}}R_{P_6,2\overline{\kappa}}R_{P_7,-4\overline{\kappa}}(P)}\\ &=R_{\overline{\alpha}-\overline{\kappa}}(\overrightarrow{P_5R_{P_6,2\overline{\kappa}}R_{P_7,-4\overline{\kappa}}(P)}) \end{align*} La remarque p.~\pageref{couple_transl} montre que le vecteur $\overrightarrow{P_5R_{P_6,2\overline{\kappa}}R_{P_7,-4\overline{\kappa}}(P)}$ est colinéaire au vecteur $\mathbf{j}$ et que $$R_{P_6,2\overline{\kappa}}\,R_{P_7,-4\overline{\kappa}}(P)=t_{\overrightarrow{P_7R_{P_6,2\overline{\kappa}}(P_7)}}\,t_{\overrightarrow{PR_{P_7,-2\overline{\kappa}}(P)}}(P).$$ On a donc~: \begin{align*} P_5'P'&=\left<\overrightarrow{P_5R_{P_6,2\overline{\kappa}}R_{P_7,-4\overline{\kappa}}(P)},\mathbf{j}\right>\\ &=\left<\overrightarrow{P_5P}+\overrightarrow{PR_{P_7,-2\overline{\kappa}}(P)}+\overrightarrow{P_7R_{P_6,2\overline{\kappa}}(P_7)},\mathbf{j}\right>\\ &=\left<\overrightarrow{P_5P_6}+R_{-2\overline{\kappa}}(\overrightarrow{P_7P})+R_{2\overline{\kappa}}(\overrightarrow{P_6P_7}),\mathbf{j}\right>\\ &=P_5P_6-(PP_7+P_7P_6)\cos 2\overline{\kappa} \end{align*} Finalement~: $$P_5'P'=P_5P_6-PP_6\cos 2\overline{\kappa}.$$ \paragraph{Les orbes solides de Mercure} \textit{Cf.} figure \ref{fig050}. \begin{align*} \text{rayon du globe planétaire}&=r\\ \text{rayon du protecteur}&=0;33+r\\ \text{rayon de l'orbe englobant}&=0;33+0;33+r=1;6+r\\ \text{rayon de l'orbe de l'épicycle}&=22;46+1;6+r=23;52+r\\ \text{rayon du rotateur}&=0;55+23;52+r=24;47+r\\ \text{rayon du déférent}&=4;5+24;47+r=28;52+r\\ \text{rayon extérieur de l'orbe incliné}&=60+28;52+r=88;52+r\\ \text{rayon intérieur de l'orbe incliné}&=60-(28;52+r)=31;8-r\\ \text{épaisseur du parécliptique}&=0;8 \end{align*} \begin{figure} \begin{center} \input{fig050.pdf_tex} \end{center} \caption{\label{fig050}Mercure, orbes solides} \end{figure} \paragraph{Comparaisons} \`A la figure \ref{fig062} nous avons tracé l'équation de Mercure sur un an à partir de l'\'Epoque de référence choisie par {Ibn al-\v{S}\=a\d{t}ir}, (1) calculée au moyen des formules ci-dessus, et (2) calculée au moyen des éphémérides de l'IMCCE. Nous n'avons pas tracé la courbe representant le modèle de l'\textit{Almageste}, car à cette échelle cette courbe est presque confondue avec celle d'{Ibn al-\v{S}\=a\d{t}ir}. Nous avons repéré sur l'axe supérieur les syzygies et les quadratures au moyen des lettres <<~a~>>, <<~p~>>, <<~q~>>, resp. apogée, périgée, quadrature. 
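
Au passage, les formules qui précèdent se vérifient numériquement. Esquisse en Python (l'arc sinus de la composante transverse y est écrit au moyen de \texttt{atan2}, ce qui revient au même)~: on retrouve les valeurs $21;40$ et $23;52$ du rayon de l'épicycle apparent, et une élongation maximale d'environ $19;28$ à l'apogée.

\begin{verbatim}
# Esquisse : equation du centre, rayon de l'epicycle apparent et elongation
# maximale de Mercure, d'apres les formules ci-dessus (angles en degres).
from math import asin, atan2, cos, degrees, radians, sin, sqrt

def sexa(*chiffres):
    return sum(c / 60**i for i, c in enumerate(chiffres))

OP3, P3P4, P4P5 = 60.0, sexa(4, 5), sexa(0, 55)
P5P6, PP6 = sexa(22, 46), sexa(1, 6)

def rayon_apparent(kappa):            # P5'P' = P5P6 - PP6 cos(2 kappa)
    return P5P6 - PP6 * cos(radians(2 * kappa))

def OP5p(kappa):
    kr = radians(kappa)
    return sqrt(((P3P4 - P4P5) * sin(kr)) ** 2
                + (OP3 + (P3P4 + P4P5) * cos(kr)) ** 2)

def c1(kappa):
    kr = radians(kappa)
    return -degrees(atan2((P3P4 - P4P5) * sin(kr),
                          OP3 + (P3P4 + P4P5) * cos(kr)))

def elongation_max(kappa):            # max |c2(kappa, .)| = arcsin(P5'P' / OP5')
    return degrees(asin(rayon_apparent(kappa) / OP5p(kappa)))

print(rayon_apparent(0), rayon_apparent(90))   # 21;40 et 23;52 en decimal
print(elongation_max(0), c1(90))               # environ 19;28 et -3;1
\end{verbatim}
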
Bien que le résultat soit assez bon aux syzygies et aux quadratures, on observe une erreur considérable près des octants~; la figure \ref{fig063} le montre plus clairement, et elle permet de vérifier que le modèle d'{Ibn al-\v{S}\=a\d{t}ir} n'est pas l'équivalent exact du modèle de l'\textit{Almageste}. Enfin, les latitudes de Mercure sont sur la figure \ref{fig064}. \begin{figure} \begin{center} \input{fig062.pdf_tex} \end{center} \caption{\label{fig062} Les longitudes de Mercure} \end{figure} \begin{figure} \begin{center} \input{fig063.pdf_tex} \end{center} \caption{\label{fig063} Erreur en longitude des modèles de Ptolémée et d'{Ibn al-\v{S}\=a\d{t}ir} pour Mercure} \end{figure} \begin{figure} \begin{center} \input{fig064.pdf_tex} \end{center} \caption{\label{fig064} Les latitudes de Mercure} \end{figure} \paragraph{Le modèle de Qu\d{t}b al-D{\=\i}n al-Sh{\=\i}r\=az{\=\i} pour Mercure} Le modèle de Sh{\=\i}r\=az{\=\i} pour Mercure n'est pas équivalent à celui d'{Ibn al-\v{S}\=a\d{t}ir}. Il reproduit de manière assez précise les prédictions de l'\textit{Almageste}, mieux encore qu'{Ibn al-\v{S}\=a\d{t}ir}. Posons $c=3$~; les centres des orbes dans la figure initiale sont définis par\footnote{Le modèle est décrit par Kennedy dans \cite{ghanem1976} p.~101-103.}~: $$\overrightarrow{P_1P_2}=6\,\mathbf{j},\quad \overrightarrow{P_2P_3}=60\,\mathbf{j},\quad \overrightarrow{P_3P_4}=\overrightarrow{P_4P_5}=-\frac{c}{2}\,\mathbf{j}=-1;30\,\mathbf{j},$$ $$\overrightarrow{P_5P_6}=\overrightarrow{P_6P_7}=\frac{c}{2}\,\mathbf{j}=1;30\,\mathbf{j},\quad \overrightarrow{P_7P_8}=c\,\mathbf{j}=3\,\mathbf{j},\quad \overrightarrow{P_8P}=22;30\,\mathbf{j}.$$ On a un déférent excentrique de centre $P_2$, un épicycle de centre $P_8$, et les deux triplets d'orbes centrés en $(P_3,P_4,P_5)$ et $(P_5,P_6,P_7)$ constituent deux couples de \d{T}\=us{\=\i}. Les rotations engendrant les mouvements en longitude sont~: $$R_{P_1,\lambda_A}\, R_{P_2,\overline{\kappa}}\, \overbrace{R_{P_3,-\overline{\kappa}}\, R_{P_4,2\overline{\kappa}}\, R_{P_5,-\overline{\kappa}}}^{\text{premier couple}}\, \overbrace{R_{P_5,2\overline{\kappa}}\, R_{P_6,-4\overline{\kappa}}\, R_{P_7,2\overline{\kappa}}}^{\text{deuxième couple}}\, R_{P_7,\overline{\kappa}}\, R_{P_8,\overline{\alpha}-\overline{\kappa}}$$ $$=R_{P_1,\lambda_A}\, R_{P_2,\overline{\kappa}}\, R_{P_3,-\overline{\kappa}}\, R_{P_4,2\overline{\kappa}}\, R_{P_5,\overline{\kappa}}\, R_{P_6,-4\overline{\kappa}}\, R_{P_7,3\overline{\kappa}}\, R_{P_8,\overline{\alpha}-\overline{\kappa}}.$$ \paragraph{Calibrage du modèle d'{Ibn al-\v{S}\=a\d{t}ir} pour Mercure} En 1974, Willy Hartner \cite{hartner1974} a étudié de près les écarts entre les prédictions ptoléméennes et celles du modèle d'{Ibn al-\v{S}\=a\d{t}ir} dans les syzygies et les quadratures, et il en a tiré quelques indices concernant la genèse de ce modèle. Nous reprenons ici ses analyses, en formulant d'abord les conditions nécessaires et suffisantes pour qu'un modèle de même forme que celui d'{Ibn al-\v{S}\=a\d{t}ir} reproduise exactement les élongations maximales observées par Ptolémée dans les syzygies et les quadratures. Dans le modèle de Ptolémée, notons $R$ le rayon du déférent excentrique portant l'épicycle~: $R$ est la distance entre le centre de l'épicycle et le centre du déférent. Notons $e$ l'excentricité. 
Le rapport entre les distances Terre--centre de l'épicycle quand $\bar{\kappa}=0°$ et $\bar{\kappa}=180°$ est alors\footnote{\textit{Cf.} \cite{pedersen1974} pour une description de la théorie de Ptolémée.} $$\frac{R+3e}{R-e}.$$ Dans le modèle d'{Ibn al-\v{S}\=a\d{t}ir}, ce rapport est $$\frac{OP_3+(P_3P_4+P_4P_5)}{OP_3-(P_3P_4+P_4P_5)}.$$ Le rayon de l'épicycle apparent d'{Ibn al-\v{S}\=a\d{t}ir} étant le même en $\bar{\kappa}=0°$ et en $\bar{\kappa}=180°$, le modèle d'{Ibn al-\v{S}\=a\d{t}ir} reproduira les élongations maximales ($\max\vert c_2(\bar{\kappa},.)\vert$) du modèle de Ptolémée à l'Apogée et au périgée si et seulement si ces deux rapports sont égaux. Dans les quadratures, Ptolémée avait observé que l'<<~équation du centre~>> est $c_1(90°)=3°$. Dans le modèle de Ptolémée, l'équation du centre se calcule ainsi~: $$\tan c_1(90°)=\frac{-e}{\sqrt{R^2-e^2}-e}.$$ Dans le modèle d'{Ibn al-\v{S}\=a\d{t}ir}, on a~: $$\tan c_1(90°)=\frac{-(P_3P_4-P_4P_5)}{OP_3}.$$ Ces deux grandeurs seront égales si et seulement si $$P_3P_4-P_4P_5=\frac{e\times OP_3}{\sqrt{R^2-e^2}-e}.$$ Avec $OP_3=R=60$, on obtient, comme Hartner\footnote{\textit{Cf.} \cite{hartner1974} p.~15. Nous reprenons ci-dessous les calculs menés par Hartner, car il les a menés sous l'hypothèse que $P_4P_5=0;50$, au lieu de $0;55$. Ses conclusions restent valables.}, les relations suivantes~: \begin{align} P_3P_4+P_4P_5=\frac{R(C-1)}{C+1}\\ e=\frac{R(C-1)}{C+3} \end{align} où $C=\dfrac{R+3e}{R-e}=\dfrac{OP_3+(P_3P_4+P_4P_5)}{OP_3-(P_3P_4+P_4P_5)}$, et \begin{align} P_3P_4-P_4P_5=\frac{Re}{\sqrt{R^2-e^2}-e} \end{align} Cette dernière relation peut être inversée pour exprimer $e$ en fonction de $P_3P_4-P_4P_5$~: \begin{align} e=\frac{R(P_3P_4-P_4P_5)}{\sqrt{R^2+2R(P_3P_4-P_4P_5)+2(P_3P_4-P_4P_5)^2}} \end{align} Comme les rayons des orbes choisis par {Ibn al-\v{S}\=a\d{t}ir} impliquent que\linebreak ${\dfrac{OP_3+(P_3P_4+P_4P_5)}{OP_3-(P_3P_4+P_4P_5)}=\dfrac{65}{55}}$, on aurait, d'après (2), $e=2;36,31$. D'après (3), on aurait alors~: $$P_3P_4-P_4P_5=2;43,38,$$ d'où $$c_1(90°)=\arctan\frac{-(P_3P_4-P_4P_5)}{OP_3}=-2;36,18.$$ Au contraire, selon {Ibn al-\v{S}\=a\d{t}ir}, on a~: $$P_3P_4-P_4P_5=3;10,$$ d'où $$c_1(90°)=3;1,16.$$ Hartner en tire une première conclusion~: {Ibn al-\v{S}\=a\d{t}ir} n'a pas calibré son modèle pour reproduire exactement les élongations maximales observées par Ptolémée aux syzygies et dans les quadratures. \'Etrangement, Hartner, peut-être trop prudent, juge <<~peu probable~>>\footnote{\textit{Cf.} \cite{hartner1974} p.~23.} qu'{Ibn al-\v{S}\=a\d{t}ir} n'ait pas eu l'intention de concevoir un véritable <<~substitut géométrique~>> du mécanisme de l'\textit{Almageste}. La suite de son article tend pourtant à convaincre qu'{Ibn al-\v{S}\=a\d{t}ir} aurait au moins utilisé une observation nouvelle à l'apogée~; et si l'on veut préserver l'adéquation du modèle dans les quadratures, cette observation nouvelle conduit à abandonner les relations (1) à (4). Ptolémée avait observé une élongation maximale à l'apogée, $\max\vert c_2(0,\cdot)\vert=19;3$. Le modèle d'{Ibn al-\v{S}\=a\d{t}ir} donne\footnote{Hartner trouve $19;30$ sous l'hypothèse que $P_4P_5=0;50$. On trouve $19;28$ en prenant $P_4P_5=0;55$.} $\max\vert c_2(0,\cdot)\vert=19;28$. Cette supposée observation nouvelle sera notre point de départ pour tenter de reconstituer le calibrage du modèle d'{Ibn al-\v{S}\=a\d{t}ir}. Quelles sont les données de l'observation~?
On a d'abord cette nouvelle observation, qu'on arrondira à $19;30$~: $$\max\vert c_2(0,\cdot)\vert=\arcsin\frac{P_5P_6-2\times P_6P_7}{OP_3+(P_3P_4+P_4P_5)}=19;30.$$ L'\textit{Almageste} donne aussi une observation de l'élongation maximale au périgée~: $$\max\vert c_2(180°,\cdot)\vert=\arcsin\frac{P_5P_6-2\times P_6P_7}{OP_3-(P_3P_4+P_4P_5)}=23;15.$$ Enfin, des observations de l'élongation maximale aux quadratures, Ptolémée déduit que $c_1(90°)=3°$ et que $\max\vert c_2(90°,\cdot)\vert=23;15$. On déduit des deux premières données~: $$\frac{OP_3-(P_3P_4+P_4P_5)}{OP_3+(P_3P_4+P_4P_5)}=\frac{\sin(19°30')}{\sin(23°15')},$$ d'où, si $OP_3=60$, $$P_3P_4+P_4P_5=5;1,7.$$ En reportant dans $\max\vert c_2(0,\cdot)\vert$, on en déduit~: $$P_5P_6-2\times P_6P_7=(60+P_3P_4+P_4P_5)\sin(19°30')=21;42,13.$$ De $c_1(90°)$, on tire~: $$P_3P_4-P_4P_5=60\tan(3°)=3;8,40,$$ d'où enfin, comme il se doit~: $$P_3P_4=4;4,53\simeq 4;5,$$ $$P_4P_5=0;56,13\simeq 0;55.$$ Reste à calculer $P_5P_6+2\times P_6P_7$. Or~: $$\max\vert c_2(90°,\cdot)\vert=23;15=\arcsin\frac{P_5P_6+2\times P_6P_7}{\sqrt{(P_3P_4-P_4P_5)^2+60^2}},$$ d'où $$P_5P_6+2\times P_6P_7=\sin(23°15')\times\sqrt{(P_3P_4-P_4P_5)^2+60^2}=23;43,2.$$ On devrait alors avoir $2\times P_6P_7=1;1$, d'où $P_6P_7=P_7P=0;30,30$ au lieu de $0;33$ d'{Ibn al-\v{S}\=a\d{t}ir}~; mais il est bien possible qu'{Ibn al-\v{S}\=a\d{t}ir} ait observé une équation maximale légèrement supérieure à celle de Ptolémée dans les quadratures. Si $\max\vert c_2(90°,\cdot)\vert=23;30$, sans rien changer par ailleurs, on trouverait en effet $P_6P_7\simeq 0;34$, arrondi qu'on pourrait aussi tronquer à $0;33$.\label{end_mercure} \paragraph{Tables d'équations en longitude} Les formules p.~\pageref{equ_lune}, \pageref{equ_sat} et \pageref{equ_mercure}, ainsi que les formules d'interpolation afférentes, permettent de restituer les tables d'équations mentionnées aux chapitres 11, 14 et 23 de la première partie de la \textit{Nihaya al-s\=ul} comme nous l'avons fait dans la table \ref{jadwal}. Pour faciliter d'éventuelles comparaisons, nous avons adopté le même format que Fuad Abbud dans \cite{ghanem1976} p.~80.
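À titre de contrôle, l'esquisse suivante (toujours purement indicative, avec les mêmes conventions de notre cru) recalcule quelques entrées de la colonne de Mercure de la table \ref{jadwal} à partir des formules données p.~\pageref{equ_mercure} et du rayon apparent $P_5'P'=P_5P_6-PP_6\cos 2\overline{\kappa}$.
\begin{verbatim}
# Esquisse indicative : recalcul de quelques entrees de la colonne de Mercure
# de la table des equations (c1, c2 et le coefficient d'interpolation chi).
from math import asin, cos, degrees, radians, sin, sqrt

def val(d, m=0, s=0):
    return d + m / 60 + s / 3600

OP3, P3P4, P4P5 = 60, val(4, 5), val(0, 55)
P5P6, PP6 = val(22, 46), val(1, 6)

def OP5p(kappa):
    k = radians(kappa)
    return sqrt(((P3P4 - P4P5) * sin(k)) ** 2
                + (OP3 + (P3P4 + P4P5) * cos(k)) ** 2)

def c1(kappa):
    return -degrees(asin((P3P4 - P4P5) * sin(radians(kappa)) / OP5p(kappa)))

def r_app(kappa):              # rayon de l'epicycle apparent P5'P'
    return P5P6 - PP6 * cos(radians(2 * kappa))

def c2(kappa, alpha):
    a, r, d = radians(alpha), r_app(kappa), OP5p(kappa)
    return degrees(asin(r * sin(a)
                        / sqrt((r * sin(a)) ** 2 + (d + r * cos(a)) ** 2)))

def chi(kappa):
    c2max = lambda k: degrees(asin(r_app(k) / OP5p(k)))
    return (c2max(kappa) - c2max(0)) / (c2max(180) - c2max(0))

print(round(c1(90), 2), round(c2(0, 90), 2), round(chi(90), 2))
# -3.02 (soit -3;1), 18.43 (soit 18;26) et 1.06 (soit environ 1;3) :
# a comparer a la ligne 90 de la colonne de Mercure dans la table.
\end{verbatim}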
\begin{table} \begin{center}\begin{tabular}{cc|cccc} Lune & $2\bar{\eta}$ ou $\alpha$ & $c_1(2\bar{\eta})$ & $c_2(\alpha,0)$ & $c_2(\alpha,180°)-c_2(\alpha,0)$ & $\chi(2\bar{\eta})$ \\ & $30$ & $7;32$ & $-2;18$ & $-1;8$ & $0;5$ \\ & $60$ & $11;48$ & $-4;5$ & $-2;5$ & $0;18$ \\ & $90$ & $12;9$ & $-4;55$ & $-2;40$ & $0;33$ \\ & $120$ & $9;33$ & $-4;27$ & $-2;36$ & $0;47$ \\ & $150$ & $5;11$ & $-2;40$ & $-1;39$ & $0;57$ \\ \hline Saturne & $\bar{\kappa}$ ou $\alpha$ & $c_1(\bar{\kappa})$ & $c_2(0,\alpha)$ & $c_2(180,\alpha)-c_2(0,\alpha)$ & $\chi(\bar\kappa)$ \\ & $30$ & $-3;6$ & $2;42$ & $0;18$ & $0;3$ \\ & $60$ & $-5;29$ & $4;50$ & $0;33$ & $0;11$ \\ & $90$ & $-6;30$ & $5;51$ & $0;42$ & $0;25$ \\ & $120$ & $-5;48$ & $5;21$ & $0;41$ & $0;41$ \\ & $150$ & $-3;26$ & $3;13$ & $0;26$ & $0;55$ \\ \hline Jupiter & $\bar{\kappa}$ ou $\alpha$ & $c_1(\bar{\kappa})$ & $c_2(0,\alpha)$ & $c_2(180,\alpha)-c_2(0,\alpha)$ & $\chi(\bar\kappa)$ \\ & $30$ & $-2;31$ & $4;31$ & $0;22$ & $0;3$ \\ & $60$ & $-4;26$ & $8;16$ & $0;43$ & $0;12$ \\ & $90$ & $-5;14$ & $10;23$ & $0;58$ & $0;26$ \\ & $120$ & $-4;39$ & $9;55$ & $1;2$ & $0;42$ \\ & $150$ & $-2;44$ & $6;13$ & $0;43$ & $0;55$ \\ \hline Mars & $\bar{\kappa}$ ou $\alpha$ & $c_1(\bar{\kappa})$ & $c_2(0,\alpha)$ & $c_2(180,\alpha)-c_2(0,\alpha)$ & $\chi(\bar\kappa)$ \\ & $30$ & $-5;15$ & $11;9$ & $1;28$ & $0;2$ \\ & $60$ & $-9;22$ & $21;45$ & $3;8$ & $0;9$ \\ & $90$ & $-11;19$ & $30;54$ & $5;17$ & $0;20$ \\ & $120$ & $-10;20$ & $36;29$ & $8;29$ & $0;36$ \\ & $150$ & $-6;15$ & $31;51$ & $13;5$ & $0;53$ \\ \hline Vénus & $\bar{\kappa}$ ou $\alpha$ & $c_1(\bar{\kappa})$ & $c_2(0,\alpha)$ & $c_2(180,\alpha)-c_2(0,\alpha)$ & $\chi(\bar\kappa)$ \\ & $30$ & $-1;0$ & $12;25$ & $0;19$ & $0;4$ \\ & $60$ & $-1;44$ & $24;26$ & $0;40$ & $0;14$ \\ & $90$ & $-2;1$ & $35;25$ & $1;8$ & $0;28$ \\ & $120$ & $-1;46$ & $43;42$ & $1;52$ & $0;44$ \\ & $150$ & $-1;2$ & $42;47$ & $3;13$ & $0;55$ \\ \hline Mercure & $\bar{\kappa}$ ou $\alpha$ & $c_1(\bar{\kappa})$ & $c_2(0,\alpha)$ & $c_2(180,\alpha)-c_2(0,\alpha)$ & $\chi(\bar\kappa)$ \\ & $30$ & $-1;25$ & $7;22$ & $0;59$ & $0;12$ \\ & $60$ & $-2;31$ & $13;54$ & $2;1$ & $0;39$ \\ & $90$ & $-3;1$ & $18;26$ & $3;4$ & $1;3$ \\ & $120$ & $-2;44$ & $19;6$ & $3;55$ & $1;11$ \\ & $150$ & $-1;38$ & $13;11$ & $3;27$ & $1;5$ \\ & $180$ & $0$ & $0$ & $0$ & $1$ \end{tabular}\end{center} \caption{\label{jadwal}Restitution d'un extrait des tables d'équations en longitude d'{Ibn al-\v{S}\=a\d{t}ir}} \end{table} \label{comm_fin} \part*{\'Edition et traduction} \addcontentsline{toc}{part}{\'Edition et traduction} \label{txt_debut} \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ACAUBD@\RL{'a.sl}!ACAUBD@\RL{'a.sl}, 1) fondement, 2) origine} \index{ASACBD@\RL{sa'ala}!BEASACBDAI AVAQBHAQBJAI@\RL{mas'alaT .durUriyaT}, principe} \index{ATBCBC@\RL{^skk}!\RL{^skkuN}, doute} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{BGBJAC@\RL{haya'a}!BGBJACAI@\RL{hI'aT}, configuration, astronomie} \addcontentsline{toc}{chapter}{Préface} \includepdf[pages=1,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \LARGE L'achèvement de l'enquête et la correction des fondements \end{center} Au nom de Dieu, le Clément, le Miséricordieux~; louer Dieu convient à sa Majesté~; que Dieu bénisse notre seigneur Mahomet et sa descendance. `Al{\=\i} b. Ibr\=ah{\=\i}m b. Mu\d{h}ammad b. 
al-\v{S}\=a\d{t}ir, \emph{muwaqqit}\footnote{Qui observe et indique l'heure de la prière.} à la Grande Mosquée des Omeyyades, dit~: Dans ce traité, nous avons voulu exposer la configuration des orbes des astres selon la méthode que nous avons inventée -- méthode sauve des doutes et en accord avec les observations -- mais sans les démonstrations, à la manière d'un abrégé, pour en simplifier l'apprentissage et l'usage. Nous y ajoutons ce qu'il faut et nous le déduisons des fondements et des principes. C'est une noble intention, car elle fait connaître les états des corps supérieurs et inférieurs (du point de vue de leurs quantités), leurs grandeurs et leurs distances (du point de vue de leurs quantités et de certaines de leurs qualités), les lieux de leurs mouvements, la grandeur et le sens de ces mouvements, et ce qui, parmi les choses connues, découle des grandeurs et des distances des corps. Ptolémée, et d'autres parmi les anciens et les modernes, avaient entrepris d'établir des fondements, mais ils ne suffisent pas car ils contredisent les fondements déjà admis en géométrie et en physique. Plusieurs spécialistes ont déjà exposé des doutes réfléchis sur ces fondements. Nous en avons exposé d'autres que l'observation (entre autres) nous a engagé à considérer. Aucun de nos prédécesseurs n'a su établir de fondements suffisants sans contredire les observations, et ils l'ont avoué dans leurs livres. Nous avons étudié ces doutes avec soin et nous avons expliqué cela dans notre livre intitulé \emph{Commentaire des observations}\footnote{\emph{Ta`l{\=\i}q al-ar\d{s}\=ad}, ouvrage perdu d'Ibn al-\v{S}\=a\d{t}ir.}. Nous exposerons aussi ces doutes au début du présent traité, sans démonstration ni explication~: nous ferons ainsi savoir notre excuse pour avoir dit ce que nous y disons, et nous montrerons la grande utilité de ce que nous avons inventé. Seul le saura qui embrasse la connaissance de ce qu'ont inventé anciens et modernes, la contradiction des fondements, et l'épuisement de leur effort à établir des fondements suffisants. \newpage\phantomsection \includepdf[pages=2,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Nos prédécesseurs n'ont pu faire réussir cette entreprise, mais Dieu (loué soit-il) l'a fait réussir~: nous avons indiqué toutes les prémisses de ces fondements, leurs conséquences, et leurs démonstrations, dans notre \emph{Commentaire des observations}. Dans le présent traité, nous avons voulu détacher les fondements des démonstrations, et exposer ces doutes comme nous avons dit -- nous demandons à Dieu réussite et protection. Ce traité se compose de deux parties. La première compte trente chapitres et une conclusion. L'introduction comprend deux sections. \newpage\phantomsection \index{AYBDBE@\RL{`lm}!AYAGBDBE@\RL{`Alam}, monde} \index{AYBDBE@\RL{`lm}!AYAGBDBE AYBDBHBJ@\RL{`Alam `alwiy}, monde supérieur, \emph{i. e.} supralunaire)} \index{AYBDBE@\RL{`lm}!AYAGBDBE ASBABDBJ@\RL{`Alam safliy}, monde inférieur, \emph{i. 
e.} sublunaire} \index{AYBDBE@\RL{`lm}!AYAGBDBE BCBHBF BAASAGAO@\RL{`Alam al-kwn wa-al-fsAd}|see{\RL{`Alam safliy}}} \index{AMAOAO@\RL{.hdd}!BEAMAOAOAO ALBGAGAJ@\RL{m.hddid al-jihAt}, le neuvième orbe} \index{AHASAW@\RL{bs.t}!ALASBE AHASBJAW@\RL{jsm basI.t}, corps simple} \index{AQBCAH@\RL{rkb}!ALASBE BEAQBCAH@\RL{jsm mrkb}, corps composé} \index{AUBHAQ@\RL{.swr}!AUBHAQ@\RL{.sUraT j .suwar}, figure, constellation} \index{AWAHAY@\RL{.tb`}!AWAHBJAYAI@\RL{.tabI`aT}, nature} \index{ACAKAQ@\RL{'a_tr}!ACAKBJAQBJ@\RL{'a_tIriy}, éthéré} \index{AYBFAUAQ@\RL{`n.sr}!AYBFAUAQ@\RL{`un.sar}, élément simple} \index{BHBDAO@\RL{wld}!BEBHAGBDBJAO AKBDAGAKAI@\RL{al-mwAlId al-_talA_taT}, les trois règnes} \index{ACAKAQ@\RL{'a_tr}!ACAKAQ AYBDBHBJ@\RL{'a_tr `alwiy}, météore} \index{ASAMAH@\RL{s.hb}!ASAMAH@\RL{s.hAb j s.hb}, nuage, nébuleuse ??} \index{ATBGAH@\RL{^shb}!ATBGAH APBHAGAJ APBFAGAH@\RL{^shb _dwAt al-a_dnAb}, étoile filante ??} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AYAQAVBJAI@\RL{.harakaT `r.diyaT}, mouvement par accident} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BBASAQBJAI@\RL{.harakaT qsriyaT}, mouvement violent} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AEAQAGAOBJAI@\RL{.harakaT 'irAdiyaT}, mouvement volitif} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AWAHBJAYBJAI@\RL{.harakaT .tabI`iyaT}, mouvement naturel} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \addcontentsline{toc}{chapter}{Introduction} \includepdf[pages=3,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Introduction \large Première section \end{center} Sache que le \emph{monde} est le nom de ce qu'embrasse la surface visible de l'orbe supérieur. Cet orbe est ce qui définit les directions, car lui et son centre définissent deux directions~: le haut et le bas. Il est [l'orbe] absolument simple. Les corps sont classés en corps \emph{simples} et corps \emph{composés}. Le corps simple est ce dont les parties et la nature sont semblables, c'est-à-dire ce qui ne se divise pas en corps de figures et de natures différentes~; au contraire, le corps simple a une nature unique et ce qui en émane procède d'une manière unique\footnote{\emph{d'une manière unique} ou \emph{d'une même manière} ou \emph{d'une seule manière} ou \emph{selon le même}~? Nous avons choisi la première traduction. C'est un attribut du mouvement dans la classification qui va suivre.}. Le corps composé, c'est le contraire. Parmi les corps simples, on distingue l'\emph{éther} et les \emph{éléments simples}. L'éther, c'est les orbes et les astres situés en eux. On appelle cela monde supérieur ou monde des orbes et des cieux. Les autres corps simples sont les quatre éléments simples bien connus. On appelle cela les éléments (y compris ce qui se compose [d'éléments]). On appelle cela monde inférieur, monde de la génération et de la corruption, lieu des éléments simples et des corps qui en sont composés sous l'orbe de la Lune. Dans les corps composés, on distingue une première division -- ce dont la composition est parfaite et dont la forme se conserve un temps -- et une seconde division -- sans perfection ni forme qui se conserve. La première division, c'est les trois règnes~: minéral, végétal, animal. Le minéral est ce qui ne possède pas la faculté de croissance, la plante est ce qui possède la faculté de croissance, sans la perception, et l'animal est ce qui possède la faculté de croissance avec la perception. 
La seconde division des corps composés, c'est les météores\footnote{Les météores appartiennent-ils au monde sublunaire ou bien aux cieux~? Al-\d{T}\=us\={\i} laisse entendre qu'il s'agit d'éther (\emph{cf.} \cite{altusi1993} \S~II.1), bien qu'ils appartiennent au monde sublunaire (\cite{altusi1993} \S~II.2). Ils seraient parfois emportés par le mouvement des cieux, à cause du fait qu'ils sont assez loin de la Terre (contrairement à la partie de l'air <<~adjacente à la Terre~>> qui se conforme au mouvement rectiligne, \emph{cf.} \cite{altusi1993} \S~II.1).}~: les nuages, les étoiles filantes, \emph{etc}. Les lieux des corps composés sont les lieux des parties qui sont majoritaires en eux pourvu qu'il n'y ait pas d'attraction en sens contraire. Le mouvement est la condition de la chose mue entre l'origine et le terme, en tant que son état à chaque instant diffère de ce qu'il sera après et de ce qu'il était avant. \newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI@\RL{.harakaT basI.taT}, mouvement simple} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEANAJBDBAAI@\RL{.harakaT mu_htalifaT}, mouvement irrégulier (\textit{i. e.} non uniforme)} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCBCAHAI@\RL{.harakaT mrkkbaT}, mouvement composé} \index{AMAQBC@\RL{.hrk}!BEAJAMAQAQBC BFBAASBG AZBJAQBG@\RL{mt.hrrk binafsihi / bi.gayrihi}, mû par soi / par un autre} \index{ASACBD@\RL{sa'ala}!BEASACBDAI AVAQBHAQBJAI@\RL{mas'alaT .durUriyaT}, principe} \index{ANBDBH@\RL{_hlw}!ANBDAGAB@\RL{_hlA'}, vide} \index{ASBCBF@\RL{skn}!ASBCBHBF@\RL{sukUn}, repos} \label{var27} \includepdf[pages=4,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Le mouvement est ou par accident, ou violent, ou volitif, ou naturel (ou bien il se compose de mouvements naturels, comme les mouvements des astres, et c'est alors un composé simple que certains disent aussi être un composé volitif). Le \emph{mouvement par accident} est comme le mouvement du passager du bateau. Le\label{bateau} \emph{mouvement violent} est comme le mouvement du bateau et il lui vient d'une cause extérieure. Le \emph{mouvement volitif} est comme le mouvement des animaux qui ont une conscience. Le mouvement est \emph{naturel} s'il est celui d'êtres qui n'ont pas conscience de ce qui procède d'eux, comme les éléments, les orbes (à ce que certains disent), et les plantes.\footnote{Ce paragraphe laisse déjà entendre qu'il y a deux doctrines, l'une affirmant que le mouvement des astres est un mouvement simple volitif, l'autre que le mouvement des orbes est un mouvement simple naturel. D'autre part, c'est la première mention d'un mouvement <<~composé simple~>>.} On distingue le mouvement naturel \emph{qui n'agit pas d'une manière unique} (le mouvement des plantes) du mouvement naturel \emph{qui agit d'une manière unique}. Selon moi, il y a dans celui-ci deux divisions~: le mouvement droit et le mouvement circulaire. Le \emph{mouvement droit} tend vers le centre et il est propre aux éléments, ainsi qu'à leurs composés. Le \emph{mouvement circulaire} est selon moi comme le mouvement des orbes. Il y a une doctrine distinguant le mouvement volitif \emph{qui ne procède pas d'une manière unique} (le mouvement des animaux) du mouvement volitif \emph{qui procède d'une manière unique} comme le mouvement des orbes autour de leurs centres. Selon cette doctrine, les orbes et les astres ont une conscience, mais selon une autre doctrine ce sont des corps simples qui n'ont pas de conscience. 
Mon point de vue est qu'ils sont composés (sauf le neuvième orbe), mais pas composés d'éléments.\footnote{\textit{cf.} l'analyse de ces deux doctrines dans notre commentaire p.~\pageref{mouvement}.} Le mouvement céleste individuel est ce qui émane d'un moteur simple unique, et le mouvement composé ce qui émane d'une multiplicité de moteurs simples (plus qu'un). Chaque mouvement individuel est simple, mais chaque mouvement irrégulier est composé~; car il y a certains mouvements célestes simples composés comme on le montrera si Dieu le veut.\footnote{\textit{cf.} notre commentaire sur le mouvement simple-composé p.~\pageref{simple_compo}.} On a besoin de sept principes\footnote{Le mot \emph{mas\=a'il} renvoie aux \emph{mas\=a'il \d{da}r\=uriyya} de la préface~: d'où ma traduction.} physiques~: \emph{Premier principe.} Le vide est impossible, et dans les orbes, le repos est impossible.\label{pas_de_repos} \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BGBJAC@\RL{haya'a}!BGBJACAI@\RL{hI'aT}, configuration, astronomie} \index{AHAOAB@\RL{bda'}!BEAHAOAB@\RL{mabda'}, principe, origine} \index{ALASBE@\RL{jsm}!ALASBE@\RL{jsm}, corps, solide} \label{var24} \includepdf[pages=5,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \emph{Deuxième principe.} Tout mouvement a un principe. Si le principe n'est pas séparé de la chose mue quant à sa position, de sorte que l'indication sensible qui le montre est une, on dit qu'elle est \emph{mue par soi}. Si au contraire il en est séparé, de sorte que le mouvement se rapporte à la chose mue et que l'action de mouvoir se rapporte à ce en quoi est son principe, on dit alors qu'elle est \emph{mue par un autre}. \emph{Troisième principe.} Un corps qui n'est pas mû par soi mais par le mouvement d'un autre corps mû par soi paraît se mouvoir par soi. \emph{Quatrième principe.} Aucune chose ayant en elle un principe de mouvement circulaire n'accepte le mouvement rectiligne, et inversement, sauf par violence. \emph{Cinquième principe.} Il ne peut y avoir de principe de deux mouvements différents dans une chose mue simple car la différence des mouvements exigerait une différence des choses mues\footnote{<<~choses mues~>>, \emph{sic}, à moins que \textit{mt\d{h}arrk} puisse aussi signifier moteur.}. \`A chaque fois qu'il y a plusieurs mouvements différents dans un orbe, c'est qu'il a un mouvement par soi et un mouvement par un autre. \emph{Sixième principe.} Les orbes n'acceptent pas le mouvement rectiligne, et il ne peut en être du mouvement de l'astre dans les cieux comme du mouvement du poisson dans l'eau. \emph{Septième principe.} Les mouvements des orbes agissent d'une manière unique, donc ils n'accélèrent ni ne ralentissent, ni ne reviennent avant complétion d'un tour, ni ne s'arrêtent, ni ne sortent de leur lieu, ni ne changent d'état. Au contraire, ils sont constamment mus d'un mouvement simple, circulaire, dans la direction vers laquelle ils tendent. Les mouvements que l'observation trouve différents de cela sont des mouvements par accident dus à la composition de mouvements simples. On ne peut sortir des limites imposées par ces fondements, or ils sont incompatibles avec la configuration admise par Ptolémée et les autres, même les meilleurs parmi les modernes, comme en témoignent leurs livres.
\newpage\phantomsection \index{BGBJAC@\RL{haya'a}!BGBJACAI@\RL{hI'aT}, configuration, astronomie} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BABDBC@\RL{flk}!BABDBC ANAGAQAL BEAQBCAR@\RL{falak _hArij al-markaz}, orbe excentrique} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEASAJBHBJAI@\RL{.harakaT mustawiyaT}, mouvement uniforme} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEANAJBDBAAI@\RL{.harakaT mu_htalifaT}, mouvement irrégulier (\textit{i. e.} non uniforme)} \index{AMAPBH@\RL{.h_dw}!BFBBAWAI BEAMAGAPACAI@\RL{nuq.taT al-m.hA_dAT}, point de prosneuse} \includepdf[pages=6,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center}\large Deuxième section \end{center} Des doutes et des impossibilités qui nous ont arrêtés dans les configurations connues. \label{doutes_excentrique} Parmi ces [doutes], il y a \emph{l'orbe excentrique}. Un orbe excentrique rapporté au centre du monde, c'est impossible, car cela entraîne nécessairement qu'il y ait dans les orbes rapportés au centre du monde des formes sans la perfection de la circularité et qui ne sont pas mues. S'il se meut autour du centre du monde cela entraîne une irrégularité, et si l'excentrique se meut d'un mouvement uniforme autour de son propre centre, alors il y a des mouvements qui ne sont pas uniformes par rapport au centre du monde. Si l'on permettait cela, alors il n'y aurait plus aucune nécessité de poser des orbes multiples~: autant dire qu'à chaque astre appartient un orbe qui le meut d'un mouvement irrégulier, qui stationne, rétrograde, accélère ou ralentit. C'est impossible. Avec le mouvement de l'Apogée, c'est encore plus impossible. Comme Ptolémée pensait que l'Apogée est fixe, il a adopté l'excentrique. Quant à l'orbe épicycle\footnote{Ibn al-\v{S}\=a\d{t}ir fait ici allusion aux deux modèles équivalents proposés par Ptolémée pour le mouvement du Soleil~: l'un faisait intervenir un excentrique (de centre fixe distinct du centre du monde et dans la direction de l'Apogée, elle-même fixée, cet excentrique était en rotation uniforme autour de son propre centre), et l'autre faisait intervenir un épicycle.} pour le Soleil, il est permis, mais il n'est pas en accord avec des observations précises, comme tu en prendras connaissance à propos de la configuration du Soleil. Car nous avons trouvé que l'irrégularité du Soleil, c'est-à-dire l'équation du Soleil, n'est pas en accord avec des observations précises dans les octants. \label{doutes_lune}Parmi les doutes concernant les orbes de la Lune, il y a aussi l'excentrique. L'excentrique dans les orbes de la Lune est impossible, et l'uniformité du mouvement de l'excentrique autour d'un point autre que son propre centre est aussi impossible\footnote{Allusion aux deuxième et troisième modèles de la Lune dans l'\textit{Almageste}.}. Le fait que le rayon de l'épicycle suive un point autre que le centre de l'orbe qui le porte est impossible\footnote{Allusion au point de prosneuse dans le troisième modèle de la Lune dans l'\textit{Almageste}, mais cette critique concerne aussi son deuxième modèle.}~: l'orbe épicycle paraîtrait alors avoir un mouvement multiple. Il y aurait en lui deux mouvements, l'un uniforme et l'autre irrégulier, or le mouvement irrégulier ne fait pas un tour complet et c'est impossible. [Ces hypothèses] entraînent que le rayon de la Lune dans les quadratures est double du rayon de la Lune dans les syzygies, or c'est impossible~: on n'a jamais vu ça. 
\newpage\phantomsection \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ALASBE@\RL{jsm}!ALASBE@\RL{jsm}, corps, solide} \index{AMAPBH@\RL{.h_dw}!BFBBAWAI BEAMAGAPACAI@\RL{nuq.taT al-m.hA_dAT}, point de prosneuse} \index{ANBDBH@\RL{_hlw}!ANBDAGAB@\RL{_hlA'}, vide} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BD-AYAQAV@\RL{.harakaT al-`r.d}, mouvement en latitude} \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \index{BABDBC@\RL{flk}!BABDBC BEAYAOAOBD BEASBJAQ@\RL{falak mu`ddl al-msIr}, orbe équant} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \includepdf[pages=7,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent D'après des observations précises, quand la distance de la Lune au Soleil est d'un signe et demi, la Lune vraie déduite de ces principes contredit ce qu'exige l'observation. D'ailleurs ni Ptolémée ni aucun autre n'a mentionné d'observation de la Lune vraie dans ces positions. La localisation de l'astre moyen\label{astre_moyen} et du centre de l'astre dans un cercle semblable à l'orbe de l'écliptique est impossible car ils ne sont pas pris dans le même cercle, ni par rapport au même point. L'orbe excentrique est aussi impossible pour les autres astres errants, puisque nous avons dit que l'uniformité du mouvement des excentriques autour de points autres que leurs centres est impossible. Quand ils disent que l'excentrique est dans le plan de l'orbe incliné et que le plan de l'excentrique coupe le parécliptique en deux points opposés (la tête et la queue), c'est impossible, car rien sauf un grand cercle ne coupe un grand cercle en deux moitiés, or l'excentrique n'est pas un grand cercle~; il est donc impossible que le parécliptique soit coupé en deux points opposés, et il y a là une subtilité, prête attention. Que le rayon de l'épicycle suive un point autre que le centre de l'orbe qui le porte, c'est impossible, or ils l'ont posé ainsi dans les orbes des cinq planètes. L'orbe équant est impossible. C'est une image fausse. Imaginer une droite dont l'un des deux bouts est fixe au centre de l'orbe équant, et l'autre s'allonge ou se raccourcit, passe par le centre des épicycles des astres et les fait tourner autour de ce point, c'est impossible. La droite n'est pas corporelle, il faudrait du vide, et d'autres choses impossibles qui en dépendent. Le fait que Mercure soit plus proche ailleurs qu'au point opposé à l'Apogée est impossible à cause des observations, bien que cela ne soit pas impossible à concevoir. Concevoir des orbes dans les orbes des épicycles des astres errants, de la manière indiquée par Ptolémée dans les \emph{Hypothèses planétaires}, orbes qui inclinent leurs plans par rapport au plan du zodiaque, pour le mouvement en latitude des astres, c'est impossible. La position mentionnée par Na\d{s}{\=\i}r al-\d{T}\=us{\=\i} \label{doute_tusi_lune} dans la \emph{Ta\b{d}kira} sur l'élimination des doutes des orbes de la Lune est impossible parce qu'on y trouve l'excentrique et que le diamètre de l'épicycle suit un point autre que le centre de l'orbe portant l'épicycle. 
\newpage\phantomsection \index{AYAQAVBJ@\RL{al-mwyd al-`r.dI}, al-Mu'ayyad al-`Ur\d{d}{\=\i}} \index{ATBJAQAGARBJ@\RL{q.tb al-dIn al-^sIrAzI}, Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i}} \index{ARAQBBAGBDAI@\RL{al-zarqAlaT}, al-Zarq\=ulla} \index{BBAHBD@\RL{qbl}!AEBBAHAGBD@\RL{'iqbAl}, trépidation, accession} \index{AOAHAQ@\RL{dbr}!AEAOAHAGAQ@\RL{'idbAr}, trépidation, récession} \index{AMAPBH@\RL{.h_dw}!BFBBAWAI BEAMAGAPACAI@\RL{nuq.taT al-m.hA_dAT}, point de prosneuse} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{APAQBH@\RL{_drw}!APAQBHAI@\RL{_dirwaT}, sommet, apogée} \index{BABDBC@\RL{flk}!BABDBC ANAGAQAL BEAQBCAR@\RL{falak _hArij al-markaz}, orbe excentrique} \index{BABDBC@\RL{flk}!BABDBC BEAYAOAOBD BEASBJAQ@\RL{falak mu`ddl al-msIr}, orbe équant} \index{BBBEAQ@\RL{qmr}!BBBEAQ@\RL{qamar}, Lune} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BD-AYAQAV@\RL{.harakaT al-`r.d}, mouvement en latitude} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \includepdf[pages=8,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection La position mentionnée par Na\d{s}{\=\i}r al-\d{T}\=us{\=\i} dans la correction de la configuration des quatre planètes Saturne, Jupiter, Mars et Vénus est impossible, car y demeurent les excentriques, les orbes équants, l'irrégularité du mouvement propre à cause du mouvement de l'apogée\footnote{Première occurrence du mot \textit{dhirwa} que l'on traduit par <<~apogée~>> (sans majuscule)~: c'est une extrémité du diamètre de l'épicycle parallèle à la direction Terre--astre~moyen. On traduit \textit{'awj} par <<~Apogée~>> (avec une majuscule)~: c'est le point où le centre de l'épicycle est à distance maximale de la Terre. Voir la figure \ref{fig040}~p.~\pageref{fig040} \textit{infra}.} des épicycles, et d'autres choses impossibles. Il l'a lui-même reconnu --- Dieu le pardonne~: il lui manquait un principe suffisant. La position conjecturée par al-Mu'ayyad al-`Ur\d{d}{\=\i} dans la configuration des orbes de la Lune, concernant son inversion du sens du mouvement de l'excentrique et la variation de l'apogée de l'épicycle qui suit un point autre que le centre de l'orbe portant l'épicycle, c'est impossible.\label{urdi_lune} La position mentionnée par al-Mu'ayyad al-`Ur\d{d}{\=\i} dans la réforme des orbes des astres errants est impossible car y demeurent l'excentrique, les orbes équants, etc. La position mentionnée par Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i} dans la réforme de la configuration des orbes de la Lune est impossible car y demeurent l'excentrique et le point de prosneuse. L'argument qu'il a avancé (Dieu ait pitié de lui) pour permettre le point de prosneuse de la Lune est la plus impossible des impossibilités et c'est une image fausse. 
Il en est revenu dans la \emph{Tu\d{h}fa} ; il y indique une autre manière, mais elle est aussi impossible.\label{shirazi_lune} Le principe qu'il a nommé l'invention concernant le désordre des latitudes des astres est impossible.\label{ibdaai} La position qu'il a mentionnée dans la réforme des orbes des astres errants est impossible à cause qu'on y trouve les excentriques, les équants, et l'irrégularité des deux apogées de l'épicycle. Le mouvement de trépidation\footnote{Littéralement, <<~accession et récession~>>.} n'est pas vrai car cela diffère de ce qu'on vérifie dans les observations anciennes et modernes. C'est d'ailleurs une idée fausse bien qu'on puisse concevoir de poser des orbes qui causent ces mouvements. S'ils étaient réels, j'ai déterminé le Soleil du point de vue d'al-Zarq\=ulla et je l'ai trouvé inférieur à la vérité de plus de trois degrés et demi --- ceci pendant l'accession. Pendant la récession, la différence serait de plus de douze degrés s'il est en fin de récession. \newpage\phantomsection \index{ACASAO@\RL{'asad}!BBBDAH ACASAO@\RL{qlb al-'asad}, le c{\oe}ur du Lion} \index{AHAQAL@\RL{brj}!AHAQBHAL BEBCBHBCAHAI@\RL{brUj mkwkbaT}, constellations du zodiaque} \index{BEALAQBJAWBJ@\RL{al-majrI.tI}, al-Majr{\=\i}\d{t}{\=\i}} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AKAGBFBJAI@\RL{.hrkaT _tAnyaT}, deuxième mouvement} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \includepdf[pages=9,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent J'ai déjà déterminé le c{\oe}ur du Lion avec ce mouvement, or cela n'est pas en accord avec sa position [observée], et cela contredit toutes les autres observations de cet astre, anciennes et présentes. La différence est grande et ajoute souvent quatre degrés à deux degrés. Dans les fondements, je n'ai rien trouvé en quoi il puisse manquer [quelque chose] de la même grandeur. On a vérifié par l'observation que le mouvement des fixes est un mouvement simple et uniforme autour du centre du Monde et sur les pôles du zodiaque, vers l'Est. Que cela soit différent est impossible. Al-Majr{\=\i}\d{t}{\=\i} a affirmé que les mouvements de tous les orbes des astres vont d'Est en Ouest, et que ce qu'on voit aller vers l'Est vient de la <<~carence~>> des mouvements venant de l'Est à rejoindre le mouvement diurne. C'est impossible. Son affirmation que le pôle de l'orbe du zodiaque tourne en un cercle (autour du pôle du Monde) dont l'azimut est de la grandeur de l'inclinaison maximale, vers l'Est, par carence du mouvement du huitième orbe, c'est impossible. Il affirme cela parce qu'il confond entre les parts de l'équateur et les parts du zodiaque. En fait c'est le contraire. Par exemple, si les pôles de l'orbe du zodiaque se meuvent, par carence, vers l'Est, d'un quart de cercle, et si nous traçons un grand cercle qui passe par les pôles de l'orbe du zodiaque, par les pôles de l'équateur, et par les solstices, alors il passera par la part de l'équateur qui suit le commencement du Bélier. On avait vérifié par l'observation l'immobilité des quatre points de l'équateur~; seulement il confond avec ces points les parties du zodiaque étoilé qui n'est pas le zodiaque véritable. Ce qu'il a dit est impossible à imaginer et cela n'existe pas. 
De même qu'il a rendu possible un mouvement des pôles du zodiaque vers l'Est autour des pôles de l'équateur, il a permis d'autres mouvements vers l'Est d'une manière semblable~; mais ceci est différent de ce que j'ai contesté [ci-dessus] sur l'existence des mouvements vers l'Est. Son énoncé selon lequel les mouvements vers l'Est sont dus à la carence du huitième, \textit{etc.} est futile. La démonstration a déjà établi l'existence de ces mouvements et a montré qu'une carence de l'orbe pour un mouvement de cette grandeur est impossible. \newpage\phantomsection \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{BEALAQBJAWBJ@\RL{al-majrI.tI}, al-Majr{\=\i}\d{t}{\=\i}} \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BDBHBDAHBJAI@\RL{.harakaT lawlbiyaT}, mouvement hélicoïdal} \index{ACAQASAWBHAC@\RL{'aris.tU'a}, Aristote} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEASAJBHBJAI@\RL{.harakaT mustawiyaT}, mouvement uniforme} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCBCAHAI@\RL{.harakaT mrkkbaT}, mouvement composé} \index{AOBHAQ@\RL{dwr}!AJAOBHBJAQ@\RL{tadwIr}, épicycle} \index{BABDBC@\RL{flk}!BABDBC AKAGBEBF@\RL{flk _tAmin}, huitième orbe (étoiles fixes)} \index{BABDBC@\RL{flk}!BABDBC BEAYAOAOBD BFBGAGAQ@\RL{flk m`ddl al-nhAr}, orbe de l'équateur} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{AHASAW@\RL{bs.t}!ALASBE AHASBJAW@\RL{jsm basI.t}, corps simple} \index{AQBCAH@\RL{rkb}!ALASBE BEAQBCAH@\RL{jsm mrkb}, corps composé} \label{var1} \includepdf[pages=10,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection L'énoncé qui dit qu'à chaque astre il y a un orbe unique avec lequel il se meut autour des pôles du zodiaque vers l'Est, encore par carence, et que ces deux mouvements produisent, dans les astres, arrêt, retour, mouvement droit ou incliné de la trajectoire de part et d'autre, et d'autres choses qu'on rencontre dans les astres, est un énoncé futile. Je l'ai déjà étudié avec soin et j'ai établi par la démonstration sa futilité dans mon livre intitulé \textit{Commentaire des observations}. Tout ce qu'a suivi al-Majr{\=\i}\d{t}{\=\i} dans sa lettre sur la modification des mouvements et l'établissement de l'orbe du Soleil autour du centre du monde, son centre sortant des pôles du monde, et que les pôles de l'orbe du Soleil tournent sur ce cercle excentrique vers l'Est par carence de cet orbe, c'est une impossibilité. Et cela que, quand l'axe de l'orbe du Soleil tournait dans l'excentrique sur les pôles de l'équateur, le Soleil, voire tout point dans le plan de l'orbe du Soleil, décrivait l'orbe excentrique~; c'est différent de ce que j'indique. D'autant plus que ce qu'ils ont indiqué n'est pas fondé sur l'observation~; or nous en avons déjà montré la futilité. Le mouvement hélicoïdal qu'a mentionné Aristote ne fait apparaître ni vitesse, ni lenteur, ni arrêt, ni retour dans les astres. Ce n'est qu'une indication de leur trajectoire décrite par deux mouvements~: le mouvement diurne et le mouvement qui leur est propre. Quand le Soleil était à la tête du Capricorne puis se mouvait pendant une demi-année jusqu'à la tête du Cancer, alors il décrivait une trajectoire hélicoïdale autour du centre du monde.
Aristote n'a indiqué que cela, par le mouvement hélicoïdal~; mais qui entend cela pense que l'intention d'Aristote était que, si le Soleil se meut d'un mouvement hélicoïdal, alors l'irrégularité du mouvement du Soleil se produit, ainsi que celle des autres astres, or c'est impossible. Tout mouvement uniforme par rapport à un point est irrégulier par rapport à un autre, et inversement. De plus, tout orbe dont le mouvement est uniforme autour d'un point autre que son centre a un mouvement composé. L'existence de petits orbes comme les orbes des épicycles, sans rapport au centre du monde, n'est pas interdite sauf dans la neuvième sphère. En effet, c'est comme le fait qu'il existe dans chaque orbe un astre, et dans le huitième orbe de nombreux astres sphériques, chacun plus grand que les épicycles de certains des astres tandis que cet astre lui-même est distinct du corps de l'orbe. Donc il n'est pas interdit qu'il existe des orbes d'épicycles et de choses semblables, et à partir de là on comprend qu'il y a dans les orbes une certaine composition~; celui qui est absolument simple, c'est le neuvième orbe, et l'on ne peut imaginer d'astre sur cet orbe ni rien d'autre semblable. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BABDAM@\RL{ibn afla.h}, Ibn Afla\d{h}} \index{AYAQAVBJ@\RL{al-mwyd al-`r.dI}, al-Mu'ayyad al-`Ur\d{d}{\=\i}} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{ACBHBD@\RL{'awl}!ACBDAI@\RL{'AlaT j 'AlAt}, instrument} \index{ACAQAV@\RL{'ar.d}!ACAQAV@\RL{'ar.d}, la Terre} \index{BABDBC@\RL{flk}!BABDBC AHAQBHAL@\RL{flk al-burUj}, orbe de l'écliptique} \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \index{BBBJAS@\RL{qys}!BEBBBJAGAS@\RL{miqyAs j maqAyIs}, gnomon} \index{ATAYAY@\RL{^s``}!ATAYAGAY@\RL{^su`A`}, rayon (de lumière)} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{BGBFAO@\RL{hnd}!BGBFAO@\RL{al-hnd}, les Indiens} \includepdf[pages=11,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Les \emph{man\u{s}\=ura} qu'indique Ptolémée dans son livre intitulé \emph{Hypothèses planétaires} sont futiles. Ce qu'ont indiqué Ibn Afla\d{h} et al-Mu'ayyad al-`Ur\d{d}{\=\i} quant au fait que Vénus est au-dessus du Soleil, et ce qu'ils en ont conclu, c'est impossible. Le rapprochement de la circonférence de l'écliptique et de la circonférence de l'équateur, et leur éloignement, c'est impossible. La variation de l'inclinaison maximale n'est pas due à une différence de temps ni de lieu. Ce qu'on y trouve de variable est dû aux instruments, à ce qui diffère dans leur érection, c'est dû à l'irrégularité de l'ombre des extrémités des gnomons, c'est dû à une variation de la lumière du Soleil dans le corps de la cible, c'est dû au fait que le centre du volume de la Terre n'est pas fixe par rapport au centre du monde, comme nous l'avons exposé dans mon livre intitulé \textit{Commentaire des observations}. D'après les Indiens et d'autres, on indique que l'inclinaison totale du Soleil est vingt-quatre degrés. C'est futile et il n'y a aucune vérité là-dedans. C'est comme si cette doctrine indiquait le corps qui contient l'inclinaison du Soleil des deux côtés~; son extension en latitude est de quarante-huit degrés dont la moitié est vingt-quatre degrés~; ceci est à peu près la latitude maximale du Soleil mais \emph{son centre} ne dépasse pas $23;31$. Si l'on prend cette doctrine de ce point de vue, alors c'est vrai et sans défaut.
\newpage\phantomsection \index{BGBFAOASAI@\RL{hndsaT}, géométrie} \index{ANAWAW@\RL{_h.t.t}!ANAWAW@\RL{_ha.t.t}, ligne} \index{ANAWAW@\RL{_h.t.t}!ANAWAW BEASAJBBBJBE@\RL{_ha.t.t mustaqIm}, droite} \index{ASAWAM@\RL{s.t.h}!ASAWAM@\RL{s.t.h}, surface} \index{ASAWAM@\RL{s.t.h}!ASAWAM BEASAJBH@\RL{\vocalize s.t.h mstwiN}, plan} \index{ALASBE@\RL{jsm}!ALASBE@\RL{jsm}, corps, solide} \phantomsection \addcontentsline{toc}{chapter}{I.1 Fondements posés comme axiomes en géométrie} \label{var10} \includepdf[pages=12,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre un \large Fondements posés comme axiomes en géométrie \end{center} Le point, la ligne, l'angle, la surface, le cercle et la sphère sont des êtres de figures connues. Chaque point, droite, plan ou angle s'ajuste sur son semblable. L'intersection de deux lignes quelconques est un point, celle de deux surfaces est une ligne, et celle de deux solides est une surface. Nous pouvons supposer une droite donnée sur n'importe quel plan ou bien passant par un point arbitraire. Nous pouvons fixer un point sur n'importe quelle droite ou plan. Nous pouvons reporter une droite finie sur n'importe quel plan ou solide. Nous pouvons tracer un cercle en un point quelconque avec un rayon quelconque. Les angles droits sont égaux. Lorsqu'une droite tombe sur une droite, cela produit deux angles droits, ou bien un angle aigu et un angle obtus~; la somme des deux angles est égale à deux droits. La somme des angles de tout triangle est égale à deux droits. \newpage\phantomsection \index{ASBEBH@\RL{smw}!ASBEAGAB@\RL{al-samA'}, les cieux} \index{AKBBBD@\RL{_tql}!BEAQBCAR AKBBBD@\RL{markaz al-_tql}, centre de gravité} \phantomsection \addcontentsline{toc}{chapter}{I.2 Fondements déjà établis} \includepdf[pages=13,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre deux \large Fondements déjà établis \end{center} La surface de la Terre et de l'eau est approximativement ronde. La rotondité des cieux est comme celle de la sphère. La Terre est aux cieux comme le centre de la sphère est à son bord. La grandeur de la Terre est sensible comparée à ce qui est au-dessous de l'orbe de Mars\footnote{Donc la parallaxe sera négligeable dans l'observation des planètes supérieures (Mars, Jupiter, Saturne).}. Le centre de gravité de la Terre coïncide avec le centre du monde (et non le centre de son volume) où elle réside sans mouvement d'aucun côté. Le haut est ce qui est vers les cieux et s'éloigne du centre~; le bas est ce qui est vers le centre. Tous les graves tendent vers le centre, et ce qui est léger tend vers le bord.
\newpage\phantomsection \index{BABDBC@\RL{flk}!BABDBC BEAYAOAOBD BFBGAGAQ@\RL{flk m`ddl al-nhAr}, orbe de l'équateur} \index{AYAOBD@\RL{`dl}!BEAYAOAOBD BFBGAGAQ@\RL{m`ddl al-nhAr}, équateur} \index{BABDBC@\RL{flk}!BABDBC AHAQBHAL@\RL{flk al-burUj}, orbe de l'écliptique} \index{BABDBC@\RL{flk}!BABDBC BEBCBHBCAH@\RL{flk mkwkb}, orbe des étoiles fixes (\emph{syn.} orbe de l'écliptique)} \index{AHAQAL@\RL{brj}!AHAQBHAL BEBCBHBCAHAI@\RL{brUj mkwkbaT}, constellations du zodiaque} \index{ARBEBF@\RL{zmn}!ARBEAGBF@\RL{'azmAn mu`addl al-nahAr}!temps équatoriaux} \index{AWBDAS@\RL{.tls}!AWBDAS@\RL{a.tlas}, Atlas |see{\RL{flk m`ddl al-nhAr}}} \index{BCBCAH@\RL{kkb}!BCBHBCAH AJBGAGAHAJ@\RL{kwkb thAbit}, étoile fixe} \index{ALAR@\RL{jz}!ALARAB@\RL{juz' j 'ajzA'}, partie, segment, portion, part} \index{AQAJAH@\RL{rtb}!AJAQAJBJAH@\RL{tartIb}, ordre, disposition} \phantomsection \addcontentsline{toc}{chapter}{I.3 Disposition des neuf orbes} \label{var11} \includepdf[pages=14,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre trois \large Disposition des neuf orbes \end{center} Le premier est celui qui entoure les huit. Sa forme est sphérique et son centre est le centre du monde. Il a deux pôles sur lesquels il tourne autour de son centre, centre du monde et centre du tout, d'un mouvement uniforme qui fait, en un jour et une nuit, une révolution plus la coascension du Soleil en un jour et une nuit\footnote{``Un jour et une nuit'' désigne un jour solaire. Un jour solaire est en effet un peu plus long qu'un jour tropique.}. De ce mouvement, l'ensemble des orbes se meut d'Est en Ouest autour de son centre et sur ses pôles. La ceinture de ce mouvement, lieu équidistant des pôles de cet orbe, est appelée \emph{équateur}~; le pôle situé à gauche quand on est tourné vers l'Est est appelé pôle Nord de l'équateur, et son homologue, pôle Sud. Dans le plan de l'équateur on suppose qu'il y a un second cercle dont le bord est divisé selon les parts et leurs fractions, au nombre de trois cent soixante parts. Ces divisions s'appellent temps équatoriaux ou encore parts de l'équateur~; à cause de l'immobilité de ce cercle, c'est par lui qu'on mesure les mouvements. Cet orbe, le premier, est celui qui est absolument simple~; il définit les directions et on l'appelle l'\emph{Atlas}. Le deuxième orbe est l'orbe des astres appelés étoiles fixes. Sa forme est sphérique, sa partie convexe touche la partie concave du neuvième et sa partie concave touche la partie convexe du septième qui entoure les orbes de Saturne. Il a deux pôles fixes sur lesquels il tourne autour du centre du monde d'un mouvement simple et uniforme d'Ouest en Est, comme on l'a vérifié par la démonstration et par l'observation. Dans le plan de la ceinture de ce mouvement (lieu équidistant des pôles de cet orbe), sont dessinées les douze divisions du zodiaque, leurs degrés et leurs fractions, en commençant par la partie qui longe la figure du Bélier dans l'orbe des étoiles fixes dans la première des désignations\footnote{Peut-être {Ibn al-\v{S}\=a\d{t}ir} fait-il allusion ici à une ``première nomenclature''. Il expliquera plus loin qu'une première nomenclature consiste à nommer les divisions du zodiaque selon les constellations (les ``figures'' formées par les étoiles, donc).
La seconde nomenclature consiste à les nommer en partant du point vernal : la première division s'appellerait alors ``Bélier'' par pure convention, et elle ne longerait pas toujours la constellation du Bélier, à cause du mouvement de précession.}, comme on sait. Le plan de ce lieu est incliné de $23;31$ degrés par rapport au plan de l'orbe de l'équateur (c'est là l'inclinaison maximale). \newpage\phantomsection \index{BBBDAH@\RL{qlb}!BEBFBBBDAH AUBJBABJBJ@\RL{mnqlb .sayfiyy}, solstice d'été} \index{BBBDAH@\RL{qlb}!BEBFBBBDAH ATAJBHBJ@\RL{mnqlb ^satawI}, solstice d'hiver} \index{AYAOBD@\RL{`dl}!AYAJAOAGBD AQAHBJAYBJBJ@\RL{i`tidAl rabI`iyy}, équinoxe de printemps} \index{AYAOBD@\RL{`dl}!AYAJAOAGBD ANAQBJBABJBJ@\RL{i`tidAl _hrIfiyy}, équinoxe d'automne} \index{BCBCAH@\RL{kkb}!BCBHBCAH AJBGAGAHAJ@\RL{kwkb thAbit}, étoile fixe} \index{AHAQAL@\RL{brj}!AHAQAL@\RL{brj}, signe (du zodiaque) |see{\RL{falak al-burUj}}} \index{BABDBC@\RL{flk}!BABDBC AHAQBHAL@\RL{flk al-burUj}, orbe de l'écliptique} \index{AHAQAL@\RL{brj}!AJBHAGBDBJ AHAQBHAL@\RL{`al_A tawAliy al-burUj}, dans le sens des signes} \index{AHAQAL@\RL{brj}!ANBDAGBA AJBHAGBDBJ AHAQBHAL@\RL{il_A _hilAf tawAliy al-burUj}, en sens contraire de celui des signes} \index{AOAQAL@\RL{drj}!AOAQALAI@\RL{drjaT j darjAt, drj}, degré} \label{var28} \includepdf[pages=15,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Considérons le grand cercle qui touche les deux pôles de l'équateur et les deux pôles de l'écliptique~; il coupe la ceinture de l'écliptique en deux points. L'intersection proche du pôle Nord s'appelle \emph{solstice d'été}, et la seconde, \emph{solstice d'hiver}. Le plan de l'écliptique coupe l'équateur en deux points~: le point qui précède le solstice d'été s'appelle l'\emph{équinoxe de printemps}, et celui qui précède le solstice d'hiver, l'\emph{équinoxe d'automne}. Ces quatre points sont immobiles ; ils ne se meuvent pas par rapport au lieu qu'ils longent dans l'équateur. On découpe en trois portions égales chacun des arcs délimités par ces points~: nous imaginons six grands cercles qui passent par les pôles de l'écliptique, et la ceinture est coupée par eux en douze portions égales, de même que toute la figure du huitième orbe. On appelle chacune de ces portions un \emph{signe du zodiaque}, et chaque signe est divisé en trente portions égales qu'on appelle degrés de l'écliptique. Il y a très longtemps, la constellation du Bélier (dans le huitième orbe, l'orbe des étoiles fixes) longeait la portion dont le commencement est l'équinoxe de printemps. Cette portion s'appelle le signe du Bélier. La deuxième portion s'appelle selon la constellation qui la longeait, c'est-à-dire le Taureau, puis les Gémeaux, puis le Cancer, \textit{etc.}, comme vous savez. Mais à la fin de l'an huit cent vingt de l'hégire la première portion longera la constellation des Poissons de l'orbe des étoiles fixes, donc les constellations se déplaceront d'un signe dans le sens des signes\footnote{Le mot <<~signe~>> désigne désormais un arc de trente degrés de l'écliptique, et la locution <<~dans le sens des signes~>> indique l'ordre (conventionnel) des signes du zodiaque, c'est-à-dire d'Ouest en Est.}. Les portions immobiles longeront d'autres constellations que celles qu'elles longeaient autrefois. On pourra alors appeler Poissons la première portion, Bélier la deuxième, Taureau la troisième, \textit{etc.} Ou bien on gardera leurs noms, mais on ne considèrera plus les constellations qu'ils longent dans le zodiaque étoilé. 
On a observé que ce mouvement est d'un degré en soixante-dix années persanes. Chez Ptolémée il est d'un degré par siècle, chez les modernes il est d'un degré et demi par siècle, et chez certains d'entre eux il est d'un degré tous les soixante-dix ans, en années persanes. \newpage\phantomsection \index{BCBCAH@\RL{kkb}!BCBHBCAH AJBGAGAHAJ@\RL{kwkb thAbit}, étoile fixe} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BJBHBEBJBJAI@\RL{.harakaT yawumiyyaT}, mouvement diurne} \index{BABDBC@\RL{flk}!BABDBC BFAGAQ@\RL{falak al-nAr}, orbe du feu} \index{BABDBC@\RL{flk}!BABDBC BGBHAG@\RL{flk al-hwA}, orbe de l'Air} \index{BABDBC@\RL{flk}!BABDBC BEAGAB@\RL{flk al-mA'}, orbe de l'Eau} \index{BABDBC@\RL{flk}!BABDBC AJAQAGAH@\RL{flk al-trAb}, orbe de la Terre} \index{ACAQAV@\RL{'ar.d}!BCAQAI ACAQAV@\RL{kuraT al-'ar.d}, globe terrestre} \includepdf[pages=16,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Ceci étant dit, sachez que l'orbe inférieur (le premier) appartient à la Lune, le deuxième à Mercure, le troisième à Vénus, le quatrième au Soleil, le cinquième à Mars, le sixième à Jupiter, le septième à Saturne, et dans le huitième sont enchâssées les étoiles fixes, appelées ainsi car leurs positions les unes par rapport aux autres sont fixes. Le neuvième, mentionné ci-dessus, se meut sur ses pôles et autour de son centre d'Est en Ouest. C'est le mouvement diurne, et les orbes des sept astres sont mus par ce mouvement parallèlement à l'équateur. Sous l'orbe de la Lune, il y a l'orbe du Feu, puis l'Air, puis l'Eau, puis la Terre, c'est-à-dire le globe terrestre. Nous avons déjà dénoncé la faiblesse du raisonnement selon lequel Vénus est au-dessus du Soleil. Rares sont les savants qui ont vu cela. Nous avons expliqué cela ailleurs dans nos livres. Dieu est le plus savant. \newpage\phantomsection \index{ASBHBI@\RL{sw_A}!ANAWAW ASAJBHAGAB@\RL{_ha.t.t al-istiwA'}, équateur terrestre} \index{AYAOBD@\RL{`dl}!BEAYAOAOBD BFBGAGAQ@\RL{m`ddl al-nhAr}, équateur} \index{AYAQAV@\RL{`r.d}!AYAQAV AHBDAO@\RL{`r.d al-bld}, latitude du pays, \emph{i. e.} par rapport à l'équateur} \index{BFAUBA@\RL{n.sf}!AOAGAEAQAI BFAUBA BFBGAGAQ@\RL{dA'iraT n.sf al-nhAr}, méridien} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI BFAUBA BFBGAGAQ@\RL{dA'iraT n.sf al-nhAr}|see{\RL{n.sf}}} \index{AWBHBD@\RL{.twl}!AWBHBD AHBDAO@\RL{.tUl al-bld}, longitude du pays (par rapport à l'équateur)} \addcontentsline{toc}{chapter}{I.4 Cercles remarquables et latitudes des pays} \includepdf[pages=17,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre quatre \large Cercles remarquables et latitudes des pays \end{center} Le plus remarquable des grands cercles est le cercle de l'\emph{équateur}. C'est celui qui est équidistant des pôles de l'équateur, qui est ceinture du mouvement diurne et cercle des équinoxes. Qu'on conçoive son plan coupant le globe terrestre, la ligne de coupe s'appelle équateur en raison de l'égalité du jour et de la nuit pour tous les habitants qui s'y trouvent, parce que les cieux y tournent en roue avec les pôles de l'équinoxe à l'horizon. Puisque la Terre est sphérique et stable au centre du monde, pour qui avance sur Terre de l'équateur en allant vers le Nord, le pôle Nord s'élève au-dessus de l'horizon~; et pour qui avance vers le Sud, le pôle Sud s'élève au-dessus de l'horizon. Plus l'un des deux pôles s'élève, plus l'autre s'abaisse. L'élévation du pôle s'appelle \emph{latitude du pays}.
C'est un arc du grand cercle\footnote{Le cercle méridien décrit dans le paragraphe suivant.} passant par le zénith et par les pôles de l'équateur, cet arc est égal à l'abaissement de l'autre pôle [sous l'horizon], et il est aussi égal à la distance entre l'équateur et le zénith dans cette région. Le \emph{cercle méridien} ou longitude du pays coupe toutes les trajectoires visibles en deux moitiés. Il passe par le pôle visible et par le zénith. Il change quand on se déplace vers l'Est ou vers l'Ouest mais ne change pas quand on suit le Nord ou le Sud. Pour chaque pays, soit son méridien ; nous imaginons qu'il coupe la sphère terrestre ou qu'il y est dessiné. Ce qui est compris entre le méridien passant par l'extrémité Ouest du monde habité et le méridien d'un lieu supposé, c'est la \emph{longitude du pays}. Son maximum est cent quatre-vingt degrés. La latitude maximale est quatre-vingt-dix degrés, et elle est atteinte là où l'un des pôles est au zénith. Les cieux tournent alors comme une meule; aux autres latitudes, ils tournent de manière oblique. \newpage\phantomsection \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{AHAQAL@\RL{brj}!AHAQAL@\RL{brj}, signe (du zodiaque) |see{\RL{falak al-burUj}}} \index{BABDBC@\RL{flk}!BABDBC AHAQBHAL@\RL{flk al-burUj}, orbe de l'écliptique} \index{BEBJBD@\RL{myl}!AOAGAEAQAI BEBJBD@\RL{dA'iraT al-mIl}, cercle de déclinaison} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI BEBJBD@\RL{dA'iraT al-mIl}|see{\RL{mIl}}} \index{BEBJBD@\RL{myl}!BEBJBD ACBHBHBD@\RL{mIl 'awwal}, première déclinaison, \emph{i. e.} par rapport à l'équateur} \index{AWBDAY@\RL{.tl`}!BEAWAGBDAY@\RL{m.tAl`}, coascension} \index{BEAQAQ@\RL{mrr}!BEBEAQAQ@\RL{mamarr j At}, passage, transit (en général au méridien)} \index{BEAQAQ@\RL{mrr}!AOAQALAI BEBEAQAQ@\RL{darjaT mamarr}, degré de transit} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI AYAQAV@\RL{dA'iraT al-`r.d}, cercle de latitude} \index{BEBJBD@\RL{myl}!BEBJBD AKAGBFBJ@\RL{mIl _tAniy}, seconde déclinaison, \emph{i. e.} par rapport à l'écliptique} \index{AYAQAV@\RL{`r.d}!AYAQAV BEAYAOAOBD@\RL{`r.d m`ddl}, latitude rapportée à l'équateur (mais mesurée sur un cercle de latitude)} \includepdf[pages=18,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Les régions habitées sur Terre vont : [en latitude], de l'équateur jusque là où le pôle Nord atteint une hauteur égale au complément de l'inclinaison [de l'écliptique] $66;29$, et en longitude, d'une longitude d'un degré jusqu'aux cent quatre-vingt degrés. Ceci étant admis, parmi les cercles remarquables je dis qu'il y a aussi la \emph{ceinture de l'écliptique}. C'est la ceinture du huitième orbe et on l'appelle aussi <<~voie du Soleil~>> car le Soleil la suit toujours. Nous imaginons que ce cercle découpe tous les orbes et dessine dans chacun un grand cercle appelé \emph{orbe parécliptique}. La moitié de la ceinture de l'écliptique qui commence par l'équinoxe de printemps est au Nord de l'équateur, et l'autre moitié est au Sud. Chacune des deux moitiés est divisée en six portions appelées \emph{signes}, comme précédemment. Parmi les grands cercles, il y a aussi le \emph{cercle de déclinaison}. C'est le grand cercle qui passe par les pôles de l'équateur céleste et par n'importe quel point ou par un astre donné~; il coupe la ceinture de l'équateur céleste. 
Le plus petit des deux arcs [du cercle de déclinaison] situés entre ce point et l'équateur est la déclinaison de ce point appelée \emph{première déclinaison}. Si ce cercle est celui qui passe par la tête du Cancer et du Capricorne, il passe alors aussi par les pôles de l'orbe de l'écliptique et la déclinaison [d'un astre sur l'écliptique] est ici maximale. Soit un cercle de déclinaison passant par un astre, alors l'arc compris entre l'astre et l'équateur céleste est la distance de l'astre à l'équateur céleste. Le point où il rencontre l'équateur est la \emph{coascension} du degré de transit de cet astre au méridien~; et le second point, où il rencontre la ceinture de l'écliptique, est le \emph{degré de transit} de cet astre. Parmi les grands cercles, il y a aussi le \emph{cercle de latitude}. C'est le grand cercle imaginaire passant par les pôles de l'écliptique et par un point donné de l'écliptique ou par un astre. On le connaît aussi sous le nom de \emph{cercle de seconde déclinaison}. [S'il passe par un point de l'écliptique], l'angle aigu situé sur le cercle [de latitude] entre la ceinture de l'écliptique et l'équateur est la seconde déclinaison de ce point, dite aussi \emph{latitude} de ce point. \newpage\phantomsection \index{AYAQAV@\RL{`r.d}!AMAUAUAI AYAQAV@\RL{.hi.s.saT `r.d}, argument de latitude} \index{AWBHBD@\RL{.twl}!AWBHBD@\RL{.tUl}, longitude (par rapport à l'écliptique)} \index{AWBHBD@\RL{.twl}!AWBHBD BEAYAOAOBD@\RL{.tUl m`ddl}, longitude rapportée à l'équateur (mais mesurée sur un cercle de latitude)} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{ASBEAJ@\RL{smt}!ASBEAJ AQABAS@\RL{samt al-ra's}, zénith} \index{ASBEAJ@\RL{smt}!ASBEAJ BBAOBE@\RL{samt al-qadam, samt al-rjl}, nadir} \index{BFAUBA@\RL{n.sf}!AOAGAEAQAI BFAUBA BFBGAGAQ@\RL{dA'iraT n.sf al-nhAr}, méridien} \index{AYAQAV@\RL{`r.d}!AYAQAV AHBDAO@\RL{`r.d al-bld}, latitude du pays, \emph{i. e.} par rapport à l'équateur} \index{BFAUBA@\RL{n.sf}!ANAWAW BFAUBA BFBGAGAQ@\RL{_ha.t.t n.sf al-nahAr}, ligne méridienne} \index{ANAWAW@\RL{_h.t.t}!ANAWAW BFAUBA BFBGAGAQ@\RL{_ha.t.t n.sf al-nahAr}|see{\RL{n.sf}}} \includepdf[pages=19,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent S'il passe par un astre, l'arc de ce cercle compris entre l'astre et la ceinture de l'écliptique du côté le plus proche est la \emph{latitude} de cet astre, et le petit arc de ce cercle compris entre l'astre et l'équateur est la latitude de l'astre rapportée à l'équateur et on l'appelle aussi \emph{argument de latitude} de l'astre.\footnote{Dans les modèles planétaires, l'\emph{argument de latitude} désigne plutôt l'arc de l'écliptique entre le n{\oe}ud et la longitude moyenne de l'astre. Dans tout ce paragraphe l'auteur semble viser une synthèse rationnelle des usages du mot \textit{`ar\d{d}} -- latitude. Pour un astre de latitude nulle, au sens où cet astre serait sur l'écliptique, l'usage devait prescrire à ce terme un autre sens que celui auquel nous sommes habitués aujourd'hui.} Si le cercle de latitude passe par un astre donné et qu'il coupe la ceinture de l'écliptique, il la coupe en le degré de \emph{longitude} de l'astre. L'origine des longitudes de l'astre est l'équinoxe de printemps, à la tête du Bélier, par convention. Si le cercle de latitude [passant par un point donné] coupe l'équateur, l'intersection est la longitude de l'astre rapportée à l'équateur. 
Si le point donné est un des solstices, le cercle de latitude [passant par ce point] est confondu avec le cercle de première déclinaison~: c'est le cercle passant par les pôles de l'écliptique et par les pôles de l'équateur, les première et seconde déclinaisons sont réunies, et chacune est à son maximum $23;31$. Parmi les cercles remarquables, il y a le \emph{cercle de l'horizon}. C'est celui qui sépare la partie visible de la partie invisible des cieux. Un de ses pôles est le \emph{zénith}, l'autre le \emph{nadir}. Sur ce cercle ont lieu les levers et les couchers. [Ce cercle] est décrit par la droite issue de l'observateur, tangente à la sphère terrestre et allant jusqu'au neuvième orbe~; or la partie visible est plus grande que la partie cachée. Il a tort celui qui dit que la partie visible est plus grande que la partie cachée d'une grandeur de quatre minutes et vingt-six secondes. J'ai trouvé par l'observation que les parts visibles du Soleil et des astres sont plus grandes que ce qu'on a calculé, d'une grandeur variable qui dépasse deux tiers de degré et qui est composée de combien est descendu l'observateur et de combien a baissé le centre du volume du globe terrestre par rapport au centre du monde. La raison en est que c'est le centre de gravité de la Terre qui est confondu avec le centre du monde, et non le centre de son volume. La différence entre les centres est nécessaire, car [la Terre] est constituée d'eau et de terre, et le centre de gravité est du côté de la terre~; ceci a été établi par examens successifs. J'ai déjà consacré un écrit à ce sujet. Parmi les grands cercles, il y a le \emph{cercle méridien} dont on a déjà décrit certaines propriétés. C'est le grand cercle imaginaire passant par les pôles de l'horizon et par les pôles de l'équateur. Si le pôle de l'horizon est pôle de l'équateur, [le méridien] est [un] cercle perpendiculaire à l'horizon qui coupe les trajectoires diurnes visibles en deux moitiés. \newpage\phantomsection \index{AQBAAY@\RL{rf`}!AQAJBAAGAY@\RL{irtifA`}, hauteur (coordonnées azimutales)} \index{AMAWAW@\RL{.h.t.t}!BFAMAWAGAW@\RL{in.hi.tA.t}, abaissement|see{\RL{irtifA`}}} \index{ASBEAJ@\RL{smt}!AOAGAEAQAI ACBHBHBD ASBEBHAJ@\RL{dA'iraT 'awwal al-sumUt}, cercle origine des azimuts} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI ACBHBHBD ASBEBHAJ@\RL{dA'iraT 'awwal al-sumUt}|see{\RL{smt}}} \index{ASBEAJ@\RL{smt}!ASBEAJ@\RL{smt}, direction, azimut} \index{AYAQAV@\RL{`r.d}!AYAQAV BBBDBJBE AQBHBJAI@\RL{`r.d aqlIm al-rwyaT}, latitude de ce qui est visible sous un climat, \textit{i. e.} latitude du pôle de l'horizon par rapport à l'écliptique} \index{AQBAAY@\RL{rf`}!AQAJBAAGAY@\RL{irtifA`}, hauteur (coordonnées azimutales)} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI AQAJBAAGAY@\RL{dA'irat al-irtifA'}, cercle de hauteur} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI ADBABB AMAGAOAK@\RL{dA'iraT al-'ufuq al-.hAdi_t}, cercle de l'horizon occurrent} \index{ATAQBB@\RL{^srq}!ANAWAW BEATAQBB BEAZAQAH@\RL{_ha.t.t al-ma^sraq wa-al-ma.grab}, ligne de l'Est et de l'Ouest} \label{var4} \includepdf[pages=20,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Il coupe la ceinture de l'écliptique au-dessus de la Terre en un point qu'on appelle le dixième, et au-dessous de la Terre en un point qu'on appelle le quatrième ou bien le piquet. Quand l'astre arrive sur ce cercle, il est au terme de son ascension si c'est un astre pas toujours visible~; si c'est un astre toujours visible, alors il a sur ce cercle deux hauteurs, et la moitié de leur somme est la latitude du lieu. 
L'arc situé sur le méridien entre les pôles de l'équateur et l'horizon vrai est la latitude du pays. Il coupe la Terre en une ligne qui s'appelle \emph{ligne méridienne}. Parmi les grands cercles, il y a le cercle de l'Est et de l'Ouest. C'est le grand cercle imaginaire passant par le zénith et le nadir et par les pôles du méridien, donc il est perpendiculaire à l'horizon, et on l'appelle \emph{cercle origine des azimuts}. Si un astre ou un point donné y passe, il n'aura pas d'azimut lors du passage. Si on l'imagine coupant le globe terrestre, [il y dessine] la ligne de l'Est et de l'Ouest qui est perpendiculaire à la ligne méridienne. La ligne de l'Est et de l'Ouest ne change pas quand on va vers des lieux plus à l'Est ou plus à l'Ouest\footnote{Est-ce pour autant qu'Ibn al-\v{S}\=a\d{t}ir ne connaît pas la loxodromie~? Il était cependant connu que, pour un pays à même latitude que la Mecque et à l'Ouest, la direction de la \textit{qibla} n'est pas le point Est, et qu'elle est ``à gauche'' du point Est (\textit{cf. infra} p.~\pageref{loxoqibla}).}, mais elle change quand on va vers le Nord ou vers le Sud. Parmi les grands cercles, il y a le \emph{cercle de latitude de ce qui est visible sous un climat}. C'est le grand cercle imaginaire passant par les pôles de l'écliptique et de l'horizon, donc il est perpendiculaire à l'horizon, et il coupe en deux moitiés la partie visible et la partie cachée de l'écliptique. L'arc de ce cercle compris entre un des pôles de l'écliptique et l'horizon s'appelle \emph{latitude de ce qui est visible sous le climat}. Parmi les grands cercles, il y a le \emph{cercle de hauteur}. C'est un grand cercle imaginaire qui passe par les pôles de l'horizon et par le point donné, donc il est perpendiculaire à l'horizon et le coupe en deux points dont la distance à la ligne de l'Est et de l'Ouest est l'azimut de [ce cercle] de hauteur. L'arc de ce cercle compris entre l'horizon et le point donné est la \emph{hauteur}, et la distance du point au zénith sur ce cercle est le complément de la hauteur. Parmi les grands cercles, il a le \uwave{\emph{cercle de l'horizon occurrent}}. C'est un grand cercle qui passe par les deux points Nord et Sud et par le point donné ou par un astre donné. La \emph{hauteur de l'horizon occurrent} est un arc du cercle origine des azimuts, compris entre l'horizon du lieu et l'horizon occurrent. \newpage\phantomsection \index{AOBHAQ@\RL{dwr}!BEAOAGAQ@\RL{mdAr}, trajectoire, trajectoire diurne (\textit{i. e.} cercle parallèle à l'équateur)} \index{BFBGAQ@\RL{nhr}!BBBHAS BFBGAGAQ@\RL{qws al-nhAr}, arc diurne, durée du jour} \index{AOBHAQ@\RL{dwr}!BEAOAGAQAGAJ AYAQBHAV@\RL{mdArAt al-`rU.d}, trajectoires selon les latitudes (\textit{i. e.} cercles parallèles à l'écliptique)} \index{BBBFAWAQ@\RL{qn.tr}!BEBBBFAWAQAGAJ@\RL{muqan.tarAt}, muqantars, cercles parallèles à l'horizon} \includepdf[pages=21,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent La \emph{latitude de l'horizon occurrent} est un arc d'un grand cercle passant par les pôles de l'équateur et perpendiculaire à l'horizon occurrent. Parmi les cercles remarquables, il y a les \emph{trajectoires [diurnes]}. Ce sont des cercles parallèles à l'équateur et passant par le point donné. Leur route est réglée [par rapport à] la déclinaison et au pôle. Les plus grands d'entre eux sont proches de la ceinture de la sphère. 
Dans un pays où la latitude est telle que la partie visible des trajectoires du Nord est plus grande que la moitié, et que celle des trajectoires du Sud est plus petite que la moitié, l'\emph{arc diurne} de l'astre est proportionné à la partie visible de sa trajectoire~; si la distance de la trajectoire au pôle est inférieure ou égale à la latitude du lieu, alors l'astre qui la suit est toujours visible s'il est du côté du pôle visible, et toujours caché sinon. Parmi les cercles remarquables, il y a les \emph{trajectoires selon les latitudes}. Ce sont des cercles parallèles à la ceinture de l'écliptique et passant par n'importe quel astre. Chacune des étoiles fixes tourne autour des pôles de l'écliptique vers l'Est, et elle est attachée à l'un de ces cercles sans jamais en dévier. Parmi les cercles remarquables, il y a les \emph{muqantars}. Ce sont des cercles parallèles à l'horizon~: ceux qui sont au-dessus de l'horizon s'appellent muqantars de hauteur, et ceux qui sont en-dessous, muqantars d'abaissement. Les cercles de hauteur leur sont perpendiculaires. Ils coupent tous les cercles de hauteur au même niveau. Leurs pôles sont le zénith et le nadir. Leur forme varie si on les projette, et elle varie en fonction de la surface sur laquelle on les projette et du pôle. \newpage\phantomsection \index{ALBHAR@\RL{jwz}!ALBHARAGAB@\RL{al-jawzA'}, les Gémeaux} \index{BABDBC@\RL{flk}!BABDBC AKAGBEBF@\RL{flk _tAmin}, huitième orbe (étoiles fixes)} \index{AKAHAJ@\RL{_tbt}!BCBHAGBCAH AKAGAHAJAI@\RL{al-kwAkb al-_tAbitaT}, les étoiles fixes} \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{ASAMAH@\RL{s.hb}!ASAMAH@\RL{s.hAb j s.hb}, nuage, nébuleuse ??} \index{AXBDBE@\RL{.zlm}!BCBHBCAH BEAXBDBE@\RL{kwkb m.zlm}, étoile obsure} \index{AUBHAQ@\RL{.swr}!AUBHAQ@\RL{.sUraT j .suwar}, figure, constellation} \index{AMBEBD@\RL{.hml}!AMBEBD@\RL{.hml}, Bélier (zodiaque)} \index{AMBHAJ@\RL{.hwt}!AMBHAJ@\RL{.hwt}, Poissons (zodiaque)} \index{AKBHAQ@\RL{_twr}!AKBHAQ@\RL{_twr}, Taureau (zodiaque)} \index{ASAQAW@\RL{sr.t}!ASAQAWAGBF@\RL{sr.tAn}, Cancer (zodiaque)} \addcontentsline{toc}{chapter}{I.5 Le mouvement des étoiles fixes, et une notice sur leurs changements de position causés par les déplacements vers l'Est ou vers l'Ouest} \includepdf[pages=22,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre cinq \large Le mouvement des étoiles fixes, et une notice sur leurs changements de position causés par les déplacements vers l'Est ou vers l'Ouest \end{center} Les astres appelés \emph{étoiles fixes} constellent la surface du \emph{huitième orbe}. Elles ont diverses magnitudes. Il est impossible de les dénombrer, sauf à Dieu, bien que d'une époque reculée jusqu'à Hipparque ils en ont observé mille vingt-deux. Ils ont déterminé leurs longitudes et leurs latitudes à une date donnée. Ils ont classé leurs magnitudes selon six degrés~: le premier degré est la magnitude la plus grande, et le sixième, la plus petite. Chaque degré se différencie en trois genres~: majeur, moyen, mineur. Parmi les mille vingt-deux étoiles, ils en ont trouvé quinze de première magnitude, quarante-cinq de deuxième magnitude, deux cent huit de troisième magnitude, quatre cent soixante-quatorze de quatrième magnitude, deux cent dix-sept de cinquième magnitude, quarante-neuf de sixième magnitude, et quatorze en dehors de ces degrés de magnitude. 
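On notera que ces effectifs par grandeur redonnent exactement le total de mille vingt-deux étoiles annoncé~:
\[
15 + 45 + 208 + 474 + 217 + 49 + 14 = 1022.
\]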
Parmi ces quatorze, il y a neuf étoiles cachées que l'on appelle \emph{obscures} et cinq \emph{étoiles nuageuses} comme si elles étaient des taches ou des morceaux de nuage. Comme il est difficile de marquer par des noms cette multitude d'étoiles mais que leurs formes sont fixes, ils ont imaginé, pour les reconnaître, des figures sur lesquelles ou à côté desquelles sont situées les étoiles. On dit ainsi <<~le c{\oe}ur du scorpion~>>, <<~la corne du taureau~>>, <<~la queue de l'ourse~>> \textit{etc.} pour reconnaître telle étoile par cette nomination et pour ne pas la confondre avec une autre. Ils ont donc imaginé quarante-huit figures~: vingt-et-une au Nord de la ceinture du zodiaque, douze sur la ceinture du zodiaque et quinze au Sud. Les figures du Nord sont~: la Petite Ourse, la Grande Ourse, le Dragon, Céphée, le Bouvier, la Couronne Boréale, Hercule, la Lyre, le Cygne, Cassiopée, Persée, le Cocher, Ophiuchus, le Serpent, la Flèche, l'Aigle, le Dauphin, le Petit Cheval, Pégase, Andromède, le Triangle. Les étoiles observées sur ces figures sont trois cent soixante. Les figures du zodiaque sont~: le Bélier, le Taureau, les Gémeaux, le Cancer, le Lion, la Vierge, la Balance, le Scorpion, le Sagittaire, le Capricorne, le Verseau, les Poissons. Les étoiles observées sur ces figures ou autour d'elles sont trois cent quarante-six. \newpage\phantomsection \index{ALAQAQ@\RL{jrr}!BEALAQAQAI@\RL{majarraT}, Voie Lactée} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BFARBD@\RL{nzl}!BEBFAGARBD BBBEAQ@\RL{manAzil al-qamar}, les maisons lunaires} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AKAGBFBJAI@\RL{.hrkaT _tAnyaT}, deuxième mouvement} \index{AOBHAQ@\RL{dwr}!BEAOAGAQ@\RL{mdAr}, trajectoire, trajectoire diurne (\textit{i. e.} cercle parallèle à l'équateur)} \includepdf[pages=23,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Les figures du Sud sont~: la Baleine, Orion, \'Eridan, le Lièvre, le Grand Chien, le Petit Chien, le Navire Argo, l'Hydre, la Coupe, le Corbeau, le Centaure, le Loup, l'Autel, la Couronne Australe, le Poisson Austral. Les étoiles observées sur ces figures et autour d'elles sont trois cent seize. Quant aux étoiles nuageuses, il y en a une sur le poignet de Persée, une deuxième est la tête d'Orion, une troisième \uwave{...}, une quatrième est celle qui suit \uwave{le dard (le front ?)} du Scorpion, et une cinquième est un {\oe}il du Sagittaire. La \emph{Voie Lactée} est faite d'un très grand nombre de petites étoiles très rapprochées. De par leur petitesse, leur nombre et leur proximité, elles apparaissent comme des taches nuageuses dont la couleur rappelle le lait, et c'est pour cela que Ptolémée l'a nommée Voie Lactée. Enfin, il y a vingt-huit \emph{maisons lunaires} dont les étoiles sont bien identifiées et nommées. \begin{center} \large Section \end{center} Le mouvement de ces étoiles est exclusivement en longitude sans mouvement en latitude. C'est un mouvement d'Ouest en Est autour des pôles du zodiaque, qu'on appelle \emph{deuxième mouvement}. Sa grandeur\footnote{c'est-à-dire sa vitesse} est~: selon Ptolémée, un degré en cent ans~; selon des modernes, un degré et demi en cent ans~; et selon certains d'entre eux, un degré en soixante-dix ans. Ayant vérifié ceci par l'observation, j'ai trouvé qu'elles se meuvent d'un seul degré en soixante-dix années persanes. 
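Pour comparer ces estimations, on peut les ramener à un taux annuel (la différence entre année persane et année julienne est ici négligeable)~: un degré par siècle équivaut à $36''$ par an, un degré et demi par siècle à $54''$ par an, et un degré en soixante-dix ans à environ $51''$ par an, cette dernière valeur étant la plus proche de la précession générale admise aujourd'hui (de l'ordre de $50''$ par an).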
J'ai embrassé toutes les observations des anciens et des modernes parvenues jusqu'à nous au sujet de ces étoiles~; dans mon livre \textit{Ta`l{\=\i}q al-ar\d{s}\=ad}, j'ai consigné cela ainsi que tout ce que j'ai observé et établi. Cela est établi, et sache que les étoiles dites <<~fixes~>> ne changent aucunement de trajectoires selon les latitudes, et qu'elles ne changent pas non plus de positions les unes par rapport aux autres, ni par rapport aux pôles du zodiaque. Cependant, à cause du deuxième mouvement, elles changent de positions par rapport à l'équateur. Leurs trajectoires diurnes changent en tant que leurs distances à l'équateur augmente ou diminue. Relativement aux habitants du climat\footnote{\textit{i. e.} relativement à un observateur situé sur Terre en un lieu dont la latitude par rapport à l'équateur serait donnée.}, elles changent de positions~: ce qui était élevé en hauteur s'abaisse -- c'est ainsi lorsque sa trajectoire diurne s'éloigne du zénith -- et inversement. Une étoile peut passer par le zénith alors qu'elle ne le faisait pas avant, et c'est ainsi lorsque sa distance à l'équateur céleste atteint la latitude du pays (du même côté que le pays) alors qu'elle était auparavant supérieure ou bien inférieure. \newpage\phantomsection \label{var21} \includepdf[pages=24,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Inversement, lorsque sa distance à l'équateur céleste devient inférieure à la latitude du pays (resp. supérieure) tandis qu'elle lui était égale et du même côté, alors sa trajectoire diurne passe du côté du pôle caché (resp. visible) en s'éloignant du zénith. Une étoile peut devenir toujours visible (resp. cachée) alors qu'elle ne l'était pas auparavant, et c'est ainsi lorsque le complément de sa distance à l'équateur céleste devient inférieur ou égal à la latitude du pays et qu'elle est du côté du pôle visible (resp. caché) alors qu'il lui était supérieur auparavant. Auparavant elle avait un lever et un coucher. Quand l'égalité est atteinte~: elle touche l'horizon à chaque révolution quand elle passe par le méridien~; elle ne se couche plus si elle est du côté du pôle visible~; et elle ne se lève plus si elle est du coté du pôle caché. Dans ce cas sa distance maximale à l'horizon est égale au double de la latitude du pays. Quand le complément de sa distance devient inférieur, elle ne touche plus [l'horizon]~; sa distance à l'horizon est, au minimum, la différence entre la latitude du pays et le complément de sa distance à l'équateur, et au maximum, la somme de la latitude du pays et de sa distance à l'équateur. Enfin, il peut arriver qu'une étoile ait un lever et un coucher alors qu'elle était toujours visible ou cachée auparavant, et c'est ainsi lorsque le complément de sa distance à l'équateur dépasse la latitude du pays alors qu'elle lui était inférieure ou égale auparavant. Ainsi, les expressions <<~toujours visible~>> ou <<~toujours cachée~>> sont un abus de langage. Dieu est le plus savant. 
\newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI AYAQAVBJAI@\RL{.harakaT `r.diyaT}, mouvement par accident} \index{AMBHBI@\RL{.hw_A}!AMAGBH@\RL{\vocalize .hAwiN (al-.hAwiy)}, contenant} \index{AMBHBI@\RL{.hw_A}!BEAMBHBI@\RL{m.hw_A}, contenu} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{ASBABF@\RL{sfn}!ASBABJBFAI@\RL{safInaT}, navire} \index{ATAHAK@\RL{^sb_t}!AJATAHAHAK@\RL{ta^sabba_ta}, adhérer} \index{BDARBB@\RL{lzq}!ACBDARBB@\RL{'alzaqa}, agglutiner, coller} \index{AMAQBC@\RL{.hrk}!BEAJAMAQAQBC BFBAASBG AZBJAQBG@\RL{mt.hrrk binafsihi / bi.gayrihi}, mû par soi / par un autre} \addcontentsline{toc}{chapter}{I.6 Les mouvements par accident et les autres mouvements} \label{var25} \includepdf[pages=25,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre six \large Les mouvements par accident et les autres mouvements \end{center} Quand le neuvième orbe se meut, les autres orbes se meuvent avec lui d'Est en Ouest sur ses pôles et autour de son centre. Cela s'appelle <<~mouvement du contenant au contenu~>>. Les anciens ont convenu qu'il puisse y avoir mouvement du contenu par le mouvement du contenant, que ce mouvement soit dans le même sens que celui du contenant ou bien en sens inverse. Là où ils n'étaient pas d'accord, c'est quand le contenu se meut \emph{de son mouvement propre autour de l'axe du contenant et de son centre}~: peut-il alors se mouvoir par le mouvement du contenant, ou non~?\label{impuissance} L'un penche en faveur de cette possibilité en raisonnant comme suit. Le mouvement du contenant au contenu se fait avec attachement du mû en son lieu dans celui qui meut, donc il est mû par accident à cause du mouvement de son lieu, comme le passager du navire est mû à cause du mouvement du navire. En plus de cela, il se meut de son mouvement propre, comme le passager qui va et vient sur le navire, tantôt dans le sens de son mouvement, tantôt en sens inverse. C'est certes impossible à percevoir, car il est impossible de percevoir deux mouvements différents dans une même sphère sur une ceinture et ses pôles~: on perçoit plutôt un seul mouvement, composé de la somme des deux s'ils vont dans le même sens, ou bien résultant de la différence entre le plus rapide et le plus lent s'ils ne vont pas dans le même sens (et on jugerait de même s'ils sont plus nombreux)\footnote{Al-\d{T}\=us{\=\i} utilise précisément cet argument dans la \emph{Tadhkira} (II.2)~: selon lui, il est alors impossible que le mouvement de précession des étoiles fixes (huitième orbe) ait mêmes pôles que le mouvement diurne (neuvième orbe).}. \`A quoi l'on répond~: ce n'est pas impossible à percevoir quand il y a aussi un astre sur l'orbe supérieur. L'autre pense que c'est impossible en raisonnant comme suit. Si le contenu adhère au contenant, alors ce n'est pas par les deux pôles car ils sont fixes sur l'axe commun. Et ce n'est pas non plus par deux autres points. S'il y avait toujours adhérence, alors il faudrait qu'il y ait adhérence des surfaces entières, et ceci interdit au contenu le mouvement qui lui est propre. Et il est impossible qu'il y ait des points du contenu qui tantôt s'attachent à des points du contenant, tantôt s'en séparent. Le mouvement du contenu est moindre que le mouvement du contenant, non par soi mais à cause du fait qu'ils sont \emph{collés}. Alors il n'est pas mû par le mouvement du contenant tout entier. 
\newpage\phantomsection \index{ATBJAQAGARBJ@\RL{q.tb al-dIn al-^sIrAzI}, Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i}} \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BJBHBEBJBJAI@\RL{.harakaT yawumiyyaT}, mouvement diurne} \index{ACBHAL@\RL{'awj}!AMAQBCAIACBHAL@\RL{.harakaT al-'awj}, mouvement des apogées} \includepdf[pages=26,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent C'est comme dans la roue située sous une charge~; alors le mouvement devrait être sans ordre parce que ce mouvement n'est dans la nature d'aucune des deux sphères, mais c'est [en fait] une coïncidence, et il n'y a rien là qui contrevienne à l'ordre. Pourtant le contenu n'est pas, dès lors qu'il adhère au contenant, mû par celui-ci~; et il n'adhère pas quand il est mû par lui~; donc il faut qu'il soit mû par lui sans être mû par lui, et qu'il adhère sans adhérer~; c'est absurde.\footnote{Celui qui <<~pense que c'est impossible~>> fait donc une sorte de raisonnement par l'absurde~: la possibilité du mouvement en question ne peut s'expliquer que par l'hypothèse de l'<<~adhérence~>>, mais l'adhérence empêche le mouvement du contenu par le contenant. L'hypothèse de l'adhérence semble se décliner en trois hypothèses rejetées tour à tour~: \begin{itemize} \item il y a toujours adhérence \item des points tantôt s'attachent, tantôt se séparent \item le contenu est collé au contenant et cela freine le mouvement \end{itemize}} [Le même dira~:] en ce qui concerne le mouvement du contenu sur l'axe du contenant, d'un mouvement qui coïncide avec le mouvement du contenant ou qui en diffère, je ne vois pas de raison qui empêche cela. Si les deux mouvements s'équilibrent et que leur sens et leur durée s'unissent, alors les séparer est inconcevable et inutile. Voilà quelles sont les opinions avec les indices et les doutes. Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i} a dit que la preuve de la possibilité est plus flagrante et plus accessible mais à condition de distinguer des cas. [Il a dit que] la preuve de l'impossibilité n'échappe pas à l'objection parce qu'elle est fondée sur le fait que quand les centres sont confondus il faut qu'il y ait adhérence, or cela n'est pas nécessaire. \begin{center} \large Section \end{center} Je ne vois pas comment il serait possible que le contenu soit mû par un contenant, ainsi que par soi-même, ainsi que par un autre contenant sur d'autres pôles et dans l'autre sens. Montrons cela. L'orbe parécliptique du Soleil est mû par le mouvement diurne d'Est en Ouest, et il se meut lui-même sur les deux pôles de l'écliptique d'Ouest en Est, d'un déplacement égal au mouvement du centre du Soleil ou du Soleil moyen. Or son mouvement par le mouvement du huitième orbe, en vue du mouvement de l'Apogée, n'est pas permis~: il y a besoin d'un autre orbe qui le meuve vers l'Est sur les pôles de l'écliptique, d'un déplacement égal au mouvement de l'Apogée. \newpage\phantomsection \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \includepdf[pages=27,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent \`A supposer qu'il soit permis, il faudrait que le reste des orbes des autres astres se meuve du même mouvement, et le parécliptique du Soleil ne pourrait se mouvoir comme cela sans que se meuvent ainsi les autres orbes, ceux de Saturne et de Jupiter. 
C'est subtil, prête attention~!\footnote{{Ibn al-\v{S}\=a\d{t}ir } s'intéresse ici à la composition de trois mouvements concentriques~: un mouvement propre, le mouvement d'un contenant, et le mouvement d'un autre contenant. Le Soleil est entraîné par un mouvement annuel ayant pour origine l'Apogée, dû au mouvement de l'<<~orbe parécliptique du Soleil~>>. Mais il est aussi entraîné par le mouvement diurne du neuvième orbe le contenant, et par le mouvement de l'Apogée. Le mouvement de l'Apogée doit être causé par le mouvement d'un autre orbe le contenant. Si cet orbe est le huitième (animé du mouvement de précession des fixes), alors les orbes intermédiaires (dont ceux de Jupiter et Saturne) doivent aussi être entraînés par ce mouvement comme par le mouvement diurne. Et sinon, alors il faut concevoir un autre orbe.} \emph{Remarque}. Si nous supposons que le Soleil a un épicycle et un parécliptique comme dans la configuration connue\footnote{Le modèle du Soleil avec un épicycle, chez Ptolémée.}, alors le parécliptique du Soleil se meut de lui-même sur les pôles de l'écliptique comme le centre du Soleil, l'épicycle se meut en haut de cet orbe en sens inverse, et le neuvième orbe meut ces deux orbes d'Est en Ouest. Or le huitième est impuissant à mouvoir le parécliptique d'Ouest en Est d'un déplacement égal au mouvement de l'Apogée, comme nous l'avons dit. Il y a donc besoin d'un autre orbe qui meuve le parécliptique, d'un déplacement égal au mouvement de l'Apogée. Il n'y a personne qui dise que l'Apogée n'est mû ni selon l'hypothèse que son mouvement est semblable à celui du huitième orbe, ni selon l'hypothèse que le huitième orbe le meut par accident dans le sens des signes, que le neuvième le meut dans le sens inverse sur des pôles distincts de ceux du premier mouvement, et qu'en plus de cela l'Apogée se meut par lui-même~; alors il y a besoin d'ajouter un autre orbe qui meuve l'Apogée.\footnote{{Ibn al-\v{S}\=a\d{t}ir} semble simplement répéter le raisonnement du paragraphe précédent. La structure de tout ce paragraphe et du précédent est confuse.} \emph{Autre remarque.} Toutes les observations vraies témoignent que le mouvement de l'Apogée est plus rapide que le mouvement du huitième orbe. Il faut dès lors ajouter un autre orbe qui meuve l'Apogée d'un déplacement égal au mouvement observé. \emph{Autre remarque.} Si on suppose que le Soleil a un parécliptique, un épicycle et un autre orbe qui meut l'Apogée, avec ce principe, la longitude calculée ne coïncide pas avec l'observation vraie, car nous avons fait des observations précises dans les octants et nous avons trouvé que cela diffère de la longitude calculée sur le principe ci-dessus. Et nous avons déjà démontré cela dans le livre \textit{Ta`l{\=\i}q al-ar\d{s}\=ad}. Donc il nous faut un principe simple qui coïncide avec les observations~; Dieu a fait trouver un principe qui suffit à l'effet visé, et [c'est un principe] que tu vas connaître, si Dieu le veut. \newpage\phantomsection \index{AQBCAH@\RL{rkb}!ALASBE BEAQBCAH@\RL{jsm mrkb}, corps composé} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \includepdf[pages=28,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \emph{Autre remarque}. Les astronomes n'ont pas été d'accord au sujet du mouvement des petits orbes qui ne tournent pas autour du centre du monde, tel l'orbe de l'épicycle et d'autres semblables.
Ils ont tous accepté qu'ils puissent tourner dans une direction ou une autre, car ils ont signalé que l'orbe de l'épicycle avait une moitié supérieure et une moitié inférieure~; si bien que, s'il tourne dans le sens des signes dans la moitié supérieure, alors il tournera dans le sens contraire dans la partie inférieure, et inversement. Son mouvement ne serait ni un mouvement violent, ni un mouvement par accident, mais ce serait un mouvement naturel. Ils ont tous accepté qu'il puisse y avoir des épicycles dans les orbes autres que le neuvième. En effet, dans les orbes, nous voyons des astres, or l'astre, dans son orbe, montre une certaine \emph{composition}. A quiconque dirait que les orbes sont tous simples, qu'il ne peut donc exister d'épicycle en eux, et qu'aucun mouvement excentrique ne peut donc être un mouvement simple, je dirais que l'existence et le mouvement des épicycles a déjà été prouvé. S'il y avait une preuve décisive qui empêche cela, les orbes n'en demeureraient pas moins des objets composés. Il n'y a pas de simplicité dans les orbes. Je pense qu'il s'agit d'une composition de choses simples et non d'éléments, sauf en ce qui concerne le neuvième orbe, mais Dieu est le plus savant.\footnote{Justification des épicycles~: bien qu'ils causent une certaine <<~composition~>>, de toute façon les orbes ne sont jamais absolument <<~simples~>> (sauf le neuvième qui ne contient aucun astre...).} \newpage\phantomsection \index{AOBHAQ@\RL{dwr}!BEAQAOAGAQAGAJBEAQAGBCARBCAQ@\RL{madArAt marAkaz al-akr}, trajectoires des centres des orbes} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{BABDBC@\RL{flk}!BABDBCATAGBEBD@\RL{falak ^sAmil}, orbe total} \index{AQBCAR@\RL{rkz}!BEAQBCAR@\RL{markaz}, centre} \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR ATBEAS@\RL{.harakaT markaz al-^sams}, mouvement du centre du Soleil} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BHASAW ATBEAS@\RL{.harakaT ws.t al-^sams}, mouvement du Soleil moyen} \addcontentsline{toc}{chapter}{I.7 L'orbe du Soleil et ses mouvements} \includepdf[pages=29,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre sept \large L'orbe du Soleil et ses mouvements \end{center} Nous évoquerons d'abord les trajectoires des centres des sphères solides parce qu'elles sont plus faciles à représenter sur le plan. Nous imaginons un orbe dans le plan de l'écliptique et en son centre, et nous l'appelons le \emph{parécliptique}. Nous imaginons un autre orbe dont le centre soit sur la périphérie de l'orbe parécliptique et dont le rayon soit quatre degrés et trente-sept minutes (si le rayon du parécliptique mesure soixante parts), et nous l'appelons \emph{orbe déférent}. Sur la périphérie du déférent, il y a le centre d'un autre orbe dont le rayon est de deux degrés et demi des mêmes parts. On l'appelle \emph{orbe rotateur}. Et il y a un autre orbe au centre du monde dont la périphérie est au-dessus de ces orbes. Son rayon intérieur est la somme des rayons des trois orbes mentionnés auxquels on ajoute le rayon du globe solaire. Le rayon du globe solaire est un sixième d'une des mêmes parts. 
Donc le rayon [de l'orbe total] est soixante-sept degrés et dix-sept minutes, et son épaisseur est d'une grandeur telle qu'il peut se mouvoir vers l'Est comme le mouvement de l'Apogée. Nous supposons que son rayon est soixante-huit degrés, alors son épaisseur est quarante-trois minutes. On l'appelle \emph{orbe total}. A présent, supposons que les trois orbes sont sur la ligne issue du centre du zodiaque et passant par l'Apogée (voir figure suivante). Quant aux mouvements, l'orbe parécliptique se meut d'un mouvement simple autour du centre du zodiaque et sur ses pôles dans le sens des signes. Ce mouvement est comme le mouvement du centre du Soleil, c'est-à-dire de $0;59,8,9,51,46,57,32,3$ degrés par jour\footnote{Si l'on utilise cette valeur pour calculer le Soleil moyen (Soleil moyen $=$ centre $+$ Apogée), les trois derniers rangs sexagésimaux diffèrent de la valeur donnée dans la suite du texte. On retiendra donc, comme valeur du mouvement du centre, $0;59,8,9,51,47$ degrés par jour. Le manuscrit C corrige d'ailleurs cette erreur (voir apparat critique dans l'arabe).}. Le centre du déférent se déplace donc d'autant dans le sens des signes. L'orbe déférent se meut d'un mouvement uniforme en son centre, dans le sens inverse des signes quand on est dans sa partie supérieure. Ce mouvement est aussi comme celui du centre du Soleil, c'est-à-dire $0;59,8,9,51,46,57,32,3$. L'orbe rotateur se meut d'un mouvement uniforme en son centre, dans le sens des signes quand on est dans sa partie supérieure, comme deux fois le centre du Soleil, c'est-à-dire $1;58,16,19,43,34$ degrés par jour. Le centre du corps du Soleil se déplace donc d'autant, dans le sens des signes. \newpage\phantomsection \index{BHASAW@\RL{ws.t}!BHASAW@\RL{ws.t}, astre moyen} \index{AQBCAR@\RL{rkz}!BEAQBCAR@\RL{markaz}, centre} \index{ACBHAL@\RL{'awj}!ACBHAL@\RL{'awj}, Apogée} \index{AMAVAV@\RL{.h.d.d}!AMAVBJAV@\RL{.ha.dI.d}, périgée} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD ASBEAS@\RL{ta`dIl al-^sams}, équation du Soleil} \index{ACBHAL@\RL{'awj}!AMAQBCAIACBHAL@\RL{.harakaT al-'awj}, mouvement des apogées} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \includepdf[pages=30,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent L'orbe total se meut d'un mouvement simple autour de son centre, d'Ouest en Est, comme le mouvement de l'Apogée, c'est-à-dire de $0;0,0,9,51,46,51$ degrés par jour, ce qui fait une minute par an, ou un degré tous les soixante ans\footnote{Soixante années persanes.}, par l'observation véritable. Observe et corrige, alors la distance du centre de l'orbe déférent \footnote{Le texte porte ``épicycle'', mais il faut bien lire ``déférent''.} au point fixe du zodiaque devient la somme des deux mouvements. Leur somme est le [mouvement du Soleil] moyen, et c'est un mouvement simple-composé. Alors la distance du Soleil au centre du monde est maximale quand il est à l'Apogée, minimale quand il est au périgée, et moyenne quand le centre est $3$ signes et $7;7,1,6,8,13$ degrés\footnote{Ou bien faut-il lire <<~$7;7$ ou $8;13$~>>~? Mais ce résultat est erroné. La distance moyenne est atteinte lorsque le Soleil est sur la ceinture du parécliptique, c'est-à-dire lorsque la distance au centre du monde est de 60 parts. La résolution d'une équation de degré 2 montre que cette distance est atteinte lorsque le centre est de 3 signes et $0;18,2,25,27,47,11$ degrés. 
\`A moins qu'{Ibn al-\v{S}\=a\d{t}ir } donne ici une valeur en signes et en \emph{heures} (avec 1 h = $0°2'30''$)~: si le cosinus du centre est arrondi à $0;0,18,40$, on trouve en effet 3 signes et $7;7,48$ heures.}. Trace une droite joignant le centre du zodiaque au centre du déférent, et une autre droite joignant le centre du zodiaque au corps du Soleil. Elles cernent l'angle d'anomalie du Soleil, ou équation [du Soleil]. Elles sont confondues quand le centre de l'orbe déférent est à l'Apogée ou au périgée. Et le maximum de cette anomalie est $2;2,6$ degrés. Il est atteint quand le centre vaut trois signes et sept degrés, ou huit [signes] et vingt-trois degrés\footnote{Il suffit pour le voir de construire une table de l'équation du Soleil en fonction du centre, comme avait pu en construire {Ibn al-\v{S}\=a\d{t}ir}. Si on construit une telle table pour des valeurs du centre variant de demi-degré en demi-degré, on constate que la correction maximale est atteinte quand le centre est entre $96;30$ et $97$ degrés, et qu'elle est alors supérieure à $2;2,5,10$. La comparaison des premières différences montre que la correction ne peut, sur cet intervalle, varier de plus de $0;0,0,38$ degrés quand le centre varie de $0;30$ degrés. La correction maximale est donc inférieure à $2;2,5,48$. Une table plus précise montrerait que la correction maximale est de $2;2,5,13$. \textit{Cf.} aussi le graphe de l'équation du Soleil dans notre commentaire \textit{supra} figure \ref{fig005} p.~\pageref{fig005}.}. La droite passant par le centre du déférent coupe l'écliptique en le \emph{Soleil moyen}. C'est-à-dire que la distance entre cette droite et la droite issue du zodiaque et passant par l'équinoxe de printemps est le Soleil moyen. Sa distance à la droite passant par l'Apogée est son \emph{centre}. La distance entre la droite passant par l'Apogée et la droite passant par le commencement du Bélier\footnote{Ibn al-Sh\=a\d{t}ir veut dire l'équinoxe de printemps. \`A cause du <<~deuxième mouvement~>> (le mouvement de précession), le Bélier se déplace par rapport à l'équinoxe de printemps...} est l'\emph{Apogée}. La somme de l'Apogée et du centre est le Soleil moyen. \newpage\phantomsection \index{BFAWBB@\RL{n.tq}!BEBFAWBBAI@\RL{min.taqaT}, ceinture} \index{BBAWAH@\RL{q.tb}!BBAWAH@\RL{q.tb}, pôle} \index{ACBHAL@\RL{'awj}!AMAQBCAIACBHAL@\RL{.harakaT al-'awj}, mouvement des apogées} \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \includepdf[pages=31,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent La distance maximale du centre du Soleil au centre du monde est soixante-sept degrés et sept minutes, en parts telles que le rayon du parécliptique en compte soixante. La distance minimale du centre du Soleil au centre du monde est cinquante-deux degrés et cinquante-trois minutes, en les mêmes parts. Le Soleil moyen à midi du premier jour de l'année sept cent un de l'ère de Yazdgard est 9 signes et $10;9,0$ degrés à Damas et l'Apogée du Soleil à la date indiquée est 2 signes et $29;52,3,1$ degrés. Le mouvement de l'Apogée dans le sens des signes, en soixante années persanes, est d'un seul degré. Le mouvement du Soleil [moyen] en une année persane est $11$ [signes] et $29;45,40$ degrés, en un mois c'est-à-dire trente jours, $0$ [signe] et $29;34,9,51,46,50,57,32$ degrés, en un jour $0;59,8,19,43,33,41,55,4,6$, et en une heure égale $0;2,27,50,49,18,54,14,47$. \^Otons du Soleil moyen l'Apogée, il reste le centre.
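Les grandeurs numériques de ce chapitre se contrôlent par de simples additions sexagésimales. Le rayon intérieur de l'orbe total est bien
\[
60 + 4;37 + 2;30 + 0;10 = 67;17,
\]
d'où, avec un rayon extérieur de $68$, une épaisseur de $0;43$~; et les distances extrêmes du centre du Soleil au centre du monde sont
\[
60 + (4;37 + 2;30) = 67;7, \qquad 60 - (4;37 + 2;30) = 52;53.
\]
Les quantités $7;7 = 4;37 + 2;30$ et $2;7 = 4;37 - 2;30$ sont précisément celles qui reparaissent dans la règle de calcul de l'équation du Soleil donnée plus loin.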
Nous imaginons l'orbe parécliptique comme un orbe sphérique solide dont la partie concave touche la partie convexe de la totalité des orbes de Vénus et dont la partie convexe touche la partie concave de l'orbe total du Soleil~; son centre est le centre du monde, il a deux pôles alignés avec les pôles de l'écliptique, et sa ceinture est dans le plan de la ceinture de l'écliptique. \label{complement_sol} Son épaisseur est de quatorze degrés et trente-quatre minutes, plus un complément. Pour que [les orbes] soient mutuellement contigus quand on y plonge l'orbe déférent, nous prenons un complément d'un tiers de degré à l'extérieur et à l'intérieur, en parts telles que son rayon extérieur est soixante-sept parts et dix-sept minutes. Nous prenons une sphère dont le diamètre est quatorze degrés et trente-quatre minutes, contenue en lui et le touchant à l'intérieur et à l'extérieur. Nous prenons une autre sphère, de rayon deux degrés, un demi-degré et un sixième de degré des mêmes parts, contenue dans la sphère du déférent de sorte qu'elle la touche en un point. Nous prenons le corps du Soleil contenu dans cette sphère. Nous prenons les ceintures de ces sphères dans le plan de l'orbe écliptique. C'est ce que nous avons représenté dans la figure plane. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AOBHAQ@\RL{dwr}!AJAOBHBJAQ@\RL{tadwIr}, épicycle} \index{ALAOBD@\RL{jdl}!ALAOBHBD@\RL{jdwl}, table} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD@\RL{ta`dIl}, équation} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE ATBEAS@\RL{taqwIm al-^sams}, Soleil vrai} \label{var5}\label{var15}\label{var16} \includepdf[pages=32,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Les bordant, il y a une sphère dont le centre est le centre du monde, dont la partie concave touche la partie convexe du parécliptique du Soleil, dont l'épaisseur est de quarante-trois minutes, dont la partie convexe touche la partie concave du parécliptique de Mars, dont la circonférence est dans le plan de l'écliptique et dont les deux pôles sont les pôles de l'écliptique. Les mouvements sont comme on l'a dit précédemment. Ainsi le cas du Soleil est mené à bien par quatre orbes. Il y en a deux qui entourent le centre du monde et sont dans le plan de l'écliptique~; et il y a le déférent et le rotateur. Si la correction s'accordait au principe de l'épicycle selon la configuration de Ptolémée dans les octants, il faudrait ajouter un autre orbe mouvant l'Apogée d'un degré tous les soixante ans. Mais cela contredit l'observation et ne s'y accorde qu'en les équinoxes et les solstices et non ailleurs. De plus ces orbes ont un même centre, et le centre [du Soleil], le Soleil moyen et l'Apogée sont des quantités prises dans un même cercle~; or dans la configuration connue\footnote{un modèle du Soleil avec un excentrique}, le Soleil moyen est une quantité prise dans un certain cercle, et le centre et l'Apogée sont des quantités prises dans un autre cercle~; et ceci est erroné, ils en ont fait un arc semblable, et c'est impossible\footnote{Dans les modèles avec un excentrique, il faut bien faire attention à quel centre se rapporte chaque paramètre angulaire (par rapport au centre de l'excentrique, ou non). Chez {Ibn al-\v{S}\=a\d{t}ir} la question ne se pose pas, car à chaque orbe ne correspond qu'un unique centre. Mieux encore, les paramètres \textit{centre}, \textit{Apogée} et \textit{Soleil moyen} désigneront tous des angles mesurés le long de cercles concentriques ; c'est plus compliqué chez Ptolémée.}.
\begin{center} \large Section \end{center} \emph{Comment trouver le Soleil vrai par les tables et par le calcul.} Si tu prends le Soleil moyen à la date que tu veux, que tu prends aussi l'Apogée, et que tu ôtes l'Apogée du Soleil moyen, il reste le centre. Donc tu tires le Soleil moyen et l'Apogée de la table, à la date que tu veux. Pour cela, tu prends ce qui appartient aux années entières, aux années intercalaires, aux mois et aux jours -- ne compte pas le jour dans lequel tu es -- puis ajoute rang à rang et augmente ce qui doit être augmenté\footnote{addition sexagésimale avec retenues}~; rejette les circonférences entières~; reste alors le Soleil moyen. Après cela, fais l'addition pour l'Apogée, puis ôtes l'Apogée du Soleil moyen, il reste le centre. Entre-le alors dans la table de l'équation du Soleil. Ce que tu y trouves en degrés et fractions de degré, corrigé en interpolant ce qu'il y a entre les lignes, c'est l'équation du Soleil. Ensuite, si le centre tombe dans la partie supérieure de la table, tu soustrais l'équation au Soleil moyen, et s'il tombe dans la partie inférieure de la table, tu ajoutes l'équation au Soleil moyen. \newpage\phantomsection \includepdf[pages=33,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent On obtient alors le Soleil vrai, par rapport à l'orbe écliptique, à midi du jour pour lequel tu as calculé, à Damas. Et si tu veux calculer l'équation, par le calcul, sans table~: si c'est cela que tu veux, alors multiplie le cosinus du centre du Soleil par sept degrés et sept minutes. Ce qu'on obtient, ajoute-le à soixante si le centre est du côté de la distance maximale, c'est-à-dire quand le centre est inférieur à trois signes, sept degrés et sept minutes\footnote{il faut lire seulement~: trois signes, c'est-à-dire 90°.}, ou bien supérieur à huit signes, vingt-deux degrés et cinquante-trois minutes\footnote{il faut lire plutôt~: neuf signes, c'est-à-dire 270°.}~; sinon, il est du côté de la distance minimale, alors retranche ce qu'on obtient de soixante. La somme de l'addition, ou bien le reste de la soustraction, on l'appelle le total. Ensuite, multiplie le sinus du centre par deux degrés et sept minutes, on appelle le résultat dividende. Mets au carré le dividende et mets au carré le total, fais la somme des deux carrés, prends-en la racine, c'est la distance du Soleil au centre du monde (en parts telles que le rayon du parécliptique en compte soixante). Divise le dividende par la distance du Soleil au centre du monde, il en sort le sinus de l'équation du Soleil. Cherche son arc dans la table des sinus, il en sort l'équation [du Soleil]. Nous avons fait ce calcul pour chaque degré, et nous l'avons établi dans la table. Ensuite, si le centre est inférieur à six signes, tu soustrais l'équation au Soleil moyen, et s'il est supérieur à six signes, tu ajoutes l'équation au Soleil moyen. Tu obtiens la position du Soleil par rapport à l'écliptique. \begin{center} \large Section \end{center} \label{diam_sol} Le \emph{diamètre du Soleil} à distance moyenne (c'est-à-dire quand sa distance au centre du monde est comme le rayon du parécliptique qui est par supposition de soixante parts) est $32;32$. Son diamètre à l'Apogée est $29;5$, et au périgée $36;55$.
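La règle <<~par le calcul, sans table~>> qui précède se laisse transcrire directement. L'esquisse suivante n'est donnée qu'à titre indicatif~: le langage (Python), les noms de variables et l'emploi du cosinus signé, qui tient lieu de l'alternative <<~ajouter à soixante ou retrancher de soixante~>>, sont des choix de l'éditeur et non du texte.
\begin{verbatim}
# Esquisse (hypothetique) de la regle ci-dessus ; parametres tires du texte :
# 7;7 = 4;37 + 2;30, 2;7 = 4;37 - 2;30, rayon du parecliptique 60.
import math

A = 7 + 7/60        # 7;7
B = 2 + 7/60        # 2;7
R = 60.0            # rayon du parecliptique

def equation_du_soleil(centre_deg):
    """Renvoie (distance, equation en degres) pour un centre donne en degres."""
    c = math.radians(centre_deg)
    total = R + A * math.cos(c)              # cosinus signe : ajoute ou retranche de 60
    dividende = B * math.sin(c)
    distance = math.hypot(total, dividende)  # racine de la somme des carres
    equation = math.degrees(math.asin(dividende / distance))
    return distance, equation

# Valeurs remarquables : centre 0 (Apogee), ~96,5 (equation maximale), 180 (perigee).
for centre in (0.0, 96.5, 180.0):
    d, q = equation_du_soleil(centre)
    print(f"centre {centre:6.1f} : distance {d:7.3f}, equation {q:6.3f} deg")
\end{verbatim}
Exécutée à ces valeurs du centre, l'esquisse redonne les distances extrêmes $67;7$ et $52;53$ et une équation maximale voisine de $2;2$ degrés, en accord avec le maximum indiqué plus haut.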
\newpage\phantomsection \index{AHBGAJ@\RL{bht}, vitesse apparente} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{BGBFAO@\RL{hnd}!BGBFAO@\RL{al-hnd}, les Indiens} \label{var6} \includepdf[pages=34,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection La méthode pour calculer son diamètre ailleurs qu'en ces lieux est que tu divises son diamètre à distance moyenne par sa distance au centre du monde qui est le centre de l'écliptique\footnote{Notons $\alpha$ le diamètre apparent du Soleil (c'est un angle), et OS sa distance au centre du monde. \`A distance moyenne, OS = 60 et $\alpha=0°32'32''$ (probablement une donnée de l'observation), or $\alpha\simeq\sin\alpha$ est inversement proportionnel à OS. On a donc, aux autres distances~: $\alpha=\dfrac{0°32'32''\times 60}{\text{OS}}$.}. Son diamètre à distance moyenne est selon Hipparque $32;45$, chez les Indiens $32;33$, chez les modernes $32;35$, chez Ptolémée dans son chapitre\footnote{\textit{L'Almageste}, livre 5 chapitre 14.} concernant les grandeurs $31;20$, et chez moi on a trouvé par l'observation $32;32$. Pour cela il y a une méthode approchée qui est que tu multiplies sa vitesse apparente \footnote{Le mot \textit{buht} que nous traduisons par ``vitesse apparente'' est utilisé pour chaque astre dans le même contexte et semble désigner la vitesse angulaire mesurée par rapport aux étoiles fixes, c'est-à-dire l'arc parcouru par l'astre le long d'un grand cercle du huitième orbe, par unité de temps. Il ne faut sûrement pas y voir un concept défini de manière rigoureuse. Cette grandeur est, grossièrement, inversement proportionnelle à OS ; elle est donc, grossièrement, proportionnelle au diamètre apparent de l'astre. Pour le Soleil, on s'en convainc aisément en pensant à un modèle (approximatif) avec un excentrique.} par trente-trois, il en sort son diamètre approché~; mais la première méthode est exacte. Dieu est le plus savant. 
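Aux arrondis près, la règle énoncée ci-dessus redonne bien, avec les distances extrêmes calculées plus haut, les diamètres (en minutes d'arc) annoncés pour l'Apogée et le périgée~:
\[
\frac{32;32 \times 60}{67;7} \approx 29;5,
\qquad
\frac{32;32 \times 60}{52;53} \approx 36;55 .
\]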
\newpage\phantomsection \includepdf[pages=35,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center}\label{soleil_trajectoires} \input{soleil_fr.pdf_tex} Les orbes du Soleil figurés par les trajectoires des centres des sphères \end{center} \newpage\phantomsection \includepdf[pages=36,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center}\label{soleil_orbes_solides} \input{soleil_orbes_fr.pdf_tex} Les orbes solides du Soleil \end{center} \begin{center} \includegraphics[width=\textwidth]{figure.jpg} Une représentation moderne des orbes du Soleil, librement inspirée d'Ibn al-\v{S}\=a\d{t}ir\label{soleil_blender} \end{center} \begin{center} \includegraphics[width=\textwidth]{lune.jpg} Une représentation moderne des orbes de la Lune, librement inspirée d'Ibn al-\v{S}\=a\d{t}ir \end{center} \newpage\phantomsection \index{BJBHBE@\RL{ywm}!BJBHBE AHBDBJBDAJBG@\RL{al-yawm bilaylatihi j al-'ayyAm bilayAlIhA}, nychtémère (\textit{litt.} le jour avec sa nuit)} \index{AWBDAY@\RL{.tl`}!BEAWAGBDAY ASAJBHAGAEBJBJAI@\RL{m.tAl` istiwA'iyyaT}, coascension calculée à l'équateur terrestre} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD ACBJBJAGBE@\RL{ta`dIl al-'ayyAm}, équation des nychtémères} \index{AWBDAY@\RL{.tl`}!BEAWAGBDAY@\RL{m.tAl`}, coascension} \index{AHBGAJ@\RL{bht}, vitesse apparente} \addcontentsline{toc}{chapter}{I.8 Calcul de l'équation des nychtémères} \includepdf[pages=37,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre huit \large Calcul de l'équation des nychtémères \end{center} [Cette équation] est l'écart entre les nychtémères, rapporté au jour [où] la quantité observée est [à la fois] l'astre moyen et l'astre corrigé. Nous avons déjà posé que le jour où nous avons observé à la fois l'astre moyen et l'astre corrigé est l'équinoxe de printemps, et nous lui rapportons tous les autres nychtémères. La méthode pour calculer cette équation est la suivante. Tu calcules le Soleil moyen et le Soleil corrigé au jour souhaité. Ajoute au Soleil moyen deux degrés, une minute et sept secondes. Prends la différence entre cela et la coascension du Soleil corrigé (coascension calculée par rapport à l'équateur terrestre en prenant comme origine le commencement du Bélier). Multiplie-la par quatre. On obtient les minutes de l'équation des nychtémères. Si le Soleil moyen avec la quantité ajoutée est supérieur à la coascension du Soleil corrigé, alors retranche l'équation de la date, et s'il lui est inférieur, alors ajoute-la à la date. On obtient la date corrigée par l'équation des nychtémères. On en tire les [astres] moyens dans la table. \emph{Remarque.} Multiplie la vitesse apparente de l'astre (par heure) par les minutes de l'équation des nychtémères. Regarde ce qu'on obtient~: si la position moyenne avec la quantité ajoutée est supérieure à la coascension du Soleil corrigé, alors retranche le résultat de cet astre corrigé, mais si elle leur est inférieure, alors ajoute le résultat à cet astre corrigé. On obtient l'astre corrigé à l'instant demandé. Nous avons déjà calculé une table pour l'y trouver en fonction du Soleil moyen. \newpage\phantomsection \index{AOBHAQ@\RL{dwr}!BEAOAGAQ@\RL{mdAr}, trajectoire, trajectoire diurne (\textit{i.
e.} cercle parallèle à l'équateur)} \index{ANASBA@\RL{_hsf}!ANASBHBAAI@\RL{_hsUf}, éclipse de Lune} \index{BCASBA@\RL{ksf}!BCASBHBAAI@\RL{ksUf}, éclipse de Soleil} \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \index{ALBEAY@\RL{jm`}!AEALAJBEAGAYAI@\RL{'ijtimA`}, conjonction} \index{BBAHBD@\RL{qbl}!AEASAJBBAHAGBDAI@\RL{'istiqbAl}, opposition} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AOBHAQ@\RL{dwr}!AJAOBHBJAQ@\RL{tadwIr}, épicycle} \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{AMAVAV@\RL{.h.d.d}!AMAVBJAV@\RL{.ha.dI.d}, périgée} \index{APAQBH@\RL{_drw}!APAQBHAI@\RL{_dirwaT}, sommet, apogée} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \addcontentsline{toc}{chapter}{I.9 Les orbes de la Lune et leurs mouvements} \includepdf[pages=38,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre neuf \large Les orbes de la Lune et leurs mouvements selon la méthode juste et sauve des doutes \end{center} Quand on a observé la Lune, on a découvert qu'elle se meut sur une trajectoire différente de celle du Soleil et la coupant en deux lieux opposés qui se déplacent le long de la trajectoire du Soleil en sens inverse des signes. Cette trajectoire est inclinée par rapport à celle du Soleil, et l'inclinaison maximale est toujours d'une même grandeur de chaque côté (cinq parts), et telle que, dans une moitié de sa trajectoire, la Lune est au Nord de la ceinture du zodiaque, et dans l'autre moitié, au Sud. On a donc su qu'elle a un orbe parécliptique, et un orbe incliné par rapport à celui-ci, et que ces deux orbes se coupent en deux points opposés, donc que l'un bissecte l'autre. Quand on a découvert que le mouvement de la Lune sur l'orbe incliné varie, en lenteur et en vitesse, autour du centre du monde qui est le centre du zodiaque, on a su qu'elle a un orbe dont le centre est sur la ceinture de l'orbe incliné. De sorte que, quand la Lune est dans la partie haute de cet orbe, elle est plus éloignée du centre du monde, et quand elle est dans sa partie basse, elle en est plus proche~; en témoigne la variation de la grandeur du corps de la Lune, en particulier lors des éclipses de Lune et des éclipses de Soleil. On appelle cet orbe \emph{épicycle} depuis longtemps. Ensuite, on a trouvé que la grandeur du rayon de l'épicycle, lors des conjonctions et des oppositions, ne dépasse jamais cinq parts et un sixième chez nous (cinq parts et un quart chez Ptolémée et Hipparque)~; mais on a trouvé que sa grandeur, si la distance de la Lune au Soleil est d'un quart de cercle des deux côtés\footnote{C'est-à-dire lors des quadratures.}, est de huit parts. Nous avons su ainsi qu'il y a un autre petit orbe sur la ceinture de l'épicycle, de sorte que, quand la Lune est dans les conjonctions ou les oppositions, elle est au périgée de cet orbe, et que, quand elle est dans les quadratures, elle est à l'apogée -- la distance au centre de l'épicycle est alors huit parts. Si la Lune est au périgée de cette orbe, sa distance au centre de l'épicycle est cinq parts et un sixième. Son mouvement est le double du mouvement de l'orbe portant le centre de l'épicycle, de sorte que, quand l'orbe portant l'épicycle tourne d'un quart de cercle, l'orbe portant le corps de la Lune se meut d'un demi-cercle. Ainsi savons-nous que la Lune a quatre orbes et quatre mouvements simples. 
\newpage\phantomsection \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \index{BABDBC@\RL{flk}!BABDBC BFAGAQ@\RL{falak al-nAr}, orbe du feu} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{AYBBAO@\RL{`qd}!AYBBAOAI@\RL{`qdaT}, n{\oe}ud} \index{ALARBGAQ@\RL{jzhr}!ALBHARBGAQ@\RL{jUzhr}|see{\RL{`qdaT}}} \index{ALARBGAQ@\RL{jzhr}!ALBHARBGAQ BEALAGAR ATBEAGBDBJBJ@\RL{jwzhr mujAz ^simAliyy}|see{\RL{ra's}}} \index{ALARBGAQ@\RL{jzhr}!ALBHARBGAQ BEALAGAR ALBFBHAHBJBJ@\RL{jwzhr mujAz junUbiyy}|see{\RL{_dnb}}} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \index{AOBHAQ@\RL{dwr}!AJAOBHBJAQ@\RL{tadwIr}, épicycle} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \includepdf[pages=39,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Le premier est l'orbe \emph{parécliptique}. Ses deux pôles sont alignés avec les pôles de l'écliptique. Sa partie convexe touche la partie concave des orbes de Mercure, et sa partie concave touche la partie convexe du deuxième de ses orbes. Son rayon est de soixante-neuf parts. Le deuxième orbe est un \emph{orbe incliné}. L'inclinaison de son plan par rapport au plan du parécliptique est constante et son maximum est de cinq parts. Son centre est le centre du monde, c'est le centre du parécliptique~; sa partie convexe touche la partie concave du parécliptique, et sa partie concave touche l'orbe du feu, comme on sait. Son rayon est, par supposition, de soixante parts. La ceinture qui est sur sa partie convexe coupe la ceinture qui est sur la partie concave du parécliptique, en deux points opposés. On les appelle les \emph{n{\oe}uds}. Le premier s'appelle \emph{tête}, ou n{\oe}ud ascendant, car la Lune le passe en s'inclinant vers le Nord par rapport au parécliptique. L'autre, qui lui est opposé, s'appelle \emph{queue}, ou n{\oe}ud descendant. Les pôles de l'orbe incliné sont attachés à deux points de la partie concave du parécliptique. Leur distance aux pôles du parécliptique mesure l'inclinaison maximale de l'orbe incliné (c'est aussi la latitude maximale de la Lune)~: cinq parts. Le centre de la partie concave de l'orbe incliné est le centre du monde et son rayon est de cinquante-et-une parts. Quant au troisième orbe, nous imaginons une sphère de rayon huit parts, seize minutes et vingt-sept secondes (des parts telles que le rayon de l'orbe incliné en compte soixante). Nous la supposons plongée dans l'orbe incliné qu'elle touche en un point de sa ceinture. Nous l'appelons sphère de l'\emph{épicycle}. Quant au quatrième orbe, nous imaginons une sphère de rayon une part, quarante-et-une minutes et vingt-sept secondes (des mêmes parts). Nous la supposons plongée dans le corps de l'épicycle~; elle le touche en un point de sa ceinture qui est dans le plan de l'orbe incliné, et c'est aussi en ce point que l'épicycle touche la ceinture de l'orbe incliné. On appelle cet orbe le \emph{rotateur}. Nous supposons que le corps de la Lune est plongé dans le corps du rotateur. Un point de la partie convexe de la Lune touche la ceinture du rotateur qui est dans le plan de l'orbe incliné. On a trouvé par nos observations que le diamètre du globe lunaire est de trente-deux minutes cinquante-quatre secondes (des mêmes parts). 
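[\emph{Note du commentaire}~: les rayons des sphères solides décrites ici s'emboîtent de façon cohérente avec les rayons des trajectoires employés au chapitre onze~:
\[
8;16,27 = 6;35 + 1;41,27,
\qquad
1;41,27 = 1;25 + 0;16,27,
\qquad
0;16,27 = \frac{1}{2}\times 0;32,54~;
\]
le rayon de la sphère de l'épicycle est la somme du rayon de la trajectoire du centre du rotateur et du rayon de la sphère du rotateur, lequel est la somme du rayon de la trajectoire du centre de la Lune et du rayon du globe lunaire.]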
\newpage\phantomsection \index{ALARBGAQ@\RL{jzhr}!AMAQBCAI ALBHARBGAQ@\RL{.harakaT al-jwzhr}, mouvement des n{\oe}uds} \index{BHASAW@\RL{ws.t}!BHASAW@\RL{ws.t}, astre moyen} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BD-AYAQAV@\RL{.harakaT al-`r.d}, mouvement en latitude} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{APAQBH@\RL{_drw}!APAQBHAI BD-AJAOBHBJAQ BD-BEAQAEBJBJ@\RL{_dirwaT al-tadwIr al-mar'iyy}, apogée apparent} \index{AHAYAO@\RL{b`d}!AHAYAO BEAG AHBJBF BD-BFBJAQBJBF AH-BD-BHASAW@\RL{bu`d mA bayna al-nIrayn bi-al-ws.t}, élongation moyenne} \index{BBAWAQ@\RL{q.tr}!BFAUBA BBAWAQ BD-AJAOBHBJAQ BD-BEAQAEBJBJ@\RL{n.sf q.tr al-tadwIr al-mar'iyy}, rayon de l'épicycle apparent} \includepdf[pages=40,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Quant aux mouvements, le premier mouvement, mouvement du parécliptique, est un mouvement simple autour de son centre (qui est le centre du monde) et sur ses pôles, en sens inverse des signes, de la grandeur du mouvement des n{\oe}uds, c'est-à-dire $0;3,10,38,27$. Se déplacent de ce mouvement~: la tête, la queue, le lieu où la latitude de la Lune est maximale, et tous les orbes de la Lune. En vérité, à proprement parler, ce mouvement est l'excédent du mouvement des n{\oe}uds sur le mouvement des étoiles fixes.\label{lab001} Le deuxième mouvement est le mouvement de l'orbe incliné autour de son centre (qui est le centre du parécliptique et le centre du monde) et autour de ses pôles fixes que l'on a mentionnés précédemment. C'est un mouvement simple et uniforme qui fait en des temps égaux des [déplacements] égaux dans le sens des signes du zodiaque, de $13;13,45,39,40$. Il vaut la somme de la Lune moyenne et du mouvements des n{\oe}uds, et on l'appelle depuis longtemps \emph{mouvement en latitude}. Le centre de l'épicycle se déplace donc dans le sens des signes, à partir d'un point qu'on imagine être un point fixe par rapport à l'écliptique, d'un mouvement de grandeur égale à la Lune moyenne c'est-à-dire $13;10,35,1,13,53$. Le troisième mouvement est le mouvement de l'épicycle autour de son centre et sur ses pôles situés à la perpendiculaire du plan de l'orbe incliné. Ce mouvement est de $13;3,53,46,18$ en un jour et une nuit, et il est en sens inverse des signes dans la partie supérieure [de l'épicycle]. On le connait depuis longtemps~: c'est le \emph{mouvement propre} de la Lune. Ce mouvement commence à l'apogée de l'épicycle apparent. Le quatrième mouvement est le mouvement du rotateur. C'est un mouvement simple autour de son centre, qui fait se mouvoir la Lune sur la ceinture du rotateur dans le plan de la ceinture de l'orbe incliné, dans le sens des signes quand elle est dans sa partie supérieure. Ce mouvement est de $24;22,53,23$ en un jour et une nuit. C'est deux fois l'excédent de la Lune moyenne sur le Soleil moyen. Ce mouvement commence au périgée lors des conjonctions et des oppositions moyennes et il passe par l'apogée lors des quadratures. Pour montrer cela, supposons Soleil moyen et Lune moyenne en conjonction en un point fixe de l'écliptique~; supposons que le centre de l'épicycle et le centre du rotateur sont sur une droite passant par le centre du monde et par ce point que l'on imagine être fixe par rapport à l'écliptique~; supposons la Lune au périgée du rotateur, en cet instant, sur cette droite, à distance minimale du centre de l'épicycle (cinq parts et un sixième). 
\newpage\phantomsection \includepdf[pages=41,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Puis chaque orbe se meut du mouvement simple observé que nous avons assigné à chacun~: -- Le Soleil se meut, en un jour et une nuit, de $0;59,8,20$ dans le sens des signes. -- Le parécliptique se meut, en un jour et une nuit, de $0;3,10,38,27$ dans le sens inverses des signes~; il entraîne l'orbe incliné, l'épicycle et le rotateur dans ce mouvement en sens inverse des signes. -- L'orbe incliné se meut dans le sens des signes, dans le même temps, de $13;13,45,39,40$. Laissant le point fixe de l'écliptique où les deux luminaires étaient en conjonction, l'élongation du centre de l'épicycle devient, en un jour et une nuit, $12;11,26,41,30$~; c'est l'excédent du mouvement de la Lune moyenne sur le Soleil moyen, et on l'appelle \emph{élongation moyenne entre les deux luminaires}. -- Le rotateur se meut autour de son centre (donc la Lune se déplace du périgée du rotateur), en sens inverse des signes, du double de l'élongation, c'est-à-dire $24;22,53,23$ en un jour et une nuit. Alors le corps de la Lune s'éloigne du centre de l'épicycle~; sa distance s'appelle \emph{rayon de l'épicycle apparent}. Si l'orbe incliné s'est mû (avec son inclinaison) d'un quart de cercle en sept jours, dix-huit minutes, quatorze secondes, cinquante-deux tierces et quinze quartes d'un jour et une nuit, alors l'orbe rotateur s'est mû, dans le même temps, d'un demi-cercle (car son mouvement est le double de l'élongation). La Lune se déplace donc du périgée de l'épicycle vers son apogée, et la distance du centre du corps de la Lune au centre de l'épicycle devient huit parts~: c'est le rayon de l'épicycle apparent dans les première et seconde quadratures. Si l'orbe incliné s'est mû, pendant une durée double de celle-ci, d'un demi-cercle, alors le centre de l'épicycle est passé par l'opposé du point fixe (sur la droite joignant ce point fixe au centre du monde), du côté opposé au lieu de la conjonction. Il y a donc à présent opposition entre le Soleil moyen et la Lune moyenne. Le rotateur se meut d'un mouvement double de celui de l'orbe incliné, c'est-à-dire d'un cercle entier, et il déplace la Lune vers le périgée du rotateur. \newpage\phantomsection \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI@\RL{.harakaT basI.taT}, mouvement simple} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEANAJBDBAAI@\RL{.harakaT mu_htalifaT}, mouvement irrégulier (\textit{i. e.} non uniforme)} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD@\RL{ta`dIl}, équation} \label{var22} \includepdf[pages=42,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent La distance de la Lune au centre de l'épicycle devient minimale, et le rayon de l'épicycle apparent devient cinq parts et un sixième ($5;10$), comme pendant la conjonction. Donc le rayon de l'épicycle apparent est toujours $5;10$ dans les conjonctions et les oppositions, et huit parts dans les quadratures. On l'a toujours trouvé ainsi dans l'observation. De plus, l'épicycle se meut de son mouvement propre. Le centre de la Lune tourne alors autour du centre de l'épicycle en une trajectoire dont le rayon est $5;10$, comme nous l'avons dit, dans les conjonctions et les oppositions. L'arcsinus de ceci est $4;56,24$. C'est l'équation maximale dans les conjonctions et les oppositions, atteinte quand la Lune est sur la droite tangente à l'épicycle apparent, d'un côté ou de l'autre. 
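[\emph{Note du commentaire}~: les paramètres rappelés ici se recoupent, aux arrondis près~:
\[
13;10,35,1,13 + 0;3,10,38,27 = 13;13,45,39,40,
\qquad
2 \times 12;11,26,41,30 = 24;22,53,23,
\]
c'est-à-dire que le mouvement en latitude est la somme de la Lune moyenne et du mouvement des n{\oe}uds, et que le mouvement du rotateur est le double de l'élongation moyenne~; de même, avec un sinus de rayon $60$, l'équation maximale dans les conjonctions et les oppositions vaut $\arcsin(5;10/60) \simeq 4;56$, conformément à la valeur $4;56,24$ donnée ci-dessus.]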
Mais quand le centre de l'épicycle est dans les quadratures, la distance du centre du corps de la Lune au centre de l'épicycle est de huit parts~: c'est le rayon de l'épicycle apparent dans les quadratures. Son arcsinus est sept parts et deux tiers. C'est ce que nous avons trouvé par l'observation dans les quadratures quand la Lune est sur la droite tangente à l'épicycle apparent d'un côté ou de l'autre. Comme le rapport entre le mouvement propre de la Lune et le mouvement de la Lune moyenne est inférieur au rapport entre la droite joignant le centre du monde au périgée de l'épicycle apparent et le rayon de l'épicycle apparent, alors la Lune n'a ni station ni rétrogradation (mais son mouvement devient lent dans la moitié de l'apogée et rapide dans la moitié du périgée)~: on ne la voit pas revenir en sens inverse des signes à cause de la petitesse de l'épicycle et de la faiblesse de son mouvement par rapport au mouvement de l'orbe incliné. Avec le rayon de l'épicycle apparent variant entre cinq parts un sixième et huit parts, c'est comme si les degrés de lenteur et de vitesse n'étaient pas constants, mais variables, de sorte que la lenteur, la vitesse, \emph{etc.} reviennent, tantôt moins, tantôt plus, à cause de cette variation. De ces orbes aux mouvements simples procèdent les mouvements irréguliers -- mouvements en accord avec l'observation en tout temps. Quant aux variations simples en longitude qui s'imposent en raison de ces mouvements, il y en a quatre~; on les appelle \emph{équations}. \newpage\phantomsection \label{var7} \includepdf[pages=43,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection La première variation\footnote{notée $c_2$ dans notre commentaire, car elle sera désignée comme ``deuxième variation'' page \pageref{deuxieme} ci-dessous.} est ce qu'on obtient à cause du rayon de l'épicycle dans les conjonctions et les oppositions. C'est l'angle formé au centre du monde par deux droites qui en sont issues, l'une passant par le centre de l'épicycle et l'autre par le centre du corps de la Lune. Cet angle est maximal quand la droite passant par la Lune touche l'épicycle apparent d'un côté ou de l'autre (sans le couper). Ce maximum, dans les conjonctions et les oppositions, est de $4;56,24$ (à condition que le rayon de l'orbe incliné soit de soixante parts). Cet angle s'évanouit à l'apogée et au périgée~: c'est quand les deux droites citées sont confondues. Cette variation est soustraite à la Lune moyenne tant que la Lune descend de l'apogée au périgée en sens inverse des signes, dans la moitié droite, jusqu'à ce que la droite passant par la Lune coïncide avec la droite passant par le centre de l'épicycle. Au contraire, on l'ajoute à la Lune moyenne tant que la Lune monte du périgée vers l'apogée dans le sens des signes, jusqu'à ce que l'une des droites coïncide avec l'autre. Cette variation est appelée équation simple ou \emph{première équation}. La deuxième variation est causée par l'augmentation de cette première variation quand le centre de l'épicycle n'est ni en conjonction ni en opposition. Elle est mêlée à la première et n'existe pas sans elle. Elle est maximale quand le centre de l'épicycle est dans les quadratures, c'est-à-dire à un quart de cercle du lieu où se situe la conjonction moyenne ou l'opposition moyenne. Son maximum est $2;44$, soit $7;40$ avec la première équation. Elle est toujours ajoutée à la première équation, que celle-ci soit ajoutée ou soustraite [à la Lune moyenne]. 
On l'appelle \emph{deuxième équation} (et on l'appelait autrefois variation de la distance minimale). \emph{Remarque.} Nous avons déjà avancé que le rayon de l'épicycle apparent est toujours $5;10$ dans les conjonctions et les oppositions, $8;0$ dans les quadratures. Entre ces positions, il augmente graduellement du plus petit au plus grand. La première variation devient, à cause de cette variation du rayon, la somme des deux variations. C'est subtil~: prête attention. La troisième variation\footnote{notée $c_1$ dans notre commentaire.} est appelée \emph{équation de la Lune propre}.\label{c1} C'est l'angle formé au centre de l'épicycle par deux droites qui en sont issues, l'une passant par le centre du rotateur et l'autre par le centre de la Lune. \newpage\phantomsection \index{BFBBBD@\RL{nql}!AJAYAOBJBD BFBBBD@\RL{ta`dIl al-naql}, équation du déplacement} \index{BFASAH@\RL{nsb}!AOBBAGAEBB BFASAH@\RL{daqA'iq al-nisb}, coefficient d'interpolation} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \includepdf[pages=44,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Si celle qui passe par le centre de la Lune touche le rotateur, alors l'équation est maximale, et c'est, selon nos observations, $12;26$. Cette équation s'évanouit lorsque la Lune est à l'apogée ou au périgée par rapport au centre de l'épicycle. On l'ajoute à la Lune propre quand l'élongation du centre de l'épicycle au Soleil moyen est inférieure à un quart de cercle ou supérieure à trois quarts de cercle, c'est-à-dire supérieure à neuf signes ou inférieure à trois signes. On la soustrait quand l'élongation est supérieure à un quart de cercle et inférieure à trois quarts de cercle. La quatrième variation est appelée \emph{équation du déplacement}. Sans cette équation, on obtient la Lune vraie par rapport à l'orbe incliné~; mais le but est d'avoir sa position par rapport au parécliptique pour connaître sa position par rapport à la ceinture du zodiaque. Cette équation est l'écart entre la distance de son lieu au n{\oe}ud sur la ceinture du parécliptique, et cette même distance sur la ceinture de l'orbe incliné. J'en tiens compte quand je veux convertir l'une de ces deux distances en l'autre. On appelle cela <<~déplacer la Lune de l'orbe incliné à l'écliptique~>>. Cette équation est maximale dans les octants de l'orbe incliné, et son maximum est environ six minutes. Elle s'évanouit aux n{\oe}uds et quand la Lune atteint sa latitude maximale de part et d'autre. Les modernes en ont peu fait usage. Le \emph{coefficient d'interpolation} en minutes est un nombre dont le rapport à soixante est la proportion due à la variation de l'élongation par rapport au Soleil. C'est une donnée qui facilite le calcul de la Lune vraie par les tables.\footnote{Ce coefficient est noté $\chi$ dans notre commentaire. Ceci est mieux expliqué au chapitre 11.} \emph{Remarque.} Si nous faisions le calcul selon la méthode de calcul de la première équation en partant du rayon de l'épicycle apparent, alors on obtiendrait la somme due des première et deuxième équations correspondant à l'élongation pour laquelle on aurait fait le calcul. C'est subtil~: prête attention. Parlons de la latitude de la Lune. J'ai dit précédemment que ses dernières latitudes\footnote{<<~ses dernières latitudes~>>, c'est-à-dire les latitudes (Nord et Sud) maximales.} sont égales au Nord et au Sud.
D'après ce qu'on a trouvé en observant, la latitude maximale est de cinq parts. La Lune est au Nord quand elle va de la tête vers la queue, et au Sud quand elle va de la queue vers la tête. La Lune monte de sa dernière latitude Sud à sa dernière latitude Nord, elle descend de sa dernière latitude Nord à sa dernière latitude Sud. \newpage\phantomsection \index{BFAXAQ@\RL{n.zr}!ANAJBDAGBA BEBFAXAQ@\RL{i_htilAf al-man.zr}, parallaxe} \index{ATBCBD@\RL{^skl}!AJATBCBCBDAGAJ@\RL{t^skkulAt}, phases (de la Lune)} \index{BEAMBH@\RL{m.hw}!BEAMBH@\RL{m.hw}, obscurcissement} \index{BFBHAQ@\RL{nwr}!BFBHAQ@\RL{nwr}, lumière} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \includepdf[pages=45,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Le dessein de l'ascension est de rapprocher la Lune du pôle visible, et celui de la descente est de l'en éloigner. Sa parallaxe et ses phases selon sa position par rapport au Soleil seront traitées ultérieurement dans un chapitre particulier, si Dieu le veut. Quant à la variation des parts de sa surface qui reçoivent la lumière et qu'on appelle <<~obscurcissement~>>\footnote{les tâches dues au relief à la surface de la Lune, comme on le sait depuis Galilée.}, le plus vraisemblable est que certaines sont placées face à la lumière et d'autres pas~; ou bien c'est à cause de la variation de sa position par rapport au Soleil. On dit aussi qu'il y a, dans l'épicycle de la Lune, avec elle, des corps distincts qui ne sont pas disposés pareillement face à l'éclairage. On dit aussi que la raison de cette variation est la réflexion des rayons vers la Lune par la mer environnante, à cause de son poli, et l'absence de leur réflexion par la surface du quart habité et du reste du globe, à cause de sa rugosité. En effet les lieux de sa face éclairés par les rayons venant à elle du Soleil après réflexion par la surface de la mer seraient plus lumineux que les lieux qui sont éclairés par les rayons venant seulement du Soleil. \begin{center} \large Section \end{center} Quelle est la distance de la Lune au centre de la Terre~? Dans cette configuration, lors des conjonctions et des oppositions, cette distance vaut soixante parts en moyenne, $65;10$ au maximum et $54;50$ au minimum. Dans les quadratures, elle vaut $68;0$ au maximum et $52;0$ au minimum. La distance maximale atteinte à l'intérieur de ses orbes est $69;0$, et la distance minimale est $51;0$. Le tout, en parts telles que le rayon de l'orbe incliné en compte soixante. Sache que la distance du centre de l'épicycle au centre de la Terre est soixante parts et qu'elle n'augmente ni ne diminue~; mais si tu veux la distance de la Lune en parts telles que le rayon de la Terre en compte une seule, ou bien telle autre quantité en usage pour les distances et les volumes, alors voici la route à suivre selon mes observations. Tu ôtes de chaque degré deux minutes et il restera ce qu'on demandait~; en suivant la voie de Ptolémée, tu jetterais une minute de chaque degré. En effet la distance du centre de l'épicycle au centre du monde est selon nos observations cinquante-huit fois le rayon de la Terre, et selon Ptolémée cinquante-neuf fois le rayon de la Terre. 
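[\emph{Note du commentaire}~: la règle <<~ôter de chaque degré deux minutes~>> revient, en notation moderne, à convertir une distance $d$ exprimée en parts (le rayon de l'orbe incliné valant soixante) en rayons terrestres par
\[
d_{\oplus} \simeq d \times \frac{58}{60},
\]
puisque, selon les observations d'Ibn al-\v{S}\=a\d{t}ir, soixante parts correspondent à cinquante-huit rayons terrestres~; la règle de Ptolémée, <<~jeter une minute de chaque degré~>>, correspond de même au facteur $59/60$. La notation $d_{\oplus}$ est la nôtre.]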
\newpage\phantomsection \index{ALARBGAQ@\RL{jzhr}!BHASAW ALBHARBGAQ@\RL{ws.t al-jwzhr}, n{\oe}ud moyen} \index{ALARBGAQ@\RL{jzhr}!AJBBBHBJBE ALBHARBGAQ@\RL{tqwIm al-jwzhr}, n{\oe}ud vrai} \index{BHASAW@\RL{ws.t}!BHASAW BBBEAQ@\RL{ws.t al-qamar}, Lune moyenne} \index{AHAYAO@\RL{b`d}!AHAYAO BEAVAGAYBA@\RL{b`d m.dA`f}, élongation double} \index{ANAUAU@\RL{_h.s.s}!ANAGAUAUAI BBBEAQ@\RL{_hA.s.saT al-qamar}, Lune propre} \index{APAQBH@\RL{_drw}!APAQBHAI AJAOBHBJAQ BHASAWBI@\RL{_dirwaT al-tadwIr al-ws.t_A}, apogée moyen de l'épicycle} \index{APAQBH@\RL{_drw}!APAQBHAI BEAQAEBJBJAI@\RL{_dirwaT mar'iyyaT}, apogée apparent} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE BBBEAQ@\RL{tqwIm al-qamar}, Lune vraie} \index{AYAQAV@\RL{`r.d}!AMAUAUAI AYAQAV@\RL{.hi.s.saT `r.d}, argument de latitude} \includepdf[pages=46,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Comment interpréter les choses qui se rapportent à la Lune~? -- Le \emph{n{\oe}ud moyen} est [l'arc] entre le commencement du Bélier et le n{\oe}ud ascendant, dans l'orbe parécliptique, en sens inverse des signes.\label{lab002} -- Le \emph{n{\oe}ud vrai} est [l'arc] entre ces deux points dans l'orbe parécliptique, dans le sens des signes. -- La \emph{Lune moyenne} est [l'arc] entre le point face au commencement du Bélier (à condition qu'il soit fixe) et l'extrémité de la droite joignant le centre du monde au centre de l'épicycle. Et si j'avais dit, l'élongation du centre de l'épicycle au point mentionné dans le sens des signes, cela aurait été juste aussi. -- Si l'on ôte le Soleil moyen de la Lune moyenne il reste l'\emph{élongation} du centre de l'épicycle au Soleil moyen, et le double de cette élongation s'appelle l'\emph{élongation double}. C'est égal au mouvement du rotateur de la Lune. -- La \emph{Lune propre} est [l'arc] entre l'apogée moyen de l'épicycle et le centre du corps de la Lune, sur la ceinture de l'épicycle, dans le sens qu'on a supposé [pour l'épicycle] (c'est-à-dire, si on est dans la moitié haute, en sens inverse des signes). La grandeur de ces arcs ne varie pas. Parmi les choses dont la grandeur varie en croissant ou en décroissant, il y a~: -- La \emph{Lune propre corrigée} est [l'arc] entre le centre du corps de la Lune et son apogée apparent sur la ceinture de son épicycle. C'est la véritable Lune propre, après qu'on l'a corrigée par la première équation. -- La \emph{Lune vraie} est [l'arc] entre le commencement du Bélier et le point du parécliptique où tombe son cercle de latitude, dans le sens des signes. -- Son \emph{argument de latitude} est l'excédent de la Lune vraie sur le n{\oe}ud vrai. \newpage\phantomsection \index{ASBFBH@\RL{snw}!ASBFAI BBBEAQBJAI@\RL{sanaT qamariyaT}, année lunaire} \index{ASBFBH@\RL{snw}!ASBFAI ATBEAS@\RL{sanaT al-^sams}, année solaire} \index{ATBGAQ@\RL{^shr}!ATBGAQ BBBEAQBJBJ@\RL{^shr qamariyy}, mois lunaire} \index{ALAOBD@\RL{jdl}!ALAOBHBD@\RL{jdwl}, table} \addcontentsline{toc}{chapter}{I.10 Détermination des mouvements de la Lune à une date donnée} \includepdf[pages=47,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre dix \large Détermination des mouvements de la Lune à une date donnée \end{center} Dans les tables, nous avons écrit que la Lune moyenne à midi du premier jour de l'année sept cent un de l'ère de Yazdgard est sept signes et $3;35,50$ degrés. 
Le mouvement de la Lune moyenne en vingt années persanes est cinq signes et $13;7,40,49,48,21$\footnote{Le <<~cinq signes et $13;7,40,49,48,21$ degrés~>>, c'est notre correction (modulo 360 comme les valeurs suivantes)~; le texte arabe (y compris l'édition page ci-contre) donne <<~deux signes et $7;40,49,48,21$~>>.}, en une seule année quatre signes et $9;23,2,29,25,1$ degrés, en trente jours un signe et $5;17,30,36,56,18,9$ degrés, en un jour et une nuit $13;10,35,1,13,52,36,18,16,7$ degrés, et en une heure égale $0;32,56,27,33,41,31$ degrés. Nous avons aussi écrit que la Lune propre à la date mentionnée est quatre signes et $18;32,27$ degrés. Son mouvement en vingt années persanes est onze signes et $4;22,30$ degrés, en une seule année deux signes et $28;43,7$ degrés, en un mois un signe et $1;56,58$ degrés, en un jour $13;3,53,56$ degrés, et en une heure $0;32,39,45$. Nous avons établi que le n{\oe}ud moyen à la date mentionnée est neuf signes et $5;7,35$ degrés. Son mouvement en sens inverse des signes est toutes les vingt années persanes de zéro signe et $26;34,37,20$ degrés, en une seule année $19;19,43,52$ degrés, en un mois $1;35,19,13,18,54,15$, en un jour $0;3,10,38,26,37,48,29,36$, et en une heure $0;0,7,56,36,6,35$. Ceci étant admis, sache que l'année lunaire\footnote{Une année lunaire compte exactement douze mois lunaires.} est exactement $354;22,1,35,8,50,24$, le mois lunaire moyen\footnote{Avec les notations adoptées dans notre commentaire mathématique, le mois lunaire moyen est $360°/(\omega_m-\omega_m^{\astrosun})$. On trouve bien la valeur donnée ici, jusqu'au cinquième rang sexagésimal, à partir des paramètres $\omega_m$ et $\omega_m^{\astrosun}$ adoptés par {Ibn al-\v{S}\=a\d{t}ir}.} est $29;31,50,7,55,44,12$, l'année solaire exacte est trois cent soixante-cinq jours, cinq heures, quarante-neuf minutes et trois cinquièmes de secondes, c'est-à-dire $365;5,49,0,36,32$.\footnote{Le texte arabe (y compris l'édition ci-contre) dit <<~trois cinquièmes de soixantièmes de secondes~>>, ce qui semble incohérent avec la valeur écrite en chiffres. Pourtant, en calculant l'année solaire avec un mouvement moyen par année persane de $359;45,40$, on trouve 365 jours et $5;49,0,33,36$ heures. Le 36 exprime donc bien <<~trois cinquièmes de soixantièmes de secondes~>>, et l'erreur commise par le copiste serait alors d'avoir oublié trentre-trois secondes. Mais la durée de l'année en jours donnée ensuite confirme le choix que nous avons fait (sinon, elle serait en effet $365;14,32,31,24$).} Supposons que le jour et la nuit comptent soixante parts, alors l'année solaire est $365;14,32,31,31,20$. L'excédent\footnote{Nous n'avons pas compris cette phrase. Est-ce une variation ?} par rapport au cercle est de $6;15,9,8$. \newpage\phantomsection \index{AHBGAJ@\RL{bht}, vitesse apparente} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{BGBFAO@\RL{hnd}!BGBFAO@\RL{al-hnd}, les Indiens} \index{ALBEBGAQ@\RL{jmhr}!ALBEBGBHAQ@\RL{al-jumhUr}, les Grecs} \index{ACAKBJAQ BD-AOBJBF BD-AHBGAQBJ@\RL{'a_tIr al-dIn al-abharI}, Ath{\=\i}r al-D{\=\i}n al-Abhar{\=\i}} \index{BCBHATBJAGAQ AHBF BDAHAHAGBF@\RL{kU^syAr bn labbAn}, K\=u\v{s}y\=ar b. 
Labb\=an} \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \index{AXBDBD@\RL{.zll}!BBAWAQ AXBDBD@\RL{qi.tr al-.zill}, diamètre de l'ombre} \index{ANASBA@\RL{_hsf}!ANASBHBA@\RL{_husUf}, éclipse de Lune} \label{var18} \includepdf[pages=48,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Ceci étant admis, sache que le diamètre de la Lune quand elle est à distance moyenne de la Terre, c'est-à-dire quand la distance Terre-Lune est de soixante parts (en parts du rayon de l'orbe portant l'épicycle), est $0;32,54,33$. J'avais déjà découvert cela en suivant une autre voie et j'avais trouvé $32;22$ mais je préfère la première valeur. Le diamètre de la Lune est selon Hipparque $33;15$, selon Ptolémée $33;5$, selon les Indiens $32;0$, selon les Grecs $32;20$, et selon Ath{\=\i}r\footnote{Ath{\=\i}r al-D{\=\i}n al-Abhar{\=\i}}, K\=u\v{s}y\=ar\footnote{K\=u\v{s}y\=ar b. Labb\=an} et al-\d{T}\=us{\=\i} $32;3$.\label{obs_diam_lune} Si nous voulons le diamètre de la Lune à d'autres distances que la distance moyenne, nous divisons son diamètre à distance moyenne par sa distance au centre du monde (en parts du diamètre du parécliptique)~: on obtient le diamètre de la Lune quand elle est à cette distance.\footnote{C'est-à-dire que $0;32,54,33\times 60=\text{diamètre apparent}\times\text{distance Terre-Lune}$, où le diamètre apparent est exprimé en degrés, et la distance Terre-Lune en soixantièmes du rayon de la trajectoire du centre de l'épicycle dans l'orbe incliné.} La vitesse apparente de la Lune par heure, c'est aussi son diamètre, mais sache que la première méthode est plus exacte. Quant au diamètre de l'ombre, selon Ptolémée, il est deux fois et trois cinquièmes de fois comme le diamètre de la Lune. Je l'ai trouvé, par l'observation de nombreuses éclipses anciennes et récentes, deux fois et demi et un cinquième de fois comme le diamètre de la Lune si nous supposons que la latitude maximale de la Lune est cinq degrés~; et si nous supposons qu'elle est de quatre degrés, deux tiers et un quart de degrés, alors le diamètre de l'ombre est deux fois et deux tiers de fois comme le diamètre de la Lune. Par l'observation, j'ai trouvé que la durée des éclipses est supérieure à ce que prédit le calcul sous l'hypothèse que le diamètre de l'ombre serait deux fois et trois cinquièmes de fois comme le diamètre de la Lune.\footnote{Al-\v{S}\=a\d{t}ir donne donc trois valeurs du diamètre de l'ombre : $2;36$, $2;42$ et $2;40$. La première, attribuée à Ptolémée, est rejetée sur la base d'observations.} \newpage\phantomsection \includepdf[pages=49,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \input{lune_fr.pdf_tex} Les orbes de la Lune figurés par les trajectoires des centres des sphères. Nous y avons représenté l'orbe de l'épicycle et le rotateur en huit positions~: la conjonction, l'opposition, les quadratures et les octants. \end{center} \newpage\phantomsection \includepdf[pages=50,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \input{lune_orbes_fr.pdf_tex} Les orbes solides de la Lune mus de mouvements simples uniformes en leurs centres. Le plan de la figure est le plan de l'orbe incliné~: on a vu que le plan de l'orbe incliné est incliné par rapport au plan de l'écliptique ou du parécliptique d'une inclinaison égale à la latitude de la Lune.
\end{center} \newpage\phantomsection \index{AYAOBD@\RL{`dl}!AJAYAOBJBD BEAMBCBE@\RL{ta`dIl mu.hakam}, équation principale} \addcontentsline{toc}{chapter}{I.11 La Lune vraie par les tables ou par le calcul} \label{var8}\label{var12} \includepdf[pages=51,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre onze \large La Lune vraie par les tables ou par le calcul \end{center} Si tu veux [la Lune vraie], alors calcule à partir des tables la Lune moyenne, la Lune propre et le Soleil moyen de la manière que je t'ai indiquée pour le Soleil. Puis soustrais le Soleil moyen de la Lune moyenne~; le double du résultat est le centre de la Lune, c'est-à-dire l'élongation du centre de l'épicycle au point où la distance est maximale\footnote{Le <<~centre de la Lune~>> est donc ce qu'il a appelé précédemment (chapitre 9) l'<<~élongation double~>>, c'est-à-dire la longueur de l'arc du rotateur entre le centre de la Lune et le point du rotateur où la distance (au centre de l'épicycle) est maximale.}. Entre, avec cela, dans la table des équations de la Lune, et prends-y l'équation de la Lune propre. Si le centre est plus petit que six signes\footnote{Six signes $=180^\circ$.}, alors ajoute l'équation à la Lune propre, et s'il est plus grand que six signes, alors soustrais-la de la Lune propre. On obtient la Lune propre corrigée\footnote{C'est donc la <<~troisième équation~>> du chapitre 9, notée $c_1$ dans notre commentaire.}. Puis entre avec la Lune propre corrigée dans la table des équations de la Lune, et prends-y la troisième\footnote{C'est $c_2(\alpha,0)$ dans notre commentaire, \textit{i. e.} la <<~première équation~>> du chapitre 9.} et la quatrième équation\footnote{C'est $(c_2(\alpha,180°)-c_2(\alpha,0))$ dans notre commentaire, \textit{i. e.} la <<~deuxième équation~>> du chapitre 9.}. Puis entre, avec le centre, dans la table des équations de la Lune, prends-y la deuxième équation\footnote{C'est $\chi$ dans notre commentaire, appelé <<~coefficient d'interpolation~>> au chapitre 9.}, multiplie-la par la quatrième, et ajoute le tout à la troisième~; on obtient la troisième équation \emph{principale}\footnote{Nous traduisons \textit{mu\d{h}km} par \emph{principale}. Pour la Lune comme pour les autres astres, l'équation principale est celle calculée au moyen des transformations planes, après rabattement du plan de l'orbe incliné dans le plan de l'écliptique. Elle néglige donc l'effet des inclinaisons des orbes. Mais peut-être ce nom renvoie-t-il aussi au fait que cette équation est toujours calculée par interpolation -- alors aurait-on pu aussi traduire par \emph{équation interpolée}.}. Puis regarde si la Lune propre corrigée est inférieure à six signes. Dans ce cas, retranche la principale de la Lune moyenne~; mais si elle est plus grande que six signes, alors ajoutes-la à la Lune moyenne. On obtient la Lune vraie par rapport à la ceinture de l'orbe incliné. Puis retranche le n{\oe}ud vrai de la Lune vraie. Il reste l'argument de latitude de la Lune. Entre-le dans la table de l'équation du déplacement de la Lune\footnote{C'est la <<~quatrième équation~>> du chapitre 9.}. Retranche ce que tu y trouves de la Lune vraie si l'argument de latitude de la Lune est compris entre un et quatre-vingt-dix degrés ou bien entre cent quatre-vingt et deux cent soixante-dix degrés, et ajoute-le sinon. 
\newpage\phantomsection \label{var19} \includepdf[pages=52,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent On obtient la Lune vraie par rapport à l'écliptique à la date pour laquelle tu as fait le calcul, à midi, à Damas~; et si la date n'a pas encore été corrigée par la correction des jours avec leurs nuits, alors il faut la corriger comme je l'ai enseigné pour le Soleil\footnote{Voir chapitre 8.}. Si tu veux la latitude de la Lune, entre l'argument de latitude dans la table des latitudes de la Lune, tu y trouves la latitude de la Lune et son sens\footnote{son <<~sens~>>, c'est-à-dire si elle est orientée vers le Nord ou bien vers le Sud.}. La manière d'y arriver par le calcul est que tu multiplies le sinus de l'argument de latitude par la tangente de la latitude entière\footnote{L'\emph{argument de latitude} est ici l'argument de latitude \emph{par rapport à l'écliptique} (de même que {Ibn al-\v{S}\=a\d{t}ir} a distingué précédemment la Lune vraie par rapport à la ceinture de l'orbe incliné et la Lune vraie par rapport à l'écliptique). La \emph{latitude entière} est l'inclinaison de l'orbe incliné par rapport à l'écliptique, $5^\circ$. Voir commentaire mathématique.} de la Lune, on obtient la tangente de la latitude partielle de la Lune. Tu trouves son arc dans la table des tangentes inverses~: c'est la latitude. On a déjà parlé de son sens. \begin{center} \large Section \normalsize Détermination de l'équation de la Lune propre par le calcul \end{center} Détermine le centre de la Lune et multiplie son sinus par le rayon de l'orbe portant le corps de la Lune, c'est-à-dire par $1;25$. Appelons ce qu'on obtient \emph{le dividende}. Multiplie ensuite le cosinus du centre par $1;25$. \^Ote le résultat du rayon de l'épicycle ($6;35$) si le centre est compris entre un et quatre-vingt-dix degrés ou bien entre deux cent soixante-dix et trois cent soixante degrés. Sinon, ajoute-le au rayon de l'épicycle. Ajoute le carré de ce qu'on obtient au carré du dividende. Prends-en la racine carrée, c'est le rayon de l'épicycle apparent. Garde-le, tu en auras besoin ici et ailleurs. Ensuite, divise le dividende par le rayon de l'épicycle apparent, il en sort le sinus de l'équation de la Lune propre~; écris-la en face de ce degré dans la table. \newpage\phantomsection \includepdf[pages=53,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Section \normalsize Calcul de la deuxième équation\footnote{\label{deuxieme} C'est $c_2$ dans notre commentaire. \`A nouveau, Ibn al-\v{S}\=a\d{t}ir a changé de numérotation. Il désigne ici par <<~deuxième équation~>> celle qu'il appelait <<~troisième équation~>> au début de ce chapitre, et <<~première équation~>> au chapitre neuf.} de la Lune. \end{center} Multiplie le sinus de la Lune propre corrigée et son cosinus par le rayon de l'épicycle apparent lors d'une conjonction ou d'une opposition, c'est-à-dire par l'excédent du rayon de l'épicycle sur le rayon du rotateur, ce qui fait $5;10$. Ce qu'on obtient du cosinus, ajoute-le toujours à soixante si la Lune propre corrigée est dans la moitié supérieure de l'épicycle, et ôte-le si elle est dans sa moitié inférieure. Ce qu'on obtient, ajoute son carré au carré du produit du sinus du mouvement propre par $5;10$ (ce dernier produit est ce qu'on appelle à présent le \emph{dividende}). Prends la racine carrée de cette somme. Divise le dividende par cela. On obtient le sinus de la deuxième équation correspondant au degré pour lequel tu as fait le calcul.
Prends son arc dans la table des sinus~: tu trouves l'équation correspondant à cette position. La manière de calculer la troisième équation est la suivante. Tu supposes que le rayon de l'épicycle apparent est de huit degrés, c'est-à-dire le sinus de sept degrés et deux tiers, et tu suis ce que je t'ai enseigné pour la deuxième équation. Ce qu'on obtient pour chaque position, soustrais-en la deuxième équation pour cette même position. \'Ecris le résultat en face de cette position. \emph{Remarque.} Si tu utilises le rayon de l'épicycle apparent à une position donnée au lieu de $5;10$, et que tu suis le procédé indiqué, il en sort l'équation principale, c'est-à-dire la deuxième équation corrigée. \newpage\phantomsection \includepdf[pages=54,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Section \normalsize Calcul du coefficient d'interpolation pour la Lune \end{center} Si tu veux cela, prends l'arc du rayon de l'épicycle apparent dans la table des sinus~; ce qu'on obtient est la variation maximale pour ce degré comme équation principale. Retranches-en $4;56,24$ qui est le maximum de la deuxième équation. Ce qui reste, divise-le par $2;43,21$ (c'est la différence entre l'équation maximale dans les quadratures $7;39,45,11$ et le maximum de la deuxième équation). Ce qui sort de la division, c'est le coefficient d'interpolation~; écris-le en face de ce degré. \emph{Remarque.} Sache qu'il est possible de construire d'autres tables pour la Lune vraie, autrement disposées que celles connues. Pour cela, tu supposes que l'épicycle de la Lune est l'épicycle véritable dont le rayon est $6;35$. Par rapport à ce nombre, tu calcules une différence qui lui est retranchée à distance maximale, et une différence qui lui est ajoutée à distance minimale\footnote{<<~à distance minimale~>>, il veut dire ici quand le rayon de l'épicycle apparent est maximal, c'est-à-dire que la Lune peut être au plus proche de la Terre.} quand il vaut huit parts. Par rapport à ce nombre-ci, tu calcules une autre différence qui en est retranchée avec un coefficient d'interpolation (à l'opposé de ce qu'on a vu ci-dessus\footnote{Veut-il dire simplement que l'interpolation se fait ici par soustraction plutôt que par addition~?}). Enfin, on peut aussi faire le calcul de la Lune vraie au moyen d'une table unique\footnote{Dans ce qui suit, Ibn al-\v{S}\=a\d{t}ir semble expliquer comment construire, pour chaque valeur du centre, une <<~table unique~>> contenant l'équation de la Lune propre et toutes les valeurs de l'équation principale correspondant aux différentes valeurs de la Lune propre. Il conçoit donc une sorte de tableau à double entrée pour représenter l'équation principale comme fonction de deux variables~: le centre et la Lune propre. Chaque colonne de ce tableau est elle-même une table, et il explique dans le paragraphe suivant comment construire la colonne correspondant à un centre de $90^\circ$.} qui montrerait à la fois la variation maximale à une position donnée dans l'orbe portant [l'épicycle]\footnote{L'orbe portant l'épicycle, c'est-à-dire l'orbe incliné.} et la correction du mouvement propre dépendant de cette position. \newpage\phantomsection \label{var20} \includepdf[pages=55,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Par exemple, si nous supposons que le centre de la Lune (c'est-à-dire l'élongation double) est de trois signes, alors la correction du mouvement propre est $12;26$. 
D'autre part, si le centre est de trois signes, la variation maximale est de six parts et un sixième de part~; alors nous avons calculé la deuxième équation sous l'hypothèse que son maximum est de six parts et un sixième\footnote{Veut-il dire que, pour un centre de $90^\circ$, il ne calcule de manière exacte qu'une seule valeur de la deuxième équation (son maximum, dénoté $\max\vert e(\cdot,90^\circ)\vert$ dans notre commentaire mathématique), et calcule toutes les autres par interpolation en utilisant cette unique valeur~? En effet, il serait bien fastidieux de calculer de manière exacte chaque cellule du tableau à double entrée~; mais l'explication reste sujette à interprétation.}. Nous avons écrit ceci en face de $12;26$. Quand nous entrons dans cette table le mouvement propre absolu (je veux dire, sans sa correction), nous trouvons ainsi l'équation principale sans équation additionnelle~; mais nous aurons alors besoin de nombreuses tables\footnote{Peut-être veut-il dire qu'il y aura de nombreuses colonnes dans ce tableau à double entrée : en effet, une colonne pour chaque valeur de l'élongation double.}. \begin{center} \large Section \end{center} \emph{Si tu veux la distance de la Lune au centre du monde,} prends connaissance du sinus de la Lune propre corrigée et de son cosinus~; multiplie chacun par le rayon de l'épicycle apparent ($5;10$ lors d'une conjonction ou d'une opposition, sinon, c'est ce que je t'ai demandé de conserver dans un paragraphe antérieur). Ce qu'on obtient à partir du sinus de la Lune propre, c'est le \emph{dividende} et garde-le. Ce qu'on obtient à partir du cosinus, ajoute-le à soixante si la Lune propre corrigée est inférieure à trois signes ou supérieure à neuf signes, sinon ôte-le de soixante. Le total ou le reste, prends son carré et ajoute-le au carré du dividende. Prends la racine carrée de cela, c'est la distance de la Lune au centre du monde à condition que le rayon de l'orbe portant [l'épicycle]\footnote{C'est-à-dire le rayon de l'orbe incliné.} soit soixante. \emph{Remarque.} Si nous divisons par cette distance le diamètre de la Lune à distance moyenne (c'est $0;32,54,33$), alors on obtient son diamètre à cette distance. Si nous multiplions la vitesse apparente de la Lune par jour par deux minutes et demi\footnote{Deux minutes et demi font un vingt-quatrième de degré.}, alors on obtient aussi le diamètre de la Lune, mais la première méthode est plus précise. Dans les conjonctions et les oppositions, le diamètre minimum de la Lune est $0;30,18$, et son diamètre maximum $0;36,0$. Dans les quadratures son diamètre minimum est $0;29,2,15$ et son diamètre maximum $0;37,58,20$. Sache cela. \emph{Seconde remarque.} Si nous divisions par cette distance le dividende qu'on a gardé, il en sortirait le sinus de l'équation principale de la Lune. Comprends cela. 
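[\emph{Note du commentaire}~: en notation moderne, et sous réserve d'interprétation, les procédés de ce chapitre se résument ainsi. Si $2\eta$ désigne le centre (l'élongation double) et $\alpha$ la Lune propre corrigée, le rayon de l'épicycle apparent est
\[
\rho(2\eta) = \sqrt{\bigl(6;35 - 1;25\cos 2\eta\bigr)^{2} + \bigl(1;25\sin 2\eta\bigr)^{2}},
\]
qui vaut bien $5;10$ pour $2\eta = 0$ et $8;0$ pour $2\eta = 180^{\circ}$~; l'équation de la Lune propre est $c_1 = \arcsin\bigl(1;25\,\sin 2\eta/\rho(2\eta)\bigr)$, de maximum $\arcsin(1;25/6;35) \simeq 12;26$~; enfin, d'après la remarque ci-dessus, l'équation principale s'obtient en remplaçant $5;10$ par $\rho(2\eta)$ dans le calcul de la deuxième équation, soit
\[
c(\alpha, 2\eta) = \arcsin\frac{\rho\sin\alpha}{\sqrt{\bigl(60 + \rho\cos\alpha\bigr)^{2} + \bigl(\rho\sin\alpha\bigr)^{2}}}.
\]
Les notations $\eta$, $\rho$ et $c$ sont les nôtres~; $c_1$ et $c_2$ sont celles du commentaire.]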
\newpage\phantomsection \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{ACBHAL@\RL{'awj}!AMAQBCAIACBHAL@\RL{.harakaT al-'awj}, mouvement des apogées} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR@\RL{.harakaT al-markaz}, mouvement du centre (en général, le centre d'un épicycle)} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{ANAUAU@\RL{_h.s.s}!ANAGAUAUAI BCBHBCAH@\RL{_hA.s.saT al-kwkab}, astre propre (traduit par ``anomalie'' par de nombreux auteurs)} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \addcontentsline{toc}{chapter}{I.12 Configuration des orbes de Saturne selon la vraie méthode} \includepdf[pages=56,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre douze \large Configuration des orbes de Saturne selon la vraie méthode \end{center} Parmi les orbes de Saturne nous imaginons~: -- un \emph{orbe parécliptique} représentant l'orbe de l'écliptique, dans son plan, autour de son centre et sur ses pôles -- un deuxième orbe, \emph{incliné} par rapport au parécliptique d'une inclinaison constante de deux parts et demi et le coupant en deux points opposés dont l'un s'appelle la tête et l'autre la queue -- un troisième orbe dont le centre est sur le bord de l'orbe incliné et dont le rayon fait cinq parts et un huitième (en parts telles que le rayon de l'orbe incliné en compte soixante)~; il s'appelle \emph{déférent} -- un quatrième orbe dont le centre est sur le bord du déférent et dont le rayon fait un degré, quarante-deux minutes et trente secondes~; il s'appelle \emph{rotateur} -- un cinquième orbe dont le centre est sur le bord du rotateur et dont le rayon fait six parts et demi (en les mêmes parts)~; il s'appelle \emph{orbe de l'épicycle} Le centre du corps de Saturne est attaché en un point de la ceinture de l'épicycle. Quant aux mouvements, le parécliptique se meut d'un mouvement simple autour de son centre, dans le sens des signes. C'est le \emph{mouvement des Apogées} qui fait chaque jour $0;0,9,52$. Les intersections -- la tête et la queue -- se déplacent donc, et les deux inclinaisons extrêmes aussi. L'orbe incliné se meut d'un mouvement simple autour de son centre qui est centre de l'univers, dans le sens des signes comme le précédent. Sa mesure est le \emph{mouvement du centre de Saturne} et c'est l'excédent du mouvement moyen sur le mouvement de l'Apogée~: il fait, en un jour et une nuit, $0;2,0,26,17$. Le déférent se meut en sens contraire dans sa partie supérieure, de la même quantité que le mouvement du centre de Saturne, c'est-à-dire, en un jour et une nuit, $0;2,0,26,17$. Le rotateur se meut dans le sens des signes dans sa partie supérieure, d'une quantité double de celle du mouvement du centre de Saturne, c'est-à-dire, en un jour et une nuit, $0;4,0,52,34$. Une révolution du rotateur s'achève donc avec une demi-révolution du déférent. L'orbe de l'épicycle se meut d'un mouvement simple autour de son centre dans le sens des signes dans sa partie supérieure. Sa mesure est l'excédent du \emph{mouvement propre} de Saturne sur le mouvement de son centre. 
\newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \index{APAQBH@\RL{_drw}!APAQBHAI@\RL{_dirwaT}, sommet, apogée} \includepdf[pages=57,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Ce mouvement est simple-composé. En effet, supposons que l'épicycle est immobile\footnote{Immobile, au sein du référentiel constitué par l'orbe rotateur.}. L'orbe incliné se meut d'un quart de cercle, l'orbe déférent d'un quart de cercle, et l'orbe rotateur d'un demi cercle~: cela signifie que le diamètre passant par l'apogée et le périgée de l'épicycle s'est déplacé d'un quart de cercle [en trop]. Nous devons donc supposer qu'il est animé d'un mouvement en sens contraire égal au mouvement du déférent, c'est-à-dire au mouvement du centre de Saturne. Ainsi le diamètre passant par l'apogée et le périgée se déplacera vers son lieu. Nous avons trouvé par l'observation que le mouvement propre dont se meut l'épicycle fait, en un jour et une nuit, $0;57,7,43,34,22$, mais on vient de voir que cet orbe se meut en sens contraire aux signes, d'un mouvement simple autour de son centre. Or il semble se mouvoir dans le sens des signes, d'un mouvement simple autour de son centre, donc le plus petit des deux mouvements est soustrait du plus grand, c'est-à-dire que nous soustrayons du mouvement propre le mouvement du centre. Il reste le mouvement de l'orbe de l'épicycle dans le sens des signes, égal à l'excédent du mouvement propre de Saturne sur le mouvement de son centre~: par jour, $0;55,7,17,17,22$. C'est son vrai mouvement simple, sauf que son apogée, dans le sens des signes, se meut toujours d'autant que le mouvement propre de Saturne, somme de ce mouvement simple et du mouvement de son centre. Il s'agit d'une notion subtile, mais il faut la comprendre. Il en est de même pour les autres planètes, donc on n'aura pas à répéter cette explication.\footnote{Le mot <<~apogée~>>, \textit{dhirwa}, désigne l'extrémité d'un diamètre de l'épicycle parallèle à la direction Terre--centre~de~l'épicycle. La grandeur appelée <<~mouvement de l'astre propre~>> désigne le mouvement de rotation de l'épicycle sur lui-même \emph{relativement à ce diamètre}~; mais le mouvement de l'épicycle \emph{relativement à l'orbe rotateur}, excédent du mouvement propre sur le mouvement du centre, intéresse {Ibn al-\v{S}\=a\d{t}ir} davantage.}. Quand nous voudrons l'élongation entre l'astre et l'apogée, nous devrons sommer le mouvement de cet épicycle et le mouvement du centre de Saturne~: le résultat sera comme le mouvement propre de Saturne, ce sera l'élongation entre l'astre et l'apogée. Je t'ai expliqué cette notion, elle contient une subtilité, examine-la donc avec attention. Supposons que les centres du déférent, du rotateur et de l'épicycle soient alignés sur une droite passant par le centre du Monde et par l'Apogée. Alors le centre de l'épicycle est à distance maximale de la Terre, c'est-à-dire $63;25$. Maintenant, si l'orbe incliné se meut d'un quart de cercle, que le déférent se meut, dans le même temps, d'un quart de cercle aussi, et que le rotateur se meut, dans le même temps, d'un demi-cercle, alors la distance du centre de l'épicycle au centre du déférent devient six parts, une demi-part et un tiers de part. 
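[\emph{Note du commentaire}~: les valeurs numériques données ici se vérifient à partir des rayons posés plus haut~: à l'Apogée, les centres étant alignés, la distance du centre de l'épicycle au centre du Monde est $60 + 5;7,30 - 1;42,30 = 63;25$~; dans la configuration décrite (quart de cercle pour l'orbe incliné et pour le déférent, demi-cercle pour le rotateur), la distance du centre de l'épicycle au centre du déférent est $5;7,30 + 1;42,30 = 6;50$, soit <<~six parts, une demi-part et un tiers de part~>>~; enfin
\[
0;57,7,43,34,22 - 0;2,0,26,17 = 0;55,7,17,17,22
\]
pour le mouvement de l'orbe de l'épicycle, excédent du mouvement propre sur le mouvement du centre.]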
\newpage\phantomsection \index{AYAOBD@\RL{`dl}!AJAYAOBJBD ANAGAUAUAI@\RL{ta`dIl al-_hA.s.saT}, équation de l'astre propre} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD BEAQBCAR@\RL{ta`dIl al-markaz}, équation du centre} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD@\RL{ta`dIl}, équation} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{APAQBH@\RL{_drw}!APAQBHAI BEAQAEBJBJAI@\RL{_dirwaT mar'iyyaT}, apogée apparent} \index{APAQBH@\RL{_drw}!APAQBHAI AMBBBJBBBJBJAI@\RL{_dirwaT .haqIqiyyaT}, apogée vrai} \includepdf[pages=58,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Traçons une droite du centre du Monde au centre de l'épicycle, et une autre droite du centre du Monde au centre du déférent. L'angle entre ces deux droites est l'angle de la \emph{première équation de Saturne}. L'\emph{équation de l'astre propre} est de cette même grandeur. On explique ceci sur un exemple. Si l'orbe incliné se meut d'un demi-cercle, que le déférent se meut d'un demi-cercle aussi, et que le rotateur se meut, dans le même temps, d'un cercle entier, alors le centre de l'épicycle vient à distance minimale du centre du Monde, c'est-à-dire $56;35$, et la droite allant du centre du Monde au centre de l'épicycle se confond avec celle allant du centre du Monde au centre du déférent~; ainsi la première équation (qui est aussi bien équation du centre qu'équation de l'astre propre) s'évanouit. Comme on a trouvé que le Soleil rejoint Saturne au milieu de son mouvement direct et s'oppose à Saturne au milieu de sa rétrogradation, on a su que le mouvement de l'orbe de l'épicycle vaut l'excédent du mouvement de Soleil moyen sur Saturne moyen. Dès lors, nous avons su qu'en soustrayant Saturne moyen du Soleil moyen, reste le mouvement propre de Saturne. De même pour Jupiter et Mars. Ceci étant admis, sache qu'à ces orbes et ces mouvements sont liées des anomalies qu'on appelle \emph{équations}. La première équation s'appelle \emph{équation du centre et de l'astre propre}. C'est l'angle au centre du Monde (le centre de l'orbe incliné) entre deux droites qui en sont issues, l'une passant par le centre du déférent, et l'autre par le centre de l'épicycle. Cet angle est égal à l'angle au centre de l'épicycle entre deux droites qui en sont issues, l'une passant par le centre du Monde, et l'autre, parallèle à la droite joignant le centre du déférent au centre du Monde. L'écart entre les extrémités de ces deux droites le long de l'épicycle, c'est l'écart entre l'apogée vrai et l'apogée apparent\footnote{\textit{Cf.} figure \ref{fig040} p.~\pageref{fig040} pour mieux saisir la terminologie adoptée par {Ibn al-\v{S}\=a\d{t}ir} pour les planètes supérieures et en particulier pour Saturne. L'\emph{apogée apparent} et le \emph{périgée apparent} sont les points de l'épicycle alignés avec le centre du Monde et le centre de l'épicycle.}. Il est démontré que l'équation de l'astre propre est égale à l'équation du centre à cause de l'égalité des deux angles. On le voit clairement sur l'exemple. Cette équation s'évanouit à l'Apogée et au périgée. Elle s'ajoute au centre et se retranche de l'astre propre quand le centre est supérieur à six signes~; elle se retranche du centre et s'ajoute à l'astre propre quand le centre est inférieur à six signes. 
Son maximum, pour Saturne, est de six parts, un tiers de part et un cinquième de part, atteint quand l'élongation entre le centre du déférent et l'Apogée est, dans un sens ou dans l'autre, trois signes et quatre degrés. La \emph{deuxième équation} est causée par l'épicycle. \newpage\phantomsection \index{APAQBH@\RL{_drw}!APAQBHAI BEAQAEBJBJAI@\RL{_dirwaT mar'iyyaT}, apogée apparent} \index{APAQBH@\RL{_drw}!APAQBHAI AMBBBJBBBJBJAI@\RL{_dirwaT .haqIqiyyaT}, apogée vrai} \index{BFASAH@\RL{nsb}!AOBBAGAEBB BFASAH@\RL{daqA'iq al-nisb}, coefficient d'interpolation} \index{ACBHAL@\RL{'awj}!ACBHAL@\RL{'awj}, Apogée} \index{AQBCAR@\RL{rkz}!BEAQBCAR@\RL{markaz}, centre} \index{BHASAW@\RL{ws.t}!BHASAW@\RL{ws.t}, astre moyen} \index{ANAUAU@\RL{_h.s.s}!ANAGAUAUAI BCBHBCAH@\RL{_hA.s.saT al-kwkab}, astre propre (traduit par ``anomalie'' par de nombreux auteurs)} \includepdf[pages=59,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent C'est l'angle au centre du Monde entre deux droites qui en sont issues~: l'une passant par le centre de l'épicycle, et l'autre par l'astre. Son maximum est atteint quand celle-ci est tangente à l'épicycle. Elle s'évanouit à l'apogée et au périgée apparents de l'épicycle. Son maximum, pour Saturne, est six parts et treize minutes. On l'ajoute à Saturne moyen quand l'astre propre corrigé est inférieur à six signes, et on la retranche de Saturne moyen quand l'astre propre corrigé est supérieur à cela. La \emph{troisième équation} est la variation de la deuxième\footnote{\textit{Cf.} chapitre 23, p. \pageref{troisieme_equation} \textit{infra}.}. En effet, la deuxième équation varie selon la plus ou moins grande distance entre le centre de l'épicycle et le centre du Monde. On ajoute ou on retranche cette troisième équation, selon qu'on ajoute ou qu'on retranche la deuxième. Le \emph{coefficient d'interpolation}, c'est un nombre dont le rapport à soixante est comme le rapport entre ce qu'il faut de la troisième anomalie et la totalité de cette anomalie. On clarifiera cela lors du calcul des équations. \begin{center} \large Section \end{center} Ceci étant admis, sache que l'\emph{Apogée de l'astre} est un arc de l'orbe incliné entre deux droites issues de son centre~: l'une passant par l'équinoxe de printemps\footnote{{Ibn al-\v{S}\=a\d{t}ir} ne prend plus la peine de préciser <<~le point en face de l'équinoxe de printemps~>>, c'est-à-dire le point de l'orbe incliné situé à la même distance du n{\oe}ud que l'équinoxe de printemps sur l'écliptique (\textit{cf.} modèle de la Lune).}, et l'autre passant par le centre du déférent, le centre du rotateur et le centre de l'épicycle quand le centre de l'épicycle est à distance maximale du centre de l'orbe incliné. En effet ces trois centres sont alignés à l'Apogée et au périgée~; et j'appelle Apogée la distance maximale. Le \emph{centre de l'astre} est un arc de l'orbe incliné entre deux droites issues de son centre~: l'une passant par l'Apogée et l'autre par le centre du déférent. L'\emph{astre moyen} est la somme de l'Apogée et du centre. C'est un arc de l'orbe incliné compris entre deux droites issues de son centre~: l'une passant par le commencement du Bélier et l'autre par le centre du déférent. L'\emph{[astre] propre} est un arc de l'orbe de l'épicycle entre l'astre et l'apogée vrai, dans le sens des signes. L'\emph{[astre propre] corrigé} est un arc de cet orbe entre l'astre et l'apogée apparent, dans le sens des signes. 
L'\emph{apogée apparent} de cet orbe, c'est le point où le coupe la droite passant par le centre de l'épicycle et le centre de l'orbe incliné, dans la partie supérieure de l'épicycle (l'autre point est le périgée apparent). Menons une droite issue du centre du Monde (centre de l'écliptique et centre de l'orbe incliné) et passant par l'astre, jusqu'à l'orbe de l'écliptique. \newpage\phantomsection \index{BBBHBE@\RL{qwm}!BEBBBHBE BCBHBCAH@\RL{mqwm al-kawkab}, astre vrai (= longitude vraie de l'astre)} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \includepdf[pages=60,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Prenons un des cercles de latitude qui passe par l'extrémité de cette droite. Il coupe l'orbe de l'écliptique en un point qui est l'astre vrai\footnote{Nous traduisons \textit{mqwm al-kawkab} ainsi que \textit{taqw{\=\i}m al-kawkab} par <<~astre vrai~>>.}~; c'est-à-dire que l'élongation entre ce point et le commencement du Bélier est l'astre vrai. L'élongation entre ce point et l'astre, sur ce cercle, est la latitude de l'astre. Nous avons mis la question des latitudes des astres dans un chapitre séparé pour en faciliter la compréhension. Ceci étant admis, sache qu'une première manière, c'est d'appliquer les démonstrations géométriques aux trajectoires des centres des sphères solides appartenant à Saturne. Quant aux sphères, on les représente comme suit. Nous imaginons un orbe circonscrit à l'ensemble des orbes de Saturne, dans le plan du zodiaque et sur les pôles du zodiaque. C'est l'orbe parécliptique. Sa surface extérieure touche la surface intérieure du huitième orbe, et sa surface intérieure touche la surface extérieure de l'orbe incliné de Saturne. La surface intérieure de l'orbe incliné de Saturne touche la surface extérieure de l'orbe parécliptique de Jupiter, et l'épaisseur de l'orbe incliné est vingt-six degrés et trente minutes\footnote{Cette valeur est certainement corrompue~: l'épaisseur de l'orbe incliné ne peut être inférieure à $26;40$, diamètre de l'orbe déférent.}. Son plan est incliné par rapport au plan de l'orbe du zodiaque, de deux parts et demi. Ensuite, nous imaginons un globe entièrement sphérique, de rayon treize degrés et un tiers de degré, appelé le déférent. Nous imaginons une deuxième sphère, de rayon huit parts, douze minutes et demi, plongée dans l'orbe déférent et le touchant en un point. Elle s'appelle orbe rotateur. Nous imaginons une troisième sphère, de rayon six parts et demi, plongée dans l'orbe rotateur et le touchant en un point opposé au point où le rotateur touche le déférent. Elle s'appelle orbe de l'épicycle. L'astre lui-même est plongé dans l'épicycle. Il le touche en un point opposé au point où il touche le rotateur -- quand il est à l'Apogée. Nous supposons que le déférent est plongé à l'intérieur de l'orbe incliné et le touche au point de l'Apogée, puis que chaque orbe se meut et que la grandeur et le sens de son mouvement sont comme nous les avons supposés. Alors le parécliptique se meut autour des pôles du zodiaque dans le sens des signes, et son mouvement est comme le mouvement des Apogées. L'orbe incliné se meut dans le sens des signes, d'un mouvement simple autour de son centre, de la grandeur du mouvement du centre de Saturne, c'est-à-dire, par jour, $0;2,0,26,17$. 
\newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \includepdf[pages=61,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent L'orbe déférent se meut dans le sens contraire aux signes dans sa partie supérieure~; son mouvement est de la grandeur du mouvement du centre de Saturne. Le rotateur, dans sa partie supérieure, se meut dans le sens des signes, d'une grandeur double de celle du mouvement du centre de Saturne. Enfin, l'orbe de l'épicycle se meut d'un mouvement simple autour de son centre, dans le sens des signes dans sa partie supérieure~; son mouvement est l'excédent du mouvement propre de Saturne sur le mouvement de son centre -- c'est le mouvement simple-composé qu'on a déjà expliqué. Ainsi varient les positions des orbes, et semblent aussi varier leurs mouvements qui sont composés de mouvements simples. A l'Apogée, l'astre est à distance maximale s'il est à l'apogée de l'épicycle. De même, il est à distance minimale s'il est au périgée de l'épicycle quand l'épicycle est au périgée du déférent. Entre Apogée et périgée, il est à distance intermédiaire.\footnote{Quand il s'agit de l'orbe de l'épicycle, le mot \textit{\b{d}irwa} désigne en général un point situé \emph{dans la partie supérieure} de l'orbe, et non pas le point situé à l'apogée de l'orbe de l'épicycle \emph{relativement au centre de l'orbe portant} cet orbe. On peut aussi penser qu'{Ibn al-\v{S}\=a\d{t}ir} fait alors abstraction des orbes intermédiaires entre l'orbe déférent et l'orbe de l'épicycle, comme il semblera le faire quand il parlera parfois d'un ``épicycle apparent''. L'orbe qui porte le centre de l'épicycle apparent est bien l'orbe déférent. L'apogée de l'épicycle est alors le point situé dans la ``partie supérieure'' de l'orbe de l'épicycle~: mais c'est le périgée de l'épicycle relativement au centre de l'orbe rotateur.} Sache que l'astre Saturne n'atteint ni le point de ses orbes le plus éloigné du centre du Monde, ni le point le plus proche. La distance maximale de Saturne est, en parts, soixante-neuf, deux tiers et un quart~; sa distance minimale est, en parts, cinquante et un demi-sixième~; mais pour compléter les orbes et en faire des sphères circonscrites les unes aux autres, la distance maximale au sein de l'orbe incliné de Saturne doit être (rayon de l'orbe incliné plus rayon de l'orbe déférent), en parts, soixante-treize et un tiers. En sus, il y a l'épaisseur du globe du parécliptique~; qu'on la prenne égale à deux tiers de part, le total est soixante-quatorze parts. La distance minimale est à la surface intérieure de l'orbe incliné de Saturne, en parts, quarante-six et deux tiers~; en plus de la réunion des orbes, il faut aussi compter le rayon de l'astre donc cela fait quarante-six parts.\label{contiguite_sat} Sache cela, et fais de manière analogue pour les autres astres. Ce sont des modèles parfaits qui échappent aux doutes~; ce sont des sphères parfaites, et chacun de leurs mouvements est un mouvement simple, uniforme autour du centre de l'orbe qui se meut, un mouvement qui décrit des portions égales en des temps égaux. Gloire à Dieu qui nous a donné ceci et nous l'a appris. \newpage\phantomsection \includepdf[pages=62,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] 
\centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{saturne_fr.pdf_tex} \end{minipage} Les orbes de Saturne, représentées dans le plan par les ceintures trajectoires des centres des orbes solides ou par les surfaces de ces ceintures \end{figure} \newpage\phantomsection \includepdf[pages=63,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] \centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{saturne_orbes_fr.pdf_tex} \end{minipage} Les orbes de Saturne~: sphères entières représentées dans le plan à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \index{ACAQAN@\RL{'ara_ha}!AJABAQBJAN BEAJBBAOAOBE@\RL{ta'rI_h mtqddm}, \'Epoque} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \addcontentsline{toc}{chapter}{I.13 La régulation des mouvements de Saturne et leurs conditions initiales à l'\'Epoque} \includepdf[pages=64,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre treize \large La régulation des mouvements de Saturne et leurs conditions initiales à l'\'Epoque \end{center} Nous avons observé que Saturne moyen, à midi du premier jour de l'année sept cent un de l'ère de Yazdgard, est 5 signes et $7;58,20$ degrés, et son Apogée 8 signes et $14;52$ degrés\footnote{Pour Saturne et pour les autres planètes, Kennedy et Roberts ont retrouvé les valeurs des Apogées à cette date par un calcul de précession au moyen des valeurs indiquées dans le \emph{Z{\=\i}j} de {Ibn al-\v{S}\=a\d{t}ir } pour une autre date~; ceci confirme cette lecture (en particulier pour le chiffre 52) et explique aussi la coïncidence des rangs fractionnaires des Apogées des cinq planètes (voir \cite{roberts1959} p.~232).}. Le mouvement de Saturne moyen en vingt années persanes est 8 signes et $4;33,20$ degrés~; son mouvement en une année persane est $12;13,40$ degrés, en trente jour $1;0,18,4,55,53,26$ degrés, en un jour $0;2,0,36,9,51,46,50,57,32$, et en une heure $0;0,5,1,30,25$ (en degré). Le mouvement propre de chaque astre est l'excédent du mouvement du Soleil moyen sur le mouvement de l'astre moyen, et le mouvement de son centre est l'excédent du mouvement de l'astre moyen sur le mouvement des Apogées, dont la grandeur est d'un degré en soixante ans~; donc le mouvement propre [de Saturne] est, par jour, $0;57,7,43,34,21,15,5$, et le mouvement de son centre est, par jour, $0;2,0,26,17$ (en degré). Dieu est le plus savant. \newpage\phantomsection \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \index{ALAOBD@\RL{jdl}!ALAOBHBD@\RL{jdwl}, table, tableau, catalogue} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD BEAMBCBE@\RL{ta`dIl mu.hakam}, équation principale} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \addcontentsline{toc}{chapter}{I.14 Détermination de Saturne vrai} \includepdf[pages=65,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre quatorze \large Détermination de Saturne vrai \end{center} Calcule Saturne moyen, son Apogée, et le Soleil moyen à la date souhaitée, puis retranche Saturne moyen du Soleil moyen, il reste Saturne propre, puis retranche l'Apogée de Saturne moyen, il reste son centre, puis consulte l'entrée de la table des équations de Saturne correspondant au centre de Saturne, et prends-y la première équation. 
Si le centre de Saturne est inférieur à six signes, retranche l'équation de Saturne moyen et du centre, et ajoute-la au mouvement propre~; mais si le centre est supérieur à six signes, alors ajoute l'équation à l'astre moyen et au centre et retranche-la de l'astre propre. On obtient l'astre moyen corrigé, le centre corrigé, et l'astre propre corrigé. Puis consulte l'entrée correspondant au centre corrigé dans la table des équations de Saturne, prends-y le coefficient d'interpolation, regarde s'il est additif ou soustractif, et mets-le de côté. Puis consulte l'entrée correspondant au mouvement propre corrigé dans la table des équations de Saturne, et prends-y la troisième et la quatrième équation si le coefficient d'interpolation est additif -- sinon prends-y la deuxième et la troisième. Puis multiplie la deuxième ou la quatrième par le coefficient d'interpolation, et ajoute le résultat à la troisième équation si le coefficient d'interpolation est additif -- sinon retranche-le de la troisième équation. On obtient ainsi la troisième [équation] \emph{principale}~; ajoute-la à l'astre moyen corrigé si le mouvement propre corrigé est supérieur à six signes, mais s'il est inférieur à six signes alors retranche-la de l'astre moyen. On obtient l'astre vrai par rapport à l'écliptique à la date pour laquelle tu as fait ce calcul. Sache cela.\footnote{{Ibn al-\v{S}\=a\d{t}ir } explique ainsi comment calculer la <<~troisième équation principale~>>, c'est-à-dire l'équation due au mouvement propre de l'épicycle, en s'aidant des tables. Il doit y avoir trois colonnes numérotées $C_2$, $C_3$, $C_4$, ainsi qu'un multiplicateur $\lambda$. Il dit de calculer $C_3+\lambda C_4$ ou bien $C_3-\lambda C_2$ suivant que $\lambda$ est <<~additif~>> ou <<~soustractif~>>.} \newpage\phantomsection \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \addcontentsline{toc}{chapter}{I.15 Configuration nouvelle des orbes de Jupiter} \includepdf[pages=66,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre quinze \large Configuration nouvelle des orbes de Jupiter \end{center} La configuration des orbes de Jupiter est comme celle des orbes de Saturne (vue précédemment) sauf en ce qui concerne les grandeurs des orbes et des mouvements. Parmi les orbes de Jupiter, nous imaginons~: -- un orbe dans le plan de l'écliptique, sur ses pôles et en son centre~: on l'appelle \emph{parécliptique}. -- un deuxième orbe, incliné par rapport au parécliptique, d'inclinaison constante égale à une part et demi. Il coupe le parécliptique en deux points opposés dont l'un s'appelle la \emph{tête}, l'autre la \emph{queue}. -- un troisième orbe dont le centre soit sur le bord de l'orbe incliné et dont le rayon soit quatre parts et un huitième (en parts telles que le rayon de l'orbe incliné en compte soixante)~; on l'appelle \emph{déférent}. -- un quatrième orbe dont le centre soit sur le bord du déférent et dont le rayon soit une part, vingt-deux minutes et demi. On l'appelle \emph{rotateur}. 
-- un cinquième orbe dont le centre soit sur le bord du rotateur et dont le rayon soit onze parts et demi (en les mêmes parts). On l'appelle \emph{orbe de l'épicycle}. -- le centre du corps de Jupiter est situé sur la ceinture de l'épicycle. Supposons à présent que les centres des orbes déférent, rotateur, de l'épicycle et de l'astre soient tous alignés sur la droite issue du centre du parécliptique dans la direction de l'Apogée. Supposons que le parécliptique est mû d'un mouvement simple sur ses pôles, dans le sens des signes, comme le mouvement des Apogées~: en un jour et une nuit, $0;0,0,9,52$. La tête et la queue sont entraînées par ce mouvement, ainsi que les deux parties d'inclinaison maximale. L'orbe incliné se meut d'un mouvement simple autour de son centre (qui est aussi le centre du parécliptique), dans le sens des signes, comme le mouvement du centre de Jupiter, c'est-à-dire l'excédent de Jupiter moyen sur le mouvement de l'Apogée~: en un jour et une nuit, $0;4,59,6,14,35$. Le déférent se meut en sens inverses des signes dans sa partie supérieure, d'autant que le mouvement du centre de Jupiter aussi~: en un jour et une nuit, $0;4,59,6,14,35$. Le rotateur se meut, dans sa partie supérieure, dans le sens des signes, d'autant que le double du mouvement du centre de Jupiter~: en un jour et une nuit, $0;9,58,12,29,10$. \newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEANAJBDBAAI@\RL{.harakaT mu_htalifaT}, mouvement irrégulier (\textit{i. e.} non uniforme)} \index{BABDBC@\RL{flk}!BABDBC@\RL{flk}, 1) orbe, 2) cieux} \index{AYAOBD@\RL{`dl}!BEAYAOAOBD@\RL{mu`addal}, corrigé} \index{AYAOBD@\RL{`dl}!BEAQBCAR BEAYAOAOBD@\RL{markaz mu`addal}, centre corrigé} \index{AYAOBD@\RL{`dl}!ANAGAUAUAI BEAYAOAOBD@\RL{_hA.s.saT mu`addalaT}, astre propre corrigé} \index{BBBHBE@\RL{qwm}!BEBBBHBE BCBHBCAH@\RL{mqwm al-kawkab}, astre vrai (= longitude vraie de l'astre)} \index{BHASAW@\RL{ws.t}!BHASAW@\RL{ws.t}, astre moyen} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \index{APAQBH@\RL{_drw}!APAQBHAI AJAOBHBJAQ BHASAWBI@\RL{_dirwaT al-tadwIr al-ws.t_A}, apogée moyen de l'épicycle} \includepdf[pages=67,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent L'orbe de l'épicycle se meut autour de son centre, dans sa partie supérieure, dans le sens des signes, de la grandeur de l'excédent du mouvement propre de Jupiter sur le mouvement de son centre~: en un jour et une nuit, $0;49,9,57,22,24,10$. Ceci est le mouvement simple-composé, donc l'astre s'éloigne de l'apogée moyen, chaque jour, d'autant que le mouvement propre de Jupiter qui est $0;54,9,3,36,59,10$ (comme nous l'avons expliqué à propos des orbes de Saturne). Si les orbes continuent à se mouvoir des mêmes mouvements, il s'avère que l'astre aura un mouvement irrégulier composé de mouvements simples. Les tenants et les aboutissants sont comme nous l'avons expliqué dans les orbes de Saturne. Nous représentons la configuration des orbes de Jupiter réduites aux cercles [des trajectoires des centres des orbes] selon ce qu'on projette sur la figure comme nous l'avons représenté avec les orbes de Saturne~; puis nous les représentons en tant que sphères solides comme on l'imagine dans les cieux.\label{cieux} Mentionnons à présent leurs grandeurs et distances. 
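Au passage, la valeur journalière du mouvement de l'orbe de l'épicycle indiquée ci-dessus se vérifie en retranchant, en calcul sexagésimal, le mouvement du centre du mouvement propre de Jupiter :
\[
0;54,9,3,36,59,10 \;-\; 0;4,59,6,14,35 \;=\; 0;49,9,57,22,24,10 .
\]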
Sache que la droite issue du centre du Monde dirigée vers le centre du déférent passe par l'astre moyen. Celle issue du centre du Monde et passant par le centre de l'épicycle passe, dans l'écliptique, par le centre corrigé de l'astre, et elle délimite, dans l'épicycle, l'astre propre corrigé\footnote{Plus précisément, cette droite coupe l'épicycle en un point qui est son <<~apogée apparent~>> et qui sert d'origine pour mesurer l'astre propre corrigé. \textit{Cf.} fig. \ref{fig_term} p.~\pageref{fig_term}.}. La droite issue du centre du Monde et passant par l'astre passe, dans l'écliptique, par l'astre vrai. [Tout ceci], si l'astre est sans latitude~; s'il a une latitude, alors les cercles passant par les pôles de l'écliptique et par les extrémités des droites mentionnées coupent l'écliptique en l'astre moyen, le centre, et l'astre vrai, comme on l'a établi. Ceci dit, sache que la distance maximale de l'astre de Jupiter au centre du Monde est soixante-quatorze parts et un quart, et que sa distance minimale est quarante-cinq parts, une demi-part et un quart~; la distance maximale du centre de l'épicycle au centre du Monde est $62;45$, et sa distance minimale est $57;15$~; sauf que Jupiter n'atteint pas la distance maximale de ses orbes solides, ni leur distance minimale. \newpage\phantomsection \index{BHAUBD@\RL{w.sl}!BEAJAJAUBD AHAYAVBG AHAHAYAV@\RL{mutta.sil ba`.dh biba`.d}, contigus} \includepdf[pages=68,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Quant aux grandeurs de ces orbes, je les décris comme suit. Le rayon de la sphère du déférent est dix-sept parts. Le rayon de la sphère du rotateur est douze parts et cinquante-deux minutes et demi. Le rayon de la sphère de l'épicycle est onze parts et demi. Le rayon de l'orbe incliné est soixante-dix-sept. Sa face externe touche la face interne de son orbe parécliptique. L'épaisseur du parécliptique peut être choisie arbitrairement, et on la pose égale à un degré. La distance minimale dans l'orbe incliné est quarante-trois, et même inférieure à cela de la grandeur du rayon de la sphère de l'astre avec [ce qu'il faut] pour que les orbes soient contigus, comme nous l'avons dit~; on pose ceci égal à un degré. \newpage\phantomsection \includepdf[pages=69,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] \centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{jupiter_fr.pdf_tex} \end{minipage} Les orbes de Jupiter représentées par les trajectoires des centres des sphères solides à l'Apogée, au périgée, et aux élongations intermédiaires \end{figure} \newpage\phantomsection \includepdf[pages=70,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!]
\centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{jupiter_orbes_fr.pdf_tex} \end{minipage} Configurations des orbes solides de Jupiter à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BHASAW@\RL{.harakaT al-ws.t}, mouvement de l'astre moyen} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR@\RL{.harakaT al-markaz}, mouvement du centre (en général, le centre d'un épicycle)} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \addcontentsline{toc}{chapter}{I.16 Régulation des mouvements de Jupiter} \includepdf[pages=71,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre seize \large Régulation des mouvements de Jupiter \end{center} Leurs conditions initiales à l'\'Epoque, c'est-à-dire à midi du premier jour de l'année sept cent un de l'ère de Yazdgard, sont de 9 signes et $2;6,10$ degrés pour Jupiter moyen et de 6 signes et $0;52$ degrés pour son Apogée. Le mouvement de Jupiter moyen en vingt années persanes est 8 signes et $6;51,0$ degrés, en une année 1 signe et $0;20,33$ degrés, en un mois persan $2;29,38,3,17,16$, en un jour $0;4,59,16,6,34,32$, et en une heure $0;0,12,28,10,16,26,20$. Le mouvement des Apogées est d'un degré en soixante ans. Le mouvement du centre est l'excédent du mouvement de l'astre moyen sur [le mouvement des] Apogées. Le mouvement propre est, pour les planètes supérieures, l'excédent du mouvement du Soleil moyen sur le mouvement de l'astre moyen~; c'est pour Jupiter, en un jour et une nuit, $0;54,9,3,36,59,10$. Jupiter vrai [se calcule] de la même manière que Saturne Vrai. \newpage\phantomsection \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \addcontentsline{toc}{chapter}{I.17 La configuration des orbes de Mars d'après notre description, avec une notice sur les cercles trajectoires des centres des sphères solides} \includepdf[pages=72,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre dix-sept \large La configuration des orbes de Mars d'après notre description, avec une notice sur les cercles trajectoires des centres des sphères solides \end{center} Nous imaginons~: -- un orbe parécliptique représentant l'écliptique, sur ses pôles et dans son plan. -- un deuxième orbe dont le plan est incliné d'une seule part par rapport au plan du parécliptique et qui le coupe en deux points opposés appelés la tête et la queue. -- un troisième orbe dont le centre est sur le bord de l'orbe incliné et dont le rayon est neuf parts (en parts telles que le rayon de l'orbe incliné en compte soixante)~; on l'appelle déférent. -- un quatrième orbe dont le centre est sur le bord du déférent et dont le rayon est trois parts~; on l'appelle rotateur. 
-- un cinquième orbe dont le centre est sur le bord du rotateur et dont le rayon est trente-neuf parts et demi (en les mêmes parts)~; on l'appelle orbe de l'épicycle. -- le centre de l'astre est sur le bord de l'orbe de l'épicycle. Supposons que les centres du déférent, du rotateur et de l'épicycle sont à l'Apogée, c'est-à-dire sur la droite issue du centre du Monde dans la direction de l'Apogée. Supposons que le parécliptique est mû d'un mouvement simple autour de son centre et sur les pôles de l'écliptique, dans le sens des signes, comme le mouvement des Apogées~: en un jour et une nuit, $0;0,0,9,52$. La tête, la queue et les deux parties d'inclinaison maximale sont entraînées par ce mouvement. L'orbe incliné se meut dans le sens des signes, d'un mouvement simple autour de son centre, comme le mouvement du centre de Mars~: en un jour et une nuit $0;31,26,29,45$. Le déférent se meut, en sens inverse des signes, d'autant que le mouvement du centre de Mars. Le rotateur se meut, dans le sens des signes dans sa partie supérieure, d'autant que le double du mouvement du centre de Mars~: en un jour et une nuit, $1;2,52,59,30$. L'orbe de l'épicycle se meut autour de son centre d'un mouvement simple, en sens inverse des signes dans sa partie supérieure, d'autant que l'excédent du mouvement du centre de Mars sur son mouvement propre. \newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEANAJBDBAAI@\RL{.harakaT mu_htalifaT}, mouvement irrégulier (\textit{i. e.} non uniforme)} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BHASAW@\RL{.harakaT al-ws.t}, mouvement de l'astre moyen} \index{AQALAY@\RL{rj`}!AQALBHAY@\RL{rujU`}, rétrogradation} \index{AOBHAQ@\RL{dwr}!BEAQAOAGAQAGAJBEAQAGBCARBCAQ@\RL{madArAt marAkaz al-akr}, trajectoires des centres des orbes} \includepdf[pages=73,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Ainsi l'orbe de l'épicycle semble se mouvoir dans le sens des signes dans sa partie supérieure d'autant que le mouvement propre de Mars, en un jour, $0;27,41,50$~; car il tourne de par le mouvement du rotateur et de par son propre mouvement qui est, autour de son centre, en un jour et une nuit, $0;3,44,40$~; retranchant cette grandeur du mouvement du centre en un jour ($0;31,26,29,45$), il reste $0;27,41,50$ et c'est le mouvement apparent de l'astre par l'épicycle, dans le sens des signes, égal au mouvement propre de Mars, c'est-à-dire l'excédent du mouvement du Soleil moyen sur le mouvement de l'astre moyen. C'est le mouvement simple-composé déjà mentionné.\footnote{Il y a une petite incohérence dans la valeur du mouvement propre, au niveau du troisième rang après la virgule~: le 50 devrait être un 40 si on calcule le mouvement propre comme différence entre Soleil moyen et Mars moyen. Cela peut venir d'une confusion entre Mars moyen et son centre (le mouvement des Apogées est environ $0;0,0,10$).} Voyons un exemple~; supposons donc que les centres du déférent, du rotateur et de l'épicycle sont sur la droite issue du centre du Monde dans la direction de l'Apogée, et que se meuve chaque orbe d'un mouvement simple autour de son centre de la grandeur et dans le sens qu'on a supposés. Est engendré en l'astre un mouvement irrégulier et il [lui] arrive de rétrograder (l'observation en rend compte). 
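Le calcul sexagésimal éclaire la note précédente. D'une part,
\[
0;31,26,29,45 \;-\; 0;3,44,40 \;=\; 0;27,41,49,45 \approx 0;27,41,50,
\]
valeur retenue ci-dessus pour le mouvement apparent de l'astre sur l'épicycle ; d'autre part, avec les valeurs journalières du Soleil moyen ($0;59,8,19,43$, chapitre vingt) et de Mars moyen ($0;31,26,39,36$, chapitre dix-huit),
\[
0;59,8,19,43 \;-\; 0;31,26,39,36 \;=\; 0;27,41,40,7,
\]
ce qui confirme qu'un mouvement propre calculé comme excédent du Soleil moyen sur Mars moyen s'écrit en $0;27,41,40$ et non en $0;27,41,50$.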
C'est le mouvement irrégulier composé de mouvements simples comme on l'a expliqué ailleurs. Voyez la figure des orbes de Mars où les trajectoires des centres des sphères solides sont projetées dans le plan pour qu'on se les représente facilement. Ceci étant dit, sache que la distance maximale du centre de l'épicycle de Mars au centre du Monde est soixante-six parts, et que sa distance minimale est cinquante-quatre parts. La distance maximale de l'astre Mars au centre du Monde est cent cinq parts et demi, et sa distance minimale est quatorze parts et demi (tout cela en parts telles que le rayon de l'orbe incliné en compte soixante). \newpage\phantomsection \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \index{BHAUBD@\RL{w.sl}!BEAJAJAUBD AHAYAVBG AHAHAYAV@\RL{mutta.sil ba`.dh biba`.d}, contigus} \includepdf[pages=74,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Cependant l'astre Mars n'atteint pas la distance maximale de ses orbes, ni leur distance minimale~; car le rayon du déférent est cinquante-et-une parts et demi, le rayon de la sphère du rotateur est quarante-deux parts et demi, et le rayon de la sphère de l'épicycle est trente-neuf parts et demi, or la distance maximale de l'orbe incliné (c'est son rayon) est cent onze degrés et demi, au-dessus desquels il y a l'épaisseur du parécliptique, soit trente minutes, cela fait cent douze parts de distance maximale, et la distance minimale [de l'orbe incliné] est huit parts et trente minutes, et avec [ce qu'il faut] pour que les orbes soient contigus, soit trente minutes, il reste huit parts. Conclusion~: la distance minimale des orbes de Mars est huit parts, et leur distance maximale est cent douze parts. C'est ce sur quoi nous nous sommes appuyés. Voyez la figure des orbes solides de Mars tels qu'on les imagine dans les cieux~; ce sont des sphères entières dont on a présenté les rayons et les positions comme on l'a fait en l'expliquant pour Saturne et les autres, donc il n'est pas besoin de répéter cela. \newpage\phantomsection \includepdf[pages=75,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] \centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{mars_fr.pdf_tex} \end{minipage} \label{mars_trajectoires} Configuration des orbes de Mars représentées dans le plan par les trajectoires des centres des sphères solides à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \includepdf[pages=76,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!]
\centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{mars_orbes_fr.pdf_tex} \end{minipage} Configuration des orbes de Mars~: sphères entières représentées dans le plan à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \addcontentsline{toc}{chapter}{I.18 Régulation des mouvements de Mars} \includepdf[pages=77,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre dix-huit \large La régulation des mouvements de Mars \end{center} Mars moyen, mardi midi à Damas, le premier jour de l'année sept-cent un de l'ère de Yazdgard, est 9 [signes] et $22;0,0$ [degrés], et son Apogée 4 [signes] et $17;52,0$ [degrés]. Le mouvement de Mars moyen en vingt années persanes est 8 [signes] et $15;43,20$ [degrés]\footnote{modulo $360°$.}~; en une année persane, 6 [signes] et $11;17,11,0$~; en trente jours, $0;15,43,19,48,29,39$~; en un jour et une nuit, $0;0,31,26,39,36,59,18$~; et en une heure $0;0,1,18,36,39$. Le mouvement des apogées est d'un degré en soixante ans~; en un ans, une minute~; en un jour, $0;0,0,9,52$. Le mouvement propre de Mars est d'autant que l'excédent du mouvement du Soleil moyen sur Mars moyen, c'est-à-dire par jour $0;0,27,41,40,7$. Mars vrai se calcule de la même manière que Saturne vrai. \newpage\phantomsection \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \index{BEBJBD@\RL{myl}!AZAGBJAI BEBJBD@\RL{.gAyaT al-mIl}, partie d'inclinaison maximale (dans un orbe incliné)} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR@\RL{.harakaT al-markaz}, mouvement du centre (en général, le centre d'un épicycle)} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR ATBEAS@\RL{.harakaT markaz al-^sams}, mouvement du centre du Soleil} \addcontentsline{toc}{chapter}{I.19 La configuration des orbes de Vénus selon notre méthode, avec une notice sur les cercles trajectoires des sphères solides} \includepdf[pages=78,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre dix-neuf \large La configuration des orbes de Vénus selon notre méthode, avec une notice sur les cercles trajectoires des sphères solides \end{center} Nous imaginons~: -- un orbe parécliptique représentant l'écliptique, dans son plan et sur ses pôles -- un deuxième orbe, au centre de l'écliptique, son plan étant incliné par rapport au plan du parécliptique, d'une inclinaison constante d'un sixième de part (selon l'opinion la plus juste), et le coupant en deux points opposés appelés la tête et la queue -- un troisième orbe dont le centre est sur le bord de l'orbe incliné, et dont le rayon est une part et quarante-et-une minutes (en parts telles que le rayon de l'orbe incliné en compte soixante)~; on l'appelle déférent -- un quatrième orbe dont le centre est sur le bord du déférent et dont le rayon est vingt-six minutes~; on l'appelle rotateur -- un cinquième orbe dont le centre est sur le bord du 
rotateur et dont le rayon est, d'après nos observations, quarante-trois parts et trente-trois minutes (en les mêmes parts)~; on l'appelle orbe de l'épicycle -- le centre de l'astre, sur le bord de ce dernier orbe, attaché en un de ses points Supposons les centres du déférent, du rotateur et de l'épicycle alignés sur la droite issue du centre de l'orbe incliné dans la direction de l'Apogée. Supposons le parécliptique mû sur ses pôles, dans le sens des signes, comme le mouvement des Apogées~: en un jour et une nuit, $0;0,0,9,52$. Ce mouvement entraîne la tête et la queue, et les deux parties d'inclinaison maximale. L'orbe incliné se meut dans le sens des signes, comme le mouvement du centre de Vénus qui est comme le mouvement du centre du Soleil~: en un jour et une nuit, $0;59,8,9$. Le déférent se meut en sens inverse des signes, d'une grandeur égale au mouvement du centre de Vénus~; le rotateur se meut dans les sens des signes, d'une grandeur double du mouvement du centre de Vénus~: en un jour et une nuit, $1;58,16,18$. \newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEANAJBDBAAI@\RL{.harakaT mu_htalifaT}, mouvement irrégulier (\textit{i. e.} non uniforme)} \index{AKAHAJ@\RL{_tbt}!AKAHAGAJ@\RL{_tibAt}, immobilité} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{APAQBH@\RL{_drw}!APAQBHAI@\RL{_dirwaT}, sommet, apogée} \index{BHAUBD@\RL{w.sl}!BEAJAJAUBD AHAYAVBG AHAHAYAV@\RL{mutta.sil ba`.dh biba`.d}, contigus} \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \label{var26} \includepdf[pages=79,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent L'orbe de l'épicycle se meut d'un mouvement simple autour de son centre, en sens inverse des signes dans sa partie supérieure, d'autant que l'excédent du mouvement du centre de Vénus sur son mouvement propre~: en un jour et une nuit, $0;22,8,41,25,33,53$. Il semble donc progresser dans le sens des signes d'autant que le mouvement propre de Vénus qui est, en un jour et une nuit, $0;36,59,28,26$. En effet, supposons l'épicycle immobile, et que se meuvent l'orbe incliné, le déférent et le rotateur~: l'apogée se déplace dans le sens des signes, d'autant que le mouvement du centre. Supposons lui un autre mouvement, en sens inverse des signes, d'autant que le mouvement du centre, alors l'apogée revient en son lieu~; mais d'autre part, on a trouvé par l'observation que [l'épicycle] se meut, par rapport à l'apogée, dans le sens des signes, d'autant que le mouvement propre. Or on a démontré que, si on suppose l'épicycle mû en sens inverse des signes, d'autant que l'excédent du mouvement du centre sur le mouvement propre, alors l'apogée reste en arrière d'autant que le mouvement propre~: c'est le mouvement simple composé déjà démontré. Ayant supposé les centres du déférent, du rotateur et de l'épicycle alignés sur la droite issue du centre de l'écliptique dans la direction de l'Apogée, si les orbes se meuvent d'autant qu'on l'a supposé avec l'orientation ci-dessus, alors l'astre semblera avoir un mouvement irrégulier composé de mouvements simples, et c'est ce qu'on trouve par l'observation. Ceci étant dit, sache que la distance maximale du centre de l'épicycle de Vénus au centre du Monde est soixante-et-une parts et un quart de part, et que sa distance minimale est cinquante-huit parts, une demi-part et un quart de part. 
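Sous réserve de cette reconstruction (les trois centres étant alignés sur la direction de l'Apogée), ces deux distances extrêmes se déduisent des rayons donnés plus haut pour le déférent ($1;41$) et le rotateur ($0;26$) :
\[
60 + (1;41 - 0;26) = 61;15,
\qquad
60 - (1;41 - 0;26) = 58;45 .
\]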
La distance maximale de Vénus au centre du Monde est cent quatre parts et quarante-huit minutes, et sa distance minimale est quinze parts et un cinquième de part. En fait, l'astre de Vénus n'atteint ni la distance minimale de ses orbes, ni leur distance maximale, car le rayon de la sphère du déférent est quarante-cinq parts et deux tiers de parts (et le rayon du rotateur est quarante-trois parts et cinquante-neuf minutes), donc la distance maximale de l'orbe incliné est, pour Vénus, cent cinq parts et deux tiers de part à quoi il faut ajouter l'épaisseur du parécliptique supposée d'un tiers de part, et la distance minimale du parécliptique est quatorze parts et un tiers de part dont il faut retrancher un intervalle supposé d'un tiers de part~: la distance minimale des orbes de Vénus semble donc être quatorze parts, et leur distance maximale, cent six parts (peut-être plus, mais pas moins que cela). \newpage\phantomsection \index{AOBHAQ@\RL{dwr}!BEAQAOAGAQAGAJBEAQAGBCARBCAQ@\RL{madArAt marAkaz al-akr}, trajectoires des centres des orbes} \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \index{BFAWBB@\RL{n.tq}!BEBFAWBBAI@\RL{min.taqaT}, ceinture} \includepdf[pages=80,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Remarque \end{center} Etant donné les rayons des trajectoires des centres des sphères (ce sont les cercles représentés sur la figure), soit à calculer les rayons des sphères solides pour tel astre. Voici la méthode~: la somme du rayon de l'orbe de l'épicycle et du rayon du rotateur est le rayon de la sphère du rotateur, ajoutes-y le rayon du déférent, on obtient le rayon de la sphère du déférent. Si nous ajoutons le rayon de la sphère du déférent à soixante, on obtient le rayon extérieur de l'orbe incliné auquel il faut ajouter l'épaisseur du parécliptique~; et si nous le retranchons de soixante, et que nous retranchons de ce qui reste [ce qu'il faut] pour que les orbes soient contigus, alors on obtient le rayon [intérieur] de l'orbe incliné. Voyez la description des orbes de Vénus représentés par les trajectoires des centres des sphères. [On obtient] la figure des orbes solides de Vénus en considérant [les cercles] projetés dans le plan comme étant les ceintures des sphères solides. \newpage\phantomsection \includepdf[pages=81,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] \centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{venus_fr.pdf_tex} \end{minipage} Configuration des orbes de Vénus représentées par les trajectoires des centres des orbes solides à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \includepdf[pages=82,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] 
\centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{venus_orbes_fr.pdf_tex} \end{minipage} Configuration des orbes solides de Vénus~: sphères entières représentées dans le plan à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \addcontentsline{toc}{chapter}{I.20 Régulation des mouvements de Vénus} \includepdf[pages=83,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt \large La régulation des mouvements de Vénus \end{center} Leur compte à midi du premier jour de l'année sept cent un de l'ère de Yazdgard est de 9 signes et $10;9,0$ degrés pour Vénus moyen (comme le Soleil moyen), et 2 signes $17;52,0$ degrés pour l'Apogée. Le mouvement de Vénus moyen est comme le mouvement du Soleil moyen, soit en vingt ans 11 signes et $25;13,20$ degrés, en une année persane 11 signes et $29;45,40,0$ degrés, en un mois persan 0 signe et $29;34,9,51,46,50,57,32$ degrés, en un jour $0;59,8,19,43,33,41,55,4,6$, et en une heure $0;2,27,50,50$. \`A la date mentionnée, Vénus propre a atteint 10 signes et $20;50,19$ degrés~; son mouvement en vingt années persanes est 6 signes et $0;36,0$ degrés, en une année persane 7 signes et $15;1,48,41$, en un mois 0 signe et $18;29,44,13,9$ degrés, en un jour $0;36,59,28,26,13,4,16$, et en une heure 0 signe et $0;1,32,28,41,6$ degrés. La correction [de Vénus] est faite selon la méthode de Saturne et des autres astres errants. \newpage\phantomsection \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \index{BABDBC@\RL{flk}!BABDBCBEBEAKAKBD@\RL{falak muma_t_tal}, parécliptique} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR@\RL{.harakaT al-markaz}, mouvement du centre (en général, le centre d'un épicycle)} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BEAQBCAR ATBEAS@\RL{.harakaT markaz al-^sams}, mouvement du centre du Soleil} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{BABDBC@\RL{flk}!BABDBC BEAMBJAW@\RL{falak mu.hI.t}, orbe englobant} \index{BABDBC@\RL{flk}!BABDBC AMAGBAAX@\RL{falak .hAfi.z}, orbe protecteur} \addcontentsline{toc}{chapter}{I.21 Configuration des orbes de Mercure selon notre méthode confirmée par l'observation} \includepdf[pages=84,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-et-un \large La configuration des orbes de Mercure selon notre méthode confirmée par l'observation. \end{center} Nous imaginons~: -- un orbe dans le plan de l'écliptique, sur ses pôles et en son centre, appelé le parécliptique. -- un deuxième orbe dans un plan incliné par rapport au plan du parécliptique d'un demi et un quart de degré à l'Apogée vers le Sud, mais cette inclinaison n'est pas constante. Selon une autre doctrine l'orbe est incliné d'un sixième de part et son inclinaison est constante~: cette doctrine est plus juste. Le plan de l'orbe incliné coupe le plan du parécliptique en deux points opposés appelés la tête et la queue. 
-- un troisième orbe dont le centre est sur la ceinture de l'orbe incliné, de rayon quatre parts et cinq minutes (en parts telles que le rayon de l'orbe incliné en compte soixante), appelé le \emph{déférent}. -- un quatrième orbe dont le centre est sur la ceinture du déférent, de rayon un demi et un tiers de degré\footnote{Le rayon du rotateur serait donc $0;50$. Mais les calculs des rayons des orbes solides donnés plus loin laissent penser qu'il doit être $0;55$, c'est-à-dire une demi, un tiers et un demi-sixième de degré.}, appelé le \emph{rotateur}. -- un cinquième orbe dont le centre est sur la ceinture du rotateur, de rayon vingt-deux parts et quarante-six minutes (en les mêmes parts), appelé orbe de l'épicycle. -- un sixième orbe dont le centre est sur la ceinture de l'épicycle, de rayon trente-trois minutes, appelé l'\emph{englobant}. -- un septième orbe dont le centre est sur l'englobant, de rayon semblable au rayon de l'englobant (trente-trois minutes), appelé le \emph{protecteur}. -- Mercure est centré sur la ceinture de ce [dernier] orbe. Passons aux mouvements. Le parécliptique se meut sur les pôles de l'écliptique, dans le sens des signes, d'un degré tous les soixante ans, comme le mouvement des Apogées. L'orbe incliné se meut dans le sens des signes, comme le mouvement du centre de Mercure, c'est-à-dire comme le centre du Soleil~: en un jour et une nuit, $0;59,8,10$. Le déférent se meut en sens inverse des signes dans sa partie supérieure, aussi d'autant que le mouvement du centre de Mercure. L'orbe de l'épicycle se meut dans le sens des signes dans sa partie supérieure, d'autant que l'excédent du mouvement propre de Mercure sur le mouvement de son centre~: en un jour et une nuit, $2;7,16,0,9,51,39,56,39$. \newpage\phantomsection \index{BBAWAQ@\RL{q.tr}!BFAUBA BBAWAQ BD-AJAOBHBJAQ BD-BEAQAEBJBJ@\RL{n.sf q.tr al-tadwIr al-mar'iyy}, rayon de l'épicycle apparent} \index{BABDBC@\RL{flk}!BABDBC BEAMBJAW@\RL{falak mu.hI.t}, orbe englobant} \index{BABDBC@\RL{flk}!BABDBCATAGBEBD@\RL{falak ^sAmil}, orbe total} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AHASBJAWAI BEAQBCBCAHAI@\RL{.harakaT basI.taT mrkkbaT}, mouvement simple-composé} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ANAGAUAI@\RL{.harakaT _hA.saT}, mouvement propre} \index{APAQBH@\RL{_drw}!APAQBHAI@\RL{_dirwaT}, sommet, apogée} \index{AHAOAB@\RL{bda'}!BEAHAJAOAB@\RL{mubtada'}|see{\RL{mabda'}}} \index{AHAOAB@\RL{bda'}!BEAHAOAB@\RL{mabda'}, principe, origine} \includepdf[pages=85,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent C'est un mouvement simple. Quant au mouvement propre de Mercure, c'est un mouvement simple-composé qui mesure, ensemble, le mouvement de cet épicycle ($2;7,16,0,9,51,39,56,39$) et le mouvement du centre de Mercure ($0;59,8,10$)~; car ces deux mouvements vont dans le même sens. L'astre s'éloigne de l'apogée, d'autant que la somme des deux mouvements, 0 signe et $3;6,24,10,1,38,37,28,42$~: c'est le mouvement propre de Mercure, un mouvement composé mais uniforme par rapport au centre de l'épicycle. Voici une explication complémentaire. Si l'orbe incliné se meut d'un quart de cercle, le déférent d'un quart de cercle, et le rotateur d'un demi-cercle, alors se déplace l'apogée (origine du mouvement propre) d'un quart de cercle dans le sens des signes. 
Cependant, on trouve par l'observation qu'elle se déplace, dans le sens des signes, comme le mouvement propre de Mercure (0 signe $3;6,24,10$), donc le mouvement de l'épicycle autour de son centre est, dans le sens des signes, l'excédent de ce mouvement propre sur le mouvement du centre (car ils vont dans le même sens). J'ai déjà expliqué cela. L'englobant\footnote{Cet orbe qu'{Ibn al-\v{S}\=a\d{t}ir} avait appelé \textit{mu\d{h}\={\i}\d{t}} est maintenant désigne du nom de \textit{\v{s}\=amil}. Nous avons traduit ces deux termes par <<~englobant~>>.} se meut dans le sens des signes dans sa partie supérieure, comme le double du mouvement du centre de Mercure~: en un jour $1;58,16,20$. Le protecteur se meut en sens inverse des signes dans sa partie supérieure, quatre fois comme le mouvement du centre de Mercure~: en un jour $3;56,32,40$. Toujours située sur la droite issue du centre de l'épicycle et passant par le centre de l'englobant, Mercure tantôt se rapproche, tantôt s'éloigne du centre de l'épicycle~; et elle ne quitte jamais cette droite\footnote{Ces deux orbes (englobant et protecteur) constituent un \emph{couple de \d{T}\=us{\=\i}}.}. Quand le centre de l'épicycle est à l'Apogée ou au périgée, Mercure est à distance minimale du centre de son épicycle, distance appelée rayon de l'épicycle apparent qui vaut alors vingt-et-une parts et deux tiers de part. Quand le centre est à trois signes [de l'Apogée], Mercure est à distance maximale du centre de l'épicycle, à vingt-trois degrés et cinquante-deux minutes. La distance maximale de Mercure au centre du Monde est quatre-vingt-six et deux tiers. Sa distance minimale est trente-trois et un tiers\footnote{Il faut comprendre ces distances extrémales comme étant les distances extrémales de Mercure au centre du Monde \emph{quand Mercure est à l'Apogée ou au périgée}, \textit{i. e.} quand $\overline{\kappa}=0°$ ou $180°$. Le rayon de l'épicycle apparent est alors minimal.}. Sauf que Mercure n'atteint pas la distance minimale de ses orbes solides, comme nous l'avons expliqué précédemment. \newpage\phantomsection \index{ALASBE@\RL{jsm}!BCAQAGAJ BEALASBEAI AJAGBEBEAI@\RL{kraT mjsm, falak mjsm}, sphère solide, orbe solide} \index{BHAUBD@\RL{w.sl}!BEAJAJAUBD AHAYAVBG AHAHAYAV@\RL{mutta.sil ba`.dh biba`.d}, contigus} \includepdf[pages=86,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Quant aux dimensions des orbes solides~: le rayon de la sphère du déférent est $28;52$, le rayon de la sphère du rotateur est $24;47$, le rayon de la sphère de l'épicycle est $23;52$, le rayon de la sphère de l'englobant est $1;6$, le rayon de la sphère du protecteur est $0;33$ (tout cela en parts telles que le rayon du parécliptique en compte soixante). \label{rapport_merc}La distance maximale dans le parécliptique est donc quatre-vingt-huit parts et cinquante-deux minutes, avec en plus de cela l'épaisseur du parécliptique~; admettons qu'elle atteint 89. La distance minimale de ses orbes est $31;8$, et même moins, de [ce qu'il faut] pour que les orbes soient contigus~; on la pose égale à $31;0$. Dieu est le plus savant. \newpage\phantomsection \includepdf[pages=87,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] 
\centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{mercure_fr.pdf_tex} \end{minipage} Configuration des orbes de Mercure représentées par les centres des sphères solides dans la plan \end{figure} \newpage\phantomsection \includepdf[pages=88,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] \centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{mercure_orbes_fr.pdf_tex} \end{minipage} Configuration des orbes solides de Mercure, sphères entières représentées dans le plan à l'Apogée, au périgée et aux élongations intermédiaires \end{figure} \newpage\phantomsection \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \index{ACAUBD@\RL{'a.sl}!ACAUBD@\RL{'a.sl}, 1) fondement, 2) origine} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BHASAW@\RL{.harakaT al-ws.t}, mouvement de l'astre moyen} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BHASAW ATBEAS@\RL{.harakaT ws.t al-^sams}, mouvement du Soleil moyen} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \addcontentsline{toc}{chapter}{I.22 Régulation des mouvements de Mercure} \includepdf[pages=89,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-deux \large La régulation des mouvements de Mercure \end{center} Leurs conditions initiales à midi du premier jour de l'année sept cent un de l'ère de Yazdgard sont de 9 signes et $10;9,0$ degrés pour l'origine de Mercure moyen (c'est comme l'origine du Soleil moyen et de Vénus moyen), et 7 signes et $2;52,0$ degrés pour l'Apogée. L'origine du mouvement propre à cette date est de 5 signes $4;2,0$ degrés. Le mouvement de Mercure moyen est comme le mouvement du Soleil moyen (par an, 11 signes $29;45,40$ degrés). Le mouvement propre de Mercure est\footnote{Les valeurs suivantes ont probablement été calculées à partir des valeurs observées par an ou en vingt années persanes. On obtient alors, par division, presque exactement la valeur donnée par jour $3;6,24,10,1,38,27,48,29$~: seul le huitième rang est erroné.}~: en vingt années persanes 11 signes $29;0,20,0$ degrés, par an 1 signe $23;57,1$, par mois 3 signes $3;12,5,1$, par jour 0 signe $3;6,24,10,1,38,37,29$, et par heure 0 signe $0;7,46,0,25$. Si l'on retranche le mouvement de l'Apogée du mouvement moyen, on obtient le mouvement du centre. La [longitude] vraie de cet astre [se calcule] de la même manière que Saturne vrai. \newpage\phantomsection \index{AYAOBD@\RL{`dl}!AJAYAOBJBD@\RL{ta`dIl}, équation} \index{ALBJAH@\RL{jIb}!ALBJAH@\RL{jIb}, sinus} \index{ALBJAH@\RL{jIb}!jIb tamAm@\RL{jIb al-tamAm}, cosinus} \index{ALAOBD@\RL{jdl}!ALAOBHBD@\RL{jdwl}, table, tableau, catalogue} \index{AQBCAR@\RL{rkz}!BEAQBCAR@\RL{markaz}, centre} \index{AYAOBD@\RL{`dl}!ANAGAUAUAI BEAYAOAOBD@\RL{_hA.s.saT mu`addalaT}, astre propre corrigé} \addcontentsline{toc}{chapter}{I.23 Calcul de l'équation des astres} \includepdf[pages=90,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-trois \large Calcul de l'équation des astres \end{center} Pour calculer la \emph{première équation}\footnote{notée $c_1(\overline{\kappa})$ dans notre commentaire.}, pour Saturne, Jupiter, Mars et Vénus, tu prends le sinus du centre et son cosinus, et tu multiplies chacun par le rayon du déférent. Ce que tu obtiens à partir du sinus, appelons-le \emph{base}~; ce que tu obtiens avec le cosinus, appelons-le \emph{hauteur}. 
Puis multiplie le sinus, puis le cosinus, chacun par le rayon du rotateur. Ce que tu obtiens à partir du sinus, ajoute-le à la base si le centre est inférieur à trois signes, et s'il est supérieur à trois signes alors retranche-le\footnote{Ceci est erroné, il faut toujours ajouter quelle que soit la valeur du centre (voir commentaire mathématique).}. Appelons \emph{dividende} leur somme ou différence. Ce que tu obtiens à partir du cosinus, retranche-le de la hauteur~; ajoute soixante à ce qui reste si le centre est inférieur à trois signes, et s'il est supérieur à trois signes alors retranche ce qui reste de soixante. Mets cela au carré, et ajoutes-y le carré du dividende. Prends la racine carrée du total~: tu obtiens la distance du centre de l'épicycle au centre du Monde. Divise le dividende que tu as retenu par cela, il en sort le sinus de la première équation. Trouve son arc dans la table des sinus, c'est la première équation. \'Ecris-la en face du degré pour lequel tu l'as calculée, c'est l'équation du centre et du mouvement propre. Pour calculer la \emph{deuxième équation}\footnote{notée $c_2(0°,\alpha)$ dans notre commentaire.}, tu multiplies le sinus de l'astre propre corrigé (puis son cosinus) par le rayon de l'orbe de l'épicycle de cet astre. Ce que tu obtiens à partir du cosinus, tu l'ajoutes à la distance du centre de l'épicycle au centre du Monde pour cet astre ($63;25$ pour Saturne, $62;45$ pour Jupiter, 66 pour Mars, $61;15$ pour Vénus, 65 pour Mercure) si l'astre propre corrigé est dans la moitié supérieure de l'orbe de l'épicycle, et tu le retranches si l'astre propre corrigé est dans sa moitié inférieure. Ce qu'on obtient, ajoute son carré au carré du produit du sinus de l'astre propre par le rayon de l'épicycle (ce produit est le \emph{dividende})~; prends la racine carrée des deux carrés, et divise le dividende par cela. On obtient le sinus de la deuxième équation pour cet astre. Trouve son arc dans la table des sinus~; écris-le en face de [la valeur] de l'astre propre corrigé pour laquelle tu l'as calculé. \newpage\phantomsection \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{BFASAH@\RL{nsb}!AOBBAGAEBB BFASAH@\RL{daqA'iq al-nisb}, coefficient d'interpolation} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD BEAMBCBE@\RL{ta`dIl mu.hakam}, équation principale} \index{AHAYAO@\RL{b`d}!AHAYAO@\RL{b`d}, 1)~distance, 2)~élongation (\textit{i.e.}, écart en longitude)} \includepdf[pages=91,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \emph{Remarque}. La racine carrée des deux carrés est la distance de l'astre au centre du Monde à cette date, en parts telles que le rayon de l'orbe incliné en compte soixante, et le rayon de l'épicycle, $6;30$ pour Saturne, $11;30$ pour Jupiter, $39;30$ pour Mars, $43;33$ pour Vénus, et $21;40$ pour Mercure (pour ce dernier, c'est le rayon de l'épicycle apparent quand la distance est maximale ou bien minimale\footnote{c'est-à-dire à l'Apogée ou au périgée, en lesquels le couple de \d{T}\=us{\=\i } rend l'épicycle apparent moindre que le vrai.}). \label{troisieme_equation}La \emph{troisième équation}\footnote{notée $c_2(180°,\alpha)-c_2(0°,\alpha)$ dans notre commentaire.} est la variation à distance minimale. Pour la calculer, tu fais comme pour la deuxième équation, mais tu supposes le centre de l'épicycle à distance minimale, c'est-à-dire $56;35$ pour Saturne, $57;15$ pour Jupiter, 54 pour Mars, $58;45$ pour Vénus, et 55 pour Mercure. Tu calcules ensuite selon la méthode pour la deuxième équation.
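En notation moderne, et sous réserve des conventions de signes propres au calcul médiéval des sinus, ces deux recettes reviennent à poser, en notant (notations supposées ici) $r_1$ le rayon du déférent, $r_2$ celui du rotateur et $r_3$ celui de l'épicycle :
\[
\rho(\overline{\kappa})=\sqrt{\bigl(60+(r_1-r_2)\cos\overline{\kappa}\bigr)^2+\bigl((r_1+r_2)\sin\overline{\kappa}\bigr)^2},
\qquad
\sin c_1(\overline{\kappa})=\frac{(r_1+r_2)\sin\overline{\kappa}}{\rho(\overline{\kappa})},
\]
l'addition systématique du terme $r_2\sin\overline{\kappa}$ étant celle que préconise la note ci-dessus, puis
\[
\sin c_2(0°,\alpha)=\frac{r_3\sin\alpha}{\sqrt{\bigl(\rho_{\max}+r_3\cos\alpha\bigr)^2+\bigl(r_3\sin\alpha\bigr)^2}},
\]
où $\rho_{\max}$ est la distance maximale du centre de l'épicycle ($63;25$ pour Saturne, etc.) et $\alpha$ l'astre propre corrigé compté depuis l'apogée apparent ; la quantité $c_2(180°,\alpha)$ des notes s'obtient de même en remplaçant $\rho_{\max}$ par la distance minimale, la troisième équation étant leur différence.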
Tu retranches du résultat la deuxième équation pour le même degré, et tu écris la différence en face du degré pour lequel tu as fait ce calcul~: c'est la variation à distance minimale, et il faut toujours l'ajouter à la deuxième équation (comme pour la Lune, j'ai déjà indiqué cela). La méthode pour calculer le \emph{coefficient d'interpolation} \footnote{noté $\chi(\overline{\kappa})$ dans notre commentaire.} est comme je t'ai enseigné pour le calcul du coefficient d'interpolation pour la Lune. J'ai conçu les équations de sorte à ce que la variation s'ajoute [toujours additivement] à la deuxième équation (c'est plus facile ainsi). La méthode pour calculer le coefficient d'interpolation est la suivante. Tu calcules la distance du centre de l'épicycle au centre du Monde comme je t'ai enseigné, puis tu calcules le rayon de l'épicycle apparent\footnote{Pour toutes les planètes sauf Mercure, il s'agit simplement du rayon constant de l'épicycle.}. Tu divises celui-ci par la distance du centre de l'épicycle que tu as calculée. Tu trouves l'arcsinus du résultat dans la table des sinus~: c'est l'équation principale maximale pour ce degré. Tu en soustrais l'équation maximale à distance maximale pour cet astre. Tu divises la différence par la différence entre l'équation à distance maximale et l'équation à distance minimale. Le résultat est le coefficient d'interpolation. \newpage\phantomsection \index{ALAOBD@\RL{jdl}!ALAOBHBD@\RL{jdwl}, table, tableau, catalogue} \includepdf[pages=92,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Il faut bien conduire ce calcul, c'est subtil. Si tu préfères calculer les équations de la manière la plus courante\footnote{Cette manière ``plus courante'' conduirait aux tables auxquelles le chapitre 14 fait implicitement référence. C'est ainsi que faisait Ptolémée.}, alors tu calcules quand l'épicycle est à distance moyenne, et tu ajoutes à cela la variation à distance minimale, ou bien tu en retranches la variation à distance maximale. Je t'ai aussi expliqué dans le chapitre sur le calcul des équations de la Lune que l'on peut calculer une table unique pour corriger la Lune. De même pour chaque astre~: on peut faire une équation unique pour corriger chacun sans équation du centre ni équation de [l'astre] propre. \newpage\phantomsection \index{AYBBAO@\RL{`qd}!AYBBAOAI@\RL{`qdaT}, n{\oe}ud} \index{ACBHAL@\RL{'awj}!ACBHAL@\RL{'awj}, Apogée} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{BEBJBD@\RL{myl}!BEBJBD@\RL{myl}, inclinaison, inclinaison maximale (\textit{i. 
e.} mesure d'un angle dièdre)} \index{ASAWAM@\RL{s.t.h}!ASAWAM BEASAJBH@\RL{\vocalize s.t.h mstwiN}, plan} \index{APAQBH@\RL{_drw}!APAQBHAI@\RL{_dirwaT}, sommet, apogée} \index{AMAVAV@\RL{.h.d.d}!AMAVBJAV@\RL{.ha.dI.d}, périgée} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{BBAWAQ@\RL{q.tr}!BBAWAQ@\RL{qi.tr}, diamètre} \addcontentsline{toc}{chapter}{I.24 Latitudes des trois planètes supérieures~: Saturne, Jupiter et Mars} \label{lat_debut} \includepdf[pages=93,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-quatre \large Les latitudes des trois planètes supérieures~: Saturne, Jupiter et Mars \end{center} Dans la configuration de chacune d'elles, on a vu que l'inclinaison maximale\footnote{Pour décrire la mesure de l'angle entre deux plans, on utilisait habituellement l'expression ``inclinaison maximale'' qui désigne en effet l'arc maximal d'un cercle orthogonal à l'un des deux plans, de centre situé sur la droite commune (le cercle est alors orthogonal aux deux plans). Mais {Ibn al-\v{S}\=a\d{t}ir} oubliera souvent le mot ``maximale'' pour parler simplement d'``inclinaison''.} de l'orbe incliné par rapport au parécliptique est~: deux parts et demi pour Saturne, une part et demi pour Jupiter, et une part pour Mars. \`A présent, j'affirme qu'on a aussi trouvé ceci par l'observation~: quand le centre de l'épicycle est à la moitié de l'arc d'orbe incliné situé entre les n{\oe}uds (\textit{i. e.} là où l'inclinaison de l'orbe incliné par rapport au parécliptique est maximale, vers le Nord), l'inclinaison maximale du plan de l'épicycle par rapport à l'orbe incliné vaut quatre parts et demi pour Saturne, deux parts et demi pour Jupiter, et deux parts un quart pour Mars. Par l'observation, on a trouvé qu'alors le rayon passant par l'Apogée et le périgée de l'épicycle avait son extrémité à l'Apogée comprise entre l'orbe incliné et le parécliptique, et son extrémité au périgée inclinée du même côté que l'orbe incliné. Le rayon perpendiculaire au rayon mentionné est alors parallèle au plan de l'écliptique. Puis on a observé quand le centre de l'épicycle est situé aux n{\oe}uds~: on a trouvé que les plans des trois cercles\footnote{Trois cercles~: le déférent, le rotateur et l'épicycle.} étaient dans le plan du parécliptique, c'est-à-dire le plan de l'écliptique. En effet, on a trouvé que la latitude des astres était nulle quand le centre de l'épicycle est à la tête ou bien à la queue, et que l'astre est en un point quelconque de l'orbe de l'épicycle. J'en déduis que les diamètres passant par les apogées ne sont pas constamment situés dans les plans des orbes inclinés, ni dans les plans des parécliptiques, sauf quand les centres des épicycles sont en l'un des n{\oe}uds. Après [un passage à la tête] les apogées des planètes supérieures s'inclinent toujours du côté du plan de l'écliptique, et leurs périgées du côté opposé. Leur inclinaison augmente jusqu'à un maximum atteint au milieu de l'arc compris entre les deux n{\oe}uds, puis elle décroît jusqu'à disparaître au deuxième n{\oe}ud.
Puis l'apogée commence à s'incliner vers le plan de l'écliptique, et le périgée du côté opposé~; cette inclinaison augmente jusqu'à atteindre son maximum au milieu de l'arc compris entre les deux n{\oe}uds, puis elle diminue jusqu'à disparaître à la tête~; et la chose recommence depuis le début. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{BFAWBB@\RL{n.tq}!BEBFAWBBAI@\RL{min.taqaT}, ceinture} \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{AWBHBD@\RL{.twl}!AWBHBD@\RL{.tUl}, longitude (par rapport à l'écliptique)} \index{ACAUBD@\RL{'a.sl}!ACAUBD@\RL{'a.sl}, 1) fondement, 2) origine} \includepdf[pages=94,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection L'apogée, après s'être éloigné des n{\oe}uds, reste toujours entre les deux ceintures~; pour le périgée c'est le contraire. Cela a pour effet de diminuer les latitudes des apogées et d'augmenter les latitudes des périgées. Si la distance du centre de l'épicycle au centre de l'écliptique varie, alors l'inclinaison de l'apogée et du périgée par rapport au centre de l'écliptique varie aussi~: si la distance du centre de l'épicycle augmente, alors l'inclinaison de l'apogée décroît, et inversement. \label{noeud_sup1} Le n{\oe}ud ascendant de Saturne au premier jour de l'année sept cent neuf du calendrier persan est à cinq parts du Lion, le n{\oe}ud ascendant de Jupiter à vingt-et-une parts du Cancer, et le n{\oe}ud ascendant de Mars à dix-huit parts du Taureau. Pour chaque astre, le n{\oe}ud descendant est à l'opposé du n{\oe}ud ascendant. Un quart de cercle après le n{\oe}ud ascendant se situe la latitude maximale vers le Nord~; c'est pour Saturne $2;4$ quand il est à l'apogée de l'épicycle et $3;3$ quand il est au périgée\footnote{Ces valeurs et les suivantes sont tirées des tables de latitudes de l'\textit{Almageste} (chapitre XIII). Les données de l'observation rapportées par Ptolémée sont respectivement~: 2°, 3°, 1°, 2°, --, $4;20$°, 2°, 3°, 1°, 2°, --, 7° (\textit{cf.} \cite{swerdlow2005} p.~45-46).} pour Jupiter $1;6$ quand il est à l'apogée et $2;4$ quand il est au périgée, et pour Mars $0;7$ quand il est à l'apogée et $4;21$ quand il est au périgée. Un quart de cercle après le n{\oe}ud descendant se situe la latitude maximale vers le Sud~; c'est pour Saturne $2;2$ quand il est à l'apogée de l'épicycle et $3;5$ quand il est en son périgée, pour Jupiter $1;5$ quand il est à l'apogée de l'épicycle et $2;8$ quand il est au périgée, et pour Mars $0;3$ quand il est à l'apogée de l'épicycle et $7;7$ quand il est en son périgée. \begin{center} \large Section \end{center} Sache que, concernant le problème des latitudes, ces fondements sont des fondements nécessaires trouvés par l'observation, certains ayant aussi été confirmés par le calcul~; on ne peut aucunement s'en passer. Ni Ptolémée ni Hipparque, ni aucun autre des Anciens, ni aucun des Modernes jusqu'à nos jours, n'a pu faire cela~: donner à l'astronomie des fondements simples et vrais qui suffisent à la fois pour les mouvements en longitude et pour les mouvements en latitude. 
Ptolémée n'a pas pu imaginer de fondements suffisants pour les mouvements en latitude~; que l'on ait ou pas d'indulgence pour la théorie classique des mouvements en longitude, on ne peut avoir aucune indulgence pour les mouvements en latitude. Le problème est le suivant. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \index{ATBJAQAGARBJ@\RL{q.tb al-dIn al-^sIrAzI}, Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i}} \index{AYAQAVBJ@\RL{al-mwyd al-`r.dI}, al-Mu'ayyad al-`Ur\d{d}{\=\i}} \index{BGBJAKBE@\RL{ibn al-hay_tam}, Ibn al-Haytham} \index{AHAOAB@\RL{bda'}!BEAHAOAB@\RL{mabda'}, principe, origine} \includepdf[pages=95,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Quand le centre de l'épicycle est au milieu de l'arc compris entre les n{\oe}uds, que le plan de l'épicycle est incliné par rapport au plan de l'orbe incliné (comme je l'ai dit et comme on l'a trouvé par l'observation) et que l'orbe incliné tourne en direction du n{\oe}ud, alors il faut que l'épicycle ne cesse d'être incliné par rapport au plan de l'orbe incliné, y compris au n{\oe}ud. Nous avons certes vérifié par l'observation que le plan de l'épicycle n'est pas incliné par rapport au plan de l'écliptique et qu'eux deux ont une latitude nulle. Inversement, après que l'épicycle a été dans le plan de l'écliptique et quand il parcourt l'arc compris entre les n{\oe}uds, il présente une forte inclinaison. \label{critique_lat1} Le rapprochement de la ceinture de l'épicycle et de la ceinture de l'orbe incliné, sans un mobile qui ne perturbe les mouvements en longitude, dans l'astronomie classique de l'\textit{Almageste} et des \emph{Hypothèses} concernant les corps mus en latitude, est impossible. Ce dont se sont efforcés Ptolémée et ses successeurs comme Ibn al-Haytham dans son \textit{\'Epître}, Na\d{s}{\=\i}r al-\d{T}\=us{\=\i} dans sa \textit{Tadhkira}, Mu'ayyad al-`Ur\d{d}{\=\i} dans son \textit{Astronomie}, Qu\d{t}b al-Sh{\=\i}r\=az{\=\i} dans sa \uwave{\textit{Muntaha 'adw\=ar}}, et ce que quiconque a repris de leurs propos dans son livre (sans compter l'extension qu'on leur a donnée, et la gloire qu'on s'est faite du principe \textit{ibd\=a`{\=\i}}), rien de tout cela ne suffit au but visé pour la longitude et la latitude, et quand cela convient pour l'une, cela pèche pour l'autre. Que Dieu pardonne à ceux qui ont ici admis leur faiblesse. \`A la fin de ce qu'a dit Qu\d{t}b al-D{\=\i}n, il a dit (que Dieu aide celui qui examine son livre) qu'il avait trouvé une manière parfaite de tout définir et d'éliminer les défauts qui restent dans ce que nous avons mentionné (c'est Dieu qui inspire les choses correctes)~; mais Ptolémée avait déjà dit des choses semblables~; la connaissance appartient à ceux qui la mentionnent [en premier]. Sache que la raison empêchant de se débarrasser des doutes dans l'astronomie connue est que le principe de cette astronomie est faux~; il est donc nécessaire de changer complètement ce principe pour l'amener à une position sauve du doute et qui satisfasse au problème posé, si c'est possible, comme nous l'avons fait.
Dès lors que nous avons conçu la configuration des orbes de la manière sus-dite, que nous en sommes venus aux latitudes de ces astres, et que nous avons rapporté ce qu'on a trouvé par l'observation concernant leurs latitudes et leurs inclinaisons, nous avons trouvé que ceci était en accord avec notre conception des orbes et de leurs mouvements ; ceci se montre par ce que je vais expliquer ici. \newpage\phantomsection \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \index{ATBJAQAGARBJ@\RL{q.tb al-dIn al-^sIrAzI}, Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i}} \index{AYAQAVBJ@\RL{al-mwyd al-`r.dI}, al-Mu'ayyad al-`Ur\d{d}{\=\i}} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{BABDBC@\RL{flk}!BABDBC AJAOBHBJAQ@\RL{falak al-tadwIr}, orbe de l'épicycle} \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BEBJBD@\RL{myl}!BEBJBD@\RL{myl}, inclinaison, inclinaison maximale (\textit{i. e.} mesure d'un angle dièdre)} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \includepdf[pages=96,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Supposons que, quand le centre du déférent et du rotateur sont au milieu de l'arc compris entre les n{\oe}uds, le diamètre du déférent est incliné par rapport à l'orbe incliné du côté où se trouve l'inclinaison du périgée de l'épicycle et que son inclinaison est chez Saturne trois parts et demi ; le rotateur de Saturne est incliné par rapport au plan de son déférent d'une seule part. L'orbe déférent de Mars est incliné par rapport au plan de l'orbe incliné d'un degré et demi et un huitième, et l'épicycle de Mars est incliné par rapport au plan de l'orbe déférent d'une demi-part et un huitième. Enfin, le diamètre de l'orbe déférent de Jupiter est incliné par rapport au plan de l'orbe incliné de deux parts, et le diamètre de l'épicycle de Jupiter est incliné par rapport au plan du déférent de Jupiter d'une demi-part. Faisons se mouvoir les orbes autour de leurs centres en donnant à chacun son mouvement simple de la grandeur supposée dans le sens supposé. Quand l'orbe incliné se meut d'un quart de cercle, le déférent se meut d'un quart de cercle aussi, le rotateur se meut d'un demi-cercle, et l'inclinaison change de côté: l'inclinaison de l'épicycle passe du côté opposé à l'inclinaison du déférent et le plan de l'épicycle vient dans le plan de l'écliptique\footnote{En fait, bien qu'ils deviennent parallèles, les deux plans ne sont jamais confondus. Ils le seraient si le rayon de la ceinture du déférent était nul, \textit{i. e.} si le centre du rotateur était confondu avec le centre du déférent. Comme le rayon du déférent est petit devant le rayon de l'orbe incliné, cette hypothèse peut se justifier~; {Ibn al-\v{S}\=a\d{t}ir} l'a aussi adoptée dans le paragraphe suivant et sur la figure.}. En effet on a posé l'inclinaison du déférent égale à la somme de l'inclinaison de l'orbe incliné et de l'inclinaison de l'épicycle.
Or quand s'inverse l'inclinaison de l'épicycle il faut soustraire l'inclinaison de l'épicycle de l'inclinaison du déférent, il reste l'excédent de l'inclinaison de l'orbe déférent sur l'inclinaison de l'épicycle, et cet excédent est de la grandeur de l'inclinaison de l'orbe incliné; le centre du déférent étant arrivé au n{\oe}ud, il a atteint le plan de l'écliptique, et le plan de l'épicycle est donc dans le plan de l'écliptique. Ceci est évident. Prête attention et sache que la somme de l'inclinaison du déférent et de l'inclinaison de l'épicycle a été posée égale à l'inclinaison, observée, de l'épicycle par rapport au plan de l'orbe incliné, et que la différence entre l'inclinaison du déférent et l'inclinaison de l'épicycle a été posée de la grandeur de l'inclinaison de l'orbe incliné. De là, tu sais aussi que, quand les deux inclinaisons s'ajoutent, elles atteignent un maximum et c'est au milieu de l'arc compris entre les n{\oe}uds; mais quand elles sont opposées, leur différence est l'inclinaison de l'orbe incliné, on est aux n{\oe}uds et l'épicycle est dans le plan de l'écliptique. Comme on a supposé l'épicycle dans le plan de l'écliptique à cause de l'observation, cette notion est donc juste. \newpage\phantomsection \index{AYAOBD@\RL{`dl}!BEAQBCAR BEAYAOAOBD@\RL{markaz mu`addal}, centre corrigé} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \index{ATBJAQAGARBJ@\RL{q.tb al-dIn al-^sIrAzI}, Qu\d{t}b al-D{\=\i}n al-\v{S}{\=\i}r\=az{\=\i}} \index{AYAQAVBJ@\RL{al-mwyd al-`r.dI}, al-Mu'ayyad al-`Ur\d{d}{\=\i}} \includepdf[pages=97,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Pour démontrer cela\footnote{Pour transcrire les lettres \textit{\=a l\=a b j d h w z \d{h} \d{t} {\=\i} k l m n s ` f \d{s} q r \v{s}} utilisées dans la figure, on utilisera respectivement les lettres A A' B C D E F G H I J K L M N S O P \d{S} Q R \v{S}.}, menons la droite AB dans le plan de l'écliptique et soit B le centre de l'écliptique. En soit issue la droite BE inclinée par rapport à AB comme l'inclinaison de l'orbe incliné par rapport au plan de l'écliptique. Traçons la droite CED supposée diamètre de l'orbe déférent; du point E, menons aussi la droite GI supposée diamètre de l'épicycle et formant un angle SEG somme des deux inclinaisons quand elles sont du même côté (au milieu de l'arc entre les n{\oe}uds). Quand le centre du déférent se déplace vers le point B, le diamètre du déférent CD garde son inclinaison et devient la droite Q\d{S}. Si le diamètre de l'épicycle gardait alors son inclinaison, la droite GI deviendrait la droite OK; mais si le rotateur tourne d'un demi-cercle, alors l'inclinaison du diamètre OK par rapport au diamètre \d{S}Q passe de l'autre côté, l'angle QBK devient l'angle A'BQ et le diamètre de l'épicycle (la droite OK) coïncide avec le plan de l'écliptique. Il en est ainsi car la somme des deux inclinaisons est comme l'inclinaison de l'épicycle, et l'excédent de l'inclinaison du déférent sur l'inclinaison de l'épicycle est comme l'inclinaison de l'orbe incliné. On a déjà répété cela. Attention, quand on dit que le centre arrive aux n{\oe}uds, il faut comprendre: le centre corrigé.
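\par\medskip\noindent\emph{Note de l'éditeur}~: le schéma somme/différence qui vient d'être exposé se vérifie par un calcul élémentaire sur les valeurs des pages précédentes~: la somme des deux inclinaisons posées redonne l'inclinaison observée de l'épicycle sur l'orbe incliné, et leur différence l'inclinaison de l'orbe incliné sur le parécliptique. Esquisse de vérification (les valeurs numériques sont celles du texte, exprimées en degrés décimaux)~:
\begin{verbatim}
# (inclinaison du deferent, inclinaison du rotateur/epicycle) posees au chapitre 24,
# comparees aux valeurs observees : (epicycle / orbe incline, orbe incline / parecliptique).
inclinaisons = {
    "Saturne": ((3.5, 1.0),                  (4.5, 2.5)),
    "Jupiter": ((2.0, 0.5),                  (2.5, 1.5)),
    "Mars":    ((1.5 + 0.125, 0.5 + 0.125),  (2.25, 1.0)),
}
for astre, ((i_def, i_epi), (obs_somme, obs_diff)) in inclinaisons.items():
    assert abs((i_def + i_epi) - obs_somme) < 1e-9   # somme = inclinaison observee de l'epicycle
    assert abs((i_def - i_epi) - obs_diff) < 1e-9    # difference = inclinaison de l'orbe incline
    print(astre, i_def + i_epi, i_def - i_epi)
\end{verbatim}
\medskip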
\begin{center} \large Remarque \end{center}\label{critique_lat2} Si nous avions posé l'inclinaison du déférent de la même grandeur que l'inclinaison de l'épicycle, c'est-à-dire toutes deux de la grandeur de l'inclinaison de l'épicycle trouvée par l'observation, alors après rotations du déférent d'un quart de cercle et du rotateur d'un demi-cercle le plan de l'épicycle serait dans le plan de l'orbe incliné; c'est évident. Or l'observation confirme que c'est le plan de l'écliptique et non le plan de l'orbe incliné. La plupart des Modernes -- Na\d{s}{\=\i}r al-\d{T}\=us{\=\i}, Mu'ayyad al-`Ur\d{d}{\=\i}, Qu\d{t}b al-Sh{\=\i}r\=az{\=\i} -- ont pensé que la latitude s'annulait quand c'est le plan de l'orbe incliné: c'est impossible puisque le plan de l'épicycle reste incliné par rapport au plan de l'écliptique. C'est très clair~; cependant les observateurs ont dit qu'il est soit dans le plan de l'écliptique soit proche de celui-ci, et ils n'ont pas vérifié avec certitude qu'il est exactement dans le plan de l'écliptique. \newpage\phantomsection \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AYBBAO@\RL{`qd}!AYBBAOAI@\RL{`qdaT}, n{\oe}ud} \index{ACBHAL@\RL{'awj}!ACBHAL@\RL{'awj}, Apogée} \includepdf[pages=98,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Deuxième remarque \end{center} Si l'on avait trouvé par l'observation que l'inclinaison de l'épicycle \emph{par rapport à l'orbe incliné}\footnote{Nous avons mis en italique~: {Ibn al-\v{S}\=a\d{t}ir} envisage ici un modèle où l'épicycle serait incliné par rapport à l'orbe incliné et non par rapport au déférent (les plans du déférent, du rotateur et de l'orbe incliné seraient confondus).} était de la grandeur trouvée par l'observation, alors après rotation du déférent d'un quart de cercle et rotation, dans le même temps, du rotateur, d'un demi-cercle, l'inclinaison de l'épicycle s'inverserait~; mais il passe alors dans le plan de l'écliptique; or on a trouvé par l'observation que l'inclinaison de l'épicycle est différente de l'inclinaison de l'orbe incliné. \`A cause de cela, nous avons déduit les inclinaisons des déférents et des épicycles de sorte que la somme des deux inclinaisons soit comme l'inclinaison, observée, de l'épicycle, et que la différence des deux soit comme l'inclinaison de l'orbe incliné. Ce que nous avons accompli est correct. \begin{center} \large Troisième remarque \end{center} Puisqu'on a découvert par l'observation que l'inclinaison maximale de l'épicycle par rapport à l'orbe incliné est atteinte au milieu de l'arc entre les n{\oe}uds, que là se trouve la dernière latitude Nord, et que ce lieu est, chez Mars, le lieu de l'Apogée, alors les diamètres du déférent et de l'épicycle passant par l'Apogée et le périgée coïncident avec la droite issue du centre de l'écliptique et passant par le lieu de la dernière latitude Nord (c'est-à-dire le milieu de l'arc entre les n{\oe}uds): car ce lieu est le lieu de l'Apogée chez Mars. 
\label{noeud_sup2} Puisque le lieu de la dernière latitude Nord qui est milieu de l'arc entre les n{\oe}uds suit le lieu de l'Apogée chez Jupiter de vingt parts d'après Ptolémée et vingt-huit parts d'après nos observations, et qu'il précède le lieu de l'Apogée chez Saturne de cinquante parts, alors le diamètre dans les dernières latitudes n'est pas celui passant par l'Apogée du déférent et par l'Apogée du rotateur. Si nous supposons que les orbes sont à l'Apogée, puis qu'ils se meuvent, le déférent se mouvant de vingt parts, et le rotateur, du double, alors l'Apogée du déférent se déplace de vingt parts dans le sens des signes et devient le point D, et le diamètre EI devient celui où se situent les dernières latitudes; le rotateur ayant tourné du double de cela, son Apogée devient le point F tel que l'arc DE soit comme l'arc EF, et le diamètre [du rotateur] coïncidant\footnote{Il faudrait plutôt dire ``parallèle à la droite EI''.} avec la droite EI se situe aux dernières latitudes. \newpage\phantomsection \index{BBAWAQ@\RL{q.tr}!BBAWAQ AYAQAVBJ@\RL{qi.tr `r.diy}, diamètre des latitudes} \includepdf[pages=99,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection On fait de même pour Saturne: posons que les orbes se meuvent à l'envers et reviennent au lieu situé cinquante parts avant l'Apogée de Saturne, lieu de ses dernières latitudes. Parmi les diamètres du déférent et les diamètres du rotateur, ceux qui coïncident avec la droite issue du centre de l'écliptique dans la direction des dernières latitudes sont les deux \emph{diamètres des latitudes} (j'entends par là, les diamètres en lesquels sont les dernières latitudes). Voir la figure. \begin{center} \large Remarque \end{center} Si l'on avait supposé que le déférent était dans le plan de l'orbe incliné, que l'inclinaison du rotateur par rapport au plan du déférent était comme l'inclinaison qu'on a supposée du déférent par rapport à l'orbe incliné, et que l'inclinaison de l'épicycle par rapport au rotateur était ce qu'elle est, alors le résultat de ces deux hypothèses serait le même. Sache cela. \begin{figure}[h!] \centering\label{lat_inf_profil} \footnotesize\input{lat_inf_profil_fr.pdf_tex} \end{figure} \newpage\phantomsection \includepdf[pages=100,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \begin{figure}[h!] \centering \noindent\hspace{-85mm}\hspace{.5\textwidth}\begin{minipage}{17cm} \footnotesize\input{lat_inf_fr.pdf_tex} \end{minipage} \end{figure} \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \index{AYBBAO@\RL{`qd}!AYBBAOAI@\RL{`qdaT}, n{\oe}ud} \index{AYAQAV@\RL{`r.d}!AYAQAV@\RL{`r.d}, latitude, \emph{i. e.} par rapport à l'écliptique} \index{ACBHAL@\RL{'awj}!ACBHAL@\RL{'awj}, Apogée} \index{BBAWAQ@\RL{q.tr}!BBAWAQ@\RL{qi.tr}, diamètre} \index{BEBJBD@\RL{myl}!BEBJBD BHAQAGAH@\RL{mIl al-wirAb}, inclinaison de biais} \index{BEBJBD@\RL{myl}!BEBJBD AJAOBHBJAQ@\RL{mIl al-tadwIr}, inclinaison de l'épicycle} \addcontentsline{toc}{chapter}{I.25 Mouvements de Vénus et Mercure en latitude, en deux parties} \includepdf[pages=101,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-cinq \large Mouvements de Vénus et Mercure en latitude, en deux parties \end{center} \begin{center} \large Première partie \normalsize Les latitudes de Vénus \end{center} Chez Ptolémée, il y a deux théories à ce sujet. 
\emph{Sa première théorie} est celle qu'il expose dans l'\emph{Almageste} quand il dit que l'orbe incliné de Vénus tantôt s'approche tantôt s'éloigne du plan de l'écliptique. Aux n{\oe}uds, le centre de l'épicycle de Vénus est dans le plan de l'écliptique, mais hors des n{\oe}uds, il est toujours au Nord du plan de l'écliptique (au maximum, à un sixième de degré, inclinaison vers le Nord de l'orbe incliné de Vénus quand elle est à l'Apogée). Quand le centre de son épicycle est aux n{\oe}uds (c'est-à-dire aux quadratures de l'Apogée), le diamètre perpendiculaire au diamètre passant par l'Apogée de l'épicycle et par son périgée est dans le plan de l'écliptique; quand il est à la queue, l'Apogée de l'épicycle est incliné vers le Nord et le périgée vers le Sud; quand il est à la tête, l'Apogée de l'épicycle est incliné vers le Sud et son périgée vers le Nord; la grandeur de cette inclinaison est deux degrés et demi dans l'\emph{Almageste}, et elle s'appelle \emph{inclinaison de l'épicycle}. Quand le centre de l'épicycle est à l'Apogée, le diamètre passant par l'Apogée de l'épicycle et par son périgée est dans le plan de l'orbe incliné, et le diamètre perpendiculaire est incliné par rapport au plan de l'écliptique, de sorte que la moitié allant de l'Apogée de l'épicycle à son périgée est au Nord de l'orbe incliné, et l'autre moitié, au Sud; la grandeur de cette inclinaison par rapport au plan de l'écliptique est de trois degrés et demi, et elle s'appelle \emph{inclinaison de biais}. C'est la théorie de Ptolémée dans l'\emph{Almageste}. Certains parmi les Modernes l'ont amendée par l'observation : ils ont trouvé que l'inclinaison de l'épicycle était égale à l'inclinaison de biais. Les savants ont calculé les tables de latitude de Vénus selon cette théorie. Sa \emph{seconde théorie} est celle qu'il expose dans son livre connu sous le nom des \emph{Hypothèses} et qui vient après le premier: l'orbe incliné de Vénus serait d'inclinaison constante de part et d'autre. Son inclinaison maximale est un sixième de degré vers le Nord, et de même vers le Sud. L'inclinaison de l'épicycle est, dans les deux cas, trois degrés et demi. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \includepdf[pages=102,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Ni Ptolémée ni aucun autre n'avait encore pu établir les principes du mouvement de l'épicycle et de l'orbe incliné selon la description ci-dessus à cause de la variation d'inclinaison de l'épicycle dans les deux cas, et de la variation des deux diamètres dont l'un est incliné et l'autre est tantôt dans le plan de l'écliptique, tantôt dans le plan de l'orbe incliné. On y est parvenu grâce à Dieu. \`A cet effet, on supposera que l'inclinaison maximale de l'orbe incliné vers le Nord est atteinte à l'Apogée de Vénus et que cette inclinaison est un sixième de degré. C'est une inclinaison de grandeur constante vers le Nord, et de même vers le Sud~; mais elle se meut, elle se déplace avec l'Apogée, et les n{\oe}uds (la tête et la queue) se déplacent aussi de par son mouvement. Supposons le centre de l'épicycle et du rotateur sur la droite passant par l'Apogée. Supposons le déférent dans le plan de l'orbe incliné. 
Supposons l'apogée moyen du rotateur incliné vers le Sud par rapport au plan de l'orbe incliné, d'un angle de cinq minutes d'arc, et la moitié du rotateur commençant à l'Apogée (c'est la << première >> moitié) inclinée vers le Nord par rapport au plan de l'orbe incliné, formant un angle en son centre d'une grandeur de trois degrés. Supposons l'apogée de l'épicycle incliné par rapport à l'apogée du rotateur, vers le Nord, de cinq minutes d'arc. Le diamètre de l'épicycle [passant par] l'Apogée reste donc dans le plan de l'orbe incliné. Supposons la première moitié de l'épicycle (celle qui commence à l'Apogée) inclinée d'un demi-degré vers le Nord par rapport à la moitié du rotateur inclinée vers le Nord. Ceci étant admis, que les orbes tournent. Que l'orbe incliné tourne d'un quart de cercle, le déférent d'un quart de cercle aussi, et le rotateur d'un demi-cercle dans le même temps; alors l'inclinaison de l'épicycle s'inverse, le diamètre perpendiculaire au diamètre de l'Apogée arrive dans le plan de l'écliptique, et l'autre diamètre est incliné par rapport au plan de l'écliptique d'un angle d'une grandeur de deux parts et demi; ceci, à condition que la variation d'inclinaison de l'épicycle soit comme l'indique l'\emph{Almageste}. Quant à l'amendement fait par les Modernes (pour qui l'épicycle est incliné aux n{\oe}uds, à l'Apogée et au périgée de trois degrés et demi), et la théorie qu'expose Ptolémée dans les \emph{Hypothèses}, nous supposerons l'orbe incliné comme on l'a indiqué dans la première théorie. Le déférent est dans le plan de l'orbe incliné. \newpage\phantomsection \index{AQABAS@\RL{ra'asa}!AQABAS@\RL{ra's}, tête, n{\oe}ud ascendant} \index{APBFAH@\RL{_dnb}!APBFAH@\RL{_dnb}, queue, n{\oe}ud descendant} \index{ACBHAL@\RL{'awj}!ACBHAL@\RL{'awj}, Apogée} \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BABDBC@\RL{flk}!BABDBC BEAGAEBD@\RL{falak mA'il}, orbe incliné} \label{var2} \includepdf[pages=103,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent \`A l'Apogée, l'apogée du rotateur est incliné par rapport à l'orbe incliné, de cinq minutes d'arc vers le Sud~; l'apogée de l'épicycle est incliné par rapport à l'apogée du rotateur, de cinq minutes d'arc vers le Nord~; la moitié du rotateur\footnote{Ici, le manuscrit porte le mot <<~épicycle~>>, \textit{cf.} texte arabe. C'est pourtant le plan du rotateur qui doit subir l'inclinaison de $3°30$ ; l'épicyle en hérite par composition.} commençant à l'Apogée est inclinée vers le Nord de trois degrés et demi. C'est la voie sur laquelle s'appuie l'amendement fait par les Modernes. L'inclinaison maximale de l'épicycle est, vers le Sud, $8;40$, et vers le Nord, $1;3$, quand l'inclinaison maximale de biais\footnote{Il est difficile de donner un sens précis à ces deux valeurs. Peut-être $1°3'$ désigne-t-elle la latitude Nord maximale atteinte par Vénus quand le centre de l'épicycle est à la queue~: le diamètre de l'épicycle passant par son apogée est alors incliné de $2°30'$ par rapport au plan de l'écliptique, et on a $$1°3'\simeq 2°30'\times\frac{43;33}{60+43;33}.$$ La valeur $8°40'$ semble en revanche être erronée, bien que les latitudes de Vénus puissent dépasser cette valeur dans le modèle fondé sur l'<<~amendement fait par les Modernes~>>.} est $2;30$. La dernière latitude Nord est toujours, pour l'orbe incliné, un sixième de degré. La tête de Vénus précède l'Apogée d'un quart de cercle, et sa queue le suit d'un quart de cercle. 
Les dernières latitudes Nord et Sud, pour l'orbe incliné, sont à l'Apogée et au périgée. \emph{Remarque}. Concernant la latitude de Vénus, tout ce qu'ont fait les Anciens et les Modernes en concevant des ceintures qui s'approchent ou une inclinaison variable de l'épicycle est impossible. Qui est bien versé dans cet art ne l'ignore pas. \begin{center} \large Deuxième partie \normalsize Les latitudes de Mercure \end{center} Chez Ptolémée, il y a deux théories, la première dans l'\emph{Almageste} et la seconde dans les \emph{Hypothèses}. \emph{Première théorie}. Nous supposons un orbe incliné par rapport au parécliptique, d'inclinaison maximale atteinte en l'Apogée, vers le Sud, un demi-degré et un quart. Cette situation à l'Apogée est réciproque de celle au périgée, où l'inclinaison est vers le Nord. De plus, la ceinture de l'orbe incliné tend à se rapprocher de la ceinture de l'écliptique jusqu'à ce qu'elles se confondent, puis elle s'incline à nouveau d'autant dans l'autre sens (et de même pour le lieu situé à l'opposé, [au périgée]). Or ceci est impossible: on ne peut le concevoir dans l'astronomie de Ptolémée, et ni lui ni aucun autre n'a abordé le problème du mobile de ce mouvement. \newpage\phantomsection \index{BEBJBD@\RL{myl}!BEBJBD BHAQAGAH@\RL{mIl al-wirAb}, inclinaison de biais} \index{BEBJBD@\RL{myl}!BEBJBD AJAOBHBJAQ@\RL{mIl al-tadwIr}, inclinaison de l'épicycle} \includepdf[pages=104,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection D'autre part, quand le centre de l'épicycle est au milieu de l'arc entre les n{\oe}uds, à l'Apogée, à la dernière latitude Sud, il faut supposer que le diamètre passant par l'Apogée et le périgée de l'épicycle est dans le plan de l'orbe incliné, et que l'autre diamètre est incliné de sorte que la première moitié de l'épicycle soit au Sud, l'autre moitié au Nord, et que l'angle de cette inclinaison (dite \emph{de biais}) soit sept degrés. Quand le centre de l'épicycle est aux n{\oe}uds, le diamètre perpendiculaire à l'Apogée est dans le plan de l'écliptique, et l'autre diamètre est incliné par rapport au plan de l'écliptique, son périgée vers le Nord et son Apogée vers le Sud; l'angle de cette inclinaison est six degrés et un quart, et on l'appelle \emph{inclinaison de l'épicycle}. Ainsi, le centre de l'épicycle est constamment au Sud du plan de l'écliptique, ou bien dans ce plan (aux n{\oe}uds). C'est cette théorie qu'on trouve dans l'\textit{Almageste}. \emph{Seconde théorie}. C'est la théorie des \emph{Hypothèses}. Là, l'inclinaison de Mercure est constante, vers le Sud à l'Apogée, d'une grandeur de six degrés (et l'autre extrémité, au périgée, est donc inclinée vers le Nord de six degrés). Quand le centre de l'épicycle est entre les deux n{\oe}uds, le diamètre passant par l'Apogée coïncide avec l'orbe incliné, et l'autre diamètre est incliné (ainsi que la deuxième moitié de l'épicycle) vers le Nord d'un angle de six degrés et demi; on l'appelle \emph{inclinaison de biais}. Quand le centre de l'épicycle est [aux] n{\oe}uds, le diamètre perpendiculaire au diamètre de l'Apogée est dans le plan de l'écliptique, et le diamètre de l'Apogée est incliné, son périgée vers le Nord et son Apogée vers le Sud, d'un angle égal à l'inclinaison de biais, c'est-à-dire six parts et demi. C'est la seconde théorie; mais ni Ptolémée ni aucun autre n'a pu établir des principes permettant de concevoir ces inclinaisons sans perturber les mouvements en longitude. Nous avons pu concevoir ces deux théories~; gloire à Dieu. 
Voici \emph{la théorie sur laquelle on s'est appuyé}. Nous supposons que l'inclinaison maximale de l'orbe incliné est vers le Sud à l'Apogée et vers le Nord au périgée (cette inclinaison maximale est de six degrés dans les \emph{Hypothèses}). \newpage\phantomsection \index{BABDBC@\RL{flk}!BABDBCAMAGBEBD@\RL{falak .hAmil}, orbe déférent} \index{BABDBC@\RL{flk}!BABDBCBEAOBJAQ@\RL{falak mdIr}, orbe rotateur} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{BEBJBD@\RL{myl}!BEBJBD BHAQAGAH@\RL{mIl al-wirAb}, inclinaison de biais} \includepdf[pages=105,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Nous supposons que l'apogée du rotateur est incliné vers le Nord de cinq minutes d'arc, que la seconde moitié du rotateur est inclinée vers le Sud d'un angle de six degrés, un demi-degré et un huitième de degré par rapport au plan de l'orbe incliné, que l'apogée de l'épicycle est incliné par rapport à l'apogée du rotateur vers le Sud de cinq minutes d'arc, et que sa seconde moitié est inclinée de sept degrés vers le Nord par rapport au plan de l'orbe incliné (elle est donc inclinée d'un quart et un huitième de degré par rapport au plan du rotateur). Quand les orbes se meuvent, l'orbe incliné d'un quart de cercle, le déférent aussi, et l'épicycle d'un demi-cercle, alors l'inclinaison du diamètre de l'Apogée devient six degrés et un quart, et le diamètre qui lui est perpendiculaire passe dans le plan de l'écliptique. Or il en est ainsi à l'observation, c'est donc la voie sur laquelle on s'est appuyé; elle ne prend pas en considération que, selon Ptolémée, le centre de l'épicycle serait toujours au Sud, mais il est certes revenu de cette opinion dans les \emph{Hypothèses}. Sache cela. La seconde théorie est conçue comme suit. Nous supposons l'orbe incliné comme on l'a décrit ci-dessus, de sorte qu'il coupe le parécliptique aux n{\oe}uds et que l'inclinaison maximale soit vers le Sud à l'Apogée et vers le Nord au périgée. Nous supposons que le centre de l'épicycle est à l'Apogée au milieu de l'arc compris entre les n{\oe}uds. Nous supposons que l'Apogée du rotateur est incliné de cinq minutes d'arc vers le Nord, et que sa seconde moitié est inclinée vers le Nord, et l'autre moitié vers le Sud, d'un angle de six degrés et demi; on appelle cela \emph{inclinaison de biais}. Nous supposons l'apogée de l'épicycle incliné par rapport à l'apogée du rotateur de cinq minutes d'arc vers le Sud; alors le diamètre de l'Apogée est dans le plan de l'orbe incliné. Nous supposons que la seconde moitié de l'épicycle est inclinée de six degrés et demi par rapport au plan de l'orbe incliné. Que l'on se représente cela, puis que les orbes se meuvent jusqu'à ce que le centre de l'épicycle soit au n{\oe}ud. Le diamètre perpendiculaire au diamètre de l'Apogée coïncide avec le plan de l'écliptique, et le diamètre de l'Apogée reste incliné de six degrés et demi par rapport au plan de l'écliptique. La moitié [de l'épicycle] contenant son Apogée est vers le Sud, et l'autre moitié vers le Nord. C'est ainsi qu'il faut concevoir les mouvements des orbes en latitude, de façon à ne pas perturber les mouvements en longitude. 
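\par\medskip\noindent\emph{Note de l'éditeur}~: si l'on suit la même logique somme/différence que pour les planètes supérieures, les valeurs de Mercure données ci-dessus sont cohérentes entre elles~; la petite vérification suivante (esquisse fondée sur notre lecture du texte) retrouve le <<~quart et un huitième de degré~>> et les <<~six degrés et un quart~>>~:
\begin{verbatim}
# Theorie retenue pour Mercure (valeurs du texte, en degres decimaux).
rotateur_sur_orbe = 6 + 0.5 + 0.125   # seconde moitie du rotateur : 6;37,30 au Sud de l'orbe incline
epicycle_sur_orbe = 7.0               # seconde moitie de l'epicycle : 7 degres au Nord de l'orbe incline

epicycle_sur_rotateur = epicycle_sur_orbe - rotateur_sur_orbe
assert abs(epicycle_sur_rotateur - (0.25 + 0.125)) < 1e-9   # << un quart et un huitieme de degre >>

# Apres le quart de tour de l'orbe incline et du deferent (et le demi-tour correspondant),
# c'est la difference des deux inclinaisons qui subsiste sur le diametre de l'Apogee :
assert abs((rotateur_sur_orbe - epicycle_sur_rotateur) - 6.25) < 1e-9   # << six degres et un quart >>
print(epicycle_sur_rotateur, rotateur_sur_orbe - epicycle_sur_rotateur)
\end{verbatim}
\medskip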
\newpage\phantomsection \index{AWBHBD@\RL{.twl}!AWBHBD@\RL{.tUl}, longitude (par rapport à l'écliptique)} \index{BFBBBD@\RL{nql}!AJAYAOBJBD BFBBBD@\RL{ta`dIl al-naql}, équation du déplacement} \index{AOBHAQ@\RL{dwr}!BEAOAGAQAGAJ AYAQBHAV@\RL{mdArAt al-`rU.d}, trajectoires selon les latitudes (\textit{i. e.} cercles parallèles à l'écliptique)} \index{BBBHBE@\RL{qwm}!BEBBBHBE BCBHBCAH@\RL{mqwm al-kawkab}, astre vrai (= longitude vraie de l'astre)} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE@\RL{taqwIm}, 1) calcul de la longitude vraie, 2) longitude vraie d'un astre} \index{BBBHBE@\RL{qwm}!AJBBBHBJBE BBBEAQ@\RL{tqwIm al-qamar}, Lune vraie} \label{var3} \includepdf[pages=106,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Remarque \end{center} Supposer, au besoin, que les orbes sont dans les plans des [orbes qui] les portent ne perturbe pas les mouvements en longitude de manière trop importante. Tu sais que ceci est analogue à l'équation du déplacement\footnote{Le raisonnement peu rigoureux d'Ibn al-\v{S}\=a\d{t}ir est à peu près le suivant. Pour chaque astre, aussi bien que pour la Lune qu'il a traitée dans un chapitre antérieur, l'inclinaison de l'orbe incliné a peu d'influence sur le mouvement en longitude (l'équation du déplacement, qu'il sait calculer, est faible). Donc les inclinaisons des autres orbes auront \textit{a fortiori} une influence négligeable sur les mouvements en longitude.} de la Lune vraie de l'orbe incliné au parécliptique. Puisque la dernière latitude de la Lune est de cinq degrés, et que la variation maximale due au déplacement est de six minutes d'arc et deux tiers, l'omission de l'équation du déplacement des trajectoires selon les latitudes à la ceinture de l'écliptique aura pour effet une perturbation d'une minute d'arc et un tiers, par degré de latitude, au plus: c'est une grandeur négligeable. Calcule le déplacement si tu le souhaites. En voici la méthode. Prends l'élongation entre l'astre vrai et la tête de cet astre. Prends l'équation qui correspond à cette élongation dans la table de l'équation du déplacement de la Lune. Multiplie cela par la latitude de cet astre, et divise le produit par la latitude maximale de la Lune (c'est cinq degrés). En sort l'équation qu'il faut ajouter à l'astre vrai si le nombre entré dans la table était compris entre quatre-vingt-dix et cent quatre-vingt ou entre deux cent soixante-dix et trois cent soixante degrés, et qu'il faut sinon soustraire de l'astre vrai; reste [l'astre] vrai véritable, rapporté à l'écliptique. Cette équation est maximale dans les octants, et nulle aux n{\oe}uds et aux dernières latitudes de part et d'autre. Dieu est le plus grand et le plus savant. \label{lat_fin} \newpage\phantomsection \index{ASAQAY@\RL{sr`}!ASAQAYAI@\RL{sur`aT}, vitesse} \index{AHAWBH@\RL{b.tw}!AHAWAB@\RL{bu.t'}, lenteur} \index{BHBBBA@\RL{wqf}!BHBBBHBA@\RL{wuqUf}, station|see{\RL{rujU`}}} \index{AQALAY@\RL{rj`}!AQALBHAY@\RL{rujU`}, rétrogradation} \index{AOBHAQ@\RL{dwr}!AJAOBHBJAQ@\RL{tadwIr}, épicycle} \index{AMAQBC@\RL{.hrk}!ARAGBHBJAI AMAQBCAI@\RL{zAwiyaT al-.harakaT}, mouvement angulaire} \addcontentsline{toc}{chapter}{I.26 Cause de la vitesse, de la lenteur, des stations et des rétrogradations des astres} \includepdf[pages=107,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-six \large Cause de la vitesse, de la lenteur, des stations et des rétrogradations des astres. 
\end{center} On a vu que le centre de l'épicycle était sur le bord de l'orbe incliné, que le mouvement de l'orbe incliné était dans le sens des signes, et que le mouvement de l'épicycle était aussi dans le sens des signes dans sa partie supérieure. Si l'astre est dans la partie supérieure de l'épicycle, les mouvements de l'épicycle et de l'orbe incliné sont dans le sens des signes, et il en résulte que l'astre est rapide à cause de l'addition des deux mouvements qui vont dans le même sens. Si l'astre est dans la partie inférieure de l'épicycle, le mouvement de l'épicycle est en sens inverse du mouvement de l'orbe incliné. Dans ce cas, si le mouvement angulaire de l'orbe incliné par jour est égal au mouvement angulaire de l'épicycle, l'astre paraît stationnaire~; si le [mouvement] angulaire de l'épicycle est supérieur au [mouvement] angulaire de l'orbe incliné, l'astre paraît rétrograder d'autant que l'excédent~; et si le [mouvement] angulaire de l'épicycle par jour est inférieur au [mouvement] angulaire de l'orbe incliné, l'astre paraît avancer d'autant que la différence entre les deux [mouvements] angulaires. Si nous représentons un orbe d'épicycle, et que nous menons en son centre une droite issue du centre de l'écliptique, alors le rapport du mouvement de l'épicycle au mouvement de l'orbe incliné est soit inférieur, soit égal, soit supérieur au rapport de la droite joignant le centre de l'orbe incliné et le périgée de l'épicycle au rayon de l'épicycle. \emph{Si le rapport est inférieur}, alors les seuls effets causés en l'astre par les deux mouvements sont la vitesse dans la partie lointaine et la lenteur dans la partie proche, parce que le mouvement dans la partie lointaine est la somme des deux mouvements, et qu'il est, dans la partie proche, la différence entre le mouvement de l'orbe incliné et le mouvement de l'épicycle. L'astre n'a alors ni station ni rétrogradation, car il y a station quand le rapport des droites mentionnées est égal au rapport des deux mouvements, et la rétrogradation est établie quand le rapport est inférieur (ce rapport est, [au périgée de l'épicycle], le plus petit de ces rapports, donc un [autre] rapport, égal ou inférieur, ne se peut trouver). \emph{Si le rapport est égal}, alors l'astre aura une station au milieu de la période de lenteur, c'est-à-dire quand il sera à distance minimale et sur la droite mentionnée. \newpage\phantomsection \index{APAQBH@\RL{_drw}!APAQBHAI BEAQAEBJBJAI@\RL{_dirwaT mar'iyyaT}, apogée apparent} \index{BBBHBE@\RL{qwm}!BEBBBJBE AQALBHAY@\RL{muqIm lil-rujU` / lil-istiqAmaT}, en station rétrograde / en station directe} \index{BBBHAS@\RL{qws}!BBBHAS AQALBHAY@\RL{qws al-rujU` / al-istiqAmaT}, arc rétrograde / arc direct} \includepdf[pages=108,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Il n'y aura pas rétrogradation, car la rétrogradation est établie quand le rapport des droites est inférieur au rapport des deux mouvements, mais ce rapport, étant égal au plus petit de tous les rapports, ne peut en avoir un qui le minore~: en effet le plus petit des rapports est atteint dans l'épicycle quand la droite passe par son centre. L'astre n'aura donc pas de rétrogradation. \emph{Si le rapport est supérieur}, alors l'astre aura une rétrogradation dans la partie proche, entre deux stations. 
Les stations se situent des deux côtés de l'épicycle, sur deux droites issues du centre de l'écliptique telles que, pour chacune, le rapport du mouvement de l'épicycle au mouvement de l'orbe incliné soit égal au rapport du segment de cette droite entre le centre de l'écliptique et le bord de l'épicycle près du périgée à la demi-corde de l'épicycle située sur cette droite. \`A l'arrivée de l'astre dans la partie proche, à la première de ces deux droites (appelée première station), on dit que l'astre est en \textit{station rétrograde}~: il s'arrête après un ralentissement progressif, et de là, il rétrograde jusqu'à son arrivée à la seconde droite. Il rétrograde d'abord de plus en plus vite, la vitesse maximale étant en vigueur quand l'astre est à distance minimale, puis il ralentit à nouveau jusqu'à atteindre la seconde droite, lieu de la seconde station, où l'on dit que l'astre est en \textit{station directe} et où il subit un second arrêt. L'arc compris entre les deux droites du côté du périgée s'appelle \textit{arc rétrograde}, et l'arc compris entre les deux droites du côté de l'apogée de l'épicycle apparent s'appelle \textit{arc direct}. Après le second arrêt, l'astre avance en allant progressivement de l'arrêt à une course lente, puis à une course moyenne, puis à une course rapide. Les courses moyennes sont en vigueur entre la lenteur et la vitesse, quand l'astre est situé à distance moyenne le long de l'épicycle. L'avance la plus rapide a lieu à l'apogée apparent au milieu de l'arc direct. La rétrogradation la plus rapide a lieu au périgée apparent, au milieu de l'arc rétrograde. \newpage\phantomsection \begin{center} \input{fig_ch26_ar.pdf_tex} \end{center} \newpage\phantomsection \begin{center} \input{fig_ch26_fr.pdf_tex} \end{center} \newpage\phantomsection \index{ALBEAY@\RL{jm`}!AEALAJBEAGAYAI@\RL{'ijtimA`}, conjonction} \index{BBAHBD@\RL{qbl}!AEASAJBBAHAGBDAI@\RL{'istiqbAl}, opposition} \includepdf[pages=109,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Le rapport de la droite entre le centre de l'écliptique et le périgée de l'épicycle de la Lune ou du Soleil au rayon de l'épicycle est supérieur au rapport du mouvement de l'épicycle au mouvement de l'orbe incliné, donc aucun des deux n'a de rétrogradation ni de station, mais seulement de la lenteur et de la vitesse, d'après ce que je t'ai expliqué. Le rapport précédent est donc un critère. Sache qu'avance et rétrogradation sont composés à partir des angles parcourus lors de mouvements uniformes. On soustrait le plus petit du plus grand~: si domine ce qui est dans le sens des signes, alors l'astre avance, sinon il rétrograde. Si additif et soustractif sont égaux, alors l'astre semble s'arrêter. \emph{Remarque}\footnote{L'objet de cette remarque est une coïncidence importante présente dans tous les modèles médiévaux~; ce sera un argument essentiel en faveur de l'héliocentrisme copernicien aux yeux de Kepler.}. Les trois planètes supérieures sont en conjonction avec le Soleil aux apogées de leurs épicycles (on appelle cela \uwave{<<~foyer~>>}), et en opposition avec le Soleil aux périgées de leurs épicycles. Les planètes inférieures sont en conjonction avec le Soleil aux apogées et aux périgées de leurs épicycles (on appelle ça aussi un \uwave{<<~foyer~>>}). Dieu est le plus savant.
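\par\medskip\noindent\emph{Note de l'éditeur}~: le critère de ce chapitre se résume à la comparaison de deux rapports, celui des mouvements angulaires et celui des longueurs~; l'esquisse suivante le transcrit tel quel, et les valeurs de l'exemple, de l'ordre de celles de Mars, sont données à titre purement indicatif~:
\begin{verbatim}
# Critere du chapitre 26 : on compare le rapport des mouvements (epicycle / orbe incline)
# au rapport (droite du centre du Monde au perigee de l'epicycle) / (rayon de l'epicycle).
def comportement(v_epicycle, v_orbe, distance_centre_epicycle, rayon_epicycle):
    rapport_mouvements = v_epicycle / v_orbe
    rapport_droites = (distance_centre_epicycle - rayon_epicycle) / rayon_epicycle
    if rapport_mouvements < rapport_droites:
        return "ni station ni retrogradation (seulement vitesse et lenteur)"
    if rapport_mouvements == rapport_droites:
        return "une seule station, au perigee, sans retrogradation"
    return "retrogradation entre deux stations"

# Exemple illustratif (mouvements en degres par jour, longueurs en parts de 60) :
# comportement(v_epicycle=0.46, v_orbe=0.52, distance_centre_epicycle=60.0, rayon_epicycle=39.5)
# -> "retrogradation entre deux stations"
\end{verbatim}
\medskip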
\newpage\phantomsection \index{BBBHAS@\RL{qws}!BBBHAS AQABBJAI@\RL{qws al-ru'yaT}, arc de visibilité} \index{AXBGAQ@\RL{.zhr}!AXBGBHAQ@\RL{.zuhUr}, visibilité ($\neq$ \RL{i_htifA'})} \index{ANBABJ@\RL{_hafiya}!ANAJBAAGAB@\RL{i_htifA'}, invisibilité} \index{AWBDAY@\RL{.tl`}!AWBDBHAY@\RL{.tulU`}, lever (d'un astre)} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{AMAWAW@\RL{.h.t.t}!BFAMAWAGAW@\RL{in.hi.tA.t}, abaissement|see{\RL{irtifA`}}} \index{AQBAAY@\RL{rf`}!AQAJBAAGAY@\RL{irtifA`}, hauteur (coordonnées azimutales)} \addcontentsline{toc}{chapter}{I.27 Visibilité et invisibilité des cinq astres errants} \includepdf[pages=110,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-sept \large Visibilité et invisibilité des cinq astres errants \end{center} Quand l'abaissement du Soleil sous l'horizon au lever\footnote{Mesuré le long d'un cercle de hauteur, cet abaissement maximal du Soleil sous l'horizon s'appelle <<~arc de visibilité~>> dans les paragraphes suivants.} est, pour Saturne \uwave{$11;0$}, pour Jupiter $10;0$, pour Mars $11;30$, \emph{les trois planètes supérieures} sont à la limite de leur visibilité. Quand elles sont en opposition avec le Soleil, l'arc de visibilité mesure la moitié des arcs ci-dessus. L'arc de visibilité de \emph{Vénus et Mercure}, quand ils commencent à être visibles le soir (et ne le sont plus le matin), est sept parts pour Vénus et douze parts pour Mercure. Quand ils sont visibles le matin (et plus le soir), leur arc de visibilité est cinq parts pour Vénus et sept parts pour Mercure. La cause nécessaire de la diminution des arcs est la grandeur [apparente] des deux corps parce que les deux astres sont alors proches du périgée de l'épicycle. Ces limites sont telles lorsque le centre de l'épicycle est à distance moyenne~; elles varient sinon en fonction de cela, de la latitude des planètes, et d'autres raisons, comme nous l'avons expliqué avec l'élaboration de ce livre dans notre livre \emph{Commentaire des observations}. 
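\par\medskip\noindent\emph{Note de l'éditeur}~: la règle de visibilité qui précède revient à comparer l'abaissement du Soleil sous l'horizon à un seuil propre à chaque astre~; esquisse (seuils du texte, le centre de l'épicycle étant supposé à distance moyenne)~:
\begin{verbatim}
# Arcs de visibilite du chapitre 27, en degres (valeurs du texte).
arcs_de_visibilite = {
    "Saturne": 11.0, "Jupiter": 10.0, "Mars": 11.5,
    "Venus (le soir)": 7.0, "Mercure (le soir)": 12.0,
    "Venus (le matin)": 5.0, "Mercure (le matin)": 7.0,
}

def visible(astre, abaissement_du_soleil):
    # L'astre est a la limite de sa visibilite quand l'abaissement atteint son arc ;
    # en opposition, les arcs des planetes superieures sont reduits de moitie.
    return abaissement_du_soleil >= arcs_de_visibilite[astre]

# Exemples : visible("Mars", 12.0) -> True ; visible("Mercure (le soir)", 10.0) -> False
\end{verbatim}
\medskip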
\newpage\phantomsection \index{BFBHAQ@\RL{nwr}!BFBJBJAQ@\RL{nayyir}, lumineux} \index{BFBHAQ@\RL{nwr}!BFBJAQAGBF@\RL{al-nayrAn}, les deux luminaires (Lune et Soleil)} \index{ALBEAY@\RL{jm`}!AEALAJBEAGAYAI@\RL{'ijtimA`}, conjonction} \index{BBAHBD@\RL{qbl}!AEASAJBBAHAGBDAI@\RL{'istiqbAl}, opposition} \index{AXBDBD@\RL{.zll}!BEANAQBHAW AXBDBD@\RL{ma_hrU.t al-.zill}, cône d'ombre} \index{BBBEAQ@\RL{qmr}!BBBEAQ@\RL{qamar}, Lune} \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \index{ANASBA@\RL{_hsf}!ANASBHBA@\RL{_husUf}, éclipse de Lune} \index{BGBDBD@\RL{hll}!BGBDAGBD@\RL{hilAl}, croissant (de Lune)} \index{AHAUAQ@\RL{b.sr}!AHAUAQ@\RL{ba.sar}, 1)~vision, 2)~observateur} \index{AXBDBE@\RL{.zlm}!BEAXBDBE@\RL{mu.zlim}, obscur ($\neq$ \RL{nayyir})} \index{ATAYAY@\RL{^s``}!ATAYAGAY@\RL{^su`A`}, rayon (de lumière)} \index{AVBHAB@\RL{.daw'}!AVBHAB@\RL{.daw'}, clarté} \index{BFAXAQ@\RL{n.zr}!ANAJBDAGBA BEBFAXAQ@\RL{i_htilAf al-man.zr}, parallaxe} \addcontentsline{toc}{chapter}{I.28 Cause des éclipses de Lune et de Soleil, en deux sections} \includepdf[pages=111,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-huit \large Cause des éclipses de Lune et de Soleil, en deux sections \end{center} \begin{center} \large Première section \normalsize La cause des éclipses de Lune \end{center} On a établi que la Lune est un corps sphérique opaque obscur, sous la lumière du Soleil. Ses rayons ne sont pas transmis par elle~; malgré son obscurité elle est lisse et les rayons du Soleil sont réfléchis par elle. La clarté lui est attribuée, mais ce sont en réalité les rayons du Soleil réfléchis par le corps de la Lune. La face devant le Soleil est toujours lumineuse et l'autre face est toujours obscure. Quand le croissant apparaît, sa partie visible est de la grandeur dont est déviée la moitié éclairée qui est devant le Soleil. La déviation augmente chaque nuit jusqu'à l'opposition~; la face devant le Soleil devient alors visible entièrement car l'observateur s'interpose entre les deux luminaires. Puis le processus s'inverse~: la partie visible se soustrait de la lumière jusqu'à la conjonction des deux luminaires. Puis le processus recommence depuis le début. La cause des éclipses de Lune est son entrée dans le cône d'ombre de la Terre. Cela ne peut se produire que pendant l'opposition des deux luminaires. Une autre condition est que la latitude [de la Lune] doit être inférieure à soixante-trois minutes. Si sa latitude est égale à cela, elle touche le cône d'ombre de la Terre sans y tomber. [Si sa latitude est] inférieure à cela, elle entre dans le cône d'ombre de la Terre~: une partie de la Lune s'éclipse, de la grandeur de ce qui entre dans le cône d'ombre de la Terre. [Si sa latitude est] supérieure à un degré et trois minutes, il n'y a pas d'éclipse, la clarté du Soleil se propage jusqu'à sa surface, et elle apparaît comme lumière en étant réfléchie. Le fait décisif concerne la latitude de la Lune quand elle est en opposition avec le Soleil. Si elle est supérieure à la somme du rayon de la Lune et du rayon de l'ombre de la Terre alors il n'y a pas éclipse de Lune. Si la latitude est égale aux deux rayons alors la Lune touche le bord du disque d'ombre, de l'extérieur, du côté de la latitude de la Lune, et il n'y a pas d'éclipse non plus. Si la latitude est inférieure aux deux rayons alors il y a bien éclipse.
\newpage\phantomsection \index{BCASBA@\RL{ksf}!BCASBHBAAI@\RL{ksUf}, éclipse de Soleil} \index{ALBEAY@\RL{jm`}!AEALAJBEAGAYAI@\RL{'ijtimA`}, conjonction} \index{AHAUAQ@\RL{b.sr}!AHAUAQ@\RL{ba.sar}, 1)~vision, 2)~observateur} \index{BCAKBA@\RL{k_tf}!BCAKAGBAAI@\RL{ki_tAfaT}, opacité} \label{var23} \includepdf[pages=112,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Dans ce dernier cas, si la latitude est pourtant supérieure au rayon de l'ombre, alors est éclipsée une partie moindre que la moitié~; si elle lui est égale, alors le bord du disque d'ombre passe par le centre de cette face de la Lune et la moitié est éclipsée~; si la latitude est inférieure au rayon de l'ombre mais supérieure à la différence entre le rayon de la Lune et le rayon de l'ombre, alors est éclipsée une partie plus grande que la moitié~; si la latitude est égale à la différence entre le rayon de la Lune et le rayon de l'ombre, alors la Lune touche le bord du disque d'ombre, de l'intérieur, du côté de sa latitude, et elle est totalement éclipsée mais elle ne peut le demeurer~; si la latitude est inférieure à cette différence, alors l'éclipse est totale et dure en proportion de combien la Lune s'est engagée dans le disque d'ombre. La durée est maximale quand le centre de la Lune passe, à mi-temps de l'éclipse, au centre de disque de l'ombre -- une éclipse dure quand la Lune reste éclipsée et sombre pendant un certain temps, et cette durée est proportionnelle au rapport entre la portion de l'écliptique contenue dans le disque de l'ombre et l'excédent de sa vitesse sur la vitesse du Soleil. Le commencement de l'assombrissement suivi de l'éclaircissement est vers le Sud-Est si la latitude de la Lune est au Nord, il est vers le Nord-Est si la latitude est au Sud, et si la latitude est nulle alors [cela dépend de son sens de variation]. La partie sombre de la Lune comporte toujours deux parties convexes, l'une (à l'opposé de la latitude) appartenant à la Lune, l'autre appartenant au disque de l'ombre. Sa partie lumineuse a toujours la forme d'un croissant dont la partie convexe appartient à la Lune et dont la partie concave appartient au disque de l'ombre. \begin{center} \large Deuxième section \normalsize La cause des éclipses de Soleil \end{center} [Le Soleil est éclipsé] quand il cesse d'éclairer les éléments proches de nous, à un instant où d'habitude il les éclaire. [Ces éclipses] sont causées par l'interposition de la Lune entre l'observateur et le Soleil~: elle dérobe alors la lumière du Soleil aux regards de l'observateur à cause de son opacité. Dès qu'elle traverse l'intervalle entre l'observateur et le Soleil, l'éclipse survient. Cela se produit lors des conjonctions qui ont lieu en plein jour, pourvu que la Lune soit alignée avec le Soleil et l'observateur. Si elle n'entre pas dans l'alignement avec l'observateur par rapport au Soleil, celui-ci ne s'éclipsera pas pendant cette conjonction. 
\newpage\phantomsection \index{BFBHAQ@\RL{nwr}!AMBDBBAI BFBHAQAGBFBJAI@\RL{.halaqaT nUrAniyaT}, anneau lumineux (lors d'une éclipse annulaire)} \index{BGBDBD@\RL{hll}!BGBDAGBD@\RL{hilAl}, croissant (de Lune)} \index{BFAXAQ@\RL{n.zr}!ANAJBDAGBA BEBFAXAQ@\RL{i_htilAf al-man.zr}, parallaxe} \includepdf[pages=113,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Ce qui importe est la conjonction des astres tels qu'ils apparaissent après correction avec la parallaxe~: grâce à cela on sait qu'il peut y avoir éclipse chez certains peuples et non chez les autres bien que le Soleil soit au-dessus de l'horizon chez les uns comme chez les autres, à cause de la variation de la position relative de la Lune par rapport aux observateurs en fonction du lieu qu'ils habitent. Elle dépend du pays~: il peut y avoir éclipse dans un pays et non dans l'autre, ou bien éclipse dans l'un et éclipse dans l'autre mais de grandeurs différentes, dans des directions différentes, ou de durées différentes. Sache que les diamètres des deux luminaires sont égaux en apparence, ou différents. Qu'ils soient égaux, et que la latitude apparente soit inférieure à la somme des rayons des deux luminaires~; alors, si elle est égale au rayon du Soleil, il est éclipsé à moitié~; si elle est supérieure, il est éclipsé moins qu'à moitié~; si elle est inférieure, il est éclipsé plus qu'à moitié~; si la latitude apparente est nulle, il est entièrement éclipsé mais cela ne dure pas. Et si les rayons des deux luminaires étaient différents en apparence~? Que le rayon du Soleil soit plus grand en apparence que le rayon de la Lune~; si la latitude apparente déjà mentionnée est supérieure au rayon du Soleil, il est éclipsé moins qu'à moitié~; mais s'il y a égalité, il est aussi éclipsé moins qu'à moitié, d'autant moins que l'excédent de son rayon sur le rayon de la Lune est grand~; si la latitude est inférieure et qu'elle est égale à la différence entre le rayon du Soleil et le rayon de la Lune, la Lune touche de l'intérieur le bord du disque du Soleil dont il ne reste plus qu'un anneau lumineux de la forme d'un croissant~; si la latitude de la Lune est exactement nulle, à mi-temps de l'éclipse l'anneau deviendra circulaire, entourant le corps de la Lune, de rondeur uniforme~; si la latitude est entre ces deux états, la Lune est sculptée différemment d'un croissant ou d'un anneau circulaire parfait. Dans le dernier cas, ainsi que dans le cas d'un croissant, la partie lumineuse la plus épaisse est du côté opposé à la latitude apparente. Enfin, que le rayon de la Lune soit plus grand en apparence que le rayon du Soleil~; si la latitude apparente déjà mentionnée est égale au rayon de la Lune, il est éclipsé à moitié, le bord de la Lune passant par le centre du Soleil~; si la latitude est supérieure à cela, il est éclipsé moins qu'à moitié~; si la latitude est inférieure à cela et qu'elle est égale à la différence entre le rayon du Soleil et le rayon de la Lune, il est éclipsé entièrement mais cela ne dure pas~; si elle est encore inférieure à la différence, l'éclipse totale dure en proportion de cela~; la durée maximale est atteinte quand la latitude de la Lune est exactement nulle à mi-temps de l'éclipse, et cette durée est proportionnelle au rapport entre la différence des deux diamètres et la vitesse de la Lune. 
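On peut récapituler ces cas au moyen de la latitude apparente, notée ici $\beta'$, et des rayons apparents $r_S$ et $r_L$ des deux luminaires (notations ajoutées pour ce seul récapitulatif)~:
$$\beta' \ge r_S + r_L \;\Rightarrow\; \text{pas d'éclipse}~;\qquad |r_S - r_L| < \beta' < r_S + r_L \;\Rightarrow\; \text{éclipse partielle}~;$$
$$\beta' \le |r_S - r_L| \;\Rightarrow\; \text{éclipse totale si } r_L \ge r_S,\ \text{annulaire si } r_S > r_L.$$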
\newpage\phantomsection \index{ANASBA@\RL{_hsf}!ANASBHBA@\RL{_husUf}, éclipse de Lune} \index{AXBDBD@\RL{.zll}!BBAWAQ AXBDBD@\RL{qi.tr al-.zill}, diamètre de l'ombre} \index{ATBGAQ@\RL{^shr}!ATBGAQ BBBEAQBJBJ BHASAW@\RL{^shr qamariyy}, mois lunaire} \addcontentsline{toc}{chapter}{I.29 Les intervalles de temps entre les éclipses de Lune et de Soleil} \includepdf[pages=114,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Chapitre vingt-neuf \large Les intervalles de temps entre les éclipses de Lune et de Soleil, en deux sections \end{center} \begin{center} \large Première section \normalsize Les intervalles de temps entre les éclipses de Lune \end{center} Sache que les limites des éclipses sont quand, à l'opposition des deux luminaires, la latitude de la Lune est inférieure à un degré et quatre minutes. Cette latitude requiert une distance au n{\oe}ud (quel que soit le n{\oe}ud, et que la distance soit dans le sens des signes ou en sens contraire) de douze degrés. Si la latitude dépasse cela, elle dépasse la somme des rayons de la Lune et de l'ombre de la Terre. Sachant cela, sache qu'il ne peut y avoir deux éclipses de Lune à un mois d'intervalle, car entre les limites des éclipses de Lune, de part et d'autre, il n'y a pas plus que vingt-cinq parts, or le Soleil décrit plus que cette grandeur en un mois lunaire et il sort donc des limites de l'éclipse. Il ne peut pas non plus y avoir deux éclipses de Lune à sept mois d'intervalle si l'opposition lors de l'éclipse a lieu avant l'arrivée au premier n{\oe}ud, à l'extrême limite [du segment entre les limites], et que l'autre opposition est après le passage du second n{\oe}ud, sept mois plus tard, car la seconde opposition ne tombe pas entre les limites d'une éclipse et elle dépasse la grandeur limite pour une éclipse, en sens contraire des signes, au-delà du second n{\oe}ud. En effet, le Soleil se meut en sept mois lunaires d'environ deux cent cinq degrés. S'il est lors de la première opposition à la limite de l'éclipse (comme on l'a supposé), il dépassera la limite de l'éclipse d'un degré lors de la seconde opposition~: après une durée de douze degrés il arrive au premier n{\oe}ud, puis après cent quatre-vingt degrés il arrive au second n{\oe}ud, et après treize jours il passe la limite de l'éclipse d'un degré. Cela suppose que le n{\oe}ud est immobile, mais en fait il se meut de onze degrés pendant cette durée en sens contraire des signes, donc la distance entre le Soleil et la limite de l'éclipse est douze degrés~; ainsi il ne peut y avoir deux éclipses de Lune à sept mois d'intervalle. Le plus souvent, il y a six mois entre deux éclipses car pendant cette durée le Soleil se déplace du voisinage d'un des n{\oe}uds au voisinage de l'autre n{\oe}ud. \newpage\phantomsection \index{BCASBA@\RL{ksf}!BCASBHBAAI@\RL{ksUf}, éclipse de Soleil} \index{BFAXAQ@\RL{n.zr}!ANAJBDAGBA BEBFAXAQ@\RL{i_htilAf al-man.zr}, parallaxe} \index{ASBCBF@\RL{skn}!BEASBCBHBF@\RL{al-maskUn}, la partie habitée (du globe)} \includepdf[pages=115,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Il peut aussi y avoir cinq mois entre deux éclipses, mais plus rarement. Il faut qu'une opposition lors d'une éclipse ait lieu après le passage au n{\oe}ud à l'extrême limite [du segment entre les limites], puis qu'une autre opposition ait lieu cinq mois plus tard avant d'atteindre l'autre n{\oe}ud. 
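Vérifions, sauf erreur, l'arithmétique du cas des sept mois, le Soleil parcourant environ un degré par jour~:
$$12^\circ + 180^\circ + 13^\circ = 205^\circ,\qquad (13^\circ - 12^\circ) + 11^\circ = 12^\circ~;$$
le Soleil dépasserait la seconde limite d'un degré si le n{\oe}ud était immobile, et la dépasse de douze degrés compte tenu du recul du n{\oe}ud.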
Cette configuration à cinq mois d'intervalle peut arriver à la limite de l'éclipse, à cause du mouvement du n{\oe}ud qui est pendant cette durée de huit parts en sens contraire des signes, mais aucune des deux éclipses ne peut alors être totale (tandis que des éclipses à six mois d'intervalle peuvent être toutes deux totales, toutes deux partielles, ou bien l'une totale et l'autre partielle). \begin{center} \large Deuxième section \normalsize Les intervalles de temps entre les éclipses de Soleil \end{center} On détermine ces intervalles de temps au moyen des limites des éclipses de Soleil mais elles ne sont pas égales des deux côtés comme pour les éclipses de Lune, car c'est la latitude apparente qui importe ici (et non la latitude réelle comme pour les éclipses de Lune), or la parallaxe doit tantôt être ajoutée à la latitude de la Lune, tantôt retranchée, pour obtenir la latitude apparente, et la limite par rapport au n{\oe}ud de part et d'autre diffère selon la parallaxe du lieu. Au centre du monde habité, le sinus de la latitude est de trente-six parts, et le lieu des éclipses s'étend jusqu'à dix-huit degrés après le n{\oe}ud ascendant ou avant le n{\oe}ud descendant, mais seulement jusqu'à neuf degrés avant le n{\oe}ud ascendant ou après le n{\oe}ud descendant. En effet, la parallaxe maximale à cette latitude est de soixante-quatre minutes~; or les rayons des deux luminaires font, au plus, un total de trente-quatre minutes~; si la latitude est quatre-vingt-dix-huit minutes Nord, retranches-en la parallaxe, il reste trente-quatre minutes de latitude apparente, or c'est égal aux deux rayons des luminaires, et la limite de l'éclipse au Nord est donc bien le premier nombre ci-dessus (la latitude de la Lune est quatre-vingt-dix-huit minutes quand la Lune est environ dix-huit degrés après le n{\oe}ud ascendant ou avant le n{\oe}ud descendant). Quand la Lune a une latitude Sud, son maximum est trente-quatre minutes, et elle a cette latitude quand elle est six degrés et demi après le n{\oe}ud descendant ou avant le n{\oe}ud ascendant. \newpage\phantomsection \index{BCASBA@\RL{ksf}!BCASBHBAAI@\RL{ksUf}, éclipse de Soleil} \index{ANASBA@\RL{_hsf}!ANASBHBA@\RL{_husUf}, éclipse de Lune} \includepdf[pages=116,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Ceci étant admis, sache que~: -- il ne peut y avoir deux éclipses de Soleil à un mois d'intervalle au même endroit, -- mais cela peut arriver dans deux endroits différents, l'un dans les latitudes Nord, l'autre dans les latitudes Sud. Le premier point résulte du fait que les limites dans lesquelles peut avoir lieu l'éclipse, avant le n{\oe}ud et après le n{\oe}ud, sont distantes de vingt-cinq degrés, or le Soleil se meut d'environ un signe par mois lunaire, c'est-à-dire trente degrés, donc il sort des limites de l'éclipse et celle-ci ne peut avoir lieu. Pour le second point, si l'on fait avec les latitudes Sud dans les endroits situés dans les latitudes Sud comme on l'a fait avec les latitudes Nord dans les endroits situés dans les latitudes Nord, on trouve que les deux limites des éclipses sont distantes de trente-six degrés, or le Soleil décrit trente degrés par mois, donc il peut retomber dans les limites de l'éclipse. Il arrive qu'il y ait deux éclipses de Soleil au même endroit à cinq mois d'intervalle, l'une après le n{\oe}ud ascendant, puis l'autre avant le n{\oe}ud descendant. Cela arrive aussi à sept mois d'intervalle, l'une avant le n{\oe}ud descendant, puis l'autre avant le n{\oe}ud ascendant, mais le premier cas est plus fréquent. 
Et l'occurrence de deux éclipses de Soleil à six mois d'intervalle n'est pas sujette à caution~: c'est le cas le plus fréquent. Enfin, il peut y avoir une éclipse de Soleil la moitié d'un mois après une éclipse de Lune. Les limites de l'éclipse de Lune sont comptées le long d'un cercle passant par le centre de l'ombre, perpendiculaire à l'orbe incliné~; et la limite de l'éclipse de Soleil est comptée le long d'un arc passant par le centre de la Lune, perpendiculaire à l'écliptique. Dieu est le plus savant. \newpage\phantomsection \index{AKBBBD@\RL{_tql}!BEAQBCAR AKBBBD@\RL{markaz al-_tql}, centre de gravité} \index{AMALBE@\RL{.hjm}!BEAQBCAR AMALBE@\RL{markaz al-.hjm}, centre du volume} \index{AOBHAQ@\RL{dwr}!ASAJAOAGAQAI@\RL{istidAraT}, sphéricité (de la Terre, des cieux)} \index{BAAQASAN@\RL{frs_h}!BAAQASAN@\RL{farsa_h j frAs_h}, parasange (= 3 milles)} \index{BEBJBD@\RL{myl}!BEBJBD BEBJAGBD@\RL{mIl j amyAl}, mille (= 4000 coudées modernes)} \index{APAQAY@\RL{_dr`}!APAQAGAY@\RL{_dirA` j a_drA`}, coudée (coudée moderne = 24 doigts)} \index{AUAHAY@\RL{.sb`}!AUAHAY@\RL{a.sb`}, doigt (= 6 grains d'orge)} \index{ATAYAQ@\RL{^s`r}!ATAYBJAQAI BEAYAJAOBDAI@\RL{^sa`IraT mu`tadalaT}, grain d'orge (unité de longueur)} \index{BFAUBA@\RL{n.sf}!ANAWAW BFAUBA BFBGAGAQ@\RL{_ha.t.t n.sf al-nahAr}, ligne méridienne} \index{BABDBC@\RL{flk}!BABDBC@\RL{flk}, 1) orbe, 2) cieux} \index{ALAHBD@\RL{jbl}!ALAHBD@\RL{jabal j jibAl}, montagne} \index{ACAQAV@\RL{'ar.d}!ACAQAV@\RL{'ar.d}, la Terre} \index{AZBHAQ@\RL{.gwr}!AZBHAQ@\RL{.gawr j 'a.gwAr}, terrain encaissé} \addcontentsline{toc}{chapter}{I.30 Science de la forme de la Terre et de sa surface} \includepdf[pages=117,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Trentième et dernier chapitre \large Science de la forme de la Terre et de sa surface \end{center} En astronomie, on a déjà établi des preuves que l'ensemble de la Terre et de l'eau est sphérique, qu'elle est au centre des cieux, que son centre de gravité coïncide avec le centre du Monde, et qu'elle est immobile au centre [des cieux]. Elle ne s'incline d'aucun côté autrement que ce qu'exige la différence entre son centre de gravité et le centre de son volume. Ainsi la sphéricité de la surface de la Terre et de l'eau est parallèle à la sphéricité des cieux~; les reliefs qui proviennent des montagnes et des terrains encaissés ne la font pas sortir des limites de la sphéricité, car ils sont insensibles par comparaison avec la grandeur de la Terre. En effet, soit une montagne haute d'une demi-parasange\footnote{La parasange est une unité de longueur. On va voir que~: -- 1 parasange = 3 milles -- 1 mille = 4000 coudées modernes -- 1 coudée moderne = 24 doigts (1 coudée ancienne = 32 doigts) -- 1 doigt = 6 grains d'orge Dans ce paragraphe, {Ibn al-\v{S}\=a\d{t}ir} compte en coudées modernes.}, c'est-à-dire six mille coudées. Par rapport à l'ensemble de la Terre, elle est moindre qu'un cinquième d'un septième de la largeur d'un grain d'orge\footnote{Si la circonférence terrestre est de 8000 parasanges, et que la coudée (moderne) mesure 144 grains d'orge, alors on a bien~: $$\frac{\dfrac{1}{2}\times 144}{8000\times \dfrac{7}{22}}=\frac{198}{1000}\times\frac{1}{7}<\frac{1}{5}\times\frac{1}{7}.$$ } par rapport à une sphère d'une coudée de diamètre~; c'est bien insensible par comparaison avec une telle sphère. Sache que les grands cercles à la surface de la Terre sont parallèles aux grands cercles célestes. 
Ceux qui sont à sa surface se subdivisent comme eux en trois cent soixante parts, et chaque part du cercle terrestre est nommée comme son homologue du cercle céleste. Si un voyageur se déplace le long du méridien, tout droit, sur une terre uniforme, jusqu'à ce que le pôle s'élève ou bien s'abaisse d'un degré exactement, alors la grandeur dont il s'est déplacé le long de ce cercle est d'un degré, et le périmètre entier [du cercle] fait trois cent soixante fois la grandeur dont il s'est déplacé. \newpage\phantomsection \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{AOAQAL@\RL{drj}!AOAQALAI@\RL{drjaT j darjAt, drj}, degré} \index{ALBEBGAQ@\RL{jmhr}!ALBEBGBHAQ@\RL{al-jumhUr}, les Grecs} \index{ACAQAV@\RL{'ar.d}!AOBHAQ ACAQAV@\RL{dwr al-'ar.d}, circonférence terrestre} \includepdf[pages=118,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Hipparque et Ptolémée (auteur de l'\emph{Almageste}) ont déjà montré cela et ils ont trouvé qu'une portion d'un degré du grand cercle imaginaire sur la Terre mesure soixante-six milles et deux tiers de mille, si le mille mesure trois mille coudées\footnote{Coudées anciennes}, la coudée\footnote{Coudée ancienne} mesure trente-deux doigts, et le doigt six grains d'orge alignés \uwave{dans le sens de la longueur}. Une école de savants s'est mise à observer cela au temps d'al-Ma'm\=un et sous ses ordres\uwave{...} Ils ont trouvé qu'une portion d'un degré faisait cinquante-six milles et deux tiers de mille, si le mille mesure quatre mille coudées, la coudée vingt-quatre doigts, et le doigt six grains d'orge. J'ai certes enseigné qu'il y a eu parmi nos prédécesseurs des conventions différentes quant à la coudée, au mille et à la parasange (pas de différence quant au doigt)~: l'ancienne coudée mesure une coudée moderne et un tiers, et le mille vaut quatre-vingt-seize mille doigts pour les deux écoles, c'est-à-dire trois mille coudées anciennes, mais quatre mille coudées modernes. La différence pour les coudées est bien réelle, mais pour les milles elle n'est que verbale~; quant à la parasange, il n'y a pas de différence puisqu'elle vaut trois milles chez les anciens comme chez les modernes, et une parasange fait douze mille coudées chez les modernes~; ainsi la différence dont j'ai parlé, dans la détermination du degré, entre les anciens et les modernes, est bien une différence réelle et résulte d'un écart des procédés. La grandeur de la différence est dix milles. Ceci étant admis, sache qu'un degré compte, pour les anciens, vingt-deux parasanges et deux neuvièmes de parasange, et pour les modernes, dix-huit parasanges et huit neuvièmes de parasange. Et la mesure selon les anciens est en accord avec les Grecs. Multiplions une portion d'un degré par trois cent soixante (c'est-à-dire le nombre de parts que compte la circonférence terrestre), on obtient la grandeur de la circonférence terrestre~: huit mille parasanges selon les anciens, six mille huit cents parasanges selon les modernes (la différence est de mille deux cents parasanges). 
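Récapitulons ces valeurs à titre de contrôle, la parasange valant trois milles~:
$$\frac{66+\tfrac{2}{3}}{3}=22+\tfrac{2}{9},\qquad \frac{56+\tfrac{2}{3}}{3}=18+\tfrac{8}{9},\qquad \left(66+\tfrac{2}{3}\right)-\left(56+\tfrac{2}{3}\right)=10 \text{ milles},$$
$$360\times\left(22+\tfrac{2}{9}\right)=8000,\qquad 360\times\left(18+\tfrac{8}{9}\right)=6800,\qquad 8000-6800=1200 \text{ parasanges}.$$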
\newpage\phantomsection \index{AMBHAW@\RL{.hw.t}!BEAMBJAW@\RL{m.hI.t}, contour, périmètre} \index{ASBHAM@\RL{sw.h}!BEASAGAMAI@\RL{masA.haT al-basI.t / al-jarim}, mesure d'une surface / d'un volume} \index{ALAQBE@\RL{jrm}!ALAQBE@\RL{jarm j 'ajrAm}, corps, volume|see{\RL{masA.haT}}} \index{ASBHAM@\RL{sw.h}!BEASAGAMAI@\RL{masA.haT al-basI.t / al-jarim}, mesure d'une surface / d'un volume} \index{AHASAW@\RL{bs.t}!AHASBJAW@\RL{basI.t}, plan, surface|see{\RL{masA.haT}}} \index{ASBCBF@\RL{skn}!BEASBCBHBF@\RL{al-maskUn}, la partie habitée (du globe)} \includepdf[pages=119,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Si l'on divise le périmètre par trois et un septième\footnote{Ici, {Ibn al-\v{S}\=a\d{t}ir } prend donc $\pi\simeq 3+\dfrac{1}{7}$.}, on obtient le diamètre du globe terrestre (avec l'eau)~: deux mille cinq cent quarante-cinq parasanges et cinq onzièmes de parasange selon les anciens, deux mille cent soixante-trois parasanges et sept onzièmes de parasange selon les modernes. Si l'on multiplie le diamètre par la circonférence, on obtient la surface du globe terrestre (avec l'eau)~: 20363636 parasanges et quatre onzièmes de parasange selon les anciens, 14712727 parasanges et trois onzièmes de parasange selon les modernes. On dit que la partie habitée est un quart du globe terrestre~: une demi-circonférence suivant les longitudes et un quart de circonférence suivant les latitudes. Le volume du globe terrestre entier avec l'eau est, selon les anciens, 8630116345 parasanges cubes et vingt-et-un cent-vingt-et-unièmes et deux tiers de cent-vingt-et-unième de parasange cube (ces fractions font environ un sixième)~; selon les modernes, c'est 5033856498 parasanges cubes et quarante-deux cent-vingt-et-unièmes de parasange cube\footnote{Ces deux mesures de volume ne sont pas parfaitement exactes si l'on adopte les valeurs données ci-dessus pour la surface et le diamètre du globe. Si, comme pour les anciens, la longueur du méridien est 8000 parasanges, on devrait avoir~: $$\text{volume de la Terre} = \frac{\text{surface}\times\text{diamètre}}{6} = 8639118457+\frac{36+\dfrac{1}{3}}{121} \text{ parasanges cubes}$$ Si la longueur du méridien est 6800 parasanges, on devrait avoir un volume de ${5305498622+\dfrac{71+\dfrac{1}{3}}{121}}$ parasanges cubes.}. Si tu veux connaître cela en coudées, alors multiplies le nombre de coudées cubes d'une parasange par le nombre de parasanges du volume. Si tu veux connaître la surface de la Terre en coudées, alors multiplies le nombre de coudées carrées d'une parasange par le nombre de parasanges de la surface de la Terre~; que ce soit selon les anciens ou selon les modernes, on obtient ainsi la surface de la Terre en coudées carrées (la coudée carrée des anciens ou bien celle des modernes). C'est ce que nous voulions montrer. 
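À titre de contrôle, avec $\pi\simeq\dfrac{22}{7}$ comme ci-dessus~:
$$\frac{8000}{22/7}=2545+\tfrac{5}{11},\qquad \frac{6800}{22/7}=2163+\tfrac{7}{11},$$
$$8000\times\left(2545+\tfrac{5}{11}\right)=20363636+\tfrac{4}{11},\qquad 6800\times\left(2163+\tfrac{7}{11}\right)=14712727+\tfrac{3}{11}~;$$
le calcul correspondant du volume est examiné dans la note ci-dessus.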
\newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BFAXAQ@\RL{n.zr}!ANAJBDAGBA BEBFAXAQ@\RL{i_htilAf al-man.zr}, parallaxe} \index{AQAUAO@\RL{r.sd}!AEAQAUAGAO AUAMBJAMAI@\RL{al-'ar.sAd al-.sa.hI.haT}, les observations} \index{BBAWAQ@\RL{q.tr}!BFAUBA BBAWAQ BD-AJAOBHBJAQ BD-BEAQAEBJBJ@\RL{n.sf q.tr al-tadwIr al-mar'iyy}, rayon de l'épicycle apparent} \index{BBBEAQ@\RL{qmr}!BBBEAQ@\RL{qamar}, Lune} \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \index{BBAWAQ@\RL{q.tr}!BFAUBA BBAWAQ ATBEAS BBBEAQ@\RL{ni.sf qi.tr al-^sams / al-qamar}, rayon apparent du Soleil / de la Lune} \addcontentsline{toc}{chapter}{I.31 Conclusion~: distances et [grandeurs] des corps} \label{conclusion} \includepdf[pages=120,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \Large Conclusion \large Distances et [grandeurs] des corps, en quatre sections \end{center} \begin{center} \large Première section \normalsize Correction des distances des deux luminaires au centre de la Terre \end{center} Ptolémée avait indiqué dans l'\emph{Almageste} que le rayon de l'orbe incliné de la Lune était cinquante-neuf fois le rayon de la Terre -- et c'est la distance maximale du centre de l'épicycle. Des chercheurs en cet art ont corrigé ceci pendant les longs intervalles de temps les séparant de Ptolémée, et en suivant des voies plus précises que les siennes~: ils ont trouvé qu'il y avait un défaut dans la valeur de la parallaxe utilisée par Ptolémée, or ils ont évité fortement ce défaut en supposant que le rayon du parécliptique faisait cinquante-huit fois le rayon de la Terre. \`A partir de là, ils ont trouvé une parallaxe en accord avec l'observation. Ils ont répété cela~: ils ont eu raison, et nous nous sommes [ensuite] appuyés [sur ce résultat]. Nous avons déja montré que le rayon de l'épicycle apparent pendant les oppositions et les conjonctions est cinq parts et un sixième, en parts telles que le rayon du parécliptique en compte soixante. Cela fait cinq parts, en parts telles que son rayon en compte cinquante-huit. Si l'on ajoute cela au rayon du parécliptique, on obtient soixante-trois parts, distance maximale de la Lune au centre de la Terre pendant les conjonctions et les oppositions. Il est déjà démontré que le diamètre [apparent] de la Lune à cette distance de soixante-trois diamètres terrestres (c'est la distance maximale pendant les conjonctions et les oppositions) est $0;30,18$. \label{diam_sol2}De nombreuses observations ont révélé que le diamètre du Soleil à distance maximale est $0;29,5$, à distance moyenne $0;32,32$, et à distance minimale $0;36,54,41$. Nous avons déjà indiqué dans la configuration des orbes du Soleil que sa distance minimale est $52;53$, sa distance moyenne $60$, et sa distance maximale $67;7$. Si nous divisons son diamètre à distance moyenne par la distance du Soleil, on obtient le diamètre du Soleil à cette distance~; et si nous divisons le diamètre du Soleil à distance moyenne par son diamètre à un instant donné, on obtient la distance du Soleil\footnote{Autrement dit, la distance du Soleil et son diamètre apparent sont inversement proportionnels~: $\dfrac{\text{diamètre apparent du Soleil}}{60}=\dfrac{0;32,32}{\text{distance du Soleil}}$.}. 
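On peut contrôler la cohérence de ces données au moyen de la proportionnalité inverse rappelée dans la note qui précède~: le diamètre apparent à une distance donnée s'obtient en multipliant $0;32,32$ par soixante puis en divisant par cette distance. Ainsi, aux secondes près,
$$\frac{0;32,32\times 60}{67;7}\simeq 0;29,5\qquad\text{et}\qquad \frac{0;32,32\times 60}{52;53}\simeq 0;36,55,$$
ce qui retrouve les diamètres donnés aux distances maximale et minimale.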
\newpage\phantomsection \index{AXBDBD@\RL{.zll}!BBAWAQ AXBDBD@\RL{qi.tr al-.zill}, diamètre de l'ombre} \index{AXBDBD@\RL{.zll}!BEANAQBHAW AXBDBD@\RL{ma_hrU.t al-.zill}, cône d'ombre} \index{ANASBA@\RL{_hsf}!ANASBHBA@\RL{_husUf}, éclipse de Lune} \includepdf[pages=121,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection D'après ce que nous avons dit, on peut vérifier que si nous divisons le diamètre du Soleil à distance moyenne, $0;32,32$, par le diamètre de la Lune à distance maximale, $0;30,18$, il sort $0;62,30$ comme distance du Soleil\footnote{Il y a erreur~: $\dfrac{0;32,32}{0;30,18}=0;64,25,20$. Hélas la valeur $0;62,30$ sera utilisée dans la suite du texte.} au centre de l'écliptique à l'instant où le rayon de la Lune est égal au rayon du Soleil quand la Lune est à distance maximale lors des conjonctions et des oppositions. Ceci étant admis, voyons le tracé de la figure en forme de sapin. Soit AB le cercle représentant le Soleil, et C son centre. Soit DE le cercle représentant la Terre, et R son centre. Soit ANB le cône qui la contient, et RC sa hauteur. Soit JIH le cercle représentant la Lune, et I son centre. Soit ARB le cône contenant la Lune et le Soleil. Reportons la longueur RI en RL, et traçons les perpendiculaires CB, JI, RE et LM à la hauteur de ce cône c'est-à-dire la droite NC. Si on les prolonge de l'autre côté [de NC], on obtient les diamètres de ces sphères. Sache que les droites passant par les points de tangence avec chacun de ces cercles ne sont pas leurs diamètres, mais ils en diffèrent peu car la distance est longue. L'angle ARB contient le Soleil et la Lune à l'instant où la distance [de la Lune] à la Terre est soixante-trois. Nous avons dit que le diamètre de la Lune dans cette position est $0;30,18$, et pour sa moitié IJ c'est l'angle IRJ~; or le rapport de RI (soixante-trois) à IJ est comme le rapport du sinus de l'angle JIR au sinus de l'angle IRJ~; donc IJ est connu. Voici le calcul. Prenons le sinus de l'angle IRJ, c'est-à-dire du rayon de la Lune, c'est $0;0,15,51,54$. Multiplions-le par RI, c'est-à-dire soixante-trois. On obtient $0;16,39,29,42$, longueur de la droite IJ, et c'est le rayon de la Lune si le rayon terrestre est l'unité. Multiplions-le par le coefficient de l'ombre qui vaut à cette distance, comme je l'ai vérifié par de nombreuses observations, deux et quarante-trois minutes. On obtient le rayon de l'ombre\footnote{En effectuant ce calcul, on trouve que le rayon de l'ombre vaut $0;45,15,17,41,6$ rayon terrestre~; hélas, la suite révèle que {Ibn al-\v{S}\=a\d{t}ir} a mené le calcul en supposant que le rayon de l'ombre est $0;45,30,19,41$.}. 
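Contrôlons le produit donnant le rayon de la Lune~: $0;0,15,51,54$ est bien, à très peu près, le sinus de $0;15,9$ degré (la moitié de $0;30,18$), et
$$63\times 0;0,15,51,54 = 0;16,39,29,42$$
exactement, soit environ $0{,}278$ rayon terrestre.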
La droite LR est comme la droite RI, et les droites LM, RE, IK sont perpendiculaires à la droite NC donc elles sont parallèles entre elles~; ainsi la somme des deux droites LM et IK fait le double de RE~; mais RE est le rayon de la Terre et son double fait cent vingt minutes~; si l'on en retranche la somme du rayon de l'ombre et du rayon de la Lune, $62;9,49,23$, on obtient la droite JK qui fait $0;57,50,10,37$.\footnote{Deuxième erreur~: en refaisant les calculs à partir de la valeur de rayon de l'ombre mentionnée dans la note précédente, on a trouvé $0;58,5,12,36,54$.} \newpage\phantomsection \includepdf[pages=122,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection La droite RE est parallèle à la droite JK donc le rapport de RE à JK est comme le rapport de BR à BJ et comme le rapport de RC à CI~; alors si RC valait soixante minutes, CI (distance entre les centres des deux luminaires) vaudrait $57;50,10,37$~; reste RI (distance du centre de la Terre à la Lune) égal à $0;2,9,49,23$~; et le rapport de ceci aux soixante minutes de RC est comme le rapport de RI (distance du centre de la Terre à la Lune), soixante-trois, à RC qui est la distance du Soleil~; donc la distance du Soleil est connue. Voici le calcul. Divisons la distance de la Lune, soixante-trois, par $0;2,9,49,23$, il sort $29;7$. Multiplions-le par soixante, on obtient 1747, distance du Soleil au centre de la Terre à l'instant où son diamètre est comme le diamètre de la Lune~; or c'est soixante-deux et demi~; si nous multiplions la distance du Soleil ci-dessus par soixante et que nous divisons le résultat par soixante-deux et demi, on obtient 1677, sept minutes et un cinquième de minute, et c'est la distance moyenne du Soleil si le rayon terrestre est l'unité.\footnote{Ce calcul de la distance Terre-Soleil comprend donc deux erreurs~: l'estimation du rayon de l'ombre et la valeur $0;62,30$. Si l'on corrige ces deux erreurs, la distance moyenne du Soleil est 1840 rayons terrestres. Mais il est peu sensé de ``corriger'' ainsi les erreurs de calcul de l'auteur, car ce procédé pour calculer la distance Terre-Soleil est très sensible à la précision des données de l'observation. Notons $u$ le sinus du rayon apparent de la Lune à distance maximale, et $C$ le ``coefficient de l'ombre'', $C=2;43$. La distance Terre-Soleil lors de syzygies où les deux rayons apparents sont égaux est, selon {Ibn al-\v{S}\=a\d{t}ir}~: $$\frac{63}{63(1+C)u-1}.$$ Comme le dénominateur (c'est RI dans le texte) est de l'ordre de $0;2$, on voit aisément qu'une erreur relative de $1 \%$ sur $u$ ou sur $C$ conduit à une erreur relative de $30 \%$ sur la distance Terre-Soleil~!\label{sensibilite}} Divisons-le par soixante. Le quotient de la division est $27;57,7,12$, c'est une portion d'un degré du rayon du parécliptique, c'est-à-dire que chaque degré du rayon du parécliptique fait cette grandeur. On a montré que la distance maximale du Soleil faisait $7;7$ de plus que sa distance moyenne, donc si nous multiplions la portion d'un degré par $7;7$ (on obtient 198 $55;30,14,24$) et qu'on l'ajoute à la distance moyenne du Soleil, on obtient la distance maximale du Soleil égale à 1876 $2;42$~; et si l'on retranche cela de la distance moyenne, il reste la distance minimale du Soleil qui est 1478 et un cinquième. Le rapport de JI à IR est comme le rapport de CB à RC donc CB est connu. 
Pour le calculer, multiplions $29;7$ par le rayon de la Lune $0;16,39,29,42$ puis divisons le résultat par la distance de la Lune c'est-à-dire soixante-trois, on obtient $7;42,53$ et c'est le rayon du Soleil si le rayon de la Terre est l'unité\footnote{Il y a une erreur de calcul ou de copie : on trouve $7;41,56$ et non $7;42,53$.}.\label{obs_diam_sol} \newpage\phantomsection \index{ASBHAM@\RL{sw.h}!BEASAGAMAI@\RL{masA.haT al-basI.t / al-jarim}, mesure d'une surface / d'un volume} \includepdf[pages=123,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent On vient de montrer que le rayon du Soleil est sept fois et deux tiers de fois et un vingtième de fois le rayon de la Terre~; le cube de ceci est 459 $8;6,47,46,25,17$~; or on sait que le rapport de la boule à la boule est comme le rapport du cube du diamètre au cube du diamètre~; donc le volume du Soleil mesure le volume de la Terre quatre cent cinquante-neuf fois et deux tiers de cinquième de fois environ. Pour calculer la distance du sommet du cône d'ombre de la Terre, traçons la droite LS parallèle à NE~; alors le triangle RNE est semblable au triangle LRS donc le rapport de NR à LR est comme le rapport de ER à RS, donc RN est connu. Voici le calcul. La droite LM est comme la droite ES, et la droite ER qui est le rayon de la Terre vaut soixante minutes~; donc si l'on en retranche $0;45,30,19,41$, il reste $0;14,29,40,19$ et c'est la droite SR. Divisons par cela la distance de la Lune qui vaut soixante-trois, on trouve $260;47,18$ et c'est la distance du sommet du cône d'ombre de la Terre au centre de la Terre quand le Soleil est à la distance mentionnée ci-dessus. Cela augmente quand le Soleil s'éloigne du centre de la Terre, et cela diminue quand il se rapproche. \newpage\phantomsection \index{ALASBE@\RL{jsm}!ALASBE@\RL{jsm}, corps, solide} \includepdf[pages=124,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \normalsize Section \end{center} Si nous doublons le rayon de la Lune ($0;16,39,29,42$) on obtient $0;33,18,59,24$ qu'on arrondit à $0;33,19$. Si nous divisons le diamètre de la Terre (cent vingt minutes) par cela, on obtient $3;36,6$, et le diamètre du globe terrestre est donc trois fois et trois cinquièmes de fois le diamètre de la Lune (plus des fractions très petites). Si nous divisons le diamètre du Soleil par le diamètre de la Lune, on trouve vingt-sept et quatre cinquièmes~; donc le diamètre du Soleil est vingt-sept fois et quatre cinquièmes de fois le diamètre de la Lune. Si l'on pose que le diamètre de la Lune est une unité, alors son cube est une unité, le cube formé sur le diamètre de la Terre (trois et trois cinquièmes) est quarante-six et deux-tiers, et le cube formé sur vingt-sept et quatre cinquièmes est vingt-et-un mille quatre cent quatre-vingt-un et un quart environ. Cela implique que, si le corps de la Lune est l'unité, alors le corps de la Terre est quarante-six et deux tiers, et le corps du Soleil est vingt-et-un mille quatre cent quatre-vingt-un et un quart. Le corps du Soleil est quatre cent cinquante-neuf fois et un quinzième de fois le corps de la Terre environ. Ces grandeurs ni ces distances ne peuvent être moins que ce que nous avons dit~; mais elles pourraient être plus grandes. \newpage\phantomsection \begin{figure}[h!] 
\begin{center} \tiny \input{sapin_fr.pdf_tex} \end{center} \end{figure} \newpage\phantomsection \includepdf[pages=125,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ALAQBE@\RL{jrm}!ALAQBE@\RL{jarm j 'ajrAm}, corps, volume|see{\RL{masA.haT}}} \index{ASBHAM@\RL{sw.h}!BEASAGAMAI@\RL{masA.haT al-basI.t / al-jarim}, mesure d'une surface / d'un volume} \index{AXBDBD@\RL{.zll}!BBAWAQ AXBDBD@\RL{qi.tr al-.zill}, diamètre de l'ombre} \index{BHAUBD@\RL{w.sl}!BEAJAJAUBD AHAYAVBG AHAHAYAV@\RL{mutta.sil ba`.dh biba`.d}, contigus} \index{BBAWAQ@\RL{q.tr}!BFAUBA BBAWAQ ATBEAS BBBEAQ@\RL{ni.sf qi.tr al-^sams / al-qamar}, rayon apparent du Soleil / de la Lune} \includepdf[pages=126,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \normalsize Section \end{center} Pour Ptolémée, si le rayon de la Lune est l'unité, alors le rayon de la Terre est trois parts et deux cinquièmes de part, et le rayon du Soleil est dix-huit fois et quatre cinquièmes de fois comme le rayon de la Lune~; le rayon du Soleil est alors cinq fois et demi le rayon de la Terre, et son volume est cent soixante-dix fois le volume de la Terre. La distance maximale de la Lune est soixante-quatre et un sixième, et la distance moyenne du Soleil est mille deux cent vingt. \begin{center} \normalsize Remarque \end{center} La cause de ces différences de grandeurs est que nous différons de lui légèrement quant au diamètre de la Lune, au diamètre de l'ombre et au diamètre du Soleil. Chez lui, le diamètre de la Lune à distance maximale est $31;20$, chez nous c'est $30;18$. Nous différons de son diamètre d'une minute et deux secondes. Chez lui, le diamètre du Soleil est comme le diamètre de la Lune dans son chapitre concernant les grandeurs\footnote{\emph{L'Almageste}, livre 5, chapitre 14.}, et le rapport du diamètre de l'ombre de la Terre au diamètre de la Lune est chez lui deux fois et trois cinquièmes de fois, alors qu'il est chez nous deux fois et quarante-trois minutes. Des chercheurs en cet art ont mis à l'épreuve de manière répétée ce en quoi nous différons de lui, et nous aussi~; et si nous utilisions sa valeur du rapport de l'ombre, la différence serait encore plus grande, donc la différence provient des diamètres du Soleil et de la Lune. Sache cela. \begin{center} \normalsize Remarque \end{center} Si nous retranchons de la distance minimale du Soleil le rayon du Soleil $7;43$ (avec pour unité le rayon terrestre), il reste $1470;29$ et c'est la distance minimale atteinte au sein des orbes du Soleil. Si nous ajoutons cela à sa distance maximale, on obtient $1883;46$, la distance maximale au sein de ses orbes~; en sus, il y a l'épaisseur du parécliptique, et en deçà, [ce qu'il faut] pour que les orbes soient contigus. \begin{center} \normalsize Remarque \end{center} Nous avons indiqué dans la configuration des orbes de la Lune que sa distance maximale pendant les conjonctions et les oppositions est $65;10$, que sa distance minimale est alors $54;50$, que sa distance maximale pendant les quadratures est $68;0$, et que sa distance minimale est alors $52;0$, toujours sous l'hypothèse que la distance moyenne est soixante. 
\newpage\phantomsection \index{AYAQAVBJ@\RL{al-mwyd al-`r.dI}, al-Mu'ayyad al-`Ur\d{d}{\=\i}} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{BABDAM@\RL{ibn afla.h}, Ibn Afla\d{h}} \index{BHAUBD@\RL{w.sl}!BEAJAJAUBD AHAYAVBG AHAHAYAV@\RL{mutta.sil ba`.dh biba`.d}, contigus} \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \includepdf[pages=127,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Si nous convertissons cela en parts telles que le rayon de la Terre en soit une seule, on trouve que sa distance minimale pendant les conjonctions et les oppositions est $53;0$, sa distance maximale est alors $63;0$, sa distance minimale pendant les quadratures est $50;16$, et sa distance maximale est alors $65;44$. Il faut ajouter à cette dernière distance le rayon de la Lune, $0;16,39,30$, on trouve $66;0,39,30$~; en sus, il y a l'épaisseur du parécliptique. Et si nous retranchons [de la distance minimale pendant les quadratures] le rayon de la Lune, il reste $49;59,21$~; en deçà, il y a [ce qu'il faut] pour que les orbes soient contigus. \begin{center} \large Deuxième section \normalsize Les distances de Mercure et Vénus au centre de la Terre (en rayons terrestres) \end{center} Nous avons montré dans la configuration des orbes de Mercure que le rapport de la distance minimale de ces orbes à leur distance maximale est comme le rapport de trente-et-un à quatre-vingt-neuf et demi\footnote{\textit{Cf.} cependant p.~\pageref{rapport_merc}, où le rapport est $\dfrac{31}{89}$.}, et que le rapport de la distance minimale des orbes de Vénus à leur distance maximale est comme le rapport de quatorze à cent six. Nous avons montré que l'orbe le plus distant de la Lune est son orbe incliné, et qu'il est à soixante-six rayons terrestres. En sus, il y a l'épaisseur du parécliptique que l'on pose d'un degré~; donc la distance maximale des orbes de la Lune est soixante-sept, et c'est aussi la distance minimale des orbes de Mercure. Si nous la multiplions par le rapport propre à Mercure, c'est-à-dire que nous la multiplions par quatre-vingt-neuf et demi, puis que nous divisons par trente-et-un, il en sort cent quatre-vingt-treize et un tiers et un dixième environ~: c'est la distance maximale des orbes de Mercure. Si nous multiplions cette dernière par le rapport propre à Vénus, c'est-à-dire que nous la multiplions par cent six, puis que nous divisons le résultat par quatorze, il en sort mille quatre cent soixante-quatre et quatre septièmes~: c'est la distance maximale des orbes de Vénus. Elle est inférieure à la distance minimale des orbes du Soleil, de cinq parts, deux tiers de part et un quart de part~: c'est [ce qu'il faut] pour que les orbes du Soleil soient contigus. Il est ainsi démontré que les lieux de Vénus et de Mercure sont sous le Soleil, contrairement à l'opinion de Mu'ayyad al-`Ur\d{d}{\=\i} et d'Ibn Afla\d{h}. 
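Contrôlons les produits en croix de cette section, avec la distance minimale du Soleil, $1470;29$, obtenue plus haut~:
$$\frac{67\times 89{,}5}{31}\simeq 193{,}4\simeq 193+\tfrac{1}{3}+\tfrac{1}{10},\qquad \frac{67\times 89{,}5\times 106}{31\times 14}\simeq 1464+\tfrac{4}{7},\qquad 1470;29-\left(1464+\tfrac{4}{7}\right)\simeq 5+\tfrac{2}{3}+\tfrac{1}{4}.$$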
\newpage\phantomsection \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \includepdf[pages=128,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Troisième section \normalsize Les distances des trois planètes supérieures au centre de la Terre (en rayons terrestres) \end{center} Nous avons montré dans la configuration des orbes des trois planètes [supérieures] que le rapport de la distance minimale des orbes de Mars à leur distance maximale est comme le rapport de huit à cent douze, que le rapport de la distance minimale des orbes de Jupiter à leur distance maximale est comme le rapport de quarante-deux à soixante-dix-huit, et que le rapport de la distance minimale des orbes de Saturne à leur distance maximale est comme le rapport de quarante-six à soixante-quatorze. On vient de voir que la distance maximale des orbes portant le Soleil est mille huit cent quatre-vingt-trois, deux tiers et un dixième, avec en sus l'épaisseur du parécliptique que nous supposons égale à $6;14$, et on atteint ainsi mille huit cent quatre-vingt-dix~: c'est aussi la distance minimale des orbes de Mars. Si nous multiplions cela par le numérateur du rapport propre à Mars, puis que nous divisons le produit par son dénominateur, on obtient vingt-six mille quatre cent soixante~: c'est la distance maximale des orbes de Mars. Si nous multiplions cela par le numérateur du rapport propre à Jupiter, puis que nous divisons le produit par son dénominateur, on obtient quarante-neuf mille cent quarante environ~: c'est la distance maximale des orbes de Jupiter. Si nous multiplions cela par le numérateur du rapport propre à Saturne, puis que nous divisons le produit par son dénominateur, on obtient soixante-dix-neuf mille cinquante-et-un, un sixième et un huitième\footnote{$49140\times \dfrac{74}{46}=79051+\dfrac{7}{23}\simeq 79051+\dfrac{7}{24} =79051+\dfrac{1}{6}+\dfrac{1}{8}$.}~: c'est la distance maximale des orbes de Saturne, et c'est aussi la distance minimale des étoiles fixes. Moindre que cela, c'est impossible~; mais cela pourrait être plus. Dieu est le plus savant. \newpage\phantomsection \index{AEAHAQANAS@\RL{'ibr_hs}, Hipparque} \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BBAWAQ@\RL{q.tr}!ACBBAWAGAQ BCBHAGBCAH@\RL{'aq.tAr al-kawAkib}, diamètres apparents des planètes} \index{AYAWAGAQAO@\RL{`u.tArid}, Mercure} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{ATAQBJ@\RL{^sry}!BEATAJAQBJ@\RL{al-mu^starI}, Jupiter} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{ARAMBD@\RL{z.hl}!ARAMBD@\RL{zu.hal}, Saturne} \includepdf[pages=129,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \large Quatrième section \normalsize Mesures des corps des cinq astres errants \end{center} Hipparque et Ptolémée ont observé les diamètres de ces planètes, et ils ont trouvé que le diamètre de Mercure est un quinzième du diamètre du Soleil, celui de Vénus un dixième, celui de Mars un vingtième, celui de Saturne un dix-huitième, et celui de Jupiter un douzième~; mais ils n'ont pas dit à quelles distances se rapportent leurs observations. Ptolémée les a adoptées dans les distances moyennes, mais parmi les chercheurs en cet art, les Modernes les ont adoptées dans les distances minimales, sauf pour Mercure où ils l'ont adoptée dans les distances moyennes. 
Ceci étant admis, si nous divisons la distance moyenne du Soleil par son diamètre, on obtient cent huit parts et demi et un cinquième de part~; gardons ce nombre de côté~; puis divisons la distance moyenne de Mercure c'est-à-dire 130 et treize minutes par le rapport du Soleil à son rayon c'est-à-dire quinze, on obtient huit parts et quarante minutes que l'on divise par le nombre qu'on a mis de côté, et on obtient quatre minutes et quatre cinquièmes de minute~: c'est le diamètre de Mercure si le diamètre de la Terre est l'unité. Si nous prenons de même le dixième de la distance moyenne de Vénus (huit cent vingt-neuf) et que nous le divisons par le nombre mis de côté, on obtient deux tiers de part et un dixième de part~: c'est le diamètre de Vénus si le diamètre de la Terre est l'unité. Quant à Mars, si nous supposons que ce rapport est le sien dans les distances moyennes, alors il devrait être à distance minimale plus grand que Vénus~; or ce n'est pas le cas, donc nous avons pris ce rapport comme étant le sien quand il est entre sa distance moyenne et sa distance minimale, c'est-à-dire quand sa distance à la Terre vaut huit mille trente-deux fois le rayon de la Terre\footnote{8032 est la moyenne entre la distance minimale du système d'orbes de Mars (1890) et sa distance moyenne à la Terre $\dfrac{1890+26460}{2}=14175$ rayons terrestres.}. Prenons un vingtième de ce nombre, divisons-le par cent huit, un demi et un cinquième, on obtient trois parts et deux tiers de part environ~: c'est le diamètre de Mars si le diamètre de la Terre est l'unité. Quant à Jupiter, si nous supposons que ce rapport est le sien dans les distances moyennes, alors divisons sa distance moyenne (trente-sept mille huit cents) par douze, on obtient trois mille cent cinquante~; divisons cela par cent huit, un demi et un cinquième, on obtient vingt-neuf environ~: le diamètre de Jupiter est donc vingt-neuf fois le diamètre de la Terre. \newpage\phantomsection \index{BBAOAQ@\RL{qdr}!BBAOAQ@\RL{qdr}!magnitude (d'une étoile)} \includepdf[pages=130,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Si nous supposons au contraire que ce rapport est le sien à distance minimale, c'est-à-dire à vingt-huit mille huit cent onze environ (car la planète Jupiter n'atteint pas la distance minimale de ses orbes qui est égale à la distance maximale de Mars), alors le diamètre de Jupiter est vingt-trois fois le diamètre de la Terre. Au cube, cela fait douze mille cent soixante-sept, et c'est le rapport de son volume au volume de la Terre~: son volume mesure douze mille cent soixante-sept fois le volume de la Terre environ. Quant à Saturne, supposons que le rapport mentionné ci-dessus est le sien à distance minimale, c'est-à-dire à la distance maximale de Jupiter à laquelle on ajoute l'orbe déférent de Saturne~: cette distance est quarante-neuf mille neuf cent soixante-huit\footnote{\emph{Erreur}~: à la fin du chapitre 12, {Ibn al-\v{S}\=a\d{t}ir} a montré que la distance minimale \emph{atteinte} par Saturne, rapportée au rayon de la ceinture de son orbe incliné, est $50;5$. En rayons terrestres, cela fait donc~: $\dfrac{49140+79051+\dfrac{1}{6}+\dfrac{1}{8}}{2}\times 0;50,5\simeq 53502$, et non $49968$.}. Divisons-la par le rapport du diamètre du Soleil au diamètre de Saturne (dix-huit), on obtient deux mille sept cent soixante-seize. 
Divisons cela par cent huit, un demi et un cinquième (le nombre que nous avions mis de côté), on trouve vingt-cinq, un tiers et un cinquième environ~: c'est le diamètre de Saturne si le diamètre de la Terre est l'unité. Quant aux étoiles fixes, celles qui sont de première magnitude ont pour diamètre un vingtième du diamètre du Soleil, celles qui sont de deuxième magnitude ont pour diamètre un vingt-deuxième, celles qui sont de troisième magnitude ont pour diamètre un vingt-quatrième, celles qui sont de quatrième magnitude ont pour diamètre un vingt-sixième, celles qui sont de cinquième magnitude ont pour diamètre un vingt-huitième, et celles qui sont de sixième magnitude ont pour diamètre un trentième du diamètre du Soleil s'il est à distance moyenne. Sache donc ces nombres, puis divise la distance maximale de Saturne (soixante-dix-neuf mille cinquante-et-un) par cent huit, un demi et un cinquième (le nombre qu'on avait mis de côté), on obtient sept cent vingt-sept et un quart~: on appelle cela la base. Si nous divisons ce nombre par vingt, on obtient le diamètre des étoiles de première magnitude ; si nous le divisons par vingt-deux, on obtient le diamètre de celles de deuxième grandeur ; si nous le divisons par vingt-quatre, on obtient le diamètre de celles de troisième magnitude ; si nous le divisons par vingt-six, on obtient le diamètre de celles de quatrième magnitude ; si nous le divisons par vingt-huit, on obtient le diamètre de celles de cinquième magnitude ; et si nous le divisons par trente, on obtient le diamètre de celles de sixième magnitude. \newpage\phantomsection \includepdf[pages=131,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Divisons donc par les nombres qu'on a mentionnés ; on trouve que le diamètre de celles de première magnitude est trente-six fois, un cinquième de fois et un sixième de fois [le diamètre de la Terre], que le diamètre de celles de deuxième magnitude est trente-trois et un quatorzième environ, que le diamètre de celles de troisième magnitude est trente, un cinquième et un sixième, que le diamètre de celles de quatrième magnitude est vingt-huit environ, que le diamètre de celles de cinquième magnitude est vingt-six environ, et que le diamètre de celles de sixième magnitude est vingt-quatre, un cinquième et un sixième. Si nous mettons au cube les diamètres des astres errants et des fixes, ceci sera rapporté à l'unité qu'est le cube du diamètre de la Terre. On a déjà traité le cas de la Lune et du Soleil. Quant à Mercure, son diamètre est quatre minutes et quatre cinquièmes de minutes, et le cube de ceci est une seconde, quarante-six tierces, quarante-cinq quartes, et sept quintes ; donc le rapport de son volume au volume de la Terre est comme le rapport de l'unité à deux mille vingt-cinq environ. Le diamètre de Vénus est quarante-six minutes ; le cube de ceci est vingt-sept minutes, [Vénus] est donc deux cinquièmes et un vingtième du globe terrestre. Nous avons montré que le diamètre de Mars est trois parts et deux tiers de part si le diamètre de la Terre est l'unité ; son cube est quarante-neuf, un sixième et un huitième ; le volume de Mars mesure donc quarante-neuf fois, un sixième et un huitième de fois la Terre environ. 
Le diamètre de Jupiter est vingt-neuf fois le diamètre de la Terre ; le cube de son diamètre est vingt-quatre mille trois cent quatre-vingt-neuf, et son volume mesure donc vingt-quatre mille trois cent quatre-vingt-neuf fois le volume de la Terre, à condition que le rapport du diamètre du Soleil à son diamètre soit pris quand [Jupiter] est à distance moyenne (pour la distance minimale, voir ce qui précède). Sache cela. Quant au diamètre de Saturne, il est vingt-cinq fois, un tiers et un quart de fois comme le diamètre de la Terre ; son cube est seize mille deux cent quatre-vingt-quatorze, un demi et un huitième environ ; donc le volume de Saturne mesure seize mille deux cent quatre-vingt-quatorze fois le volume de la Terre. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{BAAQASAN@\RL{frs_h}!BAAQASAN@\RL{farsa_h j frAs_h}, parasange (= 3 milles)} \includepdf[pages=132,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Quant à la grandeur des étoiles fixes, le diamètre de celles qui sont de première magnitude mesure trente-six fois, un cinquième de fois et un sixième de fois le diamètre de la Terre environ ; le cube de ceci est vingt-sept mille six cent cinquante-quatre arrondi ; et son volume est d'autant par rapport au volume de la Terre, c'est-à-dire quarante-sept mille six cent cinquante-quatre fois la Terre environ. Quant aux étoiles fixes les plus petites, de sixième magnitude, le diamètre de chacune d'entre elles est vingt-quatre fois, un cinquième et un sixième de fois le diamètre de la Terre ; le cube de ceci est quatorze mille quatre cent soixante-trois environ ; donc son volume est quatorze mille quatre cent soixante-trois fois le volume de la Terre. En vertu des fondements véritables, ces grandeurs ne peuvent être moindres, mais peut-être sont-elles encore plus grandes, et cela ne nous étonnerait point. La grandeur est à Dieu, il est le plus grand, sa puissance est immense, il est puissant sur toute chose. Tous les Anciens étaient d'accord quant à la portée des observations qui ont conduit à ces choses. De manière répétée, Ptolémée disait déjà que les distances ne peuvent être moindres que ce qu'il avait indiqué~; quant aux volumes, ils sont fonctions des distances, et si les distances sont supérieures alors les volumes le sont aussi~; cela n'est pas exhaustif dans cette espèce~; et il se contredit à propos de la configuration et des volumes des astres dans son livre intitulé \emph{Les Hypothèses planétaires}. Nous demandons le succès à Dieu le Très Haut. \begin{center} \normalsize Remarque \end{center} Pour toute sphère dont le diamètre est connu, on connaît ses grands cercles, ainsi que les portions de ses grands cercles, sa surface et son volume. Toute grandeur connue par rapport à une certaine grandeur, l'est aussi par rapport à tout ce par rapport à quoi cette grandeur-ci l'est. Par exemple, les distances des orbes et les volumes des astres sont connus par rapport à la grandeur du rayon de la Terre, et le rayon de la Terre est connu par rapport aux parasanges, aux milles, aux coudées, aux doigts et au grain d'orge ; ainsi les distances et les volumes sont connus aussi par rapport à ces grandeurs-ci. 
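Par exemple, la parasange valant douze mille coudées modernes (comme indiqué plus haut), on a
$$12000^2=144\,000\,000 \text{ coudées carrées par parasange carrée},\qquad 12000^3=1\,728\,000\,000\,000 \text{ coudées cubes par parasange cube},$$
nombres par lesquels il suffit de multiplier la surface et le volume exprimés en parasanges.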
\newpage\phantomsection \index{BABDBC@\RL{flk}!ACBBAQAH BBAQAH AJAGASAY@\RL{'aqrab qurb al-tAsi`}, rayon de la cavité du neuvième orbe} \index{BABDBC@\RL{flk}!BABDBC AJAGASAY@\RL{flk tAsi`}, neuvième orbe|see{\RL{flk m`ddl al-nhAr}}} \index{ASAQAY@\RL{sr`}!ASAQAYAI@\RL{sur`aT}, vitesse} \index{ARBEBF@\RL{zmn}!ARBEAGBF@\RL{'azmAn mu`addl al-nahAr}!temps équatoriaux} \includepdf[pages=133,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Sache que si nous ajoutons à la distance maximale des orbes de Saturne les diamètres des étoiles fixes, cela fait la distance maximale atteinte au sein des étoiles fixes ; il est possible que cela soit davantage, si Dieu le veut, mais il est impossible que cela soit moins, comme je l'ai enseigné. La distance maximale du huitième orbe ou la distance minimale du neuvième n'est pas moins que soixante-dix-neuf mille quatre-vingt-huit fois la grandeur du rayon de la Terre ; c'est son rayon, et si je le double puis que je multiplie cela par trois plus un septième, on obtient la circonférence de la cavité du neuvième orbe ; si nous divisons ensuite la circonférence par trois-cent-soixante, on obtient une portion d'un degré du bord de la cavité du neuvième orbe ; si nous divisons le degré par soixante, on en obtient une portion d'une minute d'arc. Nous avons multiplié le rayon de la partie convexe de la sphère des fixes (soixante-dix-neuf mille quatre-vingt-huit) par trois plus un septième, puis nous avons doublé le résultat, on obtient la circonférence d'un grand cercle situé à la surface du huitième orbe (qui coïncide avec la cavité du neuvième orbe)~: 497124 et quatre septièmes. Nous avons divisé cela par trois cent soixante, on obtient 1380, deux tiers et un quart : c'est une portion d'un degré d'un cercle de la cavité du neuvième orbe. Nous divisons cela par soixante, on obtient vingt-trois et des fractions (en rayons terrestres) : c'est une minute d'arc. Cette durée\label{quatre_secondes} [pendant laquelle l'orbe se meut d'une minute d'arc] suffit à un homme pour compter rapidement jusqu'à six~; le temps qu'il dise <<~un~>>, l'orbe se meut deux fois comme le diamètre de la Terre environ. Nous avons montré que quand le neuvième orbe se meut d'un degré, c'est-à-dire en un quinzième d'heure, alors il se meut d'une grandeur mesurant mille trois cent quatre-vingt fois, deux tiers de fois et un quart de fois le rayon de la Terre. S'il se meut d'une minute, alors il se meut d'une grandeur mesurant vingt-trois fois le rayon de la Terre. Cela concerne sa face concave~; quant à son épaisseur, personne ne la connaît sauf Dieu le Très Haut~; une portion d'un degré de sa face convexe est plus grande qu'une telle portion de sa face concave, mais personne ne sait de combien, sauf Dieu le Très Haut. \newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ARBGAQ@\RL{zhr}!ARBGAQAI@\RL{zuharaT}, Vénus} \index{AQBJAN@\RL{ry_h}!BEAQAQBJAN@\RL{marrI_h}, Mars} \index{BBAWAQ@\RL{q.tr}!ACBBAWAGAQ BCBHAGBCAH@\RL{'aq.tAr al-kawAkib}, diamètres apparents des planètes} \index{BFAXAQ@\RL{n.zr}!ANAJBDAGBA BEBFAXAQ@\RL{i_htilAf al-man.zr}, parallaxe} \index{AHAUAQ@\RL{b.sr}!AHAUAQ@\RL{ba.sar}, 1)~vision, 2)~observateur} \index{BFBHAQ@\RL{nwr}!BFBHAQ@\RL{nwr}, lumière} \index{BFBHAQ@\RL{nwr}!BFBJBJAQ@\RL{nayyir}, lumineux} \includepdf[pages=134,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \begin{center} \normalsize Remarque \end{center} Sache que l'apparence des diamètres de Vénus et de Mars n'est pas fonction de leurs distances. 
La distance maximale de chacune est le double de sa distance minimale~; or les diamètres devraient suivre les distances, c'est-à-dire que chacune de ces deux planètes devrait paraître avoir, à distance minimale, un diamètre double de son diamètre à distance maximale~; mais ce n'est pas ce qu'on voit. Ptolémée a dit~: <<~La raison pour laquelle la vision voit et croit que les grandeurs des corps ne sont pas dans le même rapport que leurs distances, c'est l'erreur qui entre dans l'observateur à cause de la parallaxe. On s'en aperçoit en toute chose vue par l'{\oe}il (il n'y a pas de différence entre choses de grandeurs différentes, avec la connaissance de leur diminution continuelle). On voit toute planète beaucoup plus proche de nous que son état en vérité, à cause du fait que la vision se réduit aux distances auxquelles elle s'est accoutumée et habituée. Nous avons montré que l'augmentation et la diminution subie par la grandeur est en raison de l'augmentation et de la diminution des distances, mais ce rapport est inférieur au rapport réel, à cause de la faiblesse de la vision, d'après ce que nous avons dit à propos de la distinction de la quantité de la différence pour chaque espèce de ce que nous avons mentionné.~>> Ceci est la réponse de Ptolémée. Nous pensons pouvoir analyser cela ainsi~: -- Quand une sphère se rapproche de l'observateur, ce qui en est visible est moindre que ce qui en est visible à distance. -- L'observateur ne peut percevoir les diamètres réels des corps qui ont de l'éclat à cause de la réception de la vision par les rayons lumineux. -- Avec la distance, l'observateur ne voit pas les choses comme elles sont. Expliquons cela. Prenons deux corps sphériques, le diamètre de l'un étant double du diamètre de l'autre, et regardons-les de loin. L'{\oe}il ne pourra percevoir la grandeur réelle de leurs diamètres, ni ne pourra voir que le diamètre de l'un est double du diamètre de l'autre, ni ne pourra le vérifier par l'aide d'aucun instrument, même s'il sait que le diamètre de l'un est double du diamètre de l'autre. \newpage\phantomsection \index{AVBHAB@\RL{.daw'}!AVBHAB@\RL{.daw'}, clarté} \index{BCAKBA@\RL{k_tf}!BCAKAGBAAI@\RL{ki_tAfaT}, opacité} \index{AYBCAS@\RL{`ks}!AEBFAYBCAGAS@\RL{in`ikAs}, reflet} \index{ATAYAY@\RL{^s``}!ATAYAGAY@\RL{^su`A`}, rayon (de lumière)} \includepdf[pages=135,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection \noindent Que l'observateur estime la grandeur de l'un des diamètres par rapport à l'autre~; supposons que l'un des deux reste à sa place et que l'autre se déplace au double de sa distance~; l'observateur ne pourra plus admettre la différence qu'il avait estimée. D'autre part, si le regard se porte sur la lumière d'une lampe à distance donnée et que le regard imagine qu'elle est d'une certaine grandeur, puis qu'on s'approche [de cette lampe] à mi-distance ou qu'on s'en éloigne d'autant, alors le regard n'aura pas l'impression d'un tel changement de distance. \`A partir de là et de ce qui précède, on a la réponse à la question indépendamment de l'incapacité de la vision à saisir la grandeur des choses dans leur état. Nous affirmons que si l'{\oe}il se porte sur Paul et Pierre, il ne peut déterminer lequel des deux est plus grand (même par la distance), ni ne peut dire de quelle grandeur l'un est plus grand que l'autre~; ceci, même s'il n'y a aucune distance entre eux deux. Il est donc clair que la distance maximale de Mars est vingt-six mille quatre cent soixante comparée à la grandeur du rayon de la Terre. 
La science de la vision échoue pour les grandeurs à des distances bien moindres que cela ; de plus, elle répugne à la détermination des grandeurs même si nous percevons que les diamètres des astres diffèrent aux autres distances et que nous le sentons (sauf que cela ne dépend pas du rapport des distances comme nous l'avons expliqué). De là, on comprend que les astres sont des corps qui ont un éclat et que leur lumière leur est propre, non comme la Lune qui est un corps opaque dont la lumière vient du reflet des rayons du Soleil à sa surface. Pour cette raison, l'estimation du diamètre [de la Lune] aux autres distances de la Terre est possible, mais non celle des diamètres des autres astres. \newpage\phantomsection \index{ALAOBD@\RL{jdl}!ALAOBHBD@\RL{jdwl}, table, tableau, catalogue} \includepdf[pages=136,pagecommand={\thispagestyle{plain}}]{edit.pdf}\phantomsection Ceci achève ce que nous voulions produire dans cette première partie. Va suivre une deuxième partie sur la configuration de la Terre et ce qui en dépend. Nous continuerons cela par le calcul des tables des équations des astres selon ces fondements dans une troisième \label{troisieme1} partie si Dieu le veut. On espère que le lecteur de ce livre ne rejettera pas ce qui n'est pas parfaitement connu, qu'il empruntera le chemin de la vérité et de la raison, et qu'il échappera à la voie de l'entêtement. Le bien-fondé de ce que nous avons produit et la clarté de la voie que nous avons empruntée paraîtra à qui suivra la voie de la vérité et confrontera ces fondements avec les Anciens et les Modernes. Merci à Dieu, qu'Il soit notre refuge, nous nous en remettons à Lui. \newpage\phantomsection \index{ASBHBI@\RL{sw_A}!ANAWAW ASAJBHAGAB@\RL{_ha.t.t al-istiwA'}, équateur terrestre} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{AYAOBD@\RL{`dl}!BEAYAOAOBD BFBGAGAQ@\RL{m`ddl al-nhAr}, équateur, plan de l'équateur} \index{ASBCBF@\RL{skn}!BEASBCBHBF@\RL{al-maskUn}, la partie habitée (du globe)} \index{BGBJAC@\RL{haya'a}!BGBJACAI@\RL{hI'aT}, configuration, astronomie} \index{AOBHAQ@\RL{dwr}!ASAJAOAGAQAI@\RL{istidAraT}, sphéricité (de la Terre, des cieux)} \index{AHAMAQ@\RL{b.hr}!AHAMAQ@\RL{b.hr}, mer} \index{BFBGAQ@\RL{nhr}!BBBHAS BFBGAGAQ@\RL{qws al-nhAr}, arc diurne, durée du jour} \includepdf[pages=1,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \addcontentsline{toc}{chapter}{II.1 L'ensemble de la configuration de la Terre et ce qui l'entoure} \begin{center} \Large Au nom de Dieu clément et miséricordieux~; j'implore son secours \LARGE Seconde partie de l'Achèvement de l'enquête \normalsize La configuration de la Terre et de ce qui en dépend concernant la variation de position des corps célestes. En dix sections. \Large Première section \normalsize L'ensemble de la configuration de la Terre et ce qui l'entoure. \end{center} \noindent Nous avons déjà dit au début de la première partie que la Terre, globalement, est ronde, qu'un individu debout où que ce soit, a la tête vers la périphérie (c'est-à-dire vers le haut) et les pieds vers le centre (c'est-à-dire vers le bas), et que la surface convexe de la Terre est parallèle à la surface concave de l'orbe qui l'entoure. Le sommet de la tête d'un voyageur, sur Terre, indique sans cesse une portion différente de cet orbe. 
Supposons que trois personnes se séparent en un lieu, que l'un se dirige vers l'Ouest, l'autre vers l'Est, et que le troisième demeure immobile jusqu'à ce que les deux autres aient fait un tour et reviennent à lui~; on trouve que les jours mis par le voyageur allant vers l'Ouest sont en nombre inférieur aux jours de celui qui est immobile, et que les jours mis par le voyageur allant vers l'Est sont en nombre supérieur aux siens. En effet le voyageur allant vers l'Ouest retranche [quelque chose] des circonférences à cause de son voyage, et celui qui va vers l'Est y ajoute [quelque chose] à cause de son voyage. La durée du jour pour celui qui va vers l'Ouest est plus grande que pour celui qui est immobile, à raison de son mouvement~; et la durée du jour pour celui qui est immobile est plus grande que pour celui qui va vers l'Est, à raison du mouvement de celui-ci. La quantité ajoutée ou retranchée est d'une journée, donc si l'on dit à chacun des trois ``quel jour de la semaine est-on ?'' et que celui qui est immobile répond vendredi, alors celui qui va vers l'Ouest dira jeudi, et celui qui va vers l'Est dira samedi. Si l'on imagine que le plan de l'équateur coupe le Monde, il coupe la Terre en deux moitiés Nord et Sud, et cette coupure s'appelle ligne de l'équateur. Imaginons encore un autre grand cercle passant par les pôles de l'équateur~; il coupe la Terre en deux moitiés dessus comme dessous. Ils divisent donc la Terre en quatre~; deux quarts au Nord et deux quarts au Sud. L'un des quarts Nord, le plus élevé, est le quart habité. Le reste est soit en grande partie sous les mers, soit inconnu. \newpage\phantomsection \index{AMAQAQ@\RL{.harara}!AMAQAGAQAI@\RL{.hrAraT}, chaleur} \index{AHAMAQ@\RL{b.hr}!AHAMAQ@\RL{b.hr}, mer} \index{ARBFAL@\RL{znj}!ARBFAL@\RL{znj}, Noirs} \index{AMAHAT@\RL{.hb^s}!AMAHATAI@\RL{.hb^saT}, \'Ethiopie} \index{BFBJBD@\RL{nIl}, Nil} \index{BEAUAQ@\RL{mi.sr}, \'Egypte} \index{BBBJAS@\RL{qys}!BEBBBJAGAS@\RL{miqyAs j maqAyIs}, gnomon} \index{AYBEAQ@\RL{`mr}!BEAYBEBHAQ@\RL{al-ma`mUr, al-`imAraT}, partie cultivée de la Terre~; par extension, le quart habité (\RL{maskUn})} \index{AHAQAO@\RL{brd}!AHAQAO@\RL{brd}, froid} \index{BDBHBF@\RL{lwn}, couleur} \index{ASBCBF@\RL{skn}!BEASBCBHBF@\RL{al-maskUn}, la partie habitée (du globe)} \index{ANASBA@\RL{_hsf}!ANASBHBA@\RL{_husUf}, éclipse de Lune} \index{AMAQBB@\RL{.hrq}!AWAQBJBBAI BEAMAJAQBBAI@\RL{.tarIqaT mu.htariqaT}, routes brûlantes} \index{BBBEAQ@\RL{qmr}!BBBEAQ@\RL{qamar}, Lune} \index{ALAHBD@\RL{jbl}!ALAHBD@\RL{jabal j jibAl}, montagne} \index{AWBHASBJ@\RL{n.sIr al-dIn al-.tUsI}, Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}} \includepdf[pages=2,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \uwave{La science ... est propre à ...} Certains spécialistes de cette science disent que les contrées du Sud, parce qu'elles sont proches du périgée solaire, sont très chaudes~; leur grande chaleur empêche toute culture. Si c'en était la cause, les contrées du Sud dont la latitude dépasse l'inclinaison maximale ne seraient pas incultes, et certaines contrées du Nord seraient inappropriées à la culture, bien que d'autres ne le sont pas, à latitude et longitude égale. Mais selon certains de ces spécialistes, la couleur de ceux qui en sont proches montre la durée pendant laquelle le périgée est au Sud de l'écliptique, pour ces contrées très chaudes. 
La chaleur attire l'humidité, comme le montre la flamme d'une lampe quand elle attire l'huile ; ainsi les mers seraient-elles entraînées vers la moitié Sud, et la moitié Nord de la Terre serait-elle laissée à découvert ? L'existence de mers au Nord empêche ce jugement. Selon certains, celles de ces contrées qui ne sont pas habitées sont celles où vont se coucher les luminaires, et on les appelle ``routes brûlantes'' ; mais c'est un conte invraisemblable. L'auteur de la \emph{Ta\b{d}kira} a dit : si la raison n'en était la Providence, pas un des deux quarts Nord ne serait approprié à la culture. Mais pour atteindre la science de ce qui rend ce quart approprié à la culture, on sait que la partie habitée est le quart le plus élevé, dans la moitié Nord, parce qu'en longitude, il n'existe pas, aux deux extrémités de la partie habitée, de lieux où la limite de l'avancée des éclipses et de leur retard dépasse douze heures, et parce qu'en latitude, il n'existe nulle part, dans toutes les contrées habitées, où l'extrémité de l'ombre du gnomon soit côté Sud à midi des équinoxes, sauf dans quelques lieux aux confins des pays des Noirs, de l'Ethiopie, etc. Les premières latitudes habitées au Sud sont là où le pôle Sud est haut de $16;25$ degrés, ce sont les lieux qu'on vient de mentionner. Les dernières au Nord sont là où le pôle Nord est haut de 66 degrés ; il est impossible de vivre au delà à cause du froid dû à l'éloignement du Soleil par rapport au zénith là-bas. Donc les latitudes habitées mesurent $82;25$ degrés ; les longitudes, $177;15$ degrés. La mer entoure de presque tout côté la part habitable. Le côté Ouest, le côté Nord, et presque tout le Sud, notamment le Sud-Est, sont connus. Le Sud-Ouest n'est pas connu. Il est dit que des voyageurs, en haut du cours du Nil en Egypte, sont arrivés en des lieux dont la latitude Sud dépasse quelque dix degrés, et qu'ils ont vu les montagnes blanches, comparables à la Lune, où sont les sources du Nil ; même plus loin au Sud, ils n'ont pas atteint de mer. Nous n'avons pas non plus de témoignage au sujet d'une mer au Nord-Est. Dans la partie découverte et habitée, il y a de nombreuses mers.
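
L'étendue en latitude de la partie habitée, donnée ci-dessus, résulte simplement de l'addition des deux limites, méridionale et septentrionale :
\[
16;25 + 66 = 82;25 \ \mbox{degrés}.
\]
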
\newpage\phantomsection \index{AHAWBDBEBJBHAS@\RL{b.talimayUs}, Ptolémée} \index{ASBHBI@\RL{sw_A}!ANAWAW ASAJBHAGAB@\RL{_ha.t.t al-istiwA'}, équateur terrestre} \index{ALBEBGAQ@\RL{jmhr}!ALBEBGBHAQ@\RL{al-jumhUr}, les Grecs} \index{ACBBBDBE@\RL{'aqlm}!AEBBBDBJBE@\RL{'iqlIm j 'aqAlIm}, climat} \index{ANBDAL@\RL{_hlj}!ANBDBJAL AHAQAHAQBI@\RL{halIj brbr_A}, golfe de Berbera (golfe d'Aden)} \index{ANBDAL@\RL{_hlj}!ANBDBJAL ACANAVAQ@\RL{_halIj a_h.dar}, <<~golfe vert~>> (golfe d'Oman)} \index{ANBDAL@\RL{_hlj}!ANBDBJAL BAAGAQAS@\RL{_halIj fAris}, golfe Persique} \index{ANBDAL@\RL{_hlj}!ANBDBJAL ACAMBEAQ@\RL{_halIj a.hmar}, mer Rouge} \index{AMBHAW@\RL{.hw.t}!AHAMAQ BEAMBJAW@\RL{al-ba.hr al-mu.hI.t}, l'Océan} \index{AHAMAQ@\RL{b.hr}!AHAMAQ BHAQBFBC@\RL{b.hr warank}, mer Baltique} \index{AHAMAQ@\RL{b.hr}!AHAMAQ AWAHAQASAJAGBF@\RL{ba.hr .tabaristAn}, mer Caspienne} \index{AHAMAQ@\RL{b.hr}!AHAMBJAQAI ANBHAGAQARBE@\RL{bu.hayraT _hawArizm}, mer d'Aral} \index{AJBHBDBI@\RL{tUl_A}, île de Thulé} \index{AUBBBDAH@\RL{.saqlab}, Slave} \index{ANBDAO@\RL{_hld}!ALARAGAEAQ ANAGBDAOAGAJ@\RL{jazA'ir al-_hAlidAt}, îles Canaries} \index{BBAHAH@\RL{qbb}!BBAHAHAI ACAQAV@\RL{qubbaT al-'ar.d}, dôme de la Terre} \includepdf[pages=3,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent Certaines mers touchent l'Océan, comme celle qui part du Maghreb et d'Al-Andalous et qui est entre Al-Andalous et la Syrie, ou comme la mer du Sud qui touche au côté Est d'où sortent quatre golfes vers le milieu des terres habitées~: le golfe de Berbera\footnote{\textit{i. e.} le golfe d'Aden où se trouve la ville de Berbera.}, celui qui est le plus proche du Maghreb, le golfe Rouge\footnote{\textit{i. e.} la mer Rouge.}, le golfe Persique et le golfe Vert\footnote{\textit{i. e.} le golfe d'Oman.}. Chacun d'eux a une longitude et une latitude avantageuses. [Parmi les mers qui touchent l'Océan], il y a aussi la mer des Warank\footnote{\textit{i. e.} la mer Baltique.} du côté Nord. Certaines mers ne touchent pas l'Océan, comme la mer du Tabaristan\footnote{\textit{i. e.} mer Caspienne.}, le lac du Khwarizm\footnote{\textit{i. e.} mer d'Aral.}, et d'autres qui sont des marais et des p\^echeries. Outre les mers, parmi les choses empêchant l'habitation, il y a les plaines, les montagnes, les sables, les collines, les maquis, et beaucoup d'autres que savent les spécialistes des routes, le voyageur, \emph{etc.} ; la connaissance détaillée de cela est seulement dans les livres variés de cette catégorie. La grandeur de la partie habitée, du côté Nord, tombe entre un peu plus de dix degrés de latitude jusqu'à la limite des cinquante degrés. Les hommes de métier l'ont divisée en sept climats longitudinalement, de sorte que chaque climat couvre l'étendue d'Est en Ouest sous un certain parallèle ; les conditions des lieux qui s'y trouvent sont semblables ; son [étendue] en latitude se calcule par des incréments d'une demi-heure par rapport à la durée du jour le plus long dans le climat moyen. Du septième climat jusqu'à une latitude de 66 parts, on trouve peu de constructions, et leurs habitants ressemblent à des sauvages. Au delà de 63 parts de latitude, il y a l'\^ile de Thulé\footnote{les Hébrides, l'Islande ou la Norvège peut-être.} dont les habitants vivent dans les bains à cause du grand froid ; le jour le plus long y dure vingt heures. Ptolémée a dit que là où la latitude est de 64 parts, il y a un peuple de Slaves qui ne savent pas. 
Les Grecs placent la longitude en partant de l'Ouest de sorte que sa quantité augmente dans le sens des signes, ils placent le début des latitudes sur l'équateur pour les délimiter de manière naturelle, et ils placent le début des lieux habités aux îles Canaries qui sont aujourd'hui entourées d'eau. Un peuple le place au littoral ouest du Maghreb. Entre eux deux, il y a dix degrés. La dernière limite de la partie habitée est comme nous avons rapporté -- selon certains, c'est la demeure des Démons -- et c'est le début de la partie habitée pour ceux qui le placent du côté de l'Est. Ils appellent ce qui est [au milieu] entre les deux limites, sur l'équateur, le Dôme de la Terre. \newpage\phantomsection \index{ACBBBDBE@\RL{'aqlm}!AEBBBDBJBE@\RL{'iqlIm j 'aqAlIm}, climat} \includepdf[pages=4,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent C'est situé à un quart de circonférence du commencement Ouest ; il y a donc des variantes concernant ce nom, en raison des variantes concernant le commencement Ouest. \emph{Le premier climat} commence là où le jour le plus long dure douze heures trois quarts et où la latitude est douze degrés et \uwave{[deux ?]} tiers de degré. En son milieu, le jour dure treize heures et la latitude est seize degrés, un demi-degré et un huitième de degré.\footnote{Début $12;20$ ou $12;40$. Milieu $16;37,30$.} \emph{Le deuxième climat} commence là où le jour dure treize heures et quart et où la latitude est vingt degrés, un quart et un cinquième de degré. En son milieu, le jour dure treize heures et demie, et la latitude est vingt-quatre degrés, un demi et un sixième de degré.\footnote{Début $20;27$. Milieu $24;40$.} \emph{Le troisième climat} commence là où le jour dure treize heures trois quarts et où la latitude est vingt-sept degrés et demi. En son milieu, le jour dure quatorze heures, et la latitude est trente degrés et deux tiers de degré.\footnote{Début $27;30$. Milieu $30;40$.} \emph{Le quatrième climat} commence là où le jour dure quatorze heures et quart, et où la latitude est trente-trois degrés et demi et un huitième de degré. En son milieu, le jour dure quatorze heures et demie, et la latitude est trente-six degrés, un cinquième et un sixième de degré.\footnote{Début $33;37,30$. Milieu $36;22$.} \emph{Le cinquième climat} commence là où le jour dure quatorze heures trois quarts et où la latitude est trente-neuf degrés moins un dixième de degré. En son milieu, le jour dure quinze heures, et la latitude est quarante-et-un degrés et un quart de degré.\footnote{Début $38;54$. Milieu $41;15$.} \emph{Le sixième climat} commence là où le jour dure quinze heures et quart et où la latitude est quarante-trois degrés, un quart et un huitième de degré. En son milieu, le jour dure quinze heures et demie, et la latitude est quarante-cinq degrés, un quart et un dixième de degré.\footnote{Début $43;22,30$. Milieu $45;21$.} \emph{Le septième climat} commence là où le jour dure quinze heures trois quarts et où la latitude est quarante-sept degrés et un cinquième de degré. En son milieu, le jour dure seize heures et quart, et la latitude est cinquante degrés et un tiers de degré.\footnote{Début $47;12$. Milieu $50;20$.} La fin de chaque climat est le commencement de celui qui le suit. D'autres gens posent le commencement du premier climat à l'équateur, et la fin du septième là où s'achève la partie habitée, non là où s'en achève la majeure partie.
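
Les latitudes de ces climats se retrouvent, de façon approchée, à partir de la durée du plus long jour. En notant $2H_0$ l'arc diurne maximal, $\varphi$ la latitude et $\varepsilon$ l'obliquité de l'écliptique, la relation usuelle est $\cos H_0 = -\tan\varphi\,\tan\varepsilon$. Esquisse indicative pour le milieu du quatrième climat, où le plus long jour dure quatorze heures et demie, en prenant $\varepsilon = 23;35$ (valeur compatible avec les hauteurs données plus loin, dans la cinquième section) :
\[
H_0=\frac{14{,}5\times 15}{2}=108{,}75,\qquad
\tan\varphi=\frac{-\cos H_0}{\tan\varepsilon}\approx\frac{0{,}3214}{0{,}4366}\approx 0{,}7363,\qquad
\varphi\approx 36{,}37\approx 36;22,
\]
ce qui coïncide avec la valeur du texte ; plusieurs autres latitudes de la liste se vérifient de la même manière, certaines s'en écartent davantage.
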
\newpage\phantomsection \index{BFBGAQ@\RL{nhr}!BFBGAQ@\RL{nahr j anhAr}, rivière} \index{ALAHBD@\RL{jbl}!ALAHBD@\RL{jabal j jibAl}, montagne} \index{BDBHBF@\RL{lwn}, couleur} \index{BFBGAQ@\RL{nhr}!BBBHAS BFBGAGAQ@\RL{qws al-nhAr}, arc diurne, durée du jour} \index{ANBDAL@\RL{_hlj}!ANBDBJAL BAAGAQAS@\RL{_halIj fAris}, golfe Persique} \label{var13} \includepdf[pages=5,pagecommand={\thispagestyle{plain}}]{edit2.pdf} [figure des sept climats contenant les montagnes et les grandes rivières]\label{carte_fleuves_montagnes} \emph{Dans le premier climat} il y a vingt montagnes et trente rivières, et la plupart de ses peuples sont noirs. \emph{Dans le deuxième climat} il y a vingt-sept montagnes et vingt-sept rivières, et les couleurs de ses peuples sont noire et brune. \emph{Dans le troisième climat}, il y a trente-trois montagnes et vingt-deux rivières, et la plupart de ses peuples sont bruns. \emph{Dans le quatrième climat}, il y a vingt-cinq montagnes et vingt-deux rivières, et la couleur de la plupart de ses peuples est entre brun et blanc ; ce sont les hommes les plus justes \uwave{khlqan w khlqan}, et c'est pourquoi [ce climat] a été une mine \uwave{de peuples}, de saints et de savants ; d'ailleurs les peuples du troisième, du cinquième et des autres climats manquent \uwave{khlqan w khlqan}. \emph{Dans le cinquième climat}, il y a trente montagnes et quinze rivières, et la plupart de ses peuples sont blancs. \emph{Dans le sixième climat}, il y a onze montagnes et quarante rivières, et souvent la couleur de ses peuples est jaune. \emph{Dans le septième climat}, il y a autant de rivières et de montagnes, et la couleur de ses peuples est entre jaune et blanc ; la plus grande part en est inoccupée à cause de l'intensité du froid et de la fréquence des neiges et des pluies, et les peuples de certaines de ses contrées habitent dans des bains [chauds] pendant six mois de l'année. Entre la fin du septième climat et la fin de la partie habitée, il y a beaucoup moins d'habitations qu'en amont ; là, le peuple ressemble à des bêtes. Le jour le plus long dure 17 heures quand la latitude est 54 degrés environ. Il dure 18 heures à 58 degrés de latitude. Il dure 19 heures à 61 degrés de latitude. Il dure 20 heures à 63 degrés de latitude. Il dure 21 heures à $64;30$ degrés de latitude. Il dure 22 heures à environ 65 degrés de latitude. Il dure 23 heures à 66 degrés de latitude. Il dure 24 heures quand la latitude atteint le complément de l'inclinaison [de l'écliptique]. Il dure un mois à 67 degrés de latitude. Il dure deux mois à $69;45$ degrés de latitude. Il dure trois mois à $73;30$ degrés de latitude. Il dure quatre mois à $78;30$ degrés de latitude. Il dure cinq mois à 84 degrés de latitude. Il dure six mois quand la latitude atteint un quart de circonférence. Passons aux pays connus des Anciens dans chaque climat, en allant de l'extrémité Est vers l'extrémité Ouest. \emph{Le premier climat} commence à l'Est aux confins de la Chine et il passe par la partie Sud de la Chine où il y a la cité impériale, puis par le Sud de l'Inde et du Sind, \uwave{l'île Krk Baha'ir}, le royaume du Yemen puis le golfe Persique et la péninsule arabique qui fait à peu près cinq cents stades et où se trouve tout l'état arabe, ses tribus et ses peuplades près du Hedjaz, Taëf, le Yemen, Bahrein, Nejd, Tihama, la Mecque, et Médine ou Yathrib, mais toutes ne sont pas dans le premier climat. 
Parmi les villes célèbres, celles qui s'y trouvent sont la ville de Zafar, Oman, Hadramaout, Aden, Ta`izz, Sanaa du Yemen, M\=ar\=a, Zabid, Qalhât, Shihr, Huras, Mahra, et Saba. \newpage\phantomsection \index{AMAHAT@\RL{.hb^s}!AMAHATAI@\RL{.hb^saT}, \'Ethiopie} \index{BFBJBD@\RL{nIl}, Nil} \index{AMBHAW@\RL{.hw.t}!AHAMAQ BEAMBJAW@\RL{al-ba.hr al-mu.hI.t}, l'Océan} \includepdf[pages=6,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent Puis [le premier climat] passe par l'Ethiopie et coupe le Nil d'Egypte, le Soudan et la Nubie, \uwave{comme \d{H}arr\=a}, palais du roi d'Ethiopie, et \uwave{Dahlamat} une ville de Nubie, et le Ghana d'où vient l'or du Soudan, dans le Maghreb, puis il passe par le Sud de la Berbérie, jusqu'à l'Océan à l'Ouest. \emph{Le deuxième climat} commence à l'Est de la Chine, puis il traverse la majeure partie de l'Inde, le Sind, comme la ville de Mansura, Narwar, et \uwave{Danbek}, et il arrive à Oman et Bassorah. Il coupe la péninsule arabique sur les terres de Nejd et Tihama ; il compte, parmi les villes de là-bas : Al-Yam\=ama, Bahrein, Hajar, Yathrib, Harr, La Mecque, Taëf et Jeddah. Il coupe la mer de Qulzum\footnote{\textit{i. e.} la mer Rouge.} et passe par la Haute-Egypte, puis il coupe le Nil~; il compte, parmi les villes de là-bas, Qûs, Akhmîm, \uwave{S\=abir}, \uwave{\d{S}\=a}, et Assouan, puis il prend sur les terres du Maghreb et passe au c{\oe}ur de l'Afrique, puis par la Berbérie, pour finir à l'Océan. \emph{Le troisième climat} commence à l'Est et passe par la Chine, par le Royaume Indien et le Gandh\=ara qui est dans les hautes-terres de l'Inde, par Multan dans le Sind, par \uwave{Z\=a'il}, Bust, S{\=\i}statm\=an\footnote{Sistan ?}, Kerman, Sirjan, Fars, Ispahan, Ahvaz, Askar, Koufa, Bassorah, Wasit, Baghdad, Al-Anbâr, Hit ; puis il passe par la Syrie, où sont les villes de \uwave{\d{H}iy\=ar}, Salamiya, \d{H}om\d{s}, Hama, Baalbek, Maarret en Nouman, Damas, Tyr, Acre, Tibériade, Qays\=ariya\footnote{aujourd'hui Baniyas.}, Jérusalem, Ascalon, \uwave{Madiyan}, 'Ars\=uf\footnote{Apollonie.}, Ramla, Gaza, puis il passe par la Basse-Egypte où sont les villes de Farama\footnote{Péluse.}, Tunis, Damiette, Le Caire, Alexandrie ; puis il passe en Ifriqiya où sont les villes de Kerouan et Sousse, puis il passe par les peuplades de la Berbérie, puis par Tanger, pour finir à l'Océan. \newpage\phantomsection \index{AMBHAW@\RL{.hw.t}!AHAMAQ BEAMBJAW@\RL{al-ba.hr al-mu.hI.t}, l'Océan} \includepdf[pages=7,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \emph{Le quatrième climat} commence au Nord de la Chine, il passe par \uwave{Batout}, \uwave{Khoulkh{\=\i}z}, Cathay, Khotan, les montagnes du Cachemire, Bul\=ur, Badakhsh\=an, Kaboul, Balkh, Hérât, Fergh\=ana, Samarcande, Boukhara, \=Amul\footnote{aujourd'hui Türkmenabat au Turkmenistan.}, Merv, Sarakhs, Tus, Nishapur, Gorgan, Esfarayen, Qûhistân, \uwave{Q\=umz}, Daylam, Tabaristan, Qom, Hamadan, Azerbaïdjan, Qazvin, Dinavar, Hulwân, Shahraz\=ur, Mossoul, S\=amarr\=a, Nusaybin, \=Amid\footnote{aujourd'hui Diyarbak{\i}r en Turquie.}, Mardin, Ra's al-`Ayn, Samosate, Malatya, Alep, Qinnasrîn, Antioche, Tripoli\footnote{Tripoli du Liban.}, Tarse, Ammuriye\footnote{la ville antique d'Amorium, aujourd'hui Hisarköy en Turquie.}, Lattaquié, puis il passe par l'île de Chypre, par Rhodes, et par les terres occidentales de l'Europe, et par Tanger, pour finir à l'Océan au détroit entre Al-Andalous et le Maghreb. 
\emph{Le cinquième climat} commence à l'Est à la frontière du Turkestan, près de Gog et Magog ; il passe par les célèbres nations du Turkestan avec ses peuplades, jusqu'à la limite de Kachgar\footnote{dans le Xinjiang.}, Taraz, 'Isf{\=\i}j\=ab\footnote{ aujourd'hui Sayram, au Kazakhstan.}, \uwave{\d{H}a\d{h}}, Usrushana\footnote{aujourd'hui Istaravchan au Tadjikistan.}, Ferghana, Samarcande, Soghd, Boukhara, Khwarezm, \uwave{Shams}, la contrée d'Arménie, Barda, Mayy\=af\=ariq{\=\i}n\footnote{Silvan, en Turquie.}, la Séleucie, Erzurum, Ahlat\footnote{au bord du lac de Van.}, et il passe dans l'Empire Romain par \uwave{\d{H}arshafat}, \uwave{Qrt}, et \uwave{Rome}, puis par les côtes de la mer de Syrie\footnote{\textit{i. e.} la mer Méditerranée.}, l'Italie et Al-Andalous, jusqu'à l'Océan. \emph{Le sixième climat} commence à l'Est par les terres des Turcs orientaux et leurs peuplades, il traverse la mer de Gorgan\footnote{la mer Caspienne.}, il passe par les Khazars et les \uwave{M\=ut\=ani ?}, par les Slaves\footnote{D'après Kazimirski, nom générique donné aux peuples du Nord-Est de l'Europe.}, les \uwave{Aryens ?}, B\=ab al-Abw\=ab\footnote{aujourd'hui Derbent en Russie.}, la Russie, puis l'Empire Romain dont Constantinople, et le Nord d'Al-Andalous, pour finir à l'Océan. \newpage\phantomsection \index{AMBHAW@\RL{.hw.t}!AHAMAQ BEAMBJAW@\RL{al-ba.hr al-mu.hI.t}, l'Océan} \includepdf[pages=8,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \emph{Le septième climat} commence à l'Est en passant par l'extrémité des Turcs orientaux et de leurs peuplades, par le Nord (Gog et Magog), puis il passe par Ghiy\=a\d{d} où des montagnes recèlent des turques vivant comme des bêtes. Puis il passe par Bolghar, la Russie et les Slaves, et il traverse la Mer Méditerranée et les pays slaves pour finir à l'Océan. J'ai placé ces pays dans leurs climats, de manière détaillée, selon leur longitude et leur latitude, dans les tables de la troisième partie, avec les autres tables déjà mentionnées. Dieu approuve la raison. \label{troisieme2} \newpage\phantomsection \index{ASBHBI@\RL{sw_A}!ANAWAW ASAJBHAGAB@\RL{_ha.t.t al-istiwA'}, équateur terrestre} \index{AYAOBD@\RL{`dl}!BEAYAOAOBD BFBGAGAQ@\RL{m`ddl al-nhAr}, équateur, plan de l'équateur} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AKAGBFBJAI@\RL{.hrkaT _tAnyaT}, deuxième mouvement} \index{AXBGAQ@\RL{.zhr}!AXBGBHAQ@\RL{.zuhUr}, visibilité ($\neq$ \RL{i_htifA'})} \index{ASBFBH@\RL{snw}!BAAUBHBD ASBFAI@\RL{fu.sUl al-sanaT}, les saisons} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \includepdf[pages=9,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \addcontentsline{toc}{chapter}{II.2 Particularités de l'équateur terrestre} \begin{center} \Large Deuxième section \normalsize Particularités de l'équateur terrestre \end{center} \noindent L'équateur passe par les zéniths des lieux qui se trouvent sur l'équateur terrestre, et il coupe leurs horizons orthogonalement, et leurs cercles horizontaux bissectent chaque trajectoire diurne puisqu'ils passent par les pôles de l'équateur. C'est pourquoi en ces lieux, le jour et la nuit sont toujours égaux. L'origine de leurs azimuts est l'équateur, et ses pôles sont aux points Nord et Sud du cercle horizontal. Ainsi tous les astres ont un lever et un coucher, sauf [ceux qui sont] aux pôles, car une moitié [de leur trajectoire diurne] est visible et l'autre cachée. 
La durée de visibilité de chaque point de l'orbe est égale à la durée pendant laquelle il est caché ; il y a certes un écart à cause de l'irrégularité des trajectoires due au second mouvement\footnote{Le second mouvement est le mouvement du huitième orbe, c'est-à-dire le mouvement de précession des fixes.}, mais cet écart est insensible. Le Soleil passe deux fois l'an au zénith, et c'est quand il est aux points des équinoxes ; il n'y a alors, à midi, pas d'ombre projetée horizontale. Il ne s'éloigne pas du zénith d'un angle supérieur à l'inclinaison [de l'écliptique], et sa hauteur n'est jamais inférieure au complément de l'inclinaison ; il passe la moitié de l'année de chaque côté, et l'ombre à midi est alors située de l'autre côté. Les ombres au début de l'été et au début de l'hiver sont égales. Les pôles de l'écliptique sont à l'horizon chaque fois qu'un des deux points des équinoxes est au zénith, la ceinture de l'écliptique est alors orthogonale à l'horizon, et le méridien du lieu bissecte la partie visible de l'écliptique. Pendant que la moitié Nord de la ceinture de l'écliptique passe par le méridien du lieu, c'est son pôle Sud qui est visible. Pendant que la moitié Sud de la ceinture y passe, c'est son pôle Nord qui est visible. La hauteur de chacun ne dépasse pas l'inclinaison [de l'écliptique]. Ils ont, dans chaque année, huit saisons : deux printemps, deux étés, deux automnes et deux hivers. Le début du premier printemps, c'est quand le Soleil est au milieu du Verseau, et le début du second, c'est quand il est au milieu du Lion ; le début du premier été, quand il est au commencement du Bélier, et le début du second quand il est au commencement de la Balance ; le début du premier automne quand il est au milieu du Taureau, et le début du second quand il est au milieu du Scorpion ; le début du premier hiver quand il est au commencement du Cancer et le début du second quand il est au commencement du Capricorne ; comme il est montré dans la figure suivante. Dieu est le plus savant. \newpage\phantomsection \begin{center} \input{fig020.pdf_tex} \end{center} \newpage\phantomsection \begin{center} \input{fig019.pdf_tex} \end{center} \newpage\phantomsection \index{ACBHAL@\RL{'awj}!ACBHAL ATBEAS@\RL{'awj al-^sams}, Apogée du Soleil} \index{AMAQAQ@\RL{.harara}!AMAQAQ@\RL{.hrr}, chaud} \index{AHAQAO@\RL{brd}!AHAQAO@\RL{brd}, froid} \index{AYAOBD@\RL{`dl}!AYAOBD@\RL{`dl}, tempéré} \index{BHASAY@\RL{ws`}!ASAYAI@\RL{s`aT m^srq / m.grb}, amplitude Est / Ouest} \index{AHBF ASBJBFAG@\RL{ibn sInA}, Avicenne} \index{AQAGARBJ@\RL{al-rAzI}, Fa\b{h}r al-D{\=\i}n al-R\=az{\=\i}} \index{AMAHAT@\RL{.hb^s}!AMAHATAI@\RL{.hb^saT}, \'Ethiopie} \index{BABDBC@\RL{flk}!BABDBC BEASAJBBBJBE BCAQAI BEBFAJAUAHAI@\RL{falak mustaqIm, kuraT munta.sibaT}, sphère droite} \index{ASBEAJ@\RL{smt}!ASBEAJ AQABAS@\RL{samt al-ra's}, zénith} \index{BBBDAH@\RL{qlb}!BEAOAGAQ BEBFBBBDAHBJBF@\RL{madAr al-munqalbayn}, les deux tropiques} \index{ATAYAY@\RL{^s``}!ATAYAGAY@\RL{^su`A`}, rayon (de lumière)} \includepdf[pages=10,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Là-bas, l'orbe tourne comme une roue puisque l'horizon coupe toutes les trajectoires diurnes orthogonalement. C'est pourquoi on appelle ses horizons <<~horizons de l'orbe droit~>> ou <<~horizons de la sphère droite~>>.
Comme leur horizon est un des cercles de déclinaison (il passe par les pôles), alors l'\emph{amplitude Est} de tout point\footnote{L'\emph{amplitude Est d'un astre} mesure donc, à l'horizon, l'arc entre le point où se lèvent les équinoxes et le point où se lève l'astre. L'amplitude Ouest est relative aux couchers. Ragep : \textit{ortive amplitude}, \textit{occasive amplitude}.}, c'est-à-dire l'arc à l'horizon entre son lever et le lever des équinoxes, est égale à sa déclinaison. De même pour son amplitude Ouest. Les gens s'accordent à dire que les lieux les plus chauds en été sont ceux qui sont sous le tropique, si aucune raison terrestre ou atmosphérique n'enlève rien à sa chaleur, car le Soleil arrive au zénith et y prolonge son séjour pendant à peu près deux mois, à cause de l'amoindrissement de l'écart en déclinaison. Pour cette raison, il semble alors ne pas avoir de mouvement en déclinaison pendant des jours. Le jour en été ne dure pas plus longtemps, ni la nuit moins longtemps, mais la longueur du jour n'influe pas sur l'intensité de la chaleur quand le Soleil est proche du zénith, à cause de l'épaississement des rayons quand cela arrive, épaississement dû au fait qu'ils forment un angle aigu. Le Grand Maître\footnote{Avicenne} conteste cela et pense que les contrées les plus tempérées sont à l'équateur terrestre, parce que le Soleil n'y reste pas longtemps au zénith, parce que son mouvement en déclinaison y est le plus rapide. Bien que le passage au zénith cause l'échauffement, c'est la durée du passage au zénith qui y conduit. C'est ainsi que, [en général], l'été est plus chaud que le printemps, et l'après-midi plus chaud qu'avant midi. De plus, à cause de l'égalité entre la durée du jour et la durée de la nuit, la dureté des conditions de chacun sera rapidement rompue par l'autre ; ainsi les deux seront tempérés. L'Imam\footnote{Al-R\=az{\=\i}} réfute cela, en disant que l'échauffement du Soleil en hiver, à l'équateur terrestre, est comme l'échauffement du Soleil en été dans un pays dont la latitude est le double de l'inclinaison de l'écliptique, et que cet échauffement est très intense. Que penser alors de la chaleur estivale à l'équateur terrestre où le Soleil peut être considéré au zénith toute l'année ! On réfute l'Imam en contredisant le fait que la chaleur hivernale à l'équateur est comme la chaleur estivale dans ce pays~: la chaleur estivale de ce pays serait plus intense, à cause de la longueur du jour et de la courte durée de ses nuits, au contraire de ce qui se passe à l'équateur terrestre. De plus les choses habituelles n'ont pas autant d'influence que les choses inhabituelles. \newpage\phantomsection \index{BDBHBF@\RL{lwn}, couleur} \includepdf[pages=11,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent \`A cause de l'accoutumance de la constitution des peuples de l'équateur à la chaleur, et de leur manque d'accoutumance au froid, il est donc possible qu'ils ne ressentent pas la chaleur de l'air quand le Soleil est au zénith, ni sa froideur quand il est aux solstices, contrairement au peuple de cet [autre] pays aux mêmes moments. L'Imam juge que le climat le plus tempéré est le quatrième. Il le déduit du fait qu'un lieu où prospèrent les habitations se trouve sans aucun doute plus proche du milieu des sept climats que de leurs extrémités. Ainsi, le fait de griller, ou celui de rester immature, causés par les conditions de chaleur ou de froid, sont clairement visibles aux extrémités [des sept climats].
La vérité est que, si l'on entend par \emph{tempéré} l'uniformité des états, il n'y a aucun doute qu'elle est atteinte davantage à l'équateur terrestre ; mais si l'on entend par là l'équilibre entre les deux conditions, alors il n'y a aucun doute qu'il est atteint davantage dans le quatrième climat. Le montrent les faits suivants. La couleur noire des habitants de l'équateur terrestre dans les Pays des Noirs et en \'Ethiopie est très intense, et leurs cheveux sont très crépus~: c'est causé par la chaleur de l'air. Le contraire a lieu dans le quatrième climat~; car l'air y est tempéré. \newpage\phantomsection \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{BABHBB@\RL{fwq}!ADBABB BEAGAEBD@\RL{'ufuq mA'il}, horizon incliné} \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \includepdf[pages=12,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \addcontentsline{toc}{chapter}{II.3 Particularités des lieux qui ont une latitude [non nulle]} \begin{center} \Large Troisième section \normalsize Particularités des lieux qui ont une latitude [non nulle] \end{center} On les appelle \emph{horizons inclinés}~; ce sont ceux qui sont entre l'équateur terrestre et l'un des pôles de l'équateur. Là-bas, l'orbe tourne [sous un angle égal à l'éloignement] de l'équateur par rapport à ces horizons, du côté du pôle caché. Chacun de ces horizons est incliné par rapport à [l'équateur] du côté du pôle visible. C'est pourquoi on les appelle horizons inclinés. La hauteur du pôle est du même côté, et elle est égale à la latitude du pays. La distance des trajectoires diurnes toujours visibles ou toujours cachées, par rapport à l'équateur, est supérieure ou égale au complément de la latitude du pays. La plus grande d'entre elles est celle dont la distance est égale au complément de la latitude, elle touche l'horizon, et elle n'est jamais invisible, ou bien jamais visible. Les trajectoires diurnes sont coupées par l'horizon en deux arcs inégaux. Le plus grand [arc] est celui qui est visible, quand il s'agit d'une trajectoire proche du pôle visible et située du même côté que lui ; c'est celui qui est caché, quand il s'agit d'une trajectoire proche du pôle caché et située du même côté que lui. Pour tout couple de trajectoires diurnes équidistantes de l'équateur mais situées de part et d'autre de l'équateur, les arcs alternés sont deux à deux égaux\footnote{L'absence du duel en français rend cette phrase un peu obscure. L'arc visible de l'une des deux trajectoires est égal à l'arc caché de l'autre, et inversement.}. Plus le Soleil s'éloigne de l'équateur du côté du pôle visible, plus l'excès du jour sur la nuit croît, et inversement du côté du pôle caché -- mais si l'on augmente la latitude du pays, alors l'écart entre l'arc visible et l'arc caché [de chaque trajectoire diurne] augmente aussi. Le jour s'allonge jusqu'à ce que [le Soleil] atteigne le sommet du solstice qui est proche du pôle visible, et il raccourcit jusqu'à l'autre solstice. En chaque partie [du circuit du Soleil sur l'écliptique], le jour est égal à la nuit en la partie opposée, et inversement. Les deux temps ne sont égaux que lorsque le Soleil est à l'équinoxe à son lever ou à son coucher. S'il se lève quand il est à l'équinoxe, alors la nuit de ce lever est égale à son jour ; s'il se couche quand il est à l'équinoxe, alors le jour où il se couche est égal à la nuit du lever [suivant].
D'où apparaît l'impossibilité que [le jour et la nuit] soient égaux dans toutes les contrées à une même date, puisqu'il est impossible que [le Soleil] soit situé en même temps sur tous les horizons. Soit deux cercles de déclinaison, l'un à l'Est, l'autre à l'Ouest, tels que chacun passe par l'un des deux points d'intersection de l'horizon et de la trajectoire diurne du Soleil, d'un astre, ou d'un point quelconque. Il y a deux triangles~: l'un à l'Est, l'autre à l'Ouest, sous l'horizon si l'on est du côté du pôle visible, et sur l'horizon si l'on est du côté du pôle caché. \newpage\phantomsection \index{BHASAY@\RL{ws`}!ASAYAI@\RL{s`aT m^srq / m.grb}, amplitude Est / Ouest} \index{BFBGAQ@\RL{nhr}!AJAYAOBJBD BFBGAGAQ@\RL{ta`dIl al-nahAr}, équation du jour} \index{BFBGAQ@\RL{nhr}!BBBHAS BFBGAGAQ@\RL{qws al-nhAr}, arc diurne, durée du jour} \index{ASBEAJ@\RL{smt}!ASBEAJ AQABAS@\RL{samt al-ra's}, zénith} \index{ASBEAJ@\RL{smt}!ASBEAJ BBAOBE@\RL{samt al-qadam, samt al-rjl}, nadir} \includepdf[pages=13,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent Le premier côté de chacun de ces deux triangles est un arc du cercle de déclinaison, et sa longueur est la déclinaison du Soleil, de l'astre, ou du point quelconque. Le deuxième côté est un arc du cercle de l'horizon, et c'est l'amplitude Est quand on est du côté [Est], ou bien l'amplitude Ouest quand on est du côté [Ouest] ; en effet l'amplitude Est est un arc du cercle de l'horizon entre la trajectoire de l'astre (ou du point quelconque) et le lever des équinoxes\footnote{\textit{i. e.} le point Est, à l'horizon.}, et l'amplitude Ouest est l'arc entre la trajectoire mentionnée et le coucher des équinoxes\footnote{\textit{i. e.} le point Ouest, à l'horizon.}. Le troisième côté est l'arc de l'équateur entre ce lever -- ou ce coucher -- et le cercle de déclinaison passant par l'intersection de la trajectoire et de l'horizon~: c'est l'\emph{équation du jour} du Soleil ou de l'astre. L'excédent de l'arc diurne (de l'astre ou du point quelconque) sur son arc diurne à l'équateur terrestre est le double de l'équation du jour, lorsque l'astre (ou le point quelconque) est du côté du pôle visible ; et c'est son défaut qui est le double, lorsque l'astre est du côté du pôle caché. Il est clair qu'on obtient la moitié de l'arc diurne en ajoutant l'équation à un quart de circonférence, ou en l'en retranchant ; il est donc bien permis de l'appeler ``équation du jour'', car c'est l'équation pour la moitié [du jour]. Il y a des gens qui posent un seul cercle de déclinaison passant par le lever de l'équinoxe et par son coucher. Avec l'horizon et avec toute trajectoire diurne, [ce cercle de déclinaison] forme deux triangles, l'un à l'Est et l'autre à l'Ouest ; mais ils sont sur l'horizon si [la trajectoire diurne] est du côté du pôle visible, et sous l'horizon si elle est du côté du pôle caché. L'équation du jour est alors un arc de la trajectoire de l'astre ou du point quelconque, entre le cercle de l'horizon et le cercle de déclinaison passant par le lever de l'équinoxe et son coucher. Cet arc de la trajectoire diurne est semblable à l'arc de l'équateur mentionné ci-dessus. Toute trajectoire diurne dont la distance à l'équateur est égale à la latitude du pays touche le cercle origine des azimuts, au zénith si elle est du côté du pôle visible, et au nadir si elle est du côté du pôle caché.
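
À titre indicatif, les deux grandeurs décrites ci-dessus s'expriment ainsi en notation moderne, pour un astre de déclinaison $\delta$ observé à la latitude $\varphi$ : l'équation du jour $\Delta\alpha$ (la différence ascensionnelle) et l'amplitude Est ou Ouest $\eta$ vérifient
\[
\sin\Delta\alpha=\tan\varphi\,\tan\delta,\qquad
\sin\eta=\frac{\sin\delta}{\cos\varphi},
\]
et la moitié de l'arc diurne vaut un quart de cercle augmenté de $\Delta\alpha$ quand l'astre est du côté du pôle visible, diminué de $\Delta\alpha$ dans le cas contraire ; l'arc diurne entier excède donc de $2\Delta\alpha$ (ou manque de $2\Delta\alpha$) le demi-cercle qu'il vaudrait à l'équateur terrestre, comme il est dit plus haut. Pour $\varphi=0$, on retrouve $\Delta\alpha=0$ et $\eta=\delta$, c'est-à-dire l'égalité de l'amplitude et de la déclinaison énoncée dans la deuxième section.
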
Si la distance est supérieure à la latitude du pays, alors [la trajectoire diurne] ne rencontrera pas le cercle origine des azimuts ; si elle est du côté du pôle visible, alors elle passera près du zénith mais loin du nadir du côté caché. Si en revanche la distance est inférieure [à la latitude du pays], alors [la trajectoire diurne] coupera le cercle origine des azimuts en deux points, l'un à l'Est et l'autre à l'Ouest. Dès lors que l'astre est sur l'arc [de trajectoire diurne] situé entre le cercle origine des azimuts et l'équateur, il s'éloigne du cercle origine des azimuts quand il est du côté caché si cette trajectoire diurne est située du côté du pôle visible, et inversement. \newpage\phantomsection \begin{center} \small\input{fig025.pdf_tex}\normalsize \end{center} \newpage\phantomsection \begin{center} \small\input{fig024.pdf_tex}\normalsize \end{center} \newpage\phantomsection \index{AHAYAO@\RL{b`d}!AHAYAO AYBF ASBEAJ@\RL{b`d `an al-samt}, distance zénithale} \index{ASBFBH@\RL{snw}!BAAUBHBD ASBFAI@\RL{fu.sUl al-sanaT}, les saisons} \index{ASBEAJ@\RL{smt}!ASBEAJ AQABAS@\RL{samt al-ra's}, zénith} \index{ACBHAL@\RL{'awj}!ACBHAL ATBEAS@\RL{'awj al-^sams}, Apogée du Soleil} \index{AMAQAQ@\RL{.harara}!AMAQAQ@\RL{.hrr}, chaud} \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \includepdf[pages=14,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \addcontentsline{toc}{chapter}{II.4 Particularités des lieux dont la latitude ne dépasse pas le complément de l'inclinaison [de l'écliptique]} \begin{center} \Large Quatrième section \normalsize Particularités des lieux dont la latitude ne dépasse pas le complément de l'inclinaison [de l'écliptique] \end{center} \noindent Ces lieux se répartissent en quatre catégories. \emph{Première catégorie :} là où la latitude est inférieure à l'inclinaison de l'écliptique. En ces lieux, le Soleil passe au zénith en deux points [de l'écliptique] dont les déclinaisons sont égales à la latitude du lieu et qui sont situés du même côté que le pôle visible ; la ceinture de l'écliptique est alors perpendiculaire à l'horizon, ses pôles sont à l'horizon, et un homme debout n'a pas d'ombre à midi. Tant que le Soleil est dans l'arc compris entre ces deux points du côté du pôle visible, l'ombre sera portée du côté du pôle caché, et le pôle visible de l'écliptique sera celui des deux qui est proche du pôle caché de l'équateur, et son pôle caché sera celui qui est proche du pôle visible [de l'équateur]. Tant que le Soleil est dans l'arc opposé compris entre ces deux points du côté du pôle caché, l'ombre sera portée du côté du pôle visible de l'équateur, et le pôle visible de l'écliptique sera celui qui est proche du pôle visible de l'équateur, et son pôle caché celui qui est proche du pôle caché. Plus la latitude croît, plus les deux points se rapprochent l'un de l'autre. L'arc entre les deux points, ainsi que les pôles de l'écliptique, ont des levers et des couchers. Les saisons ne sont pas égales. Ainsi leurs étés sont plus longs. Puisque le Soleil atteint le zénith deux fois l'an, les saisons ne seront pas égales même si l'on en comptait plus que quatre, car le Soleil atteint deux distances zénithales maximales différentes, suivant qu'il est d'un côté ou de l'autre de l'équateur. \emph{Deuxième catégorie :} là où la latitude est égale à l'inclinaison de l'écliptique. 
En ces lieux, le Soleil passe au zénith une seule fois par an, l'un des pôles de l'écliptique est toujours visible, l'autre toujours caché, et ils ne touchent pas l'horizon dans leurs trajectoires sauf quand le point solsticial situé du côté du pôle visible atteint le zénith -- et alors la ceinture de l'écliptique est perpendiculaire à l'horizon. Les ombres sont toute l'année du côté du pôle visible, sauf quand le Soleil est au zénith -- alors il n'y en a pas. La hauteur du Soleil croît de l'un des deux points solsticiaux à l'autre, puis la variation change de sens et elle décroît jusqu'à revenir à sa valeur initiale. Les saisons sont quatre, ni plus ni moins, et elles sont égales et analogues alternativement pour les contrées du Nord et pour celles du Sud ; mais là où la latitude Sud est égale à l'inclinaison [de l'écliptique] il fait plus chaud que là où la latitude Nord est égale à l'inclinaison de l'écliptique, parce que l'Apogée du Soleil est au Nord, et que son périgée est au Sud. \newpage\phantomsection \index{AMAWAW@\RL{.h.t.t}!BFAMAWAGAW@\RL{in.hi.tA.t}, abaissement|see{\RL{irtifA`}}} \index{AQBAAY@\RL{rf`}!AQAJBAAGAY@\RL{irtifA`}, hauteur (coordonnées azimutales)} \includepdf[pages=15,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \emph{Troisième catégorie :} là où la latitude est supérieure à l'inclinaison de l'écliptique mais inférieure à son complément. Le Soleil n'y atteint pas le zénith. Deux hauteurs lui sont propres~: l'une est la somme de l'inclinaison de l'écliptique et du complément de la latitude du lieu, et l'autre est plus petite, c'est l'excédent du complément de la latitude sur l'inclinaison de l'écliptique. Il en est de même du pôle toujours visible, car il n'atteint jamais l'horizon, et sa hauteur maximale est atteinte lors du passage du solstice caché au méridien du lieu, et sa hauteur minimale lors du passage du solstice visible au méridien du lieu. Quant au pôle caché, d'après le même raisonnement, il atteint deux hauteurs négatives\footnote{hauteur négative, \textit{in\d{h}i\d{t}\=a\d{t}} que nous avons ailleurs traduit par <<~abaissement~>>, désigne une hauteur mesurée le long d'un grand cercle perpendiculaire à l'horizon mais \emph{sous} l'horizon.}. Les autres conditions, quant au fait que les ombres tombent du côté du pôle visible et quant à la longueur du jour et de la nuit, sont comme nous avons montré ci-dessus. Quant aux planètes, celles dont la latitude dépasse l'excédent de la latitude du lieu sur l'inclinaison de l'écliptique passeront par le zénith deux fois ; celles dont la latitude est égale à l'excédent y passeront une fois~; et celles dont la latitude est moindre ne passeront pas par le zénith. Plus la latitude augmente, plus l'équation du jour et les amplitudes Est et Ouest augmentent. \emph{Quatrième catégorie :} là où la latitude est égale au complément de l'inclinaison de l'écliptique. Là, la trajectoire du solstice situé du côté du pôle visible est toujours visible, et la trajectoire de l'autre solstice est toujours cachée. La trajectoire du pôle visible de l'écliptique passe par le zénith, et la trajectoire de l'autre pôle passe par le point opposé. 
Donc quand le point solsticial visible arrive à l'horizon, il le touche en un point qui est le pôle du cercle origine des azimuts du côté du pôle visible, le point solsticial invisible touche l'autre pôle du cercle origine des azimuts, les deux pôles [de l'écliptique] viennent au zénith et au [nadir], la ceinture de l'écliptique coïncide avec le cercle de l'horizon, la tête du Bélier avec le point Est, le commencement de la Balance avec le point Ouest, la tête du Cancer avec le point Nord, et la tête du Capricorne avec le point Sud ; et le point de l'équateur en face de la tête du Capricorne sera situé sur le méridien du lieu au Sud au dessus de la Terre\footnote{c'est-à-dire au dessus de l'horizon.}, et le point de l'équateur en face de la tête du Cancer sera situé sur le méridien du lieu au Nord au dessous de la Terre\footnote{c'est-à-dire sous l'horizon.}, si le pôle visible est au Nord. De cela, on connaît aussi la position des deux ceintures relativement à l'horizon quand [le pôle visible] est au Sud. \newpage\phantomsection \index{AJAGBHAOBHASBJBHAS@\RL{tAwdUsyUs}, Théodose} \index{BHASAY@\RL{ws`}!ASAYAI@\RL{s`aT m^srq / m.grb}, amplitude Est / Ouest} \index{BFBGAQ@\RL{nhr}!AJAYAOBJBD BFBGAGAQ@\RL{ta`dIl al-nahAr}, équation du jour} \index{BJBHBE@\RL{ywm}!BJBHBE AHBDBJBDAJBG@\RL{al-yawm bilaylatihi j al-'ayyAm bilayAlIhA}, nychtémère (\textit{litt.} le jour avec sa nuit)} \index{AVBHAB@\RL{.daw'}!AVBHAB@\RL{.daw'}, clarté} \index{BBBJAS@\RL{qys}!BEBBBJAGAS@\RL{miqyAs j maqAyIs}, gnomon} \index{ASBCBF@\RL{skn}!BEASBCBHBF@\RL{al-maskUn}, la partie habitée (du globe)} \index{AYBEAQ@\RL{`mr}!BEAYBEBHAQ@\RL{al-ma`mUr, al-`imAraT}, partie cultivée de la Terre~; par extension, le quart habité (\RL{maskUn})} \includepdf[pages=16,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Quand le pôle s'incline vers l'Ouest par rapport au zénith dans le sens direct\footnote{Le sens direct est ici le sens du mouvement diurne, c'est-à-dire d'Est en Ouest ; ce n'est donc pas le sens ``des signes''.}, le point solsticial et la moitié Est de la ceinture de l'écliptique (dont le centre est l'équinoxe de printemps) s'élèvent [au dessus de l'horizon], si le pôle visible est au Nord ; par là même, la moitié Ouest s'abaissera aussi. Les deux cercles\footnote{Ceinture de l'écliptique et horizon.} se couperont en deux points opposés proches des solstices et des points Nord et Sud : il y a tangence en ces quatre points, donc nécessairement ils se couperont en d'autres points. Alors la partie suivant le solstice caché près du pôle du cercle origine des azimuts tendra à se coucher, et la partie suivant le solstice visible près de l'autre pôle du cercle origine des azimuts tendra à se lever~; puis les parties de la moitié cachée se lèveront l'une après l'autre en chacune des parties de la moitié Est de l'horizon, et les parties de la moitié visible se coucheront l'une après l'autre en chacune des parties de la moitié Ouest de l'horizon, pendant la durée d'un nychtémère, jusqu'à ce que la position de l'orbe revienne à son état initial. L'amplitude Est et l'équation du jour atteignent [en ces lieux] un quart de cercle.
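
L'affirmation qui précède se vérifie directement avec les relations rappelées plus haut : à la latitude $\varphi=90-\varepsilon$ et pour le solstice, de déclinaison $\delta=\varepsilon$, on a
\[
\sin\Delta\alpha=\tan\varepsilon\,\tan(90-\varepsilon)=1,\qquad
\sin\eta=\frac{\sin\varepsilon}{\cos(90-\varepsilon)}=1,
\]
de sorte que l'équation du jour et l'amplitude Est atteignent l'une et l'autre $90$ degrés, c'est-à-dire un quart de cercle.
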
Le jour s'allonge jusqu'à ce que la durée d'un nychtémère soit un jour entier lorsque le Soleil arrive au solstice visible, parce que sa trajectoire est alors toujours visible comme nous l'avons dit, puis apparaissent des nuits, et elles s'allongent jusqu'à ce que la durée [d'un nychtémère] soit une nuit entière lorsqu'il arrive au solstice caché ; c'est ainsi si l'on considère le début de la nuit ou du jour comme étant l'arrivée du centre du Soleil à l'horizon. Si l'on considère le début du jour comme étant l'apparition de la clarté et la disparition des étoiles fixes, et le début de la nuit comme étant la disparition [de la clarté] et l'apparition [des étoiles fixes], alors la durée [maximale] de chacun des deux est \emph{un mois} d'après ce qu'a montré Théodose dans les \textit{Habitations}. La hauteur du Soleil croît jusqu'au double de l'inclinaison de l'écliptique, puis elle décroît jusqu'à s'évanouir au point de tangence avec l'horizon. Après son lever au pôle du cercle origine des azimuts, il s'élève à l'Est, il franchit [le grand cercle] correspondant à la droite Est-Ouest, il atteint sa hauteur maximale à son arrivée au méridien du lieu, au Sud, et c'est le double de l'inclinaison [de l'écliptique] ; puis sa hauteur décroît jusqu'à toucher l'horizon au pôle du cercle origine des azimuts. Puisque le Soleil tourne autour du gnomon, et que l'ombre est toujours du côté opposé, l'ombre tourne aussi autour du gnomon. Comme nous l'avons montré, en ces lieux, une moitié de l'orbe de l'écliptique se lève avec une révolution de l'équateur, tandis que l'autre moitié se lève instantanément. La partie habitée s'arrête, du côté Nord, aux environs de cet horizon, comme on sait. \newpage\phantomsection \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{AUAHAM@\RL{.sb.h}!AUAHAM@\RL{.sb.h}, aurore} \index{ATBABB@\RL{^sfq}!ATBABB@\RL{^sfq}, crépuscule} \index{ASBFBH@\RL{snw}!BAAUBHBD ASBFAI@\RL{fu.sUl al-sanaT}, les saisons} \index{AMAQBC@\RL{.hrk}!AMAQBCAI ABBHBDBI@\RL{.harakaT 'Ul_A}, premier mouvement|see{\RL{.harakaT yawumiyyaT}}} \index{AMAQBC@\RL{.hrk}!AMAQBCAI BJBHBEBJBJAI@\RL{.harakaT yawumiyyaT}, mouvement diurne} \includepdf[pages=17,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \addcontentsline{toc}{chapter}{II.5 Particularités des lieux dont la latitude dépasse le complément de l'inclinaison [de l'écliptique] sans atteindre un quart de cercle} \begin{center} \Large Cinquième section \normalsize Particularités des lieux dont la latitude dépasse le complément de l'inclinaison [de l'écliptique] sans atteindre un quart de cercle \end{center} \noindent En ces lieux, la trajectoire du pôle de l'écliptique dévie un peu du zénith, du côté du pôle caché. La plus grande trajectoire diurne toujours visible est plus grande que la trajectoire du solstice, et elle coupe la ceinture de l'écliptique en deux points de déclinaisons égales, du côté du pôle visible ; et la plus grande trajectoire diurne toujours cachée la coupe en deux points qui sont leurs opposés, du côté du pôle caché. La ceinture de l'écliptique se divise en quatre portions. \emph{La première}, toujours visible, est celle dont le centre est le solstice du côté du pôle visible, et le Soleil y reste pendant un jour de leur été. \emph{La deuxième}, toujours cachée, est celle dont le centre est le solstice du côté du pôle caché, et le Soleil y reste pendant une nuit de leur hiver.
Les extrémités du premier arc touchent l'horizon en l'un des pôles du cercle origine des azimuts, du côté du pôle visible, et elles ne se couchent pas avec le mouvement diurne ; les extrémités du deuxième arc touchent [l'horizon] en l'autre pôle du cercle origine des azimuts, et elles ne se lèvent pas. \emph{La troisième} [portion] est celle dont le centre est le commencement du Bélier ; elle se lève à rebours, c'est-à-dire que sa fin se lève avant son début, et elle se couche à l'endroit, c'est-à-dire que son début se couche avant sa fin, quand le pôle visible est au Nord ; mais elle se lève à l'endroit et elle se couche à rebours quand le pôle visible est au Sud. \emph{La quatrième} est celle dont le centre est le commencement de la Balance, et son régime est à l'inverse de la troisième. Le solstice visible a deux hauteurs. Sa hauteur maximale est la somme de l'inclinaison de l'écliptique et du complémentaire de la latitude du lieu, et [elle est atteinte] sur le méridien du lieu, du côté du pôle caché. Sa hauteur minimale est l'excédent de la latitude du lieu sur le complémentaire de l'inclinaison de l'écliptique, et [elle est atteinte] sur le méridien du lieu, du côté du pôle visible. Le pôle visible de l'écliptique a aussi deux hauteurs. Sa hauteur maximale est la somme du complémentaire de la latitude du lieu et du complémentaire de l'inclinaison de l'écliptique. Sa hauteur minimale est l'excédent de la latitude du lieu sur l'inclinaison de l'écliptique. Le pôle et le solstice passent ensemble au méridien du lieu, de part et d'autre du zénith, à des hauteurs complémentaires. On déduit de manière analogue l'état du solstice et du pôle cachés. En ces horizons, l'aurore et le crépuscule [peuvent] durer longtemps. L'ombre [peut] tomber de tous les côtés, mais elle est plus longue du côté du pôle caché. \newpage\phantomsection \index{BHASAY@\RL{ws`}!ASAYAI@\RL{s`aT m^srq / m.grb}, amplitude Est / Ouest} \includepdf[pages=18,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Pour se représenter facilement les positions, supposons que la latitude du lieu est $70$ degrés Nord, que l'arc toujours visible [contient] les Gémeaux et le Cancer, que l'arc toujours caché [contient] le Sagittaire et le Capricorne, que l'arc qui se lève à rebours et se couche à l'endroit va du début du Verseau à la fin du Taureau, et que celui qui se lève à l'endroit et se couche à rebours va du début du Lion à la fin du Scorpion. Si la tête du Cancer est au Sud, à sa hauteur maximale $43;35$, alors le pôle visible est au Nord sur le méridien du lieu à sa hauteur minimale $46;25$, le début de la Balance s'apprête à se lever au point du lever des équinoxes, le début du Bélier s'apprête à se coucher au point de leur coucher, et la moitié visible de l'orbe de l'écliptique est au Sud, d'Ouest en Est, comme on voit sur la figure. Que les orbes soient mus par le premier mouvement, alors le début du Cancer s'abaisse vers l'Ouest, le pôle de l'écliptique s'élève vers l'Est, le début du printemps se couche, le début de l'automne se lève, et de même pour les deux arcs qu'ils délimitent ; [le long de l'horizon], la distance entre le lever de chaque partie -- resp. le coucher de la partie opposée -- et le lever de l'équinoxe -- resp. son coucher -- croît jusqu'à ce que vienne le tour des deux parties dont l'une touche l'horizon sans [jamais] se coucher et l'autre le touche sans [jamais] se lever. Alors la Balance et le Scorpion se sont levés à l'endroit, et leurs amplitudes Est ont occupé tout le quart Sud-Est.
Le Bélier et le Taureau se sont couchés à l'endroit, et leurs amplitudes Ouest ont occupé tout le quart Nord-Ouest. Le début du Sagittaire est au point Sud où il touche l'horizon ; et le début des Gémeaux touche l'horizon au Nord. Le pôle visible de l'écliptique est du côté Est, entre ses hauteurs maximale et minimale, sur le cercle origine des azimuts ; son autre pôle est à l'opposé. La moitié visible de la ceinture de l'écliptique est maintenant à l'Ouest, du Sud au Nord ; et sa moitié cachée est à l'opposé. L'intersection de la ceinture de l'écliptique et de l'horizon est aux points Nord et Sud. Voir la figure. Que les orbes [continuent] de se mouvoir, alors le début des Gémeaux prend de la hauteur en allant vers l'Est. Adjacente, la fin du Taureau se lève peu à peu -- de sorte que le lever de chaque partie soit plus proche du lever de l'équinoxe que les levers des parties qui se lèvent avant elle. Le Taureau se lève, puis la fin du Bélier, puis son début. Le quart Nord-Est est entièrement occupé par les amplitudes Est de ces signes. Finalement le début du Bélier atteint son lever. En face de cela, le début du Sagittaire se met à s'abaisser sous l'horizon, la fin du Scorpion -- qui lui est adjacente -- se couche, il se couche peu à peu, puis la Balance se couche de sa fin jusqu'à son début, et leurs amplitudes Ouest occupent tout le quart Sud-Ouest. Le début de la Balance atteint son coucher. \newpage\phantomsection \begin{center} \small\input{fig031.pdf_tex}\normalsize \end{center} \newpage\phantomsection \begin{center} \small\input{fig030.pdf_tex}\normalsize \end{center} \newpage\phantomsection \index{AMAQBC@\RL{.hrk}!AMAQBCAI AKAGBFBJAI@\RL{.hrkaT _tAnyaT}, deuxième mouvement} \index{AWBDAY@\RL{.tl`}!AWBDAY@\RL{.tl`}, se lever ($\neq$ \RL{.grb})} \index{AZAQAH@\RL{.grb}!AZAQAH@\RL{.grb}, se coucher ($\neq$ \RL{.tl`})} \includepdf[pages=19,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent Le début du Cancer atteint le méridien du côté Nord où il est à sa hauteur minimale $3;35$ et où le pôle de l'écliptique est à sa hauteur maximale du côté Sud, $86;25$. La moitié visible de l'orbe de l'écliptique est du côté Nord entre le lever des équinoxes et leur coucher, mais dans un ordre différent de l'ordre habituel. Sa moitié cachée est à l'opposé. La ceinture de d'écliptique et l'horizon se coupent aux points Est et Ouest. Voir la figure. Que les orbes [continuent] de se mouvoir, alors les Poissons se lèvent de leur fin jusqu'à leur début, puis le Verseau, de sa fin jusqu'à son début, de sorte que leurs amplitudes Est occupent tout le quart Sud-Est. En face d'eux, la Vierge se couche de sa fin jusqu'à son début, puis le Lion, de sa fin jusqu'à son début, leurs amplitudes Ouest occupant tout le quart Nord-Ouest, et le début du Verseau vient toucher l'horizon au point Sud. La moitié visible du cercle de l'écliptique est entre les deux, du côté Est. Le début du Cancer a pris de la hauteur à l'Est, et le pôle s'est mis à descendre à l'Ouest. Voir la figure. Que les orbes [continuent] de se mouvoir, alors le début du Lion s'élève de l'horizon en se dirigeant vers l'Est, et les parts [du Lion] se lèvent dans l'ordre jusqu'à sa fin, puis [jusqu'à] la fin de la Vierge, de même. Leurs amplitudes Est occupent tout le quart Nord-Est. 
En face de cela, le début du Verseau s'abaisse sous terre à partir de l'horizon, et le Verseau se couche puis les Poissons dans l'ordre [des signes], occupant de leurs amplitudes Ouest tout le quart Sud-Ouest, le lever atteignant le début de la Balance, et le coucher atteignant le début du Bélier ; car la proximité des levers des parts [de l'écliptique] -- resp. de leurs couchers -- et du lever -- resp. du coucher -- de l'équinoxe augmente. Pendant ce temps-là, le début du Cancer a atteint sa hauteur maximale au méridien, et le pôle visible de l'écliptique a atteint sa hauteur minimale au méridien. La moitié visible de l'orbe de l'écliptique est du côté du Sud. Le cercle est complet, et on retrouve les positions supposées au commencement. En ces horizons, si la latitude du pays se rapproche de la latitude maximale et que la hauteur de l'équateur par rapport à l'horizon devient petite, il peut arriver qu'un astre ayant une trajectoire [diurne] très proche de l'horizon se déplace vers une autre trajectoire [diurne] à cause du deuxième mouvement. Il [peut] alors disparaître après avoir été visible bien qu'il soit dans la moitié Est ; ou il peut apparaître après avoir été caché bien qu'il soit dans la moitié Ouest. Il se serait alors couché à l'Est, ou levé à l'Ouest. Ceci compte parmi les questions singulières. \newpage\phantomsection \begin{center} \small\input{fig033.pdf_tex}\normalsize \end{center} \newpage\phantomsection \begin{center} \small\input{fig032.pdf_tex}\normalsize \end{center} \newpage\phantomsection \index{AJAGBHAOBHASBJBHAS@\RL{tAwdUsyUs}, Théodose} \index{AMAQBC@\RL{.hrk}!AMAQBCAI AKAGBFBJAI@\RL{.hrkaT _tAnyaT}, deuxième mouvement} \index{AWBDAY@\RL{.tl`}!AWBDAY@\RL{.tl`}, se lever ($\neq$ \RL{.grb})} \index{AZAQAH@\RL{.grb}!AZAQAH@\RL{.grb}, se coucher ($\neq$ \RL{.tl`})} \index{ATAQBB@\RL{^srq}!BEATAQBB@\RL{ma^sraq}, Est} \index{AZAQAH@\RL{.grb}!BEAZAQAH@\RL{ma.grab}, Ouest} \index{AOBHAQ@\RL{dwr}!BEAOAGAQ@\RL{mdAr}, trajectoire, trajectoire diurne (\textit{i. e.} cercle parallèle à l'équateur)} \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{BFBGAQ@\RL{nhr}!BFBGAGAQ@\RL{nahAr}, jour} \index{ACBHAL@\RL{'awj}!ACBHAL ATBEAS@\RL{'awj al-^sams}, Apogée du Soleil} \index{AVBHAB@\RL{.daw'}!AVBHAB@\RL{.daw'}, clarté} \index{AUAHAM@\RL{.sb.h}!AUAHAM@\RL{.sb.h}, aurore} \index{ATBABB@\RL{^sfq}!ATBABB@\RL{^sfq}, crépuscule} \addcontentsline{toc}{chapter}{II.6 Particularités des horizons dont la latitude est égale à un quart de cercle} \includepdf[pages=20,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \newpage\phantomsection \begin{center} \Large Sixième section \normalsize Particularités des horizons dont la latitude est égale à un quart de cercle \end{center} En ces lieux, un des pôles de l'équateur est au zénith, et l'équateur et l'horizon sont confondus. Par le premier mouvement, les cieux tournent comme une meule. L'Est ne se distingue pas de l'Ouest, puisque lever et coucher sont possibles dans toutes les directions. La hauteur maximale est égale à l'inclinaison de l'écliptique, et de même pour l'abaissement maximal\footnote{\textit{i. e.} la hauteur minimale comptée négativement sous l'horizon.}. La moitié de l'orbe de l'équateur située du côté du pôle visible est toujours visible, et l'autre moitié est toujours cachée. 
De même pour l'orbe de l'écliptique ; ainsi, il fait jour tant que le Soleil est dans la moitié visible [de l'écliptique], et il fait nuit tant qu'il est dans l'autre moitié. L'année est un jour et une nuit, mais les deux diffèrent à cause de la vitesse ou de la lenteur du mouvement [du Soleil]. \`A la date présente, au pôle Nord, le jour dure sept nychtémères de plus que la nuit, parce que l'Apogée du Soleil est vers la fin des Gémeaux et que son périgée est vers la fin du Sagittaire. C'est ainsi si l'on considère que le jour va du lever [du Soleil] à son coucher et que la nuit va de son coucher à son lever ; mais si l'on considère que le jour va de l'apparition de la clarté -- qui fait disparaître les fixes -- jusqu'à sa disparition, et que c'est le contraire pour la nuit, alors le jour dure plus de sept mois et la nuit dure presque cinq mois selon Théodose dans les \textit{Habitations}. Si l'on considère que le jour va du lever de l'aurore jusqu'au coucher du crépuscule, alors le jour dure neuf mois et sept jours environ -- de nos jours. En effet le lever de l'aurore et le coucher du crépuscule durent chacun cinquante jours -- de nos jours -- comme le montrera leur description ultérieurement. En ces lieux, le lever ou le coucher des astres, causé par le deuxième mouvement, ne se fait pas en une position déterminée de l'horizon. Toute étoile fixe dont la latitude est nulle sera pendant douze mille ans au dessus de l'horizon, et pendant la même durée en dessous. [Toute étoile fixe] dont la latitude est inférieure à l'inclinaison [de l'écliptique] aura un lever et un coucher, mais la durée de sa visibilité et de son invisibilité dépendra de la distance de sa trajectoire par rapport à la ceinture de l'écliptique\footnote{Ici, ``trajectoire'' ne désigne pas une trajectoire diurne bien qu'il s'agisse du même mot. C'est plutôt la trajectoire de l'étoile par rapport au référentiel attaché au neuvième orbe~: il s'agit d'une trajectoire circulaire dans un plan parallèle à la ceinture de l'écliptique. Rappelons que pour {Ibn al-\v{S}\=a\d{t}ir} le mouvement de précession, \textit{i. e.} le ``deuxième mouvement'', est le mouvement de rotation du huitième orbe~--~les étoiles fixes~--~autour d'un axe incliné par rapport à l'axe Nord-Sud au sein du référentiel attaché au neuvième orbe.}. \newpage\phantomsection \includepdf[pages=21,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent [Toute étoile fixe] dont la latitude est égale à [l'inclinaison] touche l'horizon à chaque révolution due au deuxième mouvement ; une telle étoile -- de même qu'une étoile ayant une latitude supérieure à l'inclinaison -- n'aura ni lever ni coucher, et elle sera toujours visible ou toujours cachée. Qu'on se souvienne de ce qui a été mentionné au sujet de la position de l'orbe à cause des deux premiers mouvements~; ceci dépend de cela. 
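L'ensemble de cette section se laisse résumer, en notation moderne, par le fait qu'en ces horizons la hauteur d'un astre se confond avec sa déclinaison. C'est une glose de l'éditeur, où $\delta$ désigne la déclinaison, $\beta$ la latitude écliptique, $\lambda$ la longitude et $\varepsilon$ l'inclinaison de l'écliptique~:
\[
h = \delta, \qquad -\varepsilon \le h_{\odot} \le +\varepsilon, \qquad \sin\delta = \sin\beta\,\cos\varepsilon + \cos\beta\,\sin\varepsilon\,\sin\lambda .
\]
La dernière relation donne $\beta - \varepsilon \le \delta \le \beta + \varepsilon$ au cours d'une révolution due au deuxième mouvement~: une étoile de latitude inférieure à l'inclinaison a un lever et un coucher, une étoile de latitude égale touche l'horizon, une étoile de latitude supérieure reste toujours visible ou toujours cachée, comme il vient d'être dit.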
Ici s'achève le compte-rendu des caractéristiques des contrées [quant aux] trajectoires diurnes \textit{etc.} \newpage\phantomsection \index{AOAQAL@\RL{drj}!AOAQALAI@\RL{drjaT j darjAt, drj}, degré} \index{AOAQAL@\RL{drj}!AOAQAL ASBHAGAB@\RL{drj al-sawA'}, degrés égaux ($\neq$ leur coascension)} \index{ALBEBGAQ@\RL{jmhr}!ALBEBGBHAQ@\RL{al-jumhUr}, les Grecs} \index{AWBDAY@\RL{.tl`}!BEAWAGBDAY@\RL{m.tAl`}, coascension} \index{AYAOBD@\RL{`dl}!BEAYAOAOBD BFBGAGAQ@\RL{m`ddl al-nhAr}, équateur, plan de l'équateur} \index{AHAQAL@\RL{brj}!BEBFAWBBAI AHAQBHAL@\RL{min.taqaT al-burUj}, écliptique, ceinture de l'écliptique} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \index{AKBDAK@\RL{_tl_t}!BEAKBDAK@\RL{m_tl_t}, triangle (éventuellement sphérique)} \index{AHAQAL@\RL{brj}!AHAQAL@\RL{brj}, signe (du zodiaque) |see{\RL{falak al-burUj}}} \index{BABDBC@\RL{flk}!BABDBC AHAQBHAL@\RL{flk al-burUj}, orbe de l'écliptique} \addcontentsline{toc}{chapter}{II.7 Les coascensions de l'écliptique} \includepdf[pages=22,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \newpage\phantomsection \begin{center} \Large Septième section \normalsize Les coascensions de l'écliptique \end{center} La \emph{coascension}, ce sont les parts d'arc de l'équateur qui se lèvent en même temps que se lèvent des parts d'arc de l'écliptique. On dit que l'arc de l'écliptique [est mesuré] en \emph{degrés égaux}. Les coascensions varient selon les horizons. Leur origine est l'équinoxe de printemps chez les Grecs. Chez certains [autres savants] c'est le solstice d'hiver~; ce qu'ils font montre qu'ils adoptent cette hypothèse. \`A l'équateur terrestre, chaque quart délimité par deux des quatre points des équinoxes et des solstices se lève avec un quart [de l'équateur]. En effet, quand le point de l'équinoxe -- qui est l'une des deux bornes [de chacun] des deux quarts des ceintures de l'équateur et de l'écliptique -- atteint le zénith, alors les solstices atteignent l'horizon. Et comme le [grand cercle] passant par les quatre pôles est confondu avec lui, alors [l'horizon] coupe perpendiculairement les ceintures de l'écliptique et de l'équateur. On raisonne de manière analogue pour chacun des quatre quarts. Ce qui se lève, de l'équateur, avec le signe [du zodiaque] suivant un des quatre points, n'est pas [un arc] égal, c'est-à-dire trente parts. Un signe adjacent à un point des équinoxes est plus grand que sa coascension, parce que l'arc d'écliptique dont il est question est plus grand que l'arc correspondant de l'équateur. En effet dans le triangle formé par eux et par l'horizon, cet arc-là est le côté opposé à l'angle droit formé par l'équateur et l'horizon, alors que la coascension est le côté opposé à l'angle aigu formé par l'écliptique et l'horizon ; et le côté opposé à l'angle droit est plus grand que le côté opposé à l'angle aigu. Ce qu'on vient de dire se produit aussi pour deux signes\footnote{\textit{i. e.} un arc de 60° adjacent à l'équinoxe le long de l'écliptique.} suivant le même point et pour leur coascension. Quant à [l'arc] complément du quart [de la circonférence] de l'écliptique et adjacent au solstice, sa coascension est plus grande que lui. On le démontre ainsi~: ce qui reste de l'équateur pour compléter le quart [de l'équateur] est supérieur à ce qui reste à partir du point de l'écliptique pour compléter le quart [de l'écliptique], et l'excédent de ceci est égal au manque de cela.
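Les comparaisons qui précèdent se laissent contrôler par la formule de la coascension à la sphère droite, donnée ici à titre de vérification de l'éditeur, en supposant une inclinaison $\varepsilon$ voisine de $23;35$~:
\[
\tan \alpha_{0}(\lambda) = \cos\varepsilon \, \tan\lambda ,
\]
d'où $\alpha_{0}(30) \approx 27;53$~: le signe adjacent à l'équinoxe, qui vaut trente parts, excède sa coascension~; et les deux signes restants du quart, adjacents au solstice, valent soixante parts pour une coascension $90 - \alpha_{0}(30) \approx 62;7$ qui les excède, comme le dit le texte.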
De l'excédent et du manque en coascension, et de l'égalité de cet excédent et de ce manque, il apparaît aussi que deux arcs égaux et à distances égales de l'un des quatre points ont des coascensions égales à l'équateur terrestre. \newpage\phantomsection \index{ASAWAM@\RL{s.t.h}!ASAWAM BEASAJBH@\RL{\vocalize s.t.h mstwiN}, plan} \index{BEAQAQ@\RL{mrr}!AOAQALAI BEBEAQAQ@\RL{darjaT mamarr}, degré de transit} \index{BEAQAQ@\RL{mrr}!BEBEAQAQ@\RL{mamarr j At}, passage, transit (en général au méridien)} \index{BEAQAQ@\RL{mrr}!BEAQBHAQ@\RL{murUr}|see{\RL{mamarr}}} \index{BEBJBD@\RL{myl}!AOAGAEAQAI BEBJBD@\RL{dA'iraT al-mIl}, cercle de déclinaison} \index{AKBDAK@\RL{_tl_t}!BEAKBDAK@\RL{m_tl_t}, triangle (éventuellement sphérique)} \index{AZAQAH@\RL{.grb}!BEAZAGAQAH@\RL{ma.gArib}, co-coucher (<<~codescension~>>)} \includepdf[pages=23,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Si on connaît la coascension d'un quart [de l'écliptique], alors on connaît aussi la coascension des autres quarts. De la même manière, de la coascension d'un signe on peut connaître la coascension des suivants, jusqu'à tout connaître. Divisons la ceinture de l'écliptique en quatre segments dont les extrémités sont les milieux des quarts [ci-dessus]. Chaque segment dont le milieu est une équinoxe est plus grand que sa coascension, et chaque segment dont le milieu est un solstice est plus petit que sa coascension. Il en est des transits de la ceinture de l'écliptique et de l'équateur aux méridiens et aux cercles de déclinaisons dans toutes les contrées comme il en est de leurs coascensions à l'équateur terrestre, puisque chaque [cercle de déclinaison] est un horizon parmi les horizons de l'équateur terrestre. Et puisque le co-coucher est [l'arc] de l'équateur qui se couche en même temps que se lève [l'arc mesurant] la coascension, alors le co-coucher de chaque signe est égal à sa coascension. Quand l'horizon passe par les pôles de l'équateur et de l'écliptique, c'est-à-dire quand le point de l'équinoxe est au zénith [ou au nadir], le fait que l'horizon coupe perpendiculairement ces deux cercles se démontre par la contraposée, comme suit. Chacun des deux arcs (celui de l'équateur et celui de l'écliptique) est un côté opposé à un [des deux angles] angles droits. Il y a deux angles droits dans le triangle, car s'il n'y avait pas deux angles droits, alors un quart de la ceinture de l'écliptique serait nécessairement plus grand qu'un quart de l'équateur ; or l'hypothèse est qu'ils sont égaux. Certes l'impossibilité [de deux angles droits dans un triangle] n'a lieu que dans les triangles plans. Passons aux horizons inclinés. Là, une moitié [se lève] en même temps qu'une moitié quand elles sont délimitées par les équinoxes -- et non un quart en même temps qu'un quart. En effet l'équateur n'est pas perpendiculaire à l'horizon. Si un quart adjacent à une équinoxe et situé du côté du pôle visible se lève, alors il sera plus grand que sa coascension, d'autant plus grand que l'angle obtus sera plus grand qu'un angle droit ; car [ce quart] est le côté opposé à un angle obtus dans le triangle mentionné ci-dessus, et sa coascension est le côté opposé à un angle aigu. S'il est au contraire situé du côté du pôle caché, alors sa coascension sera plus grande que lui, puisqu'on sera dans le régime inverse. De là, on démontre que les coascensions de deux portions égales situées à des distances égales d'une des deux équinoxes sont égales. 
Si l'on divise l'orbe de l'écliptique en deux moitiés au milieu de chacune desquelles se situe un des deux points des équinoxes, alors celle dont le milieu est le passage du côté du pôle visible sera plus grande que son lever, et inversement. \newpage\phantomsection \index{BFBGAQ@\RL{nhr}!AJAYAOBJBD BFBGAGAQ@\RL{ta`dIl al-nahAr}, équation du jour} \index{AWBDAY@\RL{.tl`}!AJAYAOBJBD BEAWAGBDAY@\RL{ta`dIl ma.tAli`}, équation de la coascension} \includepdf[pages=24,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent La coascension des arcs au Nord dans les horizons du Nord est égale à la coascension des arcs opposés au Sud dans les horizons du Sud ; de même pour les horizons du Sud. Quel que soit l'horizon, le co-coucher d'un arc est toujours égal à la coascension de l'arc opposé. En un lieu dont la latitude est égale au complément de l'inclinaison de l'écliptique, on l'a déjà montré plus haut, la moitié de l'écliptique se lève avec un tour complet de l'équateur, et l'autre moitié se lève instantanément ; quant aux couchers, il faut échanger les deux moitiés. Là où la latitude dépasse le complément de l'inclinaison sans toutefois atteindre un quart de circonférence, il y a des arcs toujours visibles de l'orbe de l'écliptique, et ce qui est à l'opposé est toujours caché. On divise l'équateur en des parties qui se lèvent en même temps que les parties de l'écliptique se levant à rebours, et des parties qui se lèvent en même temps que les parties de l'écliptique se levant à l'endroit. Ainsi à $70$ degrés de latitude, les Gémeaux et le Cancer sont toujours visibles, et le Sagittaire et le Capricorne toujours cachés~; quand l'équinoxe de printemps se lève, les Poissons se lèvent après elle, à rebours, de leur fin jusqu'à leur début, puis le Verseau [se lève] à rebours aussi, puis le Lion commence à se lever, à partir de son début, à l'endroit, puis la Vierge puis la Balance puis le Scorpion, de même ; lorsqu'est atteint le début du Sagittaire, la fin du Taureau commence à se lever à rebours, puis le Taureau et le Bélier se lèvent à rebours, jusqu'à ce que le point de l'équinoxe de printemps revienne à l'horizon. On traite de la même manière les autres horizons, ainsi que les couchers. Les portions de l'écliptique ou de l'équateur qui ne se lèvent ni ne se couchent n'ont ni coascension ni co-coucher. C'est pourquoi il n'y a ni coascension ni co-coucher à la latitude de $90$ degrés. \emph{L'équation de la coascension} d'un pays est ce qu'on ajoute ou retranche à la coascension de la sphère droite\footnote{\textit{i. e.} aux coascensions calculées à l'équateur terrestre.}. C'est l'équation du jour, et le co-coucher de l'arc est comme sa coascension\footnote{En effet, quand le Soleil a un lever et un coucher par nychtémère, la durée du jour est la coascension d'une moitié de l'écliptique.}. La partie \emph{ascendante} de l'orbe de l'écliptique est celle qui vient à l'horizon vers l'Est, et sa partie \emph{descendante} est la partie opposée, à l'horizon Ouest. 
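L'équation de la coascension se traduit, en termes modernes et sous réserve des conventions de signe, par la différence ascensionnelle $d$. Glose de l'éditeur, avec $\varphi$ la latitude du lieu, $\delta$ la déclinaison et $\alpha_{0}$ la coascension de la sphère droite~:
\[
\sin\delta = \sin\varepsilon\,\sin\lambda, \qquad \sin d = \tan\varphi\,\tan\delta, \qquad \alpha_{\varphi}(\lambda) = \alpha_{0}(\lambda) \mp d(\lambda),
\]
le signe $-$ valant pour les degrés dont la déclinaison est du côté du pôle visible, le signe $+$ pour les autres~; l'arc diurne du Soleil vaut alors $180 + 2d$, ce qui rend compte de l'identification faite ici entre équation de la coascension et équation du jour.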
\newpage\phantomsection \index{BJBHBE@\RL{ywm}!BJBHBE AHBDBJBDAJBG@\RL{al-yawm bilaylatihi j al-'ayyAm bilayAlIhA}, nychtémère (\textit{litt.} le jour avec sa nuit)} \index{AWBDAY@\RL{.tl`}!BEAWAGBDAY@\RL{m.tAl`}, coascension} \index{BJBHBE@\RL{ywm}!BJBHBE AMBBBJBBBJ@\RL{ywm .hqIqy}, jour vrai} \index{BJBHBE@\RL{ywm}!BJBHBE BHASAWBJ@\RL{ywm ws.ty}, jour moyen} \index{BHASAW@\RL{ws.t}!BHASAW ATBEAS@\RL{ws.t al-^sams}, Soleil moyen} \index{ANBDBA@\RL{_hlf}!ACANAJBDAGBA@\RL{i_htilAf}, irrégularité, anomalie, variation} \index{BEAQAQ@\RL{mrr}!BEBEAQAQ@\RL{mamarr j At}, passage, transit (en général au méridien)} \index{ACBHAL@\RL{'awj}!ACBHAL ATBEAS@\RL{'awj al-^sams}, Apogée du Soleil} \addcontentsline{toc}{chapter}{II.8 Comment déterminer la durée des jours avec leurs nuits, et leurs équations} \includepdf[pages=25,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \newpage\phantomsection \begin{center} \Large Huitième section \normalsize Comment déterminer la durée des jours avec leurs nuits, et leurs équations \end{center} Le \emph{jour} désignera le nychtémère\footnote{littéralement, le <<~jour avec sa nuit~>>.}. Il y a le jour \emph{vrai} et le jour \emph{moyen}. Le jour vrai est la durée entre le lever du Soleil, son coucher ou son transit au méridien, d'une part, et son retour à ce qu'on a posé comme origine parmi les trois, d'autre part, en raison du premier mouvement ; sa grandeur est une circonférence de l'équateur, plus l'[arc d'équateur] qui se lève en même temps que l'arc parcouru par le Soleil en raison de son mouvement propre entre le début et la fin [du jour]. Cet arc [de l'écliptique] est plus petit [quand le Soleil est] dans la moitié la plus distante, et il est plus grand [quand le Soleil est] dans la moitié la plus proche ; et l'arc d'équateur qui se lève en même temps qu'un arc [donné] de l'écliptique est tantôt plus petite, tantôt plus grande. En raison de ces deux variations, la grandeur d'un nychtémère varie. L'écart est petit, donc cette variation est insensible sur une durée d'un petit nombre de jours (un ou deux jours par exemple), mais elle est sensible sur une durée d'un grand nombre de jours. Les astronomes sont contraints d'utiliser des nychtémères de grandeurs égales pour déterminer les mouvements moyens et d'autres choses dont ils ont besoin~; aussi ont-ils posé que l'excès\footnote{\textit{i. e.} la différence entre la durée du jour et la période du mouvement de l'équateur.} est égal au mouvement moyen du Soleil pendant un nychtémère, et ils ont appelé ces jours ``jours moyens''. Chacun est égal à une rotation de l'équateur, plus le Soleil moyen. Ils l'ont appelé ``moyen'' car il implique le parcours moyen [du Soleil], de même qu'ils ont appelé le premier ``jour vrai'' car il implique le parcours vrai [du Soleil], c'est-à-dire son mouvement en longitude. Les deux [durées ainsi définies] sont parfois égales, parfois différentes, parce que la coascension de son mouvement en longitude est parfois supérieure, parfois inférieure [au mouvement en longitude]~; outre ces deux cas, le mouvement en longitude est tantôt égal au mouvement moyen (quand [le Soleil] est à une position où son mouvement est moyen), tantôt supérieur (quand [le Soleil] est dans la moitié [de sa trajectoire passant par] le périgée), tantôt inférieur (quand [le Soleil] est dans la moitié [de sa trajectoire passant par] l'Apogée). On aura donc six divisions. 
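En notation moderne, et toujours à titre de glose de l'éditeur, si $\lambda$ désigne la longitude vraie du Soleil, $\bar{\lambda}$ sa longitude moyenne, $\alpha$ la coascension comptée par rapport au cercle de référence retenu (la sphère droite si l'origine est le passage au méridien), et si l'accent marque les valeurs prises un nychtémère plus tard, alors
\[
J_{\mathrm{vrai}} = 360 + \alpha(\lambda') - \alpha(\lambda), \qquad
J_{\mathrm{moyen}} = 360 + (\bar{\lambda}' - \bar{\lambda}).
\]
Les six divisions annoncées correspondent aux comparaisons possibles entre $\alpha(\lambda') - \alpha(\lambda)$ et $\lambda' - \lambda$ d'une part, entre $\lambda' - \lambda$ et $\bar{\lambda}' - \bar{\lambda}$ d'autre part.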
\newpage\phantomsection \index{AYAOBD@\RL{`dl}!AJAYAOBJBD ACBJBJAGBE@\RL{ta`dIl al-'ayyAm}, équation des nychtémères} \index{BFAUBA@\RL{n.sf}!AOAGAEAQAI BFAUBA BFBGAGAQ@\RL{dA'iraT n.sf al-nhAr}, méridien} \includepdf[pages=26,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Les deux [durées] sont égales quand coascension et [Soleil] moyen sont égaux, c'est-à-dire quand sont égaux les deux excédents ou défauts par rapport au mouvement en longitude. L'écart [entre les deux durées] s'appelle \emph{équation des nychtémères}. Elle n'est sensible que sur une durée d'un grand nombre de jours -- pas un jour ni deux. Pour la déterminer quantitativement, il faut connaître les deux variations. L'écart dû [à la variation du] parcours du Soleil est quatre fois l'anomalie maximale qui est environ deux degrés, parce que le mouvement en longitude dans la moitié [passant par] l'Apogée est égal au mouvement moyen moins deux fois le maximum de ce qu'on lui ajoute, et qu'il en est de même dans la moitié [passant] par le périgée. Il n'y a pas de contradiction entre le fait que l'arc en longitude soit supérieur à l'arc moyen -- comme il arrive dans les orbes du Soleil -- et le fait que le mouvement en longitude soit inférieur au mouvement moyen -- comme nous venons de dire. L'écart dû à la coascension dépend des horizons, donc ça n'est pas la même chose pour toutes les contrées. Si l'on pose que les jours commencent quand le Soleil arrive à l'horizon, de sorte que l'origine est l'arrivée du Soleil à l'horizon Est, alors on compte l'écart entre les degrés égaux et leur coascension par rapport au lieu [donné]. Si l'on pose que l'origine est l'arrivée du Soleil à l'horizon Ouest, alors on compte l'écart entre les degrés égaux et la coascension [de l'arc] opposé, par rapport au lieu [donné]. Si l'on pose que l'origine est l'arrivée [du Soleil] au méridien, alors l'écart est le même pour tous les horizons, et les jours ont la même durée dans tous [les horizons]~: on compte le déplacement et la coascension par rapport à l'équateur terrestre. Et c'est pourquoi le méridien a été choisi comme origine. Ci-dessus\footnote{\textit{cf.} section précédente.}, on a divisé l'orbe de l'écliptique en quatre. Parmi les quatre portions, il y en a deux qui ont chacune un équinoxe en son milieu et qui sont chacune supérieure à sa coascension. L'une va du milieu du Verseau au milieu du Taureau, l'autre va du milieu du Lion au milieu du Scorpion. L'excédent de chacune sur sa coascension par rapport à l'équateur terrestre est cinq degrés. Les deux autres portions ont chacune un solstice en son milieu et elles sont chacune inférieure à sa coascension. L'une va du milieu du Taureau au milieu du Lion, et l'autre va du milieu du Scorpion au milieu de Verseau. Le défaut de chacune à sa coascension par rapport à l'équateur terrestre est aussi cinq degrés. \newpage\phantomsection \index{ASBFBH@\RL{snw}!ASBFAI ATBEAS@\RL{sanaT al-^sams}, année solaire} \includepdf[pages=27,pagecommand={\thispagestyle{plain}}]{edit2.pdf} En combinant les deux écarts par excès et par défaut, cette variation fera la grandeur totale de l'écart entre les jours moyens et les jours vrais. On ne peut faire autrement que choisir un jour de l'année comme origine. Les autres jours lui seront rapportés, de sorte que midi de ce jour soit l'origine des jours moyens ainsi que des jours vrais. 
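La valeur de cinq degrés mentionnée ci-dessus se vérifie au moyen de la coascension de la sphère droite (vérification de l'éditeur, pour une inclinaison $\varepsilon$ comprise entre $23;30$ et $23;35$)~: l'arc de quatre-vingt-dix degrés centré sur un équinoxe, du milieu du Verseau au milieu du Taureau, a pour coascension $2\,\alpha_{0}(45)$, et
\[
90 - 2\,\alpha_{0}(45) = 90 - 2\arctan(\cos\varepsilon) \approx 5~;
\]
par symétrie, l'arc centré sur un solstice présente le même écart en défaut.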
Quel que soit le jour de l'année choisi comme origine du temps, l'écart entre les jours moyens et les jours vrais écoulés depuis ce jour sera tantôt additif, tantôt soustractif ; mais si l'on prend l'origine vers la fin du Verseau, alors les jours vrais seront toujours inférieurs au jours moyens~; et si l'on prend l'origine vers le début du Scorpion, alors les jours vrais seront toujours supérieurs aux jours moyens. Les hommes de métier se sont accordés à choisir [l'origine] vers la fin du Verseau. Voici la figure des six divisions, sous l'hypothèse que l'Apogée est à la fin des Gémeaux. L'écart dû à l'anomalie solaire change à cause du mouvement de l'Apogée, mais seulement sur de longues durées. Ainsi s'achève l'explication de l'écart dans la grandeur des jours. L'explication de [leur] grandeur à un instant donné [figure] dans les livres pratiques. Cet écart s'appelle équation des nychtémères. Au cours d'une révolution complète, [la durée totale des] jours vrais et [la durée totale des] jours moyens sont égales, et la considération précédente tombe. \newpage\phantomsection \begin{center} \small\input{fig034.pdf_tex}\normalsize \end{center} \begin{center} \small\input{fig035.pdf_tex}\normalsize \end{center} \newpage\phantomsection \index{ATAYAY@\RL{^s``}!ATAYAGAY@\RL{^su`A`}, rayon (de lumière)} \index{BFBHAQ@\RL{nwr}!BFBHAQ@\RL{nwr}, lumière} \index{AUAHAM@\RL{.sb.h}!AUAHAM@\RL{.sb.h}, aurore} \index{ATBABB@\RL{^sfq}!ATBABB@\RL{^sfq}, crépuscule} \index{AVBHAB@\RL{.daw'}!AVBHAB@\RL{.daw'}, clarté} \index{AXBDBD@\RL{.zll}!BEANAQBHAW AXBDBD@\RL{ma_hrU.t al-.zill}, cône d'ombre} \index{AUAHAM@\RL{.sb.h}!AUAHAM BCAGAPAH@\RL{.sb.h kA_dib}, aurore trompeuse} \index{AUAHAM@\RL{.sb.h}!AUAHAM AUAGAOBB@\RL{.sb.h .sAdiq}, aurore franche} \index{APBFAH@\RL{_dnb}!APBFAH ASAQAMAGBF@\RL{_danb al-sr.hAn}, queue du loup|see{\RL{.sb.h kA_dib}}} \index{BFAXAQ@\RL{n.zr}!BFAXAQ@\RL{n.zr}, {\oe}il (de l'observateur)} \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \addcontentsline{toc}{chapter}{II.9 L'aurore et le crépuscule} \includepdf[pages=28,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \begin{center} \Large Neuvième section \normalsize L'aurore et le crépuscule. \end{center} L'aurore est l'illumination de l'horizon Est à l'approche du Soleil, et le crépuscule est l'illumination de l'horizon Ouest après que le Soleil s'y soit retiré. La cause de l'aurore est la suivante~: pendant la nuit, de dessous la Terre, quand le Soleil approche l'horizon Est, le cône d'ombre de la Terre (en quoi consiste la nuit) penche vers l'Ouest. Plus il penche, plus le rayon de lumière à son bord s'approche du regard, du côté de l'Est, jusqu'à être vu, petit à petit. Pour se le représenter, imaginons un triangle à angles aigus dont la base soit l'horizon et dont les côtés soient à la surface du cône dans un plan passant par le centre du Soleil, le centre de la Terre et le sommet du cône. Imaginons un autre triangle ayant un angle droit, dont la base soit entre l'horizon et le \emph{premier point vu de l'aurore} (c'est-à-dire un point du côté du premier triangle), et dont les deux autres côtés soient issus de l'{\oe}il, l'un des deux passant par une des extrémités de la base et l'autre par l'autre, l'une [des deux extrémités de la base] étant le point mentionné, et l'autre un point de l'horizon. 
Comme [le côté] issu de l'{\oe}il et passant par le point [mentionné] est perpendiculaire [en ce point], alors [l'autre] côté [issu de l'{\oe}il] vers l'horizon est plus long car c'est l'hypoténuse\footnote{Parmi les directions situées dans le même cercle de hauteur que le Soleil, celle où l'observateur voit la clarté en premier est normale au cône d'ombre, car la distance à la surface du cône est minimale le long de la normale. C'est mieux expliqué chez \d{T}\=us{\=\i} \cite{altusi1993} p.~294-297.}. Ainsi ce qu'on voit [en premier] de l'aurore est comme une droite située au dessus de l'horizon~; ce qui est plus proche de l'horizon est [encore] obscur. On appelle cela \emph{ce qui en est vu de l'aurore en premier} parce que cela apparaît en premier à l'aurore, et \emph{aurore trompeuse} parce que cela [semble] être en contradiction avec l'obscurité de ce qui est plus proche encore du Soleil. [On l'appelle aussi] \emph{queue du loup} à cause du déploiement de son début et de la ponctualité de sa fin. Ensuite, si le Soleil s'approche beaucoup de l'horizon, la lumière se développe à l'horizon comme un demi-disque, et l'aurore devient alors \emph{franche}. L'horizon Est s'emplit de clarté, la moitié du ciel est atteinte, et cette clarté ne cesse plus de croître jusqu'à ce que rougeoie l'horizon, que le Soleil se lève, et que le cône d'ombre se couche. Plus le Soleil monte, plus le cône s'abaisse, jusqu'à midi ; alors le Soleil commence à se diriger vers l'Ouest, et l'ombre et ses limites vont vers l'Est, jusqu'à ce que le Soleil arrive à l'horizon Ouest, qu'il le fasse rougir, et que le cône [d'ombre] arrive à l'horizon Est. \newpage\phantomsection \begin{center} \input{fig036.pdf_tex} \end{center} \newpage\phantomsection \begin{center} \input{fig037.pdf_tex} \end{center} \newpage\phantomsection \includepdf[pages=29,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent Si le Soleil se couche et que se lève le cône d'ombre, alors le rougeoiement appelé crépuscule commence à décroître. \`A sa suite, la blancheur croît, jusqu'à ce que la blancheur soit complète et que la rougeur cesse, puis la blancheur commence à décroître, jusqu'à ce qu'il n'en reste plus qu'une clarté prolongée, comme l'aurore trompeuse. Puis commence la nuit. Les gens sont rendus favorables au dîner et au sommeil à l'injonction du crépuscule, et au déploiement dans l'accomplissement des besoins à l'aurore. Injonction du crépuscule~: qu'ils se distraient des [travaux] pointilleux. Conditions de l'aurore~: qu'ils s'occupent de [travaux] pointilleux. On a appris empiriquement que l'abaissement du Soleil sous l'horizon au commencement du lever de l'aurore et à la fin du coucher du crépuscule est de dix-huit degrés [mesurés] sur le cercle de hauteur passant par le centre du Soleil~; mais comme la coascension de [cet arc] est variable, la [durée] en heures entre le lever de l'aurore et le lever du Soleil est variable, de même que [la durée en heures] entre le coucher du Soleil et le coucher du crépuscule. \`A l'équateur terrestre, quand le Soleil est à l'un des deux équinoxes, cette grandeur de dix-huit degrés [équivaut] à une durée d'une heure et un cinquième d'heure. Quand le Soleil est à l'un des deux équinoxes, il n'y a aucun lieu sur Terre où la durée de l'aurore ou du crépuscule soit inférieure à cela. \`A l'équateur terrestre, en deux positions [de l'écliptique] équidistantes d'un des deux équinoxes, les durées de l'aurore ou du crépuscule sont égales.
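Les dix-huit degrés d'abaissement permettent de retrouver les valeurs numériques du passage~; c'est une vérification de l'éditeur. \`A l'équateur terrestre, à l'équinoxe, la trajectoire diurne du Soleil est perpendiculaire à l'horizon, de sorte que l'abaissement sous l'horizon égale l'arc de rotation écoulé depuis le coucher~:
\[
\frac{18}{15} = 1 + \frac{1}{5} \ \hbox{heure}.
\]
De même, l'aurore touche le crépuscule au solstice d'été dès que l'abaissement du Soleil à minuit, $90 - \varphi - \varepsilon$, ne dépasse pas dix-huit degrés, c'est-à-dire dès que $\varphi \ge 72 - \varepsilon \approx 48;30$ (avec $\varepsilon \approx 23;30$), valeur donnée un peu plus bas.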
La durée de l'aurore et du crépuscule dans le quatrième climat est deux heures quand le Soleil est vers le début du Cancer, et c'est environ une heure et un tiers d'heure quand il est vers le début du Capricorne. Aux lieux dont la latitude vaut $48;30$, quand le Soleil est au solstice d'été, l'aurore est contigüe au crépuscule. Là où la latitude est encore supérieure, leur contigüité [a lieu] selon que l'abaissement [maximal du Soleil] sous l'horizon est inférieur à cette grandeur, et alors le lever de l'aurore est avant le coucher du crépuscule\footnote{Pendant la période de l'année où l'abaissement maximal du Soleil sous l'horizon est inférieur à $18°$, aurore et crépuscule seront contigus.}. Lorsque la latitude [du lieu] est égale au complément de l'inclinaison [de l'écliptique], au solstice situé du côté opposé à la latitude, le Soleil culmine quand il est à l'horizon~; il ne se lève pas, et la durée [de l'aurore et du crépuscule] est cinq heures et un tiers d'heure. Ce qui reste des vingt-quatre heures est treize heures et un tiers~: c'est la durée de l'obscurité. Le Soleil touche [aussi] l'horizon [au solstice] situé du même côté que la latitude~; alors il ne disparaît pas, et il n'y a donc ni aurore ni crépuscule à ces dates. \newpage\phantomsection \includepdf[pages=30,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Lorsque la latitude [du lieu] est $90$ (à l'horizon où c'est comme une meule), la durée de chacun est de $50$ jours -- de nos jours à nous. L'explication des grandeurs que nous avons mentionnées et de celles que nous n'avons pas mentionnées figure dans les livres pratiques. \newpage\phantomsection \index{BFBGAQ@\RL{nhr}!AJAYAOBJBD BFBGAGAQ@\RL{ta`dIl al-nahAr}, équation du jour} \index{BFBGAQ@\RL{nhr}!BBBHAS BFBGAGAQ@\RL{qws al-nhAr}, arc diurne, durée du jour} \index{BDBJBD@\RL{lyl}!BBBHAS BDBJBD@\RL{qaws al-layl}, arc nocturne, durée de la nuit} \index{ASBHAY@\RL{sw`}!ASAGAYAGAJ BEAYBHALAI@\RL{sA`At ma`UjaT}, heures courbes} \index{ASBHAY@\RL{sw`}!ASAGAYAGAJ BEASAJBHBJAI@\RL{sA`At mustawiyaT}, heures égales} \index{AUAHAM@\RL{.sb.h}!AUAHAM AUAGAOBB@\RL{.sb.h .sAdiq}, aurore franche} \index{ATBGAQ@\RL{^shr}!ATBGAQ BBBEAQBJBJ BHASAW@\RL{^shr qamariyy}, mois lunaire} \index{ATBCBD@\RL{^skl}!AJATBCBCBDAGAJ@\RL{t^skkulAt}, phases (de la Lune)} \index{ATBEAS@\RL{^sms}!ATBEAS@\RL{^sams}, Soleil} \index{BBBEAQ@\RL{qmr}!BBBEAQ@\RL{qamar}, Lune} \index{AWBDAY@\RL{.tl`}!BEAWAGBDAY@\RL{m.tAl`}, coascension} \index{BJBHBE@\RL{ywm}!BJBHBE AHBDBJBDAJBG@\RL{al-yawm bilaylatihi j al-'ayyAm bilayAlIhA}, nychtémère (\textit{litt.} le jour avec sa nuit)} \addcontentsline{toc}{chapter}{II.10 Détermination des parties du jour, les heures, et des multiples des heures, les mois et les années} \includepdf[pages=31,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \begin{center} \Large Dixième section \normalsize Détermination des parties du jour, les heures, et des multiples des heures, les mois et les années \end{center} Il est bien connu que l'\emph{arc diurne} est la moitié d'une circonférence de l'équateur si l'équation est nulle, et qu'il faut y ajouter ou en retrancher le double de l'équation si elle n'est pas nulle. 
En vérité, [l'arc diurne devrait mesurer] combien tourne l'équateur à partir de l'instant où s'est levé à l'horizon la moitié du corps du Soleil jusqu'à l'instant où s'est couché la moitié~; et selon cette interprétation, il est supérieur à la grandeur ci-dessus, d'un excédent égal à la coascension de l'arc parcouru par le Soleil le long de l'écliptique pendant ce jour par rapport à cette contrée. L'arc nocturne dépend [aussi] de ce qu'on vient de dire concernant l'arc diurne~; et l'équation du jour exigée est la moitié de l'excédent de l'arc diurne sur une demi-circonférence de l'équateur. Tantôt, on divise chacun des deux arcs en quinze portions, et on appelle ces portions \emph{heures égales} à cause du fait que leurs parts sont toujours [en nombre] égal. Tantôt, [on les divise] en douze portions, et on appelle ces portions \emph{heures courbes} à cause du fait que leurs parts sont [en nombre] variable et dépendent de la variation des deux temps\footnote{Les deux <<~temps~>> sont le jour et la nuit.} suivant qu'ils sont courts ou longs. Si les deux temps sont égaux, chacun sera douze heures, égales et courbes. Si les deux temps diffèrent, alors le nombre d'heures égales augmente et [le nombre] de parts de chaque heure courbe augmente, car l'une est plus que l'autre. Dans l'esprit des astronomes, le début du jour et de l'arc diurne est au lever du Soleil, et c'est le point de vue naturel. Dans l'esprit des légistes, c'est au lever de l'aurore franche ; alors [l'arc diurne] dépasse l'autre, de la différence entre les deux levers. Le début de la nuit est au coucher du Soleil. Chez certains astronomes le début du nychtémère est à midi, chez d'autres c'est à minuit, chez d'autres encore qui sont légistes c'est au début de la nuit, et chez d'autres c'est au début du jour ; [il dure] jusqu'au prochain [événement] semblable. Quand on conçoit le mois à partir des phases de la Lune selon sa position relative au Soleil, et selon que l'excès du mouvement de la Lune sur le mouvement du Soleil -- en mouvements vrais -- fait une circonférence, alors son nombre [de jours] est variable, à cause de l'irrégularité des deux mouvements. \newpage\phantomsection \index{BGBDBD@\RL{hll}!BGBDAGBD@\RL{hilAl}, croissant (de Lune)} \index{BCAHAS@\RL{kbs}!BJBHBE ASBFAI BCAHBJASAI@\RL{yawm / sanaT kabIsaT}, jour / année intercalaire} \index{BJBHBE@\RL{ywm}!BJBHBE BCAHBJASAI@\RL{yawm kabIsaT / mu^stariqaT / lawA.hiq}, jour intercalaire / volé / supplémentaire} \index{ASBFBH@\RL{snw}!ASBFAI ATBEAS@\RL{sanaT al-^sams}, année solaire} \index{ASBFBH@\RL{snw}!BAAUBHBD ASBFAI@\RL{fu.sUl al-sanaT}, les saisons} \index{AYAOBD@\RL{`dl}!AYAJAOAGBD AQAHBJAYBJBJ@\RL{i`tidAl rabI`iyy}, équinoxe de printemps} \includepdf[pages=32,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent \uwave{Les observateurs} disent que la période va d'un jour de conjonction au [suivant], ou bien d'une nuit de visibilité du croissant à la [suivante], ou bien de la dernière phase [de la Lune] à la [dernière phase] suivante, par convention. Les astronomes parlent de la différence entre les deux mouvements \emph{moyens}, au sens où ils soustraient le Soleil moyen de la Lune moyenne, et ils trouvent ainsi que l'intervalle entre deux conjonctions est vingt-neuf jours et demi et une [petite] fraction. Puisque deux mois successifs réunis font cinquante-neuf jours, ils posent qu'un mois [sur deux] vaut trente jours et l'autre vingt-neuf jours. 
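Pour mémoire, les règles de division en heures rappelées ci-dessus se résument ainsi (glose de l'éditeur), en notant $A$ l'arc diurne ou nocturne en degrés~:
\[
\hbox{nombre d'heures égales} = \frac{A}{15}, \qquad
\hbox{longueur d'une heure courbe} = \frac{A}{12}\ \hbox{degrés},
\]
chaque heure égale valant quinze parts de l'équateur~; pour $A = 180$ les deux découpages coïncident et donnent douze heures, comme le dit le texte.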
Les fractions accumulées en sus de [chaque période de vingt-neuf jours et] demi ajoutent, tous les trente ans, onze jours. Onze mois parmi les mois devant compter vingt-neuf jours, tous les trente ans, deviendront ainsi onze mois de trente [jours]. Ces jours s'appellent des jours \emph{intercalaires}~: un jour intercalaire est un jour ajouté à une année et constitué des fractions en sus des jours entiers, et ils ajoutent les jours intercalaires à [certains] mois. Autrement dit, ils posent que le premier mois de l'année (\textit{mu\d{h}arram}) fait trente jours, que le deuxième fait vingt-neuf jours, et ainsi de suite jusqu'à la fin de l'année, mais \textit{\b{d}\=u al-\d{h}ijja} compte alors vingt-neuf jours et un cinquième et un sixième de jour, c'est-à-dire $0;22$ jour, résultat de la multiplication par douze du $0;1;50$ en sus du demi-jour. [Ce mois] devient [un mois de] trente [jours] lors de l'\emph{année intercalaire} qui est l'année comptant le jour intercalaire, trentième jour de \textit{\b{d}\=u al-\d{h}ijja}, constitué des fractions [accumulées]. C'est ainsi que sont les mois {lunaires}~; on en tire les mois vrais et les mois moyens. L'année est le retour du Soleil en son lieu sur l'écliptique. Ce retour implique le retour des conditions de l'année selon les saisons. Ceci se produit en trois cent soixante-cinq jours un quart, et une fraction. [L'année] consiste en douze mois parmi les mois lunaires, et onze jours, sans compter les fractions. On considère souvent que l'année va du jour où le Soleil arrive à l'équinoxe de printemps jusqu'au [jour où il revient au] même point. Ses mois commencent aux jours où il arrive aux points semblables de l'écliptique~; ou bien les mois comptent trente jours, trente jours, etc. mais ils ajoutent au dernier cinq ou six jours. Les cinq jours sont dits \emph{volés} ou \emph{supplémentaires}, et le sixième est dit \emph{intercalaire} (comme on dit de l'année). L'année [ainsi conçue] est l'année \emph{solaire vraie}, et ces mois sont des mois solaires vrais, ou bien des mois conventionnels qu'on peut faire commencer à n'importe quel jour ne coïncidant peut-être avec aucune observation de la position du Soleil. On choisit des mois aux alentours de trente jours parce que les mois lunaires sont proches de cela. \newpage\phantomsection \index{ASBFBH@\RL{snw}!ASBFAI BBBEAQBJAI@\RL{sanaT qamariyaT}, année lunaire} \includepdf[pages=33,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Il arrive souvent qu'on prenne la fraction en sus des trois cent soixante-cinq jours égale à un quart exactement~: on intercale un jour tous les quatre ans. Il arrive souvent qu'on l'élimine complètement, et ces années sont les années solaires conventionnelles. Ceux qui prennent en considération les mois lunaires posent que l'année est solaire, et, tous les trois ans ou tous les deux ans, ils ajoutent un mois à l'année pour rassembler les [intervalles] de onze jours -- sans les fractions -- suivant leur convention. Certains peuples posent que douze mois lunaires font une année, et ils l'appellent \emph{année lunaire}. Chaque peuple a une origine à laquelle il rapporte toutes les années de son histoire. La connaissance détaillée de ceci relève des livres de pratique. 
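Les valeurs numériques de cette section se contrôlent aisément~; vérification de l'éditeur, en notation sexagésimale~:
\[
29 + \frac{1}{2} + 0;1,50 = 29;31,50 \ \hbox{jours pour le mois synodique},
\]
d'où $12 \times 0;1,50 = 0;22$ jour en sus des mois entiers pour une année lunaire (les $29;22$ jours de \textit{\b{d}\=u al-\d{h}ijja} hors année intercalaire), $30 \times 0;22 = 11$ jours intercalaires en trente ans, et $12 \times 29;31,50 = 354;22$ jours pour l'année lunaire, soit, sans compter les fractions, onze jours de moins que les trois cent soixante-cinq jours et le quart de l'année solaire.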
\newpage\phantomsection \addcontentsline{toc}{chapter}{II.11 Les degrés de transit des astres au méridien, leurs levers, et leurs couchers} \index{BEAQAQ@\RL{mrr}!AOAQALAI BEBEAQAQ@\RL{darjaT mamarr}, degré de transit} \index{BEBJBD@\RL{myl}!AOAGAEAQAI BEBJBD@\RL{dA'iraT al-mIl}, cercle de déclinaison} \index{AOBHAQ@\RL{dwr}!AOAGAEAQAI AYAQAV@\RL{dA'iraT al-`r.d}, cercle de latitude} \index{BEAQAQ@\RL{mrr}!AEANAJBDAGBA BEBEAQAQ@\RL{i_htilAf al-mamarr}, anomalie du transit} \index{AYAOBD@\RL{`dl}!AJAYAOBJBD AOAQALAI BEBEAQAQ@\RL{ta`dIl darjaT al-mamarr}, équation du degré de transit} \index{BFAUBA@\RL{n.sf}!AOAGAEAQAI BFAUBA BFBGAGAQ@\RL{dA'iraT n.sf al-nhAr}, méridien} \includepdf[pages=34,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \begin{center} \Large Onzième section \normalsize Les degrés de transit des astres au méridien, leurs levers, et leurs couchers \end{center} Le \emph{degré de transit} de l'astre est le [point] du cercle de l'écliptique qui traverse le méridien en même temps que l'astre le traverse, et le cercle de déclinaison [de l'astre] indique [ce point]. On a déjà montré que le degré de longitude est indiqué par le cercle de latitude~; donc si ces deux cercles sont confondus, comme il arrive lorsque l'astre est sur le cercle passant par les quatre pôles, ou bien si [l'astre] a une latitude nulle, alors [le degré de transit et le degré de longitude] coïncident. Ailleurs qu'en ces deux positions, ils diffèrent. La différence s'appelle \emph{anomalie du transit}, et l'arc de l'équateur situé entre son intersection avec le cercle de latitude de l'astre et son intersection avec son cercle de déclinaison s'appelle \emph{équation du degré de transit}. L'anomalie est maximale aux [positions] proches des commencements du Bélier et de la Balance, et elle est minimale aux [positions] proches des commencements du Cancer et du Capricorne. Si le pôle de l'écliptique est sur le méridien, c'est-à-dire quand les deux points des solstices [y] sont aussi et quand les deux points des équinoxes sont à l'horizon, alors le passage de l'astre [au méridien] se fait en même temps que [celui du point indiquant] sa longitude, parce que le méridien est son cercle de latitude. Si celui de deux pôles [de l'écliptique] qui est visible est à l'Est du méridien -- ceci a lieu quand la moitié de l'écliptique centrée en l'équinoxe d'automne passe [par le méridien] et quand sa moitié Sud se lève, si le pôle visible est au Nord, ou bien quand la moitié de l'écliptique centrée en l'équinoxe de printemps passe [par le méridien] et quand sa moitié Nord se lève, si le pôle visible est au Sud -- alors tout astre dont la latitude est du côté du pôle visible passe au méridien \emph{après} son degré de transit. En effet, son cercle de latitude issu du pôle le plus proche rencontre au méridien l'astre en premier, puis il rencontre son degré de transit seulement après avoir traversé le méridien et être passé à l'Ouest~; [inversement], si son degré de transit vient au méridien, l'astre est encore du côté Est [du méridien]. [Dans les mêmes conditions], tout astre dont la latitude est du côté opposé au pôle visible passe au méridien \emph{avant} son degré de transit. En effet, le cercle de latitude en question rencontre au méridien le degré [de transit] de l'astre en premier, puis il rencontre l'astre seulement après avoir traversé le méridien et être passé à l'Ouest. 
\newpage\phantomsection \index{AWBDAY@\RL{.tl`}!AWBDBHAY@\RL{.tulU`}, lever (d'un astre)} \index{AZAQAH@\RL{.grb}!AZAQBHAH@\RL{.gurUb}, coucher (d'un astre)} \index{AZAQAH@\RL{.grb}!AOAQALAI AZAQBHAH@\RL{darjaT al-.gurUb}, degré du coucher} \index{AWBDAY@\RL{.tl`}!AOAQALAI AWBDBHAY@\RL{darjaT al-.tulU`}, degré du lever} \index{BABHBB@\RL{fwq}!ADBABB@\RL{'ufuq}, horizon} \includepdf[pages=35,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Si le pôle visible est à l'Ouest -- ceci a lieu quand la moitié de l'écliptique centrée en l'équinoxe de printemps passe [par le méridien] et quand sa moitié Nord se lève, si le pôle visible est au Nord, ou bien quand la moitié de l'écliptique centrée en l'équinoxe d'automne passe [par le méridien] et quand sa moitié Sud se lève, si le pôle visible est au Sud -- alors tout astre dont la latitude est du côté du pôle visible passe au méridien \emph{avant} son degré de transit, et [tout astre] dont la latitude est du côté opposé passe [au méridien] \emph{après} son degré de transit, pour les mêmes raisons que dans le paragraphe précédent. Le degré du lever ou du coucher d'un astre est [le point] du cercle de l'écliptique qui se lève ou qui se couche en même temps que cet astre. Si l'astre a une latitude nulle, alors son degré de longitude est égal à son degré de lever et de coucher. Il en est de même d'un astre qui vient à l'horizon en même temps que le pôle de l'écliptique~: l'horizon est alors son cercle de latitude puisqu'il passe par l'astre et par le pôle de l'écliptique. Si l'horizon est à latitude nulle, comme il arrive à l'équateur terrestre, alors le lever et le coucher d'un astre sont comme les transits au méridien pour les autres horizons. [Tout astre] qui vient à l'horizon en même temps que le pôle et le solstice se lève et se couche en même temps que son degré [de longitude], comme on vient de le dire. [Tout astre] situé du côté du pôle visible de l'écliptique se lève avant son degré [de longitude] et se couche après. [Tout astre] situé du côté du pôle caché se lève après son degré [de longitude] et se couche avant. En effet, le cercle de latitude issu du pôle visible atteint l'astre situé du même côté, [quand l'astre est] à l'horizon, avant [d'atteindre] son degré de longitude~; et [il atteint] son degré de longitude, [quand celui-ci est] à l'horizon, avant [d'atteindre] l'astre situé du côté du pôle caché. Là, le pôle Nord [de l'écliptique] est visible pendant le lever de la moitié centrée en l'équinoxe de printemps et pendant le passage de la moitié Sud au méridien au dessus [de l'horizon]. Le pôle Sud est visible pendant le lever de la moitié centrée en l'équinoxe d'automne et pendant le passage de la moitié Nord au méridien. Dans les horizons inclinés, la règle du lever et du coucher des astres est comme nous l'avons décrite pour l'équateur terrestre. Quant au passage [au méridien] des moitiés de l'écliptique, il y a certes une différence. Il est possible qu'un des deux pôles [de l'écliptique] soit visible, et que [la partie] qui transite ou qui se lève soit un arc inférieur à une moitié, ou bien supérieur. La règle ci-dessus suit son cours quand la latitude du lieu est supérieure à l'inclinaison [de l'écliptique], sans avoir à distinguer [de cas], parce qu'un des deux pôles de l'écliptique est alors toujours visible. 
\newpage\phantomsection \includepdf[pages=36,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \noindent La règle reste identique là où la latitude n'est pas supérieure~: si celui des deux pôles de l'écliptique situé au Nord est visible, alors la règle passe, mais s'il est caché, alors c'est la règle inverse. En effet, l'astre se lève après son degré [de longitude] et il se couche avant son degré [de longitude] si sa latitude est au Nord, et inversement si sa latitude est au Sud. Après réflexion, il ne vous échappera pas que le degré du lever de l'astre se lève de jour s'il est entre le Soleil et le point opposé [au Soleil], et qu'il se lève de nuit s'il est entre le point opposé et le Soleil. Le degré de coucher se couche de nuit s'il est entre ces deux-là, et il se couche de jour s'il est entre ces deux-ci. Parmi les astres situés sur un même grand cercle\footnote{sous-entendu, un cercle de latitude.} coupé par la trajectoire diurne toujours visible maximale, ceux qui sont plus proches du pôle se lèvent avant ceux qui sont plus loins, et ils se couchent après ceux qui sont plus loins. C'est pourquoi l'écart entre les degrés de longitude et de lever est plus grand pour un astre proche du pôle que pour un astre loin [du pôle]. \newpage\phantomsection \index{BFAUBA@\RL{n.sf}!ANAWAW BFAUBA BFBGAGAQ@\RL{_ha.t.t n.sf al-nahAr}, ligne méridienne} \index{BFAUBA@\RL{n.sf}!AOAGAEAQAI BFAUBA BFBGAGAQ@\RL{dA'iraT n.sf al-nhAr}, méridien} \index{ATAQBB@\RL{^srq}!ANAWAW BEATAQBB BEAZAQAH@\RL{_ha.t.t al-ma^sraq wa-al-ma.grab}, ligne de l'Est et de l'Ouest} \index{ASBEAJ@\RL{smt}!AOAGAEAQAI ACBHBHBD ASBEBHAJ@\RL{dA'iraT 'awwal al-sumUt}, cercle origine des azimuts} \index{BBAHBD@\RL{qbl}!BBAHBDAI@\RL{qiblaT}, \textit{qibla}, direction de la \textit{Ka`ba}} \index{BBBJAS@\RL{qys}!BEBBBJAGAS@\RL{miqyAs j maqAyIs}, gnomon} \index{BGBFAO@\RL{hnd}!AOAGAEAQAI BGBFAOBJAI@\RL{dA'iraT hindiyaT}, cercle indien} \index{ASBEAJ@\RL{smt}!ASBEAJ@\RL{smt}, direction, azimut} \index{BEBCBCAI@\RL{makkaT}, La Mecque} \addcontentsline{toc}{chapter}{II.12 Détermination du méridien du lieu et de l'azimut de la \textit{qibla}} \includepdf[pages=37,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \begin{center} \Large Douzième section \normalsize Détermination du méridien du lieu et de l'azimut de la \textit{qibla} \end{center} \noindent On détermine la \emph{ligne méridienne} en prenant le Soleil à deux hauteurs égales de part et d'autre de sa hauteur maximale~; on trace les deux directions de son ombre portées par un même gnomon sur un terrain plan~; puis on bissecte l'angle formé entre elles. Pour ce faire, on prend le sommet de l'angle comme centre, on trace un arc de cercle qui coupe les deux ombres, et on mène une droite entre le milieu [des extrémités] de cet arc et le centre. Cette droite est dans le plan du cercle méridien et on l'appelle ligne méridienne. La ligne passant par le centre et perpendiculaire à cette ligne est dans le plan du cercle origine des azimuts, et on l'appelle ligne de l'Est et de l'Ouest. Voici une autre méthode pour déterminer [la ligne méridienne]. On installe un gnomon sur un terrain extrêmement plan~; on y trace un cercle de rayon deux fois plus long que le gnomon~; on observe l'entrée de l'ombre du gnomon dans le cercle avant midi, puis sa sortie [du cercle] après [midi]~; une fois connues l'entrée et la sortie, on mène une droite entre le milieu de l'arc joignant les extrémités des deux ombres et le centre~; c'est la ligne méridienne. 
La ligne qui lui est perpendiculaire et qui passe par le centre du cercle est la ligne de l'Est et de l'Ouest. Ces deux lignes font quatre quarts de cercle, et on découpe chaque quart en $90$ subdivisions égales afin d'y lire les azimuts là où l'ombre tombe sur la circonférence~: [le nombre] de ces subdivisions entre le point de l'Est ou de l'Ouest et l'ombre est un azimut. Ce cercle s'appelle cercle \emph{indien}. L'azimut de la \textit{qibla} est l'intersection de l'horizon du lieu et du cercle azimutal passant par le zénith du lieu et par le zénith de la Mecque. La ligne menée entre ce point [d'intersection] et le centre de l'horizon est la ligne d'azimut de la \textit{qibla}~; c'est la \uwave{tangente} à l'arc [d'un grand cercle] sur lequel est la base du \textit{mi\d{h}r\=ab}. Si l'orant en prière met cette \uwave{tangente} entre ses deux jambes en s'asseyant dessus, alors il peut prier du lieu où il s'agenouille sur la circonférence d'un cercle à la surface de la Terre. Ce qui est entre ses deux pieds et le centre de la Maison\footnote{la maison sacrée, c'est-à-dire la \textit{Ka`ba}.} va à la rencontre de la droite menée entre la Maison et le point qui culmine au dessus d'elle au zénith de la Mecque. [La \uwave{tangente}] peut certes aller à la rencontre de la Maison, mais puisque l'horizon de la Mecque est au dessous de l'horizon de l'orant, la vision [de l'orant] ne peut pas viser la Maison. \newpage\phantomsection \begin{center} \small\input{fig038.pdf_tex}\normalsize \end{center} \newpage\phantomsection \begin{center} \small\input{fig039.pdf_tex}\normalsize \end{center} \newpage\phantomsection \index{BBAHBD@\RL{qbl}!BBAHBDAI 2@\RL{qaws in.hirAf al-qiblaT}, arc de déviation de la \textit{qibla}} \index{BEBCBCAI@\RL{makkaT}, La Mecque} \index{AOBHAQ@\RL{dwr}!BEAOAGAQ@\RL{mdAr}, trajectoire, trajectoire diurne (\textit{i. e.} cercle parallèle à l'équateur)} \index{ATAQBB@\RL{^srq}!BEATAQBB@\RL{ma^sraq}, Est} \index{ATAQBB@\RL{^srq}!ANAWAW BEATAQBB BEAZAQAH@\RL{_ha.t.t al-ma^sraq wa-al-ma.grab}, ligne de l'Est et de l'Ouest} \index{AZAQAH@\RL{.grb}!BEAZAQAH@\RL{ma.grab}, Ouest} \label{var14}\label{var17} \includepdf[pages=38,pagecommand={\thispagestyle{plain}}]{edit2.pdf} \emph{L'arc de déviation} de la \textit{qibla} est l'arc de l'horizon entre son point d'intersection avec le cercle azimutal et son point d'intersection avec le méridien~; il mesure combien l'orant doit se détourner de la direction d'un des quatre points [cardinaux] -- le Nord, le Sud, l'Est ou l'Ouest -- pour faire face à la Maison. Pour déterminer les azimuts [de la \textit{qibla}], il faut déterminer la longitude de la Mecque, celle du lieu supposé, et leurs latitudes. En prenant comme origine les \^Iles Canaries, la longitude de la Mecque vaut soixante-dix-sept parts et un sixième~; en prenant comme origine la côte de la mer, elle vaut soixante-sept parts et un sixième~; sa latitude vaut vingt-et-une parts et deux tiers. La Mecque est à l'Est de chaque pays dont la longitude est moindre que la longitude de la Mecque, et elle est à l'Ouest de chaque pays dont la longitude est supérieure à sa longitude. Si leur deux longitudes sont égales, alors la Mecque est sur le méridien [du pays], vers le Sud si sa latitude est inférieure à la latitude [du pays], et vers le Nord si sa latitude est supérieure à la latitude [du pays]. 
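Avant de poursuivre l'examen des différents cas, rappelons, à titre de repère moderne qui ne figure évidemment pas chez {Ibn al-\v{S}\=a\d{t}ir}, la formule de l'azimut $q$ de la \textit{qibla}, compté du point Nord vers l'Est~:
\[
\tan q = \frac{\sin\Delta\lambda}{\cos\varphi\,\tan\varphi_{M} - \sin\varphi\,\cos\Delta\lambda},
\]
où $\varphi$ et $\varphi_{M}$ sont les latitudes du lieu et de la Mecque, $\Delta\lambda$ l'excédent de la longitude de la Mecque sur celle du lieu, et où le quadrant se choisit d'après les signes du numérateur et du dénominateur~; l'arc de déviation défini ci-dessus est l'écart entre cet azimut et le point cardinal le plus proche.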
Si leurs deux latitudes sont égales, alors le pays et la Mecque sont ensemble sous la même trajectoire diurne~; si la longitude [du pays] est moindre que la longitude de la Mecque, alors la Mecque est à gauche du point où se lève l'équinoxe pour ce pays~; et si la longitude [du pays] est supérieure à sa longitude, alors la Mecque est à droite du point où se couche l'équinoxe~; mais elle n'est pas au point Est dans le premier cas, ni au point Ouest dans le second cas, ni sur la ligne de l'Est et de l'Ouest comme on peut croire. En effet, puisque leurs longitudes diffèrent, le cercle origine des azimuts de l'un [des deux pays] est tangent à la trajectoire diurne mentionnée en un point distinct du point où le cercle origine des azimuts de l'autre [pays] lui est tangent~; donc [la trajectoire diurne et l'horizon] se coupent en des points distincts des points Est et Ouest, et les lignes de l'Est et de l'Ouest [des deux pays] ne sont pas les mêmes. Ce qu'on a cru est vrai seulement dans les contrées de l'équateur terrestre, quelle que soit la longitude~; car tous les zéniths y sont sur l'équateur, et [l'équateur] est donc le cercle origine des azimuts de toutes [ces contrées]. \label{loxoqibla} \newpage\phantomsection \includepdf[pages=39,pagecommand={\thispagestyle{plain}}]{edit2.pdf} Parmi les moyens les plus simples pour déterminer l'azimut de la \textit{qibla} en différentes longitudes ou même en différentes longitudes et latitudes, en voici un. On observe le transit du Soleil au zénith des gens de la Mecque -- c'est-à-dire quand il est là-bas à midi dans le huitième ou le septième degré des Gémeaux ou bien dans le vingt-troisième degré du Cancer. On prend l'écart entre les deux longitudes. On pose une heure pour chaque [portion] de quinze parts d'écart, et quatre minutes pour chaque part. Le total, ce sont des heures à compter de midi. Puis on fait une observation, le même jour, à cet instant avant midi si la Mecque est à l'Est, après midi si elle est à l'Ouest~; alors l'azimut de l'ombre du gnomon à l'heure [où tu observes] est l'azimut de la \textit{qibla}. On a besoin de retrancher l'arc de déviation aussi bien quand les deux [pays] diffèrent seulement en longitude que quand ils diffèrent en longitude et en latitude~; mais quand ils ne diffèrent qu'en latitude, alors leurs azimuts sont sur la ligne méridienne, et l'orant fait donc face au point Sud si la latitude de la Mecque est inférieure, et il fait face au point Nord si sa latitude est supérieure. Dieu est le plus savant. \begin{center} Avec cette section s'achève la deuxième partie du livre \textit{L'achèvement de l'enquête et la correction des fondements}. Dieu fait réussir ce qui est juste, il nous a donné suffisamment et il a comblé de bienfaits celui qui s'en remet à lui. \end{center} \label{txt_fin} \part*{Introduction générale} \addcontentsline{toc}{part}{Introduction générale} \noindent En 2009, j'avais lu l'\textit{Almageste} de Ptolémée et la \textit{Configuration des mouvements} d'Ibn al-Haytham, et j'avais suivi un cours de Régis Morelon sur l'histoire de l'astronomie ancienne. Fier de mes balbutiements en arabe, c'est avec ce bagage que je suis allé voir Roshdi Rashed. Je souhaitais étudier un texte médiéval arabe d'astronomie~; il m'a suggéré de me pencher sur la \textit{Nih\=aya al-s\=ul f{\=\i} ta\d{s}\d{h}{\=\i}\d{h} al-'U\d{s}\=ul} d'{Ibn al-\v{S}\=a\d{t}ir} et il a mis à ma disposition un premier manuscrit. Le présent ouvrage est le fruit de cet exercice.
L'édition du texte arabe et la traduction en français (p.~\pageref{txt_debut}-\pageref{txt_fin}) sont l'{\oe}uvre d'un débutant surtout intéressé par les questions mathématiques. Le commentaire mathématique (p.~\pageref{comm_debut}-\pageref{comm_fin}) tient donc une place qui m'est chère~; j'y ai seulement analysé la ``théorie planétaire'' contenue dans la première partie de la \textit{Nih\=aya}\footnote{On rencontrera deux autres titres commençant par le même mot, mais le titre abrégé \textit{Nih\=aya} désignera toujours la \textit{Nih\=aya al-s\=ul f{\=\i} ta\d{s}\d{h}{\=\i}\d{h} al-'u\d{s}\=ul} d'{Ibn al-\v{S}\=a\d{t}ir}.} . En guise d'introduction, j'ai réuni ici quelques éléments concernant la tradition matérielle qu'il est impossible de séparer du contexte et de l'Histoire~; j'indiquerai ensuite les conclusions générales qui se dégagent de mon étude. \paragraph{\'Eléments biographiques} Comme souvent ches les savants anciens, il est difficile de rassembler suffisamment d'éléments pour tracer un portrait bien net d'{Ibn al-\v{S}\=a\d{t}ir}\footnote{Pour le présent paragraphe, mes principales sources sont \cite{wiedemann1928} et \cite{king1975}.}. Né à Damas en 1306, il est recueilli après la mort de son père par un oncle qui lui enseigne l'art de ciseler l'ivoire ou la nacre. \`A l'âge de dix ans, il se rend au Caire et à Alexandrie où il étudie l'astronomie. Il construit des instruments mathématiques (quadrants\footnote{\textit{Cf.} \cite{schmalzl1929} p.~100-108.}), des cadrans solaires, et des astrolabes\footnote{Outre l'astrolabe de l'Observatoire de Paris mentionné ci-après, il y a eu un astrolabe d'{Ibn al-\v{S}\=a\d{t}ir} dans les collections de la Bibliothèque Nationale de France. Il est décrit par {L.~A. Sédillot} qui le date de 1337, \textit{cf.} \cite{sedillot1841} p.~191-194~; mais cet astrolabe, d'ailleurs incomplet, aurait rejoint la collection Hariri du Musée d'Art Islamique du Caire dans les années 1920. Je remercie Catherine Hofmann, Conservatrice du Département des Cartes et Plans de la B. N. F., pour cette dernière information qu'elle tient d'un ouvrage d'A. Turner à paraître en 2017. Une photographie figure dans \cite{king2005} t.~1 p.~729 fig.~8.2.}. Il y a à l'Observatoire de Paris un astrolabe qu'il aurait construit en 1326~: il avait alors seulement vingt ans\footnote{Observatoire de Paris, numéro d'inventaire 1, ancienne cote 14-1. Il en existe une bonne photographie dans le catalogue informatisé. Est gravé au dos de l'astrolabe~: \textit{\d{s}ana`ahu `al\=a b. ibr\=ah{\=\i}m b. mu\d{h}ammad b. ab{\=\i} mu\d{h}ammad b. ibr\=ah{\=\i}m sana sitta wa-`ishr\=un wa-sab`a-mi'a}. D. A. King fait l'analyse détaillée de cet instrument dans \cite{king2005} t.~2 p.~692-693.}. Un contemporain a décrit un astrolabe automatique qu'{Ibn al-\v{S}\=a\d{t}ir} exposait dans sa propre demeure\footnote{Dans la traduction de Sauvaire \cite{sauvaire1896} t. 2 p.~263, l'historien al-\d{S}afad{\=\i} contemporain d'{Ibn al-\v{S}\=a\d{t}ir} rapporte~: ``je l'ai vu plus d'une fois et suis entré dans son logis au mois de \textit{rama\d{d}\=an} de l'année 743 [=1345], pour examiner l'astrolabe qu'il avait inventé. Je trouvai qu'il l'avait placé dans la verticale d'un mur, dans sa demeure [...]. Cet astrolabe, qui avait la forme d'une arcade [...] mesurait trois quarts de coudée environ~; il tournait toujours, continuellement, le jour et la nuit, sans sable, ni eau, suivant les mouvements de la sphère céleste, mais il l'avait réglé sur des dispositions particulières. 
Cet instrument faisait connaître les heures égales et les heures de temps.''}. \textit{Muwaqqit} à la Mosquée de Damas, chargé de calculer les heures des prières, il y a aussi construit un grand cadran solaire horizontal à style incliné, daté de 1371, endommagé puis reconstruit presque à l'identique au XIXème siècle, ``le cadran solaire le plus sophistiqué de la période médiévale'' selon David A. King\footnote{\textit{Cf.} \cite{king2005} t.~2 p.~85. Louis Janin en a donné une description en 1972, \textit{cf.} \cite{ghanem1976} 108-121~; une photographie de la pierre dédicatoire est à la p.~72, un dessin à l'échelle de la copie d'al-\d{T}an\d{t}\=aw{\=\i} (1873) à la p.~112, et une photographie du style p.~111.}. {Ibn al-\v{S}\=a\d{t}ir} meurt vers 1375 à Damas. Outre des ouvrages sur les instruments mathématiques et les astrolabes, il est l'auteur de quatre traités d'astronomie théorique dont l'ordre de rédaction devait être le suivant~: -- \textit{Nih\=aya al-gh\=ay\=a f{\=\i} a`m\=al al-falak{\=\i}y\=at}, ouvrage aujourd'hui perdu qui aurait été un \textit{z{\=\i}j} (recueil de tables astronomiques) strictement ptoléméen\footnote{d'après la description qu'en donne {Ibn al-\v{S}\=a\d{t}ir} lui-même dans son \textit{Z{\=\i}j al-jad{\=\i}d}.}. -- \textit{Ta`l{\=\i}q al-ar\d{s}\=ad}, le \textit{Commentaire des observations}, ouvrage aujourd'hui perdu dans lequel {Ibn al-\v{S}\=a\d{t}ir} aurait consigné des observations astronomiques et en aurait déduit de nouveaux modèles. -- \textit{Nih\=aya al-s\=ul f{\=\i} ta\d{s}\d{h}{\=\i}\d{h} al-'u\d{s}\=ul}, objet de la présente édition. Y sont décrits de nouveaux modèles astronomiques~; c'est un ouvrage d'exposition dans la tradition des \textit{kit\=ab al-hay'a}. Les démonstrations en sont absentes~; {Ibn al-\v{S}\=a\d{t}ir} renvoie pour cela au \textit{Ta`l{\=\i}q al-ar\d{s}\=ad}. -- \textit{Al-z{\=\i}j al-jad{\=\i}d}~: un \textit{z{\=\i}j} construit au moyen des modèles décrits dans la \textit{Nih\=aya}. De nombreuses copies sont conservées\footnote{Une description précise de ce \textit{z{\=\i}j} est donnée par Kennedy \cite{kennedy1956}.}. Que retenir de ces quelques repères biobibliographiques~? Tôt astronome de métier, il ne faut pas chercher dans {Ibn al-\v{S}\=a\d{t}ir} la figure du savant universel qu'on trouverait chez un B{\=\i}r\=un{\=\i} ou un Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}. Son {\oe}uvre théorique se résume peut-être aux quatre ouvrages mentionnés ci-dessus. Plusieurs traces témoignent de son intérêt pour ces modèles mécaniques des cieux que sont les astrolabes~; ce sont aussi des instruments d'observation, et il est assez remarquable que l'observation soit le sujet d'un de ces titres, hélas perdu. \paragraph{Les manuscrits de la \textit{Nihaya}} Voici les manuscrits dont j'ai connaissance~: - Oxford, Bodleian Library, Marsh 139, que je désignerai par la lettre A [A,\RL{'a}]. Il compte 64 \textit{folii}. Le colophon indique qu'il a été copié en 768 [=1366]. La page de titre désigne l'auteur comme suit~: \textit{al-\v{s}ay\b{h} al-im\=am ab\=u al-\d{hasan} `al{\=\i} b. ibr\=ah{\=\i}m b. mu\d{h}ammad b. al-hum\=am ab{\=\i} mu\d{h}ammad b. ibr\=ah{\=\i}m b. `abd al-ra\d{h}man al-an\d{s}\=ar{\=\i} al-muwaqqit bi-al-j\=ami` al-ummawiy al-ma`r\=uf bi-ibn al-\v{s}\=a\d{t}ir}. - Oxford, Bodleian Library, Marsh 290 [B,\RL{b}], 63 \textit{folii}. \uwave{Une date, 941} [=1535], est indiquée f.~1r. - Oxford, Bodleian Library, Marsh 501 [C,\RL{_h}], 71 \textit{folii}. 
La page de titre indique \uwave{qu'il a été copié en 984 [=1577]}. - Oxford, Bodleian Library, Hunt 547 [D,\RL{d}]. Ce sont les f. 21 à 65 d'un codex datant de 979 [=1571]. Les portions inférieures des f.~26r-30v manquent car elles ont dû être endommagées lors d'un incendie. - Bibliothèque de Leyde, Or 194 [E,\RL{h}], 101 \textit{folii}, sans date. - Bibliothèque Nationale d'Egypte, 1.2.15 [F,\RL{f}], 74 \textit{folii}, abîmé, date d'environ 1325 [=1907], et ne contient que les onze premiers chapitres de la première partie\footnote{Ces informations proviennent du catalogue de D. A. King \cite{king1986}.}. - Bibliothèque Suleymaniyye, 339 [S,\RL{s}], 70 \textit{folii}, date de 751 [=1349], selon le catalogue Internet de la bibliothèque. Saliba affirme qu'il s'agit d'une première version de la \textit{Nih\=aya}, différente de celle des autres manuscrits existant\footnote{\textit{Cf.} \cite{saliba1990} p.~36. Dans \cite{rashed1997} p.~115, Saliba indique en effet quelques différences notables concernant le chapitre sur la Lune.}. - Jérusalem, Bibliothèque Kh\=alidiyya, n°66/5 [Q,\RL{q}]. D. A. King indique dans \cite{king1975} que ce manuscrit contient une copie de la \textit{Nih\=aya} sous le titre de \textit{Ris\=ala f{\=\i} al-hay'a al-jad{\=\i}da}. Je n'ai eu accès qu'à A, B, C, D et E. J'ai établi le texte des onze premiers chapitres de la première partie sur la base de ces cinq manuscrits~; pour la suite, bien que m'aidant parfois de B et E, je n'ai finalement retenu que A, C et D dans l'apparat critique. Dès le début, j'ai seulement indiqué les numéros de \textit{folio} pour A, C et D en marge de mon édition. Ces cinq manuscrits ne présentent aucune différence cruciale pour l'interprétation. Pour mieux les caractériser, je m'attacherai surtout aux omissions, qui sont toujours courtes. E en commet de nombreuses\footnote{Ainsi p.~\pageref{var4} l.~14-15, p.~\pageref{var6} l.~2-3, p.~\pageref{var7} l.~16, p.~\pageref{var2} l.~7, p.~\pageref{var3} l.~12, \textit{etc.}}, parfois par saut du même au même. B présente des signes de ressemblance avec E~: certaines omissions de B sont aussi commises par E mais corrigées par des ajouts en marge de E\footnote{Ainsi p.~\pageref{var10} l.~10-11 et p.~\pageref{var11} l.~2~; mais \textit{toutes} les omissions \textit{n'ont pas} été corrigées, ainsi p.~\pageref{var1} l.~2 (un seul mot).}. Il est difficile d'en tirer une conclusion concernant la relation entre B et E car il arrive qu'une omission dans B ne soit pas commise dans E\footnote{Rarement, ainsi p.~\pageref{var12} l.~9.}. Bien que B commette beaucoup moins d'omissions que E, il présente certains signes de gaucherie qui m'ont finalement conduit à le négliger\footnote{Ainsi p.~\pageref{var27} l.~9-10 où B déplace le pronom possessif, trompant le lecteur quant à l'attribution d'une doctrine à l'auteur~; p.~\pageref{var13} l.~1-3 où B et E commettent un contre-sens (corrigé en marge de E) en inversant deux négations~; p.~\pageref{var28} l.~8 où B et E ajoutent deux mots qui nuisent au sens (dans E, ils ont ensuite été barrés)~; p.~\pageref{var25} l.~14-15, encore un contre-sens par inversion qui cette fois-ci est pire dans E, car mal corrigé. Comparées à A, C et D, les figures dans B et E sont aussi d'une qualité assez pauvre~: ainsi aux figures p.~\pageref{soleil_trajectoires} et \pageref{soleil_orbes_solides}, B et E commettent plusieurs erreurs concernant la position des orbes du Soleil dans les quadratures et les octants, et même à l'apogée dans E.}. 
Plusieurs omissions sont propres à C et D\footnote{Ainsi p.~\pageref{var13} l.~14-15 et p.~\pageref{var14} l.~7-8. La figure des trajectoires des centres des orbes de Mars p.~\pageref{mars_trajectoires} est aussi omise par C et D seulement.}~; certaines ont toutefois été corrigées dans C, en marge\footnote{Ainsi p.~\pageref{var17} l.~5-6. \`A la p.~\pageref{var16} l.~1-2, une longue omission par saut du même a été corrigée en marge de C~; dans D, la portion correspondante du texte a été détruite par l'incendie, mais, à en juger par l'espace disponible, il est presque certain que le copiste avait omis le même passage.}. Je n'ai pas trouvé d'omission propre à D, mais l'écriture y est parfois difficilement lisible~; c'est pourquoi j'ai aussi retenu C qui présente pourtant d'autres omissions en sus de celles partagées avec D\footnote{Ainsi p.~\pageref{var18} l.~2-4, p.~\pageref{var19} l.~15, p.~\pageref{var20} l.~5.}. De rares omissions\footnote{Ainsi p.~\pageref{var21} l. 12-13 et p.~\pageref{var22} l.~19.} propres à B, C, D et E, et non commises par A, n'ont pas suffi à me convaincre que B, C, D et E auraient un ancêtre commun non partagé avec A. Sauf une exception\footnote{Page \pageref{var23} l.~22, un mot seulement.}, je n'ai repéré aucune omission propre à A. Tous ces indices renvoient au \textit{stemma} de la figure \ref{stemma}. \begin{figure} \begin{center} \input{stemma.pdf_tex} \end{center} \caption{\label{stemma}\textit{stemma}} \end{figure} D. A. King mentionne que V. Roberts avait préparé une édition et une traduction anglaise de la \textit{Nih\=aya} qui n'ont pas été publiées\footnote{\textit{Cf.} \cite{king1975}, p.~362, note 1.}. G. Saliba a indiqué, il y a trente ans, que sa propre édition critique serait bientôt prête\footnote{\textit{Cf.} \cite{saliba1987}.}, mais elle ne l'est pas encore~; quand elle verra le jour, étant l'{\oe}uvre d'un spécialiste y ayant consacré tant d'années, cette édition révèlera sans doute bien des choses qui m'ont échappé. {Ibn al-\v{S}\=a\d{t}ir} mentionne à deux reprises une ``troisième partie'' contenant des tables\footnote{\textit{Cf.} p.~\pageref{troisieme1} et \pageref{troisieme2} \textit{infra}.}, mais ceci ne figure pas dans le plan donné en introduction. Ce qui n'était peut-être qu'un projet lors de la rédaction des deux premières parties aurait pu ensuite évoluer en l'ouvrage intitulé \textit{Al-z{\=\i}j al-jad{\=\i}d}. Fuad Abbud a montré en 1962 que les tables du \textit{Z{\=\i}j al-jad{\=\i}d} peuvent être calculées au moyen des modèles de la \textit{Nih\=aya}\footnote{\textit{Cf.} \cite{ghanem1976} p.~73-80.}. Certains de ses calculs me semblent erronés, mais la conclusion semble juste. J'indiquerai dans mon commentaire le moyen de calculer les tables comme l'explique {Ibn al-\v{S}\=a\d{t}ir} dans la \textit{Nih\=aya} et j'en donnerai des extraits qui se révèlent comparables aux résultats de Fuad Abbud et aux valeurs échantillonnées par lui dans le \textit{Z{\=\i}j al-jad{\=\i}d}. \paragraph{Les concepts d'orbe et de mouvement} En 1959, Kennedy et Roberts, citant Voltaire, enjoignaient les historiens de l'astronomie à ``cultiver leur jardin'' et ils esquissaient les linéaments d'un programme de recherche \textit{on the general topic of late medieval planetary theory} centré autour de l'{\oe}uvre d'{Ibn al-\v{S}\=a\d{t}ir} et des auteurs qu'il avait lus. Parmi ces auteurs, les collaborateurs de Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i} à l'observatoire de Maragha (XIIIème siècle) jouent un rôle prépondérant. 
Ces auteurs sont mieux connus aujourd'hui qu'ils ne l'étaient en 1959, spécialement grâce aux travaux de Ragep sur \d{T}\=us{\=\i} \cite{altusi1993} et de Saliba sur `Ur\d{d}{\=\i} \cite{saliba1990}, et c'est surtout par rapport à eux que j'ai essayé de situer {Ibn al-\v{S}\=a\d{t}ir}. Les modèles planétaires de ces auteurs reposant tous exclusivement sur des mouvements de rotation uniforme, je les ai tous décrits et comparés dans mon commentaire mathématique, dans un langage géométrique moderne, au moyen de rotations affines de l'espace. Chez ces auteurs, il n'y a, à proprement parler, aucune théorie mathématique du mouvement. On ne connaît pas encore de suite aux débuts d'une théorie cinématique que constituait la \textit{Configuration des mouvements} d'Ibn al-Haytham au XIème siècle. Comparée à la \textit{Ta\b{d}kira} de Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}, la \textit{Nih\=aya} d'{Ibn al-\v{S}\=a\d{t}ir} témoigne pourtant d'une légère inflexion. Dans l'introduction philosophique de la \textit{Nih\=aya}, {Ibn al-\v{S}\=a\d{t}ir} mentionne deux doctrines.\label{mouvement} L'une classe les mouvements ainsi~: \begin{enumerate} \item chose mue par soi \begin{enumerate} \item volitif (animaux) \item naturel \begin{enumerate} \item qui ne procède pas d'une manière unique (plantes) \item qui procède d'une manière unique \begin{enumerate} \item rectiligne (éléments et composés) \item circulaire (orbes) \end{enumerate} \end{enumerate} \end{enumerate} \item chose mue par autre chose \begin{enumerate} \item par accident \item violent \end{enumerate} \end{enumerate} L'autre doctrine conduirait à la classification suivante~: \begin{enumerate} \item chose mue par soi \begin{enumerate} \item volitif \begin{enumerate} \item qui ne procède pas d'une manière unique (animaux) \item qui procède d'une manière unique (orbes) \end{enumerate} \item naturel \begin{enumerate} \item qui ne procède pas d'une manière unique (plantes) \item qui procède d'une manière unique (éléments et composés) \end{enumerate} \end{enumerate} \item chose mue par autre chose \begin{enumerate} \item par accident \item violent \end{enumerate} \end{enumerate} Al-\d{T}\=us\={\i} a suivi cette seconde doctrine quand il classe les mouvements ainsi\footnote{\textit{Cf.} \cite{altusi1993} p.~100.}~: \begin{enumerate} \item chose mue par soi \begin{enumerate} \item qui ne procède pas d'une manière unique, et qui a une âme \begin{enumerate} \item animaux \item plantes \end{enumerate} \item qui procède d'une manière unique, et qui a une nature \begin{enumerate} \item naturel (éléments) \item volitif (orbes) \end{enumerate} \end{enumerate} \item chose mue par autre chose \begin{enumerate} \item par accident \item violent \end{enumerate} \end{enumerate} Le mouvement d'un orbe est-il naturel ou volitif~? C'est seulement sur ce point que les deux doctrines diffèrent. Ibn al-\v{S}\=a\d{t}ir ne tranche pas entièrement cette question~; il semble opter pour le mouvement naturel, mais s'écarte probablement des autres tenants de cette doctrine en affirmant que le mouvement des astres est <<~simple-composé~>> et que les orbes eux-mêmes sont des corps <<~composés~>>. \label{simple_compo} Le mouvement individuel d'un orbe est un mouvement circulaire uniforme, donc simple en soi, mais composé avec le mouvement des orbes qui l'entraînent. Ainsi ce mouvement présente, du point de vue de l'observateur (qui ne voit jamais qu'un seul mouvement), un caractère <<~variable~>> et non uniforme, comme le dit Ibn al-\v{S}\=a\d{t}ir. 
C'est, en tant que tel, un mouvement composé~; mais il faut se garder de croire que, dans cette physique, la composition des \emph{mouvements} ressemblerait à l'immixtion des \emph{éléments simples} dans les corps composés\footnote{Selon Aristote, dans un mixte composé de deux corps, ``le résultat du mélange étant en acte autre que [les corps qui ont été soumis au mélange], mais étant encore, en puissance, l'un et l'autre [...], les mélanges proviennent manifestement d'éléments antérieurement séparés et pouvant se séparer de nouveau'' (\textit{De la génération et de la corruption} I 10, \textit{cf.} \cite{aristote2005} p.~47).}. Il est impossible à l'observateur de distinguer ces deux phénomènes dans le mouvement~: le mouvement simple (naturel ou volitif) propre à l'orbe, et les mouvements (par accident) que lui communiquent les orbes qui l'entraînent. On peut donc bien parler d'un mouvement à la fois simple et composé\footnote{Ibn al-\v{S}\=a\d{t}ir insiste sur l'impuissance phénoménologique de l'observateur dans le sixième chapitre, p.~\pageref{impuissance} \textit{infra}.}. Chaque mouvement simple, en tant que composé avec d'autres mouvements communiqués au mobile <<~par accident~>>, devient ici une entité nouvelle, un mouvement encore simple, mais <<~simple-composé~>> (et non plus circulaire uniforme), et c'est ce mouvement dont l'observateur est témoin. Seule la description mathématique permettra de distinguer les composantes des mouvements des astres. Si le concept de mouvement ``simple-composé'' gagne en réalité, c'est d'ailleurs au détriment du concept d'``orbe'' qui semble perdre en substance. L'astronome se garde bien de trancher sur l'essence des orbes, et le texte semble même souligner l'ambiguïté du concept. Le mot \textit{falak} est employé dans trois sens distincts~: c'est (rarement) \textit{les cieux} c'est-à-dire l'ensemble des corps célestes vus d'ici-bas\footnote{Ainsi p.~\pageref{cieux} \textit{infra}.}, c'est bien plus souvent \textit{un corps solide} à symétrie sphérique, et parfois simplement \textit{un cercle géométrique} trajectoire du centre d'un tel corps solide au sein d'un autre corps solide. Chaque chapitre contenant un modèle planétaire utilise les deux dernières acceptions avec en général deux figures, l'une représentant les sections des corps solides sphériques, l'autre représentant les trajectoires des centres des sphères\footnote{\textit{Cf.} par exemple les figures p.~\pageref{soleil_trajectoires} et \pageref{soleil_orbes_solides} \textit{infra} pour le Soleil.}. Le caractère impénétrable des orbes solides impose certaines contraintes cosmologiques qu'{Ibn al-\v{S}\=a\d{t}ir} ne se prive pas de mettre à profit\footnote{Dans la conclusion de la première partie, p.~\pageref{conclusion} et suivantes.} pour déterminer la taille minimale du Monde et l'ordre des planètes~; mais si le concept d'orbe solide semble avoir sa préférence, c'est surtout parce qu'il joue le rôle d'un \emph{référentiel solide en mouvement}, rôle essentiel dans les chapitres sur les mouvements en latitude par exemple\footnote{\textit{Cf.} les chapitres 24 et 25, p.~\pageref{lat_debut}-\pageref{lat_fin}, et mon commentaire p.~\pageref{preliminaires}.}. \paragraph{Les excentriques} Si les savants de Maragha semblent avoir ignoré la naissance d'une cinématique célestre entre les mains d'Ibn al-Haytham, ils ont pourtant poursuivi la remise en cause de l'astronomie de l'\textit{Almageste} que celui-ci avait proposé dans un autre livre, ses \textit{Doutes sur Ptolémée}. 
La question des excentriques, du point équant et du point de prosneuse était le principal chef d'accusation contre l'astronomie de Ptolémée. Concernant les excentriques, la position d'{Ibn al-\v{S}\=a\d{t}ir} semble être encore plus sévère que celle de ses prédécesseurs~: il refuse catégoriquement les excentriques, mais ce refus se révèle plus subtil qu'il n'y paraît. Tentons de mieux cerner le problème en raisonnant d'abord par élimination. Pour simplifier, je me limiterai ici aux mouvements en longitude. Un orbe est alors un cercle portant un point, par exemple une planète $P$, qui suit ainsi une trajectoire circulaire autour du centre de l'orbe. Soit $O$ le centre du Monde et $O_1$ le centre d'un orbe ``excentrique'' portant un point $P$. Alors $O_1\neq O$. On notera aussi, sur ce cercle, l'apogée $A$ dans la direction du point $O$. Je distinguerai quatre configurations possibles. \emph{Cas 1}. $O_1$ et $A$ sont immobiles, et le mouvement de $P$ le long du cercle de centre $O_1$ est à vitesse angulaire constante par rapport à $O$ (\textit{cf.} figure \ref{fig002}(i)). \emph{Cas 2}. $O_1$ et $A$ sont immobiles, et le mouvement de $P$ le long du cercle de centre $O_1$ est à vitesse angulaire constante par rapport à $O_1$. \emph{Cas 3}. $O_1$ est mu autour de $O$, et ce mouvement entraîne tout l'orbe excentrique et l'apogée $A$. La planète $P$ est mue le long du cercle de centre $O_1$, mais sa vitesse angulaire par rapport à $O_1$ n'est pas constante. \textit{Cf.} fig.~\ref{fig002}(iii). \emph{Cas 4}. $O_1$ est mu autour de $O$, et ce mouvement entraîne tout l'orbe excentrique et l'apogée $A$. La planète $P$ est mue le long du cercle de centre $O_1$, à vitesse angulaire constante par rapport à $O_1$. Les astronomes de Maragha puis {Ibn al-\v{S}\=a\d{t}ir} refusent d'emblée les cas 1 et 3 qui contiennent un mouvement circulaire \textit{non uniforme} par rapport au centre du cercle~: et {Ibn al-\v{S}\=a\d{t}ir} de nous dire qu'à ce compte, autant imaginer pour chaque planète un unique orbe centré en $O$ et l'emportant dans son mouvement irrégulier, avec ses accélérations, rétrogradations, \textit{etc.}\footnote{\textit{cf.} p.~\pageref{doutes_excentrique} \textit{supra}.}. Si l'on admettait des mouvements circulaires non uniformes, il ne serait plus besoin d'orbes multiples, d'excentriques, ni d'épicycles : un orbe par planète suffirait ! C'est pour cette raison purement méthodologique qu'{Ibn al-\v{S}\=a\d{t}ir} refusera systématiquement tout mouvement circulaire non uniforme. Qu'en est-il du cas 2~? Si l'on fait abstraction du mouvement diurne\footnote{qui entraîne le point $O_1$ et l'apogée $A$ en un tour toutes les vingt-quatre heures autour de $O$.}, Ptolémée concevait ainsi l'excentrique portant le Soleil qu'il jugeait exempt du mouvement de précession. Il n'est pas question ici de mouvement circulaire non uniforme. Comme les astronomes arabes, depuis le IXème siècle, imposent aussi au Soleil le mouvement de précession et le mouvement de l'Apogée, ce cas 2 n'est plus présent dans les textes d'astronomie, et il est donc difficile de deviner les objections qu'{Ibn al-\v{S}\=a\d{t}ir} aurait adressées à une telle configuration. L'exercice en vaut pourtant la peine. 
Il me semble que l'astronome d'alors aurait pu répondre en invoquant le premier principe physique mentionné p.~\pageref{pas_de_repos} dans l'introduction de la \textit{Nih\=aya}~: \begin{quote} ``\emph{Premier principe.} Le vide est impossible, et dans les orbes, le repos est impossible.'' \end{quote} Il aurait objecté que l'orbe portant l'orbe excentrique ne peut être immobile. L'existence même d'un orbe \textit{fixé} dans une position dissymétrique par rapport au centre du Monde est une brèche à la ``perfection de la circularité''. Cette brèche ne peut être compensée qu'en rompant cette fixité, en faisant l'Apogée et tout l'excentrique se mouvoir d'un mouvement de révolution autour du centre du Monde~: on rétablit ainsi une sorte de symétrie dynamique. On élimine ainsi le cas 2. D'ailleurs, le cas 1 déjà éliminé tombe aussi sous cette objection. Toutefois, une telle configuration avec un excentrique où l'apogée est immobile, et le point $P$ en révolution uniforme par rapport à $O_1$, est cinématiquement équivalente, quant à la trajectoire de $P$, à un modèle à deux orbes dont l'un, centré en $O$, entraîne un autre petit orbe portant $P$~: il suffit que le petit soit animé d'un mouvement de rotation sur lui-même dont la vitesse angulaire est l'opposé de la vitesse angulaire du grand orbe\footnote{\textit{Cf.} figure \ref{fig002}(ii) p.~\pageref{fig002}, et prop. 1 p.~\pageref{apollonius} \textit{infra}.}. {Ibn al-\v{S}\=a\d{t}ir} et tous les astronomes depuis Ptolémée en étaient conscients. Ne faisant intervenir ni mouvement circulaire non uniforme, ni excentrique, ni repos, un tel modèle serait parfaitement légitime aux yeux de l'astronome d'alors. Qu'en est-il du cas 4 (fig.~\ref{fig002}(iii))~? Cette configuration ressemble beaucoup à la configuration à deux orbes, déférent et épicycle, que l'on vient de décrire~: un orbe meut le point $O_1$ circulairement autour de $O$, et un autre orbe meut le point $P$ circulairement autour de $O_1$. On parle d'``épicycle'' quand le rayon de l'orbe de $P$ est suffisamment petit pour que cet orbe ne contienne pas le point $O$, et on parle d'``excentrique'' dans le cas contraire. Voici une objection possible au cas 4. Le point $O$ tombe \textit{à l'intérieur} de l'orbe de $P$, mais l'orbe de $P$ doit être conçu comme un corps solide dès lors qu'un autre orbe l'entraîne dans un mouvement circulaire uniforme autour de $O$. Ce corps solide, fait d'éther, doit cependant enclore une cavité puisqu'au centre du Monde se trouvent peut-être d'autres orbes -- au moins la Terre. Cette cavité doit être centrée en $O$ -- si elle était centrée en $O_1$, il y aurait du vide en son sein, puisque les orbes inférieurs et la Terre sont à symétrie sphérique de centre $O$. N'étant donc pas centrée en $O_1$ mais en $O$, cette cavité semble empêcher tout mouvement de rotation de l'orbe de $P$ sur lui-même au sein du référentiel constitué par l'orbe portant l'orbe de $P$ (\textit{cf.} figure \ref{fig002}(v))~; car la substance de l'éther était certainement conçue comme impénétrable.
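Pour fixer les idées, voici une vérification élémentaire de l'équivalence cinématique qui vient d'être mentionnée entre l'excentrique à apogée fixe et le couple déférent-épicycle. J'use ici d'une notation moderne (affixes complexes dans le plan~; les symboles $a$, $e$, $\omega$, $\varphi$, $\theta_0$ sont de mon fait) qui n'est évidemment pas celle d'{Ibn al-\v{S}\=a\d{t}ir}~; ce n'est qu'une esquisse, la démonstration en forme étant la prop. 1 (Apollonius) p.~\pageref{apollonius}. Si l'excentrique a pour centre le point fixe d'affixe $e\,e^{i\varphi}$ et porte $P$ en rotation uniforme à la distance $a$~:
$$z_{\mathrm{exc}}(t)=e\,e^{i\varphi}+a\,e^{i(\omega t+\theta_0)}~;$$
et si, dans le modèle à deux orbes, le grand orbe centré en $O$ entraîne à la vitesse angulaire $\omega$, à la distance $a$, le centre d'un petit orbe de rayon $e$ qui tourne sur lui-même à la vitesse $-\omega$ par rapport au grand orbe, alors la phase du petit orbe reste constante dans le repère fixe et~:
$$z_{\mathrm{def}}(t)=a\,e^{i(\omega t+\theta_0)}+e\,e^{i\varphi}=z_{\mathrm{exc}}(t).$$
Les deux trajectoires de $P$ coïncident donc exactement.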
Chez les prédécesseurs d'{Ibn al-\v{S}\=a\d{t}ir}, la solution à ce problème était d'imaginer une cavité centrée en $O_1$, et de remplir le vide intermédiaire au moyen d'un autre orbe, ou d'une autre composante connexe de l'orbe portant l'orbe excentrique (\textit{cf.} fig.~\ref{fig002}(vi)), compliquant ainsi davantage la hiérarchie des corps célestes\footnote{D'ailleurs, si le repos est impossible dans les orbes, comment expliquer que ces deux composantes connexes soient immobiles l'une par rapport à l'autre~?}. Mais {Ibn al-\v{S}\=a\d{t}ir} ne semble jamais s'inquiéter d'un tel problème. Si l'orbe de $P$ est un ``excentrique'', on peut échanger les rayons des deux orbes et obtenir ainsi une configuration cinématiquement équivalente quant à la trajectoire du point $P$, mais dans laquelle l'orbe de $P$ est un petit orbe de rayon inférieur au rayon de l'orbe le portant~: il ne contient plus le point $O$ (\textit{cf.} figure \ref{fig002}(iii) et (iv)). En général, c'est la situation de la fig. \ref{fig002}(iv) qu'on rencontrera dans les modèles d'{Ibn al-\v{S}\=a\d{t}ir}. Pourquoi {Ibn al-\v{S}\=a\d{t}ir} refuse-t-il donc l'excentrique du cas 4~? Aussi étrange que cela puisse paraître, dans la \textit{Nih\=aya}, on ne trouve aucune raison générale justifiant un tel refus par un recours aux principes de la philosophie naturelle. Résumons le problème. On rencontrait le cas 3 dans tous les modèles planétaires de Ptolémée\footnote{Sauf le Soleil mentionné dans le cas 1.}. Les astronomes de Maragha étaient parvenus à remplacer, dans chaque modèle ptoléméen, l'excentrique de la fig. \ref{fig002}(iii), dont le mouvement n'est pas uniforme par rapport à $O_1$, par un déférent en rotation uniforme par rapport à son centre. Pour ce faire, il faut changer la position de $O_1$, adjoindre un épicycle, \textit{etc.}~: je décrirai ces modèles dans mon commentaire~; ils relèvent en général du cas 4. {Ibn al-\v{S}\=a\d{t}ir} quant à lui préfère toujours une solution usant de la figure \ref{fig002}(iv) \textit{sans excentrique}, même quand elle est cinématiquement équivalente à la figure \ref{fig002}(iii). Pourquoi~? G. Saliba a qualifié une telle attitude de ``purisme''\footnote{\textit{Cf.} \cite{saliba1990} p.~66.}. Ragep a récemment remarqué que cette attitude conduit, dans le cas de Mercure et Vénus, à des modèles où le centre de l'épicycle est, vu de la Terre, dans la direction du Soleil moyen, et que ce ``biais héliocentriste'' des modèles d'{Ibn al-\v{S}\=a\d{t}ir} rendait davantage plausible l'hypothèse d'une transmission entre l'astronomie arabe et la science de Copernic\footnote{\textit{Cf.} \cite{ragep2016}. Sur cette thématique du rapport à Copernic, qui a peu occupé mon attention, une longue littérature existe déjà, et cet article en constitue certainement le meilleur point d'entrée.
Les faits importants sont dans Swerdlow-Neugebauer \cite{swerdlow1984}~: - le modèle des planètes supérieures du \textit{Commentariolus} de Copernic est identique au modèle d'{Ibn al-\v{S}\=a\d{t}ir} après changement d'origine - le modèle des planètes supérieures du \textit{De revolutionibus} est identique à celui de `Ur\d{d}{\=\i} (lui-même cinématiquement équivalent au modèle d'{Ibn al-\v{S}\=a\d{t}ir} - les modèles des planètes inférieures dans le \textit{De revolutionibus} sont une adaptation facile de ceux d'{Ibn al-\v{S}\=a\d{t}ir} - le modèle de la Lune du \textit{Commentariolus} et du \textit{De revolutionibus} est identique à celui d'{Ibn al-\v{S}\=a\d{t}ir} (pas besoin de changer d'origine puisque la Lune tourne autour de la Terre) Ragep montre que les modèles des planètes inférieures du \textit{Commentariolus} sont aussi une adaptation facile et cinématiquement équivalente des modèles d'{Ibn al-\v{S}\=a\d{t}ir} (\textit{cf.} \cite{ragep2016} p.~402 pour Vénus et p.~404-406 pour Mercure)~; enfin, il remarque que le ``biais héliocentriste'' des modèles d'{Ibn al-\v{S}\=a\d{t}ir} a pu faciliter le passage à l'héliocentrisme de Copernic.}; mais il concède lui-même que ce jugement \textit{a posteriori} ne peut valoir comme explication d'une telle préférence dans l'esprit du savant du quatorzième siècle. Qu'il me soit permis de formuler une conjecture~: \emph{quand {Ibn al-\v{S}\=a\d{t}ir} reproche aux savants de Maragha leurs modèles usant de la fig. \ref{fig002}(iii), c'est en fait la \emph{démonstration} de ces modèles qui est l'objet du reproche.} Quand les auteurs de Maragha démontrent, ils le font en prouvant que le mouvement décrit par l'astre dans leur modèle est \emph{proche} du mouvement décrit par l'astre \emph{dans le modèle de Ptolémée}. Ils le font au moyen de figures géométriques. Cette technique a d'ailleurs été reprise par les commentateurs modernes à partir de Kennedy\footnote{\textit{Cf.} la figure dans \cite{ghanem1976} p.~88~; je l'ai reproduite en fig.~\ref{fig051} p.~\pageref{fig051} \textit{infra}. Je ne me priverai pas de tels outils que j'ai résumés dans les propositions 1, 2 et 3 de mon commentaire p.~\pageref{apollonius}~; mais tandis que Kennedy et Roberts y voyait la commutativité de l'addition vectorielle, parfaitement comprise par Copernic et peut-être en gestation chez des savants comme {Ibn al-\v{S}\=a\d{t}ir} (\cite{ghanem1976} p.~60), il me semble que les travaux de ce dernier se situent à un autre niveau. Il s'agit de \emph{composer des rotations affines spatiales}, geste autrement plus avancé qu'une simple addition vectorielle. Des recherches récentes ont montré qu'il n'est pas incongru de parler de transformations géométriques dans les mathématiques médiévales arabes dès le IXème siècle (\textit{cf.} l'usage des projections cylindrique et conique dans le \textit{Traité sur l'astrolabe} d'al-Q\=uh{\=\i} étudié dans \cite{rashed2005} et \cite{abgrall2004}~; plus généralement, voir l'introduction de \cite{rashed2004}).} dans leurs efforts pour comprendre la genèse des modèles planétaires. Le point de départ heuristique et le garant final de vérité étaient le modèle géométrique ptoléméen dont les paramètres seulement étaient adaptés aux observations récentes. 
Ainsi Na\d{s}{\=\i}r al-D{\=\i}n al-\d{T}\=us{\=\i}, pour démontrer son nouveau modèle pour la Lune\footnote{C'est un modèle qui n'utilise pourtant aucun orbe excentrique au sens ci-dessus, \textit{cf.} fig.~\ref{fig021} p.~\pageref{fig021} \textit{infra}.} prouve que la trajectoire du centre de l'épicycle y est proche du cercle excentrique de Ptolémée~: et {Ibn al-\v{S}\=a\d{t}ir} lui reproche alors, avec raison, d'avoir conservé le concept d'excentrique ! Si cette conjecture est exacte, le projet d'{Ibn al-\v{S}\=a\d{t}ir} était celui d'une reconstruction globale de l'astronomie, \emph{en partant des données de l'observation}, et sans plus avoir recours aux modèles de l'\textit{Almageste}. S'explique alors qu'il renvoie à un ouvrage intitulé \textit{Commentaire des observations} pour la démonstrations de ses modèles. Le scénario imaginé par Saliba \cite{saliba1987} pour déduire le modèle du Soleil d'{Ibn al-\v{S}\=a\d{t}ir} me semble appuyer cette hypothèse. Le sens qu'{Ibn al-\v{S}\=a\d{t}ir} donne au mot ``observation'' n'est d'ailleurs pas ambigu~: ses usages dans la \textit{Nih\=aya} confirment qu'il s'agit toujours d'observer les positions des astres à une date donnée, ou bien leur diamètre apparent, ou encore des durées (d'une éclipse). C'est à peu près le concept moderne d'observation astronomique, sans toutefois la théorie des erreurs ni aucune réflexion approfondie sur la précision des mesures~: il faudra attendre bien des siècles pour voir éclore de tels outils. Cette conjecture que certains jugeront peut-être audacieuse s'est graduellement imposée à moi~; elle devrait gagner en plausibilité à la lecture du texte et du commentaire~; hélas, seul l'ouvrage perdu \textit{Commentaire des observations} pourrait l'infirmer ou l'étayer pleinement. \begin{figure} \begin{center} \input{fig002.pdf_tex} \caption{\label{fig002}Les apories de l'orbe excentrique} \end{center} \end{figure} \paragraph{Origine du mouvement de l'épicycle} Si l'excentrique est le principal chef d'accusation contre l'astronomie ptoléméenne, il y a un autre thème qui revient souvent dans les doutes exprimés par {Ibn al-\v{S}\=a\d{t}ir} : le fait que ``l'apogée de l'épicycle suive un point autre que le centre de l'orbe qui le porte''. Considérons un modèle contenant un orbe de centre $P_1$ portant un épicycle de centre $P_2$, et l'épicycle portant à son tour un astre en $P$ (\textit{cf.} fig.~\ref{fig013} p.~\pageref{fig013}). Les principes adoptés par {Ibn al-\v{S}\=a\d{t}ir} imposent au point $P_2$ d'être immobile au sein de l'orbe centré en $P_1$. Bien sûr l'orbe centré en $P_1$ est lui-même en rotation uniforme autour d'un axe passant par $P_1$, et ce mouvement entraîne $P_2$ et l'épicycle. De même, l'épicycle est animé d'un mouvement de rotation autour d'un axe passant par $P_2$, et ce mouvement est uniforme \emph{par rapport au référentiel constitué par l'orbe centré en $P_1$}. Notons $A$ l'apogée de l'épicycle sur la droite $(P_1P_2)$ ; comme la direction $(P_1P_2)$ est immobile au sein de l'orbe centré en $P_1$, alors l'angle $AP_2P$ doit croître uniformément. Mais il en est rarement ainsi dans les dispositifs proposés depuis Ptolémée pour sauver les phénomènes ! En général, l'origine $A'$ du mouvement uniforme de $P$ autour de $P_2$ oscille autour de $A$ de sorte à rester alignée avec $P_2$ et un autre point $N$ distinct de $P_1$. 
Dans le deuxième modèle de la Lune dans l'\textit{Almageste}, $N$ est le centre du Monde, distinct du centre $P_1$ de l'excentrique portant l'épicycle\footnote{\textit{Cf.} \cite{pedersen1974} p.~186-187.}. Dans le troisième modèle de la Lune, $N$ est le point de \textit{prosneuse}, distinct du centre de l'excentrique et du centre du Monde. Dans les modèles planétaires de l'\textit{Almageste}, $N$ est le point \emph{équant}, distinct du centre de l'excentrique et du centre du Monde. On retrouve ici à peu près la même attitude que vis-à-vis des excentriques chez notre auteur. Par exemple, quand al-\d{T}\=us{\=\i} conçoit un modèle à base de ``couple de \d{T}\=us{\=\i} curviligne'' pour éliminer le problème du point de prosneuse de la Lune, {Ibn al-\v{S}\=a\d{t}ir} lui reproche précisément d'avoir recours à l'excentrique \emph{et} au point de prosneuse~! Mais là encore, la solution d'al-\d{T}\=us{\=\i} consiste à offrir un modèle sans expliquer sa genèse, puis à montrer que son comportement est cinématiquement équivalent à celui de Ptolémée, au moins approximativement. Ce qui gêne {Ibn al-\v{S}\=a\d{t}ir}, n'est-ce-pas cette méthode démonstrative, en l'absence d'un exposé déduisant le modèle à partir de l'observation, dans un traité d'astronomie~? \begin{figure} \begin{center} \input{fig013.pdf_tex} \caption{\label{fig013}Quand l'apogée de l'épicycle suit un point autre que le centre de l'orbe qui le porte} \end{center} \end{figure} \paragraph{La possibilité mathématique d'une nouvelle astronomie} Si tel était le projet d'{Ibn al-\v{S}\=a\d{t}ir}, il ne s'agissait plus d'imaginer un modèle reproduisant les effets d'une théorie géométrique transmise depuis douze siècles pour résoudre enfin le problème de son insertion dans un édifice philosophique contraignant. Il s'agissait d'imaginer une méthode universelle conduisant directement des données de l'observation continue, toujours susceptibles d'être révisées, à un modèle dont les éléments étaient choisis dans un ensemble restreint (rotations affines spatiales) mais avec une liberté de combinaison que les mathématiques permettaient d'embrasser depuis seulement quelques siècles, en composant des rotations non homocentriques et d'axes pas nécessairement parallèles. On ne peut éluder la question de la possibilité mathématique de la réussite d'un tel projet~; {Ibn al-\v{S}\=a\d{t}ir} lui-même a dû se la poser. Faute de source, je suis réduit à formuler cette question dans un langage moderne. Tout mouvement curviligne, suffisamment régulier, dans l'espace affine $\mathbb{R}^3$, peut-il être décrit par une composée de rotations uniformes~? Il ne faut certes pas se limiter aux mouvements plans, car les modèles les plus intéressants conçus par les savants de Maragha et par {Ibn al-\v{S}\=a\d{t}ir} décrivent aussi les latitudes des astres\footnote{Et certes il ne faut pas non plus oublier la variation des distances des astres à la Terre. \`A la rigueur, un modèle d'Univers formé de sphères toutes homocentriques, en rotation uniforme les unes par rapport aux autres, pourrait suffire à décrire les longitudes et les latitudes des astres. Duhem s'était enthousiasmé de ce phénomène dans son analyse de l'astronomie d'Eudoxe et de la Physique péripatéticienne, \textit{cf.} \cite{duhem1913} vol.~I p.~126-129 et vol.~II p.~42-43~; mais dans les modèles purement homocentriques, la distance de chaque astre à la Terre est inévitablement constante.}. 
J'ai montré dans \cite{penchevre2016} comment le modèle d'{Ibn al-\v{S}\=a\d{t}ir} pour Vénus rend très bien compte des mouvements en longitude \emph{et} en latitude de l'astre. Il en est de même pour les autres planètes\footnote{\textit{Cf.} commentaire mathématique \textit{infra}, p.~\pageref{begin_saturne}--\pageref{end_saturne} pour les planètes supérieures, p.~\pageref{begin_mercure}--\pageref{end_mercure} pour Mercure.}. Les prédictions en latitude ne sont pas meilleures, d'un point de vue moderne, que celles de Ptolémée, car {Ibn al-\v{S}\=a\d{t}ir} s'appuyait sur les données phénoménologiques recueillies par Ptolémée et non sur de nouvelles observations~; mais il s'agit bien d'un modèle composé exclusivement de mouvements de rotation uniforme\footnote{Dans le commentaire mathématique, je montre ce qu'{Ibn al-\v{S}\=a\d{t}ir} doit à ses prédécesseurs Ibn al-Haytham et al-\d{T}\=us{\=\i} concernant les latitudes.}. Soit donc un repère spatial d'origine $O$ et un point matériel dont la position $P'$ au temps $t$ sera décrite par le vecteur $\overrightarrow{OP'}$. Les méthodes modernes de l'analyse fonctionnelle et la transformée de Fourier donnent une description du mouvement coordonnée par coordonnée comme somme de fonctions harmoniques, d'où une expression de la forme~: $$\overrightarrow{OP'}=\int(\cos\omega t.\mathbf{u}_\omega+\sin\omega t.\mathbf{v}_\omega)\,\text{d}\omega.$$ Sous l'hypothèse que le mouvement est suffisamment régulier, on doit pouvoir approcher une telle expression au moyen d'une somme discrète, voire même finie. Posons donc~: $$\overrightarrow{OP'}=\sum_{k=1}^N\cos\alpha_k.\mathbf{u}_k+\sin\alpha_k.\mathbf{v}_k\qquad (\star)$$ où les $\mathbf{u}_k$, $\mathbf{v}_k$ sont des vecteurs constants et les $\alpha_k$ sont des fonctions affines de $t$, de la forme $\omega t+\phi$. Peu importe que les fréquences soient toutes, ou non, des multiples entiers d'une même fréquence fondamentale~: c'est le cas si le mouvement est périodique, mais il n'est guère évident que les mouvements célestes le soient. On notera $$\overrightarrow{OP}=\sum_{k=1}^N\mathbf{u}_k.$$ On va à présent essayer de décrire $P'$ comme étant l'image du point $P$ par une composée de rotations affines. Remarquons d'abord que l'extrémité de chaque vecteur de la forme ${\cos\alpha.\mathbf{u}+\sin\alpha.\mathbf{v}}$ décrit une ellipse dans le plan $(\mathbf{u},\mathbf{v})$, et que cette ellipse est un cercle si et seulement si $\Vert\mathbf{u}\Vert=\Vert\mathbf{v}\Vert$ et $\mathbf{u}\perp\mathbf{v}$. Soit $(\mathbf{i},\mathbf{j})$ une base orthonormée du plan vectoriel $(\mathbf{u},\mathbf{v})$. Notons~: $$\mathbf{u}=u_x\mathbf{i}+u_y\mathbf{j},\qquad \mathbf{v}=v_x\mathbf{i}+v_y\mathbf{j}.$$ Posons~: \begin{align*} \mathbf{u}_1&=\dfrac{u_x+v_y}{2}\mathbf{i}+\dfrac{u_y-v_x}{2}\mathbf{j},\\ \mathbf{v}_1&=\dfrac{-u_y+v_x}{2}\mathbf{i}+\dfrac{u_x+v_y}{2}\mathbf{j},\\ \mathbf{u}_2&=\dfrac{u_x-v_y}{2}\mathbf{i}+\dfrac{u_y+v_x}{2}\mathbf{j},\\ \mathbf{v}_2&=\dfrac{u_y+v_x}{2}\mathbf{i}+\dfrac{-u_x+v_y}{2}\mathbf{j}.
\end{align*} Il est alors facile de vérifier que~: $$\cos\alpha.\mathbf{u}+\sin\alpha.\mathbf{v} =(\cos\alpha.\mathbf{u}_1+\sin\alpha.\mathbf{v}_1) +(\cos\alpha.\mathbf{u}_2+\sin\alpha.\mathbf{v}_2),$$ et $\Vert\mathbf{u}_1\Vert=\Vert\mathbf{v}_1\Vert$, $\mathbf{u}_1\perp\mathbf{v}_1$, $\Vert\mathbf{u}_2\Vert=\Vert\mathbf{v}_2\Vert$, $\mathbf{u}_2\perp\mathbf{v}_2$~; les extrémités des deux vecteurs $(\cos\alpha.\mathbf{u}_1+\sin\alpha.\mathbf{v}_1)$ et $(\cos\alpha.\mathbf{u}_2+\sin\alpha.\mathbf{v}_2)$ décrivent donc des cercles à vitesse angulaire constante dans le plan $(\mathbf{u},\mathbf{v})$. Grâce à cette récriture, on peut donc supposer que, dans $(\star)$, pour tout $k$, $\Vert\mathbf{u}_k\Vert=\Vert\mathbf{v}_k\Vert$ et $\mathbf{u}_k\perp\mathbf{v}_k$. Notons $\mathbf{w}_k=\mathbf{u}_k\wedge\mathbf{v}_k$. Je désignerai par $R_{\alpha_k,\mathbf{w}_k}$ la rotation \emph{vectorielle} d'angle $\alpha_k$ autour du vecteur $\mathbf{w}_k$~; l'équation $(\star)$ devient alors~: $$\overrightarrow{OP'}=\sum_{k=1}^NR_{\alpha_k,\mathbf{w}_k}(\mathbf{u}_k),$$ où seuls les $\alpha_k$ dépendent de $t$. Notons $P_1$, $P_2$,..., $P_{N+1}$ les points définis par~: $$O=P_1,\quad \overrightarrow{P_kP_{k+1}}=\mathbf{u}_k\text{ pour }1\leq k\leq N.$$ En particulier $P=P_{N+1}$. Je désignerai par $R_{\alpha_k,\mathbf{w}_k,P_k}$ la rotation \emph{affine} d'angle $\alpha_k$ autour de la droite de vecteur directeur $\mathbf{w}_k$ passant par $P_k$. On va démontrer que $(\star)$ entraîne~: $$P'=R_{\alpha_1,\mathbf{w}_1,P_1}\left(R_{-\alpha_1,\mathbf{w}_1,P_2}\,R_{\alpha_2,\mathbf{w}_2,P_2}\right)\left(R_{-\alpha_2,\mathbf{w}_2,P_3}\,R_{\alpha_3,\mathbf{w}_3,P_3}\right)...\left(R_{-\alpha_{N-1},\mathbf{w}_{N-1},P_N}\,R_{\alpha_N,\mathbf{w}_N,P_N}\right)(P).$$ Bien que cette représentation ne soit pas unique, on tient là une description du mouvement comme composée de rotations uniformes. \emph{Démonstration.} On posera $\alpha_0=0$ de sorte que $R_{\alpha_0,\mathbf{w}_0,P_1}=\text{id}$. Soit $P^{(N+1)}=P$, et on définiera des points $P^{(1)}$, $P^{(2)}$,..., $P^{(N)}$ par la relation suivante pour $1\leq k\leq N$~: $$P^{(k)}=R_{-\alpha_{k-1},\mathbf{w}_{k-1},P_k}\,R_{\alpha_k,\mathbf{w}_k,P_k}(P^{(k+1)}).$$ En termes de rotations vectorielles, ceci revient à poser~: $$\overrightarrow{P_kP^{(k)}}=R_{-\alpha_{k-1},\mathbf{w}_{k-1}}R_{\alpha_k,\mathbf{w}_k}(\overrightarrow{P_kP^{(k+1)}}).$$ Reste à démontrer que~: $$\overrightarrow{P_1P^{(1)}}=\sum_{k=1}^NR_{\alpha_k,\mathbf{w}_k}(\mathbf{u}_k).$$ En fait, on va démontrer, par récurrence descendante sur $k$, que pour tout $k\leq N$~: $$\overrightarrow{P_kP^{(k)}}=R_{-\alpha_{k-1},\mathbf{w}_{k-1}}\left(\sum_{m=k}^NR_{\alpha_m,\mathbf{w}_m}(\mathbf{u}_m)\right).$$ Pour $k=N$, c'est vrai par définition de $P^{(N)}$. 
Supposons que l'égalité est vérifiée au rang $(k+1)$, alors~: $$\overrightarrow{P_{k+1}P^{(k+1)}}=R_{-\alpha_k,\mathbf{w}_k}\left(\sum_{m=k+1}^NR_{\alpha_m,\mathbf{w}_m}(\mathbf{u}_m)\right).$$ Au rang $k$, on aura~: \begin{align*} \overrightarrow{P_kP^{(k)}} &=R_{-\alpha_{k-1},\mathbf{w}_{k-1}}\,R_{\alpha_k,\mathbf{w}_k}\left(\overrightarrow{P_kP_{k+1}}+\overrightarrow{P_{k+1}P^{(k+1)}}\right)\\ &=R_{-\alpha_{k-1},\mathbf{w}_{k-1}}\,R_{\alpha_k,\mathbf{w}_k}\left(\mathbf{u}_k+R_{-\alpha_k,\mathbf{w}_k}\left(\sum_{m=k+1}^NR_{\alpha_m,\mathbf{w}_m}(\mathbf{u}_m)\right)\right)\\ &=R_{-\alpha_{k-1},\mathbf{w}_{k-1}}\left(R_{\alpha_k,\mathbf{w}_k}\mathbf{u}_k+\sum_{m=k+1}^NR_{\alpha_m,\mathbf{w}_m}(\mathbf{u}_m)\right)\\ &=R_{-\alpha_{k-1},\mathbf{w}_{k-1}}\left(\sum_{m=k}^NR_{\alpha_m,\mathbf{w}_m}(\mathbf{u}_m)\right), \end{align*} \textit{q. e. d.} \paragraph{Le projet d'{Ibn al-\v{S}\=a\d{t}ir} a-t-il réussi~?} Il est certes impossible qu'{Ibn al-\v{S}\=a\d{t}ir} ait réussi à formuler une méthode universelle pour calculer les fréquences propres et les coefficients intervenant dans une quelconque combinaison linéaire de fonctions harmoniques pour en déduire ensuite les rotations spatiales souhaitées. Nous frôlons là l'anachronisme. Pour jauger la réussite de ses recherches d'une manière plus respectueuse de l'histoire, il faut d'abord se poser la question de l'adéquation entre les prédictions de son modèle achevé et les données de l'observation, non seulement sur le plan qualitatif (rétrogradation, occurrence d'une éclipse, variation des distances à la Terre, \textit{etc.}), mais jusque sur le plan quantitatif de la précision des prédictions. Nos moyens sont maigres puisque nous ne disposons pas des observations de l'époque~: aucune observation datée n'est recueillie dans la \textit{Nih\=aya}. Si l'on retrouvait un jour le \textit{Commentaire des observations}, il ne faut pas s'attendre non plus à un recueil consignant la précision des mesures puisqu'aucune théorie des erreurs n'existait alors~: de telles données nous seraient donc de peu d'usage, à moins d'en avoir un nombre suffisant pour faire nous-mêmes des statistiques. Pour estimer l'adéquation du modèle à l'observation, il ne nous reste guère d'autre solution que de comparer ses prédictions aux prédictions de modèles modernes dont l'erreur, même pour une époque éloignée de la nôtre d'environ sept siècles, est certainement d'un ordre de grandeur bien moindre que celle entachant les modèles d'{Ibn al-\v{S}\=a\d{t}ir}. C'est ce que j'ai fait pour chaque astre~; dans le commentaire mathématique, je donne les courbes de l'équation en longitude\footnote{L'\textit{équation en longitude} désigne l'écart entre la longitude de l'astre et sa longitude \textit{moyenne}.} et de la latitude, calculées au moyen de différents modèles à comparer (les modèles de Ptolémée, ceux des savants de Maragha, ceux d'{Ibn al-\v{S}\=a\d{t}ir}), sur un même système d'axes pour chaque astre, et j'y adjoins en général le tracé des données calculées par le serveur d'éphémérides de l'IMCCE\footnote{Observatoire de Paris. Ce serveur d'éphémérides utilise la théorie planétaire moderne INPOP13c qui tient compte des perturbations entre les différents corps du système solaire et des effets relativistes. L'incertitude d'INPOP13c est de l'ordre de $15$ mas sur une période contemporaine d'un siècle, selon \cite{inpop13c} p.~11. 
En gros, $15\text{ mas / siècle}\times 7\text{ siècles}=0,105''$ serait l'ordre de grandeur de l'incertitude d'INPOP13c pour l'époque d'{Ibn al-\v{S}\=a\d{t}ir} il y a environ sept siècles. Je souhaiterais qu'un astronome de métier confirme ce raisonnement. Le cas échéant, cet ordre de grandeur est bien inférieur à l'erreur entachant les prédictions d'{Ibn al-\v{S}\=a\d{t}ir}.}. J'ai tracé ces courbes en général pour une demi-révolution, une entière, ou bien quelques révolutions de l'astre autour du Soleil~: elles permettent de se faire une première idée des défauts propres à chaque modèle. Pour estimer plus systématiquement l'incertitude des modèles d'{Ibn al-\v{S}\=a\d{t}ir}, j'ai calculé l'erreur absolue, en longitude et en latitude, \emph{par rapport aux longitudes et latitudes calculées par le serveur de l'IMCCE}, pour un échantillon d'environ 2000 dates, sur une période de 60 ans à partir de l'\'Epoque de référence choisie par {Ibn al-\v{S}\=a\d{t}ir} (1331 ap. J.-C.)~; j'ai calculé pour chaque astre les fréquences cumulées de l'erreur absolue en degrés. Les résultats sont consignés dans le tableau \ref{tab1}. \begin{table} \begin{center} \begin{tabular}{ccccccc} \hline & Soleil & Lune & \multicolumn{2}{c}{Mercure} & \multicolumn{2}{c}{Vénus}\\ fréquence & lon. & lon. & lon. & lat. & lon. & lat. \\ \hline 50 \% & $<0°6'$ & $<0°35'$ & $<1°34'$ & $<0°18'$ & $<0°22'$ & $<0°15'$ \\ 70 \% & $<0°7'$ & $<0°53'$ & $<3°6'$ & $<0°44'$ & $<0°31'$ & $<0°23'$ \\ 90 \% & $<0°10'$ & $<1°18'$ & $<5°51'$ & $<1°15'$ & $<0°46'$ & $<1°6'$ \\ 95 \% & $<0°10'$ & $<1°28'$ & $<8°53'$ & $<1°32'$ & $<1°22'$ & $<1°41'$ \\ 98 \% & $<0°10'$ & $<1°37'$ & $<10°36'$ & $<2°3'$ & $<2°10'$ & $<2°15'$ \\ \hline \end{tabular} \medskip \begin{tabular}{ccccccc} \hline & \multicolumn{2}{c}{Mars} & \multicolumn{2}{c}{Jupiter} & \multicolumn{2}{c}{Saturne}\\ fréquence & lon. & lat. & lon. & lat. & lon. & lat. \\ \hline 50 \% & $<0°28'$ & $<0°26'$ & $<0°10'$ & $<0°26'$ & $<0°18'$ & $<0°15'$ \\ 70 \% & $<0°45'$ & $<0°37'$ & $<0°15'$ & $<0°32'$ & $<0°25'$ & $<0°18'$ \\ 90 \% & $<1°28'$ & $<0°51'$ & $<0°23'$ & $<0°45'$ & $<0°36'$ & $<0°23'$ \\ 95 \% & $<1°59'$ & $<0°56'$ & $<0°27'$ & $<0°50'$ & $<0°40'$ & $<0°25'$ \\ 98 \% & $<2°36'$ & $<1°2'$ & $<0°31'$ & $<0°53'$ & $<0°43'$ & $<0°30'$ \\ \hline \end{tabular} \end{center} \caption{\label{tab1}Fréquences cumulées de l'erreur absolue en degrés de longitude et latitude des modèles d'{Ibn al-\v{S}\=a\d{t}ir}.} \end{table} \textit{Grosso modo}\footnote{Voici un exemple pour expliquer comment lire le tableau \ref{tab1}~: sur l'échantillon considéré, 70 \% des prédictions du modèle d'{Ibn al-\v{S}\=a\d{t}ir} concernant la latitude de Saturne présentent une erreur dont la valeur absolue est inférieure à $0°18'$.}, on retiendra que la précision des prédictions délivrées par les modèles d'{Ibn al-\v{S}\=a\d{t}ir} était de l'ordre \emph{du degré}. C'est peu satisfaisant, mais cet ordre de grandeur nous renseigne aussi sur la précision des observations avec les instruments d'alors. \paragraph{La deuxième partie de la \textit{Nih\=aya}} Il ne me semble pas utile de faire un commentaire détaillé de la deuxième partie de la \textit{Nih\=aya}~: celle-ci est moins originale~; elle ressemble de très près à la partie III de la \textit{Ta\b{d}kira} de \d{T}\=us{\=\i} qu'{Ibn al-\v{S}\=a\d{t}ir} cite abondamment. On y relève quelques ajouts notables, dont des emprunts à Théodose, en particulier des observations empiriques concernant la durée des parties du jour sous différentes latitudes.
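Pour qui voudrait refaire les calculs du tableau \ref{tab1} ci-dessus, en voici une esquisse minimale en Python, à titre purement indicatif~: les fonctions \texttt{longitude\_modele} et \texttt{longitude\_imcce}, censées renvoyer la longitude en degrés à une date donnée selon le modèle reconstitué et selon les éphémérides modernes, sont supposées fournies par ailleurs et ne figurent évidemment pas dans la \textit{Nih\=aya}.

\begin{verbatim}
import numpy as np

def frequences_cumulees(dates, longitude_modele, longitude_reference,
                        seuils=(50, 70, 90, 95, 98)):
    # Erreur absolue en degres a chaque date, ramenee dans [0, 180]
    # pour tenir compte de la periodicite des longitudes.
    brut = np.array([abs(longitude_modele(t) - longitude_reference(t)) % 360.0
                     for t in dates])
    erreurs = np.minimum(brut, 360.0 - brut)
    # Seuil en dessous duquel tombent p % des erreurs (percentile).
    return {p: float(np.percentile(erreurs, p)) for p in seuils}

# Environ 2000 dates reparties sur 60 ans a partir de l'Epoque de 1331 :
# dates = np.linspace(0.0, 60 * 365.25, 2000)
# print(frequences_cumulees(dates, longitude_modele, longitude_imcce))
\end{verbatim}

Revenons maintenant à la deuxième partie de la \textit{Nih\=aya}.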
Un chapitre fait exception~: le chapitre de géographie où {Ibn al-\v{S}\=a\d{t}ir} nomme les principales villes et contrées situées sous chaque climat. Je me suis beaucoup aidé de l'atlas \cite{kennedy1987} des Kennedy pour identifier les toponymes. Le \textit{Z{\=\i}j al-jad{\=\i}d} d'{Ibn al-\v{S}\=a\d{t}ir} compte parmi les sources utilisées par les Kennedy~; hélas les toponymes de la \textit{Nih\=aya} ne sont pas un sous-ensemble de ceux du {Z{\=\i}j}, ni l'inverse. Il est fait mention, dans ce chapitre de la \textit{Nih\=aya}, d'une carte des fleuves et des montagnes hélas absente des manuscrits que j'ai consultés\footnote{\textit{Cf.} p.~\pageref{carte_fleuves_montagnes} \textit{infra}. Un vague schéma fait peut-être office de carte dans les manuscrits C~f.~52v et D~f.~54v ; je ne l'ai pas reproduit.}. \paragraph{Astronomie et cosmologie} La première partie de la \textit{Nih\=aya}, si originale, se conclut sur des considérations cosmologiques, dans la lignée du \textit{Livre des Hypothèses} de Ptolémée, sur les distances des astres à la Terre et la taille du Monde. Quelle valeur donner à un tel choix~? Ces considérations cosmologiques montrent d'abord un {Ibn al-\v{S}\=a\d{t}ir} conscient de la \emph{sensibilité} de certains calculs à la précision des données de l'observation. Premier talon d'Achille de la cosmologie antique~: la méthode attribuée à Aristarque pour calculer la distance Terre-Soleil est \emph{très sensible} à la précision d'une observation pourtant \emph{si peu précise} que pouvait l'être celle du rayon de l'ombre lors des éclipses de Lune\footnote{\textit{Cf.} note~\ref{sensibilite} p.~\pageref{sensibilite} \textit{infra}}. Cela n'empêche pas {Ibn al-\v{S}\=a\d{t}ir} d'en tirer un maigre indice sur l'ordre des planètes, Lune-Mercure-Vénus-Soleil-Mars-Jupiter-Saturne selon lui contrairement à ce qu'affirmait al-`Ur\d{d}{\=\i} à Maragha ; mais autre chose semble dominer son intérêt pour la question cosmologique. Profusion des distances, surfaces, volumes, vitesses calculées dans plusieurs unités, tantôt en parasanges, tantôt en rayons terrestres, tantôt par quinzième d'heure, tantôt toutes les quatre secondes\footnote{Quatre secondes~: la durée qu'il faut à un homme ``pour compter rapidement jusqu'à six'', \textit{cf.} \pageref{quatre_secondes} \textit{infra}.}~: établissement d'ordres de grandeurs. Car ce n'est que cela. L'édifice ne donne aucune certitude sur des valeurs exactes, non seulement à cause de la sensibilité des calculs, mais aussi parce que les modèles planétaires ne prescrivent qu'une borne inférieure à l'épaisseur relative de chaque système d'orbes\footnote{Voir fig.~\ref{fig002}(viii) p.~\pageref{fig002} \textit{supra}~: rien n'empêche la présence de matière superflue en deça et au delà des épicycles dans les orbes déférents.}~! {Ibn al-\v{S}\=a\d{t}ir} le répète sans cesse~: ces distances ne peuvent être moindres, mais elles pourraient être plus grandes. Il en est ainsi du rayon du Monde (supérieur ou égal à $79088$ rayons terrestres) comme de la vitesse des astres les plus éloignés, les étoiles entraînées par le mouvement diurne à une vitesse d'au moins $23$ rayons terrestres toutes les quatre secondes. Un intérêt tout physique pour la mesure du Monde. 
Ces considérations ne peuvent être justifiées que si elles sont compatibles avec la théorie planétaire~: or on ne peut faire justice à {Ibn al-\v{S}\=a\d{t}ir} sans insister sur le fait que, pour la première fois dans l'histoire de l'astronomie, la théorie planétaire et la cosmologie sont unies en un tout cohérent. Jusqu'alors en effet, le modèle de la Lune impliquait des variations de la distance Terre-Lune aberrantes et en contradiction flagrante avec l'observation~; mais la distance Terre-Lune déduite de la parallaxe étant un autre ingrédient essentiel de la cosmologie ancienne, cette contradiction était un second talon d'Achille pour l'édifice entier. Le modèle de la Lune d'{Ibn al-\v{S}\=a\d{t}ir} donne, pour la première fois, une estimation raisonnable des variations relatives de la distance Terre-Lune\footnote{\textit{Cf.} fig. \ref{fig016} p.~\pageref{fig016} \textit{infra}.}. \paragraph{Remerciements} Chers professeurs, collègues, amis et proches qui m'avez inspiré, conseillé, rectifié, écouté et répondu~: Roshdi Rashed, Christian Houzel, Régis Morelon, Pascal Crozet, Ahmad Hasnawi, Marouane Ben Miled, Philippe Abgrall, George Saliba, Guillaume Loizelet, Valeria Candeli, Aline Auger, Lila Lamrani, Houda Ayoub, les lecteurs anonymes de mon article \cite{penchevre2016}, et Françoise mon épouse~; ce travail n'aurait pu voir le jour sans votre aide. Merci. \include{fichier3} \include{fichier1} \pagebreak \begin{center} \Large Numéraux \end{center} $$\RL{'a}=1,\,\RL{b}=2,\,\RL{j}=3,\,\RL{d}=4,\,\RL{h h-}=5,$$ $$\RL{w}=6,\,\RL{z}=7,\,\RL{.h}=8,\,\RL{.t}=9,$$ $$\RL{_A y}=10,\,\RL{k}=20,\,\RL{l}=30,\,\RL{m}=40,\,\RL{n}=50,$$ $$\RL{s .s}=60,\,\RL{`}=70,\,\RL{f}=80,\,\RL{.s .z}=90,$$ $$\RL{q}=100,\,\RL{r}=200,\,\RL{^s s}=300,\,\RL{t _t}=400,\,\RL{_t}=500,$$ $$\RL{_h}=600,\,\RL{_d d}=700,\,\RL{.d}=800,\,\RL{.z .g}=900$$ $$\RL{.g ^s}=1000$$ \pagebreak
\newcommand{\sect}[1]{\section{#1} \setcounter{equation}{0}} \newtheorem{theo}{Theorem}[section] \begin{document} \newtheorem{lem}[theo]{Lemma} \newtheorem{prop}[theo]{Proposition} \newtheorem{coro}[theo]{Corollary} \title{Controlled Geometry via Smoothing \footnote{ 1991 {\em Mathematics Subject Classification}. Primary 53C20.}} \author{Peter Petersen\thanks{Partially supported by NSF and NYI grants} \\ \and Guofang Wei\thanks {Partially supported by NSF Grant \# DMS9409166. }\\ \and Rugang Ye\thanks{Partially supported by NSF Grant \# DMS9401106} } \date{} \maketitle \begin{abstract} We prove that Riemannian metrics with a uniform weak norm can be smoothed to having arbitrarily high regularity. This generalizes all previous smoothing results. As a consequence we obtain a generalization of Gromov's almost flat manifold theorem. A uniform Betti number estimate is also obtained. \end{abstract} \newcommand{\mbox{Tr}}{\mbox{Tr}} \newcommand{\mbox{inj}}{\mbox{inj}} \newcommand{\mbox{vol}}{\mbox{vol}} \newcommand{\mbox{diam}}{\mbox{diam}} \newcommand{\mbox{Ric}}{\mbox{Ric}} \newcommand{\mbox{conj}}{\mbox{conj}} \newcommand{\mbox{Hess}}{\mbox{Hess}} \newcommand{\mbox{div}}{\mbox{div}} \sect{Introduction} An ultimate goal in geometry is to achieve a classification scheme, using natural geometric quantities to characterize the topological type or diffeomorphism type of Riemannian manifolds. While this grand scheme seems to be an impossible dream, its basic philosophy has been a driving force in many important developments in Riemannian geometry. The sphere theorems and various topological finiteness theorems are typical examples. These results are concerned with control of global topology of manifolds, and a crucial point therein is to control, uniformly, the local topology. Control of local topology often follows from control of local geometry. Here, by local geometry, we mean the local behavior of the metric tensor. On the other hand, control of local geometry is frequently also the essential ingredient for control of global geometry, such as in Cheeger-Gromov's compactness theorem and its various extensions, which may be called geometric finiteness theorems. Notice that some rudimentary topological finiteness results are direct corollaries of geometric finiteness theorems. But the significance of the latter goes beyond this. In any case, control of local geometry is obviously a key topic. An interesting and important aspect of this topic is the various degrees of control of local geometry needed or available in different situations. In \cite{p}, the first author introduced a sequence of norms which provide a certain quantitative measure for local geometric control. These norms can be defined either in terms of the $C^{k,\alpha}$-norms or the $L^{k,p}$-norms for functions, and they are defined on a given scale. For example, the $C^{k,\alpha}$-norm of a Riemannian manifold on scale $r$ is bounded if it is covered by coordinate charts of size comparable to $r$ such that the metric tensor expressed in the coordinates is uniformly bounded in $C^{k, \alpha}$-norm, and that the coordinate transition functions are uniformly bounded in $C^{k+1, \alpha}$-norm. Note that the local topology is uniformly trivial if one of these norms on some scale is bounded. To admit richer topological and geometric structures under norm bounds, we shall introduce a weak version of these norms. The essential new feature is that we allow coordinate maps to have double points.
In spirit, this is similar to replacing an injectivity radius bound by a conjugate radius bound. (Of course, e.g. a weak (harmonic) $C^{0, \alpha}$ bound is so weak that it is far from implying a conjugate radius bound.) Indeed, our basic theme is to try to find the minimal degree of control of local geometry under which interesting geometric and topological consequences can be drawn. Traditional geometric conditions such as curvature bounds imply various degrees of local geometric control, so we can think from their perspectives. Historically, sectional curvature bounds were the first to be systematically studied. They can roughly be compared with weak $C^2$-norm bounds; at least the latter imply the former. Since understanding of sectional curvature bounds has been reached on a good level, it is natural to try to find the minimal degree of local geometric control under which a metric can be approximated by metrics with sectional curvature bounds or weak $C^2$-norm bounds (or better bounds). In this paper, we present a result towards this goal, along with some applications. The local geometric control we need is as weak as a bound on the weak harmonic $C^{0, \alpha}$-norm, or a bound on the weak $L^{1, p}$-norm. These do appear to be the sought-after minimal degree of local geometric control in our set-up. To formulate the result precisely, we introduce the following classes of Riemannian manifolds. (The definitions of the weak norms are given in \S 2.) \newline \noindent {\bf Definition}. Given $n \geq 2$, $0 < \alpha < 1$, $p > n$ and a function $Q: (0, \infty) \rightarrow [0, \infty)$ which is nondecreasing in $r$ and satisfies $\lim_{r \rightarrow 0} Q(r) = 0$, we define \[ {\cal M}(n, \alpha, Q) = \left\{ (M, g) \left| \begin{array}{l} (M,g) \mbox{ is a complete Riemannian manifold}, \dim M = n, \\ \mbox{the weak harmonic}\ C^{0, \alpha}\ \mbox{norm}\ \|(M,g)\|^{W,h}_{C^{0,\alpha}, r} \leq Q(r)\\ \mbox{ for all positive }\ r \leq 1 \end{array} \right. \right\}, \] and \[ {\cal M}(n, p, Q) = \left\{ (M, g) \left| \begin{array}{l} (M,g) \mbox{ is a complete Riemannian manifold}, \dim M = n, \\ \mbox{the weak}\ L^{1,p}\ \mbox{norm}\ \|(M,g)\|^{W}_{L^{1,p}, r} \leq Q(r)\\ \mbox{ for all positive}\ r\leq 1 \end{array} \right. \right\}. \] \noindent {\em Remark} 1. By Anderson-Cheeger's work \cite{ac} manifolds with a lower bound for Ricci curvature and a positive lower bound for conjugate radius belong to these two classes. \noindent {\em Remark} 2. It is unknown whether (weak) $L^{1,p}$ bounds imply (weak) harmonic $C^{0, \alpha}$ bounds. (By the Sobolev embedding, they do imply $C^{0,\alpha}$ bounds, but not yet harmonic $C^{0, \alpha}$ bounds.) In other words, it is unknown whether controlled $C^{0,\alpha}$ harmonic coordinates exist under the assumption of a (weak) $L^{1, p}$ bound. This question seems rather subtle. We plan to return to it in the future. At the moment, we consider the said two bounds as independent conditions. Note that one can further ask whether (weak) $C^{0, \alpha}$ bounds imply (weak) harmonic $C^{0, \alpha}$ bounds. There does not seem to be any evidence to support an answer in the affirmative. 
\\ \begin{theo} \label{main} For every manifold $(M,g)$ in ${\cal M}(n, \alpha, Q)$ and every positive number $\epsilon$, there are metrics $g_\epsilon$ on $M$ such that \begin{eqnarray} e^{-\epsilon}g \leq & g_\epsilon &\leq e^\epsilon g \label{t1},\\ \| (M,g_\epsilon)\|^{W}_{C^{0,\alpha}, r} & \leq & 2Q(r) \label{t2}, \\ \| (M,g_\epsilon)\|^W_{C^{k,\alpha}, r} & \leq & \widetilde{Q}, \end{eqnarray} \noindent where $k$ is an arbitrary positive integer, $0 < r \leq 1$ and $\widetilde{Q} = \widetilde{Q} (n, k , \epsilon, \alpha, Q(r))$ denotes a positive number depending only on $n, k, \epsilon, \alpha$ and $Q(r)$. \end{theo} \begin{theo} \label{main'} For every closed manifold $(M,g)$ in ${\cal M}(n, p, Q)$ and every positive number $\epsilon$, there are metrics $g_\epsilon$ on $M$ such that \begin{eqnarray} e^{-\epsilon}g \leq & g_\epsilon &\leq e^\epsilon g \label{t1'},\\ \| (M,g_\epsilon)\|^{W}_{L^{1,p}, r} & \leq & 2Q(r) \label{t2'}, \\ \| (M,g_\epsilon)\|^W_{L^{k,p}, r} & \leq & \widetilde{Q}, \end{eqnarray} \noindent where $k$ is an arbitrary positive integer, $0 < r \leq 1$ and $\widetilde{Q} = \widetilde{Q} (n, k , \epsilon, p, Q(r))$ denotes a positive number depending only on $n, k, \epsilon, p$ and $Q(r)$. \end{theo} \noindent {\em Remark}. Theorem 1.2 actually also holds for complete, non-compact manifolds. This will be shown in a paper by the third author. \\ Thus a metric with some regularity (given by the weak norm) can be deformed or smoothed to a nearby one with arbitrarily high regularity. In particular, {\it manifolds with a lower bound on Ricci curvature and a positive lower bound on conjugate radius can be smoothed.} Previous smoothing results have been concerned with metrics with various curvature bounds, and involved two independent techniques: the embedding method and the Ricci flow. The embedding technique in smoothing as used by Cheeger-Gromov \cite{cg} consists of embedding (or immersing) a given manifold into a Euclidean space and then perturbing it suitably by a smoothing operation, which is based on the classical convolution process. The smoothing result in \cite{cg} is that metrics on closed manifolds with lower and upper bounds on sectional curvatures and a positive lower bound on injectivity radius can be smoothed to metrics with bounds on all derivatives of the Riemann curvature tensor. Later, by embedding into a Hilbert space instead of a finite dimensional space, Abresch \cite{a} was able to remove the condition on injectivity radius and extend to complete manifolds. More recently Shen \cite{sh} showed that manifolds with a lower bound on sectional curvatures and a positive lower bound on injectivity radius can be smoothed to having two-sided sectional curvature bounds. The technique of Ricci flow is based on the fundamental work of Hamilton \cite{ha}. Using this technique, Bemelmans-Min Oo-Ruh \cite{bmr} obtained the same result as in \cite{cg} without injectivity radius lower bound, and Shi \cite{s} obtained the same result as in \cite{a}. Later work considers metrics with other kinds of curvature bound. For example, in \cite{y1,y2} Yang dealt with integral bounds on sectional curvatures. In \cite{dwy}, Ricci curvature bounds were treated. By virtue of the available constructions of controlled harmonic coordinates under various curvature bounds, all these smoothing results are consequences of Theorem~\ref{main} or Theorem~\ref{main'}. As typical applications we present the following two results. 
\begin{theo}[Betti number estimate] For the class of manifolds $M^n$ in \\ ${\cal M}(n, \alpha, Q)$ or in ${\cal M}(n, p, Q)$, and satisfying $\mbox{diam}_M \leq D$, we have the estimate for the Betti numbers \begin{equation} \sum_i b^i(M^n) \leq C(n,D,\alpha,Q)\ \mbox{or}\ C(n,D,p,Q),\label{be} \end{equation} and the estimate for the number of isomorphism classes of rational homotopy groups \begin{equation} \pi_q(M) \otimes Q \leq C(n,q,D,\alpha,Q)\ \mbox{or}\ C(n,q,D,p,Q)\ \mbox{for}\ q \geq 2. \label{hge} \end{equation} \end{theo} (\ref{be}) follows from Theorem~\ref{main}, \ref{main'} and Gromov's uniform betti number estimate regarding sectional curvature \cite{g2}. This estimate can also be proved directly using Toponogov type comparison estimate introduced in \cite{w}, see \cite{pw} for details. In \cite{w} the same estimate (\ref{be}) is given for the class of manifolds satisfying $\mbox{Ric}_M \geq -(n-1)H$, conj $\geq r_0$ and $\mbox{diam}_M \leq D$. (\ref{hge}) follows from Theorem~\ref{main}, \ref{main'} and the results in \cite{r}. \begin{theo} \label{aflat} There exists an $\epsilon =\epsilon(n,\alpha,Q)$ or $\epsilon(n,p,Q)> 0$ such that if a manifold $M^n$ belongs to ${\cal M}(n, \alpha, Q)$ or ${\cal M}(n, p, Q)$ and diam $\leq \epsilon$, then $M$ is diffeomorphic to an infranilmanifold. \end{theo} This generalizes Gromov's almost flat manifold theorem \cite{g1} as well as its generalization in \cite{dwy}. (The proof is simple: combine Theorem 1.1 with \cite{g1}.) The proof of Theorem~\ref{main} and Theorem 1.2 uses the embedding method in \cite{a}. Roughly speaking, we embed a given Riemannian manifold into the Hilbert space of $L^2$-functions on it, and then use the embedding to pull back the $L^2$-metric of the Hilbert space. The crucial point is of course to find a suitable embedding, such that the pull-back metric will enjoy nice properties. In \cite{a}, the embedding is defined in terms of distance functions. In our situation, these functions are not appropriate, and we employ instead solutions of a canonical geometric partial differential equation. Now if e.g. the harmonic $C^{0,\alpha}$-norm of the manifold is bounded, then a uniform pointwise bound on sectional curvatures will hold for the pull-back metric, and hence we can apply the smoothing results for metrics with sectional curvature bounds as given e.g. in \cite{a} or \cite{s}. If we only assume that the weak harmonic $C^{0, \alpha}$-norm of the manifold is bounded, i.e. it is in the class ${\cal M}(n, \alpha, Q)$, the global embedding is generally not under control. To remedy the situation, we follow the idea in \cite{a} of employing instead local embeddings. In \cite{a}, Abresch uses the exponential map to lift local patches of the manifold and his local embeddings are exactly embeddings of these lifted patches. In our situation, the exponential map is not suitable. Our substitute for it is the coordinate maps. Thus we use them to lift local patches, and construct embeddings of the lifted patches via the same geometric partial differential equation as mentioned before. To make sure that the pull-back metrics induced by these local embeddings descend to the local patches and that the resulting metrics patch together to define a metric globally, it is crucial to require the embeddings to be equivariant under isometries. Since our embeddings are defined in terms of solutions of a canonical geometric PDE, they naturally share this equivariance property. 
Basically, the above scheme also works for manifolds in the class ${\cal M}(n, p, Q)$, but some modifications are necessary. As before, the said pull-back metrics descend to yield a new metric on the underlying manifold. But these metrics satisfy here an integral bound on sectional curvatures rather than a pointwise bound. This is a new situation. To handle it, we apply the Ricci flow and follow the arguments in \cite{dwy}. A pointwise bound on Ricci curvature is used in several places in \cite{dwy}. Since no such bound is available in our current situation, the arguments in \cite{dwy} need to be improved and modified. The result we thus arrive at not only completes the smoothing scheme for the class ${\cal M}(n, p, Q)$, but also provides some new understanding of short time existence of the Ricci flow. \sect{Norm, Weak Norm and Smoothing} Fix an integer $k \geq 0$ and a number $0 \leq \alpha \leq 1$. The {\it $C^{k,\alpha}$-norm } of an $n$-dimensional Riemannian manifold $(M,g)$ on scale $r$, $\|(M,g)\|_{C^{k,\alpha}, r}$, is defined to be the infimum of positive numbers $Q$ such that there exist embeddings: $$\varphi_s: B(0,r) \subset R^n \rightarrow U_s \subset M$$ ($B(0,r)$ denotes the closed ball of radius $r$ centered at the origin) with images $U_s$, $s \in \cal S$ (an index set), with the following properties:\\ 1) $e^{-Q} \delta_{ij} \leq g_{s,ij} \leq e^Q \delta_{ij}$, \\ 2) Every metric ball $B(p, \frac{r}{10}e^{-Q}),\ p\in M$ lies in some set $U_s$,\\ 3) $r^{|j|+ \alpha} \|\partial^j g_{s,ij}\|_{C^\alpha} \leq Q$ for all multi-indices $j$ with $0 \leq |j| \leq k$. \\ Here $g_{s,ij}$ denote the coefficients of $g_s=\varphi_s^*g$ on $B(0,r)$, and $\delta_{ij}$ are the Kronecker symbols. Note that this definition is slightly different from the corresponding one in \cite{p}, where in addition the (rescaled) $C^{k+1, \alpha}$-norms of the transition functions are required to be under control. For convenience, we can call the $C^{k, \alpha}$-norm (of Riemannian manifolds) as defined in \cite{p} the {\it strong $C^{k, \alpha}$-norm}. (Note however that the ``strong'' harmonic $C^{k, \alpha}$-norm is equivalent to the harmonic $C^{k, \alpha}$-norm.) We define the harmonic $C^{k,\alpha}$-norm on scale $r$, $\|(M,g)\|^h_{C^{k, \alpha}, r}$, by requiring additionally the following\\ 4) $\varphi_s^{-1}: U_s \rightarrow R^n$ is harmonic, \\ which is equivalent to saying that \\ 4$'$) id $: B(0,r) \rightarrow B(0,r)$ is harmonic with respect to $g_{s}$ on the domain and the Euclidean metric on the target, which is in turn equivalent to saying that $$\sum_i \partial_i (g_s^{ij} \sqrt{ \det g_{s,ij}})= 0$$ for all $j$. If $k \geq 1$ and $p >n$ (when $k=1$) or $p > \frac{n}{2}$ (when $k \geq 2$), then we define the $L^{k,p}$-norm on the scale of $r$, $\|(M,g)\|_{L^{k,p}, r}$, by retaining 1) and 2), and replacing 3) by\\ 3) $r^{|j|-\frac{n}{p}} \|\partial^j g_{s,ij}\|_{L^p} \leq Q$ for all $1 \leq |j| \leq k$. The harmonic $L^{k,p}$-norm is defined similarly. For any choice of these norms, it is clear that the local topology is trivial on some uniform scale for any class of manifolds with uniformly bounded norm. (Note that the injectivity radius may not be uniformly positive though.) To allow nontrivial local topology, we introduce the weak norms $\|\ \ \|^W_{C^{k,\alpha},r}$ and $\|\ \ \|^W_{L^{k,p},r}$, which are defined in identical ways except that each $\varphi_s: B(0,r) \rightarrow U_s$ is assumed to be a {\it local} diffeomorphism instead of a diffeomorphism. 
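Condition 4$'$) is simply the divergence form of harmonicity of the coordinate functions: applying the Laplace-Beltrami operator $\Delta_g f = \frac{1}{\sqrt{\det g}}\,\partial_i\big(\sqrt{\det g}\, g^{ij}\partial_j f\big)$ to $f = x^j$ gives exactly the displayed identity. A minimal SymPy sketch of this computation for a generic two-dimensional metric (purely illustrative):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)
E, F, G = (sp.Function(name)(x, y) for name in ('E', 'F', 'G'))
g = sp.Matrix([[E, F], [F, G]])            # generic symmetric 2x2 metric
ginv, sqrt_det = g.inv(), sp.sqrt(g.det())
coords = (x, y)

def laplace_beltrami(f):
    # Delta_g f = (1/sqrt(det g)) * d_i( sqrt(det g) g^{ij} d_j f )
    return sum(sp.diff(sqrt_det * sum(ginv[i, j] * sp.diff(f, coords[j])
                                      for j in range(2)), coords[i])
               for i in range(2)) / sqrt_det

for j, xj in enumerate(coords):
    divergence_form = sum(sp.diff(ginv[i, j] * sqrt_det, coords[i])
                          for i in range(2)) / sqrt_det
    assert sp.simplify(laplace_beltrami(xj) - divergence_form) == 0
\end{verbatim}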
The corresponding weak harmonic norms $\|\ \ \|^{W,h}_{C^{k,\alpha},r}$ and $\| \ \ \|^{W,h}_{L^{k,p},r}$ are defined in a similar way, with 4) being replaced by 4$'$). Note that (weak) harmonic norms dominate (weak) norms on the same scale. We also have $\|\ \ \|^W_{\ ,r} \leq \|\ \ \|_{\ ,r}$ and $\|\ \ \|^{W,h}_{\ ,r} \leq \|\ \ \|^h_{\, r}$. All norms are continuous and non-decreasing in $r$. If $(M,g)$ is sufficiently smooth, these norms converge to zero as $r \rightarrow 0$. Furthermore, (weak) $C^{k,\alpha}\ (L^{k,p})$ norms vary continuously in the $C^{k,\alpha}\ (L^{k,p})$ topology of Riemannian manifolds. See \cite{p} for the relevant details. We point out that $R^n$ is the only space with norm $=0$ on all scales. And flat manifolds are the only spaces with weak norm $=0$ on all scales. Conventional geometric conditions such as curvature bounds imply norm bounds. Such implications are mostly contained in constructions of controlled harmonic coordinates and are a crucial ingredient for various compactness theorems. To have a clear perspective, we collect these results in the following proposition. \begin{prop} There is a $Q(H,i_0,r,p)$ with $\lim_{r \rightarrow 0} Q(H,i_0,r,p) =0$ such that for manifolds with \\ a) $|K| \leq H, \ \mbox{inj} \geq i_0$, then $\|(M,g)\|^h_{L^{2,p},r} \leq Q(H,i_0,r,p)$; \\ b) $|K| \leq H$, then $\|(M,g)\|^{W,h}_{L^{2,p},r} \leq Q(H,r,p)$; \\ c) $|\mbox{Ric} | \leq (n-1)H,\ \mbox{inj} \geq i_0$, then $\|(M,g)\|^h_{L^{2,p},r} \leq Q(H,i_0,r,p)$; \\ d) $\mbox{Ric} \geq -(n-1)H,\ \mbox{inj} \geq i_0$, then $\|(M,g)\|^h_{L^{1,p},r} \leq Q(H,i_0,r,p)$; \\ e) $\mbox{Ric} \geq -(n-1)H$, conj $ \geq i_0$, then $\|(M,g)\|^{W,h}_{L^{1,p},r} \leq Q(H,i_0,r,p)$. \end{prop} These results follow from works of Jost-Karcher \cite{jk}, Anderson \cite{an} and Anderson-Cheeger \cite{ac}. We now turn to the smoothing question. As explained in the introduction, our strategy is to first achieve sectional curvature bounds by embedding into the Hilbert space of $L^2$-functions. This is done in the next two sections. The higher regularity smoothing then easily follows from known smoothing results. Consider $(M,g) \in {\cal M}(n, \alpha, Q)$. We have a collection of local diffeomorphisms \[ \varphi_s:\ B(0,r) \rightarrow U_s \subset M \] satisfying 1), 2), 3) and 4$'$). In the next section we will construct a canonical embedding \[ F_s: (B(0,r),g_s) \rightarrow L^2 (B(0,r), g_s), \] where $g_s = \varphi_s^*g$. We use $F_s$ to pullback the $L^2$ metric of $L^2 (B(0,r), g_s)$ to produce a new metric $\tilde{g}_s$ on $B(0,r)$. This construction works for general metrics on $B(0,r)$, and has the following equivariance property, which will be proved in \S 4. Namely, if $g_1,\ g_2$ are two metrics on $B(0,r)$ such that there is an isometric embedding \[ \psi: (B(0,r), g_1) \rightarrow (B(0,r),g_2) \] and if $\tilde{g}_1,\ \tilde{g}_2$ are obtained via the above construction, then \[ \psi: (B(0,r), \tilde{g}_1) \rightarrow (B(0,r), \tilde{g}_2) \] is also an isometric embedding. Granted this (see Proposition~\ref{4.4}) we have \begin{prop} There exists a smooth metric $\bar{g}$ on $M$ such that the pullback of $\bar{g}$ by $\varphi_s$ is exactly $\tilde{g}_s$. \end{prop} \noindent {\em Proof.} Let $r_1= \frac{r}{10} e^{-Q}$. Then for every $p \in M$, $B(p,r_1) \subset U_s$ for some $s$. It follows that there exists a $\tilde{p} \in B(0,r)$ such that $B_{g_s}(\tilde{p},r_1) \subset B(0,r)$ and $\varphi_s (\tilde{p}) = p$. We now define the metric $\bar{g}$ as follows. 
If $X,Y \in T_pM$, then \[ \bar{g}(X,Y) = \tilde{g}_s \left(((\varphi_s)_*|_{\tilde{p}})^{-1}(X), ((\varphi_s)_*|_{\tilde{p}})^{-1}(Y)\right). \] To show that this metric is well-defined, let $\tilde{p}'$ be another such point, i.e. for some $s'$, $B_{g_{s'}}(\tilde{p}',r_1) \subset B(0,r)$ and $\varphi_{s'} (\tilde{p}') =p$. Let $r_4 = \frac{r}{20} e^{-4Q}$ and $r_3 = \frac{r}{20} e^{-3Q}$. Denote by $g_0$ the Euclidean metric. Then we can show that \begin{lem} \label{hom} There is an isometric embedding \[ \psi: (B_{g_0}(\tilde{p},r_4),g_s) \rightarrow (B_{g_{s'}} (\tilde{p}',r_3),g_{s'}). \] \end{lem} \noindent {\em Proof.} First $\psi$ can be defined as follows. Since $g_s$ is $e^Q$-quasi-isometric to $g_0$, \begin{equation} \label{r34} B_{g_0}(\tilde{p}, r_4) \subset B_{g_s} (\tilde{p}, r_3). \end{equation} For any point $q \in B(\tilde{p},r_4)$, connect $q$ to the center point $\tilde{p}$ with a curve $\tilde{\gamma}$ in $B_{g_0}(\tilde{p},r_4)$ such that the length of $\tilde{\gamma}$ satisfies $l_{g_s}(\tilde{\gamma}) < r_3$. Since \[ \varphi_s:\ (B_{g_s} (\tilde{p}, r_3), g_s) \rightarrow B(p,r_3) \] is a local isometry and $\varphi_s (\tilde{p}) = p$, it follows from (\ref{r34}) that $\varphi_s$ maps the curve $\tilde{\gamma}$ to a curve $\gamma$ in $B(p,r_3)$ starting at $p$ with $l(\gamma) < r_3$. Again, since $\varphi_{s'}$ is a local isometry and $\varphi_{s'} (\tilde{p}') =p$, the curve $\gamma$ can then be lifted via $\varphi_{s'}$ to a curve in $B_{g_{s'}}(\tilde{p}',r_3)$ starting at $\tilde{p}'$. The other end point of this curve is defined to be the image of $q$. (Note that, in general, lifting cannot be done for incomplete spaces. Here the map is a local isometry and the curve starts from the center, and we have control on the length of the curve and the size of the metric ball, so it will not hit the boundary during lifting.) Now we will show that $\psi$ is well-defined, i.e. the image is independent of the choices of the curve $\tilde{\gamma}$. If $\tilde{\gamma}_1$ is another curve in $B_{g_0}(\tilde{p},r_4)$ connecting $q$ to the center point $\tilde{p}$ with $l_{g_s}(\tilde{\gamma}_1) < r_3$, we can connect $\tilde{\gamma}$ and $\tilde{\gamma}_1$ by a homotopy $\tilde{H}(s,t)$ inside $B_{g_0}(\tilde{p},r_4)$ with $l_{g_s}(\tilde{H}(s,\cdot)) < 2r_3$ for each $s$, since $B_{g_0}(\tilde{p},r_4)$ is a Euclidean ball and $g_s$ is $e^Q$-quasi-isometric to $g_0$. Then $\varphi_s$ maps $\tilde{H}(s,t)$ to a homotopy $H(s,t)$ in $B(p,2r_3)$ with $l(H(s,\cdot)) < 2r_3$ for each $s$. Therefore $H(s,t)$ can be lifted via $\varphi_{s'}$ to a homotopy in $B_{g_{s'}}(\tilde{p}',2r_3)$ starting at $\tilde{p}'$. By the (localized) homotopy lifting lemma the other end points are all the same. Therefore $\psi$ is well-defined. Next we show that $\psi$ is one-to-one. Let $r_2 = \frac{r}{20} e^{-2Q}$. Then $$B_{g_{s'}} (\tilde{p}',r_3) \subset B_{g_0}(\tilde{p}',r_2) \subset B_{g_{s'}} (\tilde{p}',\frac{1}{2}r_1).$$ Since $B_{g_0}(\tilde{p}',r_2)$ is a Euclidean ball, one can construct an ``inverse'' $\phi$ similarly to the above: \[ \phi: \ (B_{g_{s'}} (\tilde{p}',r_3),g_{s'}) \rightarrow (B_{g_s}(\tilde{p},\frac{1}{2}r_1),g_s). \] Thus $\psi$ is one-to-one. That $\psi$ is an isometric embedding follows from the construction. \hspace*{\fill}\rule{3mm}{3mm}\quad Now using the equivariance, we have \[ \psi^* \tilde{g}_{s'} = \tilde{g}_s. 
\] Therefore \begin{eqnarray*} \lefteqn{\tilde{g}_{s'}\left(((\varphi_{s'})_*|_{\tilde{p}'})^{-1}(X), ((\varphi_{s'})_*|_{\tilde{p}'})^{-1}(Y)\right)} \\ & = & \tilde{g}_{s'}\left((\psi)_* \circ ((\varphi_s)_*|_{\tilde{p}})^{-1}(X), (\psi)_* \circ ((\varphi_s)_*|_{\tilde{p}})^{-1}(Y)\right) \\ & = & \psi^* \tilde{g}_{s'} \left(((\varphi_s)_*|_{\tilde{p}})^{-1}(X), ((\varphi_s)_*|_{\tilde{p}})^{-1}(Y)\right) \\ & = & \tilde{g}_s \left(((\varphi_s)_*|_{\tilde{p}})^{-1}(X), ((\varphi_s)_*|_{\tilde{p}})^{-1}(Y)\right). \end{eqnarray*} To show that $\varphi_s^* \bar{g} = \tilde{g}_s$, consider \[ \varphi_s : \ B(0, \frac{9}{10}r) \rightarrow U_s. \] In particular, for any $\tilde{p} \in B(0, \frac{9}{10}r)$, $B_{g_s}(\tilde{p},r_1) \subset B(0,r)$, and therefore $\tilde{p}$ can be used to define the metric $\bar{g}$ at $\varphi_s (\tilde{p})$. It follows from the definition that \begin{equation} \label{mm} \varphi_s^* \bar{g} = \tilde{g}_s. \end{equation} Finally, note that the smoothness of the metric $\bar{g}$ is an immediate consequence of (\ref{mm}). \hspace*{\fill}\rule{3mm}{3mm}\quad \sect{Embedding I} We continue with the above manifold $(M, g)$. Let $\Omega = B(0, r) \subset R^n$ with a pull back metric $\varphi_s^*g$. For convenience, this metric will be denoted by $g$. It is easy to see that $\|(\Omega, g)\|^h_{C^{0, \alpha}, r} \leq Q(r)$. We are going to construct an equivariant embedding of $(\Omega, g)$ into $L^2(\Omega, g)$ by associating to every point $p \in \Omega$ a geometric function $f_p \in L^2(\Omega) \equiv L^2(\Omega, g)$, which depends nicely on $p$. A natural choice seems to be the distance function measured from $p$. Indeed, it is used by Abresch in \cite{a}. However, under our rather weak assumptions on the metric it is impossible to have uniform control of the second order derivative of the distance function, which is needed to ensure that the pull-back metric induced by the embedding satisfies a sectional curvature bound. In fact, one can not even expect differentiability of the distance function in balls of uniform size. Our substitute for the distance function is solutions of a canonical geometric partial differential equation. Those solution functions have the crucial equivariance ( like the distance functions) and enjoy better regularity. Many choices of ``canonical" PDE solutions are possible, e.g. in \cite{a} Green's function is suggested. But Green's function is inconvenient because of its singularity. We shall employ a very simple and nicely-behaved PDE. Denote \[ \Omega_1 = \Omega \setminus \cup_{q \in \partial \Omega} \overline{B_g(q, i_0)}, \] where $i_0 =\frac{r}{10}$. ($B^g(q, \cdot)$ denotes the closed geodesic ball of center $q$ and radius $\cdot$ measured in $g$.) Then for $s \in \Omega_1$, let $h_s \in L^{1,2}_0 (\Omega)$ be the unique weak solution of the following Dirichlet boundary value problem: \begin{equation} \label{h} \left\{ \begin{array}{rcll} \Delta h_s & = & -1 & \mbox{in} \ B_g(s,i_0) \\ h_s & \equiv & 0 & \mbox{on} \ \partial B_g(s,i_0). \end{array} \right. \end{equation} Here the Laplace operator is defined with respect to the metric $g$. The function $h_s$ will be extended to be zero outside the geodesic ball. Since the harmonic $C^{0,\alpha}$-norm of $(\Omega, g)$ is uniformly bounded, it is easy to see that a uniform Poincare inequality holds on the balls $B_g(s,i_0)$ with dependence on $i_0$. A simple integration argument then yields a uniform estimate of the Sobolev norm of $h_s$. 
Uniform interior $C^{2, \alpha}$ estimates then follow readily, because in harmonic coordinates the Laplace operator takes the form $\Delta = g^{ij} \partial_i \partial_j$. We also have a uniform $L^{\infty}$ estimate up to boundary, but it seems impossible to obtain better estimates up to the boundary because the control of the geometry of the boundary is very weak. At first glance this appears to threaten to destroy the embedding scheme. Fortunately we have a way to get around it. On the other hand, we cannot obtain control of the dependence of $h_s$ on the center $s$. To remedy this, we shall take a suitable average of $h_s$ over $s$. The resulting new family of functions will depend nicely on the center. Now let us state a few basic properties of the functions $h_s$ in the following proposition, which will be proved at the end of this section. Here, as before, we work under the assumption $\|(\Omega, g)\|^h_{C^{0, \alpha}, r} \leq Q(r)$. \begin{prop} \label{bh} Let $\bar{h}_s(p)$ be the solution of equation (\ref{h}) with respect to the canonical Euclidean metric $g_0$ on the Euclidean ball $B_{g_0}(s,i_0)$. Then for any $\epsilon > 0$ and fixed $0<R<1$, there is an $r_0 = r_0 (\epsilon,R,Q) > 0$ such that if $i_0 \leq r_0$, \begin{eqnarray} |h_s(p) - \bar{h}_s(p)| & < & \epsilon i_0^2, \label{c1} \\ | \frac{\partial}{\partial p} h_s(p) - \frac{\partial}{\partial p} \bar{h}_s(p)| & < &\epsilon i_0 \label{c2} \end{eqnarray} for all $s$ and all $p$ with $d_{g_0}(s,p) \leq Ri_0$. It will follow from the proof that $B_{g_0}(s,Ri_0) \subset B_g(s, i_0)$ so that these estimates make sense. Also \begin{equation} \label{ub} | \frac{\partial^2}{\partial p^2} h_s(p) |\leq C(n, Q,R), \ \ |\frac{1}{i_0} \frac{\partial}{\partial p} h_s(p) |\leq C(n, Q,R). \end{equation} \end{prop} Note that \begin{equation} \bar{h}_s(p) = \frac{1}{2n} (i_0^2 - d_{g_0}^2(s,p)). \end{equation} Therefore $\frac{2n}{i_0^2} \bar{h}_s(p) \leq \frac{1}{5}$ when $d_{g_0}(s,p) \geq \sqrt{\frac{4}{5}}i_0$. Choosing $R = \frac{10}{11}$ in Proposition~\ref{bh}, we have $\frac{2n}{i_0^2} h_s(p) < \frac{1}{4}$ when $ \sqrt{\frac{4}{5}}i_0 \leq d_{g_0}(s,p) \leq \frac{10}{11}i_0$ and $i_0$ is sufficiently small. Let $\beta = \beta_n \in C^\infty_0 ([0,\infty))$ be a cut-off function: \[ \beta_n(t) = \left\{ \begin{array}{ll} 0 & \mbox{if} \ 0 \leq t \leq \frac{1}{4} \\ B_n & \mbox{if} \ t \geq \frac{1}{2} \end{array} \right., \] where $B_n$ is a constant which will be determined later. Then $\beta \left( \frac{2n}{i_0^2}h_s(p)\right) = 0$ near the sphere $d_{g}(s,p) = \frac{9}{10}i_0$ for all $i_0$ small. (Note that $d_{g}$ converges to $d_{g_0}$ when $i_0 \rightarrow 0$.) We define a new function which is $\beta \left( \frac{2n}{i_0^2}h_s(p)\right)$ restricted to the ball $B(s, \frac{9}{10}i_0)$ and identically zero outside. For simplicity we still denote this new function by $\beta \left( \frac{2n}{i_0^2}h_s(p)\right)$. As mentioned before, we have no control of the dependence of $h_s$ on the center $s$. The said average function is given as follows: \begin{equation} f_p(q) = \int_\Omega \beta \left( \frac{2n}{i_0^2}h_s(p)\right) \beta \left( \frac{2n}{i_0^2}h_s (q) \right)\, ds. \end{equation} Note that $f_p(q)$ is symmetric in $p$ and $q$ and is $C^{2,\alpha}$ uniformly bounded in both variables. 
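As a quick check on the closed form for $\bar{h}_s$ quoted above, the following minimal SymPy sketch (illustrative only) verifies that $\frac{1}{2n}\big(i_0^2-d_{g_0}^2(s,p)\big)$ satisfies $\Delta h = -1$ and vanishes on $\partial B_{g_0}(s,i_0)$, for a few small dimensions:
\begin{verbatim}
import sympy as sp

i0 = sp.symbols('i0', positive=True)

for n in (2, 3, 4):
    x = sp.symbols(f'x0:{n}', real=True)              # coordinates, center s at the origin
    h_bar = (i0**2 - sum(xi**2 for xi in x)) / (2*n)  # candidate (1/2n)(i_0^2 - d^2)
    laplacian = sum(sp.diff(h_bar, xi, 2) for xi in x)
    assert sp.simplify(laplacian + 1) == 0            # Delta h_bar = -1 inside the ball
    boundary_point = {x[0]: i0, **{xi: 0 for xi in x[1:]}}
    assert h_bar.subs(boundary_point) == 0            # h_bar vanishes at radius i_0
\end{verbatim}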
Now we define the embedding \begin{eqnarray*} F: \ \Omega_1 & \rightarrow & L^2(\Omega, g)\\ p & \rightarrow & i_0^{-\frac{3}{2}n+1} f_p(q) \end{eqnarray*} Note that \begin{eqnarray} d_{v_p}F : & q \longmapsto & 2ni_0^{-\frac{3}{2}n}\int_\Omega \beta'\left( \frac{2n}{i_0^2}h_s(p)\right) \langle \frac{1}{i_0}\nabla_{v_p} h_s(p), v_p \rangle \beta \left( \frac{2n}{i_0^2}h_s (q) \right)\, ds, \label{dff}\\ \nabla^2_{v_p,w_p}F: & q \longmapsto & 4n^2i_0^{-\frac{3}{2}n-1}\int_\Omega \beta''\left( \frac{2n}{i_0^2}h_s(p)\right) \langle \frac{1}{i_0}\nabla h_s(p), v_p \rangle \langle \frac{1}{i_0}\nabla h_s(p), w_p \rangle \beta \left( \frac{2n}{i_0^2}h_s (q) \right)\, ds \nonumber \\ & & + 2ni_0^{-\frac{3}{2}n-1}\int_\Omega \beta'\left( \frac{n}{i_0^2}\right) \nabla^2_{v_p,w_p} h_s(p) (v_p,w_p) \beta \left( \frac{n}{i_0^2}h_s (q) \right)\, ds. \label{dff2} \end{eqnarray} We first show that when $\Omega$ is an Euclidean domain, we can normalize $\beta$ so that $F$ is an isometric imbedding. In this case the imbedding function \[ \bar{f}_p(q) = \int_\Omega \beta \left(1-\frac{d_{g_0}^2(p,s)}{i^2_0}\right) \beta \left(1-\frac{d_{g_0}^2(q,s)}{i^2_0}\right)\, ds.\] By the symmetry of the integration domain, $B(p, \frac{\sqrt{3}i_0}{2}) \cap B(q, \frac{\sqrt{3}i_0}{2})$, and the integrand, $\bar{f}_p(q)$ depends only on $d_{g_0}(p,q)$ (and $\beta$). We can write $\bar{f}_p(q) = i^n_0 \tilde{f} (n, \frac{1}{i_0}d_{g_0}(p,q))$, $ d_{v_p}\bar{F}(q) = i_0^{-n/2}\tilde{f}' (n, \frac{1}{i_0}d_{g_0}(p,q)) \langle \nabla d_{g_0}(p,q), v_p \rangle$. Then \begin{eqnarray*} \| d_{v_p}\bar{F}\|^2_{L^2(\Omega)} & = & i_0^{-n}\int_{B(p, 2i_0)} \tilde{f}'^2 (n, \frac{1}{i_0}d_{g_0}(p,q)) \langle \nabla d_{g_0}(p,q), v_p \rangle^2 dq \\ & = & i_0^{-n}\int_0^{2i_0} r^{n-1} \int_{S^{n-1}} \tilde{f}'^2 (n, \frac{1}{i_0}r) \langle \xi, v \rangle^2 d\xi dr \\ & = & i_0^{-n}\frac{\mbox{vol} (S^{n-1})|v|^2}{n}\int_0^{2i_0} r^{n-1}\tilde{f}'^2 (n, \frac{1}{i_0}r) dr \\ & =& \frac{\mbox{vol} (S^{n-1})|v|^2}{n}\int_0^{2}r^{n-1}\tilde{f}'^2 (n,r) dr. \end{eqnarray*} Choose $B_n$ in the defintion of $\beta$ so that $\frac{\mbox{vol} (S^{n-1})}{n}\int_0^{2}r^{n-1}\tilde{f}'^2 (n,r) dr = 1$. Then we will have achieved the following. \begin{lem} $\bar{F}$ is an isometric embedding. \end{lem} With the above choice of $\beta$ we will show that $F$ is an almost isometric embedding when $\Omega$ is not necessarily an Euclidean domain and the second derivative of $F$ is also uniformly bounded. More precisely we have \begin{prop} \label{e-df} For any given $\epsilon_0 >0$, there exists an $r_0>0$ such that \begin{equation} \label{df1} (1+ \epsilon_0)^{-2} |v|^2 \leq \|d_{v_p}F\|^2_{L^2(\Omega)} \leq (1+ \epsilon_0)^{2} |v|^2, \end{equation} for all $v\in T_p\Omega_1$ and $0 < i_0 \leq r_0$. And \begin{equation} \label{df2} \|\nabla^2_{v_p,w_p}F\|^2 _{L^2(\Omega)} \leq C(n,\alpha,Q) i_0^{-2}|v|^2 \cdot |w|^2. \end{equation} \end{prop} \noindent {\em Proof.} By definition \[ \|d_{v_p}F\|^2_{L^2(\Omega)} = \int_{B(p,2i_0)} |d_{v_p}F(q)|^2 dq. \] Now the volume element of the metric $g$ is comparable with Euclidean one. Namely \begin{equation} \label{vv} e^{-Q(i_0)} \mbox{vol}_{R^n} \leq \mbox{vol}_g \leq e^{Q(i_0)} \mbox{vol}_{R^n}. \end{equation} Therefore it suffices to prove that $|d_{v_p}F(q)|$ is close to $|d_{v_p}\bar{F}(q)|$ when $i_0$ is small, which follows from (\ref{dff}) and Proposition~\ref{bh}. (\ref{df2}) also follows from (\ref{dff2}), (\ref{vv}) and Proposition~\ref{bh}. 
\hspace*{\fill}\rule{3mm}{3mm}\quad \\ {\em Proof of} Proposition~\ref{bh}. First we introduce some new functions. Let $\tilde{h}_{s, i_0}(p)$ be the solutions of (\ref{h}) on $B_{i_0^{-2}g}(s,1)$ with respect to the scaled metrics $i_0^{-2}g$, and $\bar{\tilde{h}}_s(p)$ the solution of (\ref{h}) with respect to the Euclidean metric on the Euclidean ball $B(s,1)$. Then \begin{equation} \label{hh} \tilde{h}_{s,i_0}(p)= i_0^{-2} h_s(p), \ \ \bar{\tilde{h}}_s(p) = i_0^{-2} \bar{h}_s(p). \end{equation} (Here the variables $s,p$ are in domain with different metrics for different functions.) Since \begin{equation} \label{normb} \|(B_{i_0^{-2}g}(s,1), i_0^{-2}g)\|_{C^{0,\alpha},1} = \|(B_g(s,i_0), g)\|_{C^{0,\alpha},i_0} \leq Q(i_0) \leq Q(1), \ \mbox{for}\ i_0 \leq 1, \end{equation} for the same reasons as mentioned before for $h_s$, we have the following estimates \begin{equation} \label{el} \| \tilde{h}_{s,i_0}\|_{L_0^{1,2}(B_g(s,1))} \leq C(n,Q(1)) \end{equation} and for any fixed $0<R<1$, \begin{equation} \label{ell} \| \tilde{h}_{s,i_0} \|_{C^{2,\alpha}(B_g(s,R))} \leq C(n, Q(1),R). \end{equation} On the other hand, the hypothesis $\|(B_{i_0^{-2}g}(s,1), i_0^{-2}g)\|_{C^{0,\alpha},1} \leq Q(i_0)$ implies that \[ e^{-Q(i_0)} d_{i_0^{-2}g_0}(p,s) \leq d_{i_0^{-2}g}(p,s) \leq e^{Q(i_0)} d_{i_0^{-2}g_0}(p,s). \] Therefore \[ B_{i_0^{-2}g_0} (s, e^{-Q(i_0)}R) \subset B_{i_0^{-2}g} (s, R) \subset B_{i_0^{-2}g_0}(s, e^{Q(i_0)}R). \] {}From these estimates and the uniqueness of the weak solution $\bar{\tilde h}_s$ it is easy to deduce the following: for each sequence of centers $s_k$ converging to some center $s_0$ and each sequence $i_0(k)$ converging to zero, the corresponding rescaled solutions $\tilde h_{s_k, i_0(k)}$ converge weakly to $\bar{\tilde h}_{s_0}$. Moreover, by the Arzela-Ascoli theorem, they also converge uniformly in $C^1$ on proper compact subsets of $B_{g_0}(s_0, 1)$. This convergence fact along with the smooth dependence of $\bar{\tilde h }_s$ on $s$ then imply that the $\tilde h_{s, i_0}$ converge uniformly with respect to $s$ in $C^1$ on proper compact subsets of $B_{g_0}(s, 1)$ as $i_0$ goes to zero. Consequently, for a fixed $R \in (0,1)$, given any $\epsilon >0$, there is an $r_0 >0$ such that for all $s$ and $p$ with $d_{g_0}(p,s) < R$, if $i_0 \leq r_0$, then \begin{equation} |\tilde{h}_s(p) -\bar{\tilde{h}}_s(p)| < \epsilon. \label{pc1} \end{equation} Similarly, \begin{equation} | \frac{\partial}{\partial p} \tilde{h}_s(p) -\frac{\partial}{\partial p} \bar{\tilde{h}}_s(p)| < \epsilon. \label{pc2} \end{equation} Hence for all $s$ and all $p$ with $d_{g_0}(s,p) \leq Ri_0$, \begin{eqnarray*} |h_s(p) - \bar{h}_s(p)| & < & \epsilon i_0^2, \\ | \frac{\partial}{\partial p} h_s(p) - \frac{\partial}{\partial p} \bar{h}_s(p)| & < &\epsilon i_0. \end{eqnarray*} (\ref{ub}) just follows from (\ref{ell}) and (\ref{hh}). \hspace*{\fill}\rule{3mm}{3mm}\quad \sect{Embedding II} In this section we study the geometry of $F(\Omega_1)$ as a submanifold in $L^2(\Omega)$. We will prove, among other things, two important properties of $F(\Omega_1)$. That is, the induced metric of $F(\Omega_1)$ has uniformly bounded sectional curvature and the embedding $F$ is equivariant. The geometry of $F(\Omega_1)$ is completely determined by the second fundamental form of its embedding into $L^2(\Omega)$, which in turn can be described by the family of orthogonal projections. $P(y):\ L^2(\Omega) \rightarrow T_yF(\Omega_1) \subset L^2(\Omega), y \in F(\Omega^1)$. 
We have \begin{lem} \label{4.1} The sectional curvature of $F(\Omega_1)$ is given by the following formula: \begin{equation} \label{curv} R(z_1, z_2) z_3 = [d_{z_1} P, d_{z_2}P] z_3, \ \ z_1,z_2,z_3 \in T_z F(\Omega_1). \end{equation} \end{lem} \noindent {\em Proof.} Since $P^2 =P$, one has \begin{equation} \label{p2} (d_{z_1}P)P + P (d_{z_1}P) = d_{z_1}P. \end{equation} Let $\nabla$ be the connection on $F(\Omega_1)$ and $d_{z_1}$ the directional derivative on the $L^2$ space. Then \begin{eqnarray*} \nabla_{z_1} z_2 & = & P(d_{z_1}z_2) \\ & = & d_{z_1}z_2 - (1- P)(d_{z_1}(Pz_2)) \\ & = & d_{z_1}z_2 - (1-P) \left[ (d_{z_1}P)z_2 + P (d_{z_1}z_2) \right] \\ & = & d_{z_1}z_2 - (1-P) (d_{z_1}P) (Pz_2) \\ & = & d_{z_1}z_2 - (d_{z_1}P)z_2. \end{eqnarray*} Here we have used (\ref{p2}) in the last equation. Therefore \begin{equation} \label{p1} \nabla_{z_1} = d_{z_1} - (d_{z_1}P). \end{equation} Now formula \ref{curv} follows from (\ref{p1}) and the definition of the curvature tensor. \hspace*{\fill}\rule{3mm}{3mm}\quad \begin{prop} \label{4.2} Let $\alpha_0 = (1+\epsilon_0)^2 C(n,\alpha,Q)$. Here $\epsilon_0, r_0, C$ are the same constants as in Proposition~\ref{e-df}. Then for all $0 < i_0 < r_0$, \begin{equation} \label{ep} \| d_{\dot{y}}P\|_{op} \leq \alpha_0 i_0^{-1} \| \dot{y} \|, \end{equation} \end{prop} \noindent {\em Proof.} Since $\left( 1-P(F(p)) \right) d_{w_p} F = 0$, \[ d_{d_{v_p}F} P \cdot d_{w_p} F = \left( 1-P(F(p)) \right) \nabla^2_{v_p,w_p}F. \] By (\ref{df1}) and (\ref{df2}), $\| d_{\dot{y}}P\|_{op} \leq \alpha_0 i_0^{-1} \| \dot{y} \|$. \hspace*{\fill}\rule{3mm}{3mm}\quad \\ Therefore the metric $\tilde{g} = F^* g_{L^2}$, the metric on $\Omega_1$ obtained by pulling back the $L^2$ metric, has bounded sectional curvatures. To prove the equivariance, we first note: \begin{lem} \label{uh} Let $h_s(p)$ be the function defined in (\ref{h}), and let $\psi: \Omega \rightarrow \Omega'$ be an isometric embedding. Then \begin{equation} h_{\psi (s)} (\psi (p)) = h_s(p). \end{equation} \end{lem} \noindent {\em Proof.} Since equation (\ref{h}) is invariant under isometry, this follows from the uniqueness of solutions to (\ref{h}). \hspace*{\fill}\rule{3mm}{3mm}\quad \\ Let $(\Omega,g)$ be as before and $F: \Omega_1 \rightarrow L^2(\Omega)$ the embedding defined in \S 3. With the above lemma, we can now prove \begin{prop} \label{4.4} If $\psi : (\Omega,g) \rightarrow (\Omega',g')$ is an isometric embedding, then $$\psi : (\Omega, \tilde{g}) \rightarrow (\Omega', \tilde{g}')$$ is also an isometric embedding. \end{prop} \noindent {\em Proof.} First, we assume $\psi$ is actually an isometry. Then \[ F \circ \psi (p) = f_{\psi (p)}, \] where the function \begin{eqnarray*} f_{\psi (p)} (q) & = & \int_\Omega \beta \left( \frac{2n}{i_0^2}h_s(\psi (p))\right) \beta \left( \frac{2n}{i_0^2}h_s (q) \right)\, ds. \\ & = & \int_\Omega \beta \left( \frac{2n}{i_0^2}h_{\psi^{-1}(s)}(p)\right) \beta \left( \frac{2n}{i_0^2}h_{\psi^{-1}(s)} (\psi^{-1}(q)) \right)\, ds. \end{eqnarray*} Here we have used Lemma~\ref{uh}. Since $\psi$ is an isometry, a change of coordinates yields \[ f_{\psi (p)}(q) = f_p (\psi^{-1} (q)). \] It follows then that \[ F \circ \psi = (\psi^{-1})^* \circ F, \] where we have denoted by $(\psi^{-1})^*$ the map on $L^2(\Omega)$ induced by $\psi^{-1}$. Therefore \[ \psi^* F^*g_{L^2} = (F \circ \psi)^* g_{L^2} = F^* ((\psi^{-1})^*)^* g_{L^2} = F^*g_{L^2}. \] This proves the equivariance when $\psi$ is an isometry. 
Since $\Omega, \Omega'$ are both domains of $R^n$, the general statement follows by applying the above to $\psi:\ \Omega \rightarrow \psi (\Omega)$. \hspace*{\fill}\rule{3mm}{3mm}\quad \\ {\it Proof of} Theorem 1.1. This theorem is a consequence of Proposition 2.2, Lemma 4.1, Proposition 4.2 and Proposition 3.3. \hspace*{\fill}\rule{3mm}{3mm}\quad \sect{Proof of Theorem 1.2} We consider $(M,g) \in {\cal M}(n,p,Q)$ and $\Omega = B(0, r) \subset R^n$ with the pull-back metric $\varphi_s^*g$, where $\varphi_s$ is a coordinate map. For convenience, we shall again denote the pull-back metric by $g$. We have the inequality $\|(\Omega, g)\|_{L^{1,p}, r} \leq Q(r)$. We employ the same embedding of $(\Omega, g)$ as before. Thus we use the same functions $h_s \in L_0^{1,2}(\Omega)$, as given by (3.1). But the estimates for $h_s$ are different now. By the $L^p$ elliptic theory we have uniform $L^{2,p}$ estimates for $h_s$ in the interior. A result similar to Proposition 3.1 then holds, namely we have the estimates (3.2), (3.3) and the second one in (3.4), while the first one in (3.4) is replaced by an estimate on the $L^p$-norm of the second order derivative. Now it is clear that our smoothing process produces a metric $\bar g$ on $M$ such that the lifted metrics $\varphi_s^* \bar g$ on $B(0, r/2)$ have uniformly bounded Sobolev constant and uniformly $L^p$-bounded sectional curvatures. Next we apply the Ricci flow to deform $\bar g$. For this purpose, we assume that $M$ is closed. We appeal to the arguments in \cite{dwy}. There, manifolds with a pointwise bound on Ricci curvature and a conjugate radius bound are treated. These conditions are used to show that controlled harmonic coordinates exist on lifted local patches, where the lifting is given by the exponential map. In these coordinates, the Ricci curvature bound then implies an $L^p$-bound on sectional curvatures. In our situation, we do have an $L^p$-bound on sectional curvatures. But the Ricci curvature bound is also used in several other places in \cite{dwy}. Since this bound is not available here, we need to modify the arguments in \cite{dwy}. In the key Proposition 3.1 (uniform short time existence of the Ricci flow with a priori control) in \cite{dwy}, we drop the estimate (3.4) on Ricci curvature. (Note that the conclusion of the proposition without (3.4) still suffices for our purpose.) We claim that the proposition then holds in our new situation. For convenience, we shall call this proposition the ``Key Proposition''. In \cite{dwy}, the proof of the Key Proposition is based on four lemmata: Lemmata 3.2, 3.3, 3.4 and 3.5. Now let's take a look at these lemmata in our new situation. Lemma 3.2 (pointwise estimate for Riemann curvature tensor) holds without change. The proof of Lemma 3.3 ($L^p$-estimate for Riemann curvature tensor) depends on a {\it covering estimate}, i.e. an estimate for certain covering numbers and multiplicities regarding geodesic balls in the lifted patches. In \cite{dwy}, the estimate comes from the Bishop-Gromov covering argument, which depends on a pointwise lower bound for Ricci curvature. Now we do not have such a bound. But we still have a covering estimate, which follows from the properties of our coordinates and the basic control over the pull-back metrics as given by (the modified versions of) Propositions 3.1 and 3.2 (in the present paper). Another ingredient in the proof of Lemma 3.3 is an isometry correspondence between geodesic balls on different lifted patches. Now it is given by Lemma~\ref{hom}. 
(The proof of this correspondence given in \cite{dwy} does not work here.) Thus Lemma 3.3 also holds. Since we have dropped the estimate (3.4) about Ricci curvature in the Key Proposition, Lemma 3.4 is no longer needed. Finally, note that the proof for Lemma 3.5 (estimate of the Sobolev constant) in \cite{dwy} goes by computing the rate of change of the Sobolev constant along the flow. In \cite{dwy}, this rate is controlled by a uniform bound on Ricci curvature, which is not valid here. However, Lemma 3.2 contains an estimate for Ricci curvature at positive time $t$, namely it is dominated by a constant times $t^{-1/2}$. Since the function $t^{-1/2}$ is integrable at $0$, it is clear that the rate of change of the Sobolev constant is still under control without a uniform Ricci curvature bound, and hence Lemma 3.5 carries over. (An alternative way of handling the Sobolev constant is to apply Yang's estimate for it in \cite{y2}, which uses only an $L^p$-bound on Ricci curvature and a positive lower bound on (local) volume. But that is more involved.) We leave it to the reader to formulate precisely the independent result implied by the above proof about short time existence of the Ricci flow.
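As a finite-dimensional illustration of the curvature formula (\ref{curv}) from \S 4, consider the unit sphere $S^{n-1}\subset R^n$, for which $P(y)=I-yy^T$ and $d_zP=-(zy^T+yz^T)$; the commutator $[d_{z_1}P,d_{z_2}P]z_3$ then works out to $\langle z_2,z_3\rangle z_1-\langle z_1,z_3\rangle z_2$, i.e. the familiar constant curvature one tensor of the round sphere. A short NumPy sketch (illustrative only) checking this numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5

y = rng.normal(size=n)
y /= np.linalg.norm(y)                                # point on the unit sphere
tangent = lambda v: v - y * (y @ v)                   # projection onto T_y S^{n-1}
z1, z2, z3 = (tangent(rng.normal(size=n)) for _ in range(3))

dP = lambda z: -(np.outer(z, y) + np.outer(y, z))     # derivative of P(y) = I - y y^T along z

lhs = (dP(z1) @ dP(z2) - dP(z2) @ dP(z1)) @ z3        # [d_{z1}P, d_{z2}P] z3
rhs = (z2 @ z3) * z1 - (z1 @ z3) * z2                 # constant-curvature-one tensor
assert np.allclose(lhs, rhs)
\end{verbatim}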
\section{Introduction and summary} One of the many surprises during the second superstring revolution was the realization that the constructions of $\mathrm{SU}(N)$ instantons on $\mathbb{R}^4$ by Atiyah-Drinfeld-Hitchin-Manin \cite{Atiyah:1978ri} and on asymptotically locally Euclidean (ALE) spaces by Kronheimer-Nakajima \cite{KronheimerNakajima,Bianchi:1996zj} have a physical realization in terms of D$p$-branes probing D$(p{+}4)$-branes on $\mathbb{R}^4$ \cite{Witten:1995gx} and/or ALE spaces \cite{Douglas:1996sw}. There, we have a gauge theory with eight supercharges on D$p$-branes such that its Higgs branch is given by the corresponding instanton moduli spaces: the equations of the ADHM and Kronheimer-Nakajima constructions are the F-term and D-term conditions of the supersymmetric gauge theory. As a variation of this construction, we can consider M5-branes probing the $E_8$ end-of-the-world brane of M-theory, either on $\mathbb{R}^4$ or on ALE spaces. The low-energy worldvolume theory on these M5-branes is a 6d \Nequals1 supersymmetric theory whose Higgs branch is intimately related to the $E_8$ instanton moduli spaces. These theories were studied already in the heyday of the second revolution, see e.g.~\cite{Aspinwall:1997ye}. We did not, however, have many methods to understand their properties back then, since these theories are intrinsically strongly coupled. Therefore we could not say anything new regarding the mathematics of the $E_8$ instanton moduli spaces on $\mathbb{R}^4$ or ALE spaces, for which no constructions analogous to ADHM or Kronheimer-Nakajima are known even today. The situation has changed drastically since then, thanks to our improved understanding of strongly-coupled supersymmetric theories. Among others, we can count the class S construction in four dimensions initiated by \cite{Gaiotto:2009we}, the determination of the chiral ring of the Coulomb branch in three dimensions starting with \cite{Cremonesi:2013lqa}, and a new method to study \Nequals1 theories in six dimensions pioneered by \cite{Heckman:2013pva}. Combining these developments, we believe there is a chance that the physics might shed new light on the mathematics of the structure of the $E_8$ instanton moduli spaces on ALE spaces. The main target of our study in this paper is the 6d \Nequals1 theory on M5-branes probing the $E_8$ 9-brane on the A-type ALE singularity $\mathbb{C}^2/\mathbb{Z}_k$. Such a system can be labeled by the M5-brane charge $Q$ and the asymptotic holonomy $\rho:\mathbb{Z}_k \to E_8$. For some simple choices of $\rho$, the structure of the 6d theory on generic points on its tensor branch was already determined in \cite{DelZotto:2014hpa,Heckman:2015bfa}, which was further extended in \cite{Zafrir:2015rga,Ohmori:2015tka,Hayashi:2015zka}. Our first aim is to determine the tensor branch structure for an \emph{arbitrary} choice of the asymptotic holonomy $\rho$. We give a complete algorithm determining the gauge group and the matter content in terms of $\rho$. Along the way, we encounter a subtle feature: there are two distinct ways to gauge the $\mathfrak{su}(2n+8)$ subalgebra of the $\mathfrak{so}(4n+16)$ flavor symmetry of an $\mathfrak{usp}(2n)$ gauge theory with $N_f=2n+8$ flavors, due to the fact that the outer automorphism of $\mathfrak{so}(4n+16)$ is \emph{not} a symmetry of the latter gauge theory. The Higgs branch $\mathcal{M}_{Q,\rho}$ of our theory $\mathcal{T}_{Q,\rho}^\text{6d}$ is not directly the instanton moduli space. 
In particular, $\mathcal{M}_{Q,\rho}$ has an action of $\mathrm{SU}(k)$, which we do not expect for the instanton moduli space. Rather, by a small generalization of the argument in \cite{Ohmori:2015pua}, we see that the $E_8$ instanton moduli space $\mathcal{M}^\text{inst}_{Q,\rho,\xi}$ of charge $Q$ and asymptotic holonomy $\rho$ on the ALE space $\widetilde{\mathbb{C}^2/\mathbb{Z}_k}$ is given by \begin{equation} \mathcal{M}^\text{inst}_{Q,\rho,\xi} = (\mathcal{M}_{Q,\rho} \times \mathcal{O}_\xi) /\!/\!/ \mathrm{SU}(k) \label{rel} \end{equation} where $\xi=(\xi_\mathbb{C},\xi_\mathbb{R})\in\mathfrak{su}(k)\otimes (\mathbb{C}\oplus \mathbb{R})$ is an element in the Cartan of $\mathfrak{su}(k)$ tensored by $\mathbb{R}^3$ specifying the hyperk\"ahler deformation parameter of the ALE space, $\mathcal{O}_\xi$ is the orbit of $\xi_\mathbb{C}$ in $\mathfrak{su}(k)_\mathbb{C}$ with the hyperk\"ahler metric specified by $\xi_\mathbb{R}$ as in \cite{KronheimerNilpotent}, and the symbol $/\!/\!/$ denotes the hyperk\"ahler quotient construction. This means that the space $\mathcal{M}_{Q,\rho}$ knows the structure of the instanton moduli on the ALE space for arbitrary deformation parameter $\xi$. The existence of such a generating space was conjectured by one of the authors in \cite{Tachikawa:2014qaa}, based on a study of $\mathrm{SO}(8)$ instantons on the ALE spaces. We then study the 4d theory which arises from the $T^2$ compactification of the 6d theory as in \cite{Ohmori:2015pua}. We find that they always correspond to a class S theory of type A, given by a sphere with three punctures. The 3d mirror of its $S^1$ compactification is a star-shaped quiver, whose structure can be deduced from the class S description by the methods of \cite{Benini:2010uu}. We find that they have the form of an \emph{over-extended} $E_8$ quiver. In 3d, the relation \eqref{rel} can be physically implemented by realizing $\mathcal{O}_\xi$ as the Coulomb branch of the $T[\mathrm{SU}(k)]$ theory. Using this, we will find that $\mathcal{M}^\text{inst}_{Q,\rho,\xi}$ is the Higgs branch of an affine $E_8$ quiver where $\xi$ is now the mass parameter of an $\mathrm{SU}(k)$ flavor symmetry. For $\xi=0$ this was already conjectured by mathematicians \cite{Nakajima:2015txa,Nakajima:2015gxa} and by physicists \cite{Cremonesi:2014xha,Mekareeya:2015bla}. \paragraph{Organization of the paper:} The rest of the paper is organized as follows. We start by recalling the geometric data characterizing our system in Sec.~\ref{sec:geometric}. Then in Sec.~\ref{sec:6d}, we provide the algorithm determining the 6d quiver theory in terms of the asymptotic holonomy. In Sec.~\ref{sec:lower}, we discuss its dimensional reduction to 5d, 4d and 3d in turn. In 5d and 4d, we translate the Kac labels to the three Young diagrams characterizing the brane web and the class S description. In 3d, we give the star-shaped quiver. Finally in Sec.~\ref{sec:examples}, we provide many examples illustrating our discussions. \paragraph{Accompanying Mathematica file:} The paper comes with a Mathematica file which implements the algorithm to produce the 6d quiver given the asymptotic $E_8$ holonomy. In addition, it allows the user to determine the 4d class S theory, and compute the anomalies from three different methods, namely the 6d field theory, the M-theoretic inflow, and the 4d class S technique. 
\paragraph{Summary of Notations:} \begin{itemize} \item The asymptotic holonomy $\rho:\mathbb{Z}_k\to E_8$ is given by an element $\vec w\in \mathfrak{e}_8$ in the Cartan subalgebra, or equivalently in terms of the Kac label \begin{equation} \un{n}:=\E{n_1}{n_2}{n_3}{n_4}{n_5}{n_6}{n_{4'}}{n_{2'}}{n_{3'}}, \end{equation} a set of non-negative integers arranged on the affine $E_8$ Dynkin diagram. For more details, see Sec.~\ref{sec2.1}. \item We have closely related quantities $N_\text{inst} $, $N_3$, $n_T $, $\kappa$, and $Q$, which are all essentially the number of M5-branes or equivalently the instanton charge on the ALE space. They all increase by one when we add one M5-brane to the system. Their constant parts are however different. We could have used just one out of them, but any choice would make at least one of the formulas quite unseemly. We therefore decided to keep them and provide a summary here. \begin{itemize} \item The integer $N_\text{inst} $ is defined in terms of the instanton number as \begin{equation} \int_{\widetilde{\mathbb{C}^2/\Gamma}}\mathop{\mathrm{tr}} F\wedge F \propto N_\text{inst} -\frac{\vev{\vec w,\vec w}}{2k}. \end{equation} See \eqref{E8instantonnumber} for details. \item Another integer $N_3$ satisfies $N_3=N_\text{inst} -k$, see \eqref{NprimeN0}, \eqref{NprimeN}. This is useful to parameterize the ranks of groups in the 3d quiver, see \eqref{N3d}. \item Another integer $n_T$ defined by $n_T =N_3+n_1+\cdots+n_6$ is useful to parameterize the class S data, see \eqref{N}. \item The integer $\kappa$ is the number of tensors of the 6d quiver. The difference between $n_T $ and $\kappa$ is determined by the Kac label and is described in the algorithm in Sec.~\ref{algorithm}. \item A rational number $Q$ is the M5-charge which appears in the inflow computation, and satisfies \begin{equation} Q=N_\text{inst} -\frac{\vev{\vec w,\vec w}}{2k}-\frac12(k-\frac1k), \end{equation}see \eqref{M5charge}. \end{itemize} \end{itemize} \section{Geometric preliminaries} \label{sec:geometric} \subsection{Topological data of the instanton configuration} \label{sec2.1} Here we recall the topological data necessary to specify a $G$-instanton on $\mathbb{C}^2/\Gamma$ or its resolution $\widetilde{\mathbb{C}^2/\Gamma}$, where $\Gamma\in \mathrm{SU}(2)$. On $\mathbb{C}^2/\Gamma$, we first need to specify the holonomy at the origin and at the infinity. They determine the representation $\rho_{0,\infty}: \Gamma\to G$, which we consider as a linear action on the complexified adjoint representation $\mathfrak{g}$. On $\widetilde{\mathbb{C}^2/\Gamma}$, we specify the holonomy at infinity $\rho_\infty$. In addition, we need to specify the class in $H^2(\widetilde{\mathbb{C}^2/\Gamma},\pi_1(G))$. This is the first Chern class when $G=\mathrm{U}(N)$ and the second Stiefel-Whitney class when $G=\mathrm{SO}(N)$. Finally we need to specify the instanton number, defined as the integral of $\mathop{\mathrm{tr}} F\wedge F$ over the ALE space. Unless otherwise mentioned, we normalize the trace so that the instanton on $\mathbb{R}^4$ of the smallest positive instanton number satisfies \begin{equation} \int \mathop{\mathrm{tr}} F\wedge F=1. \end{equation} On the ALE space, the instanton number is in general fractional. Our main interest lies in the case $G=E_8$ and $\Gamma=\mathbb{Z}_k$. Since $\pi_1(E_8)$ is trivial, we do not have to specify the class in $H^2$. 
A holonomy $\rho:\mathbb{Z}_k \to E_8$ can be nicely encoded by its Kac label \begin{equation} \un{n}:=\E{n_1}{n_2}{n_3}{n_4}{n_5}{n_6}{n_{4'}}{n_{2'}}{n_{3'}}. \end{equation} introduced in \S 8.6 of Kac's textbook \cite{Kac}. Let us quickly recall how it works. Let the image $g$ of the generator of $\mathbb{Z}_k$ in $E_8$ be \begin{equation} g=e^{2\pi i \vec w/k}\in E_8 \end{equation} where \begin{equation} \vec w=\sum_{i\neq 0} n_i \vec w_i \in \mathfrak{e}_8. \end{equation} where $\vec w_i$ are the fundamental weights of $E_8$. Since $g$ is of order $k$, $n_i$ are integers. We define $n_0$ so that $\sum d_i n_i = k$, where the Dynkin marks $\un{d}$ are given by \begin{equation} \un{d}=\E123456423. \end{equation} It is known that by the Weyl reflections and the shifts, we can arrange $n_i\ge 0$ for all $i$ and then the result is unique. This is the Kac label of the holonomy. The subalgebra of $\mathfrak{e}_8$ left unbroken by the holonomy $\rho$ can be easily read off from its Kac label. Namely, it is given by the subalgebra corresponding to the nodes $i$ of the Dynkin diagram where $n_i=0$, together with an Abelian subalgebra making the total rank $8$. On $\widetilde{\mathbb{C}^2/\mathbb{Z}_k}$, the instanton number modulo one is given by the classical Chern-Simons invariant evaluated on $S^3/\mathbb{Z}_k$ at infinity. One way to compute it is to introduce coordinates on $S^3/\mathbb{Z}_k$ using polar coordinates $\theta,\phi$ on $S^2$ and the angle $\psi$ along the $S^1$ fiber. The connection itself is $\propto \vec w (d\psi + \cdots)$. One finds that \begin{equation} \int\mathop{\mathrm{tr}} F\wedge F = N_\text{inst} - \frac{\vev{\vec w, \vec w}}{2k}\label{E8instantonnumber} \end{equation} where $N_\text{inst} $ is an integer. The hyperk\"ahler dimension of the moduli space is given by the formula \begin{equation} \dim_\mathbb{H} \mathcal{M}_{\widetilde{\mathbb{C}^2/\Gamma},\rho_\infty} = 30 N_\text{inst} - \vev{\vec w,\vec \rho}. \label{geometry-result} \end{equation} \subsection{Dimension of the instanton moduli space} In this subsection we derive the formula \eqref{geometry-result} of the dimension of the moduli space. Those readers who trust the authors can skip this subsection. This computation is of course not new. It is provided here to make this paper more self-contained. The basic tool is the Atiyah-Patodi-Singer index theorem. Its explicit form on the orbifold of $\mathbb{C}^2$ was worked out e.g.~in \cite{Nakajima} for $\Gamma\subset \mathrm{U}(2)$. Here we quote the form used in Kronheimer-Nakajima \cite{KronheimerNakajima} for $\Gamma\subset\mathrm{SU}(2)$. The formula for the orbifold is: \begin{multline} \dim_\mathbb{H} \mathcal{M}_{\mathbb{C}^2/\Gamma,\rho_\infty,\rho_0} = h^\vee(G) (\int\mathop{\mathrm{tr}} F\wedge F ) + \frac{1}{2|\Gamma|}\sum_{\gamma \neq e} \frac{\chi_{\rho_\infty}(\gamma) }{2-\chi_Q(\gamma)} - \frac{1}{2|\Gamma|}\sum_{\gamma \neq e} \frac{\chi_{\rho_0}(\gamma) }{2-\chi_Q(\gamma)}. 
\end{multline} Here, $h^\vee(G)$ is the dual Coxeter number of $G$, and the second and the third terms are the contributions from the $\eta$ invariant of $S^3/\Gamma$ at the asymptotic infinity and at the origin, respectively, and $Q$ is the standard two-dimensional representation of $\Gamma$ from the defining embedding $\Gamma\subset \mathrm{SU}(2)$, On the ALE space $\widetilde{\mathbb{C}^2/\Gamma}$, we have: \begin{equation} \dim_\mathbb{H} \mathcal{M}_{\widetilde{\mathbb{C}^2/\Gamma},\rho_\infty} = h^\vee(G) (\int\mathop{\mathrm{tr}} F\wedge F ) + \frac{1}{2|\Gamma|}\sum_{\gamma \neq e} \frac{\chi_{\rho_\infty}(\gamma) }{2-\chi_Q(\gamma)} - \frac1{24}\dim G \chi_\Gamma \end{equation} where the quantity \begin{equation} \chi_\Gamma:=r_\Gamma +1 -\frac{1}{|\Gamma|} \label{chiGamma} \end{equation} is the Euler number of $\widetilde{\mathbb{C}^2/\Gamma}$ as defined by the integral of the Pontrjagin density. Furthermore, \begin{equation} \frac1{24} (r_\Gamma +1 -\frac{1}{|\Gamma|}) = \frac{1}{2|\Gamma|}\sum_{\gamma \neq e} \frac{1 }{2-\chi_Q(\gamma)}, \end{equation} reflecting the fact that if the holonomy at the origin of an instanton on $\mathbb{C}^2/\Gamma$ is trivial, we can resolve/deform the instanton and the ALE at the same time to be on $\widetilde{\mathbb{C}^2/\Gamma}$. In the end, we find the formula \begin{equation} \dim_\mathbb{H} \mathcal{M}_{\widetilde{\mathbb{C}^2/\Gamma},\rho_\infty} = h^\vee(G) (\int\mathop{\mathrm{tr}} F\wedge F ) + \Delta\eta, \label{intermediate} \end{equation} where \begin{equation} \Delta\eta:=\frac{1}{2|\Gamma|}\sum_{\gamma \neq e} \frac{(\chi_{\rho_\infty}(\gamma) -\dim \mathfrak{g}) }{2-\chi_Q(\gamma)} \end{equation} Let us evaluate this formula when $G=E_8$ with the holonomy $\rho_\infty$ specified by $g=e^{2\pi i \vec w/k}$ with the Kac label $\un{n}$. The eta invariant is \begin{equation} \Delta\eta =\frac{1}{2|\Gamma|}\sum_{\gamma \neq e} \frac{(\chi_{\rho_\infty}(\gamma) -\dim \mathfrak{g}) }{2-\chi_Q(\gamma)} = \frac{1}{2k} \sum_{\vec \alpha:\text{all roots}} \sum_{\gamma\neq 1}\frac{\chi_{\vev{\vec w,\vec \alpha}}(\gamma)-1 }{2-\chi_Q(\gamma)} \end{equation} where \begin{equation} \chi_a(g^j) = e^{2\pi i aj/k}, \qquad \chi_Q(g^j) = 2\cos 2\pi j/k. \end{equation} Now, we note \begin{equation} \frac{1}{2k}\sum_{j=1}^{k-1} \frac{\chi_a(g^j) -1}{2-\chi_Q(g^j)} = - \frac{a(a-k)}{4k} \end{equation} for $a=0,1,\ldots k$. Then \begin{equation} \Delta\eta = 2\sum_{\vec\alpha:\text{positive roots}} -\frac{\vev{\vec \alpha,\vec w} (\vev{\vec \alpha,\vec w}-k) }{4k} \end{equation} since $0\le \vev{\vec\alpha,\vec w}\le k$ for positive roots $\vec\alpha$. Now we use \begin{equation} \sum_{\vec\alpha:\text{positive roots}} \vec\alpha = 2\rho,\qquad \sum_{\vec\alpha:\text{positive roots}} \vev{\vec v_1,\vec\alpha}\vev{\vec\alpha,\vec v_2} = h^\vee \vev{\vec v_1,\vec v_2} \end{equation} and find \begin{equation} \Delta\eta=\frac{h^\vee}{2k} \vev{\vec w,\vec w} - \vev{\vec w,\vec \rho}. \end{equation} To compute the dimension, we now plug in to \eqref{intermediate} the formula for $\Delta\eta$ found just above and the formula for the instanton number \eqref{E8instantonnumber}. The term proportional to $\vev{\vec w,\vec w}$ cancels out, and we indeed have the desired result \eqref{geometry-result}. \section{Six-dimensional description} \label{sec:6d} After these geometrical preliminaries, we move on to the field theoretical analysis. We start with the six-dimensional quiver descriptions. 
As already mentioned in the introduction, for various simple choices of $\rho$, the six-dimensional quivers were already determined in \cite{DelZotto:2014hpa,Heckman:2015bfa,Zafrir:2015rga,Ohmori:2015tka,Hayashi:2015zka}. By a series of trials and errors, and following the principle that the quiver should be determined in terms of the Kac label, the authors found the following algorithm. \subsection{The general structure of the quiver} Our 6d SCFT on the generic points on its tensor branch consists of a collection of $\kappa$ tensors, corresponding to a linear quiver of the form \begin{equation} G_1\times \mathrm{SU}(m_2) \times \mathrm{SU}(m_3)\times \cdots \times \mathrm{SU}(m_\kappa)\times [\mathrm{SU}(k)] \end{equation} where $G_1$ is on the $-1$ curve, the rest is on $-2$ curves, and the final $\mathrm{SU}(k)$ is a flavor symmetry. In the notation of \cite{Heckman:2015bfa}, we have \begin{equation} \overset{G_1}{1} \quad \overset{\mathfrak{su}(m_2)}{2} \quad \overset{\mathfrak{su}(m_3)}{2} \quad \cdots\quad \overset{\mathfrak{su}(m_\kappa)}{2} \quad [\mathrm{SU}(k)]. \end{equation} Below, we slightly abuse the notation and refer by $G_1$ the combination of the group and the non-fundamental hypermultiplets on the $-1$ curve. The choices are: \begin{itemize} \item $G_1=\mathrm{USp}(m_1)$, \item $G_1=\mathrm{SU}(m_1)$ with an antisymmetric hyper, or \item $G_1=\mathrm{SU}(m_1=6)$ with a rank $3$ antisymmetric half-hyper. \end{itemize} We consider the rank $1$ E-string theory as $\mathrm{USp}(0)$, and furthermore, the rank $2$ E-string theory is considered as a $\mathrm{USp}(0)$ connecting to an $\mathrm{SU}(1)$ group. We have $m_1\le m_2\le \cdots \le m_\kappa$, and we define $a_1,\ldots, a_9$ by \begin{equation} a_s = \#\{i \mid m_{i+1}-m_i = s\}. \end{equation} We can reconstruct the whole of $m_i$ from $m_1$, $\kappa$ and $a_1,\ldots a_9$. For example, when the quiver is \begin{equation} \mathrm{SU}(3) \times \mathrm{SU}(9) \times \mathrm{SU}(13) \times \mathrm{SU}(17) \times \mathrm{SU}(18) \times \mathrm{SU}(19) \times \mathrm{SU}(20) \end{equation} we have $a_8=a_7=a_5=a_3=a_2=0, a_6=1, a_4=2, a_1=3$. There are bifundamentals between two consecutive groups in the quiver, and finally fundamental hypers are added such that each group is anomaly free, that is: \begin{itemize} \item $N_f = 2N$ for $\mathrm{SU}(N)$, \item $N_f = N +8$ for $\mathrm{USp}(N)$ or $\mathrm{SU}(N)$ with an antisymmetric hyper, and \item $N_f=15$ for $\mathrm{SU}(6)$ with a rank $3$ antisymmetric half-hyper. \end{itemize} \subsection{The algorithm} \label{algorithm} Now we present the algorithm to determine the structure of the quiver given the Kac label $\un{n}$ and the number $\kappa$ of the groups. Along the way, we also define the quantity $n_T $ which will be used in the following. We will also need the quantity \begin{equation} N_3=n_T -n_1-n_2-n_3-n_4-n_5-n_6.\label{N} \end{equation} The algorithm is implemented in the accompanying Mathematica file, so that the reader can easily try it around. In general we have \begin{equation} a_i=n_i\quad\text{for}\quad i=1,2,3,4,5,6. 
\end{equation} To specify $a_{7,8,9}$, we need to consider various cases as summarized below: \if0 \[ \begin{tikzpicture}[ grow=right, level distance=5cm, level 1/.style={sibling distance=-2cm}, level 2/.style={sibling distance=-1cm}, level 3/.style={sibling distance=-1cm}, edge from parent/.style = {draw, -latex} ] \node{} child{ node {} child{ node{Case 1} edge from parent node[above] {$n'_4-n'_3=$ even} } child{ node{Case 2} edge from parent node[below] {$n'_4-n'_3=$ odd} } edge from parent node[above] {$n'_4\ge n'_3$} } child{ node {} child{ node{Case 4} edge from parent node[above]{$n'_2 < (n'_3-n'_4)/2$} } child{ node{} child{ node{Case 3} edge from parent node[above]{$n'_3-n'_4=$ odd} } child{ node{Case 5} edge from parent node[below]{$n'_3-n'_4=$ even} } edge from parent node[below] {$n'_2 \ge (n'_3-n'_4)/2 $} } edge from parent node[below]{$n'_3\ge n'_4$} }; \end{tikzpicture} \] \fi \[ \begin{cases} n'_4 \ge n'_3 & \longrightarrow\begin{cases} n'_4-n'_3=\text{even} & \longrightarrow\text{Case 1}, \\ n'_4-n'_3=\text{odd} & \longrightarrow\text{Case 2}, \end{cases}\\[2em] n'_3 \ge n'_4 & \longrightarrow\begin{cases} n'_2 < (n'_3-n'_4)/2 & \longrightarrow \text{Case 4}, \\ n'_2 \ge (n'_3-n'_4)/2 & \longrightarrow\begin{cases} n'_3-n'_4=\text{odd} & \longrightarrow\text{Case 3},\\ n'_3-n'_4=\text{even}& \longrightarrow\text{Case 5}. \end{cases} \end{cases} \end{cases} \] For each case, the output of the algorithm is $(a_{7,8,9}, G_1, n_T)$ as shown below: \begin{enumerate} \item $n'_4 \geq n'_3, n'_4 - n'_3 = \text{even}$: \begin{itemize} \item $a_7 = n'_3, a_8 = \frac{n'_4 - n'_3}{2}, a_9 = 0$. \item $G_1=\mathrm{USp}(2 n'_2)$. \item $n_T = \kappa-\frac{n'_4 + n'_3}{2}$. \end{itemize} \item $n'_4 \geq n'_3 + 1, n'_4 - n'_3 = \text{odd}$: \begin{itemize} \item $a_7 = n'_3, a_8 = \frac{n'_4 - n'_3 - 1}{2}, a_9 = 0$. \item $G_1=\mathrm{SU}(2 n'_2 +4)$ group with an antisymmetric hyper. \item $n_T = \kappa-\frac{n'_4 + n'_3 - 1}{2}$. \end{itemize} \item $n'_3 \geq n'_4 + 1, n'_3 - n'_4 = \text{odd}, n'_2 \geq \frac{n'_3 - n'_4 - 1}{2}$: \begin{itemize} \item $a_7 = n'_4, a_8 = \frac{n'_3 - n'_4 - 1}{2}, a_9 = 0$. \item $G_1=\mathrm{SU}(2 n'_2 + n'_4 - n'_3 + 4)$ group with an antisymmetric hyper. \item $n_T = \kappa-\frac{n'_4 + n'_3 - 1}{2}$. \end{itemize} \item $n'_3 > n'_4 + 2 n'_2 + \ell, n'_3 - n'_4 - 2 n'_2 = 3 x + \ell, x \in \mathbb{Z}, \ell=0,1,2$: \begin{itemize} \item $a_7 = n'_4, a_8 = n'_2, a_9 = \frac{n'_3 - n'_4 - 2 n'_2 - \ell}{3}$. \item $G_1$ is \begin{itemize} \item Rank $1$ $E_8$ for $\ell=0$, \item $\mathrm{SU}(3)$ for $\ell=1$, \item $\mathrm{SU}(6)$ with a half-hyper in the rank $3$ antisymmetric for $\ell=2$. \end{itemize} \item $n_T = \kappa-\frac{n'_3 + 2 n'_4 + n'_2 - l}{3}$. \end{itemize} \item $n'_3 \geq n'_4, n'_3 - n'_4 = \text{even}, n'_2 \geq \frac{n'_3 - n'_4}{2}$: \begin{itemize} \item $a_7 = n'_4, a_8 = \frac{n'_3 - n'_4}{2}, a_9 = 0$. \item $G_1=\mathrm{USp}(2 n'_2 + n'_4 - n'_3)$. \item $n_T = \kappa-\frac{n'_4 + n'_3}{2}$. \end{itemize} \end{enumerate} \subsection{A subtlety concerning the 6d $\theta$ angle} \label{sec:subtlety} Note that the quivers produced in Case 5 are the same ones as the ones produced by Case 1, as far as the data we described so far are concerned. This is perfectly fine when $n'_4=n'_3$, since in this case we are just applying the different cases to the same Kac label. 
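Before analyzing this point further, it may be convenient to have the case analysis of \S\ref{algorithm} in executable form. The following Python sketch is our own illustration (the reference implementation remains the accompanying Mathematica file); the Kac label is passed in the ordering $(n_1,\dots,n_6,n_{4'},n_{2'},n_{3'})$, $\kappa$ is the number of tensor multiplets, and the group names are returned as plain strings:
\begin{verbatim}
def quiver_data(kac, kappa):
    n1, n2, n3, n4, n5, n6, n4p, n2p, n3p = kac
    a = dict.fromkeys(range(1, 10), 0)
    for i, v in enumerate((n1, n2, n3, n4, n5, n6), start=1):
        a[i] = v
    if n4p >= n3p:
        if (n4p - n3p) % 2 == 0:                       # Case 1
            a[7], a[8] = n3p, (n4p - n3p) // 2
            G1 = "USp(%d)" % (2 * n2p)
            nT, m1 = kappa - (n4p + n3p) // 2, 2 * n2p
        else:                                          # Case 2
            a[7], a[8] = n3p, (n4p - n3p - 1) // 2
            G1 = "SU(%d) w/ antisym" % (2 * n2p + 4)
            nT, m1 = kappa - (n4p + n3p - 1) // 2, 2 * n2p + 4
    elif 2 * n2p < n3p - n4p:                          # Case 4
        ell = (n3p - n4p - 2 * n2p) % 3
        a[7], a[8], a[9] = n4p, n2p, (n3p - n4p - 2 * n2p - ell) // 3
        G1 = {0: "rank-1 E-string (USp(0))", 1: "SU(3)",
              2: "SU(6) w/ half rank-3 antisym"}[ell]
        nT, m1 = kappa - (n3p + 2 * n4p + n2p - ell) // 3, {0: 0, 1: 3, 2: 6}[ell]
    elif (n3p - n4p) % 2 == 1:                         # Case 3
        a[7], a[8] = n4p, (n3p - n4p - 1) // 2
        G1 = "SU(%d) w/ antisym" % (2 * n2p + n4p - n3p + 4)
        nT, m1 = kappa - (n4p + n3p - 1) // 2, 2 * n2p + n4p - n3p + 4
    else:                                              # Case 5; differs from Case 1 by the
        a[7], a[8] = n4p, (n3p - n4p) // 2             #   6d theta angle discussed below
        G1 = "USp(%d)" % (2 * n2p + n4p - n3p)
        nT, m1 = kappa - (n4p + n3p) // 2, 2 * n2p + n4p - n3p
    # reconstruct the SU(m_i) ranks: larger jumps sit closer to the -1 curve
    diffs = [s for s in range(9, 0, -1) for _ in range(a[s])]
    diffs += [0] * max(0, kappa - 1 - len(diffs))
    m = [m1]
    for s in diffs:
        m.append(m[-1] + s)
    return a, G1, nT, m

# example: k = 2, Kac label E000000010  ->  USp(2) x SU(2)^(kappa-1) x [SU(2)]
print(quiver_data((0, 0, 0, 0, 0, 0, 0, 1, 0), kappa=5))
\end{verbatim}
With this in hand, let us return to the comparison of Cases 1 and 5.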
However, when $n'_4\neq n'_3$, or equivalently when $a_8\neq 0$, the resulting quivers should be subtly different, since e.g.~they reduce to different 4d class S theories and 3d star-shaped quivers. We argue that the difference between them is how one embeds the $\mathrm{SU}(2N+8)$ group into the $\mathrm{SO}(4N+16)$ global symmetry group of $\mathrm{USp}(2N)$. A relatively simple case is the following. Let us first consider the cases when $n'_4 = 2, n'_3 = 0 , n'_2 = 0$ versus $n'_4 = 0, n'_3 = 2, n'_2 = 1$, with the rest of the labels being zero, $n_{1,\ldots,6}=0$. Both theories have the form of a long $\mathrm{SU}(8)$ quiver gauging an $\mathrm{SU}(8)$ subgroup of the rank $1$ $E_8$ theory. The two differ by the embedding of $\mathrm{SU}(8)$ inside $E_8$ and in fact have different global symmetries. To see this, consider embedding $\mathrm{SU}(8)$ inside $\mathrm{SO}(16) \subset E_8$. The adjoint of $E_8$ decomposes under its $\mathrm{SO}(16)$ maximal subgroup as $\bold{248}\rightarrow \bold{120} + \bold{128}$. Now consider decomposing $\mathrm{SO}(16)$ to its $\mathrm{U}(1)\times \mathrm{SU}(8)$ maximal subgroup. Under this embedding the spinors of $\mathrm{SO}(16)$ decompose to the rank $x$ antisymmetric tensors of $\mathrm{SU}(8)$ for $x=0,2,4,6,8$ for one spinor and $x=1,3,5,7$ for the other. However, only one spinor appears in the adjoint of $E_8$, and therefore there are two different embeddings of $\mathrm{SU}(8)$ inside $E_8$. In one of them the $\bold{128}$ contains gauge invariant contributions leading to the larger global symmetry. The general case corresponds to the situation where $\mathrm{SU}(2N+8)$ is embedded in $\mathrm{SO}(4N+16)$. There is no distinction in the perturbative sector of the theory. However, the theory possesses instanton strings. The ones for $\mathrm{USp}$ groups will be in a chiral spinor of the $\mathrm{SO}$ group and so will decompose differently depending on the embedding. This then leads to theories with distinct spectra of string excitations. Also note that this only occurs if the entire $\mathrm{SO}$ symmetry is gauged, leaving only a $\mathrm{U}(1)$ commutant. If we gauge an $\mathrm{SU}(x) \subset \mathrm{SO}(2x)\subset \mathrm{SO}(4N+16)$ with $x<2N+8$, then the chiral spinor of $\mathrm{SO}(4N+16)$ decomposes to non-chiral spinors of $\mathrm{SO}(2x)$ and therefore there is a single embedding. This agrees with the fact that the cases coincide when $a_8=0$. We can understand this distinction from the existence of the discrete $\theta$ angle in 6d, due to the fact that $\pi_5(\mathrm{USp}(2N)) = \mathbb{Z}_2$. Suppose now that the $\mathrm{USp}$ group has $2n$ half-hypermultiplets in the fundamental. Classically it has an $\mathrm{O}(2n)$ flavor symmetry, but the parity part flips the discrete $\theta$ angle. Therefore the flavor symmetry is actually $\mathfrak{so}(2n)$. The two embeddings of $\mathfrak{su}(n)$ into $\mathfrak{so}(2n)$ are related exactly by the parity part of $\mathrm{O}(2n)$, and therefore are inequivalent. The F-theoretical interpretation of these two inequivalent embeddings seems to be unknown. It would be interesting to work it out.\footnote{The authors thank D. R. Morrison for the correspondence on this point.} Note that an analogous phenomenon exists in $5d$, where given a pure $\mathrm{USp}$ group there are two distinct $5d$ SCFTs associated with this theory, differing by the instanton spectrum of the $5d$ gauge theory.
This is related to the existence of a $\mathbb{Z}_2$ valued $\theta$ angle originating from the fact that $\pi_4(\mathrm{USp}(2N)) = \mathbb{Z}_2$. \subsection{Anomalies and the inflow} The anomaly of these 6d SCFTs can be computed from their quiver description using the technique of \cite{Ohmori:2014kda,Intriligator:2014eaa}. We should be able to match it to the anomaly computed from the inflow using the M-theory description. The inflow computations of M5-branes probing the $E_8$ end-of-the-world brane and of M5-branes probing the $\mathbb{C}^2/\mathbb{Z}_k$ singularity were given in \cite{Ohmori:2014pca} and in an Appendix of \cite{Ohmori:2014kda}, respectively. We can combine the two computations into one, and one finds the following contribution to the anomaly, excluding the most subtle contribution from the codimension-5 singularity where the $\mathbb{C}^2/\mathbb{Z}_k$ singularity hits the end-of-the-world brane: \begin{multline} I^\text{naive}_\text{inflow}(Q)= \frac{Q^3 k^2 }{6} c_2(R)^2 - \frac{ Q^2 k}{2} c_2(R) I_4 + \\ Q(\frac12 I_4^2-I_8) + (I_4-Q k c_2(R)) J_4 - \frac12 I^\text{vec}(\mathrm{SU}(k)) \end{multline} where $Q$ is the M5-charge of the configuration, \begin{align} I_8 &= \frac1{48}(p_2(N)+p_2(T)-\frac14(p_1(N)-p_1(T))^2),\\ I_4 &= \frac14(p_1(T) -2c_2(R)) ,\\ J_4 &= \frac1{48}(k-\frac1k)(4 c_2(R)+p_1(T)) + \frac14\mathop{\mathrm{tr}} F^2_{\mathrm{SU}(k)}. \end{align} Here $I_8$ comes from the M-theory interaction $\int C\wedge I_8$, $I_4$ appears in the boundary condition $G=I_4$ at the $E_8$ wall, and $J_4$ is the interaction on the $\mathbb{C}^2/\mathbb{Z}_k$ singular locus $\int C\wedge J_4$. In this section the normalization of $\mathop{\mathrm{tr}}$ is as in \cite{Ohmori:2014pca}. Let $\un{n}$ be the Kac label, and let $\vec w=\sum \vec w_i n_i$ be the corresponding weight vector. By performing computations for many choices of $\un{n}$, we find that \begin{equation} I_\text{quiver}(\un{n},N_3)=I^\text{naive}_\text{inflow}(Q)+c(\un{n}) \end{equation} where $c(\un{n})$ is a constant depending on the Kac label $\un{n}$ but independent of $N_3$ and \begin{equation} Q=N_3+\frac12(k+\frac1k-\frac{\vev{\vec w,\vec w}}{k}).\label{M5charge} \end{equation} Recall that the instanton number as defined by the integral of $\mathop{\mathrm{tr}} F\wedge F$ was given by \begin{equation} \int\mathop{\mathrm{tr}} F\wedge F =N_\text{inst} -\frac{\vev{\vec w,\vec w}}{2k}, \end{equation} see \eqref{E8instantonnumber}, and that $k-1/k$ is the Euler number of $\widetilde{\mathbb{C}^2/\Gamma}$, or equivalently the integral of $-p_1/4$ there, see \eqref{chiGamma}. Then, assuming that \begin{equation} N_\text{inst} =N_3+k,\label{NprimeN0} \end{equation} we can rewrite the effective M5-brane charge $Q$ as \begin{equation} Q=\int\mathop{\mathrm{tr}} F\wedge F + \int_{\widetilde{\mathbb{C}^2/\mathbb{Z}_k}} \frac{p_1}4 \end{equation} which is what we expect from the curvature coupling on the $E_8$ end-of-the-world brane. The authors made a guess of the formula for $c(\un{n})$ by trial and error. It has the form \begin{equation} \begin{split} c(\un{n})=&\frac1k (P_0(\un{n})+P_2(\un{n})+P_4(\un{n})+P_6(\un{n}))+\frac12 I_\text{free vector} \end{split} \end{equation} where \begin{equation} I_\text{free vector}=\frac1{5760}(-240 c_2(R)^2 - 120 c_2(R) p_1(T) - 7 p_1(T)^2 + 4 p_2(T)) \end{equation} is the anomaly polynomial of a free vector multiplet and $P_i(\un{n})$ is a homogeneous polynomial of degree $i$ in the $n_i$'s.
Those polynomials are identified as \begin{align} P_0 =&\frac1{384} (-88 c_2(R)^2 + 32 c_2(R) p_1(T) - 5 p_1(T)^2 + 4 p_2(T))\\ \begin{split} P_2 =& \frac1{11520} k^2\left(2512 c_2(R)^2 - 760 c_2(R) p_1(T) + 157 p_1(T)^2 - 124 p_2(T)\right)\\ &+\frac1{5760}\left(15 \langle \vec{w},\vec{w}\rangle-k\langle \vec{w},\vec{\rho}\rangle\right)\left(112 c_2(R)^2 - 40 c_2(R) p_1(T) + 7 p_1(T)^2 - 4 p_2(T)\right) \end{split}\\ P_4 =&-\frac1{288}\left( 9 \langle \vec{w},\vec{w}\rangle^2 + 15 k^2 \langle \vec{w},\vec{w}\rangle - 2 k^4 - k \sum_{\vec\alpha\in\Delta^+}\langle \vec{w},\vec\alpha\rangle^3\right)\left(4 c_2(R)^2 - c_2(R) p_1(T)\right) \\ P_6 =&\frac1{240} \left(5 \langle \vec{w},\vec{w}\rangle^3 + 15 k^2 \langle \vec{w},\vec{w}\rangle^2 - 5 k^4 \langle \vec{w},\vec{w}\rangle + k^6 - k\sum_{\vec\alpha\in\Delta^+}\langle \vec{w},\vec\alpha\rangle^5\right) c_2(R)^2, \end{align} where $\Delta^+$ is the set of positive roots of $E_8$. The authors have not been able to determine how this formula come from the correct anomaly inflow calculation. It would be interesting to understand it. \section{Lower dimensional incarnations} \label{sec:lower} \subsection{Five-dimensional brane-web description} \label{sec:5d} We can reduce the 6d theory on a circle to 5d. Roughly speaking, there are two different types of reductions. For example, starting from the E-string theory, one can obtain $\mathrm{SU}(2)$ theory with eight flavors in one way, or the 5d SCFT with $E_8$ flavor symmetry in the other way. \paragraph{First reduction:} Keeping the radius of the circle non-zero the low-energy 5d theory is sometimes a 5d gauge theory. Specifically, the class of 6d theories we are considering can be realized by a brane construction involving a system of NS5-branes and D6-branes in the presence of an O8$^-$-plane \cite{Brunner:1997gf,Hanany:1997gh}. Performing T-duality on this system results in a brane configuration involving NS5-branes and D6-branes in the presence of an O8$^-$-plane. Alternatively, the system can also be described as D4-branes immersed in an O8$^-$-plane and D8-branes, in the presence of a $\mathbb{C}^2/\mathbb{Z}_k$ singularity \cite{BergmanGomez}. Either way, the system can sometimes be deformed so as to describe a 5d gauge theory. Specifically, when compactifying we have a choice of the value of the radius as well as the freedom to turn on holonomies for the global symmetries. These then become mass parameters in the 5d theory. In specific ranges of these parameters the 6d theory may flow at low-energy to a 5d quiver gauge theory with the coupling constants of the gauge theory identified with the mass deformations. In general, a given 6d SCFT may have several different low-energy 5d gauge theory descriptions depending on the specific deformations used. Various 5d descriptions of 6d theories, including the type we are interested in, were studied in \cite{Zafrir:2015rga,Hayashi:2015zka,HayashiKimLee,HayashiKimLee2}. We will not consider this problem here. \paragraph{Second reduction:} Instead we shall take the limit of zero radius. In this case we argue that the 6d theory flows in the IR to a 5d SCFT. Furthermore, we claim that the 5d SCFT can be readily described in terms of the integer $N$ and the Kac label $\un{n}$. To find the 5d theory, we first write down the 6d quiver following the algorithm presented in the last section. We realize this 6d quiver in type IIA using O8-planes, D8-branes, D6-branes and NS5-branes as in \cite{Brunner:1997gf,Hanany:1997gh}. 
We then compactify it on $S^1$, T-dualize it to type IIB, and manipulate the branes. We will describe the procedure in slightly more detail below. The result can be conveniently represented by a brane web, which has a star-shaped form with a group of $(1,0)$, $(0,1)$ and $(1,1)$ 5-branes all intersecting at a point. The 5-branes end on the appropriate 7-branes, where several 5-branes may end on the same 7-brane. Specifying the configuration then is done by giving the distribution of 5-branes on the 7-branes. This is conveniently done by a Young diagram where each column represents a 7-brane, and the number of boxes in it represents the number of 5-branes ending on it. The three Young diagrams for the SCFTs we are considering are given by: \begin{equation} \begin{aligned} Y_1=(&n_T-n_6,\\ &n_T-n_6-n_5,\\ &n_T-n_6-n_5-n_4,\\ &n_T-n_6-n_5-n_4-n_3,\\ &n_T-n_6-n_5-n_4-n_3-n_2,\\ &n_T-n_6-n_5-n_4-n_3-n_2-n_1,\\ &1^k),\\ Y_2=(&2n_T+2n_{4'}+n_{2'}+n_{3'},\\ &2n_T+n_{4'}+n_{2'}+n_{3'},\\ &2n_T+n_{4'}+n_{3'}),\\ Y_3=(&3n_T+2n_{4'}+n_{2'}+2n_{3'},\\ &3n_T+2n_{4'}+n_{2'}+n_{3'}). \end{aligned} \label{classSdata} \end{equation} \paragraph{More detail of the second reduction:} For cases 1, 2 and 3, these results can be derived using the standard techniques. But there are some issues for cases 4 and 5. Case 4 naively does not have a brane construction of the type considered in \cite{Hanany:1997gh} so this procedure appears to be inapplicable in this case. However, a conjecture for the 5d theories that lift to these types of 6d SCFTs was given in \cite{Zafrir:2015rga,Hayashi:2015zka}, and we can use this conjecture to fill in this step for case 4. This leaves case 5. We can ask how the 6d $\theta$ angle appears in the brane construction. In fact a similar issue arises in the analogous 5d system: D5-branes suspended between NS5-branes in the presence of an O7$^-$-plane. In that case it was observed in \cite{BergmanZafrir} that accounting for the 5d $\theta$ angle seems to necessitate the introduction of two variants of the O7$^-$-plane, where one is an $SL(2,\mathbb{Z})$ T-transform of the other. This in particular means that they differ by their decomposition into a pair of 7-branes. Note that the distinction between the two cases vanishes when there are D7-branes on the O7$^-$-plane. This becomes clear after we decompose the O7$^-$-plane into 7-branes, which can be moved through the monodromy lines of the 7-branes, which will change them by a T-transformation. This of course agrees with the unphysical nature of the 5d $\theta$ angle once flavors are present. There should be a similar distinction for the O8$^-$-plane, and this can account for the apparent 6d $\theta$ angle we observe. We will not pursue this here. However, once we perform T-duality we end up with a system with two O7$^-$-planes, and we expect that we can accommodate this in the observed difference in O7$^-$-planes. We have a discrete choice for each O7$^-$-plane leading to four possibilities. However, we are free to perform a global T-transformation. Since all the external branes are D7-branes, this will lead us to the same system, save for changing the types of both orientifolds. Thus we conclude that there are only two distinct choices: the same or differing types. These cases are expected to differ only when there are no 7-branes on the O7$^-$-planes, and thus no D8-brane on the original O8$^-$-plane. This exactly agrees with the two cases, which coincide once $a_8 = 0$.
We indeed find different 5d theories for these two choices, where the former is identified with case 1 while the latter with case 5. In this manner we can apply this procedure also to case 5. \subsection{Four-dimensional class S description} \label{sec:4d} We can compactify on an additional circle to 4d. Using the results of \cite{BBT}, it is straightforward to write the 4d theory. It is just an $A$-type class S theory given by the same set of Young diagrams as the 5d description, given above in \eqref{classSdata}. In fact it is also possible to motivate this class S description with the Young diagrams \eqref{classSdata} directly from the 4d description, and then use the preceding discussion to connect the 6d quiver data to the Kac labels. We start from the observation that the class S theory whose Young diagrams are \eqref{classSdata} can be thought of as generated by modifying the Young diagrams of the rank $N$ $E_8$ theory, which is given by a class S theory of type $\mathrm{SU}(6N)$ with Young diagrams $Y_1=(N^6)$, $Y_2=(2N^3)$, $Y_3=(3N^2)$. First the 4d theory needs to have the $\mathrm{SU}(k)$ global symmetry, coming from the $\mathbb{C}^2/\mathbb{Z}_k$ singularity. This is given by the $k$ boxes attached to the Young diagram $Y_1$ of the $E_8$ theory. That this is the correct way to account for it can be seen by comparing anomalies. For the type of 6d theories we are considering, there is a result due to \cite{Ohmori:2015pua} that allows for the computation of the central charges of the 4d theory resulting from the compactification of the 6d theory, starting from the anomaly polynomial of the latter. Furthermore the anomaly polynomial of the 6d theories of the type we considered was studied in \cite{Zafrir:2015rga}. When applied to our case we find that $k^\text{4d}_{\mathrm{SU}(k)} = 2k + 12$, independent of the details of the Kac label. This agrees with the anomaly of the class S theory. In addition to the $\mathrm{SU}(k)$ we also have the commutant of the orbifold in $E_8$ as a global symmetry, which depends on the Kac labels. The $E_8$ global symmetry is accommodated by the Young diagram structure of the starting $E_8$ SCFT, so it is natural to expect that modifying this will give the required global symmetry and take into account the Kac labels. The global symmetry which is manifest in the class S construction is $\mathrm{SU}(2)\times \mathrm{SU}(3) \times \mathrm{SU}(6)$ which can be identified with the three legs of the affine Dynkin diagram. This becomes more apparent once we compactify to 3d and consider the mirror dual, which we discuss more extensively in the next subsection. The point is that we can associate a node in the legs of the affine $E_8$ Dynkin diagram roughly with the difference between neighboring columns. The central node can be associated with the difference between the sum of the first columns of the three Young diagrams and the total number of boxes in any of them. When that difference is zero, we get the $E_8$ theory. It is now natural to associate that difference with the Kac label of the corresponding node. By the Kac prescription, this ensures that we get the correct global symmetry. This leads to the conjectured form. There is one ambiguity in determining the total number of boxes which is related to the rank of the initial $E_8$ theory. This should be related to the number of tensors in 6d, but we need to determine the exact mapping. For this we use the relation outlined in the previous sections between the 6d and 4d theories.
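As a quick illustration, the class S data \eqref{classSdata} can be assembled directly from the Kac label; the following Python sketch (again ours, purely for illustration, with the Kac label ordered as before) builds the three Young diagrams and checks that they contain the same total number of boxes, which holds automatically because $k=\sum_i d_i n_i$:
\begin{verbatim}
def class_S_young_diagrams(kac, nT, k):
    n1, n2, n3, n4, n5, n6, n4p, n2p, n3p = kac
    Y1 = [nT - n6, nT - n6 - n5, nT - n6 - n5 - n4,
          nT - n6 - n5 - n4 - n3, nT - n6 - n5 - n4 - n3 - n2,
          nT - n6 - n5 - n4 - n3 - n2 - n1] + [1] * k
    Y2 = [2 * nT + 2 * n4p + n2p + n3p,
          2 * nT + n4p + n2p + n3p,
          2 * nT + n4p + n3p]
    Y3 = [3 * nT + 2 * n4p + n2p + 2 * n3p,
          3 * nT + 2 * n4p + n2p + n3p]
    assert len({sum(Y1), sum(Y2), sum(Y3)}) == 1   # same number of boxes
    return Y1, Y2, Y3

# k = 2 with only n2' = 1 (the SO(16) case discussed below), n_T = N = 4:
print(class_S_young_diagrams((0, 0, 0, 0, 0, 0, 0, 1, 0), nT=4, k=2))
# -> ([4, 4, 4, 4, 4, 4, 1, 1], [9, 9, 8], [13, 13]),
#    i.e. the punctures [N^6, 1^2], [(2N+1)^2, 2N], [(3N+1)^2]
\end{verbatim}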
We can perform various consistency checks of this proposal. One check is to compare anomalies. We already mentioned that these can be computed from the 6d anomaly polynomial, and compare the $\mathrm{SU}(k)$ central charge. We can also compare the central charges $a$ and $c$, and the dimension of the Coulomb branch. These can then be calculated from the 6d quiver on one side, and from the class S theory on the other, in terms of the Kac labels and $n_T$. For the computations on the class S side, we use the standard results of \cite{Gaiotto:2009we,Chacaltana:2010ks} and reviewed e.g.~in \cite{Tachikawa:2015bga}. The results themselves are rather complicated and not very illuminating, but we do find that all three objects agree between the two calculations. Any interested reader can play around with the Mathematica file which comes with this paper to confirm this point. \subsection{Three-dimensional star-shaped quiver description} \label{sec:3d} Let us now move on to the three dimensions. We translate the Young diagrams $Y_{1,2,3}$ given in \eqref{classSdata} which specify the class S punctures to the 3d mirror description using the results of \cite{Benini:2010uu}. We find that the resulting theory is given by the quiver gauge theory \begin{equation} \hat X:= {\Node{}{1}}-{\Node{}{2}} -\cdots-\Node{}{k} -\node{}{\tilde N_1}-\node{}{\tilde N_2}-\node{}{\tilde N_3}-\node{}{\tilde N_4}-\node{}{\tilde N_5}-\node{\cver{}{\tilde N_{3'}}}{\tilde N_6}-\node{}{\tilde N_{4'}}-\node{}{\tilde N_{2'}}~. \end{equation} Here, all nodes are unitary with the diagonal $\mathrm{U}(1)$ removed, and the gray and the black blobs are used as a visual aid for the affine Dynkin part and the over-extended part. The ranks of the groups are specified by the vector \begin{equation} \un{\tilde N}=N_3 \un{d}+\sum n_i \un{q_i} \label{N3d} \end{equation} where \begin{align} \un{q_1}&=\E123456423, & \un{q_2}&=\E223456423 ,& \un{q_3}&=\E333456423, \\ \un{q_4}&=\E444456423 ,& \un{q_5}&=\E555556423 , & \un{q_6}&=\E666666423, \\ \un{q_{4'}}&=\E444444212, & \un{q_{2'}}&=\E222222101,& \un{q_{3'}}&=\E333333211 \end{align} which is in fact given by a uniform formula \begin{equation} (q_{i})_j = d_i d_j - \vev{\vec w_i,\vec w_j} \end{equation} where $\vec w_i$ is the weight vector for the node $i\neq 1$ and $\vec w_1=0$. Another characterization of $\un{\tilde N}$ is \begin{equation} C \un{\tilde N}= \E{k}00000000 + \un{n} \label{kn} \end{equation} where $C$ is the affine Cartan matrix of $E_8$; this determines $\un{\tilde N}$ mod $\un{d}$. The dimension of the Coulomb branch $\hat\mathcal{M}$ is then \begin{equation} \dim_\mathbb{H} \hat\mathcal{M}=30(N_3 + k) - \vev{\vec w,\vec\rho} +\frac{k(k+1)}2-1 \end{equation} where $\vec w=\sum n_i \vec w_i$ is the Kac label as a weight vector and $\vec\rho=\sum_i \vec w_i$ is the Weyl vector. The Coulomb branch $\hat\mathcal{M}$ of this system $\hat X$ is closely related to the instanton moduli space $\mathcal{M}^\text{inst}$ on the ALE space $\widetilde{\mathbb{C}^2/\mathbb{Z}_k}$. To explain the relation, let us first recall that the resolution and deformation parameters of the ALE space can be specified by a parameter \begin{equation} \xi=(\xi_\mathbb{C},\xi_\mathbb{R})\in \mathfrak{su}(k)\otimes(\mathbb{C}\oplus \mathbb{R}) \end{equation} which takes values in the Cartan of $\mathfrak{su}(k)$ tensored by $\mathbb{R}^3$. 
We now need an auxiliary hyperk\"ahler space $\mathcal{O}_{\xi}$, which is the $\mathrm{SU}(k)_\mathbb{C}$ orbit of $\xi_\mathbb{C}$ in $\mathfrak{su}(k)$ with the hyperk\"ahler metric specified by $\xi_\mathbb{R}$. Equivalently, $\mathcal{O}_\xi$ is the Coulomb/Higgs branch of the $T[\mathrm{SU}(k)]$ theory whose quiver realization is given by \begin{equation} T[\mathrm{SU}(k)]= \Node{}{1}-\Node{}{2}-\cdots-\Node{}{k-1}-\sqnode{}{k} \end{equation} where the rightmost square node is a flavor symmetry and $\xi$ is the $\mathrm{SU}(2)_R$ triplet of mass parameters associated to it. We can now state the relation between $\hat\mathcal{M}$ and $\mathcal{M}^\text{inst}$ by slightly modifying an argument given in \cite{Ohmori:2015pua}: \begin{equation} \mathcal{M}^\text{inst}=(\hat\mathcal{M} \times \mathcal{O}_{\xi} )/\!/\!/ \mathrm{SU}(k) \label{cM}. \end{equation} This relation can be understood as follows. The resolution/deformation parameter $\xi$ of the ALE space can be identified with the scalar vacuum expectation values of the 7d super $\mathrm{SU}(k)$ Yang-Mills theory supported on the M-theory singularity $\mathbb{C}^2/\mathbb{Z}_k$. The 6d SCFT on the M5-branes at the intersection of the $E_8$ wall and the $\mathbb{C}^2/\mathbb{Z}_k$ singularity couples to this 7d super Yang-Mills, via the standard coupling where the triplet moment map field of the 6d theory is identified with the limiting value of the triplet of scalars of the 7d bulk. The resulting hyperk\"ahler manifold is then given by the hyperk\"ahler reduction as in \eqref{cM}. Now, our system $\hat X$ can also be written using the theory $\tilde X$ \begin{equation} \tilde X:=\sqnode{}{k} -\node{}{\tilde N_1}-\node{}{\tilde N_2}-\node{}{\tilde N_3}-\node{}{\tilde N_4}-\node{}{\tilde N_5}-\node{\cver{}{\tilde N_{3'}}}{\tilde N_6}-\node{}{\tilde N_{4'}}-\node{}{\tilde N_{2'}}~. \end{equation} Indeed, \begin{equation} \hat X= (T[\mathrm{SU}(k)] \times \tilde X)/\!/\!/ \mathrm{SU}(k) \end{equation} where the symbol $T /\!/\!/ G$ means that we gauge the flavor symmetry $G$ of the theory $T$. So the theory $X$ whose Coulomb branch is $\mathcal{M}^\text{inst}$ in \eqref{cM} is given by\begin{equation} X= (T[\mathrm{SU}(k)] \times T[\mathrm{SU}(k)] \times \tilde X)/\!/\!/ (\mathrm{SU}(k) \times \mathrm{SU}(k)) \end{equation} But two copies of $T[\mathrm{SU}(k)]$ gauged by a diagonal $\mathrm{SU}(k)$ are known to disappear, since $T[\mathrm{SU}(k)]$ is the domain wall of 4d \Nequals4 SYM implementing the S-duality \cite{Gaiotto:2008ak}. So we have, in fact,\begin{equation} X=\tilde X \end{equation} and the ALE deformation parameter $\xi$ is now the mass parameter of the $\mathrm{SU}(k)$ flavor symmetry. We have \begin{equation} \dim_\mathbb{H} \mathcal{M}^\text{inst}= 30(N_3+k)-\vev{\vec w,\vec \rho}.\label{field-result} \end{equation} This nicely agrees with the computation from the geometry \eqref{geometry-result} by the identification \begin{equation} N_\text{inst} =N_3+k.\label{NprimeN} \end{equation} This relation between $N_3$ and $N_\text{inst} $ is also consistent with what we found from the inflow, see \eqref{NprimeN0}. We note that the theory $X=\tilde X$ is the theory whose Higgs branch is the $\mathrm{U}(k)$ instanton moduli on $\mathbb{C}^2/\Gamma_{E_8}$ \cite{KronheimerNakajima,Douglas:1996sw}. For this reason, the Coulomb branch, at least when the mass parameter is zero, has been conjectured to be the $E_8$ instanton moduli space on the singular space $\mathbb{C}^2/\mathbb{Z}_k$ by various people.
This follows, at least in a rough form, from the string duality: consider the theory on M2-branes on $\mathbb{C}^2/\mathbb{Z}_k \times \mathbb{C}^2/\Gamma_{E_8}$. It has two supersymmetric branches of vacua, one describing $E_8$ instantons on $\mathbb{C}^2/\mathbb{Z}_k$ and another describing $\mathrm{U}(k)$ instantons on $\mathbb{C}^2/\Gamma_{E_8}$. If the former is the Coulomb branch, then the latter is the Higgs branch. Note that we arrived at the quiver gauge theory $X=\tilde X$ from a totally different method, by first studying the 6d quiver and then by reducing on successively on circles. Therefore, this agreement can be thought of as an overall consistency check of our construction. Now, applying \cite{KronheimerNakajima} and \cite{Douglas:1996sw} in our case, we see that the $\mathrm{U}(k)$ holonomy at infinity of $\mathbb{C}^2/\Gamma_{E_8}$ is trivial, and the first Chern class $c_1$ satisfies $\int_{E_i} c_1 = n_i$ which can be read off from \eqref{kn}. It would be interesting to understand from M-theory point of view why the first Chern class on the $\mathbb{C}^2/\Gamma_{E_8}$ side is given by the asymptotic $E_8$ holonomy on the $\mathbb{C}^2/\mathbb{Z}_k$. It seems important for the full story to consider a more general case where $\mathbb{C}^2/\mathbb{Z}_k$ is replaced by the multicenter Taub-NUT space, see e.g.~\cite{Bullimore:2015lsa,Nakajima:2015gxa}. \section{Examples} \label{sec:examples} Let us demonstrate the above general statement in various examples. In this section, we take $N$ to be the number of tensor multiplets in the $6d$ theory, which was denoted by $\kappa$ in the other sections. \subsection{The case of $k=2$} There are three possibilities. We label the cases with the Kac label $\un{n}$ and the group $H\subset E_8$ left unbroken by the Kac label. The choice $k=2$ is somewhat special, since the ALE space $\widetilde{\mathbb{C}^2/\mathbb{Z}_2}$, also known as the Eguchi-Hanson space, has an exceptional isometry $\mathrm{SU}(2)$. Then the generic flavor symmetry of the 6d SCFT should be $H\times \mathrm{SU}(2)^2$, where one $\mathrm{SU}(2)$ comes from the 7d gauge field on the singularity and another $\mathrm{SU}(2)$ comes from the isometry. \begin{enumerate} \item The first case is \begin{equation} \un{n}=\E200000000, \qquad H=E_8. \end{equation} The corresponding $6d$ theory is \begin{equation}} \newcommand{\ee}{\end{equation} \label{6dE8inst} [E_8] \, \, \, \, 1 \, \, \, \, \overset{\mathfrak{su}(1)}{2} \, \,\, \, \underset{\left[N_f=1 \right]}{\overset{\mathfrak{su}(2)}{2}} \, \,\, \, \underbrace{\overset{\mathfrak{su}(2)}{2} \quad \cdots\quad\overset{\mathfrak{su}(2)}{2}}_{N-3}\quad [\mathrm{SU}(2)]. \ee The $T^3$ reduction of this theory gives the following $3d$ ${\cal N}=4$ theory: \begin{equation}} \newcommand{\ee}{\end{equation} \label{compT2S16dE8inst} {\Node{}{1}}-{\Node{}{2}} -\node{}{N}-\node{}{2N}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N}~. \ee \item The second case is \begin{equation} \un{n}=\E010000000, \qquad H=E_7\times \mathrm{SU}(2). \end{equation} The corresponding $6d$ theory is \begin{equation}} \newcommand{\ee}{\end{equation} [E_7] \,\,\,\, 1 \,\,\,\, \underset{[N_f =2]}{\overset{\mathfrak{su}(2)}{2}} \,\,\,\, \underbrace{\overset{\mathfrak{su}(2)}{2} \,\,\,\, \cdots \,\,\,\, \overset{\mathfrak{su}(2)}{2}}_{N-2} \,\,\,\, [\mathrm{SU}(2)] \label{6dE7} \ee where the number of $\mathfrak{su}(2)$ gauge groups in the quiver is $N-1$. 
The Higgs branch dimension of the UV fixed point of this theory is $29 N +4 + 4(N-1) -3(N-1) = 30 N+3$. The mirror of the $T^3$ compactification of this theory is \begin{equation}} \newcommand{\ee}{\end{equation} \label{quivcompB3E7} \Node{}{1}-\Node{}{2}- \node{}{N+1}-\node{}{2N}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N}~, \ee The Coulomb branch dimension, which is the sum of the rank of the gauge groups minus one, is indeed $30N+3$. \item The third case is \begin{equation} \un{n}=\E000000010, \qquad H=\mathrm{SO}(16). \end{equation} The corresponding $6d$ theory is \begin{equation}} \newcommand{\ee}{\end{equation} \label{SO16k2} [\mathrm{SO}(16)] \,\,\,\, \overset{\mathfrak{sp}(1)}{1} \,\,\,\, \underbrace{\overset{\mathfrak{su}(2)}{2} \,\,\,\, \overset{\mathfrak{su}(2)}{2} \,\,\,\, \cdots \,\,\,\, \overset{\mathfrak{su}(2)}{2}}_{N-1} \,\,\,\, [\mathrm{SU}(2)] \ee where the number of $\mathrm{SU}(2)$ gauge groups associated with the $(-2)$ curves is $N-1$. The Higgs branch dimension of the UV fixed point of this theory is $29 N + 16 + 4 + 4 (N - 1) - 3 - 3 (N - 1) = 30N+16$. The mirror of the $T^3$ compactification of this theory is \begin{equation}} \newcommand{\ee}{\end{equation} \label{3dquivSO16k2} \Node{}{1}-\Node{}{2}-\node{}{N+2}-\node{}{2N+2}-\node{}{3N+2}-\node{}{4N+2}-\node{}{5N+2}-\node{\cver{}{3N+1}}{6N+2}-\node{}{4N+1}-\node{}{2N}. \ee The Coulomb branch dimension, which is the sum of the rank of the gauge groups minus one, is indeed $30N+16$. This is consistent with Figure 45 of \cite{Zafrir:2015rga}, namely the $T^2$ compactification of \eref{SO16k2} yields the class ${\cal S}$ theory whose Gaiotto curve is a sphere with punctures: \begin{equation}} \newcommand{\ee}{\end{equation} [(3N+1)^2], \qquad [(2N+1)^2,2N], \qquad [N^6,1^2]~. \ee \end{enumerate} Now let us comment on the flavor symmetry from the point of view of the 6d quiver. Since an $\mathrm{SU}(2)$-$\mathrm{SU}(2)$ bifundamental has an $\mathrm{SU}(2)$ flavor symmetry, the three 6d quivers presented above have order $N$ copies of $\mathrm{SU}(2)$ symmetries on the generic points of the tensor branch. In fact the same issue already appears in the case of $N$ M5-branes probing the $\mathbb{C}^2/\mathbb{Z}_2$ singularity, which has the quiver \begin{equation} [\mathrm{SU}(2)] \quad \underbrace{ \overset{\mathfrak{su}(2)}{2}\quad \cdots\quad \overset{\mathfrak{su}(2)}{2} }_{N}\quad [\mathrm{SU}(2)] \end{equation} which naively has too many $\mathrm{SU}(2)$ flavor symmetries. The issue can be resolved by recalling the fact derived in Appendix A of \cite{Ohmori:2015pia} that the basic 6d SCFT whose quiver on the tensor branch is given by $\mathrm{SU}(2)$ with $N_f=4$ with a naive $\mathrm{SO}(8)$ symmetry, only has an $\mathrm{SO}(7)$ symmetry under which the flavors transform in the spin representation. In the quiver representation of the same theory as \begin{equation} [\mathrm{SU}(2)_1]\quad \overset{\mathfrak{su}(2)}{2} \quad [\mathrm{SU}(2)_2],\label{Z2} \end{equation} this means the following: regard the bifundamental hypermultiplets on the left and on the right of the gauge group as the trifundamental half-hypermultiplets. At the quiver level there are therefore the flavor symmetry $\mathrm{SU}(2)_1\times \mathrm{SU}(2)_1'\times \mathrm{SU}(2)_2\times \mathrm{SU}(2)_2' \subset \mathrm{SO}(8)$. Under the $\mathrm{SO}(7)$ symmetry which is the flavor symmetry of the SCFT, only the diagonal subgroup of $\mathrm{SU}(2)_1'$ and $\mathrm{SU}(2)_2'$ survives. 
Applying this argument at every $\mathfrak{su}(2)$ node in \eqref{6dE8inst}, \eqref{6dE7}, \eqref{SO16k2}, and \eqref{Z2}, we see that the number of $\mathrm{SU}(2)$ flavor symmetries is reduced appropriately. There are also some interesting special cases with enhanced flavor symmetries when $N$ is small: \begin{enumerate} \item $N=2$, $H=E_8$. In this case the quiver \eqref{6dE8inst} degenerates to \begin{equation} [E_8] \, \, \, \, 1 \, \, \, \, \overset{\mathfrak{su}(1)}{2} \, \,\, \, \underset{\left[N_f=1 \right]} {[\mathrm{SU}(2)]} \end{equation} which is just the rank-2 E-string theory with three decoupled hypermultiplets. The 3d quiver in this case is \eqref{compT2S16dE8inst} for $N=2$: \begin{equation} {\Node{}{1}}-{\Node{}{2}} -\node{}{2}-\node{}{4}-\node{}{6}-\node{}{8}-\node{}{10}-\node{\cver{}{6}}{12}-\node{}{8}-\node{}{4}~. \end{equation} Its Coulomb branch is \begin{equation}} \newcommand{\ee}{\end{equation} \label{CoulHE8} \mathbb{H}^3 \times (\text{the reduced moduli space of 2 $E_8$ instantons on $\mathbb{C}^2$}) \ee and we indeed see the same decoupled structure. The explanation from the perspective of the Coulomb branch operators will be described below. \item $N=3$ with $H=E_8$. The 6d quiver is \begin{equation} [E_8] \, \, \, \, 1 \, \, \, \, \overset{\mathfrak{su}(1)}{2} \, \,\, \, \underset{\left[N_f=1 \right]}{\overset{\mathfrak{su}(2)}{2}} \, \,\, \, [\mathrm{SU}(2)]. \end{equation} The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-\node{}{3}-\node{}{6}-\node{}{9}-\node{}{12}-\node{}{15}-\node{\cver{}{9}}{18}-\node{}{12}-\node{}{6} \ee The flavor symmetry is enhanced to $G_2 \times E_8$. The explanation from the perspective of the Coulomb branch operators will also be described below. \item $N=2$, $H= E_7 \times \mathrm{SU}(2)$. The 6d quiver for this case reduces to \begin{equation} \label{E7SU2N2k2} [E_7] \,\,\,\, 1 \,\,\,\, \underset{[N_f=2]}{\overset{\mathfrak{su}(2)}{2}} \,\,\,\, [\mathrm{SU}(2)]. \end{equation} On the tensor branch, there is an $\mathrm{SO}(8)$ symmetry acting on the four flavors of the $\mathrm{SU}(2)$ gauge group. In the SCFT it is known that there is only $\mathrm{SO}(7)$. The total symmetry is then $\mathrm{SO}(7)\times E_7$. In fact this 6d theory is the $(E_7, \mathrm{SO}(7))$ minimal conformal matter~\cite{DelZotto:2014hpa}, which describes ``half M5-branes'' on the $E_7$ singularity. The 3d quiver in this case is \eqref{quivcompB3E7} for $N=2$: \begin{equation} \Node{}{1}-\Node{}{2}- \node{}{3}-\node{}{4}-\node{}{6}-\node{}{8}-\node{}{10}-\node{\cver{}{6}}{12}-\node{}{8}-\node{}{4}~. \end{equation} This theory is the mirror of the $S^1$ reduction of the class ${\cal S}$ theory whose Gaiotto curve is a sphere with punctures \begin{equation}} \newcommand{\ee}{\end{equation} \label{classShalfM5onE7} [2^4, 1^4], \qquad [6^2], \qquad [4^3]~. \ee In \cite{Zafrir:2015rga,Ohmori:2015pua} the $T^2$ compactification was also identified with a class ${\cal S}$ theory of the $E_6$ type associated with the sphere with punctures $0$, $2A_1$ and $E_6(a_1)$. For consistency, these two class S theories should in fact be the same. Let us compute the central charges of \eref{classShalfM5onE7}. We find that the effective numbers of hypermultiplets and vector multiplets are $n_H= 112$ and $n_V = 49$, respectively. Thus, \begin{equation}} \newcommand{\ee}{\end{equation} a = \frac{1}{24}( 5 n_V+n_H) = \frac{119}{8}~, \qquad c = \frac{1}{12}(2n_V + n_H) = \frac{35}{2}~.
\ee This agrees with $a$ and $c$ of the aforementioned class ${\cal S}$ theory of the $E_6$ type; see (7.1) of \cite{Ohmori:2015pua}. \end{enumerate} \subsection{Enhanced flavor symmetries from 3d quivers} In fact the symmetry enhancement of each of the three cases above can be generalized to other over-extended Dynkin quivers in 3d, namely: \begin{enumerate} \item For the quiver consisting of a tail ${\Node{}{1}}-{\Node{}{2}}$ attached to the affine Dynkin diagram of type $\mathfrak{g}$ with gauge groups being unitary groups of the ranks given by 2 times the dual Coxeter labels, the Coulomb branch moduli space is $\mathbb{H}^3 \times \widetilde{{\cal M}}_{2, \mathfrak{g}}$, where $\widetilde{{\cal M}}_{2, \mathfrak{g}}$ denotes the reduced two-instanton moduli space of group $\mathfrak{g}$ on $\mathbb{C}^2$. For example, the Coulomb branch of the quiver \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}} -{\node{}{2}}= {\node{}{2}} \ee is $\mathbb{H}^3 \times \widetilde{{\cal M}}_{2,\,\, \mathfrak{su}(2)}$, and the Coulomb branch of the quiver \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}} -{\node{}{2}}-\underset{\uer{}{2}}{\node{\cver{}{2}}{4}}-\node{}{2} \ee is $\mathbb{H}^3 \times \widetilde{{\cal M}}_{2, \,\, \mathfrak{so}(8)}$. \item For the quiver consisting of a tail ${\Node{}{1}}-{\Node{}{2}}$ attached to the affine Dynkin diagram of type $\mathfrak{g}$ with gauge groups being unitary groups of the ranks given by 3 times the dual Coxeter labels, the Coulomb branch moduli space has a symmetry $G_2 \times \mathfrak{g}$. For example, the Coulomb branch of the quiver \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}} -{\node{}{3}}= {\node{}{3}} \ee has a symmetry $G_2 \times \mathrm{SU}(2)$, and the Coulomb branch of the quiver \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}} -{\node{}{3}}-\underset{\uer{}{3}}{\node{\cver{}{3}}{6}}-\node{}{3} \ee has a symmetry $G_2 \times \mathrm{SO}(8)$. \item For the quiver consisting of a tail ${\Node{}{1}}-{\Node{}{2}}$ attached to the affine Dynkin diagram of type $\mathfrak{g}$ with the affine node being $U(3)$ and other gauge groups being unitary groups of the ranks given by 2 times the dual Coxeter labels, the Coulomb branch has a symmetry $\mathrm{SO}(7) \times \tilde{\mathfrak{g}}$, where $\tilde{\mathfrak{g}}$ is the commutant of $\mathfrak{su}(2)$ in $\mathfrak{g}$. For example, the Coulomb branch of the following quiver \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\node{}{3}}-{\node{}{4}}-\node{\overset{\cver{}{2}}{\cver{}{4}}}{6}-{\node{}{4}}-{\node{}{2}} \ee has a symmetry $\mathrm{SO}(7) \times \mathrm{SU}(6)$, where $\mathrm{SU}(6)$ is the commutant of $\mathrm{SU}(2)$ in $E_6$. \end{enumerate} In each of the above examples, the quiver contains of a balanced affine Dynkin quiver diagram as a subquiver. If we consider only this subquiver, the R-charges of the monopole operators in this theory vanish, and hence this subquiver is indeed a bad theory. By attaching a quiver tail ${\Node{}{1}}-{\Node{}{2}}-\cdots-{\Node{}{k}}$ to such a subquiver, the total quiver becomes good or ugly.\footnote{See also \cite{Gaiotto:2012uq} for a related consideration from the $4d$ point of view.} We would like to consider the contribution of this quiver tail to the Coulomb branch of the total quiver. 
\begin{enumerate} \item For this case, the node $\node{}{2}$, which is the affine node in the affine Dynkin diagram, is over-balanced in the sense of \cite{Gaiotto:2008ak}. Following \cite{Gaiotto:2008ak}, we can split the quiver into two parts, namely ${\Node{}{1}}-{\Node{}{2}} -\node{}{2}$ and the rest of the Dynkin diagram. The R-charge of the monopole operators from the subquiver ${\Node{}{1}}-{\Node{}{2}} -\node{}{2}$ receives the contribution from the hypermultiplets and vector multiplets in the way described in \cite{Gaiotto:2008ak}, except that there is no contribution from the vector multiplet of the rightmost node $\node{}{2}$, since this was cancelled inside the affine Dynkin quiver. The contribution from the subquiver is therefore the same as that of the quiver ${\Node{}{1}}-{\Node{}{2}} -\node{\bigcap}{2}$, where $\cap$ denotes an adjoint hypermultiplet of the $\mathrm{U}(2)$ rightmost node. The Coulomb branch of ${\Node{}{1}}-{\Node{}{2}} -\node{\bigcap}{2}$ contains 3 free hypermultiplets, which can be seen from the monopole operators with $\mathrm{SU}(2)_R$-spin $1/2$. This explains the $\mathbb{H}^3$ factor in \eref{CoulHE8}. The reduced moduli space of two $E_8$ instantons on $\mathbb{C}^2$ can be realised as in \cite{Cremonesi:2014xha}. \item Similarly, for this case, the total quiver can be split into ${\Node{}{1}}-{\Node{}{2}} -\node{}{3}$ and the rest of the Dynkin diagram. The contribution to the R-charge of the monopole operators from the subquiver ${\Node{}{1}}-{\Node{}{2}} -\node{}{3}$ can be realised from the quiver ${\Node{}{1}}-{\Node{}{2}} -\node{\bigcap}{3}$, where $\cap$ denotes an adjoint hypermultiplet of the $\mathrm{U}(3)$ rightmost node\footnote{The authors thank S. Cremonesi for this argument.}. Indeed, it was pointed out in section 3.3.2 of \cite{Cremonesi:2014vla} that the Coulomb branch of the latter model has a $G_2$ symmetry. (Note that the corresponding $4d$ class S theory had been studied in \cite{Gaiotto:2012uq}. The $G_2$ symmetry on the Higgs branch of such a theory had also been pointed out in that reference.) This therefore explains the $G_2$ symmetry in case 3. The $E_8$ symmetry follows from the Dynkin subquiver. \item Finally, for this case, \,\, $\node{}{4}$\,\, is the unbalanced node in the quiver. There are two contributions to the Coulomb branch operators with $\mathrm{SU}(2)_R$-spin $1$. One contribution can be realised using the quiver ${\Node{}{1}}-{\Node{}{2}}-{\node{}{3}} -\node{\bigcap}{4}$ in a similar fashion to the above discussion. This quiver has a Coulomb branch symmetry $\mathrm{SU}(4)$ and thus gives $15$ operators with $\mathrm{SU}(2)_R$-spin $1$ in the adjoint representation of $\mathrm{SU}(4)$. The other contribution can be seen as follows. Since the node ${\node{}{3}}$, which was originally a part of the affine Dynkin subquiver, now belongs to the tail ${\Node{}{1}}-{\Node{}{2}}-{\node{}{3}} -\node{}{4}$, we also need to take into account the contribution that arises from the removal of this node from such an affine Dynkin diagram. The second contribution thus comes from considering ${\Node{}{1}}-{\Node{}{2}}-\node{\bigcap}{4}$. There are $6$ Coulomb branch operators with $\mathrm{SU}(2)_R$-spin $1$ in the latter. Therefore, we have in total $15+6=21$ operators with $\mathrm{SU}(2)_R$-spin $1$; this explains the enhancement to the $\mathrm{SO}(7)$ symmetry. 
The remaining symmetry is thus the commutant of $\mathrm{SU}(2)$, which arises from node ${\node{}{3}}$, in the original symmetry associated with the affine Dynkin diagram. \end{enumerate} \subsection{The case of $k=4$} There are ten possibilities. The F-theory quiver for the $6d$ theories are listed on Page 73 of \cite{Heckman:2015bfa}. Here are the mirrors of the $T^3$ compactification of them. \begin{enumerate} \item The first case is \begin{equation} \un{n}=\E400000000, \qquad H=E_8. \end{equation} The 6d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} [E_8]\,\, 1 \,\, \overset{\mathfrak{su}(1)}2 \,\, \overset{\mathfrak{su}(2)}2 \,\,\overset{\mathfrak{su}(3)}2 \,\, \overbrace{\underset{[N_f =1]}{\overset{\mathfrak{su}(4)}2} \,\, \cdots\,\,\overset{\mathfrak{su}(4)}2}^{N-4} \,\, [\mathrm{SU}(4)] \ee and the 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N}-\node{}{2N}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N}. \ee \item The second case is \begin{equation} \un{n}=\E210000000, \qquad H=E_7\times \mathrm{U}(1) \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [E_7]\,\, 1 \,\,\underset{[N_f=1]}{\overset{\mathfrak{su}(2)}2} \,\,\overset{\mathfrak{su}(3)}2 \,\, \overbrace{\underset{[N_f=1]}{\overset{\mathfrak{su}(4)}2} \,\, ... \,\, \overset{\mathfrak{su}(4)}2}^{N-3} \,\, [\mathrm{SU}(4)]. \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29N + 2 + 6 + 12 + 4 + 16 (N - 3) - 3 - 8 - 15 (N - 3) = 30N+10~. \ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+1}-\node{}{2N}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N} \ee and the dimension of the Coulomb branch is $30N+10$. \item The third case is \begin{equation} \un{n}=\E200000010, \qquad H=\mathrm{SO}(14)\times \mathrm{U}(1) \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [\mathrm{SO}(14)]\,\, \overset{\mathfrak{sp}(1)}1 \,\,\overset{\mathfrak{su}(3)}2 \,\,\overbrace{\underset{[N_f=1]}{\overset{\mathfrak{su}(4)}2} \,\, ... \overset{\mathfrak{su}(4)}2}^{N-2} \,\, [\mathrm{SU}(4)]. \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29N + 14 + 6 + 12 + 4 + 16 (N - 2) - 3 - 8 - 15 (N - 2) = 30N+23~. \ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+2}-\node{}{2N+2}-\node{}{3N+2}-\node{}{4N+2}-\node{}{5N+2}-\node{\cver{}{3N+1}}{6N+2}-\node{}{4N+1}-\node{}{2N} \ee and the Coulomb branch dimension is $30N+23$. \item The fourth case is \begin{equation} \un{n}=\E020000000, \qquad H=E_7\times \mathrm{SU}(2). \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [E_7]\,\, 1 \,\,\overset{\mathfrak{su}(2)}{2} \,\,\overbrace{\underset{[\mathrm{SU}(2)]}{\overset{\mathfrak{su}(4)}2} \,\, ... \overset{\mathfrak{su}(4)}2}^{N-2} \,\, [\mathrm{SU}(4)]. \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N+ 8 + 8 + 16 (N - 2) - 3 - 15 (N - 2) = 30N+11~. 
\ee The 3d mirror is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+2}-\node{}{2N}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N} \ee The Coulomb branch dimension is $30N+11$. \item The fifth case is \begin{equation} \un{n}=\E000000020, \qquad H=\mathrm{SO}(16) \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [\mathrm{SO}(16)]\,\, \overset{\mathfrak{sp}(2)}1 \,\,\overbrace{\overset{\mathfrak{su}(4)}2 \,\, ... \overset{\mathfrak{su}(4)}2}^{N-1} \,\, [\mathrm{SU}(4)] \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N + 32 + 16 + 16 (N - 1) - 10 - 15 (N - 1) = 30N +37~. \ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+4}-\node{}{2N+4}-\node{}{3N+4}-\node{}{4N+4}-\node{}{5N+4}-\node{\cver{}{3N+2}}{6N+4}-\node{}{4N+2}-\node{}{2N} \ee and the Coulomb branch dimension is $30N+37$. \item The sixth case is\begin{equation} \un{n}=\E010000010, \qquad H=\mathrm{SO}(12)\times\mathrm{SU}(2)\times \mathrm{U}(1) \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [\mathrm{SO}(12)]\,\, \overset{\mathfrak{sp}(1)}1 \,\,\overbrace{\overset{\mathfrak{su}(4)}{\underset{[\mathrm{SU}(2)]}2} \,\, ... \overset{\mathfrak{su}(4)}2}^{N-1} \,\, [\mathrm{SU}(4)] \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N + 12 + 8 + 8 + 16 (N - 1) - 3 - 15 (N - 1) = 30N +24~. \ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+3}-\node{}{2N+2}-\node{}{3N+2}-\node{}{4N+2}-\node{}{5N+2}-\node{\cver{}{3N+1}}{6N+2}-\node{}{4N+1}-\node{}{2N} \ee The Coulomb branch dimension is $30N+24$. \item The seventh case is \begin{equation} \un{n}=\E101000000, \qquad H=E_6 \times \mathrm{SU}(2) \times \mathrm{U}(1). \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [E_6]\,\, 1 \,\,\overset{\mathfrak{su}(3)}{\underset{[\mathrm{SU}(2)]}2} \,\,\overbrace{\underset{[N_f=1]}{\overset{\mathfrak{su}(4)}2} \,\, ... \overset{\mathfrak{su}(4)}2}^{N-2} \,\, [\mathrm{SU}(4)]. \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N + 6 + 12 + 4 + 16 (N - 2) - 8 - 15 (N - 2) = 30N +12~. \ee The 3d mirror is \begin{equation}} \newcommand{\ee}{\end{equation} \label{30Nplus12} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+2}-\node{}{2N+1}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N} \ee and the Coulomb branch dimension is $30N+12$. \item The eighth case is \begin{equation} \un{n}=\E100000001,\qquad H=\mathrm{SU}(8)\times\mathrm{U}(1) \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [\mathrm{SU}(8)]\,\, \overset{\mathfrak{su}(3)}1 \,\, \overbrace{\underset{[N_f=1]}{\overset{\mathfrak{su}(4)}2} \,\, ... \overset{\mathfrak{su}(4)}2}^{N-1} \,\, [\mathrm{SU}(4)]. \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N + 24 + 12 + 4 + 16 (N - 1) - 8 - 15 (N - 1) = 30N +31~. 
\ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+3}-\node{}{2N+3}-\node{}{3N+3}-\node{}{4N+3}-\node{}{5N+3}-\node{\cver{}{3N+1}}{6N+3}-\node{}{4N+2}-\node{}{2N+1} \ee and the Coulomb branch dimension is $30N+31$. \item The ninth case is \begin{equation} \un{n}=\E000100000,\qquad H=\mathrm{SO}(10)\times\mathrm{SU}(4) \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [\mathrm{SO}(10)]\,\, 1 \,\,\overbrace{\overset{\mathfrak{su}(4)}{\underset{[\mathrm{SU}(4)]}2} \,\, ... \overset{\mathfrak{su}(4)}2}^{N-1} \,\, [\mathrm{SU}(4)]. \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N + 16 + 16 (N - 1) - 15 (N - 1) = 30N+15~. \ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+3}-\node{}{2N+2}-\node{}{3N+1}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N} \ee and the dimension of the Coulomb branch is $30N+15$. \item The tenth and final case is \begin{equation} \un{n}=\E000000100,\qquad H=\mathrm{SU}(8)\times\mathrm{SU}(2), \end{equation} with the 6d quiver \begin{equation}} \newcommand{\ee}{\end{equation} [\mathrm{SU}(8)]\,\, \overset{\mathfrak{su}(4)}{\underset{\text{[antisym]}}1} \,\,\overbrace{\overset{\mathfrak{su}(4)}2 \,\, ... \overset{\mathfrak{su}(4)}2}^{N-1} \,\, [\mathrm{SU}(4)] \ee The dimension of the SCFT Higgs branch is \begin{equation}} \newcommand{\ee}{\end{equation} 29 N + 32 + 6 + 16 N - 15 N = 30N+38~. \ee The 3d quiver is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}-{\Node{}{3}} -{\Node{}{4}}-\node{}{N+4}-\node{}{2N+4}-\node{}{3N+4}-\node{}{4N+4}-\node{}{5N+4}-\node{\cver{}{3N+2}}{6N+4}-\node{}{4N+2}-\node{}{2N+1} \ee and the dimension of the Coulomb branch is $30N+38$. \end{enumerate} \subsection{Theories differing by the $6d$ $\theta$ angle} In this subsection we look at the $4d$ and $3d$ theories generated from $6d$ SCFTs differing by the choice of $6d$ $\theta$ angle. The first case where this possibility occurs is for $k=8$, where the two choices are given by Kac labels $n'_3=2, n'_2=1$ for one and $n'_4=2$ for the other with the rest zero. These can be generalized to $k=2l+8$ with Kac labels $n'_3=2, n'_2=1+l$ for one and $n'_4=2, n'_2=l$ for the other with the rest zero. The $6d$ quiver in both cases is given by: \begin{equation}} \newcommand{\ee}{\end{equation} \overset{\mathfrak{usp}(2l)}1 \,\,\overbrace{\overset{\mathfrak{su}(2l+8)}{\underset{[\mathrm{SU}(8)]}2} \,\, ... \overset{\mathfrak{su}(2l+8)}2}^{N-1} \,\, [\mathrm{SU}(2l+8)] \ee where we identify the case $n'_4=2, n'_2=l$ with $\theta=0$ and $n'_3=2, n'_2=1+l$ with $\theta=\pi$. The associated $4d$ theories are different for the two cases. In the $\theta=0$ case we associate the class S theory given by: \begin{equation}} \newcommand{\ee}{\end{equation} [(N-1)^6, 1^{2l+8}], \quad [2N+l+2, 2N+l,2N], \quad [(3N+l+1)^2]~, \ee while the $\theta=\pi$ case is associated with: \begin{equation}} \newcommand{\ee}{\end{equation} [(N-1)^6, 1^{2l+8}], \quad [(2N+l+1)^2,2N], \quad [3N+l+2,3N+l]~. \ee The $3d$ quivers are: \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}- ...
-{\Node{}{2l+7}}-{\Node{}{2l+8}} -\node{}{N+2l+7}-\node{}{2N+2l+6}-\node{}{3N+2l+5}-\node{}{4N+2l+4}-\node{}{5N+2l+3}-\node{\cver{}{3N+l+1}}{6N+2l+2}-\node{}{4N+l}-\node{}{2N} , \ee for the $\theta=0$ case, and \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}}-{\Node{}{2}}- ... -{\Node{}{2l+7}}-{\Node{}{2l+8}} -\node{}{N+2l+7}-\node{}{2N+2l+6}-\node{}{3N+2l+5}-\node{}{4N+2l+4}-\node{}{5N+2l+3}-\node{\cver{}{3N+l}}{6N+2l+2}-\node{}{4N+l+1}-\node{}{2N} , \ee for the $\theta=\pi$ case. We can now inquire as to how these theories differ from one another. In the $l=0$ case they differ already at the level of the global symmetry, where the $\theta=0$ case has an $\mathrm{SU}(8)^2 \times \mathrm{SU}(2) \times \mathrm{U}(1)$ global symmetry while the $\theta=\pi$ case has an $\mathrm{SU}(8)^2 \times \mathrm{U}(1)^2$ global symmetry. In this case we have an $\mathrm{SU}(8)$ gauging of $E_8$ and the two choices differ by their commutant inside $E_8$. We note that this difference is in accordance with the symmetry expected from the Kac labels. When $l>0$ the symmetries of the two theories agree. We can calculate the $4d$ anomalies of the two theories and find that all of them agree between the two theories. Again, this is consistent with our interpretation, as the $4d$ anomalies can be computed from their $6d$ counterparts, which in turn are independent of the $\theta$ angle. From our $6d$ interpretation we expect the two to differ slightly in their operator spectrum. Particularly, the $\theta$ angle should affect the USp gauge group instanton strings, changing their charges under the global and gauge symmetries. Upon compactification to lower dimensions these should map to local operators. We can observe this from the $3d$ quivers. We get a tower of monopole operators from every node. The basic monopole operator from the balanced nodes leads to enhancement of symmetry. We also have a basic monopole operator from the unbalanced nodes. These provide operators with higher R-charges, and we can read off their R-charges and non-abelian global symmetry charges from the quiver. We have three unbalanced nodes. Two of them give the same contribution in both theories: one operator of $\mathrm{SU}(2)_R$ spin $\frac{N}{2}$ in the bifundamental of the $\mathrm{SU}(2l+8)\times \mathrm{SU}(8)$ global symmetry, and one operator of $\mathrm{SU}(2)_R$ spin $2$ in the $\bold{28}$ of the $\mathrm{SU}(8)$ global symmetry. These can be readily identified with gauge invariants in the $6d$ quiver, where the former is the one made from $N-2$ $\mathrm{SU}(2l+8)\times \mathrm{SU}(2l+8)$ bifundamentals and the flavors, and the latter is made from two $\mathrm{SU}(8)$ flavors and the $\mathrm{USp}(2l)\times \mathrm{SU}(2l+8)$ bifundamental. The last one differs slightly between the two theories. In the $\theta=0$ case it is a flavor singlet with $\mathrm{SU}(2)_R$ spin $\frac{l+2}{2}$. Particularly for $l=0$ this gives the conserved current enhancing the $\mathrm{U}(1)$ to $\mathrm{SU}(2)$. In the $\theta=\pi$ case, however, it is in the $\bold{8}$ of $\mathrm{SU}(8)$ with $\mathrm{SU}(2)_R$ spin $\frac{l+3}{2}$. We can interpret these states as coming from the $\mathrm{USp}$ gauge group instanton strings wrapped on the circle. These are in the spinor of $\mathrm{SO}(4l+16)$, and depending on the $\theta$ angle decompose into all the even or odd rank antisymmetric tensor representations of the gauge $\mathrm{SU}(2l+8)$ connected to the $\mathrm{USp}$ gauge group.
In the $\theta=0$ case we get the even rank representations, which contain a gauge invariant part which is a flavor symmetry singlet. In the $\theta=\pi$ case we get the odd rank representations, which do not contain any gauge invariants. However we can combine it with one of the $\mathrm{SU}(2l+8)$ flavors to form an invariant. This should contribute a state in the $\bold{8}$ of $\mathrm{SU}(8)$ with $\mathrm{SU}(2)_R$ spin which is greater by $\frac{1}{2}$ from that of the singlet. This agrees with what we observe. It might be interesting to study more accurately the spectrum, particularly, the Higgs branch chiral ring, and compare against the $6d$ expectations. We will not pursue this here. \def\varpi{\varpi} \subsection{Massive E-string theories} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $m_0$ & $E_{9-m_0}$ & Kac label $\un{n}/\varpi$ & $r_i/\varpi$ \\ \hline \hline $1$ & $E_8$ & $\E 100000000$ & $\E000000000$ \\ \hline $2$ & $E_7 $ & $\E {0} {1} 0000000$ & $\E {1}00000000$ \\ \hline $3$ & $E_6$ & $\E {0} {0} {1} 000000$ & $\E {2}{1}0000000$ \\ \hline $4$ & $\mathrm{SO}(10)$ & $\E {0} {0} {0} {1} 00000$ & $\E{3} {2} {1}000000$ \\ \hline $5$ & $\mathrm{SU}(5)$ & $\E {0} {0} {0} {0}{1} 0000$ & $\E{4} {3} {2} {1} 00000$\\ \hline $6$ & $\mathrm{SU}(3) \times \mathrm{SU}(2)$ & $\E {0} {0} {0} {0}{0}{1} 000$ & $\E{5} {4} {3} {2} {1} 0000$ \\ \hline $7$ & $\mathrm{SU}(2) \times \mathrm{U}(1)$ & $\E {0} {0} {0} {0}{0}{0} {1} 0 {1}$ & $\E {6} {5} {4} {3} {2} {1} {0} {0} {0}$ \\ \hline $8$ & $\mathrm{SU}(2)$ & $\E {0} {0} {0} {0}{0}{0} {2} 0 0$ & $\E {7} {6} {5} {4} {3} {2} {0} {0} {1}$ \\ \hline \end{tabular} \end{center} \caption{The values of $r_i$ in \eref{mirgenmassEstring} and the Kac label for each $m_0$.} \label{tab:m0ri} \end{table}% In this subsection, we consider the following $6d$ theory \begin{equation}} \newcommand{\ee}{\end{equation} \label{genmassEstring} {\cal T}^{6d}_E(\varpi ,m_0, N): \qquad [E_{9-m_0}] \,\, {1} \hspace{.4cm} \overset{\mathfrak{su}_{m_0}}{2} \hspace{.2cm} \overset{\mathfrak{su}_{2m_0}}{2} \hspace{.2cm} \ldots \overset{\mathfrak{su}_{(\varpi-1)m_0}}{2} \, \, \underset{[N_f = m_0]}{\overset{\mathfrak{su}_{\varpi m_0}}{2}} \,\, \underbrace{\overset{\mathfrak{su}_{\varpi m_0}}{2} \,\, \cdots \,\, \overset{\mathfrak{su}_{\varpi m_0}}{2}}_{N-\varpi-1} \,\, [{\rm SU}(\varpi m_0)]. \ee These theories were studied in \cite{DelZotto:2014hpa, Zafrir:2015rga,Ohmori:2015tka, Bah:2017wxp}. They can be called the ``massive E-string theories'' as in the last reference, since they correspond to NS5-branes probing the O8-D8 combination in the presence of the Romans mass. The mirror of the $T^3$ compactification of \eref{genmassEstring} is \begin{equation}} \newcommand{\ee}{\end{equation} \label{mirgenmassEstring} {\Node{}{1}}-{\Node{}{2}}- \cdots -{\Node{}{\varpi m_0 }}-\node{}{N+r_1}-\node{}{2N+r_2}-\node{}{3N+r_{3}}-\node{}{4N+r_4}-\node{}{5N+r_5}-\node{\cver{}{3N+r_{3'}}}{6N+ r_6}-\node{}{4N+r_{4'}}-\node{}{2N+r_{2'}} \ee where the values of $r_i$ and the Kac labels for each $m_0$ are given in Table \ref{tab:m0ri}. Note that \begin{equation}} \newcommand{\ee}{\end{equation} \sum_i {r_i} = \frac{1}{2} \varpi m_0 (m_0-1)~. \ee The SCFT Higgs branch dimension of \eref{genmassEstring} is \begin{equation}} \newcommand{\ee}{\end{equation} \label{HiggsTE} \dim^{\text{SCFT}}_\mathbb{H} \text{Higgs of ${\cal T}^{6d}_E(\varpi ,m_0, N)$} = 30N + \frac{1}{2} \varpi m_0^2(\varpi+1) -1~; \ee this is equal to the Coulomb branch dimension of \eref{mirgenmassEstring}. 
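As a quick consistency check of this counting, the following short script (an illustrative sketch added here, not part of the original computation) verifies symbolically that the sum of the $\mathrm{U}(N_i)$ ranks in \eref{mirgenmassEstring}, minus one decoupled overall $\mathrm{U}(1)$ -- the same counting used in the examples above -- reproduces \eref{HiggsTE}. It assumes only the rank assignments displayed in \eref{mirgenmassEstring} and the sum rule for the $r_i$ quoted above.

\begin{verbatim}
import sympy as sp

# Symbolic check: Coulomb branch dimension of the mirror quiver versus the
# quoted Higgs branch dimension of T^{6d}_E(varpi, m0, N).
w, m0, N = sp.symbols('varpi m0 N', positive=True)

tail  = sp.Rational(1, 2) * w * m0 * (w * m0 + 1)   # ranks 1 + 2 + ... + varpi*m0
sum_r = sp.Rational(1, 2) * w * m0 * (m0 - 1)       # sum_i r_i, as stated in the text
# remaining nodes: N+r_1, 2N+r_2, ..., 6N+r_6, 3N+r_3', 4N+r_4', 2N+r_2'
coulomb = tail + 30 * N + sum_r - 1                 # minus the decoupled overall U(1)

higgs = 30 * N + sp.Rational(1, 2) * w * m0**2 * (w + 1) - 1   # eq. (HiggsTE)
print(sp.simplify(coulomb - higgs))                 # prints 0
\end{verbatim}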
\subsection{Higgsing the $\mathrm{SU}(k)$ flavour symmetry} In the theories we have discussed so far, there is always an $\mathrm{SU}(k)$ flavour symmetry which came from the gauge symmetry on the $\mathbb{C}^2/\mathbb{Z}_k$ singularity. From the $3d$ quiver perspective, this symmetry arises from the topological symmetry associated with the nodes in the tail ${\Node{}{1}}-{\Node{}{2}}-\cdots -\Node{}{k}$. We can obtain another class of models by turning on nilpotent VEVs that Higgs the flavour symmetry $\mathrm{SU}(k)$.\footnote{The authors thank Alessandro Tomasiello for the discussion about this class of theories.} Suppose that such VEVs are in the nilpotent orbit of $\mathrm{SU}(k)$ given by $\bigoplus_i J_{s_i}$ where $J_s$ is an $s\times s$ Jordan block, so that $Y = [s_1, s_2, \ldots, s_\ell]$ is the corresponding partition of $k$. Assuming that the 6d quiver theory before the Higgsing has a sufficiently long plateau of $\mathrm{SU}(k)$ gauge groups, this Higgsing can be performed exactly as in 4d class S theory, e.g.~as described in Sec.~12.5 of \cite{Tachikawa:2013kta}. Its effect on the 6d quiver was studied in \cite{Heckman:2016ssk,Mekareeya:2016yal}. In the end, we see that the tail on the right-hand side of the quiver takes the form \begin{equation}} \newcommand{\ee}{\end{equation} \label{gluegenmassEstring1} \cdots \,\, \overset{\mathfrak{su}({k})}{2} \,\, \,\, \underset{[N_f = u_{\ell'}]}{{\overset{\mathfrak{su}({k})}{2}}} \, \, \, \, \underset{[N_f = (u_{\ell'-1}-u_{\ell'})]}{{\overset{\mathfrak{su}{(k-u_{\ell'})}}{2}}} \, \, \, \, \cdots \, \, \, \, \underset{[N_f = (u_{2}-u_3)]}{\overset{\mathfrak{su}({u_2+u_1})}{2}}~ \, \, \, \, \underset{[N_f = (u_{1}-u_2)]}{\overset{\mathfrak{su}({u_1})}{2}}~, \ee where $u_i$ are the elements of the transpose $ Y^T= [ u_1, u_2, \ldots, u_{\ell'}]$, and we define $u_i =0$ for $i > \ell'$. The SCFT Higgs branch dimension of \eref{gluegenmassEstring1} is \begin{equation}} \newcommand{\ee}{\end{equation} \label{HiggsdimgluegenmassEstring1} \dim^{\text{SCFT}}_\mathbb{H} \text{Higgs of \eref{gluegenmassEstring1}} = \left[30(N_3+k) - \langle \vec w, \vec \rho \rangle +\frac{1}{2} k(k+1) -1 \right] -\dim_\mathbb{H} \mathcal{O}_Y \ee where $\mathcal{O}_Y$ is the nilpotent orbit labeled by $Y$. The mirror of the $T^3$ compactification of \eref{gluegenmassEstring1} is \begin{equation}} \newcommand{\ee}{\end{equation}\label{mirrgenmassEstring1} \frac{T_{Y}(\mathrm{SU}(k))~ \times~ \sqnode{}{k} -\node{}{\tilde N_1}-\node{}{\tilde N_2}-\node{}{\tilde N_3}-\node{}{\tilde N_4}-\node{}{\tilde N_5}-\node{\cver{}{\tilde N_{3'}}}{\tilde N_6}-\node{}{\tilde N_{4'}}-\node{}{\tilde N_{2'}}}{\mathrm{U}(k)/\mathrm{U}(1)}. \ee In other words, we simply replace the tail ${\Node{}{1}}-{\Node{}{2}}-\cdots -\sqNode{}{k}$ for the theories discussed in the preceding sections by $T_{Y}(\mathrm{SU}(k))$, where the latter is defined as in \cite{Gaiotto:2008ak}. The Coulomb branch dimension of \eref{mirrgenmassEstring1} is \begin{equation}} \newcommand{\ee}{\end{equation} \label{Coul3dmirrgenmassEstring1} \begin{split} &\dim_\mathbb{H} \text{Coulomb of \eref{mirrgenmassEstring1}} \\ &= \left[ 30(N_3+k) - \langle \vec w, \vec \rho \rangle \right] + \left[\frac{1}{2} \{ (k^2 -1) - (k-1) \} -\dim_\mathbb{H}\mathcal{O}_Y \right] + (k-1)\\ & = 30 (N_3+k) + \frac{1}{2}k(k+1) -1 - \dim_\mathbb{H} \mathcal{O}_Y - \langle \vec w, \vec \rho \rangle~, \end{split} \ee where the terms in the second square brackets in the second line denote the Coulomb branch dimension of $T_{Y}(\mathrm{SU}(k))$.
This result is indeed in agreement with \eref{HiggsdimgluegenmassEstring1}. As an example, let us consider ${\cal T}^{6d}_E(k ,{m_0 = 1}, N)$ of the previous section and perform the Higgsing with $Y=[k-1,1]$. The resulting $6d$ theory is \begin{equation}} \newcommand{\ee}{\end{equation} [E_8] \, \, 1 \, \, \overset{\mathfrak{su}(1)}{2} \, \, \overset{\mathfrak{su}(2)}{2} \, \,\ldots \, \, \overset{\mathfrak{su}(k-1)}{2} \, \, \underset{\left[N_f=1 \right]}{\overset{\mathfrak{su}(k)}{2}} \, \, \overset{\mathfrak{su}(k)^{N-2k}}{2} \, \, \underset{\left[N_f=1 \right]}{\overset{\mathfrak{su}(k)}{2}}\, \, \overset{\mathfrak{su}(k-1)}{2} \,\, \ldots \,\, \overset{\mathfrak{su}(2)}{2} \, \, \overset{\mathfrak{su}(1)}{2}~, \ee where the number of tensor multiplets is $N$. This theory is similar to that discussed in (36) of \cite{Aspinwall:1997ye} and (5.2) of \cite{Hanany:1997gh}, except that we have only one $(-1)$-curve in the quiver, instead of two. The mirror of the $T^3$ compactification of this theory is \begin{equation}} \newcommand{\ee}{\end{equation} {\Node{}{1}} -{\Node{}{k}} -\node{}{N}-\node{}{2N}-\node{}{3N}-\node{}{4N}-\node{}{5N}-\node{\cver{}{3N}}{6N}-\node{}{4N}-\node{}{2N}~. \label{E8E8quiv} \ee This quiver is a ``good'' theory in the sense of \cite{Gaiotto:2008ak} if $N+1\geq2k$ and $k \geq 2$. In this case, this quiver is the $3d$ mirror theory of the $S^1$ reduction of the class ${\cal S}$ theory of type $\mathrm{SU}(6N)$ associated with a sphere with the punctures \begin{equation}} \newcommand{\ee}{\end{equation} [N^5, N-k,k-1,1], \quad [(3N)^2], \quad [(2N)^3]~. \ee \section*{Acknowledgments} The authors thank Hiroyuki Shimizu for the collaboration at the early stages. NM sincerely thanks Stefano Cremonesi, Amihay Hanany and Alessandro Tomasiello for a close collaboration, invaluable insights, and several useful discussions. He is also grateful for the hospitality of the organisers of the Pollica Summer Workshop 2017, including Fernando Alday, Philip Argyres, Madalena Lemos and Mario Martone. He is supported in part by the INFN, the ERC Starting Grant 637844-HBQFTNCER, as well as the ERC STG grant 306260 through the Pollica Summer Workshop. KO gratefully acknowledges support from the Institute for Advanced Study. YT is supported in part by JSPS KAKENHI Grant-in-Aid (Wakate-A), No.17H04837 and JSPS KAKENHI Grant-in-Aid (Kiban-S), No.16H06335. YT and GZ are partially supported by the WPI Initiative, MEXT, Japan, at IPMU, the University of Tokyo. \bibliographystyle{ytphys}
\section{Introduction} What is the probability that an extensive observable in a physical system has a value far from its average? Such questions are the subject of large-deviation theory~\cite{Touchette}, which provides a mathematical foundation for classical thermodynamics~\cite{Ruelle}, and has more recently been applied to a variety of problems in non-equilibrium statistical mechanics~\cite{fluct,JL,Bod-Der,Simon09,Imparato09,popkov,evans04,kcm-transition,hedges09}. Notable results in these non-equilibrium settings include analyses of fluctuation theorems~\cite{fluct}; exact results for exclusion processes and models of energy transport~\cite{JL,Bod-Der,Simon09,Imparato09,popkov}; a proposed non-equilibrium counterpart of detailed balance in sheared systems~\cite{evans04}; and dynamical phase transitions in glassy systems~\cite{kcm-transition,hedges09,jack-rom09,elmatad10}. Here, we focus on trajectories in which an extensive measurement of dynamical activity~\cite{kcm-transition,maes06-activity,lecomte-chaotic} has a non-typical value, and we discuss how these trajectories can be characterised in terms of effective interactions~\cite{Simon09,popkov,evans04,baule-prl08,simha-evans08,Jack-Sollich-PTP,baule-jstat2010,touchette-arxiv2013}. The nature of these effective interactions is important in interpreting measurements of large deviations in non-equilibrium systems. In particular, if the interactions are simple and short-ranged, one can interpret the large-deviation behaviour of the system in terms of its response to these short-ranged forces. In this situation, physical intuition and results from the existing literature can be very useful. However, there are at least some cases~\cite{JL,Bod-Der,elmatad10,Jack-Sollich-PTP} where long-ranged effective interactions appear, and lead to unusual new behaviour (for example, phase transitions to states with long-ranged order may appear in one-dimensional systems). In these cases, interpreting and analysing results for large deviations is more difficult. An example is given by the very stable ``glass'' states that appear in glassy model systems, when one considers large deviations of the dynamical activity~\cite{kcm-transition,hedges09,jack-rom09,jack11-stable,Speck12} -- it is not clear what effective interactions might be required to stabilise these states, and it is therefore difficult to understand what kinds of equilibrium or non-equilibrium protocols might be used to prepare them in the laboratory. In this study, we present numerical and analytical results for the one-dimensional East model~\cite{Jackle91}. This simple spin system is an example of a kinetically constrained model~\cite{Ritort-Sollich,GST-kcm}, where complex dynamical behaviour arises at low temperature, while all thermodynamic quantities remain very simple. The model has been studied extensively in the context of glassy systems~\cite{Sollich-Evans,Aldous02,East-gap}, particularly within the theory of dynamical facilitation~\cite{GC-east}. At low temperatures, it supports a very broad spectrum of time scales -- this results in a range of glassy phenomena, and also considerable structure in the large deviation functions of the model. The model is useful for our purposes because its large deviations are quite rich and complex, but it is still tractable both numerically and analytically. We present several methods for characterising the effective interactions associated with large deviations in this model. 
The one-dimensional nature of the model greatly facilitates our analysis in this article, but we argue that our methods and general results have potential applicability for analysing effective interactions in a wide range of systems. Our results are the first to give direct insights into effective interactions over a range of biases in a many-body system, going beyond previously studied cases where very few degrees of freedom were considered~\cite{popkov,evans04,baule-prl08,simha-evans08,baule-jstat2010,touchette-arxiv2013}, or the analysis was restricted to the limit of strong biasing~\cite{popkov}. The paper is organised as follows: In Section~\ref{sec:model}, we define the East model, the biased ensembles of trajectories that we study, and the observables that we use to characterise these ensembles. Section~\ref{sec:num} gives an overview of the response to the bias, illustrated by numerical results. In Section~\ref{sec:pert}, we use a linear-response (perturbative) formalism to analyse the effective interactions in the system, for small $\nu$: the resulting physical picture is discussed in Section~\ref{sec:hier}. Since strong long-ranged effective interactions appear even at the perturbative level, we then use non-perturbative variational schemes to estimate effective interactions (Sec.~\ref{sec:var}). Finally, we discuss implications of our results for more general systems in Sec.~\ref{sec:summ}. \section{Model and ensemble definitions} \label{sec:model} \subsection{East model} The one-dimensional East model~\cite{Jackle91,Ritort-Sollich} has binary spins $n_i=0,1$ where the sites $i=1\dots N$ form a linear chain, with periodic boundaries. We identify $n_i=1$ as the `up' state and $n_i=0$ as the `down' state. The energy of the system has the simple form $E_0=\sum_i n_i$, and spins flip with Glauber rates, subject to the kinetic constraint that spin $i$ may flip only if spin $i-1$ is in the `up' state, $n_{i-1}=1$. That is, spin $i$ flips from $0\to1$ with rate $n_{i-1}c$, and from $1\to0$ with rate $n_{i-1}(1-c)$, where $c=(1+\mathrm{e}^\beta)^{-1}$ is equal to the equilibrium fraction of `up' spins, and $\beta$ is the inverse temperature. We use the notation $\mathcal{C}=(n_1,n_2,\dots,n_N)$ to represent a configuration of the system. We focus on the behaviour for small $c$ (low temperature). At equilibrium, the dynamical behaviour of the system is hierarchical~\cite{Sollich-Evans}: motion on a length scale $\ell$ requires the system to overcome an energy barrier of height \begin{equation} \alpha_\ell = \lceil \log_2 \ell \rceil. \label{equ:alpha} \end{equation} That is, $\alpha_\ell$ is the smallest integer greater than or equal to $\log_2 \ell$. Specifically, this is the energy barrier associated with relaxation of an up-spin that is a distance $\ell$ from the nearest up-spin to its left. At low temperature, the rates for such processes scale as \begin{equation} \tau_\ell^{-1} = c^{\alpha_\ell}\sim \mathrm{e}^{-\beta \alpha_\ell}, \qquad \ell \lesssim 1/c , \label{equ:tau-ell} \end{equation} where $\tau_\ell$ is the associated time scale. The typical distance between up spins in the system at equilibrium is $1/c$: if one assumes that (\ref{equ:tau-ell}) applies on this length scale then one estimates the relaxation time of the system to be $\tau_{1/c} \sim \mathrm{e}^{\beta^2 /\ln 2}$~\cite{Sollich-Evans,Aldous02}. However, while this argument is persuasive, the bulk relaxation time at equilibrium in fact scales as $\tau_0 \sim \mathrm{e}^{\beta^2 /(2\ln 2)}$~\cite{East-gap}.
The slower divergence arises because the simple argument above only considers the energy barrier of the most efficient path, but neglects speed-ups arising from the availability of many paths. These become significant for $\ell\approx 1/c$. For length scales $\ell$ larger than $1/c$, the system relaxes by undergoing approximately $c\ell$ successive events, each operating on a length scale of order $1/c$, and taking a time of order $\tau_0$: hence we expect $\tau_\ell \sim c\ell \tau_0$. \subsection{Large deviations} \label{sec:large-dev} To investigate large deviations in the East model, we begin with its master equation: \begin{equation} \partial_t P(\mathcal{C},t) = -r(\mathcal{C}) P(\mathcal{C},t) + \sum_{\mathcal{C}'(\neq \mathcal{C})} W(\mathcal{C} \leftarrow \mathcal{C}') P(\mathcal{C}',t) , \end{equation} where $W(\mathcal{C}\leftarrow\mathcal{C}')$ is the transition rate from $\mathcal{C}'$ to $\mathcal{C}$ and \begin{equation} r(\mathcal{C}) = \sum_{\mathcal{C}' \neq \mathcal{C}} W(\mathcal{C}' \leftarrow \mathcal{C}) \label{equ:def-r} \end{equation} is the escape rate from configuration $\mathcal{C}$. Writing $|P(\mathcal{C},t)\rangle=\sum_\mathcal{C} P(\mathcal{C},t)|\mathcal{C}\rangle$, the master equation is $\partial_t |P(\mathcal{C},t)\rangle = \mathbb{W} |P(\mathcal{C},t)\rangle$ where $\mathbb{W}$ is the `master operator' (or generator). Using a spin-$\frac12$ basis for the configurations of the system, one has~\cite{kcm-transition} \begin{equation} \mathbb{W} = \sum_i \hat{n}_{i-1} \left[ (1-c)\sigma^-_i + c \sigma^+_i - (1-2c)\hat{n}_i -c \right] , \end{equation} where $\hat{n}_i|\mathcal{C}\rangle = n_i |\mathcal{C}\rangle$ gives the state of spin $i$ in configuration $\mathcal{C}$, while the $\sigma_i^\pm$ are raising and lowering operators for spin $i$. (Explicitly, if we write a configuration with $n_i=0$ or $n_i=1$ as $|\cdots0\cdots\rangle$ or $|\cdots1\cdots\rangle$ respectively, then the operators act as $\sigma_i^+ |\cdots0\cdots\rangle= |\cdots1\cdots\rangle$ and $\sigma_i^- |\cdots1\cdots\rangle = |\cdots0\cdots\rangle$, while $\sigma_i^+|\cdots1\cdots\rangle = 0 = \sigma_i^-|\cdots0\cdots\rangle$.) Large deviations of the dynamical activity in the East model were first considered in~Ref.~\cite{Merolle} and later in~\cite{kcm-transition,elmatad13}. Consider a trajectory of length (``observation time'') $t_\mathrm{obs}$, which contains a total of $K$ configuration changes. At equilibrium, $K$ has a probability distribution $P_0(K)$: if the system size $N$ and the observation time $t_\mathrm{obs}$ are large then the central limit theorem implies that the variance and mean of $K$ both scale as $Nt_\mathrm{obs}$. Thus, $P_0(K)$ becomes sharply peaked as $N,t_\mathrm{obs} \to\infty$. Nevertheless, one may still consider trajectories with non-typical values of $K$. One expects \begin{equation} P_0(K) \simeq \mathrm{e}^{-Nt_\mathrm{obs} \pi_K(k)} , \end{equation} where $k=K/(Nt_\mathrm{obs})$ and $\pi_K(k)$ is a `spacetime free energy' or `rate function' that determines the probability of particular values of $K$~\cite{pi-foot}. In practice, it is convenient to follow an equivalent route, concentrating not on $\pi_K(k)$ but on its Legendre transform $\psi_K(s) = \min_k[sk + \pi_K(k)]$. 
To achieve this, one defines (as in Ref.~\cite{kcm-transition}) a probability distribution over trajectories: \begin{equation} \mathrm{Prob}[\mathcal{C}(t),s] = \mathrm{Prob}[\mathcal{C}(t),0] \cdot \frac{ \mathrm{e}^{-sK} }{ \langle \mathrm{e}^{-sK} \rangle_0 } , \label{equ:s-ens} \end{equation} where $\mathcal{C}(t)$ represents a trajectory of the system and $K$ denotes its activity, while the parameter $s$ biases the equilibrium distribution $\mathrm{Prob}[\mathcal{C}(t),0]$, favouring trajectories with non-typical values of $K$. The notation $\langle \cdot \rangle_0$ indicates an equilibrium average. Following~\cite{kcm-transition,hedges09}, we refer to the probability distribution $\mathrm{Prob}[\mathcal{C}(t),s]$ as an `$s$-ensemble'. In the following, we use $\langle \cdot \rangle_s$ to represent an average with respect to this distribution. Standard arguments based on equivalence of ensembles indicate that averages with respect to $\mathrm{Prob}[\mathcal{C}(t),s]$ are equivalent to averages over trajectories with fixed values of $K$ (see also~\cite{touchette-arxiv2013}). In the long time limit, the free energy $\psi_K(s)$ may be obtained from $\langle \mathrm{e}^{-sK} \rangle_0 \simeq \mathrm{e}^{-Nt_\mathrm{obs} \psi_K(s)}$~\cite{psi-foot}. We will consider averages of one-time quantities (like $\langle n_i(t) \rangle_s$), as well as quantities that depend on the whole trajectory (like $\langle K \rangle_s$). Averages of one-time quantities are independent of the time $t$ at which they are evaluated, except for initial and final `transient' regimes near $t=0$ and $t=t_\mathrm{obs}$. Unless otherwise stated, we evaluate all one-time quantities at time $t=t_\mathrm{obs}/2$, which is representative of the steady-state regime. As discussed in~\cite{kcm-transition}, properties of $s$-ensembles in the East model may be obtained by analysis of the operator \begin{equation} \mathbb{W}_K(s) = \sum_i \hat{n}_{i-1} \big[ \mathrm{e}^{-s} (1-c)\sigma^-_i + \mathrm{e}^{-s} c \sigma^+_i - (1-2c)\hat{n}_i -c \big] . \end{equation} The largest eigenvalue of $\mathbb{W}_K(s)$ is equal to $-N\psi_K(s)$. Let the left and right eigenvectors corresponding to this eigenvalue have elements $u_\mathcal{C}$ and $v_\mathcal{C}$ respectively, normalised such that $\sum_\mathcal{C} u_\mathcal{C} v_\mathcal{C}=1$. Then the steady-state probability of configuration $\mathcal{C}$ within the $s$-ensemble is $p_\mathcal{C} = u_\mathcal{C} v_\mathcal{C}$. The large deviations of $K$ are closely related to those of the `time-integrated escape rate'~\cite{kcm-transition}, \begin{equation} R[\mathcal{C}(t)] = \int_0^{t_\mathrm{obs}}\!\!\mathrm{d}t \, r(\mathcal{C}(t)) , \end{equation} where $r(\mathcal{C})$ was defined in (\ref{equ:def-r}), above. By analogy with (\ref{equ:s-ens}), we define a `$\nu$-ensemble' by \begin{equation} \mathrm{Prob}[\mathcal{C}(t),\nu] = \mathrm{Prob}[\mathcal{C}(t),0] \cdot \frac{ \mathrm{e}^{\nu R} }{ \langle \mathrm{e}^{\nu R} \rangle_0 } . \label{equ:nu-ens} \end{equation} The properties of this ensemble may be obtained from the operator \begin{equation} \mathbb{W}_R(\nu) = \mathrm{e}^{s} \mathbb{W}_K(s), \qquad \mathrm{e}^{s} = 1 - \nu . \label{equ:nu-s} \end{equation} The largest eigenvalue of $\mathbb{W}_R(\nu)$ is therefore $-N\psi_R(\nu)$ with $\psi_R(\nu) = \mathrm{e}^s \psi_K(s)|_{\mathrm{e}^s=1-\nu}$, and the eigenvectors associated with this eigenvalue are the same $u_\mathcal{C}$ and $v_\mathcal{C}$ found by diagonalising $\mathbb{W}_K(s)$. 
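Before proceeding, it may help to see concretely how $K$ and $R$ are accumulated along a single trajectory. The following continuous-time simulation of the unbiased dynamics is a minimal illustrative sketch (it is not the code used for the results below, and the parameter values are arbitrary):

\begin{verbatim}
import numpy as np

# Minimal sketch (not the production code used here): unbiased continuous-time
# dynamics of the East model, accumulating the activity K (number of flips)
# and the time-integrated escape rate R along one trajectory.

def east_trajectory(N=50, c=0.1, t_obs=1e3, seed=0):
    rng = np.random.default_rng(seed)
    n = (rng.random(N) < c).astype(int)   # equilibrium initial condition
    if n.sum() == 0:                      # avoid the frozen all-down state
        n[0] = 1
    t, K, R = 0.0, 0, 0.0
    while t < t_obs:
        # spin i can flip only if its left neighbour (periodic) is up
        rates = np.where(n == 1, 1.0 - c, c) * (np.roll(n, 1) == 1)
        r_tot = rates.sum()               # escape rate r(C) of the configuration
        dt = rng.exponential(1.0 / r_tot)
        R += r_tot * min(dt, t_obs - t)   # integral of r(C(t)) dt up to t_obs
        t += dt
        if t >= t_obs:
            break
        i = rng.choice(N, p=rates / r_tot)
        n[i] = 1 - n[i]
        K += 1
    return K, R

print(east_trajectory())
\end{verbatim}

In principle, biased averages such as $\langle \mathrm{e}^{\nu R}\rangle_0$ could be estimated by reweighting such unbiased trajectories, but this becomes inefficient except for very weak biases; this motivates the transition path sampling and exact diagonalisation methods used below.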
We use $\langle \cdot \rangle_\nu$ to represent averages with respect to $\mathrm{Prob}[\mathcal{C}(t),\nu]$. From (\ref{equ:nu-s}), it follows that the $\nu$-ensemble and the $s$-ensemble contain the same information (at least for $\nu<1$). Since the dynamics of the East model obeys detailed balance, and the activity $K$ is time-reversal symmetric, the operator $\mathbb{W}_K(s)$ may be symmetrised~\cite{kcm-transition} as $\mathbb{H}_K(s) = \mathrm{e}^{\beta\sum_i \hat{n}_i/2} \mathbb{W}_K(s) \mathrm{e}^{-\beta\sum_i \hat{n}_i/2}$; the same transformation also symmetrises $\mathbb{W}_R(\nu)$. It follows~\cite{Jack-Sollich-PTP} that both $s$-ensembles and $\nu$-ensembles may be described by ``auxiliary Markov models'' that obey detailed balance with respect to the distribution $P_s(\mathcal{C}) \propto u_\mathcal{C}^2 \mathrm{e}^{-\beta\sum_i n_i}$. Hence, we write \begin{equation} u_\mathcal{C} = \mathrm{e}^{-\Delta V_\mathcal{C}/2} \label{equ:def-VC} \end{equation} and interpret $\Delta V_\mathcal{C}$ as an `effective potential' that acts to drive the system into the $\nu$-ensemble (see also~\cite{Simon09}). If $\Delta V_\mathcal{C}$ is known then steady state averages of one-time quantities may be obtained by the methods of equilibrium statistical mechanics, using the energy function $\beta E(\mathcal{C}) = \beta \sum_i n_i + \Delta V_\mathcal{C}$. The main purpose of this paper is to investigate the `effective potential' $\Delta V_\mathcal{C}$ that describes the $\nu$-ensemble for the East model. \subsection{Observables and correlation functions} \label{sec:def-correl} As well as the effective potential $\Delta V_{\mathcal{C}}$, which fully describes the biased state, we also consider several simpler observables that provide insight into the response to the bias. These include the mean escape rate, \begin{equation} r(\nu) = \frac{1}{Nt_\mathrm{obs}}\langle R \rangle_\nu , \end{equation} which indicates how strong the bias $\nu$ must be, in order to produce a particular deviation of $R$ from its average. From the definition of the dynamical free energy $\psi_R(\nu)$ as the long-time limit of $-1/(Nt_\mathrm{obs})\ln\langle\mathrm{e}^{\nu R}\rangle_0$ it follows that $r(\nu)=-\psi_R'(\nu)$. The derivative $\chi_R(\nu)=r'(\nu)=-\psi_R''(\nu)$, then indicates the size of the fluctuations of $R$, via \begin{eqnarray} \chi_R(\nu) & = \frac{1}{Nt_\mathrm{obs}} [ \langle R^2 \rangle_\nu - \langle R \rangle^2_\nu] \nonumber \\ & = \frac{1}{Nt_\mathrm{obs}} \sum_{ij} \int_0^{t_\mathrm{obs}}\!\! \mathrm{d} t\, \mathrm{d} t' \langle \delta r_i(t) \, \delta r_j(t') \rangle_\nu , \label{equ:chi-rr} \end{eqnarray} where \begin{equation} r_i=(1-c) n_{i-1} n_i + c n_{i-1} \overline{n}_i \label{equ:def-ri} \end{equation} is the escape rate at site $i$. We use the notation $\overline{n}_i = 1-n_i$ and $\delta O = O - \langle O \rangle$. To characterise spatial correlations in the biased state, we define \begin{equation} C(x) = \langle \delta n_i\, \delta n_{i+x} \rangle_\nu . \label{equ:cx} \end{equation} At equilibrium, $C(x)=c(1-c)\delta_{x,0}$, since the energy function of the system lacks any interactions between spins. However, the presence of a non-zero effective potential $\Delta V_{\mathcal{C}}$ modifies $C(x)$, which then provides a simple characterisation of response to the bias. Finally, an observable that is very useful for probing the structure of biased states is a probability distribution for domain sizes, denoted by $p(d)$. 
Here, each domain consists of a single up-spin and all the down-spins to its right, up to (but not including) the next up-spin. This definition is slightly different from the natural definition of domains in (for example) a one-dimensional Ising model, where a block of adjacent up spins would form a single domain: in our case, each up spin forms its own separate domain. Our definition is motivated by the fact that up spins are typically quite rare, and the spacing between these spins is a dominant factor in determining dynamical behaviour. More formally, we define observables $U_{i,i+r}$ and $D_{i,i+r}$ which are equal to unity if spins $i$ to $i+r$ are all up ($U$), or all down ($D$), and zero otherwise. That is, \begin{eqnarray} U_{i,i+r} &= \prod_{j=i}^{i+r} n_j , \qquad D_{i,i+r} &= \prod_{j=i}^{i+r} \overline{n}_j . \label{equ:UD} \end{eqnarray} In this notation, the domain size distribution is \begin{equation} p(d) = \frac{ \langle n_i \cdot D_{i+1,i+d-1} \cdot n_{i+d} \rangle_\nu}{ \langle n_i \rangle_\nu} . \label{equ:def-pd} \end{equation} At equilibrium, one has the distribution $p^0(d) = c(1-c)^{d-1}$. \subsection{Range of effective interactions} For a finite system, it is always possible to write the eigenvector $u_\mathcal{C}$ as $\mathrm{e}^{-\Delta V_\mathcal{C}/2}$, as in (\ref{equ:def-VC}). However, the question of whether $\Delta V_\mathcal{C}$ has the features expected of a ``physical'' potential is a subtle one. For spin models, Ising-like potentials such as $V_\mathcal{C} = \mu \sum_i n_i - \epsilon \sum_i n_i n_{i+1}$ are familiar, but $\Delta V_\mathcal{C}$ may also contain two-body interaction terms like $n_i n_{i+x}$ of larger range $x$, or many-body terms like $n_i n_{i+1}n_{i+2}$, which is a three-body interaction of range 2. While these are less familiar in physical situations, we show below that $\Delta V_\mathcal{C}$ for the East model with $\nu>0$ must contain a combination of such terms with unbounded range (up to the system size), and we argue that this behaviour can be expected more generally. Specifically, if the longest range interaction term in $\Delta V_\mathcal{C}$ has range $B-1$ then the domain size distribution $p(d)$ must decay exponentially for $d>B$ (a derivation is given in~\ref{app:pd-exp}). In the following, we will present numerical and analytic evidence that for $\nu>0$, $p(d)$ decays faster than exponentially at large $d$, indicating the effective potential contains \emph{long-ranged interactions}: that is, no effective potential with interactions of bounded range can fully represent $\Delta V_\mathcal{C}$ as the system size $N\to\infty$. In this sense, the states that occur for $\nu>0$ in these systems are qualitatively more complex than those of classical spin models at equilibrium. This is part of the reason for the rich behaviour that has been observed in the large deviations of such systems, even in one dimension~\cite{JL,Bod-Der,Imparato09,kcm-transition}. We also note in passing that effective potentials similar to $\Delta V_\mathcal{C}$ have been considered in the mathematical literature. For example, instead of considering configurations $\mathcal{C}$ within long trajectories, one can consider the configuration of a single (two-dimensional) lattice plane, within a three-dimensional Ising model~\cite{Maes99-ising}. Similar questions also arise within the renormalisation group~\cite{Sokal93}. 
The main question that has been addressed there is whether the effective interactions are ``Gibbsian'': that is, whether $\Delta V_\mathcal{C}$ is a ``reasonable Hamiltonian'' in the sense defined rigorously in~\cite{Sokal93}. (The idea is based on considerations of locality, in the sense that the interactions encoded by $\Delta V_\mathcal{C}$ should decay sufficiently quickly with interaction range~\cite{Sokal93}.) In the following, we focus on the specific interactions that we find within the East model: whether these interactions are Gibbsian or not is a question that we postpone for later studies. \section{Overview of response to bias, and numerical results} \label{sec:num} This Section introduces the most important features of the response of the East model to the bias $\nu$. We present numerical results that illustrate the structure that develops when the bias $\nu$ is applied, and we also discuss two concepts that are important for interpreting this structure: the notion of `quasiequilibrium', and the existence of scaling behaviour at small $c$. \subsection{Mean activity and susceptibility} The $\nu$-ensemble may be sampled numerically using transition path sampling (TPS)~\cite{TPS-review}: we follow the methods described in~\cite{hedges09,elmatad10}. For small systems (at least up to $N=14$) one may also diagonalise the operator $\mathbb{W}_R(\nu)$ exactly, and obtain $\Delta V_\mathcal{C}$ directly from its eigenvectors. (It is convenient to first symmetrise the operator, as described in Section~\ref{sec:model}.) \begin{figure} \hfill \includegraphics[width=7cm]{fig1a-new.eps} \includegraphics[width=7cm]{fig1b.eps} \par\hfill \includegraphics[width=7cm]{fig1c-new.eps} \includegraphics[width=7cm]{fig1d-new.eps} \caption{ The mean escape rate $r(\nu)$, up-spin density $\rho(\nu)=\langle n_i\rangle_\nu$, and the susceptibility $\chi_R(\nu)$, in the East model for $c=0.1$. (a)~Mean escape rate $r(\nu)$ for the range $0<\nu<1$ (corresponding to $-\infty<s<0$ in the $s$-ensemble). This is compared with the prediction of the quasiequilibrium (QE) assumption (\ref{equ:qe-pred}). (b)~Mean escape rate $r(\nu)$ for small $\nu$. The QE assumption holds quite accurately. (c)~Mean up-spin density $\rho(\nu)$. The dashed line indicates $\rho(\nu)=\frac13$, which is the predicted density in the limit $c\ll\nu\ll 1$ (see text). (d)~The susceptibility $\chi_R(\nu)$, for small $\nu$. In all panels, solid circles show data from exact diagonalisation of the operator $\mathbb{W}_R(\nu)$ and open circles are data from transition path sampling (TPS)~\cite{TPS-review}. The system sizes are $N=14$ (exact diagonalisation), $N=32$ (TPS for $\nu\geq0.01$) and $N=64$ (TPS for $\nu<0.01$). % % } \label{fig:rnu} \end{figure} Fig.~\ref{fig:rnu} shows numerical results for the average escape rate in the $\nu$-ensemble, and the associated susceptibility $\chi_R(\nu)$. For the exact diagonalisation, we show results for $N=14$: we also analyzed the case $N=12$ for which the results agree to within the symbol sizes, indicating that finite-size effects are small. In the TPS calculations, we vary $t_\mathrm{obs}$ according to the state point, to ensure convergence of the large-$t_\mathrm{obs}$ limit. We have not carried out a detailed analysis of finite size effects but we do ensure that systems are significantly larger than almost all domains that appear at each state point (see discussion in future sections). We therefore expect finite size effects to be small in these cases too. 
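To make the diagonalisation step concrete, the following is a minimal sketch of such a calculation (it is not the code used for Fig.~\ref{fig:rnu}, and the values of $N$, $c$ and $\nu$ are illustrative). It uses the fact that, by (\ref{equ:nu-s}), $\mathbb{W}_R(\nu)$ differs from the master operator $\mathbb{W}$ only by the diagonal term $+\nu\, r(\mathcal{C})$; the operator is then symmetrised with the equilibrium weights as described in Section~\ref{sec:large-dev}, and $\psi_R(\nu)$, $r(\nu)$, the steady-state probabilities $p_\mathcal{C}$ and the effective potential $\Delta V_\mathcal{C}$ are read off from the leading eigenvalue and eigenvector.

\begin{verbatim}
import numpy as np
from itertools import product

# Minimal sketch (not the code used for Fig. 1): exact diagonalisation of the
# tilted generator W_R(nu) = W + nu*diag(r) for a small periodic East chain.

def east_tilted_diagonalisation(N=8, c=0.1, nu=0.01):
    configs = list(product([0, 1], repeat=N))
    index = {C: k for k, C in enumerate(configs)}
    dim = len(configs)
    W = np.zeros((dim, dim))
    esc = np.zeros(dim)                              # escape rates r(C)
    for k, C in enumerate(configs):
        for i in range(N):
            if C[(i - 1) % N] == 0:                  # kinetic constraint
                continue
            rate = (1.0 - c) if C[i] == 1 else c     # Glauber rates
            Cp = list(C); Cp[i] = 1 - C[i]
            W[index[tuple(Cp)], k] += rate           # W(C' <- C)
            esc[k] += rate
        W[k, k] = -esc[k]
    WR = W + nu * np.diag(esc)                       # bias exp(nu*R) tilts the diagonal
    nup = np.array([sum(C) for C in configs])
    sqrt_pi = np.sqrt(c**nup * (1.0 - c)**(N - nup)) # equilibrium weights pi(C)^(1/2)
    H = WR * sqrt_pi[np.newaxis, :] / sqrt_pi[:, np.newaxis]
    evals, evecs = np.linalg.eigh(0.5 * (H + H.T))   # symmetric by detailed balance
    phi = np.abs(evecs[:, -1])                       # Perron (top) eigenvector
    p = phi**2                                       # steady state p_C = u_C v_C
    u = phi / sqrt_pi                                # left eigenvector of W_R
    dV = -2.0 * np.log(u / u[0])                     # Delta V_C, up to a constant
    psi_R = -evals[-1] / N                           # largest eigenvalue = -N psi_R(nu)
    r_nu = float(np.dot(p, esc)) / N                 # mean escape rate per site
    return psi_R, r_nu, p, dV, configs
\end{verbatim}

The escape rate obtained in this way can be cross-checked against $-\psi_R'(\nu)$ by a finite difference in $\nu$.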
In Fig.~\ref{fig:rnu}(d), it is striking that $\chi_R(\nu)$ is large as $\nu\to0$, so that the average escape rate $r(\nu)$ responds strongly to the bias $\nu$. We emphasise however that this susceptibility is finite even as $N,t_\mathrm{obs}\to\infty$. To show this, we use a property of the East model at equilibrium. Suppose that $A$ and $B$ are two observables that depend on the spins only in non-overlapping regions of the system. By this we mean that all the spins that $A$ depends on are to the left of those in $B$, or vice versa. Then one has \begin{equation} \langle \delta A(t) \delta B(t')\rangle_0=0 . \label{equ:ABzero} \end{equation} This result~\cite{Ritort-Sollich} follows from the directionality of the kinetic constraint in the East model, which means that information can only flow from left to right in the system. Causality then implies that the state of spins with $j>i$ at time $t$ cannot affect the behaviour of spin $i$ for any $t'>t$. In fact, this property holds both at equilibrium and during relaxation towards equilibrium, which is sufficient to prove that a restricted version of (\ref{equ:ABzero}) holds both in and out of equilibrium. That is, (\ref{equ:ABzero}) holds both in equilibrium and out of equilibrium as long as $t'>t$ and observable $B$ is localised to the left of observable $A$. At equilibrium, time-reversal symmetry then implies that (\ref{equ:ABzero}) holds for all $t$ and $t'$. Combining Eqs (\ref{equ:def-ri},\ref{equ:ABzero}), one finds that $\langle \delta r_i(t) \,\delta r_j(t')\rangle_0 = 0$ for $|i-j|>1$. And for any $i,j$ with $|i-j|\leq 1$, the finite spectral gap~\cite{East-gap} of the operator $\mathbb{W}$ means that $\langle \delta r_i(t) \,\delta r_j(t')\rangle_0$ decays exponentially for long times. Thus, the combined integral and sum in (\ref{equ:chi-rr}) leads to a finite result, at $\nu=0$. We also note that the ratio $r(0)/\chi_R(0)$ sets a natural scale for $\nu$: the strength of bias required to introduce an $O(1)$ relative change in activity. From (\ref{equ:chi-rr}), this can be estimated to be of the order of $\tau_0^{-1}$, where $\tau_0$ is the equilibrium relaxation time, of the order of the inverse spectral gap. In fact, the bias $\nu$ has a hierarchy of natural scales, of which this is just the smallest. We return to this point in later Sections. \subsection{Quasiequilibrium condition} \label{sec:qe-simple} One effect of the parameter $\nu$ is to bias the system away from its equilibrium state. However, an important observation for interpreting the results of this article is that some degrees of freedom in the East model remain `quasiequilibrated' in the presence of the bias, at least as long as $c$ is small and $\nu$ is not too large. This means that even if configurations of the system have probabilities far from their equilibrium values, \emph{ratios} of probabilities for some configurations may be almost unaffected by the bias. In other words, one may identify pairs of states for which $\Delta V_\mathcal{C} - \Delta V_{\mathcal{C}'}$ is small, even if the absolute values of the effective potential are large. To illustrate this situation, suppose that spin $i$ is ``facilitated'' in the East model (that is, $n_{i-1}=1$). Then spin $i$ will flip on a relatively rapid time scale of order $c^{-1}$. At low temperatures (small $c$), it is likely that spin $i-1$ will flip only on a much slower time scale. In this case, spin $i$ typically flips many times before spin $i-1$ flips at all.
Holding all other spins in the system constant, one then compares configuration $\mathcal{C}$ (where $n_i=0$) with configuration $\mathcal{C}'$ (where $n_i=1$). Regardless of whether the system was initially in $\mathcal{C}$ or $\mathcal{C}'$, the rapid flips of spin $i$ mean that the ratio of probabilities of these configurations after a time of order $1/c$ will be very close to $c/(1-c)$, which is equal to their ratio at equilibrium. (Here we exploit the smallness of $\nu$ to neglect the effects of the bias on local spin flips.) It follows that $\langle n_{i-1} n_{i} \rangle_\nu \approx \langle n_{i-1} \rangle_\nu c$. To the extent that this holds, one has from (\ref{equ:def-ri}) that \begin{equation} \langle r_i\rangle_\nu \approx 2c(1-c)\langle n_i \rangle_\nu, \qquad r(\nu) \approx 2c(1-c) \langle \rho\rangle_\nu \label{equ:qe-pred} \end{equation} where $\rho = N^{-1}\sum_i n_i$. These relations indicate that the escape rate and the density of up spins in the biased ensemble are tightly correlated for the East model, as found numerically in~\cite{Merolle,Jack06}. Fig.~\ref{fig:rnu} confirms that the quasiequilibrium relation (\ref{equ:qe-pred}) does hold quite accurately for $\nu\lesssim c$. For larger $\nu$, quasiequilibrium breaks down: the response for $\nu=O(1)$ changes in character, resulting in a steady increase in the escape rate but departure from quasiequilibrium. For $c \ll \nu \ll 1$, we expect a plateau in $r(\nu)$, which represents a quasiequilibrium state with $\langle n_i \rangle_\nu \approx \frac13$ (see Sec.~\ref{sec:hier} below). The data in Fig.~\ref{fig:rnu}(c) are consistent with such a regime, although numerical limitations prevent us from obtaining results at smaller $c$, which would allow this hypothesis to be tested further. In the regime $\nu>1$, the function $r(\nu)$ increases smoothly, finally saturating in the state with all up spins as $\nu\to\infty$ (data not shown). The quasiequilibrium argument above can also be applied to individual configurations, not just averages over configurations. It then says that while a spin at site $i-1$ is up ($n_{i-1}=1$), it causes the escape rate of the configuration, $r(\mathcal{C})=\sum_j r_j$ (or more precisely its contribution from site $i$), to be higher by $2c(1-c)-2c^2(1-c) = 2c(1-c)^2 \approx 2c$ than the equilibrium average of $2c^2(1-c)$ per site. Conversely, a down-spin lowers the escape rate, changing it by $-2c^2(1-c)\approx -2c^2$. \subsection{Spatial correlations} \label{sec:cx_pd} We now turn to spatial structure in the $\nu$-ensemble. Our overall aim is to arrive at a representation of the effective potential $\Delta V_\mathcal{C}$, but we begin by considering simple measures of order, to gain an overview of the structure of the system. In Fig.~\ref{fig:gr-pd}(a), we show the two-point correlation function $C(x)$ defined in (\ref{equ:cx}). For $\nu>0$, up spins appear to `repel' each other: the probability of finding two up-spins close together is suppressed, with respect to the value for independently fluctuating spins found at equilibrium. For $\nu\gtrsim 10^{-3}$, one observes oscillations in $C(x)$: just beyond the range of the short-ranged `repulsive' correlations is a `nearest neighbour peak' where up spins are more likely to be found. \begin{figure} \hfill \includegraphics[width=5cm]{fig2a-new.eps} \includegraphics[width=5cm]{fig2b-new.eps} \includegraphics[width=5cm]{fig2c-new.eps} \caption{ Spatial structure in the East model for $c=0.1$. (a)~Correlation function $C(x)$.
(b)~Distribution of domain sizes $p(d)$. (c)~Plot of $p(d)/p^0(d)$ for $\nu=10^{-5}$, where $p^0(d)$ is the equilibrium distribution of domain sizes. The dashed line shows the prediction of linear response theory (LR), obtained via the fluctuation formula (\ref{equ:pt1}), by numerical evaluation of the relevant equilibrium correlation function. } \label{fig:gr-pd} \end{figure} The correlations that are apparent in $C(x)$ remain quite weak as $\nu$ increases, in stark contrast to the large changes in $r(\nu)$ and $\chi_R(\nu)$ shown in Fig.~\ref{fig:rnu}. A more revealing measurement of the system's structure in the $\nu$-ensemble is to consider the distribution of domains of down spins $p(d)$ as defined in (\ref{equ:def-pd}). Figs.~\ref{fig:gr-pd}(b,c) show that there is considerable structure in the tails of $p(d)$. From a physical point of view, the key aspects of $p(d)$ are that the distribution narrows as $\nu$ increases, with large domains being strongly suppressed, while small domains (for example $d=1$ and $d=2$) are only weakly affected. The probability associated with the larger domains is transferred to intermediate domain lengths which become increasingly common, eventually leading to a peak in $p(d)$ at an emergent length scale $d^*>1$. Since the relaxation times for the largest domains are longest, (\ref{equ:chi-rr}) indicates that the suppression of large domains is directly linked with the suppression of the susceptibility $\chi_R(\nu)$ as $\nu$ is increased [recall Fig.~\ref{fig:rnu} and Eq.~(\ref{equ:chi-rr})]. We also note in passing that the arguments leading to the quasiequilibrium condition (\ref{equ:qe-pred}) predict $p(d=1)=c$, which holds quite accurately in Fig.~\ref{fig:gr-pd}(b). In the following Sections, we will concentrate on $p(d)$ as an observable that reveals the dominant correlations within the biased steady state. \subsection{Scaling behaviour} \begin{figure} \hspace{2cm} \includegraphics[width=8cm]{fig3-new.eps} \caption{Domain distribution $p(d)$, for $\nu=c^2$ and varying $c$. This distribution is peaked around an emergent length scale and the peak becomes sharper as $c$ is reduced, indicating that this length scale can be associated with the limit of small-$c$, given $\nu=c^2$. In this limit, scaling arguments (see Sec.~\ref{sec:hier} below) predict that $p(d) = O(c)$ for $d\leq 4$ while $p(d)$ has a positive limit for $d\geq 5$. This is consistent with the data, although the system is still quite far from the small-$c$ limit. } \label{fig:sc2} \end{figure} It is well-known that length and time scales are intrinsically connected in the East model, both at equilibrium and in the aging behaviour~\cite{Sollich-Evans}. Indeed, it is clear from (\ref{equ:tau-ell}) that the model obeys scaling relations in the limit $c\to0$, where both length and time scales diverge. In the presence of the bias $\nu$, it will be useful to consider limits where both $c$ and $\nu$ go to zero together: we typically fix $\nu=c^b$ and then take $c\to0$. Fig.~\ref{fig:sc2} shows numerical results with $\nu=c^2$, as $c$ is varied. One sees that $p(d)$ develops a peak at an emergent length scale, and that this peak becomes increasingly well-defined as $c$ is reduced. In the next Section, we use a perturbative scheme to analyse this kind of behaviour in more detail. This leads to a physical picture that we outline in Section~\ref{sec:hier}, where we identify the limit of $c\to0$ at $\nu=c^b$ with a sharply-defined $b$-dependent length scale.
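For small systems, the structural observables of Section~\ref{sec:def-correl} can be evaluated directly from the steady-state weights $p_\mathcal{C}$ obtained by exact diagonalisation; for the larger systems shown in the figures, the same averages are instead accumulated over sampled trajectories. The following is a minimal illustrative sketch (not the analysis code used here), which reuses the configuration ordering of the diagonalisation sketch above:

\begin{verbatim}
import numpy as np
from itertools import product

# Minimal sketch: evaluate C(x) and p(d), Eqs. (cx) and (def-pd), from exact
# steady-state weights p_C on a small periodic chain.

def structure_from_weights(p, N):
    C_list = np.array(list(product([0, 1], repeat=N)))  # same ordering as above
    rho = float(np.dot(p, C_list[:, 0]))                # <n_i>_nu (translation invariant)
    Cx = np.array([float(np.dot(p, C_list[:, 0] * C_list[:, x])) - rho**2
                   for x in range(N)])
    pd = []
    for d in range(1, N):        # up spin at 0, down spins at 1..d-1, up spin at d
        mask = (C_list[:, 0] == 1) & (C_list[:, d] == 1)
        if d > 1:
            mask &= (C_list[:, 1:d] == 0).all(axis=1)
        pd.append(float(np.dot(p, mask)) / rho)
    return rho, Cx, np.array(pd)
\end{verbatim}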
\section{Linear response theory} \label{sec:pert} At first order in $\nu$, we can obtain a formula for the effective potential $\Delta V_\mathcal{C}$ using linear response theory (perturbation theory about the equilibrium state). Consider a one-time quantity $f$, and its average $\langle f \rangle_\nu$. For small $\nu$, Eq.~(\ref{equ:nu-ens}) gives \begin{equation} \langle f \rangle_\nu=\langle f \rangle_{0} + \nu \langle \delta f \, \delta R \rangle_0 + O(\nu^2) . \label{equ:LR} \end{equation} We note that $\langle \delta f \,\delta R \rangle_0 = \langle f \,\delta R \rangle_0$: it is sometimes convenient to use this latter form in the following. In addition, the first-order term in (\ref{equ:LR}) may be written as \begin{equation} \delta \langle f \rangle = \nu \sum_j \int_0^{t_\mathrm{obs}} \!dt\, \langle \delta f(t')\, \delta r_j(t) \rangle_0. \label{equ:pt1} \end{equation} where we used (\ref{equ:def-ri}), and we evaluate $f$ at $t'=t_\mathrm{obs}/2$ to avoid transient regimes, as discussed above. For large enough $t_\mathrm{obs}$ (and using time-reversal symmetry at equilibrium), this may also be written as~\cite{kcm-transition} \begin{equation} \delta \langle f \rangle = 2\nu \sum_j \int_0^{\infty} \!dt\, \langle \delta f(0) \, \delta r_j(t) \rangle_0. \label{equ:pt_prop} \end{equation} To obtain $\Delta V_\mathcal{C}$, we take $f=e_\mathcal{C}$ to be an indicator function, which has a value of unity if the system is in configuration $\mathcal{C}$, and zero otherwise. One finds \begin{eqnarray} \Delta V_\mathcal{C} & = -\frac{\delta \langle e_{\mathcal{C}} \rangle}{ \langle e_{\mathcal{C}}\rangle_0 } + O(\nu^2) \nonumber \\ & = - 2\nu \sum_j \int_0^{\infty} \!dt\, \frac{ \langle \delta r_j(t) e_{\mathcal{C}}(0) \rangle_0 }{ \langle e_{\mathcal{C}}\rangle_0 } + O(\nu^2) . \label{equ:eCC} \end{eqnarray} We define \begin{equation} R_\mathcal{C} \equiv \frac{1}{\langle e_{\mathcal{C}}\rangle_0 } \sum_j \int_0^{\infty} \!dt\, \langle \delta r_j(t) e_{\mathcal{C}}(0) \rangle_0 \end{equation} as the propensity~\cite{propensity} for activity $R$ associated with configuration $\mathcal{C}$: this may be obtained by averaging the observable $R$ over trajectories with initial condition $\mathcal{C}$. Hence, the effective potential of configuration $\mathcal{C}$ in the presence of the bias $\nu$ is given, to leading order, by its propensity: \begin{equation} \Delta V_\mathcal{C} = -2\nu R_{\mathcal{C}} + O(\nu^2) . \label{equ:dV-pert} \end{equation} Hence, at this order, all effective interactions may be obtained numerically by the relatively simple procedure of calculating propensities. However, this does not address the central challenge associated with characterising the effective interactions. After all, if the system size is $N$, then the propensity $R_\mathcal{C}$ is a set of $2^N$ numbers: one must still address how the dependence of $R_\mathcal{C}$ on the structure of the system can be represented in a useful way. This will be the main strength of the variational approach in Section~\ref{sec:var}, below. \subsection{Enhancement of the density $\langle n_i\rangle_\nu$} \label{sec:pert-rho} In the remainder of this section, we use (\ref{equ:pt1}) to investigate the effect of a small bias $\nu$ on the structure of the system. We begin with the response of the mean density of up-spins: $\delta \langle n_i\rangle$. 
We recall from (\ref{equ:ABzero}) that equilibrium two-time correlation functions vanish unless they involve observables on overlapping regions of the chain, so that \begin{equation} \delta \langle n_i \rangle = 2\nu \int_0^\infty\!\mathrm{d}t\, \langle \delta n_i(0) [ \delta r_{i+1}(t) + \delta r_{i}(t) ] \rangle_0 . \label{equ:n-pert} \end{equation} The dominant contribution to this response appears because $r_{i+1}>0$ if and only if $n_i=1$: physically, this is the statement that if spin $i$ is up then spin $i+1$ is able to flip. The correlation function in the expression for $\delta \langle n_i \rangle$ that corresponds to this effect is \begin{eqnarray} \langle \delta n_i(0) \delta r_{i+1}(t) \rangle_0 & = \langle \delta n_i(0) [ (1-c)n_i(t) n_{i+1}(t) + c n_i(t) \overline{n}_{i+1}(t)] \rangle_0 \nonumber \\ & \approx 2c(1-c) G_1^0(t) , \label{equ:nr} \end{eqnarray} where the single-site autocorrelation function $G_1(t)=\langle \delta n_i(t')\, \delta n_{i}(t'+t) \rangle_\nu$ and the superscript $0$ indicates that we evaluate it at equilibrium ($\nu=0$). The approximate equality holds if $\langle n_i(0) n_i(t) n_{i+1}(t)\rangle \approx \langle n_i(0) n_i(t) \rangle \langle n_{i+1}(t)\rangle$, which is to be expected, based on quasiequilibrium arguments along the lines of those in Section~\ref{sec:qe-simple}. The function $G_1^0(t)$ decays from $c(1-c)$ to 0 on the time scale $\tau_0$. We approximate the time-integral in Eq.~\ref{equ:n-pert} by multiplying its maximal value by this relaxation time, leading to $\delta \langle n_i \rangle \simeq 4\nu c^2\tau_0$, which diverges as $c\to0$ (recall that $\tau_0$ diverges faster than any power of $1/c$). Thus, the response of the density $\rho$ to the field $\nu$ diverges very quickly as $c\to0$. The strong quasiequilibrium correlation between $\rho$ and $R$ means that $R$ also responds very strongly, consistent with the large value of $\chi_R(0)$ shown in Fig.~\ref{fig:rnu}. \subsection{Spatial structure in the biased state} \label{sec:rel_enh} Given that the up-spin density $\langle n_i\rangle$ responds strongly to the bias $\nu$ already within the linear response regime, the next step is to consider the spatial correlations that accompany this increase. \begin{figure} \hspace{2.5cm}\includegraphics[width=7cm]{fig4.eps} \caption{ Sketch illustrating the configurations $\mathcal{C}$ and $\mathcal{C}'$ discussed in Section~\ref{sec:rel_enh}. Black bars indicate up spins ($n_i=1$). At low temperatures, the hierarchy of energy scales in the East model means that $\mathcal{C}$ will almost certainly relax to $\mathcal{C}'$ on a time scale $\tau_3$ given by (\ref{equ:tau-ell}); this process is much faster than any other local relaxation mechanism. The propensity $R_\mathcal{C}$ differs from $R_{\mathcal{C}'}$ primarily due to the facilitated site that is marked with a $\times$. } \label{fig:d-sketch} \end{figure} We first consider two configurations $\mathcal{C}$ and $\mathcal{C}'$, with $\mathcal{C}$ having one more up spin than $\mathcal{C}'$, as shown in Fig.~\ref{fig:d-sketch}. On increasing $\nu$, we expect the probability of $\mathcal{C}$ to be enhanced with respect to $\mathcal{C}'$, since the density of up spins is increasing. To obtain the enhancement of $\mathcal{C}$ with respect to $\mathcal{C}'$, it is sufficient to estimate the propensity difference $R_\mathcal{C} - R_{\mathcal{C}'}$. 
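Propensity differences of this kind can also be estimated numerically, by averaging the time-integrated escape rate (minus its equilibrium mean) over unbiased trajectories launched from the chosen initial configuration, and then comparing $R_\mathcal{C}$ with $R_{\mathcal{C}'}$. The following is a minimal sketch with illustrative parameters, using the same simulation step as the sketch in Section~\ref{sec:large-dev} (the truncation time must exceed the relevant relaxation time, and using common random numbers for the two initial conditions would reduce the variance of the difference):

\begin{verbatim}
import numpy as np

# Minimal sketch: estimate the propensity R_C of an initial configuration n0
# by averaging the subtracted, time-integrated escape rate over unbiased
# trajectories started from n0. Parameters are illustrative.

def propensity(n0, c=0.1, t_max=1e3, samples=200, seed=1):
    rng = np.random.default_rng(seed)
    N = len(n0)
    r_eq = 2.0 * c * c * (1.0 - c) * N        # equilibrium mean of r(C)
    total = 0.0
    for _ in range(samples):
        n = np.array(n0, dtype=int)
        t = 0.0
        while t < t_max:
            rates = np.where(n == 1, 1.0 - c, c) * (np.roll(n, 1) == 1)
            r_tot = rates.sum()
            if r_tot == 0.0:                  # frozen all-down state: nothing moves
                break
            dt = rng.exponential(1.0 / r_tot)
            # truncating at t_max assumes the correlation has decayed by then
            total += (r_tot - r_eq) * min(dt, t_max - t)
            t += dt
            if t >= t_max:
                break
            i = rng.choice(N, p=rates / r_tot)
            n[i] = 1 - n[i]
    return total / samples
\end{verbatim}

For small $c$, however, the difference $R_\mathcal{C} - R_{\mathcal{C}'}$ can be estimated analytically, as follows.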
We focus on the domains in the system, defined as in Sec.~\ref{sec:cx_pd} (each up spin starts a new domain, and the domain length is the distance to the next up spin to the right). As in Fig.~\ref{fig:d-sketch}, let the domains in the region of interest of $\mathcal{C}'$ be $(\dots,\ell,m,\dots)$ and those in $\mathcal{C}$ be $(\dots,\ell,d,m-d,\dots)$. The two configurations coincide exactly in regions indicated by $(\dots)$. We further assume that $\alpha_m, \alpha_\ell > \alpha_d$, where the barrier height $\alpha_\ell$ was defined in (\ref{equ:alpha}). Then, the hierarchical relaxation in the East model means that as $c\to0$, the local time evolution of $\mathcal{C}$ and $\mathcal{C}'$ is deterministic, in the sense that $\mathcal{C}$ relaxes to $\mathcal{C}'$, on a time scale $\tau_d$, given by (\ref{equ:tau-ell}). (For small $c$, the probability that $\mathcal{C}$ relaxes to some other local structure is vanishingly small, due to the separation of time scales in the problem.) After the time lag $\tau_d$, the two configurations behave the same, so all contributions to $R_\mathcal{C}-R_{\mathcal{C}'}$ come from times smaller than $\tau_d$. The dominant contribution to $R_\mathcal{C} - R_{\mathcal{C}'}$ comes from the spin marked $\times$ in Fig.~\ref{fig:d-sketch}. This spin is facilitated throughout the time $\tau_d$ so its contribution to $R_\mathcal{C}$ is approximately $2c \tau_d$. (This may be shown using an analysis similar to that leading to (\ref{equ:nr}) above, or the quasiequilibrium argument explained at the end of Sec.~\ref{sec:qe-simple}.) Thus, the enhancement of $\mathcal{C}$ with respect to $\mathcal{C}'$ is determined by \begin{equation} \Delta V_{\mathcal{C}'} - \Delta V_{\mathcal{C}} = 2\nu (R_\mathcal{C} - R_{\mathcal{C}'}) + O(\nu^2) \approx 4\nu c\tau_d + O(\nu^2), \label{equ:pc-prop} \end{equation} where the approximate equality holds for small $c$ and $2\leq d\lesssim1/c$. (For $d=1$, a similar argument shows that the propensity difference is of order unity: here the dominant contribution comes from the spin at distance $d=1$ itself, which has a flip rate of $1-c\approx 1$, rather than its right neighbour.) The key point here is that the difference in effective potential in (\ref{equ:pc-prop}) depends very strongly on the position of the extra up spin in $\mathcal{C}$. If $d=1$ and the extra up spin is adjacent to an existing one, the difference in effective potential between $\mathcal{C}$ and $\mathcal{C}'$ is $\nu\cdot O(1)$, which is small on the natural scale of $\nu$. But if (for example) $d=1+2^b$ then the enhancement diverges as $\nu\cdot O(c^{-b})$: configurations where the extra up spin is far from any existing spin are very strongly enhanced if $b>1$. This strong dependence of $\Delta V_\mathcal{C}$ on the relative positions of the up spins in $\mathcal{C}$ is the mechanism for the repulsive correlations in $C(x)$ shown in Fig.~\ref{fig:gr-pd}. Configurations where the spins are spaced out have large propensities for activity $R$, since the up spins increase activity, and widely-spaced up spins persist for a longer time within the system. Hence, the effect of the bias $\nu$ is to favour configurations $\mathcal{C}$ with long-lived active regions, since these give the largest contributions to $R_\mathcal{C}$. \subsection{Response of $p(d)$ to the bias $\nu$} \label{sec:pd_enh} To further elucidate this effect, we show how $\nu$ affects the domain structure in the system, by calculating the response of $p(d)$ at leading order in $\nu$.
Recall that $p(d)$ can be written as in Eq.~(\ref{equ:def-pd}) above, in terms of the observable $D_{i,i+r}$ defined in (\ref{equ:UD}). The linear response relation~(\ref{equ:LR}) then gives for the enhancement of $p(d)$ at first order in $\nu$ \begin{eqnarray} \frac{p(d)}{p^0(d)} &= 1 + 2\nu \left\langle \left( \frac{n_i D_{i+1,i+d-1} n_{i+d}}{\langle n_i D_{i+1,i+d-1} n_{i+d}\rangle_0} - \frac{n_i}{\langle n_i \rangle_0} \right) \delta R \right\rangle_0 + O(\nu^2) \label{equ:pdpert1} \end{eqnarray} The right hand side of (\ref{equ:pdpert1}) is straightforward to evaluate numerically: see Fig.~\ref{fig:gr-pd}(c). For small $c$, the right hand side of (\ref{equ:pdpert1}) may be estimated following the discussion of Sec.~\ref{sec:pert-rho}. The first term in the average in (\ref{equ:pdpert1}) has contributions of the form \begin{equation} \frac{1}{\langle n_i D_{i+1,i+d-1} n_{i+d}\rangle_0} \int \mathrm{d}t\, \langle [ n_i(t') D_{i+1,i+d-1}(t') n_{i+d}(t') ] \delta r_j(t) \rangle_0, \label{equ:nifr} \end{equation} for which the largest contributions come from $j=i+1$ and $j=i+d+1$: these sites are adjacent to the spins that are up at time $t'=t_\mathrm{obs}/2$ and hence facilitated. If $j=i+1$, Eq.~(\ref{equ:nifr}) gives $\approx 2c\tau_0$ as in Sec.~\ref{sec:pert-rho}; the amplitude $2c$ comes from the quasiequilibration rule discussed in Sec.~\ref{sec:qe-simple}. However, there is an analogous contribution from the $\langle n_i \delta R\rangle_0$ term in (\ref{equ:pdpert1}) which exactly cancels this effect. Evaluating (\ref{equ:nifr}) when $j=i+d+1$, one obtains $\approx 2c\tau_d$, which is an alternative derivation of the enhancement considered in Sec.~\ref{sec:rel_enh}. In addition, there are contributions to (\ref{equ:pdpert1}) of the form of (\ref{equ:nifr}), with sites $j$ between $i$ and $i+d$. These spins are down and unfacilitated at time $t=0$, and the typical time scale for spin $i+a$ to become facilitated is $\tau_a$. Such a spin therefore contributes $-\tau_a\langle r \rangle = -2c^2\tau_a$ to (\ref{equ:pdpert1}). This contribution is smaller than the contribution of site $i+d+1$, but if $d$ is large then there are many down spins between $i$ and $i+d$, and these contributions become significant. If we use the coarse (over-)estimate $\tau_a \approx \tau_d$, the total contribution from down-spins is $-2(d-1)c^2\tau_d \approx (-cd)(2c\tau_d)$. Since $\tau_d\approx (cd)\tau_0$, this negative contribution scales with $d^2$ and thus dominates for large $d$ over the positive contribution from the up-spin at $j=i+d+1$, $2c\tau_d$, which scales linearly with $d$. A more refined estimate accounting for the $a$-dependence of the contributions from the internal down-spins at $i+a$ only changes the prefactor of the leading $d^2$-dependence of the negative linear response contribution. Combining all these results, and taking $c\to0$, we expect \begin{equation} \frac{p(d)}{p^0(d)} \simeq 1 + A_d \nu/c^{\alpha_d-1}+ O(\nu^2), \qquad 2\leq d \lesssim 1/c; \label{equ:pd-pert-result1} \end{equation} For domains of size $1$, one has (by arguments analogous to those in Sec.~\ref{sec:rel_enh}) \begin{equation} \frac{p(d=1)}{p^0(d=1)} \simeq 1+A_1\nu + O(\nu^2), \label{equ:pd-pert-result-qe} \end{equation} and for large domains \begin{eqnarray} \frac{p(d)}{p^0(d)} &\simeq 1+4\nu (1-cd)c\tau_d + O(\nu^2) \nonumber \\ & \simeq 1-A_d\nu\tau_0c^2d(cd-1)+ O(\nu^2), \quad d \gtrsim 1/c. \label{equ:pd-pert-result2} \end{eqnarray} Here, all the $A_d=O(1)$ as $c\to0$.
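To make the scaling content of (\ref{equ:pd-pert-result1}-\ref{equ:pd-pert-result2}) concrete, the crossover form $p(d)/p^0(d)-1\approx 4\nu(1-cd)c\tau_d$ can be evaluated directly. The following minimal Python sketch is illustrative only: it sets the $O(1)$ amplitudes $A_d$ to unity and assumes the explicit barrier form $\alpha_d=\lceil\log_2 d\rceil$ (so that $\tau_d\sim c^{-\alpha_d}$), which is consistent with the scalings quoted above but is an assumption of the sketch rather than a result derived here.
\begin{verbatim}
import numpy as np

def alpha(d):
    # Barrier exponent; the explicit form ceil(log2 d) is an assumption of
    # this sketch, consistent with the scaling tau_d ~ c^{-alpha_d}.
    return int(np.ceil(np.log2(d)))

def relative_response(d, c, nu):
    # Leading-order estimate of p(d)/p0(d) - 1 for d >= 2, using the
    # crossover form 4*nu*(1 - c*d)*c*tau_d with the amplitude set to one.
    tau_d = c ** (-alpha(d))
    return 4.0 * nu * (1.0 - c * d) * c * tau_d

if __name__ == "__main__":
    c, nu = 0.1, 1.0e-4
    for d in (2, 3, 5, 9, 17, 33):
        print(d, alpha(d), relative_response(d, c, nu))
\end{verbatim}
For $c=0.1$ and $\nu=10^{-4}$ this already shows the very strong enhancement of intermediate domain sizes and the change of sign near $d\approx 1/c$.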
We observe that for large domain sizes $d \gg 1/c$, the linear response result diverges with $d$, so the result is necessarily applicable only in a range of $\nu$ that becomes vanishingly small as $d\to\infty$. However, such large domains are extremely rare in any case and the effect of $\nu>0$ is to \emph{suppress} them very strongly (recall Fig.~\ref{fig:gr-pd}). In this case, the breakdown of perturbation theory in the calculation of $p(d)$ does not lead to any apparent non-perturbative effect in observables like $r(\nu)$ or $C(x)$. On the other hand, if we bias to lower than average activity by taking $\nu<0$, large domains are \emph{enhanced} non-perturbatively as $d\to\infty$: this effect drives the phase transition into the inactive state~\cite{kcm-transition}, with large domains predominating for all $\nu<0$. \section{Hierarchy of responses at small $c$, and link to aging} \label{sec:hier} \begin{figure} \hspace{2cm} \includegraphics[width=11cm]{fig5-new.eps} \caption{ Sketch showing how the density of up spins $\rho = \langle n_i \rangle_\nu$ depends on $\nu$, for very small $c$. Both axes are logarithmic. The solid line shows the limiting behaviour as $c\to0$, taken at fixed $(\ln\nu)/(\ln c)$, while the dashed line shows the expected behaviour for a system with small positive $c$. The effect of the bias $\nu$ becomes significant when $\nu \approx \tau_0^{-1}$, the inverse bulk relaxation time. As $\nu$ is increased further towards unity, the system eventually responds in a sequence of steps. We expect a weak dependence on $\nu$ whenever $c^{b} \ll \nu \ll c^{b-1}$ for integer $b$, leading to plateaus in $\rho$. Within each plateau ($b\geq1$), our conjecture is that the spacing between up spins converges to $1+2^b$ as $c\to0$, so $\rho\to\frac{1}{1+2^b}$. For large $b$, the plateaux are then bounded by $\rho=2^{-b-1}$ and $\rho=2^b$, corresponding to power-law behaviour $\rho\sim 2^{-(\ln\nu)/(\ln c)} \sim \nu^{T\ln 2}$ (dashed lines). From the quasiequilibrium argument, one expects also $r(\nu) \approx 2c \langle n_i \rangle_\nu = 2c\rho(\nu)$: when comparing this prediction with Fig.~\ref{fig:rnu}, we emphasise that those numerical results are still rather far from the small-$c$ limit. } \label{fig:plat} \end{figure} The results of Section~\ref{sec:pert} can be summarised by (\ref{equ:pd-pert-result1}-\ref{equ:pd-pert-result2}), which encode a hierarchy of responses to the bias $\nu$. When $c$ is small, the linear responses for different $d$ scale as $\nu/c^{\alpha_d-1}$. Estimating the range over which this linear response prediction is valid is not a trivial task, because the responses are divergent for large $d$, consistent with the phase transition to an inactive state as $\nu$ becomes negative. However, based on the numerical results of Sec.~\ref{sec:num} and the linear response calculation, we can formulate a picture that captures the main features of the responses to the bias, both linear and nonlinear. The main idea is a generalisation of the quasiequilibrium relation discussed in Section~\ref{sec:qe-simple}. That argument indicates that $p(d=1)\approx c$ as long as $\nu\ll1$, regardless of whether $\nu$ is small compared with other powers of $c$. Put another way, nonlinear responses in (\ref{equ:pd-pert-result-qe}) do not set in until $\nu\simeq 1$. We are proposing here that similar quasiequilibrium conditions hold for larger $d$, and small $c$. 
In particular, for $\nu/c^{b-1} \ll 1$, domains of sizes $1\leq d \leq2^b$ are weakly affected by the bias, so that $p(d)=O(c)$ as $c\to0$. This corresponds to the assumption that the linear response prediction (\ref{equ:pd-pert-result1}) remains accurate until the relative linear response correction becomes of order unity. [More precisely, we expect that nonlinear corrections to $p(d)$ are at most of absolute size $O(c)$ in this regime, not $O(1)$.] On the other hand, the perturbative expansion indicates that larger domains ($d>2^b$) have much larger responses which must be nonlinear in nature: we expect that $p(d)$ has a non-zero limit for those domain sizes, so that the mean domain size (and hence $r(\nu)$ and $\langle \rho \rangle_\nu$) remain finite as $c\to0$. To illustrate this effect, recall Fig.~\ref{fig:sc2} which shows data for $\nu=c^2$, as $c$ is decreased. One sees that $p(d)$ for $d\leq 4$ decreases as $c$ is reduced, consistent with the expectation that it tends to zero as $c\to0$; on the other hand $p(d)$ has a non-zero limit for $d\geq 5$. We identify an emergent length scale $d^*=5$: domains smaller than $d^*$ have a vanishing probability in the relevant limit, domains of size close to $d^*$ are very likely, while larger domains have finite (typically smaller) probabilities. In general, if $\nu=c^b$ for some integer $b$, and we consider the limit $c\to0$, then one expects a similar situation with a length scale $d^*=1+2^b$. It is also useful to consider the case $c^b \ll \nu \ll c^{b-1}$ with $c\to0$. Then, one expects small domains ($d\leq 2^b$) to be quasi-equilibrated with $p(d)\simeq c$, while larger domains ($d>2^b$) should have $p(d)=O(1)$, weakly dependent on $\nu$. In fact, our numerical results indicate that almost all domains will be of size $d=1+2^b$ in this limit, leading to a finite density of up spins $\langle n_i\rangle_\nu = 1/(1+2^b)$. This allows the system to maximise the escape rate $r$, subject to the quasiequilibrium constraint on smaller domains. The simplest case is the limit $c \ll \nu \ll 1$ where this analysis predicts $\langle n_i\rangle_\nu \approx \frac13$. This behaviour is consistent with Fig.~\ref{fig:rnu}(c), where $\rho(\nu) = \langle n_i\rangle_\nu$ increases quickly to a value close to $\frac13$ at $\nu\approx c =0.1$ before increasing more weakly for large $\nu$. The value of $c$ is not small enough to saturate the limit and establish a clear plateau in $r(\nu)$ but the numerical results are certainly consistent with the picture proposed here. Returning to the general argument, Fig.~\ref{fig:plat} summarizes the predicted hierarchy of responses in a sketch. The plateau structure in $\rho(\nu)$ and its power-law asymptote $\rho\sim \nu^{T\ln2}$ are intriguingly similar to the one observed in the out-of-equilibrium aging dynamics of the East model~\cite{Sollich-Evans}, with $\nu$ playing the role of the inverse age. A similar generalisation of quasiequilibrium to that described here can also be observed in equilibrium dynamics, through an analysis of metastable states~\cite{jack-patch13}. \section{Variational approaches} \label{sec:var} \newcommand{\tilde{u}}{\tilde{u}} \newcommand{\Delta\tilde{V}}{\Delta\tilde{V}} \newcommand{F^\mathrm{var}}{F^\mathrm{var}} \newcommand{p^\mathrm{var}}{p^\mathrm{var}} \newcommand{\Delta V^\mathrm{var}}{\Delta V^\mathrm{var}} To investigate the response of the system to $\nu$ beyond first order, we exploit a variational method, which relies on the time-reversal symmetry of the $\nu$-ensemble. 
The master operators $\mathbb{W}_K(s)$ and $\mathbb{W}_R(\nu)$ may be symmetrised, as described in Sec.~\ref{sec:large-dev}. Hence (see~\ref{app:block} and Refs.~\cite{kcm-transition,Jack-Sollich-PTP}), one may obtain the effective potential $\Delta V_\mathcal{C}$ by minimising the variational `free energy' (per site): \begin{equation} F(\Delta\tilde{V}) = -N^{-1} \frac{ \sum_{\mathcal{C},\mathcal{C}'} \mathrm{e}^{-\Delta\tilde{V}_{\mathcal{C}'}/2} [\mathbb{W}_R(\nu)]_{\mathcal{C}',\mathcal{C}} \mathrm{e}^{-\Delta\tilde{V}_{\mathcal{C}}/2} p^0_{\mathcal{C}}} { \sum_{\mathcal{C}} \mathrm{e}^{-\Delta\tilde{V}_{\mathcal{C}}} p^0_\mathcal{C} } , \label{equ:Fvar} \end{equation} where $\Delta\tilde{V}$ is a variational estimate of the effective potential, $[\mathbb{W}_R(\nu)]_{\mathcal{C}',\mathcal{C}}$ is a matrix element of the operator $\mathbb{W}_R(\nu)$, and $p^0_\mathcal{C}=\mathrm{e}^{-\beta\sum_i n_i}/(1+\mathrm{e}^{-\beta})^N$ is the equilibrium probability of configuration $\mathcal{C}$. On minimising $F(\Delta\tilde{V})$ over all the $\Delta\tilde{V}_\mathcal{C}$, the minimal value of $F$ is equal to the dynamical free energy $\psi_R(\nu)$, and $\Delta \tilde V_\mathcal{C}$ is equal to the effective potential $\Delta V_\mathcal{C}$. Hence, if a suitable exact parameterisation of $\Delta \tilde V_\mathcal{C}$ may be found, one may obtain the effective potential by minimising $F(\Delta\tilde{V})$. More typically, one makes an approximate parameterisation of the effective potential, and minimises $F$ with respect to the variational parameters. For a given parameterisation of $\Delta \tilde V_\mathcal{C}$ (``trial potential''), we denote the minimal value of $F$ by $F^\mathrm{var}$ and the corresponding estimate of $\Delta \tilde V_\mathcal{C}$ by $\Delta V^\mathrm{var}_\mathcal{C}$. We note in passing that an alternative to this variational approach would be to use a density-matrix renormalisation group method (see e.g.~\cite{gorissen09}), which is related to a variational search over matrix product states~\cite{schollwoeck11}. However, the advantage of (\ref{equ:Fvar}) is that it has a clear interpretation as a variational search in a given space of effective potentials $\Delta V$. \subsection{Trial potential functions} We first describe three trial potentials that we have investigated. \subsubsection{Block model.} \label{sec:block} A completely general trial potential should include all possible $m$-body interactions of all ranges. To approximate this, we consider interactions within `blocks' of size $B$. The potential includes $m$-body interactions up to $m=B$, with a maximal interaction range of $B-1$. It also permits a transfer-matrix representation of the probability distribution over configurations $\mathcal{C}$ in the $\nu$-ensemble. The idea is to consider a block of spins, ${\cal B}_{i}=(n_i,n_{i+1},\dots,n_{i+B-1})$, and that each possible block configuration has its own contribution to the trial potential. That is, if $e_{\cal B}({\cal B}_i)$ is an indicator function, equal to unity when block ${\cal B}_i$ has the specific configuration ${\cal B}$ and zero otherwise, then $\Delta\tilde{V}_\mathcal{C} = \sum_i \sum_{\cal B} z_{\cal B} e_{\cal B}({\cal B}_i)$, where the $z_{\cal B}$ are variational parameters that determine the block probabilities. There are $2^B$ trial weights $z_{\cal B}$, but these in fact provide an overcomplete basis for the possible interactions in $\Delta\tilde{V}$, because the numbers of blocks in the $2^B$ different states are not independent.
For example, if $N_{010}=\sum_i \overline{n}_i n_{i+1} \overline{n}_{i+2}$ is the number of blocks with configuration `$010$' (and similarly for other block configurations) then one may use $\overline{n}=1-n$ to write $N_{010}=\sum_i[ n_{i+1}\overline{n}_{i+2} - n_{i} n_{i+1}\overline{n}_{i+2} ] = \sum_i[ n_{i+1}\overline{n}_{i+2} n_{i+3} + n_{i+1}\overline{n}_{i+2} \overline{n}_{i+3} - n_{i} n_{i+1}\overline{n}_{i+2} ] = N_{101} + N_{100} - N_{110}$. Here the last equality involved a relabelling within the summation, which relies on the periodic boundaries of the system. In this way, the numbers of all blocks of length $B$ that begin with a down spin (`0') can be expressed exactly in terms of numbers of blocks that begin with an up spin (`1'). There are $2^{B-1}$ such numbers, and it is then not difficult to see that one can equivalently specify the numbers of all blocks of length $1$ to $B$ that start {\em and} end with a 1. (E.g.\ for $B=2$ one can use $N_{1}=N_{11}+N_{10}$ and $N_{11}$ instead of $N_{10}$ and $N_{11}$.) Using this latter representation, a general trial potential for the block model with block length $B=2$ can be written as \begin{equation} \Delta\tilde{V}_\mathcal{C} = \sum_{i} h n_i + J n_i n_{i+1} , \label{equ:ut-block} \end{equation} which includes a field $h$ and a two-body Ising-like coupling as variational parameters. For higher $B$ one has in addition all interaction terms up to range $B-1$ and involving up to $B$ spins. In~\ref{app:block}, we show how the variational free energy in (\ref{equ:Fvar}) may be calculated for this model, given the weights $z_{\cal B}$. The method relies on a transfer matrix representation of $\mathrm{e}^{-\Delta\tilde{V}_\mathcal{C}}$. This allows $F$ to be minimised (numerically) over the $z_{\cal B}$, leading to an estimate $\Delta\tilde{V}^\mathrm{var}$ for the effective interactions. \subsubsection{$p_d$ model.} \label{sec:var-pd} The block model is a general variational ansatz, but the number of variational parameters increases exponentially with the maximal interaction range $B-1$. In the following, this will limit our numerical results to $B\leq 6$. As we have already discussed, Fig.~\ref{fig:gr-pd} indicates that the true effective potential $\Delta V_\mathcal{C}$ includes rather long-ranged interactions, which would require much larger values of $B$ to capture them. For this reason, we have designed trial potentials in order to account for the particular structures that occur in the East model. Instead of including all interactions up to range $B$, these potentials include a specific subset of long-ranged interactions. In particular, we concentrate on the `domains' discussed above and construct a $\Delta \tilde V_\mathcal{C}$ that depends only on the sizes of these domains. Thus, this trial potential will be accurate if all information about structure in the $\nu$-ensemble is contained in the distribution $p(d)$ shown in Fig.~\ref{fig:gr-pd}. Formally, we include specific $m$-body interactions of all ranges \begin{equation} \Delta\tilde{V}_\mathcal{C} = \sum_{i,d} z_d n_i D_{i+1,i+d-1} n_{i+d} , \label{equ:ut-pd} \end{equation} where the $z_d$ are variational parameters associated with the possible domain sizes. Given these weight factors, one may derive the distribution of domain sizes $p(d)$ associated with this variational ansatz. In fact, it is convenient to work directly with this distribution, which we denote by $p_d$ to distinguish its status as a variational parameter from a measured $p(d)$. 
The variational free energy is \begin{equation} F_d =\frac{1}{\sum_d d p_d} \Big\{ (1-\nu)[c + (1-2c)p_1] - 2\sqrt{c(1-c)} \sum_{d\geq 2} \sqrt{p_1 p_{d-1} p_d} \Big\} . \label{equ:Fd} \end{equation} The derivation of this free energy is discussed in~\ref{app:pd}: it is to be minimised subject to the normalisation constraint $\sum p_d = 1$. In principle the sums over $d$ in (\ref{equ:Fd}) run over all $d\geq1$ and $d\geq 2$ respectively, but for numerical work we truncate the sums at a cutoff $d^*$ by assuming $p_d=0$ for $d>d^*$: domains much larger than $1/c$ are extremely rare in the system so the results depend negligibly on $d^*$. \subsubsection{$p_{fe}$ model.} \label{sec:var-fe} \begin{figure} \hspace{2.5cm}\includegraphics[width=7cm]{fig6-new.eps} \caption{Sketch showing how the domains are identified in the $p_{fe}$ and $p_d$ models. The (unlabelled) domain of size $d=1$ in the $p_d$ representation is combined with the adjacent domain to form a composite domain with $(f,e)=(2,2)$ in the $p_{fe}$ model. } \label{fig:pfe-sketch} \end{figure} The $p_d$ model gives useful results, but we will find that it does not account accurately for the quasiequilibrium conditions described in Sec.~\ref{sec:hier}. This shortcoming limits its accuracy, so we discuss one systematic improvement to the $p_d$ model, which captures some features of this quasiequilibration. Recall that in the $p_d$ model, the system is divided into domains that start at each up spin, and each domain is assumed to be independent. Each domain then consists of a single up spin, followed by a block of $d-1$ down spins. An alternative and more general assumption is to start domains at each occurrence of the block `$01$'. The situation is illustrated in Fig.~\ref{fig:pfe-sketch}. Each domain consists of a block of up spins, followed by a block of down spins. The domain state is specified by two numbers $f,e$, so that there are $f$ (``full'') up spins and $e$ (``empty'') down spins. Fig.~\ref{fig:pfe-sketch} shows that the domains in this `$fe$'-representation are in general larger than those in the $p_d$-model representation. We then construct an effective potential on the assumption that the $fe$-domains are independent: \begin{equation} \Delta\tilde{V}_\mathcal{C} = \sum_{i,f,e} z_{fe} \overline{n}_{i-1} \cdot U_{i,i+f-1} \cdot D_{i+f,i+f+e-1} \cdot n_{i+f+e} , \end{equation} where the $z_{fe}$ are the variational parameters. We refer to this model as the `$p_{fe}$ model'. As with the $p_d$ model, it is more convenient to work with the probability distribution over the domains, which we denote by $p_{fe}$. One can show that the $p_d$ model corresponds to the special case $p_{fe} = p_1^{f-1} p_{e+1}$, which emphasises that the $p_{fe}$ model is a generalisation of the $p_d$ model. In particular, the $p_{fe}$ model allows the length $e$ of a down-block to depend on the number of adjacent up spins to its left. The derivation of the variational free energy for the $p_{fe}$ model is discussed in~\ref{app:pfe}: the result is \begin{eqnarray} F_{fe} = & \frac{1}{\sum_{fe}(f+e)p_{fe}} \cdot \Big\{ (1-\nu)\sum_{fe} p_{fe} [c + (f-1)(1-c)] \nonumber \\ & - 2\sqrt{c(1-c)} \Big[ \sum_{fe} \sqrt{p_{f,e+1}p_{f+1,e}} + \sum_{f,f',e} \sqrt{p_{f'1}p_{fe}p_{f+f'+1,e}} \Big] \Big\} . \label{equ:Ffe} \end{eqnarray} The minimisation is subject to the normalisation constraint $\sum_{f,e}p_{fe}=1$. Numerically we again model $p_{fe}$ explicitly only up to cutoffs $f^*$ and $e^*$.
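Once the cutoff is imposed, the minimisation of (\ref{equ:Fd}) [and similarly (\ref{equ:Ffe})] is a finite-dimensional constrained optimisation. The following minimal Python sketch illustrates one way to carry it out for the $p_d$ model; the softmax parameterisation used to enforce $\sum_d p_d=1$, the choice of optimiser, and the use of the unbiased domain-size distribution $p^0(d)=c(1-c)^{d-1}$ of the equilibrium product measure as a starting point are choices of the sketch, not of the analysis reported below.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def F_d(p, c, nu):
    # Variational free energy of the p_d model, Eq. (equ:Fd);
    # p[0] is p_1, p[1] is p_2, ... up to the cutoff d* = len(p).
    d = np.arange(1, len(p) + 1)
    bond = np.sum(np.sqrt(p[0] * p[:-1] * p[1:]))  # sum_{d>=2} sqrt(p_1 p_{d-1} p_d)
    num = (1.0 - nu) * (c + (1.0 - 2.0 * c) * p[0]) \
          - 2.0 * np.sqrt(c * (1.0 - c)) * bond
    return num / np.sum(d * p)

def minimise_Fd(c, nu, dstar=40):
    # Minimise F_d over {p_d >= 0, sum_d p_d = 1} via a softmax
    # parameterisation, starting from the equilibrium p0(d) = c(1-c)^(d-1).
    def objective(x):
        p = np.exp(x - x.max())
        return F_d(p / p.sum(), c, nu)
    x0 = np.log(c) + np.arange(dstar) * np.log(1.0 - c)
    res = minimize(objective, x0, method="L-BFGS-B", options={"maxiter": 10000})
    p = np.exp(res.x - res.x.max())
    return p / p.sum(), res.fun

if __name__ == "__main__":
    p, F = minimise_Fd(c=0.1, nu=0.01)
    print(F, np.sum(np.arange(1, p.size + 1) * p))
\end{verbatim}
The $p_{fe}$ model can be minimised in the same way, with the variational parameters now being the $p_{fe}$ up to the cutoffs $f^*$ and $e^*$.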
These cutoffs cannot be made too large because the number of variational parameters is now $f^* e^*$. To soften the impact of the cutoffs we therefore do not set $p_{fe}$ directly to zero beyond the cutoffs, but assume an exponential tail instead that is obtained by linear extrapolation of $\ln p_{fe}$. \subsection{Variational results} \begin{figure} \hfill \includegraphics[width=7cm]{fig7a-new.eps} \includegraphics[width=7cm]{fig7b-new.eps} \caption{ (a) Variational free energy, $F^\mathrm{var}$ at $c=0.1$, using different trial potentials. Since $F^\mathrm{var}$ varies over several orders of magnitude, we plot $-F^\mathrm{var} / f_0$, where $f_0= \nu r(0)$. One has $-F^\mathrm{var} / f_0\to1$ as $\nu\to 0$, so that $F^\mathrm{var} = -f_0 + O(\nu^2)$ at small bias. A larger number in this representation indicates a better trial potential: the $p_{fe}$ model and the $B=6$ block model perform significantly better than the (simpler) $p_d$ model. (b)~Variational estimates of $r(\nu)$ compared with the numerical results shown in Fig.~\ref{fig:rnu} (labelled ``exact''). The dotted (horizontal) line indicates $r(0)=2c^2(1-c)$. } \label{fig:rnuvar} \end{figure} We have used numerical minimisation to obtain results for the three trial potentials, at $c=0.1$. Fig.~\ref{fig:rnuvar} shows the values of $F^\mathrm{var}$ that we obtained, and the corresponding estimates for $r(\nu)$. These are compared with the numerical results shown in Fig.~\ref{fig:rnu}. In general, the $p_{fe}$ model and the $B=6$ block model seem to capture the data quite well, while the $p_d$ model gives less good agreement. It is also clear from $r(\nu)$ that the variational models perform best for larger $\nu$, with significant deviations for smaller $\nu$. \begin{figure*} \hfill \includegraphics[width=7cm]{fig8a-new.eps} \includegraphics[width=7cm]{fig8b-new.eps}\par \hfill \includegraphics[width=7cm]{fig8c-new.eps} \includegraphics[width=7cm]{fig8d-new.eps} \caption{(a-d) Comparison between the domain-size distributions $p(d)$ obtained numerically by TPS or exact diagonalisation (labelled ``exact'') with those obtained from (approximate) variational analyses. All results are at $c=0.1$, and for four different values of $\nu$ as shown in each plot. The variational approaches are most effective for larger $\nu$ when domains are typically short. In panel (d), the unbiased distribution $p^0(d)$ is shown as a dashed line. The inset shows the same results on a logarithmic scale, emphasising the differences in the tails of the distribution. For this value of $\nu$, the $p_d$ model and the $B=6$ model give similar results, and are both close to the unbiased distribution $p^0(d)$. (The legend is omitted: symbols are the same as in all other panels.)} \label{fig:pdvar} \end{figure*} Since the minimisation yields the effective interaction potential $\Delta V^\mathrm{var}_\mathcal{C}$, we are also able to calculate variational predictions for one-time quantities in the $\nu$-ensemble. As a stringent test of these variational distributions, Fig.~\ref{fig:pdvar} shows the estimates for $p(d)$ that we obtain from the variational treatment, compared with numerical results from Sec.~\ref{sec:num}. For the largest $\nu$ ($=0.63$), all three models describe the data quite accurately, although deviations are apparent for the $p_d$ model. Nearly all domains in this system are short, so one can expect a relatively simple effective interaction to accurately describe these data. 
For $\nu=0.1$, there is clear structure in the system, including a most probable domain size of $d=3$, which is captured quite accurately by both the $p_{fe}$ model and the $B=6$ block model, although not by the $p_d$ model. We recall from Sec.~\ref{sec:pert} that for $\nu\approx c$, one expects domains of lengths $d\leq 2$ to have probabilities of order $c$, while larger domains have probabilities of order unity. This is consistent with the most likely domain size of $d=3$. For smaller $\nu$, the structure in $p(d)$ becomes more complex, and even the $p_{fe}$ and $B=6$ models fail to accurately describe the effective interactions in the system. We note in particular that $\nu=0.01$ corresponds to $\nu=c^2$ for this case, in which case Sec.~\ref{sec:pert} predicts that $p(d)$ should be of order unity for $d\geq 5$, but of order $c$ for $d\leq 4$. It is apparent that the $p_{fe}$ and $B=6$ models fail to capture this aspect of the linear response to the system. Finally, we note that for very small $\nu\approx 10^{-4}$, these variational models significantly underestimate the suppression of large domains. For the block model, it is easily shown (see~\ref{app:pd-exp}) that $p(d)$ must decay exponentially for $d>B$, in contrast to the faster decrease found in our numerically exact results. It is clear from the numerical results in Figs.~\ref{fig:rnuvar} and \ref{fig:pdvar} that the $p_d$ model gives quite a crude description of the effective interactions in the system. However, this model can be studied analytically, with several useful results, which we summarize briefly to conclude this section. Looking at the linear response for small $\nu$, one finds for $p_1$ a positive relative correction of $O(\nu)$ in line with the notion of quasi-equilibrium for small $\nu$, while for all other $p_d$ the relative correction is $O(\nu/c)$. With increasing $d$ the correction becomes negative and its amplitude grows, so that the model captures at least qualitatively the large-$d$ divergence of the perturbative correction shown in Fig.~\ref{fig:gr-pd}(c). For nonzero $\nu$ one can show that $p_d$ must decay faster than exponentially, indicating the presence of long-ranged effective interactions that will make block models with fixed block lengths poor approximations. Finally one can look at the $c\to 0$ limit of the $p_d$ model at fixed small (but non-zero) $\nu$. One finds $p_1=O(c)$, consistent again with quasi-equilibrium, while $p_d=O(1)$ for $d\geq 2$. The predicted activity is $r(\nu)=c\cdot f_r(\nu)$ with $f_r$ of $O(1)$. Assuming that quasiequilibrium along the lines of (\ref{equ:qe-pred}) holds, one infers that $\langle \rho \rangle_\nu = f_\rho(\nu)$ with $f_\rho(\nu) \approx \frac12 f_r(\nu)$ of $O(1)$: the density of up-spins remains finite, in contrast to the equilibrium case $\nu=0$ where $\langle \rho\rangle_0=c$ vanishes as $c\to 0$. From the discussion of Sec.~\ref{sec:hier}, one would expect that $f_\rho(\nu\to0)=1/3$, so $f_r(\nu\to0) = 2/3$. The $p_d$ model predicts correctly that $f_r$ and $f_\rho$ are of order unity as $c\to0$, but gives a rather poor estimate of the shapes of the function, yielding e.g. $f_r(\nu)\sim \nu^{1/2}$ for small $\nu$. \subsection{Limitations of variational schemes: the complex hierarchical response to $\nu$} The results of Sec.~\ref{sec:pert} suggest a hierarchy of responses to the bias $\nu$, as discussed in Sec.~\ref{sec:hier}. 
The general idea is that the system remains quasiequilibrated on short length scales, while large length scales respond strongly to the bias. The linear response of a configuration to the bias is given by its propensity for activity, $R_\mathcal{C}$. We find that $R_\mathcal{C}$ is dominated by long time scales in the model, which are typically associated with large scale structures in the configuration $\mathcal{C}$. Fig.~\ref{fig:pdvar} shows data at $\nu=10^{-4}$ and $c=0.1$ which indicate that none of the variational models used here are successful in capturing the long-ranged correlations that appear in this perturbative regime. While the $B=6$ block model necessarily excludes long-ranged correlations, the failure of the $p_{fe}$ model indicates that domain sizes alone are not sufficient to predict the propensities $R_\mathcal{C}$, so that $\Delta\tilde{V}$ necessarily includes interactions between domains of different sizes. \begin{figure} \hfill \includegraphics[width=14cm]{fig9-new.eps} \caption{ (a) Configurations $\mathcal{C}_1$ and $\mathcal{C}_2$ that illustrate how the propensity depends on correlations between domains, and not just on their domain lengths. (See discussion in main text.) Vertical bars indicate up spins ($n_i=1$) as before. The $\times$ signs mark the sites whose contributions are most relevant when comparing the propensities of these configurations. (b) Configurations $\mathcal{C}_1'$ and $\mathcal{C}_2'$ indicate that while some relevant correlations between domains can be captured within the $p_{fe}$ model, there are still configurations that have different propensities, but equal probabilities within the $p_{fe}$ model. } \label{fig:var-fail} \end{figure} To see how structure among domains is important in determining their propensities (and hence their effective potentials), it is useful to consider the configurations $\mathcal{C}_1$ and $\mathcal{C}_2$ shown in Fig.~\ref{fig:var-fail}(a). Within the $p_d$ model, these configurations are equally likely. However, to estimate their propensities, one follows the analysis of Sec.~\ref{sec:rel_enh} and first assumes that the least long-lived up spin in each state relaxes quickly to 0. Then one considers the contributions to $R_{\mathcal{C}_1}$ and $R_{\mathcal{C}_2}$ from the spins marked $\times$. In $\mathcal{C}_1$, the relevant spin remains facilitated for an $O(c^{-2})$ time scale while in $\mathcal{C}_2$ it remains facilitated for an $O(c^{-1})$ time scale. The contributions to $R_{\mathcal{C}_1}$ and $R_{\mathcal{C}_2}$ are therefore $O(1/c)$ and $O(1)$ respectively: the enhancement of $\mathcal{C}_1$ in the presence of the bias is much stronger than that of $\mathcal{C}_2$. The $p_d$ model cannot capture this difference. However, within the $p_{fe}$ model, configurations $\mathcal{C}_1$ and $\mathcal{C}_2$ have independent weights, proportional to $p_{2,1} p_{1,e}$ and $p_{1,1} p_{2,e}$ respectively. Thus, the model accounts for their different enhancements in the presence of the bias. The argument that facilitated spins remain quasiequilibrated implies that $p_{f+1,e} \approx c p_{f,e+1}$, so (for example) $p_{2,1}\approx c p_{1,2}$. The numerical results obtained from the $p_{fe}$ model are broadly consistent with this result. This effect may be also rationalised in the superspin picture of Ref.~\cite{Sollich-Evans} if one removes facilitated up spins to leave (relatively) long-lived superspins. It is the spacing of the superspins that determines the propensity. 
While the $p_{fe}$ model performs better than the $p_d$ model for configurations $\mathcal{C}_1$ and $\mathcal{C}_2$, its main shortcoming may be understood in a similar way. One generalises the previous argument by multiplying all length scales by two and all time scales by $1/c$. The configurations $\mathcal{C}_1'$ and $\mathcal{C}_2'$ shown in Fig.~\ref{fig:var-fail}(b) have equal probability within the $p_{fe}$ model but the spin marked with $\times$ in $\mathcal{C}_1'$ remains facilitated for a time that is $O(c)$ shorter than the marked spin in $\mathcal{C}_2'$. Thus, the relative propensities of the configurations are different, but this effect is missed within the $p_{fe}$ model. Thus, the $p_{fe}$ model can distinguish configurations that respond to the bias at $O(\nu)$ from those that respond at $O(\nu/c)$, but it cannot distinguish those that respond at $O(\nu/c^2)$ or greater. A perfect trial potential would distinguish configurations that respond at $O(\nu/c^b)$, for all the possible values of $b$ discussed in Sec.~\ref{sec:hier}. However, the point here is that the $p_{fe}$ model specifically allows for $p_d$-domains with $d=1$ to be correlated with other domains to their right. All other correlations amongst domains are forbidden. But to resolve the difference in propensity between $\mathcal{C}_1'$ and $\mathcal{C}_2'$ in Fig.~\ref{fig:var-fail}, one requires specific correlations between domains with $d=2$ and other domains. And while including such correlations explicitly would allow characterisation of configurations which respond at $O(\nu/c^2)$, the procedure used to derive $\mathcal{C}_1'$ and $\mathcal{C}_2'$ from $\mathcal{C}_1$ and $\mathcal{C}_2$ can be repeated to obtain a $\mathcal{C}_1''$ and $\mathcal{C}_2''$, one of which will respond at $O(\nu/c^3)$. Making this distinction will rely on specific correlations among domains with $d=4$ and greater. One sees that accounting specifically for these increasingly complex correlations quickly becomes prohibitive. The $p_{fe}$-model serves as a useful indication of the physics at work and the kinds of effective interaction that are expected. We are exploring further improvements to this trial potential, but these are beyond the scope of this study. \section{Outlook} \label{sec:summ} To end this article, we discuss which of the features of the analysis here may be generalised to other glassy systems in the presence of biased activity. At the perturbative level, we showed that the response of a configuration depends only on its propensity: this result is general. The problem of finding the effective interactions for the linear response regime is therefore equivalent to finding a model that accurately describes the propensity of a configuration. In the East model, this requires consideration of structure on quite large length scales, and an accurate description requires identification of the long-lived superspins in the system, which is a difficult task. In the general case, we have shown that variational calculations can be useful in showing what effective interactions can reproduce the correlations found in biased states. However, the trial distributions must be informed by considerable physical insight to yield useful results. On the other hand, the hierarchy of time scales and the quasi-equilibrium features of the biased East model do simplify the description of the effective interactions. 
If the model is quasiequilibrated on short length scales, this means that effective interactions on those scales are weak and may be neglected. Recent work on atomistic model glass-formers~\cite{jack11-stable} and spin-glass models~\cite{jack-rom09} does indicate that time-scale separation in biased ensembles can be used to simplify the description of the biased states. This may well be a useful simplification to guide future studies. \ack We thank Fred van Wijland and Juan P. Garrahan for many useful discussions on large deviations, and emergent structure in ensembles of trajectories. RLJ was supported by the EPSRC through grant EP/I003797/1. \begin{appendix} \section{Calculations using variational trial potentials} \subsection{Block model} \label{app:block} In this section, we describe how the variational free energy in (\ref{equ:Fvar}) is calculated for the block model of Sec.~\ref{sec:block}. The block configuration ${\cal B}_i=(n_i,n_{i+1},\dots,n_{i+B-1})$ is a binary string of length $B$, and a configuration may be specified by its block configurations: $\mathcal{C} = ( \dots, {\cal B}_i, {\cal B}_{i+1}, \dots )$. The blocks are overlapping, so the specification is overcomplete: the final $B-1$ spins in ${\cal B}_i$ are equal to the first $B-1$ spins in ${\cal B}_{i+1}$, etc. \newcommand{\tilde p}{\tilde p} We write $\tilde p(\mathcal{C}) = \mathrm{e}^{-\Delta\tilde{V}_\mathcal{C}-\beta E_0(\mathcal{C})}$ where $E_0=\sum_i n_i$ is the energy of the East model, and we identify $\tilde p$ as the (unnormalised) trial probability distribution associated with the trial potential $\Delta\tilde{V}$. From (\ref{equ:ut-block}), one has $\mathrm{e}^{-\Delta\tilde{V}_\mathcal{C}} = \prod_i \mathrm{e}^{-z_{{\cal B}_i}}$. It is convenient to write $\tilde p = \prod_i M_{{\cal B}_{i},{\cal B}_{i+1}}$, where $M_{{\cal B}_i,{\cal B}_{i+1}}=\mathrm{e}^{-z_{{\cal B}_i}-\beta n_i}$, recalling that the last $B-1$ spins of ${\cal B}_i$ always coincide with the first $B-1$ spins of ${\cal B}_{i+1}$. One then generalises $M$ to a ``transfer matrix'' of size $2^B \times 2^B$, by setting $M_{{\cal B},{\cal B}'}=0$ if the last $B-1$ spins of ${\cal B}$ \emph{do not} coincide with the first $B-1$ spins of ${\cal B}'$. With this choice, averages with respect to $\tilde p$ can be evaluated as matrix traces. E.g., for a periodic chain of length $L$, one has $\sum_{\mathcal{C}} \tilde p(\mathcal{C}) = {\rm tr}(M^L)$. We note that for $B=2$, the block model reduces to the 1$d$ Ising model, which would usually be solved using a $2\times 2$ transfer matrix: the method presented here uses a $4\times 4$ transfer matrix. This is less efficient numerically, but its generalisation to larger $B$ is simpler. We now relate the matrix $M$ to the variational free energy $F$ defined in (\ref{equ:Fvar}). To this end, we symmetrise the operator in (\ref{equ:Fvar}), noting that $[\mathbb{W}_R(\nu)]_{\mathcal{C}',\mathcal{C}} p^0(\mathcal{C}) = \sqrt{p^0(\mathcal{C}')} [\sum_i \hat{H}_i(\nu)]_{\mathcal{C}',\mathcal{C}} \sqrt{p^0(\mathcal{C})}$ where \begin{equation} \hat{H}_i(\nu) = \hat{n}_{i-1} \big[ \sqrt{c(1-c)}(\sigma^-_i + \sigma^+_i) - (1-\nu)(1-2c)\hat{n}_i - (1-\nu)c \big] \end{equation} is a symmetric (self-adjoint) operator associated with flips of spin $i$. 
Since the trial potentials that we consider are translationally invariant along the chain, (\ref{equ:Fvar}) becomes \begin{equation} F = \frac{ - \sum_{\mathcal{C},\mathcal{C}'} \sqrt{\tilde p(\mathcal{C}')} \left[\hat{H}_i(\nu)\right]_{\mathcal{C}',\mathcal{C}} \sqrt{\tilde p(\mathcal{C})} } {\sum_{\mathcal{C}} \tilde p(\mathcal{C}) } \label{equ:Fvar-H} \end{equation} where $[\hat{H}_i(\nu)]_{\mathcal{C}',\mathcal{C}}$ indicates a matrix element of $\hat{H}_i(\nu)$. It is convenient to write the numerator here as $\sum_{\mathcal{C},\mathcal{C}'} O_i(\mathcal{C},\mathcal{C}') \tilde p(\mathcal{C})$, with \begin{equation} O_i(\mathcal{C}',\mathcal{C}) = \sqrt{\frac{\tilde p(\mathcal{C}')}{\tilde p(\mathcal{C})}} \left[\hat{H}_i(\nu)\right]_{\mathcal{C}',\mathcal{C}} . \label{equ:Oi} \end{equation} We note (i)~that $O_i(\mathcal{C}',\mathcal{C})=0$ unless $\mathcal{C}$ and $\mathcal{C}'$ coincide for all spins $j$ except $j=i$, and (ii)~that $O_i(\mathcal{C}',\mathcal{C})$ depends only on spins $n_{i-B+1},\dots,n_{i+B-1}$ (as long as $B>1)$. One may therefore use the transfer matrix representation of $\tilde p$ to sum over all spins $n_j'$ except for $j=i$, and over all spins $n_j$ with $j<i-B+1$ or $j>i+B-1$. The result (for a periodic chain of length $L$) is \begin{equation} \fl F_B = \frac{1}{\mathrm{tr}(M^L)} \sum_{n_i'} \sum_{n_{i-B+1},\dots,n_{i+B-1}} O(\mathcal{C},\mathcal{C}') \left[ \prod_{j=i-B+1}^{i-1} M_{{\cal B}_j,{\cal B}_{j+1}} \right] (M^{L-B+1})_{{\cal B}_{i},{\cal B}_{i-B+1}} \label{equ:Fvar-block} \end{equation} For long chains, the matrix element $(M^{L-B+1})_{{\cal B},{\cal B}'}$ can be replaced by $\lambda_\mathrm{max}^{L-B+1} x({\cal B}) y({\cal B}')$ where $\lambda_\mathrm{max}$ is the largest eigenvalue of $M$ and $x$ and $y$ the corresponding right and left eigenvectors. The resulting expression may therefore be evaluated by constructing and diagonalising $M$. After minimising $F$ over the variational parameters $z_{\cal B}$, one may then evaluate any one-time observable in the $\nu$-ensemble, via the transfer matrix $M$. \subsection{Exponential decay of domain distribution in the block model} \label{app:pd-exp} Here, we outline a derivation that shows that if the effective potential of a system is a block model of range $B$ then distributions of domain sizes decay exponentially for ranges $r>B$. For systems described by a block model, the probability of observing a block in state $\cal B$ is \begin{equation} P({\cal B}) = \frac{ \mathrm{tr}( M^N e^{\cal B}) }{ \mathrm{tr}( M^N ) } \end{equation} where $M$ is the transfer matrix of the previous section, and $e^{\cal B}$ is a diagonal matrix with $e_{{\cal B},{\cal B}}=1$ and all other entries being zero. Assuming that the matrix $M$ has a gap between its largest and second largest eigenvalues then for very large $N$, $M^N \approx \bm{v} \lambda^N \bm{u}^T$ where $\lambda$ is the largest eigenvalue of $M$ and $\bm{u},\bm{v}$ the associated left and right eigenvectors, normalised such that $\bm{u}\cdot\bm{v}=1$. Hence, as $N\to\infty$, one has $P({\cal B}) \to v_{\cal B} u_{\cal B}$. Further, the probability that blocks $i$ and $i+1$ are in states ${\cal B}, {\cal B}'$ is \begin{equation} P({\cal B},{\cal B}') = \frac{ \mathrm{tr}( M^{N-1} e^{\cal B} M e^{\mathcal{B}'}) }{ \mathrm{tr}( M^N ) } \end{equation} Again, for large $N$ the trace is dominated by the largest eigenvalue of $M$, so that $P({\cal B},{\cal B}') \to u_{\cal B} M_{{\cal B},{\cal B}'} v_{{\cal B}'} / \lambda$. 
Finally, the analogous property for three successive blocks (for $N\to\infty$) is easily shown to be $P({\cal B},{\cal B}',{\cal B}'') \to u_{\cal B} M_{{\cal B},{\cal B}'} M_{{\cal B}',{\cal B}''} v_{{\cal B}''} / \lambda^2$ from which we can read off that \begin{equation} P({\cal B},{\cal B}',{\cal B}'') = \frac{ P({\cal B},{\cal B}') P({\cal B}',{\cal B}'') }{ P({\cal B}') }. \label{equ:BBB} \end{equation} Recalling that specifying three successive blocks determines the configuration of $B+2$ successive spins, consider the probability of finding these spins in the configuration $(n_i,0^B,n_{i+B+1})$, where $0^B$ stands for $B$ successive down spins. From (\ref{equ:BBB}), this is seen to be \begin{equation} P(n_i,0^B,n_{i+B+1}) = \frac{ P(n_i,0^B) \, P(0^B,n_{i+B+1}) }{ P(0^B) }, \end{equation} and generalising to more than three blocks leads to the more general formula \begin{equation} P(n_i,0^{B+x},n_{i+B+x+1}) = \frac{ P(n_i,0^{B}) \, P(0^{B+1})^{x} \, P(0^{B},n_{i+B+x+1}) }{ P(0^{B})^{x+1} }. \end{equation} Finally, identifying the domain size distribution $p(d) = P(1,0^{d-1},1)/P(1)$, one sees that $p(d)$ decays exponentially for $d>B$, proportional to $[P(0^{B+1})/P(0^B)]^{d-B}$. A similar result is familiar for the domain structure in one-dimensional Ising systems: in that case $B=2$, and we note that our definition of $p(d)$ is then directly related to the distribution of sizes of spin-down domains. \subsection{Variational free energy for $p_d$-model} \label{app:pd} The calculation of the variational free energy for the $p_d$-model also relies on properties of the matrix element $O_i(\mathcal{C},\mathcal{C}')$ given in (\ref{equ:Oi}). Since $O_i(\mathcal{C}',\mathcal{C})\neq 0$ only if $n_{i-1}=1$, one should consider only configurations $\mathcal{C}$ where a `domain' starts at site $i-1$. This occurs with probability $\langle n_{i-1}\rangle^\mathrm{var}=\frac{1}{\sum d p_d}$ where we use the notation $\langle\cdot\rangle^\mathrm{var}$ for averages with respect to the trial distribution $\tilde p$. Recalling in addition that $O_i(\mathcal{C}',\mathcal{C})=0$ unless either $\mathcal{C}=\mathcal{C}'$ or $\mathcal{C}'$ and $\mathcal{C}$ differ only at spin $i$, one finds that $O_i(\mathcal{C}',\mathcal{C})$ depends on the size $d$ of the domain starting at site $i-1$; if $d>1$ then it depends only on $d$ while if $d=1$ then $O_i(\mathcal{C}',\mathcal{C})$ also depends on the size of domain $d'$ that starts at $i$. Identifying the relevant cases leads directly from (\ref{equ:Fvar-H}) to (\ref{equ:Fd}). \subsection{Variational free energy for the $p_{fe}$-model} \label{app:pfe} Within the $p_{fe}$ model, the variational free energy is calculated similarly to the $p_d$ model. The probability that spin $i-1$ occupies a particular position within a domain of parameters $(f,e)$ is $p_{fe}/\sum_{f'e'}(f'+e')p_{f'e'}$. The matrix element $O_i(\mathcal{C},\mathcal{C}')\neq0$ only if spin $i-1$ is one of the $f$ up spins in this domain. The derivation of the variational free energy then follows that for the $p_d$ model, except that it requires an additional explicit summation over the $f$ possible positions of spin $i-1$ within the domain. The matrix element $O_i(\mathcal{C},\mathcal{C}')$ depends only on the domain containing site $i-1$, except in the case that this domain has $e=1$, in which case it depends additionally on the next domain to the right. Enumerating the specific cases, one arrives at (\ref{equ:Ffe}). \end{appendix} \section*{References}
\section{Introduction}\label{sec:introduction} Be/X-ray transients harbour high-magnetic field (B$\sim$10$^{12-13}$ G) neutron stars (NSs) that move around Be-type stars in eccentric orbits. Typically, these binaries can exhibit two types of transient activity: more common, short-lived (i.e., lasting a small fraction of the orbital period) type-I outbursts that typically occur at the periastron passage of the NSs, and giant, type-II outbursts that can last significantly longer than an orbital period. In the first case, the NSs accrete matter from the Be-star decretion disks (e.g., \citealt{Okazaki2001}) and reach X-ray luminosities of L$_{X}\sim$10$^{36-37}$ erg s$^{-1}$. In the second case, during the type-II outbursts the luminosities usually reach the Eddington limit for a NS but the exact physical mechanism(s) causing these outbursts is not understood (\citealt{Moritani2013}; \citealt{Monageng2017}).\\ \indent Most studies of Be/X-ray transients have been performed at high luminosities \citep[i.e., $>$10$^{36}$ erg s$^{-1}$; e.g., see the review by][]{Reig2011} and consequently not much is known about their behaviour when they accrete at lower X-ray luminosity. However, several Galactic systems have been detected at luminosities of L$_{X}\sim$10$^{34-35}$ erg s$^{-1}$ (e.g., \citealt{Motch1991}; \citealt{Campana2002}; \citealt{Rutledge2007}), indicating that they are still accreting but at relatively low rates. In addition, detailed X-ray studies (using the \textit{Chandra} and \textit{XMM-Newton} observatories) of the Be/X-ray binary populations in the Magellanic Clouds (i.e., the Small Magellanic Cloud; \citealt{Laycock2010}; \citealt{Haberl2016}) have shown that a significant number of the Be/X-ray transients in those galaxies can be detected at similarly low luminosities in between their outbursts. This fact indicates that such low-luminosity states are a common property of Be/X-ray transients.\\ \indent At these low rates, the NS spin period is a key component in determining how the accretion proceeds. For slow spinning NSs (spin periods of hundreds of seconds) matter may still be directly accreted onto the stellar surface (i.e., at the magnetic poles) but for the fast spinning systems (with spin periods of only a few seconds) matter is likely ejected from the inner part of the system due to the pressure of the rotating NS magnetic field (the so-called propeller effect; e.g., \citealt{Illarionov1975}; \citealt{Stella1986}; \citealt{Romanova2004}; \citealt{DAngelo2010}; \citealt{Tsygankov2016}). In the latter case, one would expect that the observed flux is not pulsed. However, some sources still exhibit pulsations in this regime, probably because matter is able to leak through the field lines and reach the surface at the magnetic poles, thus creating hot spots (e.g., \citealt{Elsner1977}; \citealt{Ikhsanov2001}; \citealt{Lii2014}).\\ \indent Some sources are detected at even lower luminosities of L$_{X}\sim$10$^{32-34}$ erg s$^{-1}$ (e.g., \citealt{Mereghetti1987}; \citealt{Roberts2001}; \citealt{Campana2002}; \citealt{Reig2014}; \citealt{Elshamouty2016}), but the cause of this faint emission is still unclear. One possibility is the accretion down to the magnetosphere, although it is likely that this emission would not peak in the X-rays but at longer wavelengths (e.g., UV; \citealt{Tsygankov2016}).
However, the pulsed emission seen in some systems at those luminosities (e.g., \citealt{Rothschild2013}; \citealt{ Doroshenko2014}; Table 2 of \citealt{Reig2014}) suggests that, for at least those systems, the matter does reach the surface at the magnetic poles, supporting the idea of leakage of matter through the magnetospheric barrier (\citealt{Orlandini2004}; \citealt{Mukherjee2005}).\\ \indent Although low level accretion onto the magnetic poles could be the physical mechanism behind these low luminosities in some sources, it is also possible that in other systems the accretion has fully halted and we see a cooling NS. During outburst the accreted matter might heat up the NS (due to the pycnonuclear reactions deep in the crust; \citealt{Brown1998}) and, when the accretion has stopped, the deposited heat is radiated away. This explanation has been proposed for several sources but it remains to be confirmed (\citealt{Campana2002}; \citealt{Wijnands2013}; \citealt{Reig2014}; \citealt{Elshamouty2016}; \citealt{Tsygankov2017b}).\\ \indent In the heating and cooling scenario, one would expect that after a long and strong period of accretion (i.e., after type-II outbursts), the crust might be heated very significantly and it might have become hotter than the core. When the accretion has stopped, the crust would then slowly cool down until equilibrium is reached again. This process would be similar to what has been observed for several low-magnetic field NS systems (for recent discussions see \citealt{Degenaar2015} and \citealt{Parikh2017}). \cite{Wijnands2016} attempted to test this hypothesis in two Be/X-ray transients (4U 0115+63 and V0332+53) after the end of their type-II outbursts. They found that after those bright outbursts both sources showed elevated emission in a meta-stable state above their known quiescent levels (e.g., \citealt{Elshamouty2016}; \citealt{Tsygankov2017b}). \cite{Wijnands2016} suggested that this "plateau phase" could indeed be consistent with the slow cooling of the crust, although they could not exclude accretion scenarios. Here we further investigate this plateau phase for 4U 0115+63.\\ \section{Observations, Analysis and Results}\label{sec:analysis} After the \textit{Swift}/X-Ray Telescope (XRT) data reported in \cite{Wijnands2016}, we obtained several extra \textit{Swift}/XRT observations as well as an \textit{XMM-Newton} one of 4U 0115+63 (Table \ref{tab:Table1} and Table \ref{tab:Table2}). This source harbours a magnetized NS (B$\sim$1.3$\times$10$^{12}$ G; \citealt{Raguzova2005}), with a spin period of P$_\mathrm{s}\sim$3.62 s (\citealt{Cominsky1978}) and an orbital period of P$_\mathrm{orb}\sim$24.3 days (\citealt{Rappaport1978}). 4U 0115+63 was monitored with the XRT in the Photon Counting (PC) mode and observed by \textit{XMM-Newton} with the EPIC detectors in the full frame (pn) and large window (MOS) modes. The \textit{Swift}/XRT PC mode only allows for a time resolution of $\sim$2.5 s which is insufficient to study possible pulsations due to the NS spin. For \textit{XMM-Newton}, we used the pn to search for and study the pulsations (time resolution 73.4 ms; section \ref{subsec:timing}) due to its higher count rate.\\ \begin{figure} \centering \includegraphics[width=1\columnwidth, trim=1cm 0 1cm 2.5cm]{Final_version_lc_v2_Final.ps} \caption{\textit{Swift}/XRT (blue) and \textit{Swift}/BAT (black) light curves during and after the 2015 type-II outburst of 4U 0115+63. 
The time of our \textit{XMM-Newton} observation is given as a red symbol converted into \textit{Swift}/XRT count rate (section \ref{subsec:lightcurve}). The dotted brown line corresponds to the quiescent level of 4U 0115+63. Vertical dotted lines indicate the time of periastron passages. We label our results in the different source phases as a continuation of the ones used in \citet*{Wijnands2016}. Errors are 3$\sigma$.} \label{fig:BAT_lc} \end{figure} \subsection{Light curve}\label{subsec:lightcurve} The 2015 type-II outburst was monitored using the \textit{Swift}/BAT and \textit{Swift}/XRT (see Figure \ref{fig:BAT_lc}). In particular, the decay of this outburst was intensively monitored using the \textit{Swift}/XRT, which also allowed us to observe the transition of the source into the low-luminosity state (see also \citealt{Wijnands2016}). This was the first time such a state was detected for this system but very likely it was present after the end of the previous type-II outbursts as well. However, those possible occurrences were missed because no sensitive X-ray observations after the end of those outbursts were obtained in the past.\\ \indent We obtained the \textit{Swift}/BAT data from the hard X-ray transient monitor web page\footnote{\url{http://swift.gsfc.nasa.gov/results/transients/weak/4U0115p634/}} (\citealt{Krimm2013}) and the \textit{Swift}/XRT light curve from the `build \textit{Swift}/XRT products' interface\footnote{\url{http://www.swift.ac.uk/user_objects/}} (\citealt{Evans2009}). For a direct comparison, the source count rate during the \textit{XMM-Newton} observation was converted to XRT count rate (for the energy range 0.5-10 keV) by applying the WEBPIMMS\footnote{\url{http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl}} tool and using the spectral parameters obtained from the first observation during the low luminosity state (hereafter ``plateau phase'') in our paper. From Figure \ref{fig:BAT_lc} (see also \citealt{Wijnands2016}), we can see that after the type-II outburst the source did not go directly to quiescence (\citealt{Campana2002}; \citealt{Tsygankov2017b}) but settled down at a plateau that is a factor of $\sim$10 brighter. $\sim$240 days after the type-II outburst, the source experienced several type-I outbursts (e.g., \citealt{Nakajima2016a,Nakajima2016b,Nakajima2016c}).\\ \indent After the episodes reported by \cite{Wijnands2016}, the \textit{Swift}/XRT count rate continued to decrease until our ``\textit{XMM-Newton} observation (b)'' (see Figure \ref{fig:BAT_lc}): from $\sim$0.015 counts s$^{-1}$ during ``Interval I'' of \citet*[i.e., at the start of the plateau phase]{Wijnands2016} to $\sim$0.002 counts s$^{-1}$ during \textit{XMM-Newton} observation (b). A similar decreasing trend is seen in the X-ray luminosities (Figure \ref{fig:Parameters}, top panel) obtained from our spectral analysis (see section \ref{subsec:spectra}). Although a general decreasing trend is seen in the \textit{Swift}/XRT count rate, it unexpectedly increased slightly again to $\sim$0.007 counts s$^{-1}$ (``Interval IV (c)'') after the \textit{XMM-Newton} observation (b). During the plateau phase the \textit{Swift}/BAT count rate appears to have brief excursions above the background, possibly indicating occasional enhanced activity (Figure \ref{fig:BAT_lc}). However, those variations are consistent with statistical fluctuations and do not represent detections of the source (see \citealt{Krimm2013} for a detailed description of the \textit{Swift}/BAT data processing).
The monitoring campaign stopped for $\sim$53 days after our Interval IV (c) \textit{Swift}/XRT observation until the source exhibited a type-I outburst (``1$^{st}$ Type-I outburst (d)'') during which the \textit{Swift}/XRT count rate reached $\sim$4 counts s$^{-1}$. The decay during this outburst was monitored with the \textit{Swift}/XRT and 26 days after the peak of the outburst we obtained an 8.8 ksec observation in which the source only showed 8 photons (``Quiescence I (e)''; see Figure \ref{fig:BAT_lc}). This level is consistent with the quiescent level of the source (Figure \ref{fig:BAT_lc}; \citealt{Tsygankov2017b}).\\ \indent The \textit{Swift}/BAT continued to monitor the source and 17 days later another type-I outburst was detected (Figure \ref{fig:BAT_lc}; \citealt{Nakajima2016b}). We monitored the source using the XRT to determine how it would transition to quiescence. Surprisingly, during the first observation the source was not in quiescence but at a relatively high count rate ($\sim$0.15 counts s$^{-1}$) and 12 days later the source increased again in count rate, indicating the start of the next type-I outburst (``2$^{nd}$ Type-I outburst (f)''; Figure \ref{fig:BAT_lc}). Already within 7 days of the peak, the source was back in quiescence; we obtained 5 observations (``Quiescence II (g)'') during which the source was not detected or only marginally detected. The stacking of these observations shows a weak (9 photons in 4.7 ksec) but clear detection.\\ \subsection{Spectral analysis}\label{subsec:spectra} \begin{figure} \centering \includegraphics[width=\columnwidth, trim=0.5cm 0 1.25cm 0]{Parameters_noUL_v2_Final.ps} \caption{Evolution (using a black-body model) of the luminosity (top; 0.5-10 keV), temperature (middle) and emission radius (bottom; fixed values not shown). The blue points are the \textit{Swift}/XRT observations and the red point is the \textit{XMM-Newton} one. The dotted brown line corresponds to parameters in quiescence (\citealt{Tsygankov2017b}). Labels as in Figure \ref{fig:BAT_lc}. Errors are 1$\sigma$.} \label{fig:Parameters} \end{figure} For the spectral analysis, we only used \textit{Swift}/XRT observations obtained after those already analyzed by \cite{Wijnands2016} (ObsID 000311720[42-59]) and used the results obtained by these authors for the earlier observations. The data extraction and analysis have been performed in the same way as in \cite{Wijnands2016} so that the results can be directly compared. Due to the very low count rates in the observations with ObsIDs 000311720[42-45] (a) and 000311720[55-59] (g), we stacked them. The data were analysed with HEASOFT v.6.17. After reprocessing the raw data with the XRTPIPELINE, we extracted source and background spectra using XSELECT. The source information (count rates, spectra) was obtained from a circular region with a radius of 15 pixels centered around the source position (\citealt{Reig2015}), and background information from a surrounding annulus with an inner-outer radius of 60-110 pixels. Only during the type-I outburst observations was the source count rate $\geq$0.5 counts s$^{-1}$, causing pile-up. Therefore, those data were corrected following the standard thread\footnote{\url{http://www.swift.ac.uk/analysis/xrt/pileup.php}}. The exposure maps were obtained through XRTPIPELINE and were used to create ancillary response files using XRTMKARF.
We employed the response matrix files (version 14) from the \textit{Swift} calibration database.\\ \indent For the \textit{XMM-Newton} data, we used SAS (version 1.2) for reducing and analysing them. The EMPROC and EPPROC tasks produced calibrated event lists, which were filtered against background flaring (determined in the 10-12 keV range with a threshold rate $\geq$0.5 counts s$^{-1}$ for pn; in the $>$10 keV range with a threshold rate $\geq$0.3 counts s$^{-1}$ for MOS). Source counts and spectra were obtained from a circular region with a 20 arcsec radius, and the background ones from a circular region free of sources with a 50 arcsec radius on the same CCD. The data were not affected by pile-up. The response matrix files were generated using RMFGEN and the ancillary response files using ARFGEN.\\ \indent In Figure \ref{fig:Parameters} we show the evolution of the source during the plateau phase, plotting our new results together with those of \cite{Wijnands2016}. Due to the often very low number of counts in the spectra, we rebinned the data (with GRPPHA for the \textit{Swift} data and SPECGROUP for the \textit{XMM-Newton} data) to 1 count per bin and used C-statistics to fit the spectra. The data were fitted in the 0.5-10 keV energy range using XSPEC v.12.9.0. The pn, MOS1 and MOS2 spectra were fitted simultaneously, with all parameters tied between the spectra. We fitted the data with either an absorbed power-law model (PEGPWRLW) or an absorbed blackbody model (BBODYRAD). We note that we cannot exclude other single component models or spectral models with multiple components (e.g., a blackbody model plus a power-law contribution; potentially describing multiple emission mechanisms). However, the quality of our spectra is typically rather poor and we cannot distinguish between different single component models (except for the \textit{XMM-Newton} spectrum, see below), let alone obtain constraining results when fitting multiple components. Therefore, we have opted to only report on the spectral results obtained with two of the most basic models used in the fitting of X-ray binary spectra: the blackbody model and the power-law model. This will allow us to determine general trends in the data (i.e., softening of the spectra over time), which will help in the interpretation of the results.\\ \indent We included absorption by the interstellar medium (TBABS) with abundances set to WILM (\citealt{Wilms2000}) and cross-sections to VERN (\citealt{Verner1996}). The column density was fixed to the same value as in \citet[i.e., N$_{H}$=9$\times$10$^{21}$ cm$^{-2}$]{Wijnands2016}. We adopted a distance of D=7 kpc (\citealt{Negueruela2001}). In the PEGPWRLW model we set the energy boundaries to 0.5 and 10 keV, so that the model normalization gives the unabsorbed flux for that energy range (see Table \ref{tab:Table2}). For the BBODYRAD fits, we left the emitting radius as a free parameter and determined the unabsorbed 0.5-10 keV flux by using CFLUX (see Table \ref{tab:Table1}).\\ \indent The results of our spectral analysis are given in Tables \ref{tab:Table1} and \ref{tab:Table2}. Although we cannot statistically prefer one of the models over the other during the plateau phase because of the \textit{Swift} data quality, the \textit{XMM-Newton} spectrum is of high enough quality that the blackbody model is preferred. The PEGPWRLW fits suggest that the spectra of the plateau phase are softer (typical photon indices of 1.5-3.6) than those of the type-I outbursts (indices of 0.5-1.2; Table \ref{tab:Table2}).
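For reference, the conversion from the fitted BBODYRAD parameters to the emission radii and 0.5-10 keV luminosities quoted below can be sketched as follows. This is a minimal, illustrative Python/numpy calculation (not part of our actual pipeline); it assumes the standard XSPEC BBODYRAD conventions, i.e. a photon spectrum $N(E)=1.0344\times10^{-3}\,{\rm norm}\,E^{2}/[\exp(E/kT_{bb})-1]$ photons cm$^{-2}$ s$^{-1}$ keV$^{-1}$ and ${\rm norm}=(R_{\rm km}/D_{10\,{\rm kpc}})^{2}$, and the input values are placeholders roughly corresponding to the Interval IV fit:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Placeholder fit values (roughly the Interval IV blackbody fit)
kT = 0.85        # blackbody temperature [keV]
norm = 0.066     # XSPEC bbodyrad normalization = (R_km / D_10kpc)^2
D_kpc = 7.0      # adopted distance [kpc]

# Emission radius from the bbodyrad normalization convention
R_km = np.sqrt(norm) * (D_kpc / 10.0)

# Unabsorbed 0.5-10 keV flux: integrate E * N(E) over the band
photon_spec = lambda E: 1.0344e-3 * norm * E**2 / np.expm1(E / kT)  # ph/cm^2/s/keV
flux_keV, _ = quad(lambda E: E * photon_spec(E), 0.5, 10.0)         # keV/cm^2/s
flux_cgs = flux_keV * 1.602e-9                                      # erg/cm^2/s

# Luminosity at the adopted distance
D_cm = D_kpc * 3.086e21
L_x = 4.0 * np.pi * D_cm**2 * flux_cgs
print(f"R_bb ~ {R_km:.2f} km, L_0.5-10 ~ {L_x:.2e} erg/s")
# -> roughly 0.18 km and 2e33 erg/s, in the ballpark of the Interval IV values
#    quoted in the text
\end{verbatim}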
The spectra of the plateau phase can be adequately described by a BBODYRAD model with temperatures of kT$_{bb}\sim$ 0.4-0.8 keV and radii R$_{bb}\sim$ 0.1-0.5 km (Table \ref{tab:Table1}; Figure \ref{fig:Parameters}), smaller than the radius of a NS. This suggests that the emission comes from hot spots on the surface (likely at the magnetic poles). This is confirmed by the detection of pulsations in the \textit{XMM-Newton} data (section \ref{subsec:timing}). The (c) state shows a higher luminosity (L${_X}\sim$2.1$\times$10$^{33}$ erg s$^{-1}$; Figure \ref{fig:Parameters}) than during (b), as well as a higher temperature (from $\sim$0.44 keV to $\sim$0.85 keV) but a significantly smaller emission radius (from $\sim$0.51 to $\sim$0.18 km; Table \ref{tab:Table1}). The source is detected at L${_X}\sim$10$^{32-33}$ erg s$^{-1}$ (0.5-10 keV) during the low-luminosity state, whereas during the type-I outbursts it increases to L${_X}\sim$10$^{34-36}$ erg s$^{-1}$. The (e) and (g) states are consistent with quiescence (Figure \ref{fig:Parameters}).\\ \subsection{Timing analysis}\label{subsec:timing} The \textit{XMM-Newton} pn data were barycentre corrected using the task BARYCEN, and the region selection applied was the same as used in the spectral analysis. We rebinned the data to 0.1 s time resolution and then used 16384 points (thus 1638.4 s of data) to make an FFT. This resulted in a power-density spectrum in the frequency range of 6.104$\times$10$^{-4}$ Hz to 5 Hz (the Nyquist frequency). No background subtraction was performed prior to the calculation of the FFTs. The Poisson level was removed from the final power spectrum, which was then normalized using an rms normalization, where the power density units are (rms/mean)$^2$ Hz$^{-1}$. The power spectrum in Figure \ref{fig:Pulse} shows a peak close to the known NS spin frequency ($\sim$0.3 Hz).\\ \indent The light curve was then folded using PRESTO (\citealt{Ransom2001}) to determine the precise spin frequency. The best results were obtained using a frequency of (2767760.0$\pm$9.2)$\times$10$^{-7}$ Hz, yielding a period of (361300.0$\pm$1.5)$\times$10$^{-5}$ s. The pulse profile (see inset in Figure \ref{fig:Pulse}) was fitted with a sinusoid using the procedure described in \cite{Patruno2010} to obtain the pulse fractional amplitude. We fitted the profile with one harmonic and with two harmonics (the fits were performed on a single cycle, but for clarity two cycles are shown in Figure \ref{fig:Pulse}). When one harmonic is fitted we find the fractional amplitude to be FA$_1$=83.3$\pm$9.5 per cent, with a $\chi^{2}_{\nu}$=1.3 for 46 degrees of freedom. This indicates that the fit is marginally acceptable with a p-value of 7$\%$. To explore the possibility that the profile might contain multiple harmonics, we also fitted it with two harmonics. We found FA$_1$=83.3$\pm$8.7 and FA$_2$=28.4$\pm$7.6 per cent with $\chi^{2}_{\nu}$=1.09 for 44 degrees of freedom. Therefore, fitting two harmonics to the profile only yielded a marginally better $\chi^{2}_{\nu}$ and consequently we cannot conclude that the second harmonic is real. If it were real, it might be due to slight beaming of the emission (e.g., by Compton scattering due to low-level accretion) or because there are two hot spots contributing to the observed emission.
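As an aside, the harmonic decomposition used to quote these fractional amplitudes can be sketched as follows. This is an illustrative numpy/scipy version, not a description of the actual PRESTO-based procedure of \cite{Patruno2010}; the input file, the number of phase bins, and the definition FA$_i = A_i/A_0$ (sinusoidal semi-amplitude over the mean level) are assumptions of the sketch:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

nu = 0.2767760                                    # assumed spin frequency [Hz]
times = np.loadtxt("pn_barycentred_events.txt")   # hypothetical photon arrival times [s]

# Fold the events into phase bins
phase = (times * nu) % 1.0
counts, edges = np.histogram(phase, bins=16, range=(0.0, 1.0))
centers = 0.5 * (edges[:-1] + edges[1:])
errors = np.sqrt(np.maximum(counts, 1.0))

# Two harmonically related sinusoids; drop the a2 term for the one-harmonic fit
def profile(phi, a0, a1, p1, a2, p2):
    return a0 + a1*np.sin(2*np.pi*phi + p1) + a2*np.sin(4*np.pi*phi + p2)

popt, pcov = curve_fit(profile, centers, counts, sigma=errors,
                       p0=[counts.mean(), 0.5*counts.std(), 0.0,
                           0.1*counts.std(), 0.0])
a0, a1, _, a2, _ = popt
print(f"FA1 ~ {100*abs(a1)/a0:.1f} per cent, FA2 ~ {100*abs(a2)/a0:.1f} per cent")
\end{verbatim}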
However, because of the low significance of this second harmonic, we will not discuss it or its potential consequences for the interpretation further in our paper.\\ \begin{figure} \centering \includegraphics[width=0.9\columnwidth, trim=2.5cm 1cm 4cm 2.5cm]{PaperPlot_2periods_2Harmonic_4jul2017_v1.ps} \caption{Power spectrum obtained from the EPIC-pn data showing a clear signal at $\sim$0.277 Hz that corresponds to the NS spin frequency. Inset: pulse profile folded at the detected frequency. Two cycles are shown for clarity. The red curve is a fit to the profile using two harmonically related sinusoids.} \label{fig:Pulse} \end{figure} \section{Discussion}\label{sec:discussion} We have presented additional \textit{Swift}/XRT and \textit{XMM-Newton} observations (Figure \ref{fig:BAT_lc}) during the low-luminosity plateau phase previously identified for 4U 0115+63 (\citealt{Wijnands2016}). From our \textit{Swift} data we can infer that the plateau phase lasted between 160 and 200 days. The exact duration cannot be estimated due to the lack of monitoring during the final part of this state and because the system started to exhibit type-I outbursts. \cite{Wijnands2016} proposed three models to explain the plateau phase: accretion down to the NS magnetosphere, cooling of an accretion-heated NS crust, and direct low-level accretion onto the NS magnetic poles. The magnetospheric accretion model can likely be excluded because no pulsations are expected in this scenario (\citealt{Campana2002}), contrary to our \textit{XMM-Newton} results where we detect a clear pulsation at the NS spin frequency. This assumes that pulsations were also always present during the XRT observations in the plateau phase. The data do not allow this to be tested, but if pulsations were not present during (some of) these XRT observations, magnetospheric accretion could still have occurred during some of those observations. Moreover, it is indeed possible that multiple emission mechanisms were active during different observations in the plateau phase (or even during a specific observation; e.g., cooling emission during some observations and accretion emission during others; i.e., at periastron), but the current data set is of too low quality to further investigate this. In the following two sections we, for simplicity, assume that only one emission mechanism is active during the full duration of the plateau phase. This will allow us to investigate whether a single mechanism is indeed sufficient to explain this enigmatic state and what the possible problems are with this hypothesis.\\ \subsection{Cooling of the neutron star heated crust}\label{subsec:cooling} The accretion-induced heating of the NS crust and its subsequent cooling has been studied extensively for accreting low-magnetic-field (B$\leq$10$^8$ G) NSs, but not for high-magnetic-field NSs (B$\sim$10$^{12-13}$ G). The strong magnetic field could alter the heating and cooling processes significantly. \cite{Wijnands2016} suggested that the magnetic field configuration in the crust might be such that the cooling process could take place through hot spots, an idea supported by the small radii of the emission regions found in our spectral analysis.
Such an interpretation might be consistent with the detection of pulsations during our \textit{XMM-Newton} observation.\\ \indent Our \textit{Swift}/XRT monitoring campaign during the plateau phase is consistent with the cooling hypothesis, except for the last observation (Interval IV in Figure \ref{fig:BAT_lc}). During that observation the luminosity increases compared to the previous \textit{XMM-Newton} one; concurrently, we detect a significant increase in the fitted temperature (Figure \ref{fig:Parameters}; Table \ref{tab:Table1}). The emitting region notably decreased in size, but this was not enough to compensate for the rise in temperature, so that the observed luminosity increased. This indicates that the energy release at the hot spots increased significantly, which cannot straightforwardly be explained in the cooling hypothesis.\\ \indent Interval IV was obtained after a periastron passage, which could lead to matter transfer from the Be disk or from the wind of the companion to the NS. This matter could leak through the magnetosphere barrier and potentially cause an increase in luminosity. The lack of monitoring after Interval IV does not allow us to determine if this observation was part of an overall trend or a special event on top of a general decay curve. Nevertheless, the spectral shape is difficult to explain in this scenario (section \ref{subsec:accretion}), and therefore we suggest an explanation for this enigmatic behaviour within the cooling scenario.\\ \begin{figure} \centering \includegraphics[width=1.2\columnwidth, trim=5cm 1cm 2cm 1.2cm]{Transition_phase_model_Final.ps} \caption{Phase Transition model (not to scale). Colour code: red (liquid crustal layer, so-called "ocean"), pink (solid crustal layer), yellow (core), green (hot spot region at the magnetic pole), garnet (hot spot central emission region) and dashed black lines (magnetic field lines). \textit{Upper Panel}: Initial configuration of the crust once the accretion has halted. \textit{Bottom Panel}: Configuration after the cooling has proceeded. The inner layers of the ocean have become solid and the magnetic field lines located in this region conduct the heat faster towards the magnetic pole. As a consequence, the central region of the hot spot emits a higher heat flux.} \label{fig:Phase_model} \end{figure} \indent One possible explanation, which we call the "Phase Transition model" (Figure \ref{fig:Phase_model}), is based on the possible phase transitions the crust matter might undergo during the cooling process. Two matter states are likely present: a solid state in the inner part and a liquid one, the so-called "ocean", that formed on top of the solid crust due to the heat generated by the accretion. When the accretion has halted, the liquid ocean slowly becomes solid again as the temperature decreases. Since we see pulsations during our \textit{XMM-Newton} observation (assuming pulsations were also present during the \textit{Swift}/XRT observations), the heat stored in the crust must be preferentially released at hot spots, likely at the magnetic poles. Therefore, it is plausible to assume that the magnetic field in the crust might be parallel to the surface and only exits the crust at the poles (Figure \ref{fig:Phase_model}).\\ \indent Once the cooling starts, the bottom parts of the ocean become solid. At this point, some magnetic field lines that were initially in the liquid ocean are now in the new solid layer.
It has been shown that in a solid crust layer the thermal conductivity is larger than when this layer is liquid (\citealt{Horowitz2007}; \citealt{Brown2009}; \citealt{Mckinven2016}). As a consequence, the heat flux toward the hot spots increases significantly in the layer that solidified. Therefore, we can distinguish two regions in the hot spots: one in the outer part that is produced by those field lines guiding heat from the liquid layers, and another in the central part of the hot spots that is produced by the magnetic field located in the solid crustal layer and shows a higher heat flux. If originally all the field lines were located in the liquid layers, then when part of those layers becomes solid, the inner part of the hot spot can suddenly rise in temperature. Thus the hot spot does not physically shrink in size, but its core becomes significantly hotter and the observed emission is dominated by this central region. Therefore we observe an artificially smaller size for the emitting region, together with a higher temperature and luminosity. In addition, an extra amount of energy may be released due to the phase transition itself, which would heat the inner part of the hot spot.\\ \indent This model could in principle explain what we observed in Interval IV, but we stress that detailed calculations should be performed to determine if this is also physically possible. In addition, one prediction of the model is that after the initial apparent decrease in size of the hot spot, it should appear to grow again as the cooling proceeds, because most of the liquid layers will solidify and thus the region with enhanced heat flux will increase. We do not have additional observations after this enigmatic \textit{Swift}/XRT observation, so we cannot test this prediction.\\ \subsection{Direct accretion onto neutron star magnetic poles}\label{subsec:accretion} In the direct accretion scenario, the matter is guided by the magnetic field to the poles, forming hot spots. Although it remains unknown how accretion works at low accretion rates and what type of spectra would be produced, it might be possible that it would produce softer spectra than those obtained in higher accretion-rate states (\citealt{Zampieri1995}). The Be/X-ray transient 1A 0535+26 was observed at similar low luminosities and it showed clear accretion features (source detection at $>$ 100 keV and aperiodic variability in the light curve; \citealt{Orlandini2004}, \citealt{Rothschild2013}; \citealt{Doroshenko2014}), so low-level accretion can definitely occur in these systems. However, its spectrum was harder than what we observe for 4U 0115+63, indicating that the emission is not formed in the same way in both systems. Two possibilities are left: either low-level accretion onto strongly magnetized NSs could give two types of spectra, or the physical mechanisms are widely different and we indeed could see cooling emission in 4U 0115+63.\\ \indent When assuming that the plateau phase in 4U 0115+63 is due to low-level accretion down to the surface of the NS, the behaviour seen during Interval IV could be due to some increase in accretion triggered by the periastron passage. However, if the direct accretion scenario is the right model to explain the source behaviour during this observation, there are still some caveats to be clarified. A temperature increase can naturally be explained if the accretion rate onto the pole increases, but it is not clear why the size of the emitting region would suddenly decrease significantly.
This could signal a change in accretion geometry (e.g., disk-like to wind-like, or vice versa), but it is unclear if this is realistic. One key ingredient that needs to be explained is why we observe such soft spectra in the plateau phase compared to the type-II and the mini type-I outbursts (\citealt{Wijnands2016}), where the source is definitely accreting. Detailed models are needed that calculate the expected spectral shape during these different accretion states in order to make any further conclusive statements.\\ \section{Conclusions}\label{sec:conclusions} We have presented the latest results of our monitoring campaign using \textit{Swift} and \textit{XMM-Newton} on the Be/X-ray transient 4U 0115+63 during its low-luminosity state observed after its 2015 type-II outburst. The aim of our study was to investigate the possible physical mechanism or mechanisms behind this low-luminosity state that lasted for $\sim$200 days. The three previously proposed mechanisms (\citealt{Wijnands2016}) to explain this state are: accretion down to the NS magnetosphere, accretion onto the magnetic poles of the NS, or cooling emission from a NS heated due to the accretion of matter during the type-II outburst. The low number of photons detected during the low-luminosity state only allowed for simple, single component models (i.e., a power-law or a blackbody model) to be fitted to the X-ray spectra. The spectra were relatively soft and were well described by the blackbody model. In addition, we detected pulsations in our \textit{XMM-Newton} observation, indicative of emission from the NS magnetic poles. As a consequence, the hypothesis of emission from the magnetospheric boundary is ruled out because pulsations are not expected in this scenario (assuming that pulsations were present during the whole low-luminosity state; this could not be tested with the available data). However, we cannot conclusively reject the other two scenarios (or a combination of them), although neither can straightforwardly explain our results. Denser monitoring of Be/X-ray transients after their type-II outbursts is needed to further determine the mechanism behind the low-luminosity state they can exhibit after those outbursts.\\ \section*{Acknowledgements}\label{acknowledgements} ARE and RW acknowledge support from an NWO Top Grant, Module 1, awarded to RW. ASBN and AP are supported by an NWO Vidi grant, awarded to AP. YC is supported by the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Global Fellowship (grant agreement No.703916). ND acknowledges support from an NWO Vidi grant.\\
\section{Introduction} The persistent global consistency of the Standard Model (SM) predictions with the data collected so far at the CERN Large Hadron Collider (LHC) is radically changing our perspective on the origin of possible new physics beyond the SM, and on its characteristic energy scale. This is also supported by the Higgs boson discovery \cite{Aad:2012tfa}, which is in good agreement with SM expectations \cite{ATLAS:2019slw,Sirunyan:2018koj}. On the other hand, the growing evidence for dark matter (DM)~\cite{Ade:2013zuv} and the lack of suitable DM candidates in the SM framework are strengthening the possibility of the existence of physics beyond the SM, provided DM does not have a purely gravitational origin, as in the case of primordial black holes. A proper new physics model should be able to reproduce the correct dark-matter relic abundance and to satisfy the other experimental constraints coming from the direct and indirect searches for DM, while keeping itself well decoupled from ordinary matter in order to escape present TeV-range constraints in the LHC searches. The present picture can motivate the idea that the new physics responsible for DM might reside in a hidden or Dark Sector (DS), the latter consisting of new particles which are singlets under the SM gauge interactions. The DS can nevertheless interact with the SM via some portal-type interactions, mediated by heavy messengers which can communicate tree-level interactions between the SM and the DS fields~\cite{Essig:2013lka}. This mechanism can give rise to low-energy effective interactions between SM and DS particles induced by higher-dimensional operators, the latter being suppressed by the characteristic scale of the messenger fields. Then, a quite heavy messenger sector might naturally explain why the new physics, and in particular the DM sector, is still escaping all direct and indirect searches~\cite{Essig:2013lka}. A DS might contain a light or ultralight subsector. It might also be charged under its own long-distance interactions, in complete agreement with cosmological and astrophysical observations. Long-range forces mediated by a massless dark photon, corresponding to an exact $U(1)_D$ gauge symmetry in the DS, might also have a role in this picture. A DS scenario of the latter kind, which aims to solve the hierarchy puzzle of the SM Yukawa couplings while also providing natural DM candidates, has recently been proposed in \cite{Gabrielli:2013jka,Gabrielli:2016vbb}, implying a possible deep connection between the origin of Flavor and the DM interpretation. Dark-photon scenarios (both in the massive and massless cases) have been extensively considered in the literature in new-physics extensions of the SM gauge group \cite{Essig:2013lka},\cite{Holdom:1985ag}. They have also been investigated in cosmology and astrophysics~\cite{Spergel:1999mh,Vogelsberger:2012ku,Aarssen:2012fx,Tulin:2013teo,Ackerman:mha,Fan:2013tia,ArkaniHamed:2008qn,Zurek:2013wia}, mainly for improving model predictions. In collider physics, most of the present dark-photon searches focus on massive dark photons, where the broken $U(1)_D$ gauge field naturally develops, via kinetic mixing, a tree-level (milli-charged) interaction with ordinary charged matter~\cite{Holdom:1985ag}. However, a massless dark photon can behave in a radically different way with respect to a massive one.
Indeed, the kinetic mixing between the ordinary photon and a massless dark photon can be rotated away, restricting dark-photon interactions with ordinary matter to higher-dimensional operators~\cite{Holdom:1985ag}. Most of the present astrophysical and laboratory constraints applying to massive dark photons can be evaded in the massless case, allowing for potentially large $U(1)_D$ couplings in the DS~\cite{Ackerman:mha}. The DS could also contain so-called Axion-Like Particles (ALP's), $a$, a term loosely referring to neutral light (or ultralight) scalar (or pseudo-scalar) particles. These particles can be present in SM extensions motivated by the solution to the strong-CP problem (in which case the ALP is a QCD axion~\cite{Peccei:1977hh}) or can be pseudo-Nambu-Goldstone bosons corresponding to spontaneously broken continuous symmetries, either in the visible sector or in the DS, or moduli fields in string models \cite{Witten:1984dg,Svrcek:2006yi,Arvanitaki:2009fg,Acharya:2010zx}. In the literature, the phenomenological aspects of ALP's, including their collider searches, have been extensively studied~\cite{Bauer1,Bauer2,Dolan,Brivio,Mimasu:2014nea,Jaeckel,Baldenegro:2018hng}. Most of these studies focus on the ALP effective coupling to photons and/or gluons via the usual $a\, F^{\mu\nu} \!\tilde{F}_{\mu\nu}$ and $a \,G^{\mu\nu}\! \tilde{G}_{\mu\nu}$ types of interaction, involving the photon and gluon field-strength tensors, respectively. In a possible UV completion of the theory, this kind of effective interaction could result (at one loop) from integrating out some heavy messenger fields connecting the dark and the observable sectors. Then the effective scale could be identified with the typical mass of the messengers running in the loop, properly rescaled by the product of internal couplings and loop factor suppressions. While fixed-target experiments~\cite{Jaeckel:2010ni,graham} and $B$ factories~\cite{Merlo:2019anv} are particularly useful for searching for new weakly-coupled particles like ALP's in the MeV-GeV range, high-energy colliders are more effective for constraining ALP masses above a few GeV. Collider investigations carried out so far in this context, involving the aforementioned dimension-five operators, mostly focused on {\it tri-photon} and/or {\it mono-photon+missing-energy} signatures (depending on the ALP stability at the detector length scale) in the context of past and future $e^+e^-$ collider experiments as well as of (HL-)LHC experiments~\cite{Bauer1,Bauer2,Brivio,Mimasu:2014nea,Jaeckel,Baldenegro:2018hng}. Motivated by the above scenarios, we analyze a different type of portal interaction, namely Dark-ALP portals, which connect the visible sector and the DS via a higher-dimensional effective operator $a F^{\mu\nu} \!\tilde{\bar{F}}_{\mu\nu}$, involving a photon $\gamma$, a dark photon $\bar{\gamma}$ and an ALP $a$, with ${\bar{F}}_{\mu\nu}$ the dark-photon field strength. The $a\gamma\bar\gamma$ interaction can arise from the usual ALP coupling with photons, $a\, F^{\mu\nu} \!\tilde{F}_{\mu\nu}$, after rotating away the kinetic mixing in the photon--dark-photon sector, or can directly be induced at one loop by short-distance effects, after integrating out some heavy messenger fields. The same kind of interaction has been considered in the context of various low-energy constraints in~\cite{kaneta1,kaneta,deNiverville:2018hrc}.
We are now going to focus on a massive ALP scenario, with the $a$ mass $M_a$ in the range of $10\,{\rm GeV} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 230\,{\rm GeV}$, and a massless dark photon. Indeed, the new effective operator not only provides a rich phenomenology in the context of astrophysical, cosmological, and low-energy observations, but also opens up new ALP search strategies at collider experiments. In particular, we propose to look at the {\it di-photon plus missing-energy} channel $e^+e^- \to \gamma \gamma \bar{\gamma}$ with the aim of probing both the $a\, F^{\mu\nu} \!\tilde{F}_{\mu\nu}$ and $a\, F^{\mu\nu} \!\tilde{\bar{F}}_{\mu\nu}$ types of effective operators, in the context of future $e^+e^-$ collider experiments. As we will argue, this new channel not only has a better {\it signal-to-background} ratio than the conventional {\it tri-photon} channel, even before imposing any hard cut on the final-state objects, but also has more distinct kinematical features with respect to its SM background counterpart, thanks to the presence of a massless {\it invisible} dark photon in the final state. We will illustrate how such a signal can be efficiently separated from the SM background by adopting various unconventional kinematic observables. In particular, we find that requiring an almost vanishing {\it missing mass} in the final state is particularly effective in rejecting most of the SM background. While in the following we will assume new interactions involving a {\it pseudoscalar} ALP coupling, our results can be easily generalized to a neutral {\it scalar} particle. The present search can be of relevance for the various $e^+e^-$ collider projects that are presently under discussion, in particular for the linear colliders ILC~\cite{Bambade:2019fyw} and CLIC~\cite{Charles:2018vfv}, and for the circular options FCC-ee~\cite{Abada:2019zxq,Abada:2019lih} and CEPC~\cite{CEPCStudyGroup:2018ghi}. We will assume as reference collision center-of-mass (c.m.) energies and integrated luminosities the ones corresponding to the FCC-ee staging~\cite{Benedikt}. A straightforward projection for the setup corresponding to a different machine can be done in most cases. We have organized the present paper in the following way. In the next section, we discuss the theoretical framework under consideration, and various constraints relevant for the present study. In Section \ref{eventseln}, after presenting the implications of the LEP data for the present model, we suggest a new collider search strategy and corresponding event selection criteria at future $e^+e^-$ colliders. Section \ref{results} contains the results of our analyses. Finally, our conclusions are reported in Section \ref{conclusion}.
\section{Theoretical Framework} A portal interaction connecting the dark sector and the visible sector can be parametrized in the following way \begin{eqnarray} {\cal L}_{eff} = \frac{C_{a\gamma{\gamma}}}{\Lambda} a\, F^{\mu\nu} \!\tilde{F}_{\mu\nu} + 2 \frac{C_{a\gamma\bar{\gamma}}}{\Lambda} a\, F^{\mu\nu} \!\tilde{\bar{F}}_{\mu\nu}, \label{Leff} \end{eqnarray} where $F^{\mu\nu}$ and $\bar{F}_{\mu\nu}$ are respectively the field strengths of the photon $\gamma$ and of the dark photon $\bar{\gamma}$, $a$ is the ALP field, $C_{a\gamma{\gamma}}$ and $C_{a\gamma\bar{\gamma}}$ are dimensionless couplings, and $\Lambda$ is some high-energy scale\footnote{The factor 2 in the second term has been introduced to take into account the effect of the Wick contractions in the first term arising from the matrix element with two external photon states.}~\cite{kaneta1}. In principle, one could also add a term of the form $\frac{C_{a\bar{\gamma}\bar{\gamma}}}{\Lambda} a\, \bar{F}^{\mu\nu} \!\tilde{\bar{F}}_{\mu\nu}$, which might be present as well if one considers a specific UV completion of the low-energy theory. In our analysis this term does not play any role apart from modifying the ALP decay width, and hence the branching fractions of ALP decays. In specific UV completions of the low-energy theory, all these effective couplings can be related to each other. However, their connection will in general be model dependent. For instance, one could have an additional vertex involving the $Z$ boson, namely $\frac{C_{a \,Z\bar{\gamma}}}{\Lambda} a F_Z^{\mu\nu}\! \tilde{\bar{F}}_{\mu\nu}$, where $F_Z^{\mu\nu}$ is the $Z$ field strength, related to the interactions in Eq.(\ref{Leff}). In the following, we will take these couplings as independent parameters, assume that the contribution from $C_{a Z\bar{\gamma}}$ is negligible with respect to that of the two photon operators, and consider only the effects of the two dominant terms in~Eq.(\ref{Leff}). For our collider analysis, we have considered ALP masses in the range $10 {\rm ~GeV} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} \sqrt{s}$, where $\sqrt{s}$ is the c.m. energy of $e^+e^-$ collisions. Actually, the existing experimental limits related to ALP searches are mostly sensitive to a light ALP [in particular, lighter than $\mathcal{O}$(GeV)], and come from {\it e.g.} beam-dump and light-shining-through-a-wall experiments, the LSND and MiniBooNE experiments, and the lepton $g-2$~\cite{Essig:2013lka}. Given the ALP mass range we consider in the present analysis, most of the stringent bounds on the axion coupling $C_{a\gamma\gamma}/\Lambda$ in Eq.(\ref{Leff}) derived for very light ALP's can be neglected. \section{Dark-ALP portal with heavy ALP's at $e^+e^-$ colliders} \label{eventseln} In order to probe both the ALP-photon-photon, $a\gamma{\gamma}$, and ALP-photon-dark-photon, $a\gamma\bar{\gamma}$, couplings, as defined in Eq.(\ref{Leff}), we study the $e^+e^- \to \gamma\gamma\bar\gamma$ process at electron-positron colliders, where the dark photon $\bar\gamma$ behaves in the detector just like a neutrino, {\it i.e.} giving rise to {\it massless} missing momentum~\cite{Biswas:2015sha}. One of the main advantages of the {\it di-photon+missing-energy} channel over the conventional {\it tri-photon} channel (associated with the process $e^+e^- \to a\gamma\to \gamma\gamma\gamma$) is the smaller SM background.
As an illustration, at $\sqrt{s}\simeq M_Z$ the SM background in the {\it tri-photon} channel is about 4 pb with nominal cuts on the photon energy ($E_{\gamma}> 2$ GeV) and rapidity ($|\eta|<2.5$), along with an isolation requirement between any photon pair ($\Delta R > 0.01$, where $\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2}$), whereas that in the {\it di-photon+missing-energy} channel, coming from $\gamma\gamma\nu\bar{\nu}$ production, is about 0.1 pb. As collision c.m. energies, we consider the energies foreseen by the present FCC-ee staging, that is $\sqrt{s}\simeq [M_Z$, 160~GeV, 240 GeV], with integrated luminosities $L\simeq[150, 10, 5]$~ab$^{-1}$, respectively~\cite{Benedikt} (leaving aside here possible higher-energy runs at $\sqrt{s}\simeq 365$ GeV). The corresponding results for the ILC and CEPC cases, and some extrapolation to the CLIC energies, can anyway be obtained in a quite straightforward way from the present discussion. The $e^+e^- \to \gamma\gamma\bar\gamma$ process arises from the couplings in Eq.(\ref{Leff}) via the Feynman diagrams detailed in Figure \ref{FD}. The two diagrams interfere in a constructive way. Assuming $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}= 1$, the contribution due to interference varies from 4\% to about 17\% of the total cross section within the ALP mass range considered in this analysis. The dominant contributions come from the incoherent sum of the individual sub-processes. The impact of interference effects can of course change as one moves away from the $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}= 1$ assumption (a similar behaviour affects the ALP total width corresponding to the decays $a\to \gamma\bar{\gamma},\gamma{\gamma}$). \begin{figure}[H] \begin{center} \hskip -0.8cm \includegraphics[width=0.35\textwidth]{Latest_PLOTS/e+e-2aph_phphdph.pdf} \hskip 1.2cm \includegraphics[width=0.35\textwidth]{Latest_PLOTS/e+e-2adph_phphdph.pdf} \vskip 0.2cm \caption{Feynman diagrams for the signal sub-processes $e^+e^- \to \gamma a \to \gamma \gamma \bar{\gamma}$ and $e^+e^- \to \bar{\gamma} a \to \gamma \gamma \bar{\gamma}$.} \label{FD} \end{center} \end{figure} In the $e^+e^- \to \gamma\gamma\bar\gamma$ process, the ALP is produced on-shell, and it further undergoes a two-body decay to $\gamma\gamma$ or $\gamma\bar\gamma$. This helps a lot in the characterisation of the signal versus the SM background, which mainly arises from the $e^+e^- \to \gamma\gamma\bar{\nu}\nu$ process, for an {\it invisible} dark photon. Indeed, one can treat the signal events as a $2\to 2\to 3$ process (modulo interference effects) and the momenta of the final state particles are correspondingly constrained, with different constraints for different subprocesses. On the other hand, the background $e^+e^- \to \gamma\gamma\bar{\nu}\nu$ is characterised by $2\to 4$ and $2\to 3\to 4$ sub-processes, and the corresponding phase-space behaviour is in general significantly different from the signal one. The presence of the missing momentum associated with a single invisible {\it massless} dark photon in the final state will provide a crucial additional handle to separate the signal from the SM background. In the following we use the MadGraph5 event generator~\cite{Alwall:2014hca} to simulate both the signal and the background events. We have implemented the effective ALP vertices using the FeynRules package (v2.0)~\cite{Alloul:2013bka}. The FeynRules output (UFO model files) is then interfaced with MadGraph5 (v2.6.3.2).
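As a rough cross-check of the widths and branching fractions entering the simulation, the tree-level partial widths implied by Eq.(\ref{Leff}) can be estimated with the minimal Python sketch below. It uses the standard result $\Gamma(a\to\gamma\gamma)=C_{a\gamma\gamma}^2 M_a^3/(4\pi\Lambda^2)$ and, with the normalization of the second operator in Eq.(\ref{Leff}), $\Gamma(a\to\gamma\bar\gamma)\simeq 2\,C_{a\gamma\bar\gamma}^2 M_a^3/(4\pi\Lambda^2)$ for the distinguishable $\gamma\bar\gamma$ final state; the exact conventions should of course be checked against the UFO implementation:
\begin{verbatim}
import numpy as np

hbar_c = 1.973269804e-14   # GeV * cm

def widths(Ma, C_agg=1.0, C_agdg=1.0, Lam=1000.0):
    """Tree-level ALP partial widths in GeV (Ma and Lam in GeV)."""
    G_gg  = C_agg**2 * Ma**3 / (4.0 * np.pi * Lam**2)          # a -> gamma gamma
    G_gdg = 2.0 * C_agdg**2 * Ma**3 / (4.0 * np.pi * Lam**2)   # a -> gamma dark-photon
    return G_gg, G_gdg

# Prompt-decay check quoted below: Ma = 10 GeV, couplings ~ 1e-4, Lambda = 1 TeV
G1, G2 = widths(10.0, 1e-4, 1e-4)
Gtot = G1 + G2
print("BR(a -> gamma gamma)       ~", G1 / Gtot)   # ~1/3 for equal couplings
print("BR(a -> gamma dark-photon) ~", G2 / Gtot)   # ~2/3 for equal couplings
print("proper decay length c*tau ~ %.3f mm" % (10.0 * hbar_c / Gtot))  # O(0.1) mm
\end{verbatim}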
\subsection{LEP Analysis}\label{seclep} The effective $a\gamma\bar\gamma$ interactions between the ALP, photon and dark-photon can already be tested and constrained using the existing LEP data \cite{Gataullin:2005ge}. The L3 collaboration searched for single or multi-photon events with missing energy arising from $e^+e^- \to \bar{\nu}\nu \gamma (\gamma)$, in the c.m. energy range $\sqrt{s} = (189 - 208)$ GeV, with a total integrated luminosity of $619 ~{\rm pb}^{-1}$. In Table \ref{tablep}, we show the number of predicted signal events in our model, and the expected SM background events, along with the observed data in the {\it multiphoton+missing-energy} channel, after applying the set of cuts described in~\cite{Gataullin:2005ge}. \begin{table}[!h] \large \begin{center} \tabulinesep=1.2mm \begin{tabular}{|c|c|} \hline \hline $M_a$ (GeV) & $N_{\rm events} (n\gamma + \slashed{E})_{\rm LEP}$ \\ \hline 10 & 304 \\ \hline 50 & 810 \\ \hline 80 & 583 \\ \hline 150 & 77.8 \\ \hline 190 & 1.92 \\ \hline \hline expected (SM) & 115 \\ \hline observed & 101 \\ \hline \hline \end{tabular}\\ \caption{Number of predicted signal events in our model (assuming $C_{a\gamma \gamma} = C_{a\gamma \bar{\gamma}} =1$ and $\Lambda = 1$ TeV), in the {\it multiphoton+missing-energy} final state, with an integrated luminosity of 619 pb$^{-1}$, in the range $\sqrt{s} = (189-208)$ GeV, after applying the selection described in~\cite{Gataullin:2005ge}. The expected SM events and the observed ones are also presented in the last two rows, respectively, showing no excess in the data.} \label{tablep} \end{center} \end{table} We have also estimated the corresponding $2\sigma$ exclusion limits on the model parameters ({\it i.e.}, $C_{a\gamma \gamma} ~{\rm and}~ C_{a\gamma \bar{\gamma}}$, at fixed $M_a$ and $\Lambda$) coming from the LEP analysis, and presented them in the following sections. Note that the LEP analysis of the {\it multiphoton+missing-energy} channel has not been optimized for the $e^+e^- \to \gamma\gamma\bar\gamma$ process. In the next subsection, we will discuss the relevant optimization strategy in the context of future $e^+e^-$ colliders. \subsection{Future $e^+e^-$ colliders}\label{secfccee} In this section we discuss the prospect of the $e^+ e^- \to \gamma \gamma \bar{\gamma}$ in the context of future $e^+e^-$ colliders. We start by discussing the signal cross section and the event selection criteria taking into account various kinematical distributions. First we plot in Figure \ref{widthxsec} the ALP total decay width (left), and the total $e^+e^- \to \gamma\gamma\bar{\gamma}$ cross section (right) versus the ALP mass, assuming $C_{a\gamma\gamma} = C_{a\gamma\bar{\gamma}} = 1$ and $\Lambda = 1$ TeV. The left plot shows how, even for ALP masses as low as 10 GeV, the ALP is not stable on the detector length scale down to couplings of the order $C_{a\gamma\gamma}, C_{a\gamma\bar{\gamma}} \sim 10^{-4}$, with $\Lambda\sim 1$ TeV, for which the decay length is of the order of 0.1 mm. For narrower ALP widths, a displaced-vertices strategy might be in order. \vskip 0.2cm \begin{figure}[H] \begin{center} \hskip -0.8cm \includegraphics[width=0.44\textwidth]{Latest_PLOTS/decay-width.pdf} \hskip 1.2cm \includegraphics[width=0.44\textwidth]{Latest_PLOTS/XSEC_SIG.pdf} \caption{The left figure shows the variation of the ALP total decay width ($\Gamma_{\rm tot}$), as a function of the ALP mass $M_a$. The right figure shows the total cross section $\sigma(e^+ e^- \to \gamma \gamma \bar{\gamma})$ at different c.m. 
energies versus the ALP mass in the range $10 {\rm ~GeV} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} \sqrt{s}$. We assume $C_{a\gamma \gamma} = C_{a\gamma \bar{\gamma}} =1$ and $\Lambda = 1$ TeV. The cross section presented is the leading-order one, not including the contribution from the process with extra photons from initial-state radiation.} \label{widthxsec} \end{center} \end{figure} In order to account for initial-state radiation effects, we have also added the radiative contribution of the $e^+e^- \to \gamma\gamma\gamma\bar{\gamma}$ and $e^+e^- \to \gamma\gamma\gamma\,\bar{\nu}\nu$ processes to the signal and background, respectively. The effect of finite detector resolution on the energy of each photon has been incorporated with a gaussian smearing function parametrised as in a typical ILC detector: \begin{equation} \frac{\Delta E}{E} = \frac{16.6\%}{\sqrt{E/{\rm GeV}}} \oplus 1.1\%. \end{equation} In order to reconstruct signal events as events containing at least two isolated photons with some missing energy, we impose the following set of basic cuts on the final-state observables (in the following, we call {C1} this set of basic cuts for the {\it diphoton+missing-energy} events): \begin{itemize} \item minimum energy for each photon, $E_{\gamma} > 2.$ GeV, \item maximum rapidity for each photon, $|\eta_{\gamma}|<2.5$, \item angular separation between any pair of photons greater than $15^{\rm o}$, \item minimum missing transverse energy, $\slashed{E}_T > 5.$ GeV. \end{itemize} In Figure \ref{ecm91a}, we plot the normalised distributions for the missing energy $\slashed{E}$ associated to the dark photon (or, in the background case, to the $\bar{\nu}\nu$ system), and for the hardest-photon energy $E_{{\gamma}_1}$, for signal and background, at a c.m. energy $\sqrt{s}\simeq M_{Z}$. Figure \ref{ecm91b} shows the missing mass $\slashed{M}$ distribution associated to the invisible system (with $\slashed{M}$ defined below), and the diphoton-system invariant mass distribution, for signal and background. The same is shown in Figures \ref{ecm160a}, \ref{ecm160b}, \ref{ecm240a} and \ref{ecm240b} at different c.m. energies, namely, $\sqrt{s}=160$ GeV and 240 GeV. All these distributions have been obtained after applying the set of basic cuts {C1}, mentioned above. In general, the shape of the $\slashed{E}$ distributions for the signal contains a peak superimposed on a box distribution. This corresponds to the different kinematics associated the two subprocesses where either a $\gamma\bar\gamma$ or a $\gamma \gamma$ system is resonating at the ALP mass (see Figure \ref{FD}). In one case, the ALP is produced in association with a photon, giving rise to the box distribution for the dark photon which arises from the ALP decay. In the second case, the ALP is produced in association with the dark photon, which is essentially monochromatic. The peak position corresponding to the monochromatic component in the $\slashed{E}$ distribution is given by \begin{equation} \slashed{E}_{peak} \simeq \frac{\sqrt{s}}{2}(1-\frac{M_a^2}{s}), \end{equation} while the minima and maxima of the box distributions are given by \begin{equation} \slashed{E}_{min} \simeq \frac{1}{2}\frac{M_a^2}{\sqrt{s}} \; , \;\;\; \slashed{E}_{max} \simeq \frac{\sqrt{s}}{2}. \end{equation} When $M_a\approx \sqrt{s}$, the distribution splits up into two separate kinematical regions for a given $M_a$, because $\slashed{E}_{peak}\ll \slashed{E}_{min}\approx \slashed{E}_{max}$. 
On the other hand, for $0 \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} \sqrt{s/2}$, the peak lies between the $\slashed{E}_{min}$ and $\slashed{E}_{max}$ ($\slashed{E}_{min} < \slashed{E}_{peak} < \slashed{E}_{max}$). This is clearly shown in Figures \ref{ecm91a}, \ref{ecm160a}, and \ref{ecm240a}. \vskip 0.3cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/EMISS_MZ.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/PHOTON_P1_MZ.pdf} \caption{Missing energy ($\slashed{E}$) and hardest-photon energy ($E_{\gamma_1}$) distributions in the $e^+ e^- \to \gamma\gamma+\slashed{E}$ final state at $\sqrt{s}\simeq M_Z$, for a few ALP-mass benchmarks and the SM background ($\gamma\gamma\nu\bar{\nu}$). The basic cuts applied are the C1 set as defined in the text.} \label{ecm91a} \end{center} \vskip 0.3cm \end{figure} \vskip -0.5cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/XMMISS_MZ.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/PHOINVMASS12_MZ.pdf} \caption{Missing mass ($\slashed{M}$) and diphoton invariant mass ($M_{\gamma\gamma}$) distributions in the $e^+ e^- \to \gamma\gamma+\slashed{E}$ final state at $\sqrt{s}\simeq M_Z$, for a few ALP-mass benchmarks and the SM background ($\gamma\gamma\nu\bar{\nu}$). The basic cuts applied are the C1 set as defined in the text.} \label{ecm91b} \end{center} \end{figure} \vskip -0.5cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/EMISS_160.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/PHOTON_P1_160.pdf} \caption{Missing energy ($\slashed{E}$) and hardest-photon energy ($E_{\gamma_1}$) distributions in the $e^+ e^- \to \gamma\gamma+\slashed{E}$ final state at $\sqrt{s}=160$ GeV, for a few ALP-mass benchmarks and the SM background ($\gamma\gamma\nu\bar{\nu}$). The basic cuts applied are the C1 set as defined in the text.} \label{ecm160a} \end{center} \end{figure} \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/XMMISS_160.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/PHOINVMASS12_160.pdf} \caption{Missing mass ($\slashed{M}$) and diphoton invariant mass ($M_{\gamma\gamma}$) distributions in the $e^+ e^- \to \gamma\gamma+\slashed{E}$ final state at $\sqrt{s}=160$~GeV, for a few ALP-mass benchmarks and the SM background ($\gamma\gamma\nu\bar{\nu}$). The basic cuts applied are the C1 set as defined in the text.} \label{ecm160b} \end{center} \end{figure} In addition to the $\slashed{E}$ variable, we have introduced the invariant mass of the invisible system~\cite{Biswas:2015sha}. The invariant mass of the invisible system (or {\it missing mass}) $\slashed{M}$ is defined as \begin{equation} \slashed{M} = \sqrt{\slashed{E}^2 - \slashed{\vec{P}}^2} , \end{equation} and turns out to be a crucial variable to separate the signal from the SM background. Indeed, the signal invisible momentum is carried by the dark photon which is massless in our scenario. Therefore the missing mass is expected to peak near zero. On the contrary, the background invisible momentum is carried by the $\bar{\nu}\nu$-system for which the mass distribution is expected to be dominant at quite large values. We extensively make use of the $\slashed{E}$ and $\slashed{M}$ kinematical variables to separate the signal from the SM background. 
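To make the use of these variables concrete, the sketch below shows how $\slashed{E}$, $\slashed{M}$ and the $\slashed{E}$ endpoints given above are evaluated (a minimal numpy illustration; the photon four-momenta are placeholder values, while in the analysis they come from the simulated, smeared events):
\begin{verbatim}
import numpy as np

sqrt_s, Ma = 240.0, 80.0   # c.m. energy and ALP mass in GeV (illustrative)

# Expected missing-energy endpoints for the two signal sub-processes
E_peak = 0.5 * sqrt_s * (1.0 - Ma**2 / sqrt_s**2)  # monochromatic dark photon
E_min  = 0.5 * Ma**2 / sqrt_s                      # box edges (dark photon from ALP decay)
E_max  = 0.5 * sqrt_s
print(f"E_peak = {E_peak:.1f} GeV, box = [{E_min:.1f}, {E_max:.1f}] GeV")

# Per-event missing energy and missing mass from the two reconstructed photons,
# four-momenta written as (E, px, py, pz) in GeV (placeholder signal-like event)
p_in   = np.array([sqrt_s,   0.0, 0.0,   0.0])
gamma1 = np.array([  60.0,  60.0, 0.0,   0.0])
gamma2 = np.array([ 100.0, -60.0, 0.0, -80.0])

p_miss  = p_in - gamma1 - gamma2
E_miss  = p_miss[0]
M_miss2 = p_miss[0]**2 - np.sum(p_miss[1:]**2)
M_miss  = np.sqrt(max(M_miss2, 0.0))   # ~0 for a single massless invisible particle
print(f"E_miss = {E_miss:.1f} GeV, M_miss = {M_miss:.1f} GeV")
\end{verbatim}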
To separate the signal from the background, we set an upper limit on the missing energy and the missing mass associated with the accepted events. These two variables are quite independent, unless one fixes the invariant mass of the diphoton system $M_{\gamma\gamma}$, on which we actually do not put any constraint. On the basis of the above distributions, we will impose a cut $\slashed{M}< 10$ GeV, and a cut $\slashed{E} < \sqrt{s}/2$, in order to optimize the signal significance, for the following choice of $\sqrt{s}$ and $M_{a}$ parameters: \vskip -0.5cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/EMISS_240.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/PHOTON_P1_240.pdf} \caption{Missing energy ($\slashed{E}$) and hardest-photon energy ($E_{\gamma_1}$) distributions in the $e^+ e^- \to \gamma\gamma+\slashed{E}$ final state at $\sqrt{s}=240$ GeV, for a few ALP-mass benchmarks and the SM background ($\gamma\gamma\nu\bar{\nu}$). The basic cuts applied are the C1 set as defined in the text.} \label{ecm240a} \end{center} \end{figure} \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/XMMISS_240.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/PHOINVMASS12_240.pdf} \caption{Missing mass ($\slashed{M}$) and diphoton invariant mass ($M_{\gamma\gamma}$) distributions in the $e^+ e^- \to \gamma\gamma+\slashed{E}$ final state at $\sqrt{s}=240$ GeV, for a few ALP-mass benchmarks and the SM background ($\gamma\gamma\nu\bar{\nu}$). The basic cuts applied are the C1 set as defined in the text.} \label{ecm240b} \end{center} \end{figure} \begin{itemize} \item $\sqrt{s}\simeq M_{Z}$, $10 {\rm ~GeV} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 80 {\rm ~GeV}$; \item $\sqrt{s}=160$ GeV, $10 {\rm ~GeV} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 150 {\rm ~GeV}$; \item $\sqrt{s}=240$ GeV, $10 {\rm ~GeV} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 230 {\rm ~GeV}$. \end{itemize} \vskip 0.5cm \section{Results and discussion} \label{results} We now present the results of our analyses. The final choice of the event selection criteria is a natural consequence of the kinematic distributions discussed in the previous section. The combination of the missing energy $\slashed{E}$ and missing mass $\slashed{M}$ cuts reduces the SM background substantially. For $\sqrt{s}\simeq M_{Z}$ and 160~GeV, the optimized choice of cuts is independent of the ALP mass. This is because both the missing-energy and the missing-mass distributions have very small or almost negligible overlap with background events. On the other hand, on the basis of the $\slashed{E}$ distributions, a mild $M_a$ dependence is expected at $\sqrt{s}=240$~GeV. However, we could obtain a significance very close to the one corresponding to the $M_a$-dependent optimized cuts, by adopting the same (rescaled) cut choice as used at lower $\sqrt s$.
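Schematically, the residual cross section $\sigma_{\rm cut}$ after the full selection is obtained by applying the cut-flow to the generated event sample and rescaling the total cross section by the surviving fraction. A minimal sketch is given below (the event arrays are placeholders for the MadGraph5 output; the field names are hypothetical):
\begin{verbatim}
import numpy as np

def sigma_cut(sigma_tot_fb, ev, sqrt_s):
    """Apply the C1 cuts plus the missing-energy/missing-mass cuts and rescale
    the total cross section by the surviving event fraction. `ev` is a structured
    array with the two leading-photon energies/rapidities, their angular
    separation, and the reconstructed ET_miss, E_miss, M_miss (placeholders)."""
    c1 = ((ev["E_g1"] > 2.0) & (ev["E_g2"] > 2.0) &
          (np.abs(ev["eta_g1"]) < 2.5) & (np.abs(ev["eta_g2"]) < 2.5) &
          (ev["dtheta_gg_deg"] > 15.0) & (ev["ET_miss"] > 5.0))
    kin = (ev["E_miss"] < 0.5 * sqrt_s) & (ev["M_miss"] < 10.0)
    passed = c1 & kin
    return sigma_tot_fb * passed.sum() / len(ev)

# e.g. for the Ma = 80 GeV benchmark at sqrt(s) = 240 GeV one would call
# sigma_cut(1478.0, events_80GeV, 240.0)
# -> ~1077 fb, with ~7.3e5 of the 1e6 generated events surviving
\end{verbatim}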
The signal events reported correspond to $C_{a\gamma\gamma} = C_{a\gamma\bar{\gamma}} = 1$ and $\Lambda = 1$ TeV, and can be easily rescaled for different couplings and energy scale. We name $\sigma_{\rm cut}$, the residual cross sections after applying the complete cut-flow to the process phase-space. \vskip 0.4cm \begin{table}[!h] \large \begin{center} \tabulinesep=1.2mm \begin{tabular}{|l|c|c|c|c|c|} \hline \hline \multicolumn{2}{|c|}{$\sqrt{s}\simeq M_Z$} & \multicolumn{3}{|c}{~~~~~~~~~~~~$N_{\rm events}$}& \\ \hline $M_a$ (GeV) & $\sigma_{\rm tot}$ (fb) & Basic cuts & $\slashed{E}< 46$ GeV & $\slashed{M}< 10$ GeV & $\sigma_{\rm cut}$ (fb) \\ \hline ~ ~10 & ~2002~ & 839514 & 813996 & 704647 & 1410 \\ \hline ~ ~50 & ~719~ & 935228 & 929024 & 813901 & 585 \\ \hline ~ ~80 & ~25.9~ & 877285 & 874469 & 833578 & 21.7 \\ \hline \hline ~ $\gamma\gamma \nu \bar{\nu}$ & ~2544~ & 12884 & 44 & 1 & 0.0025 \\ \hline \hline \end{tabular}\\ \caption{Cross sections (before and after cuts) and event counts out of $10^6$ simulated events, for the signal and the SM $\gamma\gamma \nu \bar{\nu}$ background at $\sqrt s\simeq M_{Z}$ (versus $M_a$), applying the C1 set of basic cuts as described in the text, followed by further sequential cut optimization. We assume $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=1$ and $\Lambda = 1$ TeV.} \label{eventtab1} \end{center} \end{table} \begin{table}[!h] \large \begin{center} \tabulinesep=1.2mm \begin{tabular}{|l|c|c|c|c|c|} \hline \hline \multicolumn{2}{|c|}{$\sqrt{s}=160$ GeV} & \multicolumn{3}{|c}{~~~~~~~~~~~~$N_{\rm events}$}& \\ \hline $M_a$ (GeV) & $\sigma_{\rm tot}$ (fb) & Basic cuts & $\slashed{E}< 80$ GeV & $\slashed{M}< 10$ GeV & $\sigma_{\rm cut}$ (fb) \\ \hline ~ ~ 10 & ~2054~ & 511509 & 476414 & 381606 & 784 \\ \hline ~ ~ 80 & ~885~ & 950028 & 942370 & 773597 & 685 \\ \hline ~ ~150 & ~4.62~ & 882485 & 878570 & 818390 & 3.78 \\ \hline \hline ~ $\gamma\gamma \nu \bar{\nu}$ & ~3089~ & 186189 & 133 & 1 & 0.0031 \\ \hline \hline \end{tabular}\\ \caption{Cross sections (before and after cuts) and event counts out of $10^6$ simulated events, for the signal and the SM $\gamma\gamma \nu \bar{\nu}$ background at $\sqrt s = 160$ GeV (versus $M_a$), applying the C1 set of basic cuts as described in the text, followed by further sequential cut optimization. We assume $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=1$ and $\Lambda = 1$ TeV.} \label{eventtab2} \end{center} \end{table} \begin{table}[!h] \large \begin{center} \tabulinesep=1.2mm \begin{tabular}{|l|c|c|c|c|c|} \hline \hline \multicolumn{2}{|c|}{$\sqrt{s}=240$ GeV} & \multicolumn{3}{|c}{~~~~~~~~~~~~$N_{\rm events}$}& \\ \hline $M_a$ (GeV) & $\sigma_{\rm tot}$ (fb) & Basic cuts & $\slashed{E}< 120$ GeV & $\slashed{M}< 10$ GeV & $\sigma_{\rm cut}$ (fb) \\ \hline ~ ~ 10 & ~2071~ & 245025 & 229988 & 163157 & 338 \\ \hline ~ ~ 80 & ~1478~ & 954942 & 941647 & 728484 & 1077 \\ \hline ~~ 150 & ~473~ & 947603 & 941097 & 753879 & 356 \\ \hline ~~ 230 & ~2.92~ & 912354 & 907124 & 798522 & 2.33 \\ \hline \hline ~ $\gamma\gamma \nu \bar{\nu}$ & 1615 & 175959 & 16729 & 5 & 0.0081 \\ \hline \hline \end{tabular}\\ \caption{Cross sections (before and after cuts) and event counts out of $10^6$ simulated events, for the signal and the SM $\gamma\gamma \nu \bar{\nu}$ background at $\sqrt s = 240$ GeV (versus $M_a$), applying the C1 set of basic cuts as described in the text, followed by further sequential cut optimization. 
We assume $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=1$ and $\Lambda = 1$ TeV.} \label{eventtab3} \end{center} \end{table} In Figures \ref{exclplot_ecm91}, \ref{exclplot_ecm160} and \ref{exclplot_ecm240}, we show the exclusion plot in the $(C_{ax}, M_a)$ plane, where $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=C_{ax}$, for various c.m. energies and corresponding expected integrated luminosities. The $2\sigma$ limit has been obtained by assuming for the signal significance the following definition \begin{eqnarray} \tilde\sigma = \sqrt{2[(S+B)ln(1+S/B)-S]} \label{eqnsignf} \end{eqnarray} where $S$ and $B$ are the numbers of observed signal and background events, respectively, corresponding to the residual signal and background cross sections $\sigma_{\rm cut}$ in Tables \ref{eventtab1}--\ref{eventtab3}. One can see that the $2\sigma$ reach for $M_a\!\!\sim$10 GeV is less sensitive compared to the one for $M_a\!\!\sim$ 20~GeV, at $\sqrt{s} =$ 160 and 240 GeV, despite the large production cross section in this mass region. The decrease in sensitivity at low $M_a$ is due to the effect of the photon isolation requirement when the two photons coming from the ALP decay are mostly collimated. This effectively reduces the cut efficiency. On the other hand, the $2\sigma$ limit on the couplings becomes less sensitive at larger $M_a$ values, because of the corresponding lower production cross section. \\ From Figures \ref{exclplot_ecm91}, \ref{exclplot_ecm160} and \ref{exclplot_ecm240}, one can see that, for masses $M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 60$ GeV, the best exclusion limit is achieved at $\sqrt{s}\simeq M_{Z}$. This is because of the very high expected integrated luminosity of about 150 ab$^{-1}$ at this energy, despite the comparatively lower signal cross sections. For 60 GeV$\lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} M_a \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 90$ GeV, a slightly better sensitivity is obtained at $\sqrt{s}=160$ GeV, where a factor-two enhancement in luminosity with respect to $\sqrt{s}=240$ GeV more than compensates the enhancement in the production cross section at higher collision energy. \vskip 0.4cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_1D_MZ.pdf} \caption{The $2\sigma$ exclusion limit in the ($C_{ax},M_a$) plane at $\sqrt{s}\simeq {M_{Z}}$, for an integrated {luminosity} of 150 ab$^{-1}$, assuming $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=C_{ax}$, and $\Lambda = 1$ TeV. } \label{exclplot_ecm91} \end{center} \vskip 0.3cm \end{figure} \vskip -0.5cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_1D_160.pdf} \caption{The $2\sigma$ exclusion limit in the ($C_{ax},M_a$) plane at $\sqrt{s}= {160}$ GeV, for an integrated {luminosity} of ${10} {\rm ~ab}^{-1}$, assuming $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=C_{ax}$, and $\Lambda = 1$ TeV. } \label{exclplot_ecm160} \end{center} \end{figure} In Figures \ref{exclplot_10_80} and \ref{exclplot_150_230}, we also plot the $2\sigma$ exclusion contours in the $(C_{a\gamma{\gamma}}, C_{a\gamma\bar{\gamma}})$ plane for fixed $M_a$. The red, purple and green lines represent the bounds obtained from the analysis discussed in Section~\ref{secfccee}. The black dashed curve represents the bound coming from the LEP analysis (as described in \ref{seclep}), by assuming a null result. 
In these plots, we denote by ${\rm LEP}_{200}$ the LEP searches in the range $\sqrt{s}= (189-208)$ GeV, with a total integrated luminosity of 619~pb$^{-1}$~\cite{Gataullin:2005ge}. From the $2D$ plots, it is also clear that the future collider experiments will be much more sensitive, and have much better reach in the $(C_{a\gamma{\gamma}}, C_{a\gamma{\bar\gamma}})$ plane, thanks both to the higher luminosity and to the optimized selection strategy. The significance definition in Eq.(\ref{eqnsignf}) is used everywhere to make this comparison. \\ \vskip 0.4cm \begin{figure}[H] \begin{center} \vskip -0.2cm \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_1D_240.pdf} \caption{The $2\sigma$ exclusion limit in the ($C_{ax},M_a$) plane at $\sqrt{s}= {240}$ GeV, for an integrated {luminosity} of 5 ab$^{-1}$, assuming $C_{a\gamma\gamma}=C_{a\gamma\bar{\gamma}}=C_{ax}$, and $\Lambda = 1$ TeV. } \label{exclplot_ecm240} \end{center} \end{figure} \vskip -0.3cm \begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_M10_2D.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_M80_2D.pdf} \caption{The $2\sigma$ exclusion limits in the $(C_{a\gamma{\gamma}}, C_{a\gamma{\bar\gamma}})$ plane for fixed $M_a$ (left $M_a=10$ GeV, right $M_a=80$ GeV). The colored lines (red, green and purple) depict the bound coming from the analysis at $\sqrt{s}\simeq {M_{Z}}$, 160~GeV, and 240~GeV, for an integrated {luminosity} of 150 ab$^{-1}$, 10 ab$^{-1}$, and 5 ab$^{-1}$, respectively, and the black dashed line represents the bound coming from the LEP analysis as described in the text. We have assumed $\Lambda = 1$~TeV.} \label{exclplot_10_80} \end{center} \end{figure} One can notice from the plots in Figures \ref{exclplot_10_80} and \ref{exclplot_150_230} that, when one of the couplings is much lower than the other, the sensitivity to the smaller coupling becomes independent of the larger one. This feature arises from the scaling of the cross section with the couplings, when ignoring interference effects. Indeed, the cross section for the $e^+e^- \to \gamma\gamma\bar{\gamma}$ process scales as $\sigma (e^+e^- \to \gamma\gamma\bar{\gamma}) \approx C^2_{a \gamma\gamma}[\sigma_1 + \frac{\Gamma_{1}}{\Gamma_{2}}\sigma_2]$ for $C_{a \gamma\gamma}\ll C_{a \gamma\bar{\gamma}}$, while for $C_{a \gamma\bar{\gamma}}\ll C_{a \gamma\gamma}$ the scaling is given by $\sigma (e^+e^- \to \gamma\gamma\bar{\gamma}) \approx C^2_{a \gamma\bar{\gamma}}[\frac{\Gamma_{2}}{\Gamma_{1}} \sigma_1 + \sigma_2]$ [where the factors $\sigma_1 = \sigma ({e^+e^- \to \gamma a})$, $\sigma_2 = \sigma ({e^+e^- \to \bar{\gamma} a})$, $\Gamma_1 = \Gamma ({a\to \gamma{\gamma}})$, and $\Gamma_2 = \Gamma ({a\to \gamma\bar{\gamma}})$ are all evaluated at $C_{a\gamma\gamma} = C_{a\gamma\bar{\gamma}} = 1$ and $\Lambda = 1$~TeV]. Therefore the total cross section in either of these limits roughly scales with the smaller of the two couplings squared, and becomes independent of the other.
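To make the procedure behind the exclusion curves concrete, the following minimal sketch (in Python; it is an illustration only, and not the analysis code used here) evaluates the significance of Eq.(\ref{eqnsignf}) for the residual cross sections $\sigma_{\rm cut}$ of Table~\ref{eventtab3} and the quoted integrated luminosity; the $2\sigma$ contours then follow by rescaling the signal cross section with the coupling dependence discussed above and solving $\tilde\sigma = 2$.
\begin{verbatim}
# Illustrative sketch only: significance of Eq. (eqnsignf) from residual
# cross sections (in fb) and an integrated luminosity (in fb^-1).
import math

def significance(s_xsec_fb, b_xsec_fb, lumi_fb):
    """Median significance sqrt(2[(S+B)ln(1+S/B)-S])."""
    S = s_xsec_fb * lumi_fb   # expected signal events
    B = b_xsec_fb * lumi_fb   # expected background events
    return math.sqrt(2.0 * ((S + B) * math.log(1.0 + S / B) - S))

lumi = 5000.0          # 5 ab^-1 at sqrt(s) = 240 GeV
sigma_signal = 2.33    # sigma_cut for M_a = 230 GeV at unit couplings (Table 3)
sigma_bkg = 0.0081     # sigma_cut for the gamma gamma nu nubar background

print(significance(sigma_signal, sigma_bkg, lumi))  # far above 2 for unit couplings
\end{verbatim}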
\begin{figure}[H] \begin{center} \hskip -2cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_M150_2D.pdf} \hskip 0.5cm \includegraphics[width=0.49\textwidth]{Latest_PLOTS/Exclplot_M230_2D.pdf} \caption{The $2\sigma$ exclusion limits in the $(C_{a\gamma{\gamma}}, C_{a\gamma{\bar\gamma}})$ plane for fixed $M_a$ and given $\sqrt{s}$ (left: $M_a=150$ GeV at $\sqrt{s}=$ 160~GeV and 240~GeV (green and purple lines), right: $M_a=230$ GeV at $\sqrt{s}=$ ~240~GeV (purple line)), for integrated {luminosities} as shown. The black dashed line in the left figure represents the bound coming from the LEP analysis as described in the text, for $M_a=150$ GeV. We have assumed $\Lambda = 1$~TeV.} \label{exclplot_150_230} \end{center} \end{figure} Furthermore, the sensitivity of the exclusion limit in the $(C_{a\gamma{\gamma}}, C_{a\gamma{\bar\gamma}})$ plane for fixed $M_a$ is not symmetric in the two couplings. The sensitivity reach for $C_{a\gamma\bar{\gamma}}$ is indeed slightly better than that for $C_{a\gamma\gamma}$. This is because the partial widths $\Gamma_2$ and $\Gamma_1$ enter the expressions for the cross sections describing the scaling (just defined) in inverted ratios, and $\frac{\Gamma_{2}}{\Gamma_{1}} > 1$. Figures~\ref{exclplot_10_80} and \ref{exclplot_150_230} indeed show that, when there is a clear hierarchy in the $a\gamma{\gamma}$ and $a\gamma\bar{\gamma}$ couplings, the $2\sigma$ exclusion depends essentially only on the smaller coupling. However, if an $e^+ e^- \to \gamma\gamma\bar{\gamma}$ signal were detected, our analysis would not be able to distinguish the corresponding hierarchical ordering of the two couplings. \vskip 1.0cm \section{Summary and conclusions} \label{conclusion} We have considered an axion-like particle with mass in the range $(10 - 230)$ GeV, coupled to a photon and a massless dark-photon through higher-dimensional effective operators. Such operators may occur naturally in a theory which includes light pseudo-scalar and dark-photon particles, and can be generated at the loop level by heavy messenger particles charged under both $U(1)_{em}$ and $U(1)_{D}$. We have investigated the future collider prospects for a scenario including these effective operators. In particular, we have thoroughly studied and laid down the strategy to probe a Dark-ALP portal as defined in Eq.(\ref{Leff}), at future $e^+e^-$ colliders. The conventional search for {\it three-photon} final states (which does not include the $a\gamma\bar{\gamma}$ coupling) is replaced by the $2\gamma+\slashed{E}$ signature arising from the $e^+ e^- \to \gamma\gamma\bar{\gamma}$ process, with the missing momentum, arising from the dark-photon in the collision final state, well characterized by a vanishing missing mass. We assumed a prompt ALP decay on a typical-detector length scale. The main SM background coming from the $ e^{+}e^{-} \to \gamma\gamma \nu \bar{\nu}$ channel can then be controlled by proper cutflows on the most sensitive kinematic variables, namely the missing energy $\slashed{E}$ and missing mass $\slashed{M}$. Relevant kinematical distributions and consequent selection strategies for the $e^+ e^- \to \gamma\gamma\bar{\gamma}$ process have been analysed. The missing mass variable associated with the dark photon turns out to be particularly efficient for the $S/B$ optimization. Projections at future $e^+e^-$ colliders for the corresponding constraints on the $a\gamma{\gamma}$ and $a\gamma\bar{\gamma}$ couplings have been discussed.
For ALP masses not too close to the production threshold, the FCC-ee has in general the potential to constrain down to $\mathcal{O} (10^{-3}-10^{-4})$ the ALP-photon-photon ($a\gamma{\gamma}$) and ALP-photon-dark-photon ($a\gamma\bar{\gamma}$) couplings, as defined in Eq.(\ref{Leff}), assuming $\Lambda = 1$~TeV. Note that our analysis is not sensitive to the CP-property of the produced scalar particle, and can also be effectively used to probe possible couplings of CP-even scalars to photons and dark-photons, if occurring in particular new physics scenarios. The corresponding search strategies for the ILC, CLIC, CEPC collision-energy and integrated-luminosity setups can be inferred from the present analysis in a quite straightforward way. \section{Acknowledgements} E.G. wishes to thank the Theoretical Physics Department at CERN for its hospitality during the completion of this work. S.B. would like to thank the Korea Institute for Advanced Study for extending its hospitality while part of this work was carried out, and also thanks E. J. Chun for useful discussions and comments. A.C. would like to thank University Grant Commission (UGC), India, for supporting this work by means of a NET Fellowship (Ref. No. 22/06/2014 (i) EU-V and Sr. No. 2061451168).
\section{Introduction}\label{sec:intro} Casimir and related effects, in which quantum phenomena depend upon the existence of boundary conditions for a quantum field, have been extensively studied~\cite{rev}. Many different points of view and approaches to this kind of problem have been followed, under quite different sets of assumptions regarding the system; i.e., its intrinsic properties, and the conditions under which one wants to evaluate the Casimir force. \noindent In particular, there has been much interest in studying the Casimir effect at a finite temperature ($T>0$), a study that can be undertaken at different levels, distinguished by the way of taking into account all the possible $T>0$ effects. One could, for example, include thermal effects in the description of the matter on the mirrors (a very active research topic~\cite{dispers}), or rather in the physical description of the vacuum field~\cite{Lim}, or in both. In this article, we shall assume perfect (at any temperature) mirrors, with thermal effects restricted to the vacuum field. It has been noted that thermal and Casimir-like effects share some similarities; this should hardly be surprising, since both may be regarded, essentially, as finite-size effects, the former in the Euclidean time interval $[0,\beta]$ (with periodicity/antiperiodicity conditions), and the latter for a spatial coordinate with Dirichlet or Neumann conditions. Even though the nature of the boundary conditions is different, it should be expected that, when both effects are present, interesting relations (`dualities') will arise between the dependence on the inverse temperature $\beta = T^{-1}$ and the distance between the mirrors. In this article, we present an approach to the calculation of the Casimir free energy for a scalar field at finite temperature in $d+1$ dimensions, subject to Dirichlet boundary conditions, including self-interactions. This approach is based on the use of a path-integral formulation, whereby the $d+1$-dimensional problem is mapped to a dimensionally reduced one, with fields living on the boundaries. For all the cases considered we analyze the high and low temperature expansions, computing also perturbative corrections, induced by the self-interaction, to the Casimir free energy of the non-interacting fields. This work is organized as follows: in section~\ref{sec:method} we present our approach, introducing conventions and definitions. In section~\ref{sec:scalarfree} we deal with the calculations and the corresponding results for a free real scalar field. Interactions are introduced in section~\ref{sec:scalarint} and, in section~\ref{sec:concl}, we present our conclusions. \section{The method}\label{sec:method} The main object we shall be interested in is the free energy $F(\beta,a)$ for a real scalar field $\varphi$ in $d+1$ spacetime dimensions, which is subject to Dirichlet boundary conditions on two plane mirrors, separated by a distance $a$ along the direction corresponding to the $x_d$ coordinate.
In terms of the corresponding partition function ${\mathcal Z}(\beta,a)$, $F(\beta,a)$ is given by: \begin{equation} F(\beta,a) \;=\; -\frac{1}{\beta} \, \ln {\mathcal Z}(\beta,a) \;, \end{equation} while for ${\mathcal Z}(\beta,a)$ we shall use its standard functional integral representation: \begin{equation} {\mathcal Z}(\beta,a) \;=\; \int \big[{\mathcal D}\varphi\big] \, e^{- S(\varphi)}\;, \end{equation} where $S$ is the Euclidean action at $T>0$; i.e., with the imaginary time variable restricted to the interval $[0,\beta]$~\cite{Kapusta:2006pm}; periodic boundary conditions for the fields with respect to this coordinate are implicitly assumed. In this work, we consider an action $S$ such that \mbox{$S = S_0 + S_I$}, where the free part of the action, $S_0$, is given by: \begin{equation}\label{eq:defs0} S_0(\varphi) \;=\; \frac{1}{2} \, \int_0^\beta d\tau \int d^dx \big(\partial_\mu \varphi \partial_\mu \varphi + m^2 \varphi^2 \big) \;, \end{equation} while the interaction part, $S_I$, is given by: \begin{equation}\label{eq:defsi} S_I(\varphi) \;=\;\frac{\lambda}{4!} \int_0^\beta d\tau \int d^dx \, \big[ \varphi(\tau, {\mathbf x}) \big]^4 \;. \end{equation} We have used brackets in $\big[{\mathcal D}\varphi\big]$ to indicate that the path-integral measure includes only those field configurations satisfying Dirichlet boundary conditions on the loci of the mirrors, which correspond to two parallel planes. Accordingly, we use a coordinate system such that, if the Euclidean coordinates are denoted by \mbox{$x=(x_0,x_1,\ldots,x_d)$} ($x_0 \equiv \tau$), the mirrors then correspond to the regions: $x_d=0$ and $x_d=a$, and will be parametrized as follows: \begin{equation} x = (x_\parallel, 0) \;,\;\; x = (x_\parallel, a) \;, \end{equation} where $x_\parallel=(\tau,\mathbf{x_{\parallel}})=(\tau,x_1,\dots,x_{d-1})$. Then we have: \begin{equation}\label{eq:defmeasure} \big[{\mathcal D}\varphi\big] \;=\; {\mathcal D}\varphi \; \delta\big[\varphi(\tau,{\mathbf x}_\parallel,0)\big] \; \delta\big[\varphi(\tau,{\mathbf x}_\parallel,a)\big] \;, \end{equation} where the $\delta$'s with bracketed arguments are understood in the functional sense; for example: \begin{equation} \delta\big[\varphi(\tau,{\mathbf x}_\parallel,a)\big] \;=\; \prod_{\tau,{\mathbf x}_\parallel} \, \delta\big(\varphi(\tau,{\mathbf x}_\parallel,a)\big) \;\;, \end{equation} while the ones on the right hand side are ordinary ones. Besides, we assume that the ($\beta$-dependent) factor that comes from the integration over the canonical momentum has been absorbed into the definition of ${\mathcal D}\varphi$, so that performing the integral over $\varphi$ does indeed reproduce the partition function (without any missing factors). Following~\cite{Kardar:1997cu}, the functional $\delta$-functions are exponentiated by means of two auxiliary fields, $\xi_1(x_\parallel)$ and $\xi_2(x_\parallel)$, living in $d$ spacetime dimensions: \begin{equation}\label{eq:z00f} {\mathcal Z}(\beta,a) \;=\; \int {\mathcal D}\varphi \, {\mathcal D}\xi_1 \, {\mathcal D}\xi_2 \; e^{-S(\varphi) \,+\, i\,\int d^{d+1}x J_p(x) \varphi(x)}\,, \end{equation} where we introduced the singular current: \begin{equation} J_p(x)\;\equiv\; \delta (x_d) \xi_1(x_\parallel) \,+\, \delta (x_d- a) \xi_2(x_\parallel) \,, \end{equation} whose support is the region occupied by the mirrors. Since we shall treat interactions in a perturbative approach, it is convenient to deal first with the free theory, as a necessary starting point to include the interactions afterwards.
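Before doing so, it may help to see the auxiliary-field trick of Eq.(\ref{eq:z00f}) at work in a zero-dimensional toy analogue. The following short sketch (in Python, using standard NumPy/SciPy routines; it is purely illustrative, with arbitrarily chosen names and numbers, and plays no role in what follows) imposes a delta-function constraint on a single Gaussian variable by exponentiating it with one auxiliary variable, and checks that the constrained integral is reproduced.
\begin{verbatim}
# Toy analogue of the delta-function exponentiation: Z = \int dphi delta(phi-c) e^{-phi^2/2}
# equals e^{-c^2/2}. Writing delta(phi-c) = \int dxi/(2 pi) e^{i xi (phi - c)} and doing
# the Gaussian phi-integral first leaves an ordinary convergent xi-integral.
import numpy as np
from scipy.integrate import quad

c = 0.7   # position of the "mirror" constraint (arbitrary)

# After the phi-integral: \int dphi e^{-phi^2/2 + i xi phi} = sqrt(2 pi) e^{-xi^2/2};
# the imaginary part of the remaining xi-integral vanishes, leaving a cosine transform.
integrand = lambda xi: np.exp(-xi**2 / 2.0) * np.cos(xi * c)
Z_aux = np.sqrt(2.0*np.pi) / (2.0*np.pi) * quad(integrand, -np.inf, np.inf)[0]

Z_direct = np.exp(-c**2 / 2.0)   # integrating the delta function directly
print(Z_aux, Z_direct)           # both equal e^{-c^2/2}
\end{verbatim}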
\section{Free theory}\label{sec:scalarfree} When the theory is free ($\lambda=0$), $S = S_0$, and the integral over $\varphi$ becomes Gaussian. The resulting partition function, denoted by ${\mathcal Z}^{(0)}(\beta,a)$, is then: \begin{equation}\label{eq:zint1} {\mathcal Z}^{(0)}(\beta,a) \;=\; {\mathcal Z}^{(0)}(\beta) \,\times \, \int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \; e^{-S_p(\xi)} \;, \end{equation} where ${\mathcal Z}^{(0)}(\beta)$ is the (free) partition function in the absence of the mirrors, and: \begin{equation} S_p(\xi) \;=\; \frac{1}{2} \,\int d^dx_\parallel d^dy_\parallel \xi_a(x_\parallel) \Omega_{ab}(x_\parallel,y_\parallel) \xi_b(y_\parallel) \;, \end{equation} where $a,b=1,2$, and \begin{equation}\label{eq:omega} \Omega(x_{\parallel};y_{\parallel})=\left[ \begin{array}{cc} \Delta(x_{\parallel},0;y_{\parallel},0) & \Delta(x_{\parallel},0;y_{\parallel},a) \\ \Delta(x_{\parallel},a;y_{\parallel},0) & \Delta(x_{\parallel},a;y_{\parallel},a) \\ \end{array} \right] \;, \end{equation} where $\Delta$ is the free, imaginary-time, propagator. It may be written explicitly as follows: \begin{equation} \Delta(\tau_x,\mathbf{x};\tau_y,\mathbf{y})=\frac{1}{\beta}\sum_n \int \frac{d^d\mathbf{k}}{(2\pi)^d} e^{i\omega_n(\tau_x\ - \tau_y) +i\mathbf{k}(\mathbf{x}-\mathbf{y})}\, \widetilde{\Delta}(\omega_n,\mathbf{k})\;, \end{equation} with \mbox{$\widetilde{\Delta}(\omega_n,{\mathbf k}_\parallel) = (\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel))^{-1}$}. \noindent We have used the notation \mbox{${\mathcal E}({\mathbf k}_\parallel) = \sqrt{ {\mathbf k}_\parallel^2 + m^2}$}, and \mbox{$\omega_n = \frac{2 \pi n}{\beta}$} denotes the Matsubara frequencies. $\Delta$ is the inverse, in the space of $\tau$-periodic functions, of $K\equiv(-\partial^2 +m^2)$. Expression (\ref{eq:zint1}) allows one to extract from the free energy the term that would correspond to a free field in the absence of mirrors, $F^{(0)}$, plus another contribution, which we shall denote by $F^{(0)}_p$: \begin{equation} F^{(0)}(\beta,a) \;=\; F^{(0)}(\beta) \,+\,F^{(0)}_p(\beta,a) \;, \end{equation} \begin{equation} F^{(0)}_p(\beta,a)\,=\, - \frac{1}{\beta} \, \ln\Big[\frac{{\mathcal Z}^{(0)}(\beta,a)}{{\mathcal Z}^{(0)}(\beta)}\Big] \;, \end{equation} which clearly contains the Casimir effect information (including thermal corrections). However, it also carries information that is usually unwanted, associated with the ever-present self-energy of the mirrors. Indeed, we see that it includes a divergent contribution to the free energy, which can be neatly identified, for example, by noting that, when $a \to \infty$: \begin{equation} \frac{{\mathcal Z}^{(0)}(\beta,a)}{{\mathcal Z}^{(0)}(\beta)}\,\to\, \big[{\mathcal Z}^{(0)}_m(\beta)\big]^2 \;, \end{equation} where ${\mathcal Z}_m^{(0)}$ is the contribution corresponding to one mirror: \begin{eqnarray} {\mathcal Z}^{(0)}_m(\beta)&=& \int {\mathcal D}\xi_1 \; e^{-\frac{1}{2}\int d^dx_\parallel d^dy_\parallel \xi_1(x_\parallel) \Delta(x_{\parallel},0;y_{\parallel},0) \xi_1(y_\parallel)} \nonumber\\ &=& \int {\mathcal D}\xi_2 \; e^{-\frac{1}{2}\int d^dx_\parallel d^dy_\parallel \xi_2(x_\parallel) \Delta(x_{\parallel},a;y_{\parallel},a) \xi_2(y_\parallel)} \;. \end{eqnarray} We then identify \begin{equation} F^{(0)}_m(\beta) \;\equiv\; -\frac{1}{\beta} \ln\big[{\mathcal Z}^{(0)}_m(\beta)\big] \end{equation} as the free energy term that measures a mirror's self-interaction. It is, of course, independent of $a$.
Extracting also this contribution, we have the following decomposition for $F^{(0)}$: \begin{equation}\label{eq:decomp} F^{(0)}(\beta,a)\;=\; F^{(0)}(\beta) \,+\, 2\, F^{(0)}_m(\beta) \,+\, F^{(0)}_c(\beta,a) \end{equation} where $F^{(0)}_c(\beta,a)$ has a vanishing limit when \mbox{$a \to \infty$}. We can produce a more explicit expression for the interesting term $F^{(0)}_c$, starting from its defining properties above. Indeed, we have: \begin{equation} F^{(0)}_c(\beta,a) \,=\, -\frac{1}{\beta} \, \ln \Big\{\frac{{\mathcal Z}^{(0)}(\beta,a)}{{\mathcal Z}^{(0)}(\beta) \big[{\mathcal Z}^{(0)}_m(\beta)\big]^2 }\Big\}\;, \end{equation} which, by integrating out the auxiliary fields, may be written in terms of the matrix kernel $\Omega$: \begin{equation} F^{(0)}_c(\beta,a) \,=\, \frac{1}{2 \beta} {\rm Tr} \ln \Omega \,-\, \frac{1}{2 \beta} {\rm Tr} \ln \Omega_\infty \end{equation} where $\Omega_\infty \equiv\Omega|_{a \to \infty}$ and the trace is over both spacetime coordinates and $a, b$ indices. Assuming that a UV regularization is introduced in order to make sense of each one of the traces above, and using `reg' to denote UV regularized objects, we see that: \begin{eqnarray} F^{(0)}_{c,reg}(\beta,a) &=& \frac{1}{2 \beta} \Big[{\rm Tr} \ln \Omega \Big]_{reg} \,-\, \frac{1}{2 \beta} \Big[{\rm Tr} \ln \Omega_\infty \Big]_{reg} \nonumber\\ &=& \frac{1}{2 \beta} \Big[{\rm Tr} \ln\big(\Omega_\infty^{-1} \Omega \big) \Big]_{reg} \;. \end{eqnarray} As we shall see, the trace on the second line is finite, and has a finite limit when the regulator is removed. Using some algebra we may reduce the trace to one where only continuous indices appear: \begin{equation} F^{(0)}_c(\beta,a) \,=\, \frac{1}{2\beta} \,{\rm Tr} \ln \Omega_c \;, \end{equation} where the `reduced' kernel $\Omega_c$ is given by: \begin{equation} \Omega_c (x_\parallel,y_\parallel) \,=\, \delta(x_\parallel - y_\parallel) \,-\, T(x_\parallel,y_\parallel) \;, \end{equation} where \begin{eqnarray} T(x_\parallel,y_\parallel)&=& \int d^dz_\parallel d^dw_\parallel d^du_\parallel [\Omega_{11}]^{-1}(x_\parallel,z_\parallel) \Omega_{12}(z_\parallel,w_\parallel)\nonumber\\ &\times& [\Omega_{22}]^{-1}(w_\parallel,u_\parallel)\Omega_{21}(u_\parallel,y_\parallel) \;, \end{eqnarray} has been introduced. Finally note that, due to translation invariance along the mirrors, the free energy shall be proportional to $V_p$, the area of the mirrors. Then, to absorb this divergence we shall rather consider the corresponding free energy {\em density\/}, obtained by dividing the extensive quantity by $V_p$. In particular, \begin{equation} {\mathcal F}^{(0)}_c(\beta,a) \,\equiv \, \lim_{V_p \to \infty} \Big[\frac{1}{V_p} F^{(0)}_{c,V_p}(\beta,a)\Big] \end{equation} where $ F^{(0)}_{c,V_p}$ on the right hand side is evaluated for a system with a finite parallel volume.
Moreover, taking advantage of the translation invariance along the parallel coordinates, we may use a Fourier transformation to write: \begin{equation}\label{eq:denercas} {\mathcal F}^{(0)}_c(\beta,a) \;=\; \frac{1}{2 \beta} \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \sum_{n =-\infty}^{+\infty} \ln \widetilde{\Omega_c}^{(n)}({\mathbf k}_\parallel) \;, \end{equation} where \begin{equation} \widetilde{\Omega_c}^{(n)}({\mathbf k}_\parallel) \;=\; 1 \,-\, \widetilde{T}^{(n)}({\mathbf k}_\parallel)\;, \end{equation} with: \begin{equation}\label{eq:defT} \widetilde{T}^{(n)}({\mathbf k}_\parallel) \;\equiv\; [{\widetilde\Omega}_{11}^{(n)}({\mathbf k}_\parallel) ]^{-1} \,{\widetilde\Omega}_{12}^{(n)}({\mathbf k}_\parallel ) \, [ {\widetilde\Omega}_{22}^{(n)}({\mathbf k}_\parallel )]^{-1} \, {\widetilde\Omega}_{21}^{(n)}({\mathbf k}_\parallel )\;, \end{equation} where the tilde denotes Fourier transformation in both $\tau$ and ${\mathbf x_\parallel}$. Note the appearance of the reciprocals of the matrix elements of ${\widetilde \Omega}^{(n)}$ (not to be confused with the matrix elements of the inverse of that matrix). The explicit form of the objects entering in (\ref{eq:defT}) may be obtained from (\ref{eq:omega}): \begin{eqnarray} {\widetilde\Omega}_{11}^{(n)}({\mathbf k}_\parallel) = {\widetilde\Omega}_{22}^{(n)}({\mathbf k}_\parallel) &=& \frac{1}{2 \sqrt{\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)}} \nonumber\\ {\widetilde\Omega}_{12}^{(n)}({\mathbf k}_\parallel) = {\widetilde\Omega}_{21}^{(n)}({\mathbf k}_\parallel) &=& \frac{e^{- a\, \sqrt{\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)}}}{2 \sqrt{\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)}} \;, \end{eqnarray} so that: \begin{equation} \widetilde{\Omega}_c^{(n)}({\mathbf k}_\parallel) \;=\; 1 \,-\, e^{- 2 a \sqrt{\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)}} \;. \end{equation} It is then evident that the sum and the integral in (\ref{eq:denercas}) converge, since the integrand falls off exponentially for large values of the momenta/indices. This behaviour should be expected, since we have constructed this object subtracting explicitly the would-be self-energy parts, the possible source of UV divergences. Thus, the main result of the previous calculations is a (finite) expression for the free energy, which can be written as follows: \begin{equation}\label{eq:freexact} {\mathcal F}^{(0)}_c(\beta,a) \;=\; \frac{1}{2 \beta} \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \sum_{n =-\infty}^{+\infty} \ln \Big[ 1 \,-\, e^{- 2 a \sqrt{\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)}} \Big] \;. \end{equation} It is worth noting that the expression above has been obtained subtracting the free energy corresponding to a situation where the mirrors are separated by an infinite distance; on the other hand, we note that, when the $\beta \to \infty$ limit is taken, the sum is replaced by an integral, and we obtain: \begin{equation}\label{eq:enexact} {\mathcal E}^{(0)}_c(a) \,\equiv\,{\mathcal F}^{(0)}_c(\infty,a) \,=\, \frac{1}{2} \int \frac{d^dk_\parallel}{(2\pi)^d} \ln \Big( 1 \,-\, e^{- 2 a \sqrt{ k_\parallel^2 + m^2 }} \Big) \;, \end{equation} which is the Casimir energy per unit area.
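As a simple numerical check of (\ref{eq:freexact}) and (\ref{eq:enexact}), the sketch below (in Python, with standard SciPy quadrature; illustrative only, and not part of the derivation) evaluates the $d=3$, $m=0$ free energy density at low temperature and compares it with the $T \to 0$ value $-\pi^2/(1440\, a^3)$ that follows from (\ref{eq:enexact}).
\begin{verbatim}
# Numerical sketch of Eq. (freexact) for d = 3, m = 0 (units hbar = c = 1).
# The angular integral has been done: d^2k/(2 pi)^2 -> k dk/(2 pi). Each Matsubara
# term is exponentially suppressed, so the sum converges fast.
import numpy as np
from scipy.integrate import quad

def casimir_free_energy(beta, a, n_max=100):
    """Free-energy density per unit area, Eq. (freexact), for d = 3 and m = 0."""
    total = 0.0
    for n in range(0, n_max + 1):
        w_n = 2.0*np.pi*n/beta
        f = lambda k, w=w_n: k*np.log1p(-np.exp(-2.0*a*np.sqrt(w**2 + k**2)))
        # the k -> 0 endpoint is integrable; a tiny lower cutoff avoids log(0)
        term = quad(f, 1e-9, np.inf)[0]/(2.0*np.pi)
        total += term if n == 0 else 2.0*term   # n and -n contribute equally
    return total/(2.0*beta)

a = 1.0
print(casimir_free_energy(beta=20.0*a, a=a))   # low temperature: close to the line below
print(-np.pi**2/(1440.0*a**3))                 # T -> 0 value from Eq. (enexact)
\end{verbatim}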
With this in mind, one may introduce yet another quantity: the {\em temperature dependent} part of the free energy, whose area density we will denote by ${\mathcal F}_t^{(0)}(\beta,a)$, and, from the discussion above, it is given by: \begin{equation} {\mathcal F}_t^{(0)}(\beta,a) = {\mathcal F}_c^{(0)}(\beta,a) \,-\, {\mathcal F}_c^{(0)}(\infty,a)\;. \end{equation} By using a rather simple rescaling in the integral in the second term, we may write: \begin{eqnarray} {\mathcal F}_t^{(0)}(\beta,a) &=& \frac{1}{2 \beta} \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \Big\{ \sum_{n =-\infty}^{+\infty} \ln\big[ 1 \,-\, e^{ - \frac{2 a}{\beta} \sqrt{(2 \pi n)^2 + (\beta {\mathcal E}({\mathbf k}_\parallel))^2 }}\big] \nonumber\\ &-& \int_{-\infty}^\infty d\nu \, \ln\big[ 1 \,-\, e^{ - \frac{2 a}{\beta} \sqrt{(2 \pi \nu)^2 + (\beta {\mathcal E}({\mathbf k}_\parallel))^2}}\big] \Big\} \;, \end{eqnarray} which yields the Casimir free energy as a difference between a series and an integral, although now for the thermal part, since the sum over energy modes has already been (implicitly) performed. Before evaluating the free energy for different numbers of spacetime dimensions, we explore below some consequences of a duality between $\beta$ and $a$. \subsection{Duality}\label{ssec:dual} The dual role played by $\beta$ and $a$ in the free energy may be understood, for example, by attempting to calculate that object by following an alternative approach, based on the knowledge of the exact energies of the field modes, emerging from the existence of Dirichlet boundary conditions. The energies $w_l$ of the stationary modes are: \begin{equation} w_l(\mathbf{k_\parallel})\,=\,\sqrt{\frac{\pi^2 l^2}{a^2}+\mathbf{k}_\parallel^2 + m^2}\;\;, \;\;\;\; l \in {\mathbb N}\;. \end{equation} Since each one of these modes behaves as a harmonic oscillator degree of freedom, its free energy $f\big[w_l({\mathbf k}_\parallel)\big]$ has the following form: \begin{eqnarray} f\big[w_l({\mathbf k}_\parallel)\big]&=& -\frac{1}{\beta} \,\ln \big[ \sum_{N=0}^\infty e^{-\beta w_l({\mathbf k}_\parallel) (N+\frac{1}{2})} \big] \nonumber\\ &=& \frac{1}{2} w_l({\mathbf k}_\parallel) + \frac{1}{\beta} \, \ln \big[ 1 \, - \, e^{-\beta w_l({\mathbf k}_\parallel)} \big]\;. \end{eqnarray} Now we consider ${\mathcal F}_t^{(0)}(\beta,a)$, the {\em temperature dependent\/} part of the free energy density, obtained by summing the second term in the expression above over all the degrees of freedom, and dividing by the parallel area: \begin{equation} {\mathcal F}_t^{(0)}(\beta,a)\,=\, \frac{1}{\beta} \, \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \sum_{l = 1}^{+\infty} \ln \big[ 1 \, - \, e^{-\beta w_l({\mathbf k}_\parallel)} \big] \;. \end{equation} This may also be written as follows: \begin{eqnarray}\label{eq:freet} {\mathcal F}_t^{(0)}(\beta,a) &=& \frac{1}{2 \beta} \, \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \sum_{l = - \infty}^{+\infty} \ln \big[ 1 \, - \, e^{-\beta w_l({\mathbf k}_\parallel)} \big] \nonumber\\ &-& \frac{1}{\beta} \, \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \ln \big[ 1 \, - \, e^{-\beta {\mathcal E}({\mathbf k}_\parallel)} \big] \;. \end{eqnarray} We now recall that we are considering the free energy for one and the same system, albeit with different normalizations: subtraction at $a \to \infty$ in (\ref{eq:freexact}), and at $\beta \to \infty$ in (\ref{eq:freet}).
Then we may write: \begin{equation} {\mathcal F}_c^{(0)}(\beta,a) \,-\, \lim_{\beta \to \infty} {\mathcal F}_c^{(0)}(\beta, a) \,=\, {\mathcal F}_t^{(0)}(\beta,a) \,-\, \lim_{a \to \infty} {\mathcal F}_t^{(0)}(\beta, a) \;, \end{equation} or: \begin{equation} {\mathcal F}_c^{(0)}(\beta,a) \,-\, {\mathcal E}_c^{(0)}(a) \,=\, {\mathcal F}_t^{(0)}(\beta,a) \,-\, {\mathcal F}_t^{(0)}(\beta) \;, \end{equation} where ${\mathcal F}_t^{(0)}(\beta)$ is an $a$-independent object, which corresponds to the free energy of a free field. Thus, if we are going to be concerned with temperature and $a$-dependent quantities, for example, to study the temperature dependence of the Casimir force, we only need derivatives with respect to $\beta$ and $a$ of the expression above, and we see that: \begin{equation} \frac{\partial^2}{\partial a \partial \beta} \big[{\mathcal F}_c^{(0)}(\beta,a) \big] \,=\,\frac{\partial^2}{\partial \beta \partial a} \big[ {\mathcal F}_t^{(0)}(\beta,a) \big]\;, \end{equation} or: \begin{equation}\label{eq:aux1} \widetilde{\mathcal F}_c^{(0)}(\beta,a) \,=\, \widetilde{\mathcal F}_t^{(0)}(\beta,a) \, \equiv \, \widetilde{\mathcal F}^{(0)}(\beta,a) \;, \end{equation} where the tildes denote subtraction of any term which has a vanishing mixed second partial derivative. On the other hand, from the explicit form of the free energy density as sum over modes (Matsubara or Casimir), we find the identity: \begin{equation} \big[\beta \widetilde{\mathcal F}_t^{(0)}\big] (2 a , \beta/2) \,=\, \big[ \beta \widetilde{\mathcal F}_c^{(0)}\big] (\beta,a) \;, \end{equation} which, combined with (\ref{eq:aux1}) yields a duality between $a$ and $\beta$: \begin{equation}\label{eq:duality} \widetilde{\mathcal F}^{(0)}(2 a , \beta/2) \,=\, \frac{\beta}{2 a} \, \widetilde{\mathcal F}^{(0)}(\beta,a)\;. \end{equation} To proceed, we make the simplifying assumption that $m=0$, and transform variables in the integral over $\mathbf{k_\parallel}$ to have a dimensionless integral, obtaining \begin{equation} \beta \,\widetilde{\mathcal F}^{(0)}(\beta,a)\;=\;(\frac{2\pi}{\beta})^{d-1} g(\gamma,d) \,, \end{equation} where $g(\gamma,d)$ is given by \begin{equation}\label{eq:G} g(\gamma,d)\,=\,C_d \,\sum_n \int_0^{\infty} dx\, x^{d-2}\,\ln \left(1-e^{-\gamma \sqrt{n^2+x^2}} \right) \;, \end{equation} where $\gamma= 4\pi a/\beta$ and $C_d$ is a constant factor that depends solely on the dimension $d$. Now, using the duality formula ($\ref{eq:duality}$), we obtain an interesting relation involving $g$: \begin{equation}\label{eq:Tdualidad} (2\pi)^{d-1}\,g(\gamma,d)\,=\,\alpha^{d-1}g(\alpha,d)\,, \end{equation} where $\alpha=\beta \pi /a$ (or $\alpha = (2 \pi)^2/\gamma$). Note that this relation is immediately relevant for relating the low and high temperature regimes. Indeed, writing the expression above more explicitly, we see that: \begin{equation}\label{eq:Tdualidadp} g(\frac{4 \pi a}{\beta} , d)\,=\, \big(\frac{\beta}{2 a}\big)^{d-1} \, g(\frac{\pi \beta}{a} , d)\,. \end{equation} This duality relation had been pointed out, for the $d=3$ case, by Balian and Duplantier~\cite{balian:2004}. In the remainder of this section, we apply the previous results and study some particular properties of the free energy corresponding to the free scalar field, firstly for the $d=1$ case, which we single out since it can be exactly solved, and then for $d > 1$.
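The identity (\ref{eq:aux1}), on which the duality rests, can also be verified numerically: since the Matsubara representation (\ref{eq:freexact}) and the Dirichlet mode sum for ${\mathcal F}_t^{(0)}$ written above differ only by terms depending on $a$ alone or on $\beta$ alone, their mixed second derivatives must coincide. The following sketch (in Python, for $d=3$ and $m=0$; illustrative only, with arbitrarily chosen sample values) checks this by finite differences.
\begin{verbatim}
# Check of Eq. (aux1) for d = 3, m = 0: the mixed derivative d^2/(da dbeta) of the
# Matsubara form (freexact) and of the Dirichlet mode sum for F_t must coincide.
import numpy as np
from scipy.integrate import quad

def F_matsubara(beta, a, n_max=60):
    """Eq. (freexact): sum over Matsubara frequencies omega_n = 2 pi n/beta."""
    s = 0.0
    for n in range(0, n_max + 1):
        w = 2*np.pi*n/beta
        f = lambda k, w=w: k*np.log1p(-np.exp(-2*a*np.sqrt(w**2 + k**2)))
        t = quad(f, 1e-9, np.inf)[0]/(2*np.pi)   # tiny cutoff avoids log(0) at k = 0
        s += t if n == 0 else 2*t
    return s/(2*beta)

def F_modes(beta, a, l_max=60):
    """Mode-sum form of F_t^(0): Dirichlet modes w_l = sqrt((pi l/a)^2 + k^2), l >= 1."""
    s = 0.0
    for l in range(1, l_max + 1):
        f = lambda k, wl=np.pi*l/a: k*np.log1p(-np.exp(-beta*np.sqrt(wl**2 + k**2)))
        s += quad(f, 0.0, np.inf)[0]/(2*np.pi)
    return s/beta

def mixed(F, beta, a, h=1e-2):
    """Central finite-difference estimate of d^2 F/(da dbeta)."""
    return (F(beta+h, a+h) - F(beta+h, a-h)
            - F(beta-h, a+h) + F(beta-h, a-h))/(4*h*h)

print(mixed(F_matsubara, beta=2.0, a=1.0))
print(mixed(F_modes,     beta=2.0, a=1.0))   # the two values agree
\end{verbatim}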
\subsection{$d=1$} The free energy (neglecting an irrelevant constant) is given by the expression: \begin{equation}\label{eq:ecas1+1} {\mathcal F_c^{(0)}}(\beta,a)= \frac{1}{2 \beta} \sum_{n\ne0} \, \ln (1-e^{- 2 a \mid \omega_n \mid})\,, \end{equation} which may be written as an infinite product: \begin{equation}\label{eq:ecas1+1Tb} {\mathcal F_c^{(0)}}(\beta,a)= \frac{1}{\beta} \, \ln \prod_{n=1}^{+\infty}(1-q^{2n}), \end{equation} where $q=e^{-2 \pi a /\beta}$, and $\mid q \mid < 1$ for $T>0$. Using standard properties of elliptic functions, we get a more explicit result for the free energy: \begin{equation}\label{eq:defzz0} {\mathcal F_c^{(0)}}(\beta,a)= \frac{\pi a}{6 \beta^2}+ \frac{1}{\beta} \, \ln[\eta(2a/\beta \,i)], \end{equation} where $\eta(z)$ is Dedekind's eta function. Although it has been obtained for $T>0$, one can verify that it also yields the proper results for \mbox{$T \to 0$}, namely, \mbox{${\mathcal E}_c^{(0)}(a)= -\frac{\pi}{24 a}$}. Besides, the $\eta$-function satisfies the property: \mbox{$\eta (1/z) = \sqrt{i z}\, \eta(z)$} which, in this context, is tantamount to the duality relation. \subsection{$d > 1$} Now, we exploit the duality relation to extract the high and low temperature behavior of the Casimir energy, for $d>1$. In a high temperature expansion, formula ($\ref{eq:G}$) can be expanded for small $e^{-\gamma}$. The infinite-temperature limit corresponds to $\gamma \to \infty$, thus $n=0$ yields the leading contribution in this expansion, with the $n=1,\,2, \ldots $ terms producing higher-order corrections. The explicit form of the leading high temperature term is: \begin{equation}\label{eq:Ftt} \tilde{\mathcal F}_c^{(0)}(\beta,a) \,\sim\,- \frac{1}{2\pi^{d/2} \beta}\frac{\Gamma(d/2)\zeta(d)}{(2a)^{d-1}} \;,\;\; \beta \sim 0\;, \end{equation} whereas, applying the duality formula (\ref{eq:duality}) with $a \to \beta /2$, we have the first order contribution in the low temperature regime, \begin{equation}\label{eq:Ft0} {\mathcal F_c^{(0)}}(2a,\beta/2)\,\sim\,-\frac{1}{2\pi^{d/2}} \frac{\Gamma(d/2)\zeta(d)}{(\beta)^d}\;\;\; (\beta \to \infty), \end{equation} which is $a$ independent. The sub-leading contributions ($n=1,2,...$) include more involved integrals; nevertheless, when $d=3$ and $\gamma \to \infty$, they reduce to terms like $e^{-n \,\gamma }$. Thus, the most significant contribution for $T\to \infty$ is: \begin{equation}\label{eq:Ftt1} -\frac{1}{2 a \beta^2} e^{-4\pi a /\beta}\,. \end{equation} Therefore, the duality formula implies that, when $T \to 0$, \begin{equation}\label{eq:Ft01} \tilde{\mathcal F}_c^{(0)}(\beta,a) \sim -\frac{1}{2 a \beta^2} e^{-\pi \beta /a}\,. \end{equation} Collecting these results for $d=3$, we see that, for high temperatures ($T \to \infty$): \begin{equation} {\mathcal F_c^{(0)}}(\beta,a)\,\sim\,-\frac{1}{16 \pi} \frac{\zeta(3)}{a^2 \beta}-\frac{1}{2 a \beta^2} e^{-4\pi a /\beta}\,\,, \end{equation} while, for low temperatures, the corresponding behaviour is: \begin{equation} {\mathcal F_c^{(0)}}(2a, \beta/2)\, \sim\,-\frac{\pi^2}{1440 a^3} \left[1+ \frac{360}{\pi^3}\frac{a^3}{\beta^3}\zeta(3)+ \frac{720}{\pi^2}\frac{a^2}{\beta^2} e^{-\pi \beta /a} \right]\,. \end{equation} \section{Interacting theory}\label{sec:scalarint} Let us now consider the theory that results from turning on the quartic self-interaction term, studying the corresponding corrections to the free energy. This can be done in (at least) two different ways.
We shall consider first the seemingly more straightforward approach, which amounts to expanding the interaction term in powers of the coupling constant first, postponing the integration over the scalar field until the end of the calculation. We call it `perturbative' since it is closer in spirit to standard perturbation theory. \subsection{Perturbative approach}\label{ssec:pert} We have to evaluate: \begin{eqnarray} {\mathcal Z}(\beta, a) &=& \int [{\mathcal D}\varphi] \,e^{-S_I(\varphi)} \, e^{-S_0(\varphi)} \nonumber\\ &=& {\mathcal Z}^{(0)}(\beta, a) \;\; \langle e^{-S_I(\varphi)} \rangle \;, \end{eqnarray} where we have used $\langle \ldots \rangle$ to denote functional averaging with the free (Matsubara) action and the scalar-field integration measure satisfying Dirichlet boundary conditions. Namely, \begin{equation} \langle \ldots \rangle \;\equiv\; \frac{\int [{\mathcal D}\varphi] \ldots e^{-S_0(\varphi)}}{\int [{\mathcal D}\varphi] \, e^{-S_0(\varphi)}} \;. \end{equation} Thus, the different terms in the perturbative expansion are obtained by expanding in powers of the coupling constant, $\lambda$. For example, the first order term, ${\mathcal F}^{(1)}$, may be written as follows: \begin{equation} {\mathcal F}^{(1)}(\beta,a) \;=\; \frac{\lambda}{4! \beta} \int d^{d+1}x \, \langle [\varphi(x)]^4 \rangle \;, \end{equation} which may be expressed (via Wick's theorem) in terms of the average of just two fields. Indeed, the generating functional for these averages, $Z_0^J(\beta,a)$, is simply: \begin{equation}\label{eq:zJcomplete} {\mathcal Z}_0^J(\beta,a)\;=\; \int \big[{\mathcal D}\varphi\big] \, e^{- S_0(\varphi) + \int d^{d+1}x \, J(x) \varphi(x)}\;. \end{equation} We again introduce the auxiliary fields to impose the Dirichlet constraints on the measure, so that: \begin{equation}\label{eq:genfunct0} {\mathcal Z}_0^J(\beta,a) \,=\, \int {\mathcal D}\varphi {\mathcal D}\xi \, \, e^{ - S_0[\varphi] + \int d^{d+1}x [i J_p(x)+J(x)]\varphi (x)}\,. \end{equation} Integrating out the scalar field $\varphi$, and using a simplified notation for the integrals, we obtain: \begin{eqnarray} {\mathcal Z}_0^J(\beta,a) &=& {\mathcal Z}_0(\beta,a) \, \int {\mathcal D}\xi \, \exp \Big[ - \frac{1}{2} \int_{x_\parallel,y_\parallel} \xi_\alpha(x_\parallel) \Omega_{\alpha\beta}(x_\parallel,y_\parallel) \xi_\beta(y_\parallel) \nonumber\\ &+& i \int_{x_\parallel}\xi_\alpha(x_\parallel) L_\alpha(x_\parallel) \,+\, \frac{1}{2} \int_{x,y} J(x) \Delta(x,y) J(y) \Big] \;, \end{eqnarray} where: $L_\alpha (x_\parallel) \equiv \int_y \Delta(x_\parallel,a_\alpha;y) J(y)$, with $a_\alpha = (\alpha - 1) a$, $\alpha=1,2$. Integrating now the auxiliary fields, we see that: \begin{equation} {\mathcal Z}_0^J(\beta,a) \;=\; {\mathcal Z}_0 (\beta,a)\, e^{\frac{1}{2} \int_{x,y} J(x) G(x,y) J(y) } \;, \end{equation} where $G$, the (free) thermal correlation function in the presence of the mirrors, may be written more explicitly as follows: \begin{eqnarray} G(x;y) \;= \; \langle \varphi(x) \varphi(y) \rangle &=& \Delta(x;y) \;-\; \int_{x'_\parallel,y'_\parallel} \Big\{ \Delta(x_\parallel,x_d; x'_\parallel,a_\alpha) \nonumber\\ &\times& [\Omega^{-1}]_{\alpha\beta}(x'_\parallel,y'_\parallel) \Delta(y'_\parallel,a_\beta; y_\parallel,y_d) \Big\} \;, \end{eqnarray} and this is the building block for all the perturbative corrections. It is quite straightforward to check that the propagator above does satisfy the Dirichlet boundary conditions, when each one of its arguments approaches a mirror.
For example: \begin{eqnarray} \lim_{x_d \to a_\gamma} G(x;y) &=& \Delta(x_\parallel,a_\gamma;y) \;-\; \int_{x'_\parallel,y'_\parallel} \Big\{ \Omega_{\gamma \alpha}(x_\parallel; x'_\parallel) [\Omega^{-1}]_{\alpha\beta}(x'_\parallel,y'_\parallel) \Delta(y'_\parallel,a_\beta; y_\parallel,y_d) \Big\} \nonumber\\ &=& \Delta(x_\parallel,a_\gamma;y) \;-\; \int_{y'_\parallel} \delta_{\gamma \beta} \delta(x_\parallel,y'_\parallel) \Delta(y'_\parallel,a_\beta; y_\parallel,y_d) \nonumber\\ &=& \Delta(x_\parallel,a_\gamma;y) \,-\, \Delta(x_\parallel,a_\gamma;y) \;=\;0 \;, \end{eqnarray} and analogously for the second argument. The thermal correlation function $G(x;y)$ above is composed of two contributions: one is the usual free thermal propagator in the absence of boundary conditions and the other comes from reflections on the surface boundaries. In this way we write \begin{equation} G(x;y)\,=\, \Delta(x;y)-M(x;y)\,, \end{equation} where $M(x;y)$ is given by: \begin{equation}\label{eq:Mpropagador} M(x;y)=\frac{1}{\beta}\sum_n \int \frac{d^{d-1}\mathbf{k}_\parallel}{(2\pi)^{d-1}} e^{iw_n(\tau_x\ - \tau_y)+i\mathbf{k}_\parallel(\mathbf{x}_\parallel-\mathbf{y}_\parallel)}\, \widetilde{M}(w_n,\mathbf{k}_\parallel;x_d,y_d)\,\,, \end{equation} with $\widetilde{M}(w_n,\mathbf{k}_\parallel;x_d,y_d)$ equal to: \begin{equation}\label{eq:Mpropagfourier} \frac{e^{-(|x_d|+|y_d|)E}-e^{-(|x_d-a|+|y_d|+a)E}-e^{-(|x_d|+|y_d-a|+a)E} +e^{-(|x_d-a|+|y_d-a|)E}}{2E(1-e^{-2aE})} \end{equation} where $E=\sqrt{w_n^2+\mathbf{k}_\parallel^2+m^2}$. We apply now the previous approach to the calculation of the first-order correction to the free energy (in fact, to the force) and the self-energy function: \subsubsection{First-order correction to the free energy} We want to evaluate thermal corrections to the Casimir energy in the interacting theory, in particular, to understand how its divergences should be dealt with. To automatically subtract $a$-independent contributions, we work with the derivative of the free energy with respect to $a$. Obviously, this is a force, and it has the same amount of information as an energy which has been subtracted to avoid $a$-independent infinities. Indeed, it can be integrated over any finite range of distances to obtain the energy difference. Using $F_{cas}$ to denote these derivatives, we have: \begin{equation} F_\textrm{cas}\,\,=\,\,F_\textrm{cas}^{(0)}+F_\textrm{cas}^{(1)}\,, \end{equation} where the term $F_\textrm{cas}^{(1)}$ is the first-order correction to the free Casimir force $F_\textrm{cas}^{(0)}$. There is a term where only $\Delta(x,y)$ appears; this can be renormalized as usual in finite-temperature field theory, by a zero-temperature subtraction plus the inclusion of the first-order (temperature-dependent) mass counterterm contribution~\cite{Kapusta:2006pm}. Then, keeping temperature {\em and} $a$ dependent terms only, we see that \begin{equation} F_\textrm{cas}^{(1)}\,=\,F_{\textrm{cas},\,\Delta \,M}^{(1)}+F_{\textrm{cas},\,M\,M}^{(1)}\,\,, \end{equation} where: \begin{equation} F_{\textrm{cas},\,\Delta \,M}^{(1)}\,=\,-\frac{\lambda}{2 \beta} \,\Delta^T(x,x)\int d^{d+1}x \,\frac{\partial M(x,x)}{\partial a} \end{equation} and \begin{equation} F_{\textrm{cas},\,M\,M}^{(1)}\,\,=\frac{\lambda}{4 \beta} \int d^{d+1}x \,M(x,x) \frac{\partial M(x,x)}{\partial a}\,\,.
\end{equation} Using the notation $f_{\Delta,M}$ and $f_{M\,M}$ for the corresponding area densities (we omit super and subscripts): \begin{equation} f_{\Delta,M}\,=\,-\frac{\lambda}{2 \beta} \Delta^T(0) \sum_n \int \frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}} \int_{-\infty}^{\infty}dx_d \frac{\partial \widetilde{M}(\mathbf{k_\parallel},w_n,x_d)}{\partial a}\,\,, \end{equation} and \begin{eqnarray} f_{M,M}&=&\frac{\lambda}{4 \beta^2} \sum_{n,l} \int \frac{d^{d-1} \mathbf{k_\parallel}}{(2 \pi)^{d-1}} \int \frac{d^{d-1}\mathbf{p_\parallel}}{(2 \pi)^{d-1}} \int_{-\infty}^{\infty}dx_d \nonumber\\ &\times& \widetilde{M}(\mathbf{k_\parallel},w_n,x_d) \frac{\partial \widetilde{M}(\mathbf{p_\parallel},w_l,x_d)}{\partial a}\;, \end{eqnarray} and the first-order correction to the Casimir force will be given by: \begin{equation} \frac{F_\textrm{cas}^{(1)}}{A_{d-1}}\,=\,f_{\Delta,M}+f_{M,M}\,\,. \end{equation} The term $\widetilde{M}(w_n,\mathbf{k}_\parallel;x_d)$ can be obtained from (\ref{eq:Mpropagfourier}); after changing variables from $x\,\, \to \,\,x+\frac{a}{2}$ (to obtain a symmetrized form), we perform an integration over $d+1$-dimensional spacetime, obtaining: \begin{equation} f_{\Delta,M}\,=\,-\frac{\lambda}{4 \beta} \Delta^T(0) \sum_n \int \frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}} \left(\frac{1}{2E_k}+\frac{a}{2}\,\textrm{csch}^2(aE_k)-\frac{\textrm{coth}(aE_k)}{2E_k} \right)\;. \end{equation} The $T$-dependent function $\Delta^T(0)$ is given by \begin{equation} \Delta^T(0)=\int \frac{d^d \mathbf{p}}{(2\pi)^d} \frac{1}{w} n_B(\beta,w)\,\,, \end{equation} where $n_B(\beta,\omega)$ is the Bose distribution function, with $\omega=\sqrt{\mathbf{p}^2+m^2}$. $f_{\Delta,M}$ may be expressed in yet another way, using the definition of the Bose distribution function but with $\beta$ replaced by $2a$ while $\omega$ is replaced by $E_k=\sqrt{\omega_n^2+\mathbf{k}_\parallel^2+m^2}$, i.e., $n_B(\beta,\omega)$ is replaced by \begin{equation} n_k(2a,E_k) \equiv \frac{1}{e^{2a\, E_k}-1} \;. \end{equation} Then \begin{equation} f_{\Delta,M}\,=\,\frac{\lambda}{4 \beta} \Delta^T(0) \sum_n \int \frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}} \left[\frac{n_k}{E_k}-2a \,n_k\,(1+n_k) \right]\,\,, \end{equation} which is clearly a convergent quantity. For the remaining term, we proceed in a similar way, where now the $p$ represent $(\omega_l,\mathbf{p_\parallel})$ \begin{equation} f_{M,M}\,=\,\frac{\lambda}{4 \beta^2} \sum_{n,l} \int \frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}} \int \frac{d^{d-1}\mathbf{p_\parallel}}{(2 \pi)^{d-1}} f(k,p)\,, \end{equation} where \begin{eqnarray} f(k,p) &=& \left(n_p+1\right)\left(\frac{n_p}{E_k^2}-\frac{n_k n_p}{E_k/2a}\right)\nonumber\\ &+&\left(n_p+\frac{1}{2}\right)\left(\frac{n_k}{E_k E_p}-\frac{E_k\,n_p-E_p\,n_k}{E_k(E_k^2-E_p^2)}\right)-\frac{n_p}{2E_k(E_k+E_p)} \, \end{eqnarray} which is also convergent. \subsubsection{First-order correction to the self-energy} We conclude the application of this method with the calculation of the tadpole diagram contribution to the self-energy. There is here a surface divergence due to the $M(x;x)$ contribution: \begin{equation}\label{eq:selener} \Pi=\frac{\lambda}{2}(\Delta(x,x)- M(x,x))\,. 
\end{equation} As before, it is convenient to separate from the self-energy the contribution which is present even when the mirrors are infinitely distant; namely, \begin{equation} \Pi=\Pi_{free}+\Pi_{mir}\,, \end{equation} where $\Pi_{free}=\frac{\lambda}{2} \Delta(x,x)$, and $\Pi_{mir}=- \frac{\lambda}{2} M(x,x)$ contains the contribution to the self-energy coming from the Dirichlet boundary conditions. The term $\Pi_{free}$ has UV and IR divergences, which are usually analyzed in standard finite temperature calculations. On the other hand, the $\Pi_{mir}$ term has no UV divergences, although it is IR divergent in low dimensions. Moreover, it presents surface divergences when the $x_d$ coordinate tends to $x_d=0$ or $x_d=a$. We will classify these divergences according to the dimension of the theory. When $d=1$ and $m=0$, we obtain: \begin{equation} \Pi_{mir}=-\frac{\lambda}{2 \beta} \sum_n \frac{e^{-2|x|E}-2e^{-(|x|+|x-a|+a)E}+e^{-2|x-a|E} }{2E(1-e^{-2aE})} , \end{equation} where $E=|\omega_n|$. Then we see that \begin{displaymath} \Pi_{mir} = -\frac{\lambda}{4 \pi} \sum_{n=1}^{\infty} \frac{1}{n} \times \left\{ \begin{array}{ll} e^{\gamma s n} & \textrm{if $s<0$}\\ \\ \frac{\cosh((s-1/2)\gamma n)-e^{-\gamma n /2}}{\sinh (\gamma n/2)} & \textrm{if $0<s<1$}\\ \\ e^{-\gamma(s-1)n} & \textrm{if $s>1$} \end{array} \right. \end{displaymath} where $\gamma=4\pi a/\beta$ and $s=x/a$. The sum for $s<0$ and $s>1$ can be performed. For $s<0$, the self-energy is proportional to $\ln (1-e^{\gamma s})$, which diverges for $s \to 0$, regardless of the temperature. For $d\geq 2$, it is convenient to separate the self-energy into two pieces, one coming from $n=0$, and the other coming from the remaining modes: \begin{equation} \Pi_{mir}=\Pi_{mir}^{n=0}+\Pi_{mir}^{n \neq 0}\,, \end{equation} and \begin{displaymath} \Pi_{mir}^{n=0} = -\frac{\lambda}{4 \pi \beta} \int_{0}^{\infty} \, dp \, \frac{1}{p} \times \left\{ \begin{array}{ll} e^{2s\,p} & \textrm{if $s<0$}\\ \\ \frac{\cosh((2s-1)p)-e^{-p}}{\sinh (p)} & \textrm{if $0<s<1$}\\ \\ e^{-2(s-1)p} & \textrm{if $s>1$} \end{array} \right. \end{displaymath} \begin{displaymath} \Pi_{mir}^{n \neq 0} = -\frac{\lambda}{2 \pi \beta} \sum_{n=1}^{\infty} \times \left\{ \begin{array}{ll} K_0 (-n \gamma s) & \textrm{if $s<0$}\\ \\ \int_1^{\infty} \frac{1}{\sqrt{p^2-1}}\frac{\cosh((s-1/2)\gamma n p)-e^{-\gamma n p/2}}{\sinh (\gamma n p/2)} & \textrm{if $0<s<1$}\\ \\ K_0 (n \gamma (s-1)) & \textrm{if $s>1$} \end{array} \right. \;. \end{displaymath} Again, the zero mode contribution $\Pi_{mir}^{n=0}$ has an IR divergent behaviour for low momenta, while there is a divergence on the mirrors. For instance, for $s<0$ the self-energy is proportional to $\log(-2s)$. The $\Pi_{mir}^{n \neq 0}$ term can be analyzed in the high and low temperature limits, i.e. $\gamma \gg 1$ and $\gamma \ll 1$, respectively. The divergence of the function $K_0 (x)$ is logarithmic at $x=0$, thus the surface divergences are also logarithmic. Moreover, we see that $\Pi_{mir}^{n = 0} \gg \Pi_{mir}^{n \neq 0}$. For $d>2$, the IR divergences are absent, and the surface divergences appear explicitly. The two contributions are given by: \begin{displaymath} \Pi_{mir}^{n=0} = -\frac{\lambda \,\,\Gamma(\frac{d}{2}-1)}{\beta \,2^{d+1} \pi^{\frac{d}{2}} a^{d-2}} \times \left\{ \begin{array}{ll} \frac{1}{s^{d-2}} & \textrm{if $s<0$}\\ \\ \zeta(d-2,1-s)+\zeta(d-2,s)-2\zeta(d-2) & \textrm{if $0<s<1$}\\ \\ \frac{1}{(s-1)^{d-2}} & \textrm{if $s>1$} \end{array} \right.
\end{displaymath} \begin{displaymath} \Pi_{mir}^{n \neq 0} = -\frac{\lambda \, \pi^{\frac{d-3}{2}}}{2 \beta^{d-1} \Gamma(\frac{d-1}{2})} \sum_{n=1}^\infty \int_1^{\infty}dp\, n^{d-2} (p^2-1)^{\frac{d-3}{2}} \left\{ \begin{array}{ll} e^{\gamma n s\, p} & \textrm{if $s<0$}\\ \\ \frac{\cosh((s-\frac{1}{2})\gamma n p)-e^{-\gamma n \frac{p}{2}}}{\sinh (\gamma n \frac{p}{2})} & \textrm{if $0<s<1$}\\ \\ e^{-\gamma n (s-1)p}& \textrm{if $s>1$} \end{array} \right. \end{displaymath} The integral for $s<0$ or $s>1$ can be performed: \begin{displaymath} \Pi_{mir}^{n \neq 0} = -\frac{\lambda}{\beta^{\frac{d}{2}}}\frac{1}{(2a)^{\frac{d}{2}-1}} \sum_{n=1}^\infty n^{\frac{d}{2}-1} \times \left\{ \begin{array}{ll} (-1)^{\frac{d}{2}-1}\frac{K_{\frac{d}{2}-1}(-\gamma n s)}{s^{\frac{d}{2}-1}} & \textrm{if $s<0$}\\ \\ \frac{K_{\frac{d}{2}-1}(\gamma n (s-1))}{(s-1)^{\frac{d}{2}-1}}& \textrm{if $s>1$} \end{array} \right. \end{displaymath} In the case of $d=3$ dimensions we have a simple expression: \begin{displaymath} \Pi_{mir}^{n \neq 0} = -\frac{\lambda}{8 \pi a \beta } \times \left\{ \begin{array}{ll} -\frac{1}{s (e^{-\gamma s}-1)} & \textrm{if $s<0$}\\ \\ \frac{1}{(s-1) (e^{\gamma (s-1)}-1)}& \textrm{if $s>1$} \end{array} \right. \end{displaymath} By the above results we conclude that the surface divergences of the $\Pi_{mir}^{n = 0}$ term are polynomial: $\sim s^{-(d-2)}$, whereas the term $\Pi_{mir}^{n \neq 0}$, which contains the sum over non-zero modes, presents a more severe divergence proportional to $s^{-(d-1)}$. This has just been shown explicitly for the $d=3$ case. In figures 1, 2, 3 and 4, we plot the self-energy for 1, 2, and 3 dimensions, for different values of the parameters. \subsection{Non-perturbative approach}\label{ssec:nonpert} Let us conclude this section by considering a second, alternative procedure to the one just explained. It amounts to integrating out, albeit formally, the scalar field {\em before\/} the perturbative expansion. Indeed, introducing, from the very beginning, the auxiliary fields used to impose the Dirichlet conditions, we see that the partition function becomes \begin{equation} {\mathcal Z}(\beta,a) \;=\;\int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \; e^{- {\mathcal W}[i J_p]} \; , \end{equation} where ${\mathcal W}$ denotes the generating functional of connected correlation functions, at finite temperature, and with an unconstrained (no Dirichlet conditions) scalar field integration measure: \begin{equation} e^{- {\mathcal W}[J]} \;\equiv\; \int {\mathcal D}\varphi \, e^{-S[\varphi] + \int d^{d+1}x J(x) \varphi(x) } \;. \end{equation} Of course, the arbitrary current $J$ in the definition above must be replaced by $J_p$, which does depend on the auxiliary fields and on $a$, the distance between the two mirrors. Besides, we assume that, in the course of evaluating ${\mathcal W}$, a renormalization procedure has been used to make sense of the possible infinities, in the usual way. To proceed, we recall that ${\mathcal W}[J]$ does have a functional expansion: \begin{equation} {\mathcal W}[J] \;=\; {\mathcal W}_0 \, + \, \frac{1}{2} \int d^{d+1}x \int d^{d+1}y \; {\mathcal W}_2(x,y) J(x) J(y) \,+\, \ldots \end{equation} where ${\mathcal W}_k$ corresponds to the connected $k$-point correlation functions. Odd terms are, for the quartic perturbation, absent from the expansion. Then one may invoke some approximation that allows one to truncate the functional expansion.
Of course, it cannot be a naive perturbative expansion in the coupling constant; one should rather use, for example, a mean field or large-$N$ expansion. Then the leading term will be just the quadratic one: \begin{equation} {\mathcal Z}(\beta,a) \;\sim\; {\mathcal Z}(\beta) \,\times \, {\mathcal Z}_q(\beta,a) \end{equation} where ${\mathcal Z}(\beta)= e^{{\mathcal W}_0}$ is the thermal partition function in the absence of mirrors, while \begin{equation} {\mathcal Z}_q(\beta,a) \;=\; \int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \; e^{- \frac{1}{2} \int d^{d+1}x \int d^{d+1}y \; {\mathcal W}_2(x,y) J_p(x) J_p(y)} \;, \end{equation} depends on ${\mathcal W}_2(x,y)$, the full renormalized thermal propagator. Using the explicit form of $J_p$, we see that the integral is a Gaussian: \begin{equation}\label{eq:zq1} {\mathcal Z}_q(\beta,a) \;=\; \int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \; e^{-S_q(\xi)} \end{equation} where \begin{equation} S_q(\xi) \;=\; \frac{1}{2} \,\int d^dx_\parallel d^dy_\parallel \xi_a(x_\parallel) \Omega^{q}_{ab}(x_\parallel,y_\parallel) \xi_b(y_\parallel) \;, \end{equation} and \begin{equation} \Omega^q(x_{\parallel};y_{\parallel})=\left[ \begin{array}{cc} D(x_{\parallel},0;y_{\parallel},0) & D(x_{\parallel},0;y_{\parallel},a) \\ D(x_{\parallel},a;y_{\parallel},0) & D(x_{\parallel},a;y_{\parallel},a) \\ \end{array} \right] \;, \end{equation} where $D$ is the {\em full\/} imaginary-time propagator. It may be written quite generally as follows: \begin{equation} D(\tau_x,\mathbf{x};\tau_y,\mathbf{y})=\frac{1}{\beta}\sum_n \int \frac{d^d\mathbf{k}}{(2\pi)^d} e^{i\omega_n(\tau_x\ - \tau_y) +i\mathbf{k}(\mathbf{x}-\mathbf{y})}\, \widetilde{D}(\omega_n,\mathbf{k})\;, \end{equation} with \mbox{$\tilde{D}(\omega_n,{\mathbf k})$} a generally complicated function of its arguments. However, we note that some non-perturbative corrections do produce a simple result. For example, considering the IR resummed version of the massless scalar field yields an expression of the form: \begin{equation} \tilde{D}(\omega_n,\mathbf{k}) \,=\, \big[ \omega_n^2 + {\mathbf k}^2 + \Pi_\beta(T) \big]^{-1} \end{equation} where $\Pi_\beta(T)$ is the thermal mass. For example, the first two non-trivial contributions correspond to a term which is linear in $\lambda$ plus a non-analytic term: \begin{equation} \Pi_\beta(T) \,=\, \frac{\lambda T^2}{24} \big[ 1 \,-\, 3 ( \frac{\lambda}{24 \pi^2} )^{\frac{1}{2}} \,+\, \ldots \big] \;. \end{equation} Upon insertion of this expression, we see that the corresponding contribution to the Casimir free energy becomes: \begin{equation} {\mathcal F}_c(\beta,a) \;=\; \frac{1}{2 \beta} \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \sum_{n =-\infty}^{+\infty} \ln \Big[ 1 \,-\, e^{- 2 a \sqrt{\omega_n^2 + {\mathbf k}_\parallel^2 + \Pi_\beta(T)}} \Big] \;, \end{equation} where the $a \to \infty$ contribution has already been subtracted. \section{Conclusions}\label{sec:concl} We have obtained exact results for the free energy in a Casimir system in $1+1$ dimensions, as well as low and high temperature expansions for $d>1$, based on a duality relation between inverse temperatures and distances between mirrors, in $d+1$ spacetime dimensions. For the interacting theory, we have developed two different approaches. In the first approach, a perturbative expansion has been used to obtain the first-order correction to the thermal Casimir force, showing that it is finite once the standard renormalization of thermal field theory in the absence of mirrors is performed.
On the other hand, we also studied the self-energies, showing that they have surface divergences in their coordinate dependence, and that those divergences are of a polynomial type, with a degree which depends on the dimension. Finally, we have shown that a variation in the order used to integrate the auxiliary fields yields a different method, whereby the one-particle irreducible functions naturally appear. \section*{Acknowledgements} C.C.T. and C.D.F. thank CONICET, ANPCyT and UNCuyo for financial support.
\title{Isometric immersions of Riemannian manifolds in $k$-codimensional Euclidean space} \author{Dan Gregorian Fodor} \begin{abstract} We use a new method to give conditions for the existence of a local isometric immersion of a Riemannian $n$-manifold $M$ in $\mathbb{R}^{n+k}$, for a given $n$ and $k$. These equate to the (local) existence of a $k$-tuple of scalar fields on the manifold, satisfying a certain non-linear equation involving the Riemannian curvature tensor of $M$. Setting $k=1$, we proceed to recover the fundamental theorem of hypersurfaces. In the case of manifolds of positive sectional curvature and $n\geq 3$, we reduce the solvability of the Gauss and Codazzi equations to the cancellation of a set of obstructions involving the logarithm of the Riemann curvature operator. The resulting theorem has a structural similarity to the Weyl-Schouten theorem, suggesting a parallelism between conformally flat $n$-manifolds and those that admit an isometric immersion in $\mathbb{R}^{n+1}$. \end{abstract} \begin{description}[leftmargin=8em] \item [2010 Mathematics Subject Classification] 53A07. \item[Keywords and phrases] Isometric immersions; Curvature equation; Weyl-Schouten-type theorem. \end{description} \section*{Introduction} The problem of immersions of manifolds into Euclidean space is well studied. The Stiefel-Whitney classes \cite{Hatcher}, the Whitney and Nash embedding theorems \cite{HH}, and the Cartan-Janet theorem \cite{HH} give us lower bounds on the number of dimensions required to embed various classes of manifolds into Euclidean space. Here we look into the inverse problem: if we fix the dimension $m$ of the Euclidean ambient space in which a (local) isometric embedding exists, what can that tell us about the metric of our manifold? This acts as a sort of measure for the complexity of the metric, with the simplest metrics requiring the lowest number of dimensions: an $n$-manifold admits an isometric immersion in $\mathbb{R}^{n}$ if and only if its metric is flat. In section $2$ we rewrite the (local) immersion of an $n$-manifold $(M,g)$ in $\mathbb{R}^{n+k}$ as a section of the $\mathbb{R}^{k}$ bundle over a flat $n$-manifold $(M,f)$. Thus, the existence of a local immersion becomes equivalent to the existence of a $k$-tuple of scalar fields $h\m{\tau}$, ${\tau}\in \{1,\dots,k\}$, such that $f_{ab}= g_{ab}-h\m{\tau}_{;a}h\m{\tau}_{;b}$ is a positive-definite flat metric. The flatness of $f$ is characterised by the vanishing of its curvature. We next apply formulas connecting the curvatures of $2$ metrics, from \cite[Theorem 4.1]{Fodor}, to explicitly write the curvature of $f$ in terms of $g$'s existing connection and curvature.
Thus we rewrite the existence of a local immersion in $\mathbb{R}^{n+k}$ as the existence of a $k$-tuple of scalars $h\m{\tau}$ satisfying a certain nonlinear equation involving $g$'s connection and curvature: $$(h\m{\alpha}_{;pj}h\m{\beta}_{;ik}-h\m{\alpha}_{;pk}h\m{\beta}_{;ij})( \delta\m{\alpha\beta}-h\m{\alpha}_{;a}h\m{\beta}_{;b}g^{ab})^{-1}=R_{pijk},$$ with $f$ positive definite (Theorem 2.3). This is derived by setting the formula for $f$'s curvature to $0$. In section 3 we study the previous equation for $k=1$, obtaining: $$\frac{h_{;pj}h_{;ik}-h_{;pk}h_{;ij}}{1-g^{ab}h_{;a}h_{;b}} = R_{pijk},$$ with positive definiteness of $f$ becoming $g^{ab}h_{;a}h_{;b} < 1$. Thus we characterize the local existence of an immersion of $(M,g)$ in $\mathbb{R}^{n+1}$ as the existence of a scalar field $h$ satisfying the above conditions. This $h$ is essentially a \emph{height} function lifting the flat manifold $(M,f)$ into $\mathbb{R}^{n+1}$. We show the equivalence of the above criteria to the Gauss and Codazzi equations, setting $\Pi_{ab}= {h_{;ab} }/{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$. The Codazzi equation $\Pi_{a[b;c]} = 0$ becomes an integrability condition allowing us to recover $h$ from $\Pi_{ab}$ satisfying the Gauss equation and a set of initial conditions. We then recover the fundamental theorem of hypersurfaces (Theorem 3.2). In section 4 we rewrite the Gauss and Codazzi equations in a new form for manifolds of positive curvature. Considering ${\mathbb{R}_{ab}}^{cd}$ as an operator on the space of $2$-forms, we show that there exists $\Pi_{ab}$ satisfying $$\Pi_{ac}\Pi_{bd}-\Pi_{ad}\Pi_{bc}=R_{abcd}$$ if and only if the Weyl component of $R^{*}_{abcd}$ vanishes, with ${\mathbb{R}^{*}_{ab}}^{cd} = ln({\mathbb{R}_{ab}}^{cd})$ (the logarithm of the curvature operator). Due to positive curvature, $\Pi_{ab}$ turns out to be uniquely defined as $\Pi_{ab}= \pm e^{P^{*}_{ab}}$, where $P^{*}_{ab}$ is the Schouten component of ${\mathbb{R}^{*}_{ab}}^{cd}$ (Theorem 4.4). We strengthen these results and rewrite the fundamental theorem of hypersurfaces (which characterises immersibility in a Euclidean space of codimension $1$) in a form analogous to the Weyl-Schouten theorem (which characterises conformal flatness). This is done in section 5 as Theorem 5.1. In section 6 we study submanifolds of $(M,g)$ immersed in $\mathbb{R}^{n+1}$ obtained by intersecting the immersion of $M$ with an $n$-plane. These are precisely the level sets of $h$, for particular choices of $h$. We obtain a formula for the Riemannian curvature of these manifolds and find it equal to the curvature of the ambient manifold multiplied by a scaling factor greater than or equal to $1$. Thus, if we study the cross-sections of the immersion (in $\mathbb{R}^{n+1}$) of an $n$-sphere with hyperplanes, they will be $(n-1)$-spheres with greater (or equal) positive curvature. If we immerse manifolds of negative curvature, the cross-sections will be manifolds with greater or equal negative curvature. The cross-sections of immersions of flat manifolds will be flat manifolds. \section{Notational conventions} Since we use $m$-tuples of scalars (or sections of an $\mathbb{R}^{m}$ bundle over the $n$-manifold), we need a way to index their elements. In addition to lower indices $V_a$, signifying covariant tensors, and upper indices $V^a$, signifying contravariant tensors, we shall also use middle indices $V\m{\alpha}$ to signify $m$-tuples, or sections of the $\mathbb{R}^{m}$ bundle. That is, $V\m{\alpha} = (V\m{1},V\m{2},...V\m{m})$. 
This gives us two additional rules: just as lower indices contract only with upper indices and vice-versa, so do middle indices contract only with middle indices, contraction signifying the ordinary cartesian product over $m$-vectors in $\mathbb{R}^{m}$. Eg. $V\m{\eta}K\m{\eta} = \sum_{\eta=1}^{m}V\m{\eta}K\m{\eta}$. Additionally, if a tensor has only middle indices, then its covariant derivative coincides with its ordinary derivative (this is a generalisation of the same rule for scalars, as such tensors can be seen as groupings of scalar-fields). \section{Conditions for local immersions} If we have an immersion $h:M\rightarrow \mathbb{R}^m$, of an $n$-manifold $M$ into $\mathbb{R}^{m}$, we can consider it to induce a metric on the manifold, namely the pull-back of the Euclidean metric along $h^{*}$. Associating to every point of the manifold the coordinate field $h\m{\tau}$, with $\tau \in\{1,2,...m\}$, the induced metric is $g_{ab}= h\m{\tau}_{,a}h\m{\tau}_{,b}$. Therefore, we have the following: \begin{defn} A Riemannian $n$-manifold $M$ admits a (local) isometric immersion in $\mathbb{R}^{m}$ if there exists (locally) on the manifold an $m$-tuple of scalar fields, $h\m{\tau}$, satisfying $h\m{\tau}_{,a}h\m{\tau}_{,b} = g_{ab}$. \end{defn} The above formulation does not tell us very much about the relationships between the scalar fields and the intrinsic invariants of $M$. On the other hand, we know that an $n$-manifold admits a local immersion into $\mathbb{R}^{n}$ if and only if its curvature tensor vanishes. We would like a theorem that appears as a generalization of this particular case. The remainder of this section will be devoted to proving such a theorem. \begin{lem} \label {1} A Riemannian $n$-manifold $(M,g)$ admits a local isometric immersion in $\mathbb{R}^{n+k}$ if and only if there exists locally a $k$-tuple of scalars $h\m{\tau}$, such that $f_{ab}= g_{ab}-h\m{\tau}_{;a}h\m{\tau}_{;b}$ is a flat Riemannian metric. \end{lem} \begin{proof} Assume there exists locally a $k$-tuple $h\m{\tau}$ such that $f_{ab}=g_{ab}-h\m{\tau}_{;a}h\m{\tau}_{;b}$ is a flat metric. We can find a faithful (isometric) coordinate chart $m$ from $(M,f_{ab})$ to an open subset $M_f$ of $\mathbb{R}^{n}$. Consider the $\mathbb{R}^{k}$-bundle over $M_f$, and let $h\m{\tau}$ be a section of that bundle. The metric associated to $(m(p),h\m{\tau}(p))$, taken as a subset of $M_f\times \mathbb{R}^{k}$, is $f_{ab}+h\m{\tau}_{,a}h\m{\tau}_{,b} = f_{ab}+h\m{\tau}_{;a}h\m{\tau}_{;b} = g_{ab}$. However, since $M_f$ is a subset of $\mathbb{R}^n$, $M_f\times \mathbb{R}^{k}$ can be taken as a subset of $\mathbb{R}^{n}\times \mathbb{R}^{k}=\mathbb{R}^{n+k}$. In conclusion, $(m,h\m{\tau})$ is a local, isometric immersion of $(M,g)$ in $\mathbb{R}^{n+k}$. Conversely, assume a local isometric immersion $i:M_{i}\rightarrow \mathbb{R}^{n+k}$ exists, with $M_{i}\subset M$. Select a point $q\in M_{i}$ and choose normal coordinates at $q$. Let $p_{T}:M_{i}\rightarrow (T_{q}\cong \mathbb{R}^{n})$ be defined such that $p_T(r)$ is the orthogonal projection of $i(r)$ onto the tangent bundle of $i(M_{i})$ at $i(q)$. Also, define $p_{N}:M_{i}\rightarrow (N_{q}\cong \mathbb{R}^{k})$, such that $p_N(r)$ is the orthogonal projection of $i(r)$ onto the normal bundle of $i(M_{i})$ at $i(q)$. The identification of the tangent bundle with $\mathbb{R}^{n}$ and normal bundle with $\mathbb{R}^{k}$ can be made using a standard choice of normal coordinates. 
Since the differential of $p_{T}$ at $q$ is the linear isomorphism $\delta^{b}_a$, due to the inverse function theorem, there exists an open set $M_{q}\subset M_{i}$ on which $p_{T}$ is a bijection. The pullback metric induced by $p_{T}$ on $M_{q}$ is the metric $f_{ab}$ of $T_{q}$, which is flat. By setting $h\m{\tau}(r) = p_{N}(r)$ for $r\in M_{q}$, the metric obeys the required equation: $g_{ab}=f_{ab}+h\m{\tau}_{;a}h\m{\tau}_{;b}$ for a flat $f_{ab}$, thus completing the proof. \end{proof} \subsection{Equation for local immersions} \begin{thm} A Riemannian $n$-manifold $(M,g)$ admits a local isometric immersion in $\mathbb{R}^{n+k}$ if and only if there exists locally a $k$-tuple of scalars $h\m{\alpha}$, satisfying the following equation: \begin{equation}(h\m{m}_{;pj}h\m{n}_{;ik}-h\m{m}_{;pk}h\m{n}_{;ij})(( \delta\m{mn}-h\m{m}_{;a}h\m{n}_{;b}g^{ab})^{-1})=R_{pijk},\label{eq q}\end{equation} with $f_{ab}= g_{ab}-h\m{k}_{;a}h\m{k}_{;b}$ positive definite. \end{thm} \begin{proof} Lemma 2.2 says the immersion exists if and only if there exists a $k$-tuple $h\m{\tau}$ such that $f_{ab}=g_{ab}-h\m{\tau}_{;a}h\m{\tau}_{;b}$ is a flat Euclidean metric. Define $\Gamma^{a}_{bc} = \frac{1}{2} (f^{-1})^{an}(f_{nb;c}+f_{nc;b}-f_{bc;n})$ and\\ \hspace*{1.6cm}$R^{*l}_{ijk}=\Gamma^{l}_{ik;j} - \Gamma^{l}_{ij;k}+ \Gamma^{l}_{js}\Gamma^{s}_{ik} - \Gamma^{l}_{ks}\Gamma^{s}_{ij}$. According to \cite[Theorem 4.3]{Fodor}, the curvature of a secondary metric $f_{ab}$ on a manifold $M$ is $R^{l}_{ijk}+R^{*l}_{ijk}$. As such, $f_{ab}$ is flat if and only if $R^{l}_{ijk}+R^{*l}_{ijk} = 0$. Substituting $f_{ab}=g_{ab}-h\m{\tau}_{;a}h\m{\tau}_{;b}$, we obtain: $\Gamma^{a}_{bc} = -(f^{-1})^{an}h\m{\tau}_{;n}h\m{\tau}_{;bc}$ \\ Denote $T_{n}K_{n} = T_{a}K_{b}(f^{-1})^{ab}$ as a shorthand-contraction.\\ We have: \\ $\Gamma^{l}_{ik;j} = -(f^{-1})^{ln}_{;j}h\m{\tau}_{;n}h\m{\tau}_{;ik} - (f^{-1})^{ln}h\m{\tau}_{;nj}h\m{\tau}_{;ik}-(f^{-1})^{ln}h\m{\tau}_{;n}h\m{\tau}_{;ikj}$\\\\ $(f^{-1})^{ln}_{;j}=-(f^{-1})^{lq}f_{qm;j}(f^{-1})^{mn}=(f^{-1})^{lq}(h\m{\tau}_{;q}h\m{\tau}_{;m})_{;j}(f^{-1})^{mn}=\\\hspace*{1.2cm} =(f^{-1})^{lq}(h\m{\tau}_{;q}h\m{\tau}_{;mj}+h\m{\tau}_{;qj}h\m{\tau}_{;m})(f^{-1})^{mn}$.\\ This gives:\\ $\Gamma^{l}_{ik;j} = -(f^{-1})^{ln}(((h\m{\tau}_{;n}h\m{\tau}_{;jm})(h\m{p}_{;m}h\m{p}_{;ik})+(h\m{\tau}_{;nj}h\m{\tau}_{;m})(h\m{p}_{;m}h\m{p}_{;ik}))+\\\hspace*{3cm} +(h\m{\tau}_{;nj}h\m{\tau}_{;ik} +h\m{\tau}_{;n}h\m{\tau}_{;ikj}))$ and\\ $\Gamma^{l}_{jm}\Gamma^{m}_{ik}= (f^{-1})^{ln}(h\m{\tau}_{;n}h\m{\tau}_{;jm})(h\m{p}_{;m}h\m{p}_{;ik})$.\\\\ $\Gamma^{l}_{ik;j} + \Gamma^{l}_{jm}\Gamma^{m}_{ik}= -(f^{-1})^{ln}((h\m{\tau}_{;nj}h\m{\tau}_{;m})(h\m{p}_{;m}h\m{p}_{;ik})+(h\m{\tau}_{;nj}h\m{\tau}_{;ik} +h\m{\tau}_{;n}h\m{\tau}_{;ikj}))$ \\ $\Gamma^{l}_{ik;j} + \Gamma^{l}_{jm}\Gamma^{m}_{ik}= -(f^{-1})^{ln}((h\m{\tau}_{;nj}h\m{p}_{;ik})(h\m{\tau}_{;m}h\m{p}_{;m} + \delta\m{\tau p} )+h\m{\tau}_{;n}h\m{\tau}_{;ikj})$\\\\ $R^{l}_{ijk}+R^{*l}_{ijk} = 0 \Longrightarrow R^{l}_{ijk} + \Gamma^{l}_{ik;j} - \Gamma^{l}_{ij;k}+ \Gamma^{l}_{js}\Gamma^{s}_{ik} - \Gamma^{l}_{ks}\Gamma^{s}_{ij} = 0 \Longrightarrow$\\ $R^{l}_{ijk} = (\Gamma^{l}_{ij;k} + \Gamma^{l}_{km}\Gamma^{m}_{ij}) - (\Gamma^{l}_{ik;j} + \Gamma^{l}_{jm}\Gamma^{m}_{ik}) $.\\\\ Multiplying by $f_{nl}$ and expanding the terms, we get:\\ $f_{nl}R^{l}_{ijk}=(h\m{\tau}_{;nj}h\m{p}_{;ik}- h\m{\tau}_{;nk}h\m{p}_{;ij})(h\m{\tau}_{;m}h\m{p}_{;m} + \delta\m{\tau p}) + h\m{\tau}_{;n}(h\m{\tau}_{;ikj} - h\m{\tau}_{;ijk})$\\\\ Applying the Ricci identity, we get:\\ 
$h\m{\tau}_{;n}(h\m{\tau}_{;ikj} - h\m{\tau}_{;ijk}) = h\m{\tau}_{;n}( h\m{\tau}_{;l}R^{l}_{ikj}) = -(h\m{\tau}_{;n} h\m{\tau}_{;l})R^{l}_{ijk}$\\ $(f_{nl}+h\m{\tau}_{;n} h\m{\tau}_{;l})R^{l}_{ijk}=(h\m{\tau}_{;nj}h\m{p}_{;ik}- h\m{\tau}_{;nk}h\m{p}_{;ij})(h\m{\tau}_{;m}h\m{p}_{;m} + \delta\m{\tau p})$\\ $g_{nl}R^{l}_{ijk}=(h\m{\tau}_{;nj}h\m{p}_{;ik}- h\m{\tau}_{;nk}h\m{p}_{;ij})(h\m{\tau}_{;m}h\m{p}_{;m} + \delta\m{\tau p})$\\ $R_{nijk}=(h\m{\tau}_{;nj}h\m{p}_{;ik}- h\m{\tau}_{;nk}h\m{p}_{;ij})(h\m{\tau}_{;m}h\m{p}_{;m} + \delta\m{\tau p})$ \begin{equation} R_{nijk}=(h\m{\tau}_{;nj}h\m{p}_{;ik}- h\m{\tau}_{;nk}h\m{p}_{;ij})(h\m{\tau}_{;a}h\m{p}_{;b}(f^{-1})^{ab} + \delta\m{\tau p}). \label{eq:rnh} \end{equation} Now we prove $(h\m{\tau}_{;a}h\m{p}_{;b}(f^{-1})^{ab} + \delta\m{\tau p}) =( \delta\m{\tau p}-h\m{\tau}_{;a}h\m{p}_{;b}g^{ab})^{-1}$:\\\\ $(h\m{\tau}_{;a}h\m{p}_{;b}(f^{-1})^{ab} + \delta\m{\tau p})( \delta\m{pq}-h\m{p}_{;a}h\m{q}_{;b}g^{ab})=$\\ $= \delta\m{\tau q} + h\m{\tau}_{;a}h\m{q}_{;b}(f^{-1})^{ab} - h\m{\tau}_{;a}h\m{q}_{;b}g^{ab} - h\m{\tau}_{;a}(f^{-1})^{ac}h\m{p}_{;c}h\m{p}_{;d}g^{db}h\m{q}_{;b} = $\\ $= \delta\m{\tau q} + h\m{\tau}_{;a}h\m{q}_{;b} ((f^{-1})^{ab} - g^{ab} - (f^{-1})^{ac}h\m{p}_{;c}h\m{p}_{;d}g^{db})=$\\ $= \delta\m{\tau q} + h\m{\tau}_{;a}h\m{q}_{;b} ((f^{-1})^{ac}(g_{cd})g^{db} - (f^{-1})^{ac}(f_{cd}) g^{db} - (f^{-1})^{ac}(h\m{p}_{;c}h\m{p}_{;d})g^{db})=$\\ $=\delta\m{\tau q} + h\m{\tau}_{;a}h\m{q}_{;b}(f^{-1})^{ac}g^{db}(g_{cd} - f_{cd} - h\m{p}_{;c}h\m{p}_{;d})= \delta\m{\tau q}$\\\\ Combining with \eqref{eq:rnh}, we get: \begin{equation*} R_{nijk}=(h\m{\tau}_{;nj}h\m{p}_{;ik}- h\m{\tau}_{;nk}h\m{p}_{;ij})( \delta\m{\tau p}-h\m{\tau}_{;a}h\m{p}_{;b}g^{ab})^{-1}, \end{equation*} thus completing the proof. \end{proof} \noindent The algebraic relationships of \cite[Theorem 4.3]{Fodor} suffice to determine whether the curvature of a metric is $0$. However, in cases where the metric can be pseudo-Euclidean, they do not distinguish between Euclidean and pseudo-Euclidean (eg. Minkowski) flat metrics. We construct our immersion as a section of the $\mathbb{R}^k$ bundle over a flat manifold $(M,f)$. The additional condition that $f$ be positive definite ensures that $M$ is Euclidean and not pseudo-Euclidean. Also, it is sufficient to check the condition at a single point. As $f$ is invertible, no changes of signature occur on its domain of definition. If we change the signature of $f$ in our theorem from $(k,0)$ to $(k-p,p)$, the existence of the $h\m{\alpha}, \alpha\in \{1,..., k\}$, scalar fields become instead conditions for the local immersion of our $n$-manifold in a pseudo-Euclidean space of signature $(n+k-p,p)$. Many metrics which do not admit local immersions in an Euclidean space $\mathbb{R}^{n+k}$ may instead admit immersions in a pseudo-Euclidean space of the same dimension. \section{Immersion of $n$-manifolds in $\mathbb{R}^{n+1}$} Due to its non-linear terms, obstructions to the solvability of \eqref{eq q} appear difficult to calculate. Here we will study the case when $k=1$, that is, we are immersing an $n$-manifold into $\mathbb{R}^{n+1}$. The $k$-tuples become singular scalars, and the equation simplifies to: \begin{equation}\frac{h_{;pj}h_{;ik}-h_{;pk}h_{;ij}}{1-g^{ab}h_{;a}h_{;b}} = R_{pijk}.\label{eq p}\end{equation} The condition that $f_{ab}$ is positive-definite becomes $g^{ab}h_{;a}h_{;b} < 1$. 
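As a sanity check of \eqref{eq p}, it is worth recording the classical example of the upper unit hemisphere (included here purely as an illustration; it relies only on standard facts about the round sphere). Let $g$ be the induced metric and let $h$ be the restriction of the ambient height function, so that the hemisphere is the graph of $h$ over its equatorial disc. Then $h_{;ab}=-h\,g_{ab}$ and $g^{ab}h_{;a}h_{;b}=1-h^{2}<1$, hence $$\frac{h_{;pj}h_{;ik}-h_{;pk}h_{;ij}}{1-g^{ab}h_{;a}h_{;b}}=\frac{h^{2}(g_{pj}g_{ik}-g_{pk}g_{ij})}{h^{2}}=g_{pj}g_{ik}-g_{pk}g_{ij}=R_{pijk},$$ which is indeed the curvature tensor of the unit sphere, while the flat metric $f_{ab}=g_{ab}-h_{;a}h_{;b}$ is the pullback of the Euclidean metric of the equatorial disc.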
\begin{thm} \label{thm:R} A Riemannian $n$-manifold $M$ admits a local isometric immersion in $\mathbb{R}^{n+1}$ if and only if there exists a local scalar field $h$, satisfying \eqref{eq p}, and $g^{ab}h_{;a}h_{;b} < 1$. \end{thm} \newpage \subsection{Recovering the fundamental theorem of hypersurfaces} \hspace{1cm} \begin{lem} Assume a Riemannian $n$-manifold $(M,g)$ has a local scalar field $h$ satisfying $R_{pijk} = \frac{h_{;pj}h_{;ik}-h_{;pk}h_{;ij}}{1-g^{ab}h_{;a}h_{;b}} $. Then $\Pi_{ab}= \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$ satisfies $R_{pijk} = \Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$ (Gauss equation), and $\Pi_{a[b;c]}=0$ (Codazzi equation). \end{lem} \begin{proof} By substituting $\Pi_{ab}= \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$ into the Gauss and Codazzi equations, we check they are satisfied. \end{proof} \begin{lem} Assume a Riemannian manifold $M$ has on an open contractible subset $N$, a symmetric bilinear form $\Pi_{ab}$ satisfying $R_{pijk} = \Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$ and $\Pi_{a[b;c]}=0$. Then, for any point $p \in N$, and any initial conditions $h$, and $h_{;a}$ at $p$, with $h_{;a}h_{;b}g^{ab}< 1$, there is a maximal open connected set $S\subset N$, $p\in S$, on which there exists $h$ satisfying $\Pi_{ab}= \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$, and the initial conditions at $p$. The field $h$ is unique. \end{lem} \begin{proof} The existence of a field of real symmetric bilinear forms $\Pi_{ab}$ satisfying $R_{pijk} = \Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$ gives the first obstruction to the existence of a height function $h$, satisfying $\Pi_{ab}= \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$. Its existence also guarantees that $g^{ab}h_{;a}h_{;b} < 1$ is automatically satisfied, as having $g^{ab}h_{;a}h_{;b} \geq 1$ would introduce imaginary factors in $\Pi$. The form $\Pi_{ab}$ will be shown to be the second fundamental form of Gauss. Given the existence of such a form, can we always locally construct $h$ satisfying $\frac{h_{;ab}}{(1-g^{ab}h_{;a}h_{;b})^{1/2}} = \Pi_{ab}$? The case of $3$-manifolds provides a counterexample, as $3$-manifolds with a given $\Pi_{ab}$ satisfying the Gauss equation can always be found, yet not all such manifolds admit immersion into $\mathbb{R}^4$. There exists an additional obstruction, which turns out to be the Codazzi equation. We now shall study the obstructions to recovering $h$ from $\Pi$. Consider a path $c^{n}:[0,1]\rightarrow M$ from a point $p$ to $q$, parameterized by natural parameter $t$, and fix $h_{;a}$ at $p$, as an initial condition with $g^{ab}h_{;a}h_{;b}<1$. Knowing $\Pi_{ab} = \frac{h_{;ab}}{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$, we can recover $h_{;ab}$ at $0$. Thus, we recover the $h_{;a}$ along the path by integrating the equation: \begin{equation} \frac {D {h_{;a}}}{dt} = (1-g^{mn}h_{;m}h_{;n})^{1/2} \Pi_{ab} \frac{d c^b}{dt}, \quad g^{ab}h_{;a}h_{;b}<1. \label{eq:dha} \end{equation} This gives us the maximal domain $S\subset N$ and the initial conditions for $h$, and $h_{;a}$ at $p$ guarantee uniqueness. A vector field $h_{a}$ satisfying $\Pi_{ab} = \frac{h_{a;b}}{(1-g^{ab}h_{a}h_{b})^{1/2}}$ admits a local existence if and only if the result of such integration is independent of the path. $h_{a;b}$ must obey the Ricci identity: $h_{a;bc}-h_{a;cb} = h_{n}R^{n}_{abc}$. We do not know $h_{a;b}$, but only $\Pi_{ab} = \frac{h_{a;b}}{(1-g^{ab}h_{a}h_{b})^{1/2}}$. As such, we will apply the anticommutator to the last two indices of $\Pi_{ab;c}$, and examine the result. 
\\ $\Pi_{a[b;c]} = \frac{h_{a;[bc]}}{(1-g^{ab}h_{a}h_{b})^{1/2}}+\frac{1}{2}((\frac{1}{(1-g^{ab}h_{a}h_{b})^{1/2}})_{;c}h_{a;b}- (\frac{1}{(1-g^{ab}h_{a}h_{b})^{1/2}})_{;b}h_{a;c})$.\\ Expanding $(\frac{1}{(1-g^{ab}h_{a}h_{b})^{1/2}})_{;c}$, we get $\frac{-2{g^{ab}h_{a}h_{b;c}}}{(1-g^{ab}h_{a}h_{b})^{3/2}}$. Substituting in $\Pi_{a[b;c]}$ and grouping the terms, we obtain: $\Pi_{a[b;c]} = \frac{h_{a;[bc]} - \frac{h_{n}g^{nm}(h_{m;b}h_{ac}-h_{m;c}h_{a;b})}{2(1-g^{ab}h_{a}h_{b})} }{(1-g^{ab}h_{a}h_{b})^{1/2}}$ $\Pi_{a[b;c]} = \frac{h_{a;[bc]} -\frac{1}{2} h_{n}{\mathbb{R}^n}_{abc}}{(1-g^{ab}h_{a}h_{b})^{1/2}}$.\\ The Ricci identity for $h_{a}$ holds if and only if $\Pi_{a[b;c]}=0$. Thus, $h_{a}$ exists if and only if $\Pi_{a[b;c]}=0$. Or, $\Pi_{ab}$ is a Codazzi tensor. This is the second obstruction to the local existence of $h$. However, since $\Pi_{ab}$ is symmetric, so is $h_{;ab}$. Therefore, if $h_{;a}$ exists, so does $h$. \end{proof} \subsubsection{Notes on the extensibility of solutions} Setting initial conditions $h$ and $h_{;a}$ at a given point, we can define and extend the $h$ field satisfying the above equations in any direction on a contractible set, until $h_{;a}h_{;b} g^{ab} = 1$. If we take $h_{;a}(p) = 0$ as initial condition on a point $p$, and set $|v^{a}\Pi_{ab}v^{b}| \leq r$ for $v^{a}g_{ab}v^{b} = 1$, then we can define the $h$-field on at least a ball of radius $\frac{\pi}{2r}$ around $p$. The equation $$\frac {D {h_{;a}}}{dt} = (1-g^{mn}h_{;m}h_{;n})^{1/2} r \frac{d c^a}{dt},$$ obtained by maximising \eqref{eq:dha}, for a geodesic $c^{a}$, gives $|h_{;a}| = \sin(rd)$ at a distance of $d$ from $p$. Given a resulting immersion, this is also a lower bound for the injectivity radius of its projection to the hyperplane tangent to $p$. As we shall see in the following theorem, different initial conditions for $h$ at a point result in different immersions that are rigid transformations of one another, and are defined on different subdomains of $m$ . We can glue them together (applying transformations to their immersion coordinates to make them match) to extend a solution past the injectivity radius. Note that the resulting $h$ of the extended solution might not always satisfy $\Pi_{ab}= \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$ : there may be regions past the injectivity radius where it satisfies $\Pi_{ab}= - \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$. Both these equations are equally valid solutions for $R_{pijk} = \frac{h_{;pj}h_{;ik}-h_{;pk}h_{;ij}}{1-g^{ab}h_{;a}h_{;b}}$ (see Theorem \ref{thm:R}). To understand this phenomenon, we shall see how $h$ behaves on the simple example of the circle immersed in $\mathbb{R}^{2}$. We have $f:[-\pi,\pi]\rightarrow \mathbb{R}^2$, $f(x) = (\cos(x),-\sin(x))$, giving us $h(x) = -\sin(x)$. The circle has a constant principal curvature of $1$, giving us the equations: $h^{\prime\prime} = {\pm}(1-h^{\prime}h^{\prime})^{\frac{1}{2}}$ . We see that $h(x) = -\sin(x)$ satisfies the positive-signed equation on $[0,\frac{\pi}{2}]$ and the negative-signed equation on $[-\frac{\pi}{2},0]$. At $0$ we have a region on which $|h^{\prime}|= 1$. By reversing the choice of sign of our second fundamental form, we either reverse the sign of $h$, or swap the domains on which these two variants of the equation are satisfied.\\ \noindent Next we present a new proof for the fundamental theorem of hypersurfaces, \cite[Theorem 7.1]{KN}. 
\begin{thm} {\bf Fundamental theorem of hypersurfaces:}\\ A Riemannian $n$-manifold $M$ admits a local isometric immersion in $\mathbb{R}^{n+1}$ if and only if there exists locally a symmetric $\Pi_{ab}$ satisfying ${R_{abcd}}$ = $\Pi_{ac}\Pi_{bd}-\Pi_{ad}\Pi_{bc}$ and $\Pi_{a[b;c]}=0$. For any given $\Pi_{ab}$ satisfying these equations, there exists a local immersion, unique up to rigid transformations, such that $\Pi_{ab}$ acts as the second fundamental form of said immersion. \end{thm} \begin{proof} By choosing a point $p\in M$ and setting $h_{;a}(p)$ and $h(p)$ as initial conditions, the given $\Pi_{ac}$ satisfying the Gauss and Codazzi equations allows us to construct a unique $h$-field in a neighborhood $M_{p}$ of $p$, satisfying $ \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}=\Pi_{ab}$, as shown if Theorem 3.1. The resulting immersion will have the form $i:M_{p}\rightarrow \mathbb{R}^{n+1}$, $i(x)=(f\m{\tau}(x),h(x))$, with $\tau\in\{1,2,..n\}$, where $f\m{\tau}:M_{p}\rightarrow \mathbb{R}^{n}$ is a local isometric immersion of $(M_{p}, f_{ab})$ into $\mathbb{R}^{n}$, with $f_{ab}$ being the flat metric $f_{ab}=g_{ab}-h_{;a}h_{;b}= f\m{\tau}_{;a}f\m{\tau}_{;b}$, and associated unit-normal field $(1-g^{ab}h_{;a}h_{;b})^{\frac{1}{2}}(-f\,{\tau}_{;p}(f^{-1})^{pk}h_{;k},1)$. The proof consists of two steps: first we show that for an immersion of the form $i(x)=(f\m{\tau}(x),h(x))$, the second fundamental form satisfies $\Pi_{ab} = \frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$, then we show that "fixing" an immersion of $M_{p}$ with given $\Pi_{ab}$, in $\mathbb{R}^{n+1}$ using a rigid transformations that identifies the point $p$ with a point $i(p)\in \mathbb{R}^{n+1}$ and a normal and $n$-frame at $p$ with an $(n+1)$-frame at $i(p)$ is equivalent to picking the initial conditions of $h(p)$, $h_{;k}(p)$ and $f\m{\tau}:(M_{p}, f_{ab})\rightarrow \mathbb{R}^{n}$ that uniquely determine the solution to the equation. As such, all local isometric immersions of $(M_{p},g)$ in $\mathbb{R}^{n+1}$ with a the same given (scalar-valued) fundamental form $\Pi_{ab}$ are rigid-transformations of one-another.\\ \noindent\rule{8cm}{0.4pt} \begin{lem} For a given isometric immersion $I\m{\upsilon}:(M,g)\rightarrow \mathbb{R}^{n+k}$ with $\upsilon \in\{1,2..n+k\}$, the second fundamental form (taken as a vector-valued form with values in the normal-bundle as sub-bundle of $\mathbb{R}^{n+k}$) takes the form $\Pi\m{\upsilon}_{ab}=I\m{\upsilon}_{;ab}$. \end{lem} \begin{proof} Let $V^{a}$ be a vector-field and $V\m{\upsilon} = V^{a} I\m{\upsilon}_{;a}$ be its pushforward in the ambient-bundle of the immersion. We have $V\m{\upsilon}_{;b} = V^{a}_{;b} I\m{\upsilon}_{;a} + V^{a} I\m{\upsilon}_{;ab}$. As $I\m{\upsilon}_{;a}$ acts as a pushforward from the tangent bundle of the manifold to the tangent bundle of the immersion, using the Gauss formula, we identify $V_{a;b} I\m{\upsilon}_{;a}$ as the push-forward of the Levi-Civita connection applied to $V^{a}$, and $I\m{\upsilon}_{;ab}$ as the second fundamental form, with values in the normal bundle. \end{proof} \noindent\rule{8cm}{0.4pt}\\ \noindent Applying to our $i\m{\epsilon}:M_{p}\rightarrow \mathbb{R}^{n+1}$, $i(x)=(f\m{\tau}(x),h(x)),\epsilon\in\{1,2,....n+1\}$ we get $\Pi\m{\epsilon}_{ab}=(f\m{\tau}_{;ab},h_{;ab})$, $\tau\in\{1,2,....n\}$ as vector-valued second fundamental form. 
It can be checked that $V\m{\epsilon} = (1-g^{ab}h_{;a}h_{;b})^{\frac{1}{2}}(-f\,{\tau}_{;p}(f^{-1})^{pk}h_{;k},1)$ is a valid choice of unit-normal field , as $V\m{\epsilon}V\m{\epsilon} = 1$ (unit) and $({V\m{\epsilon}})(i\m{\epsilon}_{;p}) = 0$ (normal). We recover our scalar-valued second-fundamental form as $\Pi_{ab} = \Pi\m{\epsilon}_{ab}V\m{\epsilon}$. This gives $\Pi_{ab}= (h_{;ab} - (f\m{\tau}_{;ab}f\,{\tau}_{;p})(f^{-1})^{pk}h_{;k})(1-g^{ab}h_{;a}h_{;b})^{\frac{1}{2}}$. (3)\\ We have $f\m{\tau}_{;ab}f\,{\tau}_{;p}=\frac{1}{2}((f\m{\tau}_{;a}f\m{\tau}_{;p})_{;b}+(f\m{\tau}_{;b}f\m{\tau}_{;p})_{;a}-(f\m{\tau}_{;a}f\m{\tau}_{;b})_{;p} )= \\ \hspace*{3cm}=-\frac{1}{2}((g_{ap}-h_{;a}h_{;p})_{;b}+(g_{bp}-h_{;b}h_{;p})_{;a}-(g_{ab}-h_{;a}h_{;b})_{;p})=\\ \hspace*{3cm}=\frac{1}{2}((h_{;a}h_{;p})_{;b}+(h_{;b}h_{;p})_{;a}-(h_{;a}h_{;b})_{;p} )= (-h_{;ab}h_{;p})$ \hspace{0.25cm} (4)\\ Substituting (4) in (3) gives:\\ $\Pi_{ab}= (h_{;ab} + h_{;ab}h_{;p}(f^{-1})^{pk}h_{;k})(1-g^{ab}h_{;a}h_{;b})^{\frac{1}{2}} = \\\hspace*{0,6cm}=h_{;ab}(1+ h_{;p}h_{;k}(f^{-1})^{pk})(1-g^{ab}h_{;a}h_{;b})^{\frac{1}{2}} $ \hspace*{0.25cm} (5) \\Equation (2) from the proof of theorem 2.3 gives:\\ $(1+ h_{;p}h_{;k}(f^{-1})^{pk}) = (1-g^{ab}h_{;a}h_{;b})^{-1}$. \\ Substituting in (5) gives:\\ $\Pi_{ab}=\frac{h_{;ab} }{(1-g^{ab}h_{;a}h_{;b})^{1/2}}$, as expected. This completes the first part of the proof. Consider an immersion $i:M_{p}\rightarrow \mathbb{R}^{n+1}$, split as $i(x)=(f\m{\tau}(x),h(x))$. Let $(v^1,v^2,v^3...v^n)$ be an orthonormal frame at $p\in M_{p}$, and require that $i(p)=(0,0...0)$, $ \nabla i(v^k)=e_{k}$, the standard basis at $p$, and $e_{n+1}$ be the unit-normal vector of the immersion at $i(p)$. Knowing that $e_{n+1}= (0\m{\tau},1)$ is the unit-normal at $i(p)$ gives $h_{;k}(p)=0$. Additionally, $i(p)=(0,0...0)$ gives $h(p)=0$. As $\Pi_{ab}$ is given, this determines a unique $h$-field. We have $g_{ab}(p)=f_{ab}(p)$ and $i\m{\epsilon}_{;k}(p) =(f\m{\tau}_{;k}(p),h_{;k}(p))= (f\m{\tau}_{;k}(p),0_{;k})$. The immersion $f\m{\tau}:(M_{p}, f_{ab})\rightarrow \mathbb{R}^{n}$ is required to take $p$ to $(0,0...0)$ and the frame $(v^1,v^2,v^3...v^n) \in T_{p}M$ to $(e_1,e_2,e_3...e_n)\in T_{0}\mathbb{R}^n$. As such, it is uniquely determined, and so is $i\m{\epsilon}$ for the given $\Pi_{ab}$ satisfying the Gauss and Codazzi equations. This completes the proof. \end{proof} \section{Reducing the equation to obstruction-form} In this section we derive necessary and sufficient conditions for the existence of $\Pi_{ab}$ satisfying the Gauss and Codazzi equations (hence the existence of local immersions) as the vanishing of certain obstructions, in manifolds of positive-sectional curvature and $n\geq 3$. \begin{prop} For a given tensor $R_{pijk}$, possessing the symmetries of the curvature tensor, there exists an $n$-manifold $M$ immersed in $\mathbb{R}^{n+1}$, such that $R_{pijk}$ is equal to the Riemannian curvature field of $M$ at a given point, if and only if there exists a symmetric tensor $\Pi_{ab}$ such that our tensor satisfies: $$\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}=R_{pijk}.$$ \end{prop} \begin{proof} The fact that an immersion requires the existence of a $\Pi_{ab}$ tensor was proven above. For the converse, use the $\Pi_{ab}$ tensor to construct the immersion given by: $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n+1}$, $f(v^n) = (v^n, \frac{\Pi_{ab}v^av^b}{2})$. The induced metric will have the required curvature tensor $R_{pijk}=\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$ at coordinates $v^n=0$. 
\end{proof} \begin{lem} A sectionally-positive tensor $R_{pijk}$ admitting a form $R_{pijk}=\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$ is positive-definite (as an operator on the space of $2$-forms). \end{lem} \begin{proof} Let $(v^1,v^2,....v^n)$ be an orthonormal basis that diagonalizes $\Pi_{ab}$, with eigenvalues $(\lambda_1, \lambda_2,....\lambda_n)$. Then the eigenvectors of $R_{pijk}$ can be written as $\{w^{kp}= v^k \wedge v^p \mid k,p\in \{1,2...n \}, k<p \}$, with eigenvalues $\lambda_k\lambda_p$ for $w^{kp}$. All the eigenvectors are decomposable $2$-forms, therefore positive sectional curvature implies positive-definite curvature. \end{proof} \begin{cor} A sectionally positive curvature tensor that is not positive-definite requires an Euclidean space of dimension at least $n+2$ for the existence of an isometric immersion. \end{cor} \begin{proof} Assume dimension $n+1$ is sufficient for an isometric immersion. Then, according to Proposition 4.1, there exists a symmetric tensor $\Pi_{ab}$ satisfying $\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}=R_{pijk}$. According to Lemma 4.2, a sectionally positive curvature tensor admiting such a form is positive definite, contradicting our hypothesis. This completes the proof. \end{proof} \noindent {\bf Decomposition of the curvature tensor}\\ Given a curvature tensor $R_{pijk}$ and a fixed metric tensor $g_{ab}$, we recall and make use of the following standard definitions:\\\\ The Ricci tensor as $R_{ij} = {R^k}_{ikj}$, and the scalar curvature as $R={R^a}_a$.\\ The scalar part as $S_{pijk}=\frac{R}{n(n-1)} (g_{pj}g_{ik}-g_{pk}g_{ij})$. \\ The traceless Ricci tensor as $S_{ab}=R_{ab}-\frac{R}{n}g_{ab}$. \\ The semi-traceless part as $E_{abcd}= \frac{1}{n-2}(g_{ac}S_{bd}+S_{ac}g_{bd}-S_{ad}g_{bc}-g_{ad}S_{bc})$. The Schouten tensor as $P_{ab} = \frac{1}{n-2}(R_{ab}-\frac{R}{2(n-1)}g_{ab}) $.\\ The Weyl tensor, or fully-traceless part as \\$C_{abcd}= R_{abcd}-(E_{abcd}+S_{pijk})=R_{abcd}- (g_{ac}P_{bd}+P_{ac}g_{bd}-P_{ad}g_{bc}-g_{ad}P_{bc}) $.\\ \noindent These formulas are usually derived with respect to the Riemann tensor as base-component, but they can be applied to any $(4,0)$ symmetric operator on the space of $2$-forms. Let ${R_{ab}}^{cd}$ be a positive definite Riemann tensor, taken as an operator on the space of $2$-forms. We define ${\mathbb{R}^{*}_{ab}}^{cd} = ln({R_{ab}}^{cd})$. Using this, we can construct a new set of operators from ${\mathbb{R}^{*}_{ab}}^{cd}$, analogous to those defined from ${R_{ab}}^{cd}$. \mbox {For example, $W^{*}_{abcd}$ as the Weyl-component of $R^{*}_{abcd}$.} \begin{thm} Let ${R_{abcd}}$ be a positive-definite Riemann tensor and $n\geq 3$. Then $R_{pijk}$ admits a writing of the form $R_{pijk}=\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$, for a symmetric $\Pi_{pk}$, if and only if the Weyl-component of ${\mathbb{R}^{*}_{ab}}^{cd} = ln({R_{ab}}^{cd})$ is $0$. Furthermore, the (only) resulting solutions are $\Pi_{ab}= \pm e^{P^{*}_{ab}}$, where $P^{*}_{ab}$ is the Schouten component of ${\mathbb{R}^{*}_{ab}}^{cd}$. \end{thm} \begin{proof} Assume ${R_{abcd}}$ can be written as $\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$. Select a basis $(v^1,v^2,....v^n)$ that diagonalizes $\Pi_{pk}$ with eigenvalues $(\lambda_1, \lambda_2,...,\lambda_n)$. The eigenvectors of $R_{pijk}$ can be written as $\{w^{kp}= v^k \wedge v^p \mid k,p\in \{1,2...n \}, k<p \}$, with eigenvalues $\lambda_k\lambda_p$ for $w^{kp}$. 
Since the eigen-values of $R_{pijk}$ are all positive and take the form $\lambda_k\lambda_p$, then the eigen-values of any $\Pi_{ij}$ are either all positive or all negative. If they are all positive, then $\Pi_{ik}$ is uniquely determined from $R_{pikj}$ as $e^{P^{*}_{ab}}$, where $P^{*}_{ab}$ is the Schouten component of ${R^{*}_{ab}}^{cd}$. The eigenvalues of ${R^{*}_{ab}}^{cd}$ take the form $ln(\lambda_{k}\lambda_{p})=ln(\lambda_{k})+ln(\lambda_{p})$ for eigenvectors $v^k \wedge v^p$, and the eigenvalues of $P^{*}_{ab}$ are $ln(\lambda_{k})$ for eigenvectors $v^k$. We recover $\Pi_{ab}$ as $e^{P^{*}_{ab}}$. If they are all negative, then multiply $\Pi_{pj}$ by a factor of $-1$ to relate to the unique positive solution. ${\mathbb{R}^{*}_{ijkl}}$ is equal to $P^{*}_{ik}g_{jl}+g_{ik}P^{*}_{jl}-P^{*}_{il}g_{jk}+g_{il}P^{*}_{jk}$, thus having vanishing Weyl component. Conversely, assume the Weyl component of $R^{*}_{pikj}$ vanishes. \\Then $R^{*}_{pikj}=P^{*}_{ik}g_{jl}+g_{ik}P^{*}_{jl}-P^{*}_{il}g_{jk}-g_{il}P^{*}_{jk}$, and ${R_{pi}}^{kj}=e^{({R_{pi}^{*}}^{kj})}$ satisfies\\ $R_{pijk}=\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$, for \iffalse with \fi $\Pi_{ab}= e^{P^{*}_{ab}}$. This can be checked by using a basis that diagonalizes $P^{*}_{ab}$: Let $(v^1,v^2,....v^n)$ be such a basis, with eigen-values $(ln(\lambda_1), ln(\lambda_2),....ln(\lambda_n))$. The eigen-vectors of $R^{*}_{pikj}=P^{*}_{ik}g_{jl}+g_{ik}P^{*}_{jl}-P^{*}_{il}g_{jk}+g_{il}P^{*}_{jk}$ take the form $w^{kp}= v^k \wedge v^p$ with eigen-values $ln(\lambda_{k}\lambda_{p})=ln(\lambda_{k})+ln(\lambda_{p})$. We check that ${R_{pi}}^{kj}=e^{({R_{pi}^{*}}^{kj})}$ satisfies $R_{pijk}=\Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$, for \iffalse for \fi $\Pi_{ab}= e^{P^{*}_{ab}}$, by verifying the two sides have the same eigen-vectors with the same corresponding eigen-values. Due to the previous sections, $\pm \Pi_{ab}$ are the only solutions. \end{proof} The uniqueness of $\Pi_{ab}$ implies that for $n\geq 3$ and positive sectional curvature an isometric immersion, if it exists, it is rigid, \cite[page 4]{LW}. Attempting to extend the procedure to arbitrary curvature runs into problems, as the logarithm function is not uniquely defined on negative-valued operators. If $n\geq 3$ and $R_{abcd}$ is of rank greater than one, then it is still true that $R_{abcd}= \Pi_{pj}\Pi_{ik}-\Pi_{pk}\Pi_{ij}$ has either $0$ or $2$ solutions, with $\Pi_1 = - \Pi_2$. In the case of $n=2$, there are an infinite number of solutions. They satisfy $det(\Pi)=R$. Negative curvature operators $R_{abcd}$ for $n\geq 3$ do not have (real) solutions, as they take the form $i\Pi_{pj}$, with $\Pi_{pj}$ being a solution for the positive operator $-R_{abcd}$. Although manifolds of negative curvature and dimension greater than $2$ do not admit local isometric immersions in $\mathbb{R}^{n+1}$, they may admit immersions in a pseudo-Euclidean space of signature $(n,1)$ (see end of Section 2). \begin{thm} For $n\geq 3$, a Riemannian $n$-manifold $M$ of positive sectional curvature admits a local isometric immersion in $\mathbb{R}^{n+1}$ if and only if it is of positive curvature operator and the following conditions are satisfied: $C^{*}_{abcd}=0$, and $e^{P^{*}}_{a[b;c]}=0$, where $C^{*}_{abcd}$ and $P^{*}_{ab}$ are the Weyl and Schouten tensors of ${\mathbb{R}^{*}_{ab}}^{cd}=ln({R_{ab}}^{cd})$. 
\end{thm} \begin{proof} Combine the fundamental theorem of hypersurfaces (Theorem 3.2) with Lemma 4.2 and Theorem 4.4: the fundamental theorem of hypersurfaces states that for $M$ to admit an isometric immersion in $\mathbb{R}^{n+1}$ is equivalent to the existence of $\Pi$ satisfying the Gauss and Codazzi equations. Lemma 4.2 states that manifolds of sectionally positive curvature satisfying the Gauss equation have positive curvature operators. Theorem 4.4 (applied to manifolds of positive curvature) allows us to rewrite the Gauss and Codazzi equations in the forms stated above: the existence of $\Pi$ satisfying the Gauss equation becomes $C^{*}_{abcd}=0$, with $\Pi_{ab}$ uniquely determined as $\pm e^{P^{*}_{ab}}$. \end{proof} For the case of $n=3$, this recovers \cite[Theorem 7]{LW}. \begin{lem} For $n>3$ and $M$ a Riemannian manifold of positive sectional curvature, $C^{*}_{abcd} = 0$ implies $e^{P^{*}}_{a[b;c]}=0$. That is, for a manifold of positive sectional curvature and $n>3$, the existence of $\Pi_{ab}$ satisfying the Gauss equation implies that $\Pi_{ab}$ also satisfies the Codazzi equation. \end{lem} \begin{proof} From Theorem 4.4 we know that $C^{*}_{abcd} = 0$ implies the existence of a tensor $\Pi_{ab}$ such that $R_{abcd}=\Pi_{ac}\Pi_{bd}-\Pi_{ad}\Pi_{bc}$, and $\Pi_{ab}= e^{P^{*}_{ab}}$. Note that $\Pi$ is invertible. Our aim is to prove that $\Pi_{a[b;c]} = 0$. \\The second Bianchi identity says: $R_{abcd;e} + R_{abde;c} + R_{abec;d}=0$.\\ Denote $T_{abcde} = R_{abcd;e} + R_{abde;c} + R_{abec;d}$ and $Y_{abc}= \Pi_{a[b;c]}$.\\ Substituting $R_{abcd}=\Pi_{ac}\Pi_{bd}-\Pi_{ad}\Pi_{bc}$ in $T_{abcde}$ and grouping, we obtain:\\ $T_{abcde}=4(\Pi_{d\lbrack b}Y_{a\rbrack ce} + \Pi_{c\lbrack b}Y_{a\rbrack ed} + \Pi_{e\lbrack b}Y_{a\rbrack dc})$.\\ Let $T_{bde} = T_{abcde}(\Pi^{-1})^{ac}$ and $T_e= T_{bde}(\Pi^{-1})^{bd}$. We have:\\ $T_{bde} = T_{abcde}(\Pi^{-1})^{ac} = (n-3)Y_{bde} + 2Y_{ac\lbrack e}\Pi_{d\rbrack b}(\Pi^{-1})^{ac}$\\ $T_e= T_{bde}(\Pi^{-1})^{bd} = 2(n-2)Y_{ace} (\Pi^{-1})^{ac}$.\\ $(n-2)(n-3)Y_{bde}=(n-2)T_{bde}-T_{\lbrack e}\Pi_{d\rbrack b}$.\\ $(n-2)(n-3)\Pi_{b[d;e]}=(n-2)T_{bde}-T_{\lbrack e}\Pi_{d\rbrack b}$.\\ Since $T_{abcde}$, $T_{bde}$ and $T_e$ are $0$, we conclude that $Y_{bde}=\Pi_{b[d;e]} = 0$ for $n>3$, thus completing our proof. \end{proof} \section{Similarities to the Weyl-Schouten theorem and conformally flat manifolds} \noindent Combining Theorem 4.5 and Lemma 4.6, we obtain the following theorem: \begin{thm} For $n\geq 3$, a Riemannian $n$-manifold $M$ of positive curvature operator admits a local isometric immersion in $\mathbb{R}^{n+1}$ if and only if, when $n=3$, $e^{P^{*}}_{a[b;c]}=0$, and when $n>3$, $C^{*}_{abcd}=0$, \noindent where $C^{*}_{abcd}$ and $P^{*}_{ab}$ are the Weyl and Schouten tensors of ${\mathbb{R}^{*}_{ab}}^{cd}=ln({R_{ab}}^{cd})$. \end{thm} \noindent Compare this to the Weyl-Schouten theorem: \begin{thm}[Weyl-Schouten] For $n\geq 3$, a Riemannian $n$-manifold is conformally flat if and only if, when $n=3$, $P_{a[b;c]}=0$, and when $n>3$, $C_{abcd}=0$, \noindent where $C_{abcd}$ and $P_{ab}$ are the Weyl and Schouten tensors of $R_{abcd}$. \end{thm} Note that when $n=2$, every Riemannian manifold is conformally flat. Analogously, when $n=2$, every Riemannian manifold admits a local isometric immersion in $\mathbb{R}^3$ (Cartan-Janet theorem).
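The reconstruction of $\Pi$ in Theorem 4.4 can also be checked numerically at a single point. The following is a minimal computational sketch (an illustration under simplifying assumptions: $n=3$, an orthonormal frame so that $g_{ab}=\delta_{ab}$, the basis $e_1\wedge e_2$, $e_1\wedge e_3$, $e_2\wedge e_3$ of $2$-forms, and NumPy/SciPy for the matrix logarithm and exponential; the variable names are ours). It builds $R$ from a random positive-definite $\Pi$ through the Gauss equation, forms the Schouten tensor of the logarithm of the curvature operator, and verifies that its exponential returns $\Pi$ while the Weyl part of the logarithm vanishes.
\begin{verbatim}
# Numerical check of Theorem 4.4 at a point (illustrative sketch):
# recover Pi from the Gauss-equation curvature via the log of the operator.
import numpy as np
from scipy.linalg import logm, expm

n = 3
pairs = [(0, 1), (0, 2), (1, 2)]        # basis e_a ^ e_b (a < b) of 2-forms

# Random symmetric positive-definite Pi (all eigenvalues positive).
A = np.random.randn(n, n)
Pi = A @ A.T + n * np.eye(n)

# Gauss equation: R_{abcd} = Pi_{ac} Pi_{bd} - Pi_{ad} Pi_{bc}.
R = np.einsum('ac,bd->abcd', Pi, Pi) - np.einsum('ad,bc->abcd', Pi, Pi)

# R as an operator on 2-forms and its matrix logarithm.
M = np.array([[R[a, b, c, d] for (c, d) in pairs] for (a, b) in pairs])
Mlog = logm(M).real

# Back to a 4-index tensor R*, extended by antisymmetry in (ab) and (cd).
Rstar = np.zeros((n, n, n, n))
for I, (a, b) in enumerate(pairs):
    for J, (c, d) in enumerate(pairs):
        v = Mlog[I, J]
        Rstar[a, b, c, d] = Rstar[b, a, d, c] = v
        Rstar[b, a, c, d] = Rstar[a, b, d, c] = -v

# Ricci, scalar curvature and Schouten tensor of R* (g = identity).
Ric = np.einsum('kikj->ij', Rstar)
scal = np.trace(Ric)
P = (Ric - scal / (2 * (n - 1)) * np.eye(n)) / (n - 2)

# Pi is recovered as exp(P*), and the Weyl part of R* vanishes.
g = np.eye(n)
KN = (np.einsum('ac,bd->abcd', g, P) + np.einsum('ac,bd->abcd', P, g)
      - np.einsum('ad,bc->abcd', g, P) - np.einsum('ad,bc->abcd', P, g))
print(np.allclose(expm(P), Pi), np.allclose(Rstar, KN))   # expected: True True
\end{verbatim}
With a $\Pi$ of mixed signature the operator $M$ acquires negative eigenvalues and the real logarithm is no longer defined, matching the discussion following Theorem 4.4.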
\section{Cross-sections and new structures resulting from immersions} The existence of immersions of an $n$-manifold $M$ in $\mathbb{R}^{n+k}$ allows for the study of new geometric structures. We can associate to $M$ a new notion of distance, the Euclidean distance of the immersed points in $\mathbb{R}^{n+k}$. When a global immersion exists, this distance is smooth globally, unlike the intrinsic distance which is not smooth near its cut locus. Additionally we can study a new class of $p$-submanifolds of $M$, those that appear as the intersection of $M$'s immersion with various $(n+p)$-hyperplanes of $\mathbb{R}^{n+k}$. For our particular approach when $k=1$, the $(n-1)$-manifolds that are the cross-sections of $M$ by $n$-planes are the equipotential lines of $h$. If two points have the same $h$, the Euclidean distance between them is the distance measured along the resulting flat metric $f_{ab}=g_{ab}-h_{;a}h_{;b}$. For any two points $a$ $b$, there exists a choice of $h$ such that $h(a)=h(b)$. We shall calculate the curvature tensor of cross-section $(n-1)$-submanifolds, or the manifolds for which $h$ is constant, for a particular $h$. Let $N$ be such a submanifold, and $R_N$ be its curvature tensor. We shall use $a,b,c,d\in \{1,2..n\}$ as to index over $TM$, and $i,j,k,l\in \{1,2..n-1\}$ to index over $TN$. We know that $$ (R_{N})_{ijkl} = (R_{M})_{ijkl} +(K_{ik}K_{jl}-K_{il}K_{jk})$$ where $K$ is the second fundamental form of $N$ as a submanifold on $M$, $R_{M}$ is the curvature tensor of $M$. We may compute, $K_{ij}$ as the restriction of $n_{a;b}$ to $TN$, where $n_{a}$ is a field of unit co-vectors normal to $N$. Since $h$ is constant on $N$, $h_{;a}$ is already a field of normal vectors. We shall denote $P^{a}_{i}$ to be the linear map from $TN$ to $TM$ over $N$, ie. the differential of $N$'s immersion map. Let $n_{a}=\frac{h_{;a}}{(g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}$ be our field of unit normal co-vectors. This gives:\\ $K_{ij}=n_{d;c}P^{d}_{i}P^{c}_{j} = \frac{1}{(g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}h_{;dc}P^{d}_{i}P^{c}_{j}=\frac{(1-g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}{(g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}\Pi_{cd}P^{d}_{i}P^{c}_{j}\implies $\\ $K_{ij}=\frac{(1-g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}{(g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}\Pi_{ij}$. The second fundamental form $K_{ij}$ is the \mbox{restriction} of $\frac{(1-g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}{(g^{mn}h_{;m}h_{;n})^{\frac{1}{2}}}\Pi_{ab}$ to $TN$. Plugging this in the curvature equation:\\ $(R_{N})_{ijkl} = (R_{M})_{ijkl} +(K_{ik}K_{jl}-K_{il}K_{jk})$, we get: $(R_{N})_{ijkl}= \frac{1}{g^{mn}h_{;m}h_{;n}}(R_{M})_{ijkl}$. The curvature tensor of the cross-section manifold, $(R_{N})_{ijkl}$, is simply the restriction/pullback of the curvature tensor of our ambient manifold, $(R_{M})_{abcd}$, with a scaling factor of $\frac{1}{g^{mn}h_{;m}h_{;n}}$. \section{Conclusion} We have calculated the obstructions to the existence of local isometric immersions in $\mathbb{R}^{n+1}$ of $n$-manifolds having positive-sectional curvature. The resulting theorem has a structural similarity to Weyl-Schouten theorem, hinting at a relationship between the two classes of manifolds: conformally flat and locally-hypersurfaces of $\mathbb{R}^{n+1}$. 
Potential further areas of study include finding links that make this relationship explicit, or deriving it as a consequence of some more general principle, reductions to obstruction form of the immersion equations when the codimension is higher than $1$, and immersions in various non-Euclidean spaces or with additional structures (e.g.\ immersions in pseudo-Euclidean spaces, complex spaces, immersions of K{\"a}hler manifolds, etc.).
\section{Introduction} In this paper we are interested in the zygodactyly phenomenon in birds, and in particular in parrots. This arrangement, common in species living on trees, is a distribution of the foot with two toes facing forward and two back. We give a model for the foot and, thanks to the methods of iterated function systems, we are able to describe the reachable set. Moreover we give a necessary and sufficient condition for the grasping problem. Finally we introduce a hybrid dynamical system modeling the owl's foot in various stages of hunting (flight, attack, grasp). A discrete dynamical system models the position of the extremal junction of every finger. A \emph{configuration} is a sequence of states of the system corresponding to a particular choice of the controls, while the union of all the possible states of the system is named the \emph{reachable set for the finger}. The closure of the reachable set is named the \emph{asymptotic reachable set}. Our model includes a control parameter on every phalanx of every finger of the foot, governing the angle between the current phalanx and the previous one. Such an angle ranges in $[\pi-\omega,\pi]$, where $\omega$ is a fixed quantity describing the maximal rotation. Note that when the angle is $\pi$ the phalanx is aligned with the previous one. We assume a constant ratio $\rho$ between the lengths of two consecutive phalanxes. The structure of the finger ensures that the set of possible configurations is self-similar. In particular the sub-configurations can be looked at as scaled miniatures, with constant ratio $\rho$ (named the \emph{scaling factor}), of the whole structure. This is the key idea underlying our model and our main tool of investigation. The biomechanics of the avian foot, in particular in the case of arboreal birds, is widely investigated in the literature. We refer to Norberg (1986) and the references therein for a discussion of the mechanics and energetics of trunk climbing and grasping of the treecreeper. Bock (1999) describes the morphology of woodpeckers and the biomechanical analysis of climbing and perching. Zinoviev and Dzerzhinsky (2000) studied the forces acting on avian limbs at various stages of locomotion, while Sustaita et al. (2013) survey tetrapod grasping in several clades, including birds. The above-mentioned papers share a mechanical approach to the analysis of the grasping and perching capabilities of avian feet: the forces acting on the bird's foot are described by considering in detail the whole musculoskeletal system. Our model is a simplified version of these systems; on the other hand, the dynamical system we consider is indeed a control system: this yields the possibility of investigating at once all the physically reasonable configurations of the foot. Moreover we shall show that a description of the reachable set of the bird's foot can be obtained at an affordable computational cost. The discrete control-theoretic approach to the investigation of limb kinematics is common in robotics - among many others, we refer to the papers by Imme and Chirikjian (1996), Lichter, Sujan and Dubowsky (2002), and Lai and Loreti (2012) for an overview of robotic fingers investigated in a fashion similar to the one proposed in the present paper. 
Finally we note that the connection between the biomechanics of avian feet and robotics is an active research domain, mostly motivated by the fact that the locomotion of birds turned out to be more efficient than human locomotion - see for instance the project by Mederreg et al. (2003), where the locomotion of birds is mimicked in a robotic device. \section{Preliminaries: Iterated Function Systems} An iterated function system (IFS) is a set of contractive functions $f_j:\mathbb C\to\mathbb C$. We recall that a function $f$ in a metric space $(X,d)$ is a contraction if for every $x,y\in X$ $$d(f(x),f(y))< c\cdot d(x,y)$$ for some $c<1$. Hutchinson (1981) showed that every finite IFS, namely every IFS with finitely many contractions, admits a unique non-empty compact fixed point $R$ with respect to the Hutchinson operator $$\mathcal F: S\mapsto \bigcup_{j=1}^J f_j(S).$$ Moreover for every non-empty compact set $S\subseteq \mathbb C$ $$\lim_{k\to\infty} \mathcal F^k(S)=R.$$ The \emph{attractor} $R$ is a self-similar set and it is the only bounded set satisfying $\mathcal F(R)=R$. This result was later generalized to the case of infinite IFS (Mihail and Miculescu (2009)). \section{The finger model on the complex plane} Physical parameters: \begin{itemize} \item $\omega\in(0,2\pi)\setminus\{\pi\}$ is the greatest rotation of the phalanxes; \item $\rho>1$ is the scaling factor of the phalanxes; \end{itemize} Reference parameters: \begin{itemize} \item $x_0$ is the position of the initial junction of the finger; it is assumed for simplicity to coincide with the origin; \item $v_0$ represents the initial orientation of the finger: in particular, when no rotations are actuated, the finger forms a $-v_0\omega$ angle with the $x$-axis; \end{itemize} State and control parameters: \begin{itemize} \item $x_k$ is the position of the $k$-th junction on the complex plane; \item $v_k\in R=\{r_0=0<r_1<\dots<r_n=1\}$ is the rotation control for the $k$-th phalanx; \item rotations are modeled with complex exponentiation - see Figure \ref{grafico}; \end{itemize} \begin{equation}\label{sys} \begin{cases} \displaystyle{x_k=x_{k-1}+\frac{1}{\rho^k} e^{-i\omega\sum_{n=0}^{k} v_n}}\\ \displaystyle{x_0=0} \end{cases} \end{equation} Remark that $v_0\in \mathbb{R}$ is not a control variable but represents the initial orientation of the finger. \begin{figure} \hskip-3cm \begin{picture}(10,110) \includegraphics[scale=0.6]{rot010.pdf} \put(-130,100){$x_{k-1}$} \put(-10,100){$x_{k}$} \put(-60,20){$x_{k+1}$} \put(-25,85){$\omega$} \end{picture} \caption{\label{grafico} In this figure the $(k+1)$-th rotation control $v_{k+1}$ is equal to $1$: this corresponds to rotating the $(k+1)$-th phalanx, whose junctions are $x_k$ and $x_{k+1}$, by an angle $\omega$ with respect to the $k$-th phalanx.} \end{figure} The reachable set in time $k$ is $$R_k(\rho,\omega,R):=\left\{\sum_{j=1}^k\frac{1}{\rho^j}e^{-i\omega\sum_{n=0}^j v_n}\mid v_n\in R\right\}.$$ We also define the \emph{asymptotic reachable set} as \begin{align*} &R_\infty(\rho,\omega, R):=\overline{\bigcup_{k=0}^\infty R_k}\\ =&\displaystyle\left\{\sum_{j=1}^\infty\frac{1}{\rho^j}e^{-i\omega\sum_{n=0}^j v_n}\mid v_n\in R\right\} \end{align*} where $\overline R$ denotes the closure of a set $R$. 
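These reachable sets can be approximated directly from their definition. The following minimal sketch (an illustration: it assumes $v_0=0$, samples the control set $R=[0,1]$ on a uniform grid, and uses Python with NumPy; the function and variable names are ours) enumerates the control words $(v_1,\dots,v_k)$ and accumulates the junction positions of system (\ref{sys}).
\begin{verbatim}
# Approximation of the reachable set R_k of a single finger (illustrative
# sketch): the control set R = [0,1] is replaced by a uniform finite grid.
import itertools
import numpy as np

def reachable_set(k, rho, omega, v0=0.0, n_controls=5):
    """Positions of the k-th junction for all control words on the grid."""
    grid = np.linspace(0.0, 1.0, n_controls)
    points = []
    for word in itertools.product(grid, repeat=k):
        x, angle = 0.0 + 0.0j, v0        # x_0 = 0, cumulative rotation v_0
        for j, v in enumerate(word, start=1):
            angle += v                   # v_0 + v_1 + ... + v_j
            x += np.exp(-1j * omega * angle) / rho ** j
        points.append(x)
    return np.array(points)

golden = (1 + np.sqrt(5)) / 2            # scaling factor used below
pts = reachable_set(k=4, rho=golden, omega=np.pi / 6)
print(len(pts))                          # 5**4 sampled positions of x_4
\end{verbatim}
The same uniform-discretization idea underlies the reachable-set pictures shown later for the bird's foot; refining the grid yields increasingly faithful approximations, by the continuity of the attractor with respect to $R$.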
The following proposition shows that the control system (\ref{sys}) and, in particular, its reachable sets are associated with the (possibly infinite) IFS $$\mathcal F(\rho,\omega,R):=\left\{f_v: x\mapsto \frac{e^{-i\omega v}}{\rho}(x+1)\mid v\in R\right\}.$$ \begin{proposition}[Lai and Loreti (2012)]\label{p1} For every $k\in\mathbb{N}$ \begin{equation}\label{Rk} R_k(\rho,\omega,R)=e^{-i\omega v_0}\mathcal F^k(\rho,\omega,R)(\{x_0\}). \end{equation} Moreover the asymptotic reachable set satisfies $$R_\infty(\rho,\omega,R)=e^{-i\omega v_0} R$$ where $R$ is the attractor of $\mathcal F(\rho,\omega,R)$. \end{proposition} \begin{proof}[Sketch of proof] Equality (\ref{Rk}) can be proved by induction on $k$. Since $e^{i\omega v_0}R_\infty(\rho,\omega,R)$ is a compact set, it is the attractor if and only if it is invariant with respect to $\mathcal F(\rho,\omega,R)$. This invariance property can be shown by double inclusion. \end{proof} \section{Zygodactyl bird's foot} The zygodactyl bird's foot is composed of four pairwise opposable toes; it occurs in parrots, woodpeckers, cuckoos and some owls. The arrangement of the fingers varies with the species; the case we take into account is described in Figure \ref{parrot}.\\ \begin{figure} \begin{center} \begin{picture}(110,300) \put(35,0){\includegraphics[scale=0.28]{toe.pdf}} \put(65,120){\small $\frac{\pi}{6}$} \put(65,195){\small $\frac{\pi}{6}$} \put(100,155){\small $\frac{5}{6}\pi$} \put(26,155){\small $\frac{5}{6}\pi$} \put(-5,263){\small Finger 2} \put(110,289){\small Finger 3} \put(115,3){\small Finger 4} \put(5,80){\small Finger 1} \end{picture} \end{center} \caption{\label{parrot} Our model for a zygodactyl bird's foot - the scaling ratio $\rho$ is $1.2$.} \end{figure} The junctions of every finger are coplanar; we denote by $p_i$, with $i=1,\dots,4$, the plane the $i$-th finger belongs to. All the planes of the fingers are assumed to be orthogonal to the $xy$-plane and we call $\Omega_i$, with $i=1,\dots,4$, the angle the plane $p_i$ forms with the $xz$-plane. In our example $\Omega_1=\Omega_2=\pi/12$, $\Omega_3=\Omega_4=-\pi/12$, so that Finger 1 and Finger 3 (and Finger 2 and Finger 4) are coplanar. The initial rotation $v_0^i$ of the $i$-th finger is set to $0$ for $i=1,3$, while $v_0^2=v_0^4=\pi$. All fingers have in common their first junction, which, for the sake of simplicity, coincides with the origin. We assume all fingers to have the same scaling factor (in particular $\rho$ is set equal to the Golden Mean), the same maximal rotation angle $\omega$ and the same control set $R=[0,1]$, so that each phalanx of each finger of the bird's foot can rotate by an angle $v\omega\in[0,\omega]$. We discuss the reachability of Finger 3, the other cases being similar. By Proposition \ref{p1}, the reachable set of the extremal junction of Finger 3, $R^3_4(\rho,\omega,R)$, can be obtained by the following algorithm: \begin{enumerate} \item iterate the IFS $\mathcal F(\rho,\omega,R)$ four times with initial datum $\{0\}$; \item apply the identification between $\mathbb{C}$ and the $xz$-plane of $\mathbb{R}^3$ given by $x+iz\mapsto(x,0,z)$; \item apply a rotation of $\Omega_3$ about the $z$-axis. \end{enumerate} See Figure \ref{finger1} for some examples. \begin{figure} \subfloat[$\omega=\pi/12$]{\includegraphics[scale=0.3]{e4.jpg}} \subfloat[$\omega=\pi/6$]{\includegraphics[scale=0.3]{e4_2.jpg}} \caption{\label{finger1} An approximation of $R^3_4(\rho,\omega,R)$ for $\omega=\pi/12,\pi/6$ and $\rho$ equal to the Golden Mean, obtained by a uniform discretization of the control set $R=[0,1]$. 
The consistency of this approximation is given by the continuity in Hausdorff metric of the attractor of $\mathcal F(\rho,\omega,R)$ with respect to $R$ - see Lai and Loreti (2012).} \end{figure} \subsection{Perching on a branch} We consider the ability of our model bird's foot to lay in a stable equilibrium on branch, modeled as a cylinder. We say that a configuration is \emph{stable on a branch} if at least two phalanxes of two opposable fingers are tangent to the cylinder, while by \emph{grasping configuration} we mean a stable configuration where at least a couple of tangent phalanxes have non-positive scalar product. Our aim is to describe the possible branches (i.e. cylinders) that can be laid on or grasped by a couple of fingers. We discuss the case of coplanar fingers, say Finger 1 and Finger 3, so that the problem can be set on the complex plane and reduces to consider the problem of being stable on/grasping an appropriate ellipse, namely the section of the cylinder-branch related to the plane $p^1(=p^2)$ - see Figure \ref{plane}. \begin{figure} \begin{picture}(200,170) \put(-30,0){\includegraphics[scale=1]{graspplane.jpg}} \put(150,20){\textred{$y$}} \put(145,155){\textgreen{$z$}} \put(165,130){$p^1$} \put(20,75){\textblue{$x$}} \end{picture} \caption{\label{plane} The cylinder represents a branch and it has radius $1$ and axis $x=0,~z=-1$. The plane $p^1$ is the plane Finger 1 and Finger 3 belong to. The black ellipse is the intersection between $p^1$ and the boundary of the cylinder-branch.} \end{figure} The $k$-th phalanx of the $i$-th finger can be parametrized as follows $$\phi^{(i)}(t;(v_k),k)=x_{k-1}^{(i)}+\frac{t^{(i)}}{\rho^k}e^{-i\sum_{n=0}^k v^{(i)}_nw}$$ with $t^{(i)}\in[0,1]$. On the other hand the ellipse generated by the intersection of a cylinder (with axis parallel to $y$-axis) and $p^1$ has a generic center of the form $(0,c_y)$ and radii of the form $(r \Omega_1,r)$. It can be parametrized as follows \begin{align*} x(\theta;c_y,r)&= r\cos(\Omega_1) \cos(\theta)\\ y(\theta; c_y,r)&= c_y+r \sin(\theta) \end{align*} and its tangent vector given by is \begin{align*} \dot x(\theta;c_y,r)&= -r\cos(\Omega_1) \sin(\theta)\\ \dot y(\theta;c_y,r)&= r \cos(\theta) \end{align*} Using the classical isomorphism $(x,y)\mapsto x+iy$ the ellipse on the complex plane may be parametrized by $$\gamma(\theta;c_y,r)= c_y+ r(\cos(\Omega_1) \cos(\theta)+i \sin(\theta)).$$ We set $$\dot \gamma(\theta;c_y,r):= -r\cos(\Omega_1) \sin(\theta)+ i r \cos(\theta)$$ A phalanx with control sequence $(v_k)$ is tangent to an ellipse on the complex plane with center $ic_y$ and radii $(r \cos(\Omega_1),r)$ if and only if the following system of equations admits a solution $(t,\theta)$. \begin{equation}\label{stable} \begin{cases} \phi^{(i)}(t;(v_k),k) = \gamma(\theta; c_y,r) \quad & ~\text{ (Incidence Condition)}\\ \arg(\phi^{(i)}(t;(v_k),k))=\arg(\dot \gamma(\theta;c_y,r)) \quad &~\text{ (Paralellism Condition)}\\ \omega\in[0,2\pi),~t\in[0,1] \end{cases} \end{equation} We introduce the scalar product on complex field $\cdot$, $$x\cdot y:=(\Re(x),\Im (x))\cdot (\Re(y),\Im (y)).$$ \begin{figure} \includegraphics[scale=0.4]{lay.pdf} \caption{A stable configuration using Finger 1 and Finger 3 with $\rho$ equal to the Golden Mean, $\omega=\pi/6$, $\Omega_1=\pi/12$. The control sequences are constantly equal to $1$, the bird's foot lays on a cylinder of radius $r\sim 0.58$ with axis $x=0$, $z=-r$.} \end{figure} By the reasonings above we have following proposition. 
\begin{proposition} The bird's foot can be stable on a cylinder of center $c$ and radius $r$ using Finger 1 and Finger 3 if there exist two control sequences $(v^{(i)}_{k\leq K_i})$, with $i=1,3$ and $K_1\leq 2$ and $K_3\leq4$, such that system (\ref{stable}) admits a solution.\\ If moreover $(v^{(i)}_{k\leq K_i})$, with $i=1,3$, also satisfy the grasping condition \begin{equation} \label{grasp} \left(\frac{t^{(1)}}{\rho^{K_1}}e^{-i\sum_{n=0}^{K_1} v^{(1)_n}w}\right) \cdot \left(\frac{t^{(3)}}{\rho^{K_3}}e^{-i\sum_{n=0}^{K_3} v^{(3)}_nw}\right) < 0 \end{equation} then the resulting configuration grasps the cylinder. \end{proposition} \begin{remark} Both terms of the scalar product in (\ref{grasp}) represent the orientation of the $K_i$-th phalanx of the $i$-th finger, the grasping condition follows thus directly by the definition of grasping configuration. \end{remark} \section{Owl's foot} Owls have zygodactyly feet ensuring a good grasp when perching or clutching a pray. They are characterized by the further ability of rotating the a third toe to the front when in flight - see Figure \ref{owl}. In particular, when attacking the pray, the talons are spread out wide to increase the chance of a successful strike. Owl's feet are also endowed with the so called digital tendon locking mechanism (TLM), common among bats too. When an object, say a perch or a pray, touches the base of the foot then TLM engages and keeps the toes locked around the object without the need for the muscles to be contracted (Quinn and Baumel (1990)). \begin{figure} \subfloat[Zygodactily configuration]{\includegraphics[scale=0.7]{zampa2.jpg}}\hskip0.5cm \subfloat[Isodactily configuration]{\includegraphics[scale=0.7]{zampa1.jpg}} \caption{\label{owl} Owl's foot in two configurations, (A) is suitable for grasping, (B) is the typical flight configuration: as soon as a pray is targeted, phalanxes spread wide in order to maximize the chance of grabbing it.} \end{figure} From a mathematical point of view, TLM can be modeled as an hybrid system: when the first phalanx of any toe, namely the base of the foot, is not in contact with an object then the position of the phalanxes evolves according the control dynamics described in previous section. If otherwise the first phalanx belongs to an appropriate region of $\mathbb{R}^3$ denoted by $\mathcal O$ and representing an obstacle, say a pray or a branch, then TLM engages. Since TLM is not a voluntary movement, then the corresponding dynamic is not controlled - see Figure. \ref{owl}-(A). \begin{figure} \subfloat[TLM engagement for Finger 4]{ \includegraphics[scale=0.6]{TLM.pdf}}\hskip0.5cm \subfloat[Talon (i.e. extremal of the toe) trajectory]{ \includegraphics[scale=0.6]{TLM-trajectory.pdf}} \caption{\label{tlm} Various stages of TLM engagement: all rotational controls are equal to $v(t)\in[0,\pi/6]$, $t\in[0,T]$, where $T$ is engagement time. } \end{figure} As in previous sections, let $x_{k}^{(h)}$ be the position of the $k$-th junction of the $h$-th toe of owl's foot and $u_{k}^{(h)}\in[0,1]$ be the corresponding rotation control. In our model phalanxes are segments, in particular the first phalanx of the $h$-th toe is the set $$P^{(h)}:=\left\{ a x_{1}^{(h)}\mid a \in [0,1]\right\}.$$ Also define the TLM map $v(t):[0,T]\mapsto [0,1]$ where $T$ is the is the engagement time, that is the time requested to the toes to contract when it touches an object $\mathcal O$, and $v(t)$ is a continuous non-decreasing map. 
We have for every $h=1,\dots,4$ \begin{equation} \begin{cases} \displaystyle{x_{k}^{(h)}(t)= \sum_{j=1}^{k+1} \frac{1}{\rho^j} e^{-i\omega \sum_{n=1}^j u_{n}^{(h)}(t)}} \quad & \text{if } P^{(l)}\cap \mathcal O =\emptyset \text{ for all } l=1,\dots,4\\ \displaystyle{x_{k}^{(h)}(t)= \sum_{j=1}^{k+1} \frac{1}{\rho^j} e^{-i\omega j v(t)}}\quad & \text{otherwise} \end{cases} \end{equation} \begin{remark} If $\mathcal O$ is convex then the condition $P^{(l)}\cap \mathcal O \not=\emptyset$ for some $l=1,\dots,4$ is satisfied for every $t\in[0,T]$. At time $T$ the tendon is locked and no further movement is allowed. This is indeed the goal of such a mechanism: to keep the toes contracted without the contribution of the muscles. We finally remark that the disengagement of the TLM is still an open problem among biologists and zoologists. \end{remark}
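The hybrid rule above can be prototyped in a few lines of Python. The sketch below is illustrative only: the function and variable names are ours, the contact test is simplified to the first junction $x_{1}^{(h)}$ rather than the whole segment $P^{(h)}$, and the junction positions are accumulated as partial sums of the phalanx vectors.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the hybrid TLM dynamics above (not code from the
# paper).  Conventions assumed here: rho > 1 is the scaling ratio, omega the
# maximal rotation angle, the entries of `controls` are the voluntary
# controls u_n in [0,1], v(t) in [0,1] is the (non-decreasing) TLM map, and
# `touches_obstacle` is a user-supplied predicate standing in for the
# intersection test between the first phalanx and the obstacle O.

def junction_positions(controls, rho, omega):
    """Junction positions of one toe as cumulative sums of phalanx vectors
    (1/rho^j) * exp(-i * omega * (u_1 + ... + u_j))."""
    phases = -1j * omega * np.cumsum(controls)
    vectors = (1.0 / rho ** np.arange(1, len(controls) + 1)) * np.exp(phases)
    return np.cumsum(vectors)

def tlm_step(t, voluntary_controls, v, rho, omega, touches_obstacle):
    """Voluntary dynamics while no first phalanx meets the obstacle,
    uniform TLM contraction v(t) otherwise."""
    engaged = any(touches_obstacle(junction_positions(u, rho, omega)[0])
                  for u in voluntary_controls)
    toes = []
    for u in voluntary_controls:
        if engaged:
            u = np.full(len(u), v(t))   # all rotational controls forced to v(t)
        toes.append(junction_positions(u, rho, omega))
    return toes

# Example: four toes, golden-ratio scaling, omega = pi/6, a disc obstacle.
rho, omega = (1 + 5 ** 0.5) / 2, np.pi / 6
controls = [np.full(3, 0.4) for _ in range(4)]
obstacle = lambda z: abs(z - (0.3 - 0.2j)) < 0.25   # hypothetical obstacle O
print(tlm_step(0.5, controls, lambda t: min(t, 1.0), rho, omega, obstacle))
\end{verbatim}
Replacing \texttt{touches\_obstacle} with a distance test to a cylinder section recovers the perching setting of the previous section.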
\section{Introduction} One of the principal objectives in modern cosmology is a good description of the physics that govern the formation of structure from the galactic through to the supercluster scale. Recent theoretical and observational work in this area has concentrated on understanding the relationship between the dynamically dominant dark matter and the baryonic matter in the form of stars and gas. The major challenge has been that observables such as the galaxy correlation function (e.g., Zehavi et al. 2005a, 2005b; Eisenstein et al. 2005; Norberg et al. 2002, 2001), luminosity function (LF; e.g., Babbedge et al. 2006; Ilbert et al. 2005; Dahlen et al. 2005; Wolf et al. 2003; Blanton et al. 2003; Nakamura et al. 2003; Kochanek et al. 2001; Bell et al. 2003; Cole et al. 2001), and the Lyman-alpha forest (e.g., McDonald et al. 2006; Kim et al. 2002; Croft et al. 2002) constrain the distribution of stellar mass and gas in the universe, yet dark matter is the dominant gravitational component, and numerical simulations are more effective at predicting its distribution than that of baryonic mass. \newline\indent Despite the difference between that which is easy to observe and that which is easy to simulate, recent combined N-body + Smoothed Particle Hydrodynamics (SPH) simulations have suggested that dissipation from the baryonic component is relatively unimportant in the process of galaxy formation because the force of gravity from galactic dark matter halos is overwhelmingly dominant (e.g., Weinberg et al. 2006; Nagai \& Kravtsov 2005). These results naturally explain why purely N-body simulations have been able to reproduce the correlation function (e.g., Col\'{i}n et al. 1999; Berlind \& Weinberg 2002; Kravtsov et al. 2004; Tinker et al. 2005; Nagai \& Kravtsov 2005; Tasitsiomi et al. 2004) and the number of satellite galaxies per halo, the Halo Occupation Number (HON), or its probability distribution, the Halo Occupation Distribution (HOD; e.g., Berlind \& Weinberg 2002; see Cooray \& Sheth 2002 for a review). Furthermore, full SPH simulations (e.g., Weinberg et al. 2006, 2004; Yoshikawa et al. 2001; Pearce et al. 2001) and semi-analytic models (e.g., Berlind et al. 2005, 2003; Cole et al. 2000; Zhu et al. 2006), which can be directly compared to observations, have also begun to enjoy a great deal of success at reproducing the LF, correlation function, and HON. These recent improvements in the quality of simulations and semi-analytic models, as well as new observational datasets such as the SDSS and 2dFGRS, represent a major breakthrough in our understanding of the relative distribution of baryonic and dark matter in the universe and, consequently, the formation of large-scale structure. \newline\indent Detailed studies of galaxy clusters offer a complementary approach to the correlation function, Lyman-alpha forest, and simulations for studying the relationship between baryonic and dark matter. Clusters allow us to probe the distribution of baryons and dark matter in the most massive collapsed halos (which also happen to be the best-resolved objects in numerical simulations). The advantage of studying clusters is that the baryonic content in the form of stars and hot gas in the Intra-Cluster Medium (ICM) can be directly measured with observations, and in addition to this, cluster halos are sufficiently massive that their dark matter mass can be measured using either weak lensing, X-ray data, or the dynamics of cluster galaxies.
This allows direct comparison between the baryonic and non-baryonic component on a halo-by-halo basis, whereas for galaxy-mass halos, the mass-to-light ratio (M/L), HON, or bias is usually measured statistically using either galaxy-galaxy lensing (e.g., Seljak et al. 2005; Sheldon et al. 2004; Hoekstra et al. 2005, 2002), or by stacking galaxies and studying the dynamics of satellite galaxies (e.g., Conroy et al. 2005, Brainerd et al. 2003), although recently strong-lensing analysis of a sample of individual systems has been done (Treu et al. 2006). The advantage of being able to study individual halos is that not only can the correlation between baryonic and dark matter be measured (via the HON or M/L ratio), but the relative {\it scatter} in the correlation can also be determined. Good constraints on the scatter are important because it is a direct measure of the stochasticity in the baryonic/dark matter bias. \newline\indent In practice, the best way to measure the baryonic content of clusters in the form of stars is with detailed modeling of the stellar populations using either high-resolution spectroscopy, or multi-band photometric observations. As these observations are often difficult to obtain for large samples of galaxies, K-band light has frequently been adopted as a cheap and efficient proxy for the stellar mass of galaxies. K-band light is neither strongly affected by dust, nor strongly enhanced by starbursts, and is therefore a good tracer of underlying stellar mass of a galaxy (e.g., Brinchmann \& Ellis 2000; Gavazzi et al. 1996; Rix \& Rieke 1993). Furthermore, k-corrections are generally small, and fairly independent of galaxy type (Poggianti 1997; Mannucci et al. 2001). Therefore, a study of the relative abundances of K-band light and mass in clusters is a good probe of the relationship between the cluster dark matter mass and the baryonic mass in the form of stars. \newline\indent In addition to being useful for determining the relationship between dark matter and baryonic matter, a good understanding of the slope and scatter of the cluster K-band luminosity-mass correlation and K-band richness-mass correlation will be extremely valuable in the era of high-yield cluster surveys such as the Red-sequence Cluster Survey 2 (RCS-2, Yee et al. 2007), the South Pole Telescope (SPT, Ruhl et al. 2004), and the Atacama Cosmology Telescope (ACT, Kosowsky 2006). These projects will use the evolution of the cluster mass function, N(M, $z$) as a probe of cosmological parameters (see Mohr 2004 for a review); however, mass measurements for the thousands of clusters that will be detected in these surveys are unlikely to be available from traditional techniques such as L$_{x}$, T$_{x}$, weak-lensing, or dynamics and therefore new, observationally cheaper proxies for cluster mass will be required. Two candidates for this mass indicator are the integrated Sunyaev-Zeldovich Effect (SZE) $y$-parameter (e.g., Motl et al. 2005), as well as the total cluster K-band luminosity or K-band selected richness (e.g., L04, Ramella et al. 2004, Rines et al. 2004). For the SZE surveys the $y$-parameter has significant potential as a mass indicator (e.g., Hallman et al. 2006, Motl et al. 2005) as it is a measure of the mass of cluster baryons in the form of hot gas. For optical/IR surveys, K-band richness/luminosity is an attractive alternative because it is a measure of the total mass of baryons in the form of stars in cluster galaxies. Gladders et al. 
(2006) have already shown that optical richness works well as a cluster mass proxy, and using the optically-selected RCS-1 survey and an empirical mass-richness calibration they determined cosmological parameters consistent with recent concordance values (e.g., Spergel et al. 2006), thus illustrating the potential for this technique. K-band light is a better tracer of stellar mass than optical light, and therefore it might be expected that the scatter in the K-band light/richness vs. mass relation will be smaller than the scatter in the optical light/richness vs. mass relation, or at least might have fewer catastrophic outliers. Using an observable with a smaller scatter, and fewer outliers could improve the accuracy in the measured cluster mass function (e.g., Lima \& Hu 2005). \newline\indent Since the advent of the 2-Micron All Sky Survey (2MASS, Skrutskie et al. 2006) several studies of the relationship between K-band light and dark matter in local ($z <$ 0.1) clusters have been done. Lin et al. (2003, 2004; hereafter L04), Rines et al. (2004), Kochanek et al. (2003), and Ramella et al. (2004) all show that the K-band selected number counts and total cluster K-band light are indeed correlated with the dark matter mass, and have rms scatters of $\sim$ 40\%. L04, Rines et al. (2004), and Ramella et al. (2004) also show that the slope of the L$_{K}$ vs. M$_{200}$ relation is shallower than unity for clusters, suggesting that the conversion of baryons into stellar mass proceeds less efficiently in higher-mass halos. Recently, Lin et al. (2006) presented an analysis of the K-band selected HON for a heterogeneous sample of 27 clusters at 0 $< z <$ 0.9 and found no significant evolution in the HON-Mass relation with redshift. Interestingly, this result was different from their original analysis using a subsample of $\sim$ 20 clusters at 0.2 $< z <$ 0.9 where they found the HON at fixed mass increases by a factor of $\sim$ 2 at $z > 0.2$. \newline\indent The purpose of this paper is to extend the analysis of the local cluster K-band HON and M/L ratio to higher redshift using a well-defined sample of clusters with wide-field imaging and extensive spectroscopic data. Our sample consists of 15 massive, X-ray selected clusters that were part of the Canadian Network for Observational Cosmology (CNOC1, Yee et al. 1996) project. The K-band imaging, LF and density profiles for the clusters were presented in the first paper in this series (Muzzin et al. 2006, hereafter Paper I). In addition to the K-band imaging, these clusters also have considerable optical spectroscopy and $g$ and $r$ photometry both of which extend to R $\sim$ R$_{200}$ for each cluster. \newline\indent The structure of this paper is as follows: In $\S$2 we briefly describe the dataset, and in $\S$3 present updated values for the cluster dynamical masses and X-ray temperatures. In $\S$4 we present the cluster HON and discuss its redshift evolution. Section 5 shows that total K-band luminosity (L$_{200,K}$) and K-band selected richness (parameterized by B$_{gc,K}$) are correlated with the cluster halo mass, and can potentially be used as cheap proxies for this quantity in cluster abundance surveys. In $\S6$ we present the K-band M/L ratios for the CNOC1 clusters and in $\S$7 we present a measurement of $\Omega_{m}$ using the Oort (1958) technique. We conclude with a summary in $\S8$. When computing magnitudes and angular sizes we adopt an $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, H$_{0}$ = 70 km s$^{-1}$ Mpc $^{-1}$ cosmology. 
\section{Data} The CNOC1 clusters are a set of 16 massive clusters with redshifts 0.17 $< z <$ 0.54. Fifteen of the clusters were detected in the Einstein Medium Sensitivity Survey (EMSS, Gioia et al. 1990). Abell 2390 was added as the 16th cluster and is also a massive cluster with strong X-ray emission. The clusters have extensive optical photometry and spectroscopy which were obtained as part of the CNOC1 project (Yee et al. 1996). Our sample is comprised of 15 of the 16 CNOC1 clusters. The cluster MS0906+11 is omitted because it was shown to be a strong binary in redshift-space by Carlberg et al. (1996) and therefore the mass measurement for the cluster is unreliable. The optical photometric and spectroscopic data as well as the K-band photometric data used in this paper were already presented in Paper I. We refer to that paper as well as the CNOC1 paper (Yee et al. 1996) for complete details of the observations, reductions and photometry and below present only a quick overview of the data. \subsection{Optical Photometry and Spectroscopy} Gunn {\it g} and {\it r} band imaging data were obtained at the 3.6m Canada-France-Hawaii-Telescope (CFHT) using the Multi-Object Spectrograph (MOS) camera. Photometry was performed on these data using the Picture Processing Package (PPP, Yee 1991). The photometry reaches a 5$\sigma$ depth of $\sim$ M$^{*}$ + 3 in both the $g$ and $r$ bands. The CNOC1 collaboration also obtained $>$ 2500 spectroscopic redshifts in the fields of the 15 clusters using the CFHTMOS. Of these, approximately one-half are cluster members. The spectroscopy is sparsely sampled, but complete to a depth of K$^{*}$ + 2 for all but the two highest-redshift clusters (MS0016+16 and MS0451-03). The spectroscopy for those clusters is complete to $\sim$ K$^{*}$ + 1. \subsection{Near Infrared Photometry} K-band imaging for 14 of the 15 clusters was obtained at the Kitt Peak National Observatory (KPNO) 2.1m telescope using the Ohio State / NOAO Infrared Imaging Spectrograph (ONIS). Observations were taken in a Mauna Kea filter set version of the K$_{s}$ filter (Tokunaga K), which is nearly identical to the 2MASS K$_{s}$ filter. For our analysis we treat the Tokunaga K filter as the K$_{s}$-band and hereafter refer to it as the ``K-band''. K$_{s}$-band imaging of MS0440+02 was obtained using the PISCES camera on the Steward Observatory 90$''$ telescope. Photometry for the K-band data was also performed using PPP. The K-band imaging is complete to a 5$\sigma$ depth of $\sim$ K$^{*}$ + 2 for all clusters, except the two highest-redshift clusters where it is complete to $\sim$ K$^{*}$ + 1 (see Table 1 of Paper I for a summary of observations). \section{Cluster Physical Parameters} The masses of the CNOC1 clusters are without a doubt the most well-studied for clusters at moderate redshift. There are numerous measurements using X-ray temperatures (T$_{x}$, Hicks et al. 2006; Lewis et al. 1999; Henry 2000; Mushotzky \& Scharf 1997), the dynamics of cluster galaxies (van der Marel et al. 2000; Borgani et al. 1999; Carlberg et al. 1996; 1997; Diaferio et al. 2005), and both strong (Fahlman et al. 1994; Luppino \& Gioia 1995; Pierre et al. 1996, Wu 2000) and weak lensing (Allen 1998; Hoekstra et al. 1998; Smail et al 1995; 1997). In this section we discuss the masses and dynamical radii we adopt for comparing with the K-band properties. \subsection{Dynamical Masses} \indent The first study of the dynamics of the CNOC1 clusters was done by Carlberg et al. 
(1996) who measured the line-of-sight velocity dispersion ($\sigma_{1}$) for each cluster. These velocity dispersions were used to calculate the virial mass (M$_{vir}$), the radius at which the cluster mass density exceeds the critical density of the universe by a factor of 200 (R$_{200}$), and the mass contained within that radius (M$_{200}$), by assuming the cluster had an isotropic velocity ellipsoid. In subsequent work, Carlberg et al. (1997) and van der Marel et al. (2000) examined in detail the velocity dispersion profiles and velocity anisotropy of the clusters. van der Marel et al. (2000) verified that they are compatible with an isotropic velocity ellipsoid and that the cluster mass profile was consistent with an NFW (Navarro et al. 1997) profile. The cluster velocity dispersions were also determined by Borgani et al. (1999) using different background subtraction techniques. They found that the different techniques provide velocity dispersions that are consistent with one another to $\sim$ 10\%, and that the velocity dispersions they derive are also consistent to $\sim$ 10\% with the Carlberg et al. (1997) velocity dispersions. \newline\indent Unfortunately, the M$_{200}$ and R$_{200}$ values determined from these studies are now out of date because they were computed assuming cosmological parameters which are different from recent concordance values (e.g., Spergel et al. 2006). Fortunately, the velocity dispersions do not depend on cosmology and, given that the clusters are consistent with an isotropic velocity ellipsoid, we have recomputed both R$_{200}$ and M$_{200}$ with an $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$ cosmology using the equations \begin{equation} R_{200} = \frac{\sqrt{3}\sigma_{1}}{10 H(z)}, \end{equation} and \begin{equation} M_{200} = \frac{4}{3}\pi R_{200}^3 \cdot 200\rho_{c}, \end{equation} where H($z$) is the Hubble constant at redshift $z$, and $\rho_{c}$ is the critical density for a flat universe. Equation 1 can be derived by assuming M$_{200}$ $\approx$ M$_{vir}$ and equating the virial theorem (M$_{vir}$ = $\frac{3}{G}\sigma_{1}^{2}R_{vir}$) and Equation 2. For all clusters the $\sigma_{1}$ values determined by Carlberg et al. (1997) are used to compute M$_{200}$ and R$_{200}$. Errors in M$_{200}$ are calculated by propagating the errors in $\sigma_{1}$ in the standard way. The new values of R$_{200}$ and M$_{200}$ are on average $\sim$ 1.6 times larger than the Carlberg et al. (1997) values. We list the updated values in Table 1. \subsection{X-ray Temperatures} \indent X-ray temperatures have been measured for the CNOC1 clusters by several authors. The clusters were originally discovered in the Einstein Medium Sensitivity Survey (EMSS, Gioia et al. 1990); however, only X-ray fluxes were computed using the original data. Subsequently, Lewis et al. (1999) measured T$_{x}$ for 13 of the 15 clusters in our sample using data from the ROSAT satellite. They found that the average cluster masses determined from T$_{x}$ were in good agreement with the updated dynamical masses, even though individual clusters could have discrepancies as large as a factor of 2. Recently, Hicks et al. (2006) used archival Chandra Advanced CCD Imaging Spectrometer (ACIS) data to determine accurate T$_{x}$ values for 13 of the 15 clusters in our sample. They use single- and double-$\beta$-model fits to determine the temperatures. Several clusters (A2390, MS0839+29, MS1358+62, MS1455+22) are also corrected for significant cooling flows.
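\newline\indent For reference, Equations 1 and 2 are straightforward to evaluate numerically. The short Python sketch below is illustrative only (it is not the code used to produce Table 1); it assumes a flat $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$ background, evaluates the critical density at the cluster redshift, consistent with the use of H($z$) in Equation 1, and the example velocity dispersion is a placeholder.
\begin{verbatim}
import numpy as np

G     = 6.674e-11           # m^3 kg^-1 s^-2
MPC   = 3.0857e22           # m
M_SUN = 1.989e30            # kg

def hubble(z, h0=70.0, om=0.3, ol=0.7):
    """H(z) in s^-1 for a flat Lambda-CDM cosmology."""
    return h0 * 1.0e3 / MPC * np.sqrt(om * (1.0 + z) ** 3 + ol)

def r200_m200(sigma1_kms, z):
    """R_200 [Mpc] and M_200 [M_sun] from the line-of-sight velocity
    dispersion, following Equations (1) and (2)."""
    hz     = hubble(z)
    sigma1 = sigma1_kms * 1.0e3                        # m/s
    r200   = np.sqrt(3.0) * sigma1 / (10.0 * hz)       # Eq. (1), metres
    rho_c  = 3.0 * hz ** 2 / (8.0 * np.pi * G)         # critical density at z
    m200   = (4.0 / 3.0) * np.pi * r200 ** 3 * 200.0 * rho_c   # Eq. (2)
    return r200 / MPC, m200 / M_SUN

# Placeholder example: sigma_1 = 1000 km/s at z = 0.3 gives R_200 ~ 2.1 Mpc
# and M_200 ~ 1.5e15 M_sun.
print(r200_m200(1000.0, 0.3))
\end{verbatim}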
Hicks et al. (2006) find that the X-ray masses they compute agree well with the dynamical masses determined with the velocity dispersions of Carlberg et al. (1997) and Borgani et al. (1999). These new T$_{x}$ values are superior to the older data, and we use them for comparisons with the HON (\S 4), L$_{200,K}$ (\S 5.1), and B$_{gc,K}$ (\S 5.2). There are two clusters without a Chandra observation (MS1224+20, MS1231+15). For MS1224+20 we use the temperature listed in Lewis et al. (1999). MS1231+15 has no X-ray temperature available so we assume a temperature of 6 keV, which is appropriate given the X-ray luminosity (Yee \& Ellingson 2003). We list the X-ray temperatures and their associated errors in Table 1. \section{The Halo Occupation Number} Clusters are massive dark matter halos, and therefore computing their HON simply requires counting the total number of cluster members within some dynamical radius, typically R$_{200}$. The extensive spectroscopy for the CNOC1 clusters could be used to differentiate between field and cluster galaxies; however, we prefer to measure the HON using statistical background subtraction. Using spectroscopic data to separate field and cluster galaxies is preferable when computing cluster properties that depend on the luminosity of cluster members (e.g., LFs, total luminosity $\S$5.1, or M/L ratios $\S$6.1) because these calculations require information on the membership of individual galaxies to compute distance moduli and k-corrections. However, unless the spectroscopy of a cluster field is quite complete, statistical background subtraction is better suited for counting the overdensity of cluster galaxies. Furthermore, using statistical background subtraction provides a consistent technique for each cluster and avoids any biases from cluster to cluster that might be caused by poor determination of the spectroscopic selection function. \newline\indent The background counts are measured from our own K-band imaging survey of CNOC2 fields (Yee et al. 2000). The data were taken using the PISCES camera on the Steward Observatory 90$''$ telescope and are reduced and photometered using the same techniques as for the CNOC1 dataset. The imaging consists of 23 (8$'$ $\times$ 8$'$) fields and covers a total area of 0.26 square degrees. Complete details of the observations, data reduction, and photometry will be presented in a future paper (Lin et al. in preparation). Figure 1 shows background counts from the CNOC2 fields with the number counts from the K20 Survey (Cimatti et al. 2002) and the MUNICS Survey (Drory et al. 2001) overplotted for comparison. The CNOC2 galaxy counts agree well with the counts from these surveys as well as various other K-band surveys in the literature (e.g., Elston et al. 2006; Maihara et al. 2001; Saracco et al. 2001; Gardner et al. 1993). \newline\indent We measure the HON for the clusters within the radius R$_{500}$ (the radius at which the cluster mass density exceeds the critical density of the universe by a factor of 500), rather than the more typical R$_{200}$. It would be preferable to use R$_{200}$, as it is similar to the cluster virial radius; however, some of the clusters are very sparsely sampled at R$_{200}$, and therefore require large corrections to compensate for the poor sampling. These corrections propagate as large uncertainties in the total number of galaxies.
Conversely, the coverage at R$_{500}$ is complete for all clusters (except MS1455+22) and therefore those measurements have much smaller uncertainties and should also be more robust. The cluster R$_{500}$ is estimated from the R$_{200}$ values assuming an NFW profile with c = 5. Hereafter we refer to the HON within R$_{500}$ as N$_{500}$ and the HON within R$_{200}$ as N$_{200}$. \newline\indent Thus far, the largest study of the cluster HON is the work of L04, who measured N$_{500}$ and N$_{200}$ for 93 X-ray selected, $z <$ 0.1 clusters using 2MASS data. In order to make a direct comparison between our sample and their sample, we measure N$_{500}$ using the same limiting magnitude as L04. Their technique for measuring the HON involves determining the Schechter parameters K$^{*}$ and $\phi^{*}$ for each cluster and then integrating the Schechter function to an absolute magnitude limit of M$_{K}$ = -21 to find the total cluster counts. For their ensemble of 93 clusters, they measured K$^{*}$ = -24.02 $\pm$ 0.02. Therefore, statistically speaking, their N$_{500}$ values are measured at K$^{*}$ + 3, although the depth varies from cluster to cluster. Our data do not reach K$^{*}$ + 3 for most clusters; therefore, galaxies are counted to K$^{*}$ + 1 and these counts are then extrapolated to K$^{*}$ + 3. \newline\indent K$^{*}$ evolves with redshift; however, we showed in Paper I that the evolution tightly follows the luminosity evolution of a single-burst population formed at high redshift. The single-burst model from Paper I is used to infer K$^{*}$ at the redshift of each cluster. The background subtracted, K $<$ K$^{*}$ + 1 counts are scaled to the appropriate counts for K$^{*}$ + 3 by extrapolating a Schechter function with a faint-end slope of $\alpha$ = -0.9. This value of $\alpha$ is consistent with the faint-end slope measured for the CNOC1 cluster LF in Paper I ($\alpha$ = -0.84 $\pm$ 0.08) and is similar to the faint-end slope measured for the L04 data ($\alpha$ = -0.84 $\pm$ 0.02). \newline\indent We plot N$_{500}$ vs. M$_{500}$ for the clusters in the left panel of Figure 2, and N$_{500}$ vs. T$_{x}$ in the right panel of Figure 2. In both cases there is a good correlation between the two parameters. The Spearman rank-correlation coefficients are 0.79 and 0.77 respectively, implying probabilities of 7.1 $\times$ 10$^{-4}$ and 8.5 $\times$ 10$^{-4}$ that the data are uncorrelated. \newline\indent One cluster which has an extremely small N$_{500}$ for its mass is MS1455+22. This cluster is also a significant outlier in the M$_{200}$-B$_{gc}$ relation (B$_{gc}$ is a measure of cluster richness, see $\S$5.2) from Yee \& Ellingson (2003, hereafter YE03). They suggest that the most logical interpretation is either that the mass has been significantly overestimated or else that the cluster is in a very advanced state of evolution, different from most clusters in this redshift range. Therefore, MS1455+22 is excluded when we fit both M$_{500}$ vs. N$_{500}$ and T$_{x}$ vs. N$_{500}$ and is plotted in Figure 2 as an open triangle. Using a $\chi^2$-minimization technique for errors in both parameters (Press et al. 1992) we find the best-fit relation for the N$_{500}$ - M$_{500}$ correlation is \begin{equation} Log(M_{500}) = (1.40 \pm 0.22)Log(N_{500}) + (11.58 \pm 0.54), \end{equation} and has a reduced-$\chi^2$ of 1.60.
For the N$_{500}$ - T$_{x}$ correlation the best-fit relation is \begin{equation} Log(T_{x}) = (0.77 \pm 0.11)Log(N_{500}) + (-1.01 \pm 0.27), \end{equation} and has a reduced-$\chi^2$ of 2.46. The correlations have root-mean-square (rms) scatters of 51\% and 50\% respectively. For the remainder of the paper, the percentage rms scatter in all correlations is computed using the same method as YE03. The rms deviation from the fit in terms of the dependent variable (in this case, M$_{500}$) is determined and then compared to its mean value from the entire sample. \newline\indent The scatter in these correlations is larger than the 35\% scatter in the local scaling relations measured by L04. However, the random errors in our data are larger than theirs, and clearly a portion of the scatter comes from these measurement errors. Given that the total scatter is the result of both the measurement errors in M$_{500}$ and N$_{500}$ as well as some intrinsic scatter, we estimate the intrinsic scatter by assuming that the measurement scatter is independent of the intrinsic scatter and then subtracting the mean measurement scatter of the dependent variable in quadrature from the total scatter. Using this simplistic method we estimate that the intrinsic scatter in N$_{500}$ - M$_{500}$ is 37\% and for N$_{500}$ - T$_{x}$ it is 46\%. \newline\indent We compare the N$_{500}$ values from the CNOC1 clusters with the L04 values in the top panel of Figure 3. L04 measured N$_{500}$ by integrating the cluster LFs over an NFW spatial distribution and therefore it is the N$_{500}$ within a spherical volume. Our N$_{500}$ is measured using statistical background subtraction and is therefore the number of galaxies in a cylinder of radius R$_{500}$. We convert our N$_{500}$ in cylinders to N$_{500}$ in spheres by multiplying by a deprojection constant of 0.791, the ratio of the number of galaxies enclosed in a sphere to that in a cylinder of radius R$_{500}$ for an NFW profile with c = 5. The dash-dot blue line in Figure 3 is the best-fit relation for the L04 clusters, and the solid red line is the best-fit relation for the CNOC1 clusters. The slope of the CNOC1 relation (N$_{500}$ $\propto$ M$_{500}^{0.71\pm0.11}$) is slightly shallower than the L04 relation that includes the BCGs (N$_{500}$ $\propto$ M$_{200}^{0.82\pm0.04}$), but is consistent within the errors. It is also consistent with the scaling relation found by Rines et al. (2004) for the CAIRNS clusters (N$_{200}$ $\propto$ M$_{200}^{0.70\pm0.09}$); however, their relation is measured using N$_{200}$ and M$_{200}$ and a different luminosity cut. The overall normalization of the CNOC1 N$_{500}$ is nearly identical to the L04 normalization. As a comparison between the samples we plot the measured N$_{500}$ values divided by those predicted from the local M$_{500}$-N$_{500}$ relation in the bottom panel of Figure 3. Figure 3 shows that the HON - mass correlation measured by L04 does not evolve between $z = $ 0 and $z \sim$ 0.3. The higher redshift CNOC1 clusters have N$_{500}$ values which are only 4\% $\pm$ 11\% smaller than would be predicted by the local relation. The mean(median) N$_{500}$/N$_{500,fit~local}$ is 0.96(0.90) $\pm$ 0.11, and is consistent with no evolution in the HON with redshift. As a comparison, the L04 clusters have a mean(median) N$_{500}$/N$_{500,fit~local}$ of 1.04(1.01) $\pm$ 0.04. \newline\indent The non-growth of the HON at a fixed mass with increasing redshift is different from the results of L04.
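\newline\indent The scatter bookkeeping used above can be summarized schematically as follows; the percentage rms scatter follows the YE03 definition and the intrinsic scatter is obtained by quadrature subtraction. This sketch is illustrative only, and the 35\% measurement scatter in the example is an assumed value, chosen to reproduce the quoted 37\% intrinsic scatter.
\begin{verbatim}
import numpy as np

def percent_rms_scatter(y_obs, y_pred):
    """RMS deviation of the dependent variable about the fit, expressed as
    a fraction of its mean value over the sample (the YE03 convention)."""
    resid = np.asarray(y_obs) - np.asarray(y_pred)
    return np.sqrt(np.mean(resid ** 2)) / np.mean(y_obs)

def intrinsic_scatter(total, measurement):
    """Quadrature subtraction of the mean measurement scatter from the
    total scatter; returns 0 if the measurement scatter dominates."""
    return np.sqrt(max(total ** 2 - measurement ** 2, 0.0))

# Example: a 51% total scatter with an assumed ~35% mean measurement
# scatter corresponds to roughly a 37% intrinsic scatter.
print(intrinsic_scatter(0.51, 0.35))
\end{verbatim}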
In addition to their local clusters, L04 also computed the HON for a subset of clusters with $0.1 < z <$ 0.9 (K-band imaging from De Propris et al. 1999) which had available X-ray temperatures and compared it to the local clusters. Their comparison showed that the HON was roughly a factor of 2 larger in the $z >$ 0.1 clusters. From this they suggested that half of all cluster galaxies may be removed (via either merging or disruption) between $z \sim$ 0.6 (all but two of their clusters are $z <$ 0.6) and $z = 0$ in order to match the local HON. \newline\indent It is unclear why the conclusions of our studies are so different. We did note in Paper I that it is challenging to reconcile the idea of a strong evolution of the cluster HON between $z \sim$ 0.6 and $z = $ 0 and the passive evolution of the K-band cluster LF. Paper I showed that the evolution of the cluster K$^{*}$ tightly follows the prediction of a passive evolution model between $z \sim$ 0.5 and $z = 0$ and those results reaffirmed the findings of the pioneering cluster K-band luminosity function study of de Propris et al. (1999), as well as recent K-band LFs of high-redshift clusters (e.g., Toft et al. 2003; Kodama \& Bower 2003; Ellis \& Jones 2002; Strazzullo et al. 2006). Interestingly, all of the $z >$ 0.1 clusters used by L04 are part of the de Propris et al. (1999) sample. If 50\% of cluster galaxies are either merged, or disrupted by tidal forces between $z \sim$ 0.6 and $z = 0$, then it seems unlikely that the evolution of K$^{*}$ could follow a passive evolution model, unless the probability of being merged or tidally disrupted was independent of galaxy K-band luminosity (i.e., stellar mass). While mass-independence may be plausible in the case of mergers, more massive galaxies are more difficult to disrupt. If a significant number of disruptions do occur, they should preferentially destroy galaxies on the faint-end of the luminosity function and therefore will change its overall shape. \newline\indent L04 appeal to the N-body simulations of Kravtsov et al. (2004) for an explanation of the strong redshift evolution of the HON. Kravtsov et al. (2004) show that for dark matter halos, the HON at a given mass becomes approximately a factor of three larger from $z =$ 0 to $z =$ 5. They show that the dark matter HON at a given mass is smaller at lower redshift because low mass halos merge with high mass halos as well as become tidally truncated or disrupted by massive halos in high-density regions such as clusters. \newline\indent More recent simulations which incorporate both dark matter as well as baryonic matter using Smoothed Particle Hydrodynamics (SPH, e.g., Nagai \& Kravtsov 2005, Weinberg et al. 2006) show that when the baryonic component is included, the interpretation of the HON derived by counting galaxies becomes more subtle. Nagai \& Kravtsov (2005) simulated a set of 8 clusters and found that as subhalos enter the cluster R$_{200}$ they are stripped of $\sim$30\% of their total dark matter mass. Once they fall into the cluster they are further stripped until $\sim$70\% of their original dark matter mass is removed. Their result confirms the earlier N-body simulations which showed the dark matter HON is depleted with decreasing redshift, especially in high-density environments; however, the SPH simulations also show that primarily the dark matter is stripped from the halos and very little of the baryonic mass (in the form of stars and gas) is lost during this process. 
The baryonic matter remains because it tends to cool and fall to the center of the subhalo, and is amongst the most tightly bound mass in the subhalo. \newline\indent These simulations suggest that the HON, when selected by counting the number of massive dark matter subhalos, is quickly depleted in high density regions such as clusters because of tidal stripping and merging of the subhalos. However, when the HON is selected by the baryonic mass of halos (i.e., using K-band number counts) there should be little evolution in the HON at a given halo mass, because the baryonic component of subhalos is tightly bound, and will only be depleted in the case of full disruption. This interpretation is consistent with our data and is also consistent with the observation that both K$^{*}$ (Paper I, L04, de Propris et al. 1999) and $\alpha$ (L04) for cluster galaxies are basically identical across more than an order of magnitude in cluster mass (Paper I, L04, de Propris et al. 1999). Given that the efficiency of tidal stripping is correlated with cluster mass, it is hard to understand how the K-band LF of cluster galaxies could be independent of cluster mass unless the baryonic component is unaffected in the process. \newline\indent In summary, we find that the K-band selected HON of clusters at a given mass does not evolve significantly between $z =$ 0 and $z \sim$ 0.3. This result is consistent with the more recent work of Lin et al. (2006) who measured the HON within R$_{2000}$, R$_{1000}$ and R$_{700}$ for a sample of 27 clusters at 0 $< z <$ 0.9 (a few of which are included in L04) and also found no evolution in the HON at fixed mass. We argue that the non-evolution of the HON is naturally explained by recent SPH simulations, which show that while dark matter halos encounter a significant amount of tidal stripping in high-density regions such as clusters, the baryonic component within remains mostly intact. Furthermore, this interpretation reconciles the strong tidal stripping with the purely passive evolution of the cluster K-band LF, and its invariance across the cluster mass spectrum. \newline\indent Our data cannot rule out the possibility that the simulations may exaggerate the tidal stripping and that, similar to the baryonic component, the dark matter mass-selected HON at a fixed cluster mass also does not evolve within the same redshift range. A useful way to test this would be to compare the M/L ratios of cluster galaxies and field galaxies at large radii and look for evidence of truncated halos within the cluster population. Unfortunately, this is a challenging task, because there are few tracers of the dark matter potential at large radii for galaxies. There is now evidence from galaxy-galaxy lensing studies of massive clusters that supports the tidal-stripping scenario (Halkola et al. 2006, Natarajan et al. 2002); however, in lower-mass clusters (M $\sim$ 10$^{13}$-10$^{14}$ M$_{\odot}$), the same trend is not observed (Mandelbaum et al. 2006). A more comprehensive study of galaxy-galaxy lensing in clusters in the range of M $\sim$ 10$^{14}$-10$^{15}$ M$_{\odot}$ would be particularly useful for testing this interpretation. \begin{figure*} \epsscale{1.0} \plotone{f1.eps} \caption{\footnotesize {\it Red Dots}: Number of galaxies per square degree per 0.5 magnitudes as a function of magnitude determined from 0.26 square degrees of CNOC2 K-band imaging data. {\it Green Triangles}: Number counts from the K20 Survey.
{\it Blue Squares}: Number counts from the MUNICS Survey.} \end{figure*} \begin{figure*} \plotone{f2.eps} \caption{\footnotesize Left Panel: Plot of M$_{500}$ vs. N$_{500}$ for the CNOC1 clusters. The solid line is the best-fit linear relation and has an rms scatter of 50\%. Right Panel: Plot of T$_{x}$ vs. N$_{500}$ for the same clusters. The solid line is the best-fit linear relation and has an rms scatter of 51\%. MS1455+22 is plotted as an open triangle.} \end{figure*} \begin{figure*} \epsscale{0.8} \plotone{f3.eps} \caption{\footnotesize {\it Top Panel}: Plot of N$_{500}$ vs. M$_{500}$ for the CNOC1 clusters (large red points) and the L04 clusters (small blue points). MS1455+22 is plotted as an open triangle. The solid red line is the best-fit relation for the CNOC1 clusters, and the dash-dot blue line is the best-fit relation for the L04 clusters. The slope of the CNOC1 correlation is slightly shallower than, but consistent with, the L04 slope. {\it Bottom Panel}: Plot of the measured N$_{500}$ divided by the value predicted by the L04 scaling relation as a function of redshift for the L04 clusters (blue points) and CNOC1 clusters (red points). The CNOC1 normalization is consistent with the L04 normalization suggesting no evolution in the HON with redshift.} \end{figure*} \section{K-Band Light and Richness as an Indicator of Cluster Mass} In this section we measure the total K-band luminosity within R$_{200}$ for the clusters (L$_{200,K}$) and examine its correlation with cluster mass. We also determine the correlation between K-band selected richness and cluster mass. \subsection{Total K-Band Luminosity} \indent L$_{200,K}$ for a galaxy cluster is defined as the sum of the K-band luminosity of all cluster galaxies within R$_{200}$ to a fixed absolute magnitude. In principle the measurement is straightforward; however, in practice, data are never homogeneous and additional corrections must be applied. In the case of the CNOC1 clusters, the clusters lie at a fairly large range of redshifts and therefore the light from each galaxy must be both k-corrected and evolution corrected to a common redshift. The need for these corrections means that the membership and approximate spectral type for each galaxy in the cluster field must be determined. This cannot be done as precisely with statistical background subtraction, and therefore we make use of the extensive spectroscopic catalogues. The spectroscopy is complete to K$^{*}$ + 1 for all clusters, but is sparsely sampled. Therefore, a spectroscopic selection function is used to correct for the sampling. The method for determining the spectroscopic selection function was developed by the CNOC1 (Yee et al. 1996) and CNOC2 (Yee et al. 2000) collaborations and is discussed in detail in those papers as well as Paper I. Similar to the analysis in Paper I, we ignore the ``secondary'' selection effects (color, position, and $z$) and use only the magnitude weights, which are the overwhelmingly dominant selection bias (Yee et al. 1996). \newline\indent The k-correction for each galaxy is taken from the models of Poggianti (1997). The k-corrections depend only mildly on spectral type, and the color/spectral-type model discussed in Paper I is used to estimate a spectral type for each galaxy. Also, the LFs from Paper I showed that the luminosity evolution in the clusters is passive, but significant (0.35 $\pm$ 0.06 magnitudes from $z$ = 0 to $z$ $\sim$ 0.5).
Therefore, an evolution correction from the Poggianti (1997) models is applied to each galaxy, where we have transformed the Poggianti (1997) evolution corrections to our cosmology (see Paper I, \S 6.1.2). By applying both k and evolutionary corrections the measured L$_{200,K}$ is corrected to $z$ = 0, and therefore these values can be directly compared to local studies. \newline\indent The L$_{200,K}$ for some clusters must also be corrected for the incomplete coverage of R$_{200}$. We noted that for the HON (\S 4) the coverage corrections propagate to large errors in the HON when it is computed within R$_{200}$ and because of that, the HON was measured within R$_{500}$. The incomplete radial coverage is less problematic when computing L$_{200,K}$ because much of the total cluster luminosity comes from the BCG ($\sim$ 5 - 30\%). Despite the fact that the number of cluster galaxies at R$_{500}$ $<$ R $<$ R$_{200}$ is large, the most luminous galaxies tend to reside in the cluster core, and therefore galaxies at R$_{500}$ $<$ R $<$ R$_{200}$ contribute less to the total luminosity than to the HON. \newline\indent The clusters without complete coverage have been observed in a strip through the cluster core; therefore, the coverage problems are corrected by computing the luminosity in circular shells. The total luminosity of a shell is multiplied by the ratio of the total area of the shell to the area of the shell with imaging/spectroscopic data. Several of the clusters (MS0016+16, MS0451-03, MS1006+12, MS1008-12, and MS1455+22) have no coverage for a few of the outermost shells. For these clusters a correction based on the profile of clusters which have the best coverage in the outer regions is applied. This correction is small in four of the five cases ($\sim$ 5-10\%), again because the majority of the cluster light comes from the core. MS1455+22 has a very large R$_{200}$ (which is likely to be overestimated, see \S 4) and therefore the correction is much larger ($\sim$ 60\%). \newline\indent Once the selection function, k-correction and evolution corrections have been applied, and the sum of the luminosity of galaxies brighter than K$^{*}$ + 1 has been tabulated, the ``total'' luminosity of the cluster is finalized by extrapolating the LF to a fixed absolute magnitude. We choose to extrapolate to M$_{K}$ = -21 (i.e., K$^{*}$ + 3 at $z$ = 0), the same limiting absolute magnitude used by L04 and effectively the same limiting magnitude as for the HON. We evolution-correct the combined LF from all 15 clusters to z = 0 (K$^{*}$ = -24.14 $\pm$ 0.15 and $\alpha$ = -0.84 $\pm$ 0.08, see Paper I) and use this for extrapolation. This LF is consistent with the average LF found by L04 (K$^{*}$ = -24.02 $\pm$ 0.02 and $\alpha$ = -0.84 $\pm$ 0.02) and because our clusters have been evolution corrected to $z =$ 0, extrapolating the luminosity to M$_{K}$ = -21 results in an L$_{200,K}$ which can be directly compared between the samples. Before the correction is applied, the contribution from the BCGs is removed from the total cluster light. The BCGs contribute a significant, but variable, fraction of the total cluster light ($\sim$ 5-30\% at K$^{*}$ + 1) and do not obey a Schechter function. Including them as part of the extrapolation of the Schechter function would overestimate the total cluster light. The light from the BCG is re-added to the total once the extrapolation has been done.
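\newline\indent The extrapolation step just described can be written compactly in terms of incomplete gamma functions. The Python sketch below is schematic (the real calculation also carries the selection weights and the k-, evolution, and coverage corrections described above), and the function names are ours.
\begin{verbatim}
from scipy.special import gammaincc   # regularized upper incomplete gamma

# For a Schechter function, the luminosity in galaxies brighter than a
# limit M_lim is proportional to Gamma(alpha + 2, x), where
# x = 10**(-0.4 * (M_lim - M_star)).  The correction from a K* + 1 limit
# to M_K = -21 is therefore a ratio of (regularized) incomplete gamma
# functions.  The BCG is removed before the correction and re-added after.

def schechter_lum_correction(m_bright, m_faint, m_star, alpha=-0.84):
    """Factor taking the luminosity summed to m_bright down to the fainter
    limit m_faint (valid for alpha > -2)."""
    a = alpha + 2.0
    x_bright = 10.0 ** (-0.4 * (m_bright - m_star))
    x_faint = 10.0 ** (-0.4 * (m_faint - m_star))
    return gammaincc(a, x_faint) / gammaincc(a, x_bright)

def extrapolated_l200(l_sum_bright, l_bcg, m_star=-24.14, alpha=-0.84):
    """Extrapolate the non-BCG light from K* + 1 to M_K = -21 and re-add
    the BCG light (all inputs are placeholders)."""
    factor = schechter_lum_correction(m_star + 1.0, -21.0, m_star, alpha)
    return (l_sum_bright - l_bcg) * factor + l_bcg

# Example with the z = 0 values K* = -24.14, alpha = -0.84 quoted above:
print(schechter_lum_correction(-23.14, -21.0, -24.14))
\end{verbatim}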
Lastly, a bootstrap error for each cluster is computed by performing the entire analysis using 300 resamplings with replacement from the data. Table 2 lists the final L$_{200,K}$ values as well as the bootstrap errors. \newline\indent We examine the correlation between the cluster mass and light by plotting Log(M$_{200}$) vs. Log(L$_{200,K}$) in the first panel of Figure 4. The best-fit linear relation is \begin{equation} Log(M_{200}) = (1.20 \pm 0.16)Log(L_{200,K}) - (0.95 \pm 2.21), \end{equation} and the fit has a reduced-$\chi^2$ of 2.65, where again we have ignored MS1455+22 because it is likely to have an incorrect M$_{200}$. The rms scatter in the M$_{200}$ - L$_{200,K}$ relation is 43\% and from this we estimate that the intrinsic scatter is 31\%. The scatter in the correlation is about a factor of 1.3 larger than the 34\% total, and 24\% intrinsic scatter measured by L04 for local clusters. If we invert the axes, we find L$_{200,K}$ $\propto$ M$_{200}^{0.83\pm0.11}$. This agrees with the scaling relation for local clusters with the BCG included which is L$_{200,K}$ $\propto$ M$_{200}^{0.72\pm0.04}$ (L04), and somewhat less well with the scaling relation for local groups which is L$_{200,K}$ $\propto$ M$_{200}^{0.64\pm0.06}$ (Ramella et al. 2004). This shows that the slope, while slightly shallower for groups, does not change significantly over $\sim$ 2 orders of magnitude in cluster mass. Furthermore, the reasonable agreement between the slope and intrinsic scatter in the correlations at different redshifts suggests that whatever physical processes are responsible for building in the correlation have largely taken place by $z$ $\sim$ 0.3, and that there is little change in the L$_{200}$ - M$_{200}$ relation from $z \sim$ 0.3 to $z$ = 0, besides possibly a small decrease in the scatter. \newline\indent In the second panel of Figure 4 we plot the correlation between the cluster X-ray temperature and L$_{200,K}$. The best fit relation with MS1455+22 excluded is \begin{equation} Log(T_{x}) = (0.77 \pm 0.08)Log(L_{200,K}) + (-9.45 \pm 1.07), \end{equation} and has a reduced-$\chi^2$ of 2.35. The rms scatter in the relation is 24\% and this corresponds to an intrinsic scatter of 14\%, notably better than the M$_{200}$ - L$_{200,K}$ relation. As the CNOC1 clusters are all massive, relaxed X-ray systems, this may not be surprising. Using $r$-band data, YE03 found the correlation between B$_{gc}$ and T$_{x}$ for the CNOC1 clusters to have a smaller scatter (21\%) than the B$_{gc}$ - M$_{200}$ relation (31\%), although they used the T$_{x}$ values determined by Lewis et al. (1999) rather than the Hicks et al. (2006) values. \newline\indent If we compare the scatter in the M$_{200}$ vs. L$_{200,K}$ relation to the scatter in the correlations of M$_{200}$ vs. T$_{x}$, L$_{x}$, and B$_{gc}$ (optical) determined by YE03 for these clusters, we find that the total K-band light is approximately as accurate as those parameters in inferring the dynamical mass. YE03 showed the scatter in the M$_{200}$ vs. T$_{x}$, L$_{x}$, and B$_{gc}$ (optical) for these clusters was 29\%, 35\%, and 31\% respectively. This demonstrates that L$_{200,K}$ is as good as, but no more accurate than traditional mass indicators at inferring the cluster dynamical mass. 
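\newline\indent A schematic version of the bootstrap error estimate described above (300 resamplings with replacement) is given below; the statistic being resampled, a simple sum of luminosities, is only a stand-in for the full per-cluster pipeline of weights, corrections, and extrapolation.
\begin{verbatim}
import numpy as np

def bootstrap_error(galaxies, statistic, n_boot=300, seed=0):
    """Bootstrap uncertainty: resample the galaxy list with replacement
    n_boot times, recompute the statistic, and take the spread."""
    rng = np.random.default_rng(seed)
    galaxies = np.asarray(galaxies)
    n = len(galaxies)
    samples = [statistic(galaxies[rng.integers(0, n, size=n)])
               for _ in range(n_boot)]
    return np.std(samples, ddof=1)

# Toy usage: each entry is a weighted galaxy luminosity and the cluster
# luminosity is their sum (a placeholder for the real analysis).
lum = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=60)
print(bootstrap_error(lum, np.sum))
\end{verbatim}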
It appears that, although K-band light is a fairly clean tracer of an individual galaxy's stellar mass, this does not result in a smaller scatter for the L$_{200,K}$ - M$_{200}$ scaling relation, suggesting that the scatter measured in optical scaling relations reflects a real scatter in the relative amounts of stellar mass and dark matter mass in clusters, and is not enhanced by a variance in star-formation properties or stellar populations from cluster to cluster. \newline\indent It is possible that the scatter we measure is enhanced by systematics in the data analysis caused by the need for k-corrections, evolution corrections, spectroscopic selection functions, and coverage corrections; however, the level of scatter we find is of order that found by L04 in local clusters, which do not suffer from those uncertainties. Given that there is no reason to expect the scatter in the scaling relations to become {\it smaller} at higher redshift, it would suggest that these corrections have been applied properly and do not significantly increase the overall scatter. \newline\indent The correlations from Figure 4 are pleasing in the sense that total K-band light is observationally much cheaper to measure than L$_{x}$, T$_{x}$ or M$_{200}$ (via a velocity dispersion or weak lensing) for a given cluster. For clusters in this redshift range, a wide-field NIR camera on a 4m-class telescope can achieve a limiting magnitude of M$_{K}$ $<$ K$^{*}$ + 1 with an integration time of only a few minutes. This means follow-up of many candidates in the coming generation of high-yield cluster cosmology projects is easily achievable. Unfortunately, a major concern for cosmology projects is that although L$_{200,K}$ appears to be a good proxy for M$_{200}$ or T$_{x}$ it cannot be used in practice because it requires {\it a priori} knowledge of R$_{200}$. In the next section we show that the NIR-selected richness in a fixed aperture is also well-correlated with the cluster mass and can be used as a mass proxy. \begin{figure*} \plotone{f4.eps} \caption{\footnotesize Left Panel: Plot of Log(M$_{200}$) vs. Log(L$_{200,K}$) for the CNOC1 clusters. The solid line is the best-fit linear relation. The rms scatter around the relation is 43\% (31\% intrinsic). Right Panel: Plot of T$_{x}$ vs. Log(L$_{200,K}$). The solid line is the best-fit linear relation and has an rms scatter of 24\% (14\% intrinsic). MS1455+22 is plotted as an open triangle in both panels and is excluded in the fits. } \end{figure*} \subsection{B$_{gc}$ as an Indicator of Cluster Mass} The good correlation between L$_{200,K}$ and M$_{200}$ found for the CNOC1 clusters, and by other authors (e.g., Lin et al. 2003; L04; Rines et al. 2004; Ramella et al. 2004) demonstrates that K-band light can be used as an efficient estimator of the cluster halo mass. Of course, measuring L$_{200,K}$ requires knowledge of R$_{200}$, which is generally an unknown, whereas the cluster richness, parameterized by B$_{gc}$, does not require R$_{200}$. B$_{gc}$ is the amplitude of the 3-dimensional, spatial correlation function between the cluster center and the cluster galaxies. If the shape of the 2-dimensional angular correlation function is assumed (i.e., w($\theta$) $\propto$ $\theta^{(1-\gamma)}$, with $\gamma$ $\sim$ 1.8), then its amplitude can be measured by counting galaxies in a fixed aperture around the cluster center. B$_{gc}$ is then measured by deprojecting the angular correlation function via ($\xi(r) \propto r^{-\gamma}$) and scaling it by a luminosity function.
A full derivation and motivation for B$_{gc}$ is presented in Longair \& Seldner (1979). The B$_{gc}$ parameter is defined as: \begin{equation} B_{gc} = \mbox{N}_{net}\frac{(3-\gamma)\mbox{D}^{\gamma-3}\theta^{\gamma-1}}{2 \mbox{A$_{\theta}$} \mbox{I}_{\gamma}\Psi[\mbox{M}(\mbox{m}_{0},z)]}, \end{equation} where N$_{net}$ is the background-corrected cluster galaxy count, D is the angular diameter distance to the cluster redshift, $\theta$ is the angular size of the counting aperture, A$_{\theta}$ is the angular area of the counting aperture, I$_{\gamma}$ is a geometric deprojection constant which depends on $\gamma$ (I$_{\gamma}$ = 3.78 for $\gamma$ $\sim$ 1.8), and $\Psi$ is the cluster LF integrated to the absolute magnitude M corresponding to the apparent magnitude m$_{0}$ at the redshift $z$. The normalization of the LF ($\phi^{*}$) is the universal normalization, and not the cluster normalization, so that B$_{gc}$ is the overdensity of galaxies compared to the field density, not the average cluster density. This formula for B$_{gc}$ is slightly different from the formula presented in Yee \& Lopez-Cruz (1999, their equation 3) in that it contains the A$_{\theta}$ term. We note that in their formula for B$_{gc}$, Yee \& Lopez-Cruz (1999) had inadvertently left out the $A_{\theta}$ term due to a transcribing error. \newline\indent At first, B$_{gc}$ may appear to be an unnecessarily complicated measure of the cluster richness; however, using it has some distinct advantages. As interest in an observationally cheap proxy for cluster mass has increased, so has the number of studies which have looked at the correlation between cluster richness and mass (e.g., Hansen et al. 2005; Miller et al. 2005; Gilbank et al. 2004; L04; Rines et al. 2004; Ramella et al. 2004; Lin et al. 2003; Kochanek et al. 2003; YE03). These studies define the cluster richness as the number of galaxies within a fixed aperture, to a fixed magnitude limit. However, because different authors use different apertures and different magnitude limits, comparison between studies is extremely difficult. The advantage of the B$_{gc}$ parameter is that it assumes a universal spatial distribution and luminosity function for clusters. Therefore, aside from statistical fluctuations, {\it the value of B$_{gc}$ is, in principle, identical regardless of what aperture is used to count galaxies, and which magnitude limit is chosen}. \newline\indent Yee \& Lopez-Cruz (1999) verified this was true, but showed that measuring B$_{gc}$ from a fixed aperture with R = 500 kpc (using H$_{0}$ = 50 km s$^{-1}$ Mpc$^{-1}$) produced the lowest statistical errors. Furthermore, they showed that counting galaxies to M $<$ M$_{*}$ + 1 was all that was required for B$_{gc}$ to be robust. Therefore, our values of B$_{gc,K}$ are computed using a fixed aperture of 500 kpc (although we use H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$) and by counting galaxies to K$^{*}$ + 1. \newline\indent Again, because B$_{gc}$ requires counting the number of cluster members, we use statistical background subtraction, rather than the spectroscopic weights. YE03 demonstrated that B$_{gc}$'s computed using both techniques agree extremely well. In order to determine the integrated LF parameter ($\Psi$[M(m$_{0}$,$z$)]) a universal cluster luminosity function must be assumed. We showed in Paper I that besides the passive evolution of the stellar populations, this is a reasonable assumption for the CNOC1 clusters.
When calculating $\Psi$[M(m$_{0}$,$z$)] we assume passive evolution of the LF of our ``average'' cluster (K$^{*}$ = -24.14, $\alpha$ = -0.84). Unfortunately, we do not have a measurement of the universal $\phi^{*}$ in units of Number/Mpc$^{3}$. This cannot be determined from the cluster data alone because clusters are high-density regions in the universe, and not representative of the mean value of $\phi^{*}$. Rather than using values from studies of the K-band field LF (which still have fairly large errors) we adopt the approach of Yee \& Lopez-Cruz (1999) for determining $\phi^{*}$. They showed that a self-consistent value of $\phi^{*}$ could be estimated by evolving the cluster LF assuming a simple model for the evolution of M$^{*}$({\it z}). The normalization ($\phi^{*}$) is determined by requiring that the integrated counts from the LF reproduce the background galaxy counts. We perform the same procedure using a z$_{f}$ = 5.0 passive evolution model (see Paper I) to parameterize the evolution of M$^{*}$({\it z}). By comparing the model background counts to the true background counts and using a $\chi^2$-maximum-likelihood technique we determine $\phi^{*}$ = 4.34 $\times$ 10$^{-3}$ Mpc$^{-3}$. This value of $\phi^{*}$ is slightly larger than the $\phi^{*}$ = 3.40 $\pm$ 0.29 $\times$ 10$^{-3}$ Mpc$^{-3}$ determined locally (Kochanek et al. 2001), and about 3 times as large as the $\phi^{*}$ = 1.78$^{+1.5}_{-0.9}$ $\times$ 10$^{-3}$ Mpc$^{-3}$ measured at 0.2 $<$ {\it z} $<$ 0.65 by Pozzetti et al. (2003). However, even if this value is incorrect it will be systematically incorrect for all clusters and will only affect the intercept in the correlation between B$_{gc,K}$ and other physical parameters. It will not affect the slope or scatter, which are of principal interest. \newline\indent Column 6 of Table 2 lists the measured values of B$_{gc}$ for the clusters. The errors have been computed using the prescription from Yee \& Lopez-Cruz (1999), \begin{equation} \frac{\Delta B_{gc,K}}{B_{gc,K}} = \frac{(N_{net} + 1.3^{2}N_{bg})^{1/2}}{N_{net}}, \end{equation} where N$_{bg}$ is the number of background counts within the 500 kpc aperture, and the 1.3$^2$ term is used to approximately account for the clustering of background galaxies. \newline\indent In the left panel of Figure 5 we plot Log(B$_{gc,K}$) vs. Log(M$_{200}$) for the clusters. The best-fit linear relation with MS1455+22 excluded is \begin{equation} Log(M_{200}) = (1.62 \pm 0.24)Log(B_{gc,K}) + (9.86 \pm 0.77). \end{equation} The fit has a reduced-$\chi^2$ of 1.29, and the rms scatter of the correlation is 35\% (18\% intrinsic). This implies that the K-band selected richness is slightly better than the total K-band light for predicting the mass of a galaxy cluster. The difference in total and (intrinsic) scatter between the two parameters is notable, 35\%(18\%) in M$_{200}$ - B$_{gc,K}$ vs. 43\%(31\%) in M$_{200}$ - L$_{200,K}$, and this may be because of the smaller aperture used to measure B$_{gc}$. \newline\indent In the right panel of Figure 5 we plot Log(B$_{gc,K}$) vs. Log(T$_{x}$). The best-fit linear relation with MS1455+22 excluded is \begin{equation} Log(T_{x}) = (0.94 \pm 0.13)Log(B_{gc,K}) - (2.20 \pm 0.42), \end{equation} and has a reduced-$\chi^2$ of 2.00 and an rms scatter of 25\% (16\% intrinsic). This is very similar to the scatter seen in the L$_{200,K}$ - T$_{x}$ relation and shows that the fixed-aperture richness is an excellent indicator of the cluster X-ray temperature at the $\sim$ 25\% level.
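\newline\indent Equations 7, 8, and 9 translate directly into code. The sketch below is illustrative only: the inputs must be supplied in mutually consistent units following the conventions of Longair \& Seldner (1979) and Yee \& Lopez-Cruz (1999), and the example richness is hypothetical.
\begin{verbatim}
import numpy as np

def bgc(n_net, d_ang, theta, a_theta, psi, gamma=1.8, i_gamma=3.78):
    """Equation (7): N_net, angular diameter distance D, aperture radius
    theta, aperture area A_theta, and integrated LF Psi."""
    return (n_net * (3.0 - gamma) * d_ang ** (gamma - 3.0)
            * theta ** (gamma - 1.0) / (2.0 * a_theta * i_gamma * psi))

def bgc_fractional_error(n_net, n_bg):
    """Equation (8): Poisson counting errors with a 1.3^2 factor for the
    clustering of background galaxies."""
    return np.sqrt(n_net + 1.3 ** 2 * n_bg) / n_net

def m200_from_bgc(bgc_k):
    """Equation (9): Log M_200 = 1.62 Log B_gc,K + 9.86."""
    return 10.0 ** (1.62 * np.log10(bgc_k) + 9.86)

# Placeholder usage: a hypothetical richness B_gc,K = 2000 maps to roughly
# 10^15.2 solar masses under Equation (9).
print(m200_from_bgc(2000.0))
\end{verbatim}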
\newline\indent If we compare the accuracy of B$_{gc,K}$ to the optical B$_{gc}$ we find that they are almost identical at predicting M$_{200}$ (scatters of 35\% and 31\%, respectively) as well as T$_{x}$ (scatters of 25\% and 21\%, respectively). B$_{gc,K}$ for the CNOC1 clusters also has a similar scatter to the fixed-aperture richnesses measured for local clusters in the K-band. L04 showed that the scatter in the number of galaxies within 0.75 Mpc vs. M$_{500}$ was 43\%, and estimated that $\sim$ 24\% of the scatter was intrinsic. The L04 cluster masses are measured using T$_{x}$ and therefore it is more relevant to compare the L04 intrinsic scatter to the intrinsic scatter in the CNOC1 B$_{gc}$ - T$_{x}$ relation. The scatter in the L04 relation is somewhat larger than the scatter from the CNOC1 relation, but this may be due to the fact that the CNOC1 clusters are generally more massive than the L04 clusters. \begin{figure*} \epsscale{0.9} \plotone{f5.eps} \caption{\footnotesize Left Panel: Plot of Log(M$_{200}$) vs. Log(B$_{gc,K}$) for the CNOC1 clusters. The solid line is the best-fit linear relation and has an rms scatter of 35\% (18\% intrinsic). Right Panel: Plot of Log(T$_{x}$) vs. Log(B$_{gc,K}$) for the same clusters. The solid line is the best-fit linear relation and has an rms scatter of 25\% (16\% intrinsic). MS1455+22 is plotted as an open triangle in both panels and is excluded in the fits.} \end{figure*} \newline\indent These results demonstrate that relatively cheap IR imaging can be used to determine cluster masses with good accuracy. Specifically, IR imaging of even higher redshift clusters ($z >$ 0.5) will be valuable for determining masses in large optical cluster surveys. IR richnesses will likely be more robust than optical richnesses at predicting cluster masses at high redshift because, at these redshifts, optical bandpasses probe bluer parts of a galaxy's spectrum, where the star-formation properties can drastically alter a galaxy's luminosity. \newline\indent It is also possible that in the case of large cluster surveys, B$_{gc,K}$ may prove to be as valuable for predicting the cluster halo mass as the cluster velocity dispersion or X-ray temperature. Determining the cluster mass from either the line-of-sight velocity dispersion or X-ray temperature requires assumptions about the dynamical state of the cluster. Dynamical masses require that the cluster is in virial equilibrium and that the shape of the velocity ellipsoid is known (generally it is assumed to be isotropic). X-ray masses require the assumption of hydrostatic equilibrium and spherical symmetry. They also require a correction if the cluster has a cooling flow. In cases where these assumptions do not apply (such as cluster-cluster mergers, or a collapsing system in the process of forming) the masses of the clusters may be poorly estimated, and these clusters will contaminate the measurement of the cluster mass function. The number of catastrophic outliers can have serious consequences for the measured cosmological parameters because they are particularly sensitive to the number of rare, massive clusters. On the other hand, if clusters which are unrelaxed or currently undergoing a merger do not have the K-band light of their galaxies altered significantly, then using K-band richness as a mass indicator could potentially be a robust method for determining the masses of clusters that are not in dynamical or hydrostatic equilibrium.
\newline\indent One way to test this hypothesis is to use weak-lensing to measure the mass of clusters and compare those masses to B$_{gc}$, and B$_{gc,K}$. Weak-lensing does not require any assumptions about the dynamical state of the clusters and therefore it is an unbiased (although statistically noisy) measure of the cluster mass. Weak lensing masses for a subsample of the CNOC1 clusters have been measured and will be compared to B$_{gc,K}$ in a future paper. \section{The K band Mass-to-Light Ratio} Recent studies of the K-band M/L ratio in local clusters (Lin et al. 2003, L04, Rines et al. 2004, Ramella et al. 2004) have shown that the K-band cluster M/L ratio is an increasing function of cluster mass (although Kochanek et al. 2003 find it is roughly constant with mass). These studies have also found that the M/L ratio is a slowly decreasing function of radius, with the integrated M/L ratio at R$_{200}$ (M$_{200}$/L$_{200,K}$) being $\sim$ 15\% smaller than the M/L ratio at R$_{500}$ (e.g., L04). Here we present the first K-band M/L ratios of massive, intermediate-redshift clusters. \newline\indent We use the k- and evolution-corrected L$_{200,K}$ values to compute M$_{200}$/L$_{200,K}$ and therefore it is the M$_{200}$/L$_{200,K}$ ratio of clusters corrected to $z$ = 0. The values are listed in Table 3 and the errors are computed by propagating the M$_{200}$ and L$_{200}$ errors in quadrature. In Figure 6 we plot Log(M$_{200}$/L$_{200,K}$) vs Log(M$_{200}$). There is a clear correlation of M/L with M$_{200}$ in the CNOC1 clusters. The Spearman rank-correlation coefficient for these data is 0.60, which implies a probability of 0.017 that the variables are uncorrelated. The solid line in the plot is the best-fit linear relation with MS1455+22 excluded: \begin{equation} Log(M_{200}/L_{200,K}) = (0.57 \pm 0.13) Log(M_{200}) - (6.92 \pm 2.04). \end{equation} The fit has a reduced-$\chi^2$ of 0.964 and implies that M/L $\propto$ M$^{0.57 \pm 0.13}$. Interestingly, this slope is about a factor of 3 steeper than the M/L $\propto$ M$^{0.17 \pm 0.11}$ that would be inferred using the Log(M$_{200}$) - Log(L$_{200,K}$) relation (Eqn 5), since L$_{200,K}$ $\propto$ M$_{200}^{0.83 \pm 0.11}$ implies M/L $\propto$ M$_{200}^{1-0.83}$; in that relation the variables are less directly correlated (they still both depend on R$_{200}$). As a comparison, the inferred relation is plotted as the dashed line in Figure 6. The inferred relation, M/L $\propto$ M$^{0.17 \pm 0.11}$, agrees well with the relation from local clusters where M/L $\propto$ M$^{\alpha}$ with $\alpha$ = 0.26 $\pm$ 0.04 (L04), and $\alpha$ = 0.31 $\pm$ 0.09 (Rines et al. 2004), but does not agree well with the value for local groups, $\alpha$ = 0.56 $\pm$ 0.05 (Ramella et al. 2004). The Ramella et al. relation is similar to the correlation obtained by fitting Log(M/L) vs. Log(M$_{200}$) directly. \newline\indent It is likely that the inconsistent slopes from our own data, as well as between the local cluster and group M/L vs. M$_{200}$ relations, are primarily caused by the large scatter in the M$_{200}$ - L$_{200,K}$ scaling relations and the difficulties associated with fitting data that have a larger scatter than is accounted for by the error bars. It is probably not caused by real differences in the M/L vs. M$_{200}$ scaling relations for the different cluster samples. In our own data the M$_{200}$ - L$_{200,K}$ relation has a large reduced-$\chi^2$ (2.65). 
Examination of Figure 4 suggests that the inflated reduced-$\chi^2$ is not caused by a linear fit being the incorrect model for these data, but is caused by a handful of significant outliers to the relation. \newline\indent Another consideration is that the intrinsic correlation between M/L and M$_{200}$ (mass is in both parameters) can contribute to the discrepancy in slope. The Ramella et al. (2004) masses and our own masses are determined from the cluster velocity dispersion ($\sigma_{1}$) and equation 2, where M$_{200}$ $\propto$ $\sigma_{1}^{3}$. The L04 masses are determined from T$_{x}$ using the fit from Finoguenov et al. (2001) where M$_{200}$ $\propto$ T$_{x}^{1.58}$. Given that M$_{200}$ depends on $\sigma_{1}^{3}$ but only on T$_{x}^{1.58}$, catastrophic errors in $\sigma_{1}$ will translate to larger errors in M$_{200}$ than catastrophic errors in T$_{x}$ will. Consequently, given that the Log(M/L) vs. Log(M$_{200}$) slope is $<$ 1, clusters which have their velocity dispersions incorrectly measured (and not properly accounted for by the errors, such as in the case of non-virialization) will steepen the slope of the M/L vs. M$_{200}$ correlation because the mass term is in both parameters. \newline\indent As a test, we remove the highest and lowest mass CNOC1 clusters (MS0440+02 and MS0451-03, both of which are significant outliers in the M$_{200}$ - L$_{200}$ relation) and refit. We find a slope of $\alpha$ = 0.44 $\pm$ 0.15, which is more consistent with the $\alpha$ = 0.17 $\pm$ 0.11 inferred directly from the M$_{200}$ vs. L$_{200,K}$ relation. Unfortunately, this problem of outliers makes determining a robust slope for the M/L vs. M$_{200}$ correlation difficult. The data are less directly correlated in the case of Log(M$_{200}$) vs. Log(L$_{200,K}$), and therefore we return to that relation and adopt M$_{200}$/L$_{200,K}$ $\propto$ M$^{0.17 \pm 0.11}$ as the best slope for the M/L vs. M$_{200}$ scaling relation for the CNOC1 clusters. \newline\indent This slope, where M/L increases with increasing M$_{200}$, supports the scenario proposed by both L04 and Rines et al. (2004), where either the star-formation efficiency of cluster galaxies is a decreasing function of cluster mass, or the amount of intra-cluster light is an increasing function of cluster mass. Interestingly, the slope of this scaling relation for the CNOC1 clusters is consistent with the slope in local clusters, and this shows that the M/L vs. M$_{200}$ correlation is in place by at least $z \sim$ 0.3, roughly 4 Gyr in lookback time. \begin{figure*} \epsscale{0.8} \plotone{f6.eps} \caption{\footnotesize Plot of Log(M$_{200}$/L$_{200,K}$) vs Log(M$_{200}$) for the CNOC1 clusters. The solid line is the best-fit relation. The dashed line is the relation that is inferred from the Log(M$_{200}$) - Log(L$_{200,K}$) fit. MS1455+22 is plotted as an open triangle and is excluded in the fit.} \end{figure*} \section{$\Omega_{m}$ from the Oort Technique} The original purpose of the CNOC1 project was to measure the cosmological density parameter $\Omega_{m,0}$ $\equiv$ $\rho_{o}$/$\rho_{c}$ using the Oort (1958) technique. The cluster M/L ratio, divided by the M/L ratio for closure (M/L)$_{c}$ $\equiv$ $\rho_{c}$/$j$, where $j$ is the field luminosity density, provides a direct measure of $\Omega_{m,0}$ which is independent of the Hubble parameter. 
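\newline\indent The arithmetic of this technique is compact enough to sketch directly. The short Python fragment below is an illustration added here, not part of the original analysis: it encodes $\Omega_{m,0}$ = (M/L)/(M/L)$_{c}$ with (M/L)$_{c}$ = $\rho_{c}$/$j$, using the standard critical density per $h^{2}$ and leaving the luminosity density $j$ as an input whose value and unit conventions must match those of the adopted M/L ratio. The last line simply inverts the relation, using the mean cluster M/L and the $\Omega_{m,0}$ quoted later in this section, to show the implied closure M/L.
\begin{verbatim}
def omega_m(ml_cluster, ml_closure):
    # Oort technique: Omega_m0 = (M/L)_cluster / (M/L)_closure
    return ml_cluster / ml_closure

def ml_closure(j, rho_crit=2.775e11):
    # (M/L)_closure = rho_c / j; rho_crit in h^2 M_sun Mpc^-3, j in matching
    # units, so the h dependence cancels in the Omega_m ratio
    return rho_crit / j

# Implied closure M/L from the numbers quoted in the text
# (mean cluster M/L = 61.2 M_sun/L_sun and Omega_m0 = 0.22):
print(61.2 / 0.22)   # ~278 M_sun/L_sun
\end{verbatim}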
There has been some concern with this method because recent numerical simulations suggest that light is a biased tracer of dark matter, and that the M/L ratio of any region in the universe which contains galaxies automatically provides an underestimate of the universal M/L ratio, and an underestimate of $\Omega_{m,0}$ (e.g., Ostriker et al. 2003). However, this is only a significant concern for low-density regions in the universe (i.e., the group scale or less). Unlike those lower density regions, rich clusters are assembled from regions of $>$ 10 Mpc across (Carlberg et al. 1997) and therefore, provided that observations extend to the entire volume of the cluster (i.e., R $\sim$ R$_{200}$), clusters should contain a sufficient collapsed volume to provide a representative sample of the universal M/L ratio (e.g. Carlberg et al. 1996, Carlberg et al. 1997). This is yet to be verified in numerical simulations as, until recently, they lacked enough volume to contain rich clusters. The CNOC1 project was designed specifically with this biasing concern in mind and therefore is a wide-field study of a sample of rich clusters. \newline\indent Although biasing is not a major concern for the cluster technique, we know that the stellar populations of galaxies in the cluster and field environments are different (e.g., Ellingson et al. 2001, Balogh et al. 1999, Poggianti et al. 1999, Dressler et al. 1997) and without a good understanding on how these differences affect the integrated luminosity of these regions, there arises the potential for a systematic error in $\Omega_{m,0}$. \newline\indent Using the $r$-band data for the CNOC1 clusters, Carlberg et al. (1997) showed that $\Omega_{m,0}$ = 0.19 $\pm$ 0.06 (random) $\pm$ 0.04 (systematic). In their analysis they found that the $r$-band cluster galaxies were on average 0.11 $\pm$ 0.05 mag fainter than field galaxies at this redshift and hence they decreased the cluster M/L ratio accordingly. Given the need for this correction, it is advantageous to perform the same analysis with infrared data, because infrared light is a better tracer of total stellar mass than optical light. The infrared luminosity of galaxies does not depend strongly on the current star-formation properties of the stellar population and consequently, little or no correction is required to account for the varying star-formation properties of the field and cluster environment. The only potential correction required would be if the cluster environment played a role in the star-formation {\it efficiency} of galaxies. A differing star-formation efficiency could manifest itself in two ways: 1) the stellar mass may be distributed differently in the cluster/field environment (i.e., the {\it shape} of the field and cluster LFs may be different); or 2) baryons may be converted into stars at a different rate than in the field (i.e., the ratio of the stellar mass to dark matter mass could be different between these environments). \newline\indent In terms of problem 1), Paper I showed that the cluster K$^{*}$ was brighter than, but consistent with the field value ($\Delta$K$^{*}$ = 0.25 $\pm$ 0.26 mag). Paper I also showed that $\alpha$ from the CNOC1 cluster LF was the same as $\alpha$ for local clusters as well as the local field. Those results agreed well with local measurements which showed the cluster K$^{*}$ and $\alpha$ are roughly the same as the field (although we note that both L04 and Rines et al. 2004 find that the cluster K$^{*}$ is $\sim$ 0.2 mag brighter at the 2$\sigma$ level). 
Given that both K$^{*}$ and $\alpha$ for the cluster LF are similar to the field LF, the distribution of galaxies in terms of luminosity (and by corollary stellar mass) should be similar in both environments. \newline\indent In terms of problem 2), it is important to note that the cluster M/L ratio continues to be an increasing function of cluster mass, even for objects as massive as the CNOC1 clusters. This shows that the overall rate of converting baryons to stars is a function of environment. The rich CNOC1 clusters have a K-band M/L ratio which is about an order of magnitude larger than the M/L ratio of groups (e.g., L04, Ramella et al. 2004), and in turn, the group M/L ratio is about an order of magnitude larger than for individual galaxies (e.g., Brinchmann \& Ellis 2000). Remarkably, despite the fact that rich clusters are two orders of magnitude less efficient in converting baryons to stars than individual galaxies, their continually increasing M/L ratio with increasing mass suggests the possibility that they may still be slightly biased tracers of stellar mass in the universe. Unless the M/L vs. M$_{200}$ relation flattens at the scale of $\sim$ 10$^{15}$ M$_{\odot}$, the M/L ratio of clusters could be lower than the universal value. Unfortunately, there are no data for the K-band M/L ratio of superclusters to determine if the M/L ratio flattens at this size scale. \newline\indent The mean M/L ratio for the clusters (corrected for passive evolution to $z = 0$) is 61.2 $\pm$ 1.8 M$_{\odot}$/L$_{\odot}$. Combining this with the luminosity density of the local universe measured by Kochanek et al. (2001) we find $\Omega_{m,0}$ = 0.22 $\pm$ 0.02 (random). This result is in good agreement with combined constraints from WMAP third-year results (WMAP3) and various other techniques (Spergel et al. 2006). The combined WMAP3 constraints on $\Omega_{m,0}$ range from a low-density of $\Omega_{m,0}$ = 0.226$^{+0.030}_{-0.041}$ to a high-density of $\Omega_{m,0}$ = 0.299$^{+0.019}_{-0.025}$. Interestingly, the measurement from the clusters is closest to the lowest density of the preferred values from combined constraints. Previous studies using K-band M/L ratios of local clusters have also found values of $\Omega_{m,0}$ on the low-end of preferred values. \newline\indent Both Rines et al. (2004) and Lin et al. (2003) measured $\Omega_{m,0}$ using their local cluster samples. Rines et al. found that the result depended strongly on the location in the clusters where the M/L ratio was measured. The cluster infall region had a lower M/L ratio than the virialized region and therefore they found $\Omega_{m,0}$ = 0.18 $\pm$ 0.04 from the virial region and $\Omega_{m,0}$ = 0.13 $\pm$ 0.03 from the infall region. Lin et al. (2003) performed the same analysis with a larger sample of 27 clusters. They found $\Omega_{m,0}$ = 0.17 $\pm$ 0.02 using the mean M/L ratio of all clusters, and $\Omega_{m,0}$ = 0.19 $\pm$ 0.03 using a subsample of their most massive clusters. \newline\indent Our analysis is in good agreement with both these studies, as well as the $r$-band measurement from the same clusters by Carlberg et al. (1997). Our results further confirm that $\Omega_{m,0}$ from clusters agrees well with concordance values but tends to be on the low-density end of preferred values. It is possible that the cluster measurements prefer lower $\Omega_{m,0}$ because their M/L ratios are still lower than the universal value. 
A measurement of the K-band M/L ratio on the supercluster scale would be a useful way to test this hypothesis. Regardless, the fact that the cluster $\Omega_{m,0}$ still agrees well with recent measurements of $\Omega_{m,0}$ using a variety of independent techniques, and combinations thereof, suggests that the cluster M/L ratio is unlikely to be significantly lower than the universal value. \section{Summary} \indent We have presented the K-band scaling relations for 15 moderate-redshift clusters with extensive optical spectroscopy and wide-field K-band imaging. The cluster HON is well-correlated with M$_{500}$ and T$_{x}$; however, the intrinsic scatter in the scaling relations at $z \sim$ 0.3 is fairly large (37\% and 46\% respectively). Comparing to local clusters we find that the HON is consistent with no evolution at fixed cluster mass out to $z \sim$ 0.5. This result, in tandem with the purely passive evolution of the cluster LF, and the fact that the cluster LF does not depend on mass, suggests that if the significant tidal stripping of galaxy halos in clusters seen in numerical simulations occurs, the stellar mass within the halo is tightly bound and remains intact. This interpretation is further supported by recent SPH simulations which show that the baryonic matter is amongst the most tightly bound mass within the halo. \newline\indent Our data also show that both L$_{200,K}$ and B$_{gc,K}$ are well-correlated with the cluster dynamical mass and X-ray temperature. The slope of the L$_{200}$ - M$_{200}$ relation at $z \sim$ 0.3 (L$_{200,K}$ $\propto$ M$_{200}^{0.83 \pm 0.11}$) is consistent with the slope measured for local clusters, suggesting that the cluster scaling relations are in place by at least $z \sim$ 0.3. The good correlation and relatively small scatter (intrinsic + measurement) in the B$_{gc,K}$ - M$_{200}$ relation (35\%) and B$_{gc,K}$ - T$_{x}$ relation (25\%) suggest that B$_{gc,K}$ would be a useful mass indicator for upcoming cluster cosmology projects. \newline\indent Our results from the M/L ratios of the clusters show that the moderate-redshift CNOC1 clusters are very similar to local clusters in that the M/L ratio is a slowly increasing function of cluster mass. By comparing the cluster M/L ratio to the local luminosity density we estimate that $\Omega_{m,0}$ = 0.22 $\pm$ 0.02, which agrees well with the original analysis of the CNOC1 clusters using optical data as well as with the local estimates using K-band photometry. The measured value is also in good agreement with recent WMAP third year joint constraints; however, similar to previous cluster studies, the value is on the low-density end of preferred values. \newline\indent The combined analysis of this paper and Paper I presents a relatively simple picture of the evolution of the near-infrared properties of clusters from $z = 0$ to $z \sim$ 0.3. The correlation between the cluster near-infrared properties (i.e., L$_{200,K}$, N$_{500}$, and M$_{200}$/L$_{200,K}$) and M$_{200}$ shows no significant change between $z = 0$ and $z \sim$ 0.3. Furthermore, the scatter in these scaling relations is similar at both redshifts. The cluster LF shows only passive evolution between $z = 0$ and $z \sim$ 0.3 and does not depend on cluster mass. In addition to this, the (small) difference between the field and cluster LF at $z = 0$ is unchanged at $z \sim$ 0.3. 
These results all show that there is little evolution in the bulk of the stellar mass in cluster galaxies over this redshift range besides the passive aging of the stellar populations. \newline\indent Specifically, it appears that 1) the significant tidal stripping of halos in high-density regions seen in N-body simulations does not affect the stellar mass contained in galaxies nor change the cluster scaling relations; 2) given passive evolution of the LF, and no evolution in the HON, mergers and disruptions are unlikely to play a significant role in cluster galaxy evolution at $z <$ 0.3; and 3) that the changes seen in the cluster stellar populations over the redshift interval $z = 0$ to $z \sim$ 0.3 (i.e., the increase in blue-fraction/star-formation properties, and evolution of the morphology-density relation) are ``superficial'' - they result from changes in a small part of a galaxy's stellar mass, while the majority of the stellar mass is already in place and passively evolving. \newline\indent Overall, it appears that the bulk of the stellar mass in cluster galaxies is in place, and evolving passively with few mergers or disruptions between $z \sim$ 0.3 and $z = 0$, and that the cluster scaling relations are produced by processes that occur at higher redshifts. \acknowledgements We thank the anonymous referee for a careful report which improved the clarity of this paper. A.M. would like to acknowledge useful conversations and help from David Gilbank and Kris Blindert which significantly improved the quality of the data analysis. A. M. acknowledges financial support from the Natural Sciences and Engineering Research Council (NSERC) in the form of PGSA and PGSD2 scholarships. The research of H.K.C.Y. is supported by grants from the Canada Research Chair Program, NSERC and the University of Toronto. P.H. acknowledges support from NSERC. \include{tab1} \include{tab2}
\section{Introduction} Multiphase flows of two immiscible fluids are encountered in many industrial applications ranging from small scale droplet interactions \cite{Khatavkar}, phase separation and microstructural evolution in materials \cite{Fried}, bubbly and slug flows in oil and natural gas pipelines to free-surface ocean waves in the marine environment \cite{Sorensen, Chakrabarti}. The effects of multiphase flows are crucial from industrial point of view since they directly affect the design and optimization of the structures subjected to the interfacial flow conditions. Therefore it is essential to understand these effects physically and numerically to have improved structural designs and safer operational conditions. Of particular interest to the present work is the robust and efficient two-phase modeling of free-surface ocean waves to predict the air-water interface and the hydrodynamic forces on submerged offshore structures. The numerical treatment of two-phase flow involving immiscible fluids poses certain challenges owing to the physical complexity in the representation and evolution of the phase interface \cite{Multiphase_Tryg}. The representation of the continuum interface between the two phases can be considered by either interface tracking or interface capturing techniques. For interface tracking, a boundary between the two fluid sub-domains is explicitly tracked, for example, front tracking \cite{Unverdi} and arbitrary Lagrangian-Eulerian \cite{donea} methods. While these methods can accurately locate the interface position by tracking the moving boundary or markers on the interface, they may lead to difficulties for relatively larger interface motion and topological changes. During the topological changes, re-meshing can be computationally expensive for interface tracking methods in three-dimensions. In interface capturing, no explicit representation of the interface is considered, instead the interface is represented implicitly using a field function throughout the computational domain on an Eulerian grid, such as level-set, volume-of-fluid and phase-field approaches. Through an implicit evolution of the interface field function, topological changes such as merging and breaking of interfaces can be naturally handled by interface capturing methods. During the evolution of interface, the discontinuous nature of physical quantities such as density, viscosity and pressure can also pose difficulties during the numerical treatment of the interface. In particular, these discontinuities in the properties can lead to unphysical and spurious oscillations in the solution which can lead to unbounded behavior. The background fixed mesh has to be sufficiently resolved to capture these discontinuities leading to high computational cost. Furthermore, surface tension or capillary forces along the interface have to be accurately modeled. Several models have been discussed in the literature to evaluate the surface tension, one of which is the continuum surface force method \cite{Brackbill}. The conservation of mass is another challenge which is crucial to obtain accurate solution of the multiphase flow. Among the types of interface capturing methods, the level-set and volume-of-fluid (VOF) are the most popular methods. The VOF method utilizes a volume of fluid function to extract the volume fraction of each fluid within every computational cell. 
While the VOF method conserves mass accurately for incompressible flows, the calculation of interface curvature and normal from volume fractions can be quite complex due to interface reconstruction \cite{Scardovelli, Unverdi, Rider}. In addition, the smearing of the interface by numerical diffusion causes additional difficulty. On the other hand, the level-set approach allows a non-smeared interface to be maintained by constructing a signed-distance function (level-set function) of a discretization point to the interface \cite{Sethian}. The zero-level-set of the distance function provides a sharp interface description and the curvature of the interface can be approximated with high accuracy. However, the level-set method does not conserve mass. The reasons behind the inability of the level-set method to conserve mass are the numerical dissipation introduced by the discretization and the re-initialization process used to keep the level-set function a signed distance function \cite{Lanhao}. Although methods including high-order discretization schemes \cite{Sussman_1}, improved re-initialization \cite{Sussman_1, Peng, Russo}, coupled particle tracking/level-set \cite{Enright_1, Enright_2, Koh} and coupled level-set/volume-of-fluid \cite{Sussman_2, Wang, Yang} have been proposed to deal with the mass conservation issue, they make the scheme more complex and computationally expensive. To improve the mass conservation property, a conservative level-set method was proposed in \cite{Olsson_1, Olsson_2} where the signed distance function was replaced with a hyperbolic tangent function. However, re-initialization was necessary to maintain the width of the hyperbolic tangent profile in this case. One of the recent improvements in the level-set approach is the XFEM, which utilizes the enrichment of shape functions of the elements near the interface \cite{Fries_1} to obtain a more accurate solution. Both the level-set and the volume-of-fluid techniques utilize some kind of geometric reconstruction for the modeling of the interface. This could be quite tedious to implement and extend to multi-dimensions for unstructured grids over complex geometries. Interface capturing for two-phase flows can be further classified under sharp-interface and diffuse-interface methods. In the sharp-interface method, the interface between the two phases is treated as infinitely thin, with physical properties such as density and viscosity having a bulk value up to the point of the discontinuity at the interface. The equations of motion are solved separately for the two domains and appropriate pressure- and velocity-jump boundary conditions are imposed to have localized interactions at the infinitely thin phase interface. In the diffuse-interface methods, however, a gradual and smooth variation of physical properties is assumed across the phase interface of finite thickness. The equations of motion are solved on the entire domain as a one-fluid formulation. The physical properties are a function of a conserved order parameter which is solved by minimizing a free energy functional based on thermodynamic arguments \cite{Anderson}. Similar to the VOF and level-set methods, the interface location is defined by the contour levels of the phase-field order parameter. Unlike the sharp-interface description, the phase-field formulation does not require the jump conditions at the interface to be satisfied, and there are rapid but smooth variations of the order parameter and other physical quantities in the thin interfacial region. 
The diffuse-interface description has been shown to approach the sharp-interface limit asymptotically \cite{Anderson_1}. The phase-field models originate from thermodynamically consistent theories of phase transitions via minimization of gradient energy across the diffuse interface. The phases are indicated by an order parameter $(\phi)$ (e.g., $\phi=1$ in one phase and $\phi=-1$ in another phase) which is solved over an Eulerian mesh and used to evolve the interface. As they fall under the diffuse-interface description, no conditions at the interface between the two phases are required to be satisfied. The surface tension and capillary effects can be modeled as a function of the phase-field order parameter depending on the minimization of the Ginzburg-Landau energy functional. Therefore, these methods do not require any re-initialization or reconstruction at the interface. Furthermore, the mass conservation property can be imparted in a relatively simple manner. Topological changes in the interface can be handled easily by the phase-field models, provided that the mesh is refined enough to capture these phenomena. Due to the mass conservation property and the simplicity of handling topological changes, we consider the phase-field technique for the physical modeling of multiphase systems. In the phase-field models, the interface is evolved by solving a transport equation written in the form of a gradient flow of the energy functional, either in the $L^2(\Omega)$ norm (Allen-Cahn (AC) equation \cite{Allen_cahn}) or in the $H^{-1}(\Omega)$ norm (Cahn-Hilliard (CH) equation \cite{Cahn_Hilliard}). The numerical challenges in solving these transport equations are mass conservation, inherent nonlinearities and the high order of the CH equation. While the CH equation conserves mass naturally, the conventional AC equation does not provide the mass conserving property. A method employing a global, time-dependent Lagrange multiplier has been suggested to impart this property to the AC equation in \cite{Rubinstein, Bronsard}. Mass conservation using local and nonlocal Lagrange multipliers was suggested in \cite{Brassel, Kim_2} since it was observed that small geometric features were not maintained by the equation when a global multiplier depending only on time was considered \cite{H_G_Lee}. We have implemented the latter strategy to satisfy the mass conservation in the present study. A review of the phase-field techniques to handle the nonlinearities in energy-stable schemes is given in \cite{Tierra}. The treatment of the nonlinear double-well potential function in the energy functional affects the energy stability property of the underlying discretization. The fourth-order nature of the CH equation is another challenge, which restricts its flexibility for generic unstructured meshes. Under the finite element variational framework, we cannot employ linear finite elements to discretize the equation because the higher order derivatives will be zero. The CH equation is thus written as a set of two coupled equations, making the system more complex and increasing the number of degrees of freedom. A comparison between the AC and CH equations has been carried out in \cite{Dongsun, Jeong}. A review of the phase-field models for multiphase flows can be found in \cite{Kim}. A second-order time accurate scheme for the CH equation was proposed under a mixed finite element framework in \cite{Gomez}. 
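For reference, the two gradient flows referred to above can be written side by side in their standard form, with a constant mobility $M$ in the Cahn-Hilliard case and $\mathcal{E}(\phi)$ the Ginzburg-Landau functional introduced in the next section:
\begin{align*}
\text{AC } (L^2 \text{ gradient flow}):&\quad \partial_t\phi = -\frac{\delta\mathcal{E}(\phi)}{\delta\phi} = \varepsilon^2\nabla^2\phi - F'(\phi),\\
\text{CH } (H^{-1}\text{ gradient flow}):&\quad \partial_t\phi = \nabla\cdot\Big( M\,\nabla \frac{\delta\mathcal{E}(\phi)}{\delta\phi} \Big) = \nabla\cdot\Big( M\,\nabla\big( -\varepsilon^2\nabla^2\phi + F'(\phi) \big) \Big),
\end{align*}
which makes explicit why the CH equation contains fourth-order spatial derivatives (and is therefore usually recast as two coupled second-order equations), whereas the AC equation remains second order.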
Since the present study deals with linear finite elements, solving the Allen-Cahn equation is more convenient and less computationally expensive, and it is thus chosen for the study. Some of the recent works which have used the conservative AC equation and coupled it with the Navier-Stokes equations are \cite{Zhang, Yang, Vasconcelos}. The Allen-Cahn phase-field equation mathematically represents a complex nonlinear convection-diffusion-reaction (CDR) equation. The solution of this equation can exhibit oscillations when the convection or reaction effects are dominant. The basic strategy of the finite difference and finite volume discretizations to deal with such instabilities is the classical upwinding technique. Although the algorithm is stable, it is only first-order accurate and leads to smearing of the solution. In the present article, our objective is to solve the Allen-Cahn equation under the variational framework while maintaining the boundedness property, mass conservation and energy stability in the underlying discretization. We develop the positivity preserving variational (PPV) technique \citep{PPV} to capture and reduce the oscillations near the interface and maintain boundedness in the solution on a relatively coarse mesh. We add a nonlinear stabilization term to the linear stabilized variational form to achieve the positivity property. The mass of the Allen-Cahn equation is conserved by enforcing a Lagrange multiplier term which has both global and local dependence on the solution. The locally dependent part of the multiplier term is written as a mid-point approximation to achieve the energy stability in the scheme. We theoretically prove the mass conservation and energy-stability properties of the proposed scheme. The novel features of the present variational scheme are the positivity preserving and energy-stability properties for solving the Allen-Cahn phase-field equation on a general unstructured finite element mesh. The simplicity of the present formulation makes it easier to implement on existing stabilized finite element codes. The PPV-based Allen-Cahn implementation is observed to be second-order accurate in space and more than first-order accurate in time through numerical experiments. We perform the implicit discretizations of the Navier-Stokes and the Allen-Cahn equations. The time-splitting allows us to decouple the incompressible Navier-Stokes and Allen-Cahn solvers at each time step, which provides an avenue for the staggered solution updates in a straightforward manner. The coupled multiphase solver has been demonstrated to handle high density and viscosity ratios with surface tension effects. We carry out standard benchmarks such as the Laplace-Young law and the sloshing tank problem to assess the coupled phase-field implementation for incompressible flows. While the Laplace-Young law establishes the ability of the formulation to handle high density ratios, the sloshing tank problem demonstrates the free-surface modeling capabilities of the scheme. Simulations of the dam break problem test the ability of the scheme to handle the topological changes of the interface in two and three dimensions. These numerical experiments allow us to quantify the accuracy of the phase-field scheme in capturing the interface location and to characterize its sensitivity to the model and discretization parameters. 
Finally, a practical problem of 3D wave-structure interaction of a vertical truncated cylinder of circular cross-section is simulated and the wave run-up is compared with the available results in the literature. The presented 3D finite-element based phase-field solver can be easily extended to complex geometries. The organization of the paper is as follows. Section \ref{proposed method} describes the positivity preserving method dealing with the variational formulation of the Allen-Cahn equation. The variational discretization of the Navier-Stokes equations is presented in Section \ref{section:NS} for the sake of completeness. The details about the coupling and the algorithm employed are discussed in Section \ref{imp_details}. Some numerical tests results assessing the effectiveness of the implementation are shown in Section \ref{tests}, leading to the application to the wave-structure interaction problem in Section \ref{RU_test}. Finally, we conclude the paper with some key results from the study in Section \ref{conclusion}. While Appendix A provides a proof for the discrete energy conservation for our Allen-Cahn phase-field formulation, the mass conservation is proven in Appendix B. \section{Positivity preserving variational scheme for the Allen-Cahn equation} \label{proposed method} Before proceeding to the variational formulation based on the positivity preserving scheme \cite{PPV}, we first review the Allen-Cahn phase-field equation for immiscible two-phase flows. The key idea behind the phase-field model is to replace the sharp interface region by thin transition layers via thermodynamic considerations and the conservation laws for the diffuse-interface description of different phases of fluids. The diffuse-interface technique recovers to the classical jump discontinuity conditions at the interface asymptotically \citep{Anderson_1}. \subsection{The Allen-Cahn equation} Consider a physical domain $\Omega(\boldsymbol{x},t)$ with spatial and temporal coordinates $\boldsymbol{x}$ and $t$, respectively. The phases of a binary phase mixture are indicated by an order parameter $\phi(\boldsymbol{x},t)$. Under the diffuse-interface description, the interface has a finite thickness and $\phi(\boldsymbol{x},t)$ varies gradually across the interface. The thickness of the diffuse interface layer is represented by $\varepsilon$. Under the thermodynamic arguments, the interface evolution and its dynamics are governed by the Ginzburg-Landau energy functional $\mathcal{E}(\phi)$, which can be expressed as \begin{align} \label{energy_functional} \mathcal{E}(\phi) = \int_\Omega \bigg( \frac{\varepsilon^2}{2}|\nabla\phi|^2 + F(\phi)\bigg) \mathrm{d}\Omega. \end{align} This energy functional consists of two components. The first term in Eq.~(\ref{energy_functional}) represents the interfacial energy which depends on the gradient of the order parameter. On the other hand, the second term is a double-well potential function which represents the free energy of mixing and depends on the local value of $\phi(\boldsymbol{x},t)$. The competition between these two components to minimize the energy gives rise to the evolution of the phase-field interface. It is known that the Allen-Cahn equation is the gradient flow of $\mathcal{E}(\phi)$ in the space $L^2(\phi)$ \begin{align} \partial_t\phi = - \bigg( \frac{\delta\mathcal{E}(\phi)}{\delta\phi} \bigg). 
\end{align} Here, $\partial_t\phi$ is the partial temporal derivative of the order parameter $\phi(\boldsymbol{x},t)$ and $F(\phi)$ is the double-well energy potential which is taken as $F(\phi) = \frac{1}{4}(\phi^2-1)^2$ in this study. Therefore, the two stable phases have $\phi=1$ and $\phi=-1$ for the energy potential with the interface over which $\phi$ varies gradually from $-1$ to $1$. To determine the profile of the interface at equilibrium, we minimize the energy functional by taking its variational derivative and equating it to zero. The variational derivative of the energy functional with respect to $\phi$ is called the chemical potential, which is given by \begin{align} \label{potential} \frac{\delta\mathcal{E}}{\delta\phi} = -\varepsilon^2\nabla^2\phi + F'(\phi). \end{align} The evolution of $\phi$ with time tends to minimize the energy functional where the diffusive effects of $\nabla^2\phi$ and the reactive effects of $F'(\phi)$ compete with each other. The profile of the interface at equilibrium is obtained by finding the roots of Eq.~(\ref{potential}) as: \begin{align} \label{eqm_profile} \phi(z) = \mathrm{tanh}\bigg( \frac{z}{\sqrt{2}\varepsilon} \bigg), \end{align} where $z$ is the coordinate normal to the interface. The equilibrium interface thickness, denoted by $\varepsilon_\mathrm{eqm}$, is the distance over which $\phi$ varies from $-0.9$ to $0.9$, which can be evaluated as $2\sqrt{2}\varepsilon\mathrm{tanh}^{-1}(0.9) = 4.164\varepsilon$. In the present study, we consider the convective form of the Allen-Cahn equation with a Lagrange multiplier to conserve mass. For improved capturing of the physical features in the solution, we employ the modified AC equation with both local and global multiplier terms: \begin{equation} \left. \begin{aligned} \label{CAC} \partial_t\phi + \boldsymbol{u}\cdot\nabla\phi - \varepsilon^2\nabla^2\phi + F'(\phi) - \beta(t)\sqrt{F(\phi)} = 0,\ \ &\mathrm{on}\ (\boldsymbol{x},t)\in \Omega,\qquad \\ \boldsymbol{u}|_{\Gamma} = 0,\ \ &\mathrm{on}\ \boldsymbol{x}\in\Gamma, \\ \frac{\partial \phi}{\partial n}\bigg|_{\Gamma} = \mathbf{n}\cdot\nabla\phi = 0,\ \ &\mathrm{on}\ \boldsymbol{x}\in\Gamma,\\ \phi |_{t=0} = \phi_0, \ \ &\mathrm{on}\ \boldsymbol{x}\in\Omega, \end{aligned} \right\} \end{equation} where $\boldsymbol{u}$ is the convection velocity and $\mathbf{n}$ is the outward normal to the boundary $\Gamma$. $F'(\phi)$ is the derivative of $F(\phi)$ with respect to $\phi$. The third equation in Eq.~(\ref{CAC}) is the homogeneous Neumann boundary condition on $\phi$. The parameter $\beta(t)$ is the time dependent part of the Lagrange multiplier which can be derived using the incompressible flow condition of divergence-free velocity and the given boundary conditions as \begin{align} \label{eqn:beta} \beta(t) = \frac{\int_\Omega F'(\phi)\mathrm{d}\Omega}{\int_\Omega \sqrt{F(\phi)}\mathrm{d}\Omega}. \end{align} The Lagrange multiplier is written in such a way that \begin{align} \label{K_cons} \int_\Omega K(\phi) \mathrm{d}\Omega =\ \mathrm{constant}, \end{align} where $K(\phi) = 0.5(\phi^3/3 - \phi)$. Our aim here is to solve Eq.~(\ref{CAC}) while preserving positivity and boundedness in the solution. This equation will later be coupled with the incompressible Navier-Stokes equations for two-phase flow modeling. In the next subsection, we present the variational formulation to solve the AC equation using the PPV method. 
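As a small illustration of these ingredients, the Python fragment below (not part of the formulation itself) samples the one-dimensional equilibrium profile of Eq.~(\ref{eqm_profile}) on an arbitrary grid, evaluates the equilibrium thickness $4.164\varepsilon$, and approximates the global multiplier $\beta(t)$ of Eq.~(\ref{eqn:beta}) by simple quadrature; the grid, domain size and interface thickness are illustrative choices only. For the antisymmetric equilibrium profile the multiplier is essentially zero, as expected.
\begin{verbatim}
import numpy as np

eps = 0.05                        # interface thickness parameter (illustrative)
z = np.linspace(-1.0, 1.0, 4001)  # 1D sampling of the interface-normal coordinate
dz = z[1] - z[0]

# Equilibrium profile phi(z) = tanh(z / (sqrt(2) eps)) and its -0.9..0.9 thickness
phi = np.tanh(z / (np.sqrt(2.0) * eps))
print("eqm thickness:", 2.0 * np.sqrt(2.0) * np.arctanh(0.9) * eps)  # = 4.164*eps

# Global multiplier beta(t) = int F'(phi) dOmega / int sqrt(F(phi)) dOmega
F_prime = phi**3 - phi               # derivative of F(phi) = 0.25*(phi^2 - 1)^2
sqrt_F = 0.5 * np.abs(phi**2 - 1.0)  # sqrt(F(phi))
beta = (F_prime.sum() * dz) / (sqrt_F.sum() * dz)
print("beta:", beta)  # ~0: the equilibrium profile needs no mass correction
\end{verbatim}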
\subsection{A new positivity preserving variational formulation} The Allen-Cahn based phase-field equation can be expressed as a convection-diffusion-reaction equation. Variational discretization based on Galerkin method of the equation leads to spurious oscillations in the solution when the convection or reaction effects are dominant \citep{PPV}. Although linear stabilization methods such as Streamline-Upwind Petrov-Galerkin (SUPG) and Galerkin/least-squares (GLS) can reduce spurious oscillations, the solution has oscillatory behavior near the region of high gradients. The solution thus obtained can affect the physical results and lead to wrong prediction of the underlying phenomenon being studied. With regard to phase-field modeling of two-phase flows, the interface between the two fluids is represented by the order parameter $\phi$ which is solved by the AC equation. The variable $\phi$ is also used to interpolate the physical properties of the fluid phases such as density and viscosity, which should always be positive. Oscillations in $\phi$ can lead to unbounded values which can cause the density or viscosity to be negative, thus producing unstable or unphysical results. The proposed PPV-based formulation \cite{PPV} reduces these oscillations in the solution near the region of high gradients by enforcing the positivity property of the underlying element-level matrix. Before proceeding to the presentation of the spatial PPV-based variational formulation, we first discretize the AC-based phase-field equation in time via variational integration. \subsubsection{Temporal discretization} It is known that variational time integrators have the energy conserving property in comparison to Runge-Kutta time integration \cite{VTI}. To enforce this property, we employ the generalized-$\alpha$ time integration technique \cite{Gen_alpha,Jansen} to discretize the AC equation (Eq.~\ref{CAC}) in time domain. Generalized-$\alpha$ method can be unconditionally stable and second-order accurate for linear problems \cite{Gen_alpha}. The scheme enables user-controlled high frequency damping, which is desirable for a coarser discretization in space and time. This is achieved by specifying a single parameter called the spectral radius $\rho_{\infty}$. This algorithm dampens the spurious high frequency responses, but retains the second-order accuracy. Let $\partial_t{\phi}^{\mathrm{n}+\alpha_\mathrm{m}}$ be the derivative of $\phi$ with respect to time at $t^{\mathrm{n}+\alpha_\mathrm{m}}$. 
The statement for the time discretized AC-based phase-field equation can be written as: \begin{align} G(\partial_t{\phi}^{\mathrm{n}+\alpha_\mathrm{m}}, \phi^{\mathrm{n}+\alpha}) = \partial_t{\phi}^{\mathrm{n}+\alpha_\mathrm{m}} + \boldsymbol{u}\cdot\nabla\phi^{\mathrm{n}+\alpha} &- \varepsilon^2\nabla^2\phi^{\mathrm{n}+\alpha} + F'(\phi) - \beta(t)\sqrt{F(\phi)} = 0,\\ \phi^\mathrm{n+1} &= \phi^\mathrm{n} + \Delta t\partial_t{\phi}^\mathrm{n} + \gamma\Delta t(\partial_t{\phi}^\mathrm{n+1} - \partial_t{\phi}^\mathrm{n}), \label{app_1} \\ \partial_t{\phi}^{\mathrm{n}+\alpha_\mathrm{m}} &= \partial_t{\phi}^\mathrm{n} + \alpha_\mathrm{m} (\partial_t{\phi}^\mathrm{n+1} - \partial_t{\phi}^\mathrm{n}),\label{app_2} \\ \phi^{\mathrm{n}+\alpha} &= \phi^\mathrm{n} + \alpha (\phi^\mathrm{n+1} - \phi^\mathrm{n}), \label{npalpha} \end{align} where $\Delta t$ is the time step size, $\alpha_\mathrm{m}$, $\alpha$ and $\gamma$ are the generalized-$\alpha$ parameters defined as: \begin{align} \alpha_\mathrm{m} = \frac{1}{2}\bigg(\frac{3-\rho_{\infty}}{1+\rho_{\infty}}\bigg),\qquad \alpha = \frac{1}{1+\rho_{\infty}},\qquad \gamma = \frac{1}{2} + \alpha_\mathrm{m} - \alpha. \end{align} Here, the energy potential is $F(\phi) = \frac{1}{4}(\phi^2-1)^2$ which is the double-well potential with equal well-depth. For imparting the energy stability property, $F'(\phi)$ is written as \cite{Du} \begin{align} \label{F_phi} F'(\phi) =& \frac{F(\phi^\mathrm{n+1}) - F(\phi^\mathrm{n})}{\phi^\mathrm{n+1} - \phi^\mathrm{n}}. \end{align} Let $K'(\phi) = \sqrt{F(\phi)} = 0.5(\phi^2 - 1)$ be written as \begin{align} \label{K_phi} \Aboxed{K'(\phi) &= \sqrt{F(\phi)} = \frac{K(\phi^\mathrm{n+1}) - K(\phi^\mathrm{n})}{\phi^\mathrm{n+1} - \phi^\mathrm{n}}} \end{align} The expressions in Eqs.~(\ref{F_phi}) and (\ref{K_phi}) are formulated to provide the energy stability property to the variational scheme. Considering the mid-point approximation of the spatial part of the Lagrange multiplier helps in simplifying the expressions in the derivation of the discrete energy law leading to an energy-stable scheme. A detailed derivation of discrete energy law has been presented in \ref{DEL}. 
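For reference, the generalized-$\alpha$ parameters above are fully determined by the single user input $\rho_{\infty}$. The small helper below (an illustration, not code from the present implementation) evaluates them and prints the two limiting cases.
\begin{verbatim}
def gen_alpha_params(rho_inf):
    # alpha_m, alpha and gamma from the spectral radius rho_inf in [0, 1]
    alpha_m = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    alpha = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + alpha_m - alpha
    return alpha_m, alpha, gamma

print(gen_alpha_params(1.0))  # rho_inf = 1: no high-frequency damping -> (0.5, 0.5, 0.5)
print(gen_alpha_params(0.0))  # rho_inf = 0: maximal damping           -> (1.5, 1.0, 1.0)
\end{verbatim}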
Using Eq.~(\ref{npalpha}) to replace the expression for $\phi^\mathrm{n+1}$ in $F'(\phi)$ and $\sqrt{F(\phi)}$ and rearranging the terms, the Allen-Cahn equation can be written in the form of convection-diffusion-reaction equation as follows: \begin{align} G(\partial_t{\phi}^{\mathrm{n}+\alpha_\mathrm{m}}, \phi^{\mathrm{n}+\alpha}) = \partial_t{\phi}^{\mathrm{n}+\alpha_\mathrm{m}} + \boldsymbol{u}\cdot\nabla\phi^{\mathrm{n}+\alpha} - k\nabla^2\phi^{\mathrm{n}+\alpha} + s\phi^{\mathrm{n}+\alpha} - f = 0, \end{align} where \begin{align} \mathrm{Convection\ velocity} &= \boldsymbol{u},\\ \mathrm{Diffusion\ coefficient} &= k = \varepsilon^2,\\ \mathrm{Reaction\ coefficient} &= s = \frac{1}{4}\bigg[ \frac{(\phi^\mathrm{n+\alpha})^2}{\alpha^3} - \bigg(\frac{3}{\alpha^3} - \frac{4}{\alpha^2}\bigg)\phi^\mathrm{n+\alpha}\phi^\mathrm{n} + \bigg( \frac{3}{\alpha^3} - \frac{8}{\alpha^2} + \frac{6}{\alpha} \bigg) (\phi^\mathrm{n})^2 - \frac{2}{\alpha} \bigg]\nonumber \\ &- \frac{\beta(t)}{2}\bigg[ \frac{\phi^\mathrm{n+\alpha}}{3\alpha^2} + \frac{1}{3}\bigg( -\frac{2}{\alpha^2} + \frac{3}{\alpha} \bigg)\phi^\mathrm{n} \bigg],\\ \mathrm{Source\ term} &= f = -\frac{1}{4}\bigg[ \bigg(-\frac{1}{\alpha^3} +\frac{4}{\alpha^2} - \frac{6}{\alpha} + 4 \bigg)(\phi^\mathrm{n})^3 + \bigg( \frac{2}{\alpha} - 4\bigg)\phi^\mathrm{n} \bigg] \nonumber \\ &+ \frac{\beta(t)}{2}\bigg[ \frac{1}{3}\bigg( \frac{1}{\alpha^2} - \frac{3}{\alpha} + 3 \bigg)(\phi^\mathrm{n})^2 - 1\bigg]. \end{align} \subsubsection{Spatial discretization} Now, we formulate the positivity preserving variational form of the AC-based phase-field equation. Consider the spatial discretization of the computational domain $\Omega$ into $\mathrm{n_{el}}$ number of elements such that $\Omega = \cup_\mathrm{e=1}^\mathrm{n_{el}} \Omega^\mathrm{e}$ and $\emptyset = \cap_\mathrm{e=1}^\mathrm{n_{el}} \Omega^\mathrm{e}$. Let $\mathcal{S}^\mathrm{h}$ be the space of trial solution, the values of which equal the given Dirichlet boundary condition and $\mathcal{V}^\mathrm{h}$ be the space of test functions which vanish on the Dirichlet boundary, $\Gamma_D$. The variational form of the AC equation can be written as: find $\phi_\mathrm{h}(\boldsymbol{x},t^{\mathrm{n}+\alpha}) \in \mathcal{S}^\mathrm{h}$ such that $\forall w_\mathrm{h} \in \mathcal{V}^\mathrm{h}$, \begin{align} \int_\Omega \bigg( w_\mathrm{h}\partial_t{\phi}_\mathrm{h} + w_\mathrm{h}(\boldsymbol{u}\cdot\nabla\phi_\mathrm{h}) - w_\mathrm{h} k\nabla^2\phi_\mathrm{h} + w_\mathrm{h}s\phi_\mathrm{h} - w_\mathrm{h}f \bigg) \mathrm{d}\Omega = 0. \end{align} Using the divergence theorem and the fact that $w_\mathrm{h} = 0$ on $\Gamma_D$, \begin{align} &\int_\Omega \bigg( w_\mathrm{h}\partial_t{\phi}_\mathrm{h} + w_\mathrm{h}(\boldsymbol{u}\cdot\nabla\phi_\mathrm{h}) + \nabla w_\mathrm{h}\cdot(k\nabla\phi_\mathrm{h}) + w_\mathrm{h}s\phi_\mathrm{h} - w_\mathrm{h}f \bigg) \mathrm{d}\Omega - \int_{\Gamma_N} w_\mathrm{h}g \mathrm{d}\Gamma = 0, \end{align} where $g$ is the boundary condition on the Neumann boundary $\Gamma_N$ which is zero in this case. The above equation is the standard Galerkin finite element formulation. 
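Before adding the stabilization terms, it is worth noting that the reaction coefficient $s$ and the source term $f$ appearing in the Galerkin form are purely algebraic functions of $\phi^{\mathrm{n}}$, $\phi^{\mathrm{n}+\alpha}$, $\alpha$ and $\beta(t)$, and can be evaluated pointwise at the quadrature points. The Python sketch below (an illustration, not part of the original implementation) transcribes these expressions and verifies that $s\,\phi^{\mathrm{n}+\alpha} - f$ reproduces the mid-point forms of $F'(\phi)$ and $\sqrt{F(\phi)}$ used above.
\begin{verbatim}
def reaction_and_source(phi_n, phi_na, alpha, beta):
    """Reaction coefficient s and source f of the CDR form, from
    phi^n, phi^{n+alpha}, alpha and the multiplier beta(t)."""
    a = alpha
    s = 0.25 * (phi_na**2 / a**3
                - (3.0/a**3 - 4.0/a**2) * phi_na * phi_n
                + (3.0/a**3 - 8.0/a**2 + 6.0/a) * phi_n**2
                - 2.0/a) \
        - 0.5 * beta * (phi_na / (3.0*a**2) + (-2.0/a**2 + 3.0/a) * phi_n / 3.0)
    f = -0.25 * ((-1.0/a**3 + 4.0/a**2 - 6.0/a + 4.0) * phi_n**3
                 + (2.0/a - 4.0) * phi_n) \
        + 0.5 * beta * ((1.0/a**2 - 3.0/a + 3.0) * phi_n**2 / 3.0 - 1.0)
    return s, f

# Consistency check: s*phi^{n+alpha} - f equals the mid-point forms of
# F'(phi) - beta*sqrt(F(phi)), with phi^{n+1} = phi^n + (phi^{n+alpha} - phi^n)/alpha.
F = lambda p: 0.25 * (p**2 - 1.0)**2
K = lambda p: 0.5 * (p**3 / 3.0 - p)
phi_n, phi_na, alpha, beta = 0.3, 0.45, 0.6, 0.1
phi_np1 = phi_n + (phi_na - phi_n) / alpha
s, f = reaction_and_source(phi_n, phi_na, alpha, beta)
lhs = s * phi_na - f
rhs = ((F(phi_np1) - F(phi_n)) - beta * (K(phi_np1) - K(phi_n))) / (phi_np1 - phi_n)
assert abs(lhs - rhs) < 1e-12
\end{verbatim}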
Next, we apply the positivity preserving stabilization terms to the formulation above: \begin{align} \label{PPV_AC} &\int_\Omega \bigg( w_\mathrm{h}\partial_t{\phi}_\mathrm{h} + w_\mathrm{h}(\boldsymbol{u}\cdot\nabla\phi_\mathrm{h}) + \nabla w_\mathrm{h}\cdot(k\nabla\phi_\mathrm{h} ) + w_\mathrm{h}s\phi_\mathrm{h} - w_\mathrm{h}f \bigg) \mathrm{d}\Omega \nonumber \\ &+ \displaystyle\sum_\mathrm{e=1}^\mathrm{n_{el}}\int_{\Omega^\mathrm{e}}\bigg( \big(\boldsymbol{u}\cdot\nabla w_\mathrm{h} \big)\tau \big( \partial_t{\phi}_\mathrm{h} + \boldsymbol{u}\cdot\nabla\phi_\mathrm{h} - \nabla\cdot(k\nabla\phi_\mathrm{h}) + s\phi_\mathrm{h} -f \big) \bigg) \mathrm{d}\Omega^\mathrm{e} \nonumber \\ &+ \displaystyle\sum_\mathrm{e=1}^\mathrm{n_{el}}\int_{\Omega^\mathrm{e}} \chi \frac{|\mathcal{R}(\phi_\mathrm{h})|}{|\nabla\phi_\mathrm{h}|}k_s^\mathrm{add} \nabla w_\mathrm{h}\cdot \bigg( \frac{\boldsymbol{u}\otimes \boldsymbol{u}}{|\boldsymbol{u}|^2} \bigg) \cdot \nabla\phi_\mathrm{h} \mathrm{d}\Omega^\mathrm{e} + \sum_\mathrm{e=1}^\mathrm{n_{el}} \int_{\Omega^\mathrm{e}}\chi \frac{|\mathcal{R}(\phi_\mathrm{h})|}{|\nabla \phi_\mathrm{h}|} k^\mathrm{add}_{c} \nabla w_\mathrm{h} \cdot \bigg( \mathbf{I} - \frac{\boldsymbol{u}\otimes \boldsymbol{u}}{|\boldsymbol{u}|^2} \bigg) \cdot \nabla\phi_\mathrm{h} \mathrm{d}\Omega^\mathrm{e} \nonumber \\ &= 0, \end{align} where $\tau$ is the stabilization parameter given by \cite{Hughes_X} \begin{align} \tau &= \bigg[ \bigg( \frac{2}{\Delta t}\bigg)^2 + \boldsymbol{u}\cdot \boldsymbol{G}\boldsymbol{u} + 9k^2 \boldsymbol{G}:\boldsymbol{G}+ s^2\bigg] ^{-1/2}, \end{align} where $\boldsymbol{G}$ is the element contravariant metric tensor \cite{Akkerman,Bazilevs_book}, which is defined as \begin{align} \boldsymbol{G} = \frac{\partial \boldsymbol{\xi}^T}{\partial \boldsymbol{x}}\frac{\partial \boldsymbol{\xi}}{\partial \boldsymbol{x}}, \end{align} where $\boldsymbol{x}$ and $\boldsymbol{\xi}$ are the physical and parametric coordinates respectively and $\mathcal{R}(\phi_\mathrm{h})$ is the residual of the Allen-Cahn equation given as \begin{align} \mathcal{R}(\phi_\mathrm{h}) = \partial_t\phi_\mathrm{h} + \boldsymbol{u}\cdot\nabla\phi_\mathrm{h} - \nabla\cdot(k\nabla\phi_\mathrm{h}) + s\phi_\mathrm{h} -f. \end{align} In Eq.~(\ref{PPV_AC}), the first line represents the Galerkin terms. The second line consists of linear stabilization terms in the form of SUPG stabilization with the stabilization parameter $\tau$. Note that only the convective term has been taken in the weighting function, in contrast to the combined GLS-subgrid scale methodology of PPV where we also take the reaction term. This new modification has been performed to ensure the mass conservation property in the variational form. The proof of mass conservation property of the modified PPV scheme has been derived in \ref{DMC}. Terms in the third line of Eq.~(\ref{PPV_AC}) are the positivity preserving nonlinear stabilization terms which enforce the positivity property to the element-level matrix. The idea behind the PPV technique is first to apply the discrete upwind operator to the Galerkin and linear stabilized element-level matrix. This could be seen as addition of diffusion to convert the element matrix to a non-negative matrix in one-dimension. While the discrete upwinding provides the positivity property, it makes the solution first-order accurate. Therefore, the addition of diffusion is restricted to the region near the discontinuity which exhibits oscillations. 
This is accomplished by the dependence of the nonlinear terms on the residual of the equation. This idea is extended to multi-dimensions by considering the nonlinear terms in both streamline and crosswind directions. The first and second terms in the third line correspond to the streamline and crosswind directions respectively. The details of the derivation of the added diffusions $k_s^\mathrm{add}$, $k_c^\mathrm{add}$ and $\chi$ can be found in \cite{PPV}. For the current context of AC-based phase-field equation, \begin{align} \chi &= \frac{2}{|s|h + 2|\boldsymbol{u}|},\\ k_s^\mathrm{add} &= \mathrm{max} \bigg\{ \frac{||\boldsymbol{u}| - \tau|\boldsymbol{u}|s|h}{2} - (k + \tau|\boldsymbol{u}|^2) + \frac{sh^2}{6}, 0 \bigg\},\\ k_c^\mathrm{add} &= \mathrm{max} \bigg\{ \frac{|\boldsymbol{u}|h}{2} - k + \frac{sh^2}{6}, 0 \bigg\}, \end{align} where $|\boldsymbol{u}|$ is the magnitude of the convection velocity and $h$ is the characteristic element length defined in \cite{PPV}. Some of the prominent properties of the PPV method applied to the CDR equation are: (i) the method has at least second-order of spatial accuracy in the different convection- and reaction-dominated regions, (ii) a general application of the method to arbitrary topology can be implemented, and (iii) the method captures the high gradient internal and boundary layers in multi-dimensions. The aforementioned properties have been thoroughly investigated and tested for different convection- and reaction-dominated regimes in \cite{PPV}. We next present the Navier-Stokes equations and its variational form of the two-phase solver. \section{Variational discretization of the Navier-Stokes equations} \label{section:NS} For the computation of the multiphase incompressible flows, we next present the one-fluid formulation of the unsteady Navier-Stokes equations in this section. For the sake of completeness, the governing equations and their variational counterparts are first presented. \subsection{The incompressible Navier-Stokes equations} The unsteady Navier-Stokes equations for a viscous incompressible flow on a physical domain $\Omega(\boldsymbol{x},t)$ are \begin{align} \label{NS} \rho(\phi)\frac{\partial {\boldsymbol{u}}}{\partial t} + \rho(\phi){\boldsymbol{u}}\cdot\nabla{\boldsymbol{u}} &= \nabla\cdot {\boldsymbol{\sigma}} + \mathbf{SF}(\phi) + \boldsymbol{b}(\phi),&&\mathrm{on}\ (\boldsymbol{x},t)\in \Omega,\\ \nabla\cdot{\boldsymbol{u}} &= 0,&&\mathrm{on}\ (\boldsymbol{x},t)\in \Omega, \end{align} where ${\boldsymbol{u}} = {\boldsymbol{u}}(\boldsymbol{x},t)$ denotes the fluid velocity defined for each spatial point $\boldsymbol{x} \in \Omega$, $\rho(\phi)$ is the fluid density, $\boldsymbol{b}(\phi)$ is the body force applied on the fluid, $\mathbf{SF}(\phi)$ is the singular force acting at the interface which ensures the pressure jump across the interface and ${\boldsymbol{\sigma}}$ is the Cauchy stress tensor for a Newtonian fluid, given as \begin{align} {\boldsymbol{\sigma}} = -{p}\boldsymbol{I} + \mu(\phi)( \nabla{\boldsymbol{u}}+ (\nabla{\boldsymbol{u}})^T), \end{align} where ${p}$ denotes the fluid pressure and $\mu(\phi)$ is the dynamic viscosity of the fluid. The singular force is replaced by a continuum surface force (CSF) \cite{Brackbill}, which depends on the order parameter. Several forms of $\mathbf{SF}(\phi)$ have been used in the literature which are reviewed in \cite{Kim, Kim_3}. 
In this study, we employ the following definition: \begin{align} \mathbf{SF}(\phi) = \sigma\varepsilon\alpha_\mathrm{sf}\nabla\cdot( |\nabla\phi|^2\mathbf{I} - \nabla\phi \otimes \nabla\phi ), \end{align} where $\sigma$ is the surface tension, $\varepsilon$ is the interface thickness parameter defined in the Allen-Cahn phase-field equation and $\alpha_\mathrm{sf}$ is a constant parameter. The order parameter $\phi$ needs to be locally in the equilibrium state. Hence, to match the surface tension of the sharp-interface description, $\alpha_\mathrm{sf}$ must satisfy the following condition \cite{Kim_3} \begin{align} \varepsilon\alpha_\mathrm{sf} \int_{-\infty}^{\infty} \bigg( \frac{\mathrm{d}\phi}{\mathrm{d}z} \bigg)^2 dz &= 1,\\ \alpha_\mathrm{sf} &= \frac{3\sqrt{2}}{4}, \end{align} where we apply the expression of the equilibrium interface profile $\phi(z)$ given in Eq.~(\ref{eqm_profile}). This gives rise to the continuum surface force \begin{align} \mathbf{SF}(\phi) = \sigma\varepsilon\frac{3\sqrt{2}}{4}\nabla\cdot( |\nabla\phi|^2\mathbf{I} - \nabla\phi \otimes \nabla\phi ). \end{align} The physical parameters of the fluid such as $\rho$ and $\mu$ are dependent on the order parameter $\phi$ as \begin{align} \rho(\phi) &= \frac{1+\phi}{2}\rho_1 + \frac{1-\phi}{2}\rho_2, \label{dens}\\ \mu(\phi) &= \frac{1+\phi}{2}\mu_1 + \frac{1-\phi}{2}\mu_2, \label{visc} \end{align} where $(\cdot)_i$ denotes the value of $(\cdot)$ on the phase $i$ of the fluid. The body force $\boldsymbol{b}$ is generally taken as the gravity force dependent on the order parameter $\phi$ as $\boldsymbol{b}(\phi) = \rho(\phi)\boldsymbol{g}$, $\boldsymbol{g}$ being the acceleration due to gravity. Therefore, $\rho=\rho_1$ with $\mu=\mu_1$ on the fluid phase $1$ when $\phi=1$ and $\rho=\rho_2$ with $\mu=\mu_2$ on the fluid phase $2$ when $\phi=-1$. \subsection{The variational formulation} For consistency, the Navier-Stokes equations are discretized using the generalized-$\alpha$ time integration. Let $\partial_t{\boldsymbol{u}}^\mathrm{n+\alpha_m}$ be the temporal derivative of $\boldsymbol{u}$ at time $t^\mathrm{n+\alpha_m}$. The expressions used in the variational formulation are given as \begin{align} {\boldsymbol{u}}_\mathrm{h}^\mathrm{n+1} &= {\boldsymbol{u}}_\mathrm{h}^\mathrm{n} + \Delta t\partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n} + \gamma \Delta t( \partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n+1} - \partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n}), \\ {\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha} &= {\boldsymbol{u}}_\mathrm{h}^\mathrm{n} + \alpha({\boldsymbol{u}}_\mathrm{h}^\mathrm{n+1} - {\boldsymbol{u}}_\mathrm{h}^\mathrm{n}), \\ \partial_t\boldsymbol{u}_\mathrm{h}^\mathrm{n+\alpha_m} &= \partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n} + \alpha_\mathrm{m}( \partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n+1} - \partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n}). \end{align} Let $\mathcal{S}^\mathrm{h}$ be the space of trial solution, the values of which satisfy the Dirichlet boundary condition and $\mathcal{V}^\mathrm{h}$ be the space of test functions which vanish on the Dirichlet boundary. 
The variational form of the flow equation can be written as: find $[ {\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha}, {p}_\mathrm{h}^\mathrm{n+1}] \in\mathcal{S}^\mathrm{h}$ such that $\forall [\boldsymbol{\psi}_\mathrm{h}, q_\mathrm{h}] \in \mathcal{V}^\mathrm{h}$, \begin{align} &\int_{\Omega} \rho(\phi) ( \partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha_m} + {\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha} \cdot \nabla{\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha})\cdot \boldsymbol{\psi}_\mathrm{h} \mathrm{d\Omega} + \int_{\Omega} {\boldsymbol{\sigma}}_\mathrm{h}^\mathrm{n+\alpha} : \nabla \boldsymbol{\psi}_\mathrm{h} \mathrm{d\Omega} - \int_{\Omega} \mathbf{SF}^\mathrm{n+\alpha}_\mathrm{h}(\phi)\cdot\boldsymbol{\psi}_\mathrm{h} \mathrm{d\Omega}- \int_{\Omega} \nabla q_\mathrm{h}\cdot {\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha} \mathrm{d\Omega}\nonumber \\ &+ \displaystyle\sum_\mathrm{e=1}^\mathrm{n_{el}}\int_{\Omega^\mathrm{e}} \frac{\tau_\mathrm{m}}{\rho(\phi)} (\rho(\phi){\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha}\cdot \nabla\boldsymbol{\psi}_\mathrm{h}+ \nabla q_\mathrm{h} )\cdot \boldsymbol{\mathcal{R}}_\mathrm{m}({\boldsymbol{u}},{p}) \mathrm{d\Omega^e} + \displaystyle\sum_\mathrm{e=1}^\mathrm{n_{el}}\int_{\Omega^\mathrm{e}} \nabla\cdot \boldsymbol{\psi}_\mathrm{h}\tau_\mathrm{c}\rho(\phi) \boldsymbol{\mathcal{R}}_\mathrm{c}(\boldsymbol{u}) \mathrm{d\Omega^e}\nonumber \\ & -\displaystyle\sum_\mathrm{e=1}^\mathrm{n_{el}}\int_{\Omega^\mathrm{e}} \tau_\mathrm{m} \boldsymbol{\psi}_\mathrm{h}\cdot (\boldsymbol{\mathcal{R}}_\mathrm{m}({\boldsymbol{u}},{p}) \cdot \nabla {\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha}) \mathrm{d\Omega^e} -\displaystyle\sum_\mathrm{e=1}^\mathrm{n_{el}}\int_{\Omega^\mathrm{e}} \frac{\nabla \boldsymbol{\psi}_\mathrm{h}}{\rho(\phi)}:(\tau_\mathrm{m}\boldsymbol{\mathcal{R}}_\mathrm{m}({\boldsymbol{u}},{p}) \otimes \tau_\mathrm{m}\boldsymbol{\mathcal{R}}_\mathrm{m}({\boldsymbol{u}},{p})) \mathrm{d\Omega^e}\nonumber \\ &= \int_{\Omega} \boldsymbol{b}(t^\mathrm{n+\alpha})\cdot \boldsymbol{\psi}_\mathrm{h} \mathrm{d\Omega} + \int_{\Gamma_\mathrm{h}} \boldsymbol{h}\cdot \boldsymbol{\psi}_\mathrm{h} \mathrm{d\Gamma}, \end{align} where the terms in the first line correspond to the Galerkin terms including the transient and convective terms of the momentum equation, the viscous stress and surface tension terms and the variational form of the continuity equation. The second line represents the Galerkin/least-squares stabilization terms for the momentum and continuity equations. With the use of linear finite element spaces, the higher order derivatives of the weighting function related to the viscous stress tensor will be very small and hence, they have been neglected. The two residual terms in the third line are introduced as the approximation of the fine scale velocity on the element interiors and its corresponding convective stabilization based on the multi-scale argument \cite{Hughes_conserve,Hsu,Akkerman}. $\boldsymbol{h}$ is the corresponding Neumann boundary condition for the Navier-Stokes equations. 
The element-wise residual of the momentum and the continuity equations denoted by $\boldsymbol{\mathcal{R}}_\mathrm{m}$ and $\boldsymbol{\mathcal{R}}_\mathrm{c}$ respectively are given by \begin{align} \boldsymbol{\mathcal{R}}_\mathrm{m}({\boldsymbol{u}},{p}) &= \rho(\phi)\partial_t{\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha_m} + \rho(\phi){\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha} \cdot \nabla{\boldsymbol{u}}_\mathrm{h}^\mathrm{n+\alpha} - \nabla \cdot {\boldsymbol{\sigma}}_\mathrm{h}^\mathrm{n+\alpha} - \mathbf{SF}^\mathrm{n+\alpha}_\mathrm{h}(\phi) - \boldsymbol{b}(t^\mathrm{n+\alpha}), \\ \boldsymbol{\mathcal{R}}_\mathrm{c}(\boldsymbol{u}) &= \nabla\cdot\boldsymbol{u}^\mathrm{n+\alpha}_\mathrm{h}. \end{align} The stabilization parameters $\tau_\mathrm{m}$ and $\tau_\mathrm{c}$ are the least-squares metrics added to the element-level integrals in the stabilized formulation \cite{Brooks,Hughes_X,Tezduyar_1,France_II} and are defined as \begin{align} \tau_\mathrm{m} &= \bigg[ \bigg( \frac{2}{\Delta t}\bigg)^2 + {\boldsymbol{u}}_\mathrm{h}\cdot \boldsymbol{G}{\boldsymbol{u}}_\mathrm{h} + C_I \bigg(\frac{\mu(\phi)}{\rho(\phi)}\bigg)^2 \boldsymbol{G}:\boldsymbol{G}\bigg] ^{-1/2},\\ \tau_\mathrm{c} &= \frac{1}{\mathrm{tr}(\boldsymbol{G})\tau_\mathrm{m}}, \label{tau_c} \end{align} where $C_I$ is a constant derived from the element-wise inverse estimate \cite{Hughes_inv_est}, $\boldsymbol{G}$ is the element contravariant metric tensor and $\mathrm{tr(\boldsymbol{G})}$ in Eq.~(\ref{tau_c}) denotes the trace of the contravariant metric tensor. The stabilization in the variational form provides stability to the velocity field in convection dominated regimes of the fluid domain and circumvents the Babu$\mathrm{\check{s}}$ka-Brezzi condition which is required to be satisfied by any standard mixed Galerkin method \cite{Johnson}. The element metric tensor $\boldsymbol{G}$ deals with different element topology for different mesh discretization and has been greatly studied in the literature \cite{Johnson,Hsu,Tezduyar_1,Hughes_V,Tezduyar_stab}. \section{Implementation details} \label{imp_details} A schematic of the iterative coupling between the Navier-Stokes and the Allen-Cahn equations is shown in Fig. \ref{solver_schematic_1}. For efficiency and robustness, we perform the implicit discretizations of the Navier-Stokes and the Allen-Cahn equations. We employ the time-splitting procedure to decouple the incompressible Navier-Stokes and Allen-Cahn updates at the discrete level. While the Navier-Stokes equations provide a predictor fluid velocity, the Allen-Cahn equation is solved to provide an updated order parameter to interpolate the density, viscosity, capillary and body forces on the Navier-Stokes side. Consider the velocity $\boldsymbol{u}(\boldsymbol{x},t^\mathrm{n})$, pressure $p(\boldsymbol{x},t^\mathrm{n})$ and the order parameter $\phi(\boldsymbol{x},t^\mathrm{n})$ at time $t^\mathrm{n}$. In the first step of the predictor-corrector iteration $\mathrm{k}$, the velocity and pressure are predicted by solving the Navier-Stokes equations. In the second step, the computed fluid velocity is transferred to the Allen-Cahn equation. The convective AC equation is solved using the transferred fluid velocity to obtain an updated order parameter $\phi_\mathrm{k+1}$ in the third step. Finally, in the fourth step, the updated order parameter is utilized in the interpolation of density $\rho(\phi)$, viscosity $\mu(\phi)$, $\mathbf{SF}(\phi)$ and body force $\boldsymbol{b}(\phi)$. 
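A minimal Python sketch of one pass of this predictor-corrector coupling is given below for illustration; the solver callables, their argument lists and the residual-based convergence test are placeholder assumptions and do not represent the actual implementation, while the property interpolation follows Eqs.~(\ref{dens}) and (\ref{visc}).
\begin{verbatim}
# Minimal sketch of the predictor-corrector coupling of the implicit
# Navier-Stokes and Allen-Cahn solves within one time step.  The two solver
# callables are placeholders for the variational solves described above,
# not the actual solver interface.

def interpolate_properties(phi, props):
    """Density and viscosity interpolation, Eqs. (dens) and (visc)."""
    rho = 0.5 * (1.0 + phi) * props["rho1"] + 0.5 * (1.0 - phi) * props["rho2"]
    mu  = 0.5 * (1.0 + phi) * props["mu1"]  + 0.5 * (1.0 - phi) * props["mu2"]
    return rho, mu

def advance_one_step(u_n, p_n, phi_n, dt, props,
                     solve_navier_stokes, solve_allen_cahn,
                     n_iter=3, tol=1e-4):
    """Advance the coupled system from t^n to t^{n+1}."""
    u, p, phi = u_n, p_n, phi_n               # predictor: start from known state
    for k in range(n_iter):                   # nonlinear iterations
        rho, mu = interpolate_properties(phi, props)   # step [4] of previous pass
        # step [1]: implicit Navier-Stokes solve with current rho, mu, SF, b
        u, p, res_ns = solve_navier_stokes(u_n, p_n, u, p, rho, mu, phi, dt)
        # steps [2]-[3]: implicit Allen-Cahn solve convected by the updated velocity
        phi, res_ac = solve_allen_cahn(phi_n, phi, u, dt)
        if max(res_ns, res_ac) < tol:         # converged within the time step
            break
    return u, p, phi
\end{verbatim}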
These updated quantities are then fed into the Navier-Stokes equations in the next iteration. At the end of the nonlinear iterations when the solver has achieved the convergence criteria, the values at the next time step $t^\mathrm{n+1}$ are updated and the coupled solver is advanced in time. \begin{figure}[!htbp] \centering \begin{tikzpicture}[decoration={markings,mark=at position 0.5 with {\arrow[scale=2]{>}}},every node/.style={scale=0.9},scale=0.9] \draw[-,black] (0,0) node[anchor=north,black]{} to (0,11); \draw[-,black] (6,0) node[anchor=north,black]{} to (6,11); \fill[black] (5.85,0.85) rectangle (6.15,1.15) ; \fill[black] (5.85,3.85) rectangle (6.15,4.15) ; \fill[black] (5.85,6.85) rectangle (6.15,7.15) ; \fill[black] (5.85,9.85) rectangle (6.15,10.15) ; \draw[black,fill=black] (0,1) circle (0.9ex); \draw[black,fill=black] (0,4) circle (0.9ex); \draw[black,fill=black] (0,7) circle (0.9ex); \draw[black,fill=black] (0,10) circle (0.9ex); \draw[black,postaction={decorate}] (0,1) to (6,1); \draw[black,postaction={decorate}] (4,0) to (6,1); \draw[black,postaction={decorate}] (0,10) to (2,11); \draw[black,postaction={decorate}] (6,1) to (6,4); \draw[black,postaction={decorate}] (6,4) to (0,1); \draw[black,postaction={decorate}] (0,1) to (0,4); \draw[black,postaction={decorate}] (0,4) to (6,4); \draw[black,postaction={decorate}] (6,4) to (6,7); \draw[black,postaction={decorate}] (6,7) to (0,4); \draw[black,postaction={decorate}] (0,4) to (0,7); \draw[black,postaction={decorate}] (0,7) to (6,7); \draw[black,postaction={decorate}] (6,7) to (6,10); \draw[black,postaction={decorate}] (6,10) to (0,7); \draw[black,postaction={decorate}] (0,7) to (0,10); \draw[black,postaction={decorate}] (0,10) to (6,10); \draw[black,thick] (-0.25,7.6) to (0.25,7.8); \draw[black,thick] (-0.25,7.75) to (0.25,7.95); \draw[white] (0,7.72) to (0,7.83) ; \draw[black,thick] (5.75,7.6) to (6.25,7.8); \draw[black,thick] (5.75,7.75) to (6.25,7.95); \draw[white] (6,7.72) to (6,7.83) ; \draw[black] (-0.2,1) node[anchor=east,black]{$\phi(\boldsymbol{x},t^\mathrm{n})$}; \draw[black] (-0.2,10) node[anchor=east,black]{$\phi(\boldsymbol{x},t^\mathrm{n+1})$}; \draw[black] (6.2,1) node[anchor=west,black]{$\boldsymbol{u}(\boldsymbol{x},t^\mathrm{n})$,}; \draw[black] (6.2,0.5) node[anchor=west,black]{$p(\boldsymbol{x},t^\mathrm{n})$}; \draw[black] (6.2,10.5) node[anchor=west,black]{$\boldsymbol{u}(\boldsymbol{x},t^\mathrm{n+1})$,}; \draw[black] (6.2,10) node[anchor=west,black]{$p(\boldsymbol{x},t^\mathrm{n+1})$}; \draw (6,4) -- (0,1) node [midway, above, sloped] {$\boldsymbol{u}_\mathrm{k+1}$}; \draw (3,4.3) node[anchor=west]{$\phi_\mathrm{k+1}$}; \draw (6,2.5) node[anchor=west,black]{[1]}; \draw (3.4,2.5) node[anchor=west,black]{[2]}; \draw (0,2.5) node[anchor=east,black]{[3]}; \draw (3.9,3.7) node[anchor=east,black]{[4]}; \draw (1.5,2.5) node[anchor=east]{$\mathrm{k}=0$}; \draw (0.45,2.25) rectangle (1.45,2.7) ; \draw (1.5,5.5) node[anchor=east]{$\mathrm{k}=1$}; \draw (0.45,5.25) rectangle (1.45,5.7) ; \draw (0.4,8.5) node[anchor=west]{$\mathrm{k}=\mathrm{nIter}$}; \draw (0.45,8.25) rectangle (1.95,8.7) ; \draw (6.7,4.5) node[rotate=90, anchor=west,black]{Navier-Stokes system}; \draw (-1.0,7.0) node[rotate=90, anchor=east,black]{Allen-Cahn system}; \end{tikzpicture} \caption{A schematic of predictor-corrector procedure for Navier-Stokes and Allen-Cahn coupling via nonlinear iterations. 
Here, $\mathrm{nIter}$ denotes the maximum number of nonlinear iterations to achieve a desired convergence tolerance within a time step at $ t \in [t^\mathrm{n},t^\mathrm{n+1}]$. The decoupled Navier-Stokes and Allen-Cahn equations are implicitly discretized.} \label{solver_schematic_1} \end{figure} In this study, the incremental flow velocity, pressure and the order parameter are evaluated by employing Newton-Raphson type iterations at each time step, and the generalized-$\alpha$ time integration \cite{Gen_alpha} is applied to advance the solution in time. The linear system for the incompressible Navier-Stokes equations can be written as \begin{align} \label{LS_NS} \begin{bmatrix} \boldsymbol{K}_\Omega & & \boldsymbol{G}_\Omega \\ & \\ -\boldsymbol{G}^T_\Omega & &\boldsymbol{C}_\Omega \end{bmatrix} \begin{Bmatrix} \Delta \boldsymbol{u}^\mathrm{n+\alpha_f} \\ \\ \Delta p^\mathrm{n+1} \end{Bmatrix} = -\begin{Bmatrix} \boldsymbol{\mathcal{R}}_\mathrm{m} \\ \\ \boldsymbol{\mathcal{R}}_\mathrm{c} \end{Bmatrix} \end{align} where $\boldsymbol{K}_\Omega$ is the stiffness matrix of the momentum equation which consists of inertia, convection, diffusion and stabilization terms, $\boldsymbol{G}_\Omega$ is the discrete gradient operator, $\boldsymbol{G}^T_\Omega$ is the divergence operator and $\boldsymbol{C}_\Omega$ is the pressure-pressure stabilization term. $\Delta \boldsymbol{u}$ and $\Delta p$ are the increments in velocity and pressure respectively. A similar linear system for the Allen-Cahn equation can be expressed as \begin{align} \label{LS_AC} \begin{bmatrix} \boldsymbol{K}_{AC} \end{bmatrix} \begin{Bmatrix} \Delta \phi^\mathrm{n+\alpha} \end{Bmatrix} = \begin{Bmatrix} \mathcal{R}(\phi) \end{Bmatrix} \end{align} where $\boldsymbol{K}_{AC}$ is the stiffness matrix of the Allen-Cahn equation which includes inertia, convection, diffusion, reaction and stabilization terms, and $\Delta \phi$ is the increment in the solution for the order parameter. The algorithm is summarized in Algorithm \ref{algorithm_1}. In the present investigation, we consider equal-order interpolations for all the quantities ($\boldsymbol{u},p,\phi$) in the finite element discretization. At each time step, the nonlinear errors of the fully-coupled implicit systems of the Navier-Stokes and the Allen-Cahn equations are minimized by Newton-Raphson iterations. In our numerical experiments, we have found that about 2-3 nonlinear iterations are enough to obtain a reasonably converged solution. Moreover, we emphasize that each nonlinear iteration of the algorithm consists of just one pass through the coupled Navier-Stokes and Allen-Cahn phase-field system. This single-pass explicit coupling in turn helps in reducing the computational time without compromising the accuracy and stability of the solution. We solve the coupled equations at discrete time steps to capture the transient flow characteristics, which leads to a sequence of linear systems of equations. The matrices of these linear systems are formed and stored using the Harwell-Boeing sparse matrix format. Each linear system is solved via the Generalized Minimal RESidual (GMRES) algorithm proposed in \cite{saad1986}, which relies on the preconditioned Krylov subspace iteration and the modified Gram-Schmidt orthogonalization. The number of matrix-vector products determines the dimension of the Krylov space from which a solution is computed. The phase-field two-phase solver relies on a hybrid parallelism for parallel computing.
It employs a standard master-slave strategy for distributed memory clusters via message passing interface (MPI) based on domain decomposition strategy \cite{mpi}. The parallel implementation for the two-phase solver takes the advantage of state-of-the-art hierarchical memory and parallel architectures. \begin{figure}[H] \centering \floatstyle{ruled} \newfloat{algorithm}{H}{loa} \floatname{algorithm}{Algorithm} \begin{algorithm} \caption{Partitioned staggered coupling of implicit Navier-Stokes and Allen-Cahn solvers} \label{algorithm_1} \begin{tabbing} \ Given $\boldsymbol{u}^0$, $p^0$, $\phi^0$ \\ \qquad Loop over time steps, $\mathrm{n}=0,1,\cdots$ \\ \qquad\ Start from known variables $\boldsymbol{u}^\mathrm{n}$, $p^\mathrm{n}$, $\phi^\mathrm{n}$\\ \qquad\ Predict the solution: \\ \qquad\qquad $\boldsymbol{u}^\mathrm{n+1}_{(0)}=\boldsymbol{u}^\mathrm{n}$\\ \qquad\qquad $p^\mathrm{n+1}_{(0)} = p^\mathrm{n}$ \\ \qquad\qquad $\phi^\mathrm{n+1}_{(0)} = \phi^\mathrm{n}$\\ \qquad\ Loop over the nonlinear iterations, $\mathrm{k}=0,1,\cdots$ until convergence \\ \qquad \begin{tikzpicture} \draw (1,0) node(E){[1] \textbf{Navier-Stokes Implicit Solve}}; \draw (-2,-0.75) node(E1)[anchor=west]{(a) Interpolate solution:}; \draw (-1.5,-1.25) node(E2)[anchor=west] {$\boldsymbol{u}^\mathrm{n+\alpha_f}_\mathrm{(k+1)} = \boldsymbol{u}^\mathrm{n} + \alpha_\mathrm{f}(\boldsymbol{u}^\mathrm{n+1}_\mathrm{(k)} - \boldsymbol{u}^\mathrm{n})$}; \draw (-1.5,-1.85) node(E3)[anchor=west] {$p^\mathrm{n+1}_\mathrm{(k+1)} = p^\mathrm{n+1}_\mathrm{(k)}$}; \draw (-2,-2.75) node(E4)[anchor=west]{(b) Solve for $\Delta \boldsymbol{u}^\mathrm{n+\alpha_f}$ and $\Delta p^\mathrm{n+1}$ in Eq.~({\ref{LS_NS}})}; \draw (-2,-3.65) node(E5)[anchor=west]{(c) Correct solution:}; \draw (-1.5,-4.15) node(E6)[anchor=west]{$\boldsymbol{u}^\mathrm{n+\alpha_f}_\mathrm{(k+1)} = \boldsymbol{u}^\mathrm{n+\alpha_f}_\mathrm{(k+1)} + \Delta\boldsymbol{u}^\mathrm{n+\alpha_f}$}; \draw (-1.5,-4.75) node(E7)[anchor=west]{$p^\mathrm{n+1}_\mathrm{(k+1)} = p^\mathrm{n+1}_\mathrm{(k+1)} + \Delta p^\mathrm{n+1}$}; \draw (-2,-5.65) node(E8)[anchor=west]{(d) Update solution:}; \draw (-1.5,-6.15) node(E9)[anchor=west]{$\boldsymbol{u}^\mathrm{n+1}_\mathrm{(k+1)} = \boldsymbol{u}^\mathrm{n} + \frac{1}{\alpha_\mathrm{f}}(\boldsymbol{u}^\mathrm{n+\alpha_f}_\mathrm{(k+1)} - \boldsymbol{u}^\mathrm{n})$}; \draw (-1.5,-6.75) node(E10)[anchor=west]{$p^\mathrm{n+1}_\mathrm{(k+1)} = p^\mathrm{n+1}_\mathrm{(k+1)}$}; \node[fit = (E) (E1) (E2) (E3) (E4) (E5) (E6) (E7) (E8) (E9) (E10),style={block3}](NS){}; \end{tikzpicture} \hspace{0cm} \begin{tikzpicture} \node (0,0){}; \draw (0,2) node[anchor=north]{$\xrightarrow{\makebox[2cm]{[2] $\boldsymbol{u}^\mathrm{n+1}_\mathrm{(k+1)}$}}$}; \draw (0,6) node[anchor=north]{$\xleftarrow{\makebox[2cm]{[4] $\phi^\mathrm{n+1}_\mathrm{(k+1)}$}}$}; \draw (0,5.25) node[anchor=north]{to interpolate}; \draw (0,4.75) node[anchor=north]{$\rho(\phi)$, $\mu(\phi)$}; \draw (0,4.25) node[anchor=north]{$\mathbf{SF}(\phi)$, $\boldsymbol{b}(\phi)$}; \end{tikzpicture} \begin{tikzpicture} \centering \draw (0.75,0) node(E){[3] \textbf{Allen-Cahn Implicit Solve}}; \draw (-2,-0.75) node(E1)[anchor=west]{(a) Interpolate solution:}; \draw (-1.5,-1.25) node(E2)[anchor=west] {$\phi^\mathrm{n+\alpha_f}_\mathrm{(k+1)} = \phi^\mathrm{n} + \alpha_\mathrm{f}(\phi^\mathrm{n+1}_\mathrm{(k)} - \phi^\mathrm{n})$}; \draw (-1.5,-1.85) node(E3)[anchor=west] { }; \draw (-2,-2.75) node(E4)[anchor=west]{(b) Solve for $\Delta \phi^\mathrm{n+\alpha_f}$ in Eq.~(\ref{LS_AC})}; 
\draw (-2,-3.65) node(E5)[anchor=west]{(c) Correct solution:}; \draw (-1.5,-4.15) node(E6)[anchor=west]{$\phi^\mathrm{n+\alpha_f}_\mathrm{(k+1)} = \phi^\mathrm{n+\alpha_f}_\mathrm{(k+1)} + \Delta\phi^\mathrm{n+\alpha_f}$}; \draw (-1.5,-4.75) node(E7)[anchor=west]{ }; \draw (-2,-5.65) node(E8)[anchor=west]{(d) Update solution:}; \draw (-1.5,-6.15) node(E9)[anchor=west]{$\phi^\mathrm{n+1}_\mathrm{(k+1)} = \phi^\mathrm{n} + \frac{1}{\alpha_\mathrm{f}}(\phi^\mathrm{n+\alpha_f}_\mathrm{(k+1)} - \phi^\mathrm{n})$}; \draw (-1.5,-6.75) node(E10)[anchor=west]{ }; \node[fit = (E) (E1) (E2) (E3) (E4) (E5) (E6) (E7) (E8) (E9) (E10),style={block4}](AC){}; \end{tikzpicture} \end{tabbing} \end{algorithm} \vspace{-0.5cm} \label{solver_schematic_2} \end{figure} \section{Numerical tests} \label{tests} In this section, we present some numerical tests to assess the scheme for the coupled Allen-Cahn and Navier-Stokes equations. In all the test cases, the location of the interface between the two phases of the fluid is evaluated by linearly interpolating the order parameter field $\phi$ and finding the interface where $\phi=0$. \subsection{Verification of the Allen-Cahn implementation} We first validate the Allen-Cahn solver using the volume-conserved motion by curvature in two-dimensions \cite{H_G_Lee}. A square computational domain $[0,1]\times [0,1]$ with varying element sizes is considered. Periodic boundary conditions are imposed on all the boundaries. \begin{figure}[!htbp] \centering \begin{subfigure}[b]{0.5\textwidth} \qquad \begin{tikzpicture}[decoration={markings,mark=at position 1.0 with {\arrow{>}}},scale=5.9] \draw (0,0) -- (1,0)-- (1,1) -- (0,1) -- cycle; \draw[fill={rgb:black,1;white,2}, fill opacity=0.4] (0.25,0.25) circle (0.1cm); \draw[fill={rgb:black,1;white,2}, fill opacity=0.4] (0.57,0.57) circle (0.15cm); \draw (0.25,0.25) node(A){$\Omega_1$}; \draw (0.57,0.57) node(B){$\Omega_1$}; \node [below=0.3cm of A]{Circle 1}; \node [below=0.6cm of B]{Circle 2}; \draw (0.75,0.25) node{$\Omega_2$}; \draw[thick,postaction={decorate}] (0,0) to (0.2,0); \draw[thick,postaction={decorate}] (0,0) to (0,0.2); \draw (0.2,0) node[anchor=north]{X}; \draw (0,0.2) node[anchor=east]{Y}; \draw[postaction={decorate}] (1.05,0.5) to (1.05,1); \draw[postaction={decorate}] (1.05,0.5) to (1.05,0); \draw (1.01,0) -- (1.1,0); \draw (1.01,1) -- (1.1,1); \draw (1.05,0.5) node[anchor=west]{$1$}; \draw[postaction={decorate}] (0.5,1.05) to (1,1.05); \draw[postaction={decorate}] (0.5,1.05) to (0,1.05); \draw (0,1.01) -- (0,1.1); \draw (1,1.01) -- (1,1.1); \draw (0.5,1.05) node[anchor=south]{$1$}; \end{tikzpicture} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{AC_validation.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \ \ \ \includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip,width=8.5cm]{AC_initial.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip,width=8.5cm]{AC_final.eps} \caption{} \end{subfigure} \caption{Evolution of the radii of two-dimensional circles: (a) Schematic diagram showing the computational domain, (b) Validation of the evolution of the radii of the two circles with the literature \cite{H_G_Lee}, and the contour plots of the order parameter $\phi$ at (c) $t=0$, and (d) $t=100$. In (a), $\Omega_1$ and $\Omega_2$ are the two phases with periodic boundary conditions imposed on all the sides. 
} \label{ac_val} \end{figure} The initial condition is given by: \begin{align} \phi(x,y,0) = 1 + \mathrm{tanh}\bigg( \frac{R_1 - \sqrt{(x-0.25)^2 + (y-0.25)^2}}{\sqrt{2}\varepsilon} \bigg) + \mathrm{tanh}\bigg( \frac{R_2 - \sqrt{(x-0.57)^2 + (y-0.57)^2}}{\sqrt{2}\varepsilon} \bigg), \end{align} where $R_1=0.1$ and $R_2=0.15$ are the radii of the two circles centered at $(0.25,0.25)$ and $(0.57,0.57)$, respectively. The evolution of the radii of the two circles is tracked and validated against the results obtained in \cite{H_G_Lee} for $\varepsilon=0.01$. The time step size in the present simulation is $0.1$ with the final time $t=100$. The problem set-up with the evolution of the radii of the two circles is shown in Fig. \ref{ac_val}. The results are in very close agreement with the reference. Further analysis is carried out to quantify the appropriate number of elements required across the interface to capture the evolution of the radii. Let $N_\varepsilon$ denote the number of elements across the equilibrium interface thickness, $\varepsilon_\mathrm{eqm}= 4.164\varepsilon$. We simulate the evolution of the radii for $N_\varepsilon \in [3,10]$ and quantify the percentage error ($e_1$) in the radii at the final time $t=100$ defined as \begin{align} e_1 = \frac{|R_i - R_\mathrm{ref}|}{R_\mathrm{ref}} \times 100, \end{align} where $R_i$ is the radius corresponding to the respective resolution at $t=100$ and $R_\mathrm{ref}$ is the radius obtained for $N_{\varepsilon}=10$ at $t=100$. The results are summarized in Table \ref{table_ac}. It can be observed that for circle 2 (the larger circle), $N_{\varepsilon}>3$ is sufficient to obtain the solution within $1\%$ error, while for circle 1, $N_{\varepsilon}$ should be $>6$ to obtain results with similar accuracy. The reason is the different curvature of the two circles: as the radius of circle 1 decreases, its curvature increases and more elements are needed to sufficiently resolve its interface. \renewcommand{\arraystretch}{0.5} \begin{table}[!h] \caption{Percentage error in the final radii of the two circles for different mesh resolutions} \centering \begin{tabular}{ M{3cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} N } \hline \centering \textbf{$N_{\varepsilon}$}$\rightarrow$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ &\\[10pt] \hline \centering Circle 1 & 10.39 & 5.29 & 3.26 & 1.54 & 0.91 & 0.58 &\\[10pt] \centering Circle 2 & 1.28 & 0.60 & 0.36 & 0.20 & 0.15 & 0.02 &\\[10pt] \hline \end{tabular} \label{table_ac} \end{table} To understand the behavior of the solver under spatial and temporal refinements, we evaluate the $L^2$ error of the solution over the whole domain at $t=50$ for decreasing element and time step sizes. The $L^2$ error is calculated as: \begin{align} e_2 = \frac{||\Phi-\Phi_\mathrm{ref}||_2}{||\Phi_\mathrm{ref}||_2}, \end{align} where $\Phi$ is the vector of the solution of the order parameter $\phi$ at the final time $t=50$ over the whole domain for the respective refinement, $\Phi_\mathrm{ref}$ is the vector of the solution of the finest resolution and $||\cdot||_2$ is the $L^2$ norm. The spatial refinement is carried out uniformly by decreasing the element size from $h=1/72$ to $h=1/2304$. The finest grid $h=1/2304$ is selected as the reference solution for evaluating the error norm. For the temporal refinement, we take $\Delta t=0.025$ and decrease it by a factor of $2$ down to $\Delta t=3.90625\times 10^{-4}$. In a similar manner, the finest $\Delta t$ is taken as the reference solution.
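For reference, the relative $L^2$ error $e_2$ and the observed order of convergence between two successive refinements can be evaluated as in the following NumPy sketch; it is an illustrative post-processing snippet and assumes that the solution vectors are sampled at a common set of points.
\begin{verbatim}
import numpy as np

def relative_l2_error(phi, phi_ref):
    """Relative L2 error e_2 = ||phi - phi_ref||_2 / ||phi_ref||_2."""
    phi, phi_ref = np.asarray(phi), np.asarray(phi_ref)
    return np.linalg.norm(phi - phi_ref) / np.linalg.norm(phi_ref)

def observed_order(e_coarse, e_fine, ratio=2.0):
    """Observed convergence order from the errors on two refinements whose
    element (or time step) sizes differ by the factor `ratio`."""
    return np.log(e_coarse / e_fine) / np.log(ratio)

# Example: for two successive spatial refinements h and h/2, an observed
# order close to 2 indicates second-order spatial accuracy.
\end{verbatim}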
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8] \begin{loglogaxis}[ width=0.5\textwidth, height=0.5\textwidth, xlabel={$h$}, ylabel={$e_2$}, xmin=5e-4, xmax=0.03, ymin=1e-5, ymax=0.1, legend pos=north west, ] \addplot[mark=*,black,mark size={3pt},line width={1pt}] coordinates { (1/72,0.017285213259038)(1/144,0.004418405301747)(1/288,0.001115072427403)(1/576,2.669733237974486e-04)(1/1152,5.346981682268451e-05) }; \addplot[dotted,red,mark size={3pt},line width={1pt}] coordinates { (0.015,0.01)(0.002,1.778e-4) } coordinate [pos=0.4] (A) coordinate [pos=0.6] (B) coordinate [pos=0.5] (C) coordinate [pos=0.7] (D); \draw[red,line width={1pt}] (A) |- (B) ; \draw (C) node[anchor=west,black]{$\ \ \ \ \ \ 2.0$}; \draw (D) node[anchor=west,black]{$\quad\ \ 1$}; \end{loglogaxis} \draw (3.2,-1.5) node[anchor=east,black,scale=0.9]{(a)}; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.8] \begin{loglogaxis}[ width=0.5\textwidth, height=0.5\textwidth, xlabel={$\Delta t$}, ylabel={$e_2$}, xmin=5e-4, xmax=0.05, ymin=1e-7, ymax=4e-5, legend pos=north west, ] \addplot[mark=*,mark options={solid}, black,mark size={3pt},line width={1pt}] coordinates { (0.025,2.771611524155344e-05)(0.0125,1.380186873448174e-05)(6.25e-3,6.712093119755828e-06)(3.125e-3,3.133188127919196e-06)(1.5625e-3,1.352202843113804e-06)(7.8125e-4,4.513311653281769e-07) }; \addplot[dotted,red,mark size={3pt},line width={1pt}] coordinates { (0.03,2e-5)(2e-3,2.626e-7) } coordinate [pos=0.4] (A) coordinate [pos=0.6] (B) coordinate [pos=0.5] (C) coordinate [pos=0.7] (D); \draw[red,line width={1pt}] (A) |- (B) ; \draw (C) node[anchor=west,black]{$\ \ \ \ \ \ 1.6$}; \draw (D) node[anchor=west,black]{$\quad\ \ 1$}; \end{loglogaxis} \draw (3.2,-1.5) node[anchor=east,black,scale=0.9]{(b)}; \end{tikzpicture} \caption{Convergence study for the present method through dependence of non-dimensionalized $L^2$ error $(e_2)$ as a function of: (a) uniform mesh refinement $h$, and (b) uniform temporal refinement $\Delta t$} \label{error_1} \end{figure} The mesh and temporal convergence are plotted in Fig. \ref{error_1}. Some of the peculiar observations from the convergence plots are: (i) the spatial convergence is of second-order with the mesh refinement, and (ii) temporal convergence initially starts with first-order, but increases to a value of $1.6$ as time step is further refined. The order of temporal convergence is not exactly second-order due to the nonlinear energy-stable properties which are imparted by the mid-point approximation in the discretization. Finally, to analyze the conservation of mass or the order parameter $\phi$, we quantify the percentage change of the mass across the time domain of the experiment compared with the initial mass, which comes out as $5.4577 \times 10^{-5} ~\%$. Note that the mass conservation depends on the tolerance of the linear GMRES solver as well as the nonlinear tolerance. A linear tolerance of $10^{-15}$ with nonlinear tolerance of $10^{-4}$ are chosen for this experiment. Within this limit, it can be concluded that the mass is conserved for the presented scheme. \subsection{Laplace-Young law} To verify the coupling between the Allen-Cahn and the incompressible Navier-Stokes equations at high density ratio, we carry out a test to verify the Laplace-Young law. 
The law states that the pressure difference ($\Delta p$) across the interface of a static bubble in a two-phase fluid system is equivalent to the ratio of the surface tension ($\sigma$) and the radius of curvature ($R$) of the bubble, \begin{align} \Delta p = p_\mathrm{in} - p_\mathrm{out} = \frac{\sigma}{R}. \end{align} In the equilibrium state, the velocity vanishes $\boldsymbol{u}=\mathbf{0}$ and the pressure gradient balances the surface tension force, i.e., \begin{align} \nabla p = \sigma\varepsilon\frac{3\sqrt{2}}{4}\nabla\cdot( |\nabla\phi|^2\mathbf{I} - \nabla\phi \otimes \nabla\phi ). \end{align} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \qquad \begin{tikzpicture}[decoration={markings,mark=at position 1.0 with {\arrow{>}}},scale=5.9] \draw[fill={black!10}] (0,0) -- (1,0)-- (1,1) -- (0,1) -- cycle; \draw[fill=white] (0.5,0.5) circle (0.15cm); \draw (0.5,0.5) node(A){$\Omega_2$}; \node [below = -0.2cm of A]{$(\rho_2, \mu_2)$}; \draw (0.75,0.75) node(B){$\Omega_1$}; \node [below = -0.2cm of B]{$(\rho_1, \mu_1)$}; \draw[thick,postaction={decorate}] (0,0) to (0.2,0); \draw[thick,postaction={decorate}] (0,0) to (0,0.2); \draw (0.2,0) node[anchor=north]{X}; \draw (0,0.2) node[anchor=east]{Y}; \draw[postaction={decorate}] (1.05,0.5) to (1.05,1); \draw[postaction={decorate}] (1.05,0.5) to (1.05,0); \draw (1.01,0) -- (1.1,0); \draw (1.01,1) -- (1.1,1); \draw (1.05,0.5) node[anchor=west]{$4$}; \draw[postaction={decorate}] (0.5,1.05) to (1,1.05); \draw[postaction={decorate}] (0.5,1.05) to (0,1.05); \draw (0,1.01) -- (0,1.1); \draw (1,1.01) -- (1,1.1); \draw (0.5,1.05) node[anchor=south]{$4$}; \draw (0.35,0.35) -- (0.35,0.23); \draw (0.65,0.35) -- (0.65,0.23); \draw[<->] (0.35,0.26) -- (0.65,0.26); \draw (0.5,0.26) node[anchor=south] {$2R$}; \end{tikzpicture} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{LY_validation.eps} \caption{} \end{subfigure} \caption{Laplace-Young law: (a) Schematic diagram showing the computational domain, (b) Comparison of the pressure difference across the interface in a static bubble obtained from the simulation with the Laplace-Young law. In (a), $\Omega_1$ and $\Omega_2$ are the two fluid phases with densities $\rho_1=1000$ and $\rho_2=1$, viscosities $\mu_1=10$, $\mu_2=0.1$ and acceleration due to gravity $\boldsymbol{g}=(0,0,0)$ and all the boundaries have periodic boundary condition.} \label{Val_1} \end{figure} The density and dynamic viscosity of the two fluids are taken as $\rho_1=1000$, $\rho_2=1$, $\mu_1=10$ and $\mu_2=0.1$. In the numerical tests, we consider a domain size $\Omega = [0,4]\times [0,4]$ with uniform structured mesh of grid size $1/200$ with different radii of the bubble ($0.2$, $0.25$, $0.3$, $0.35$ and $0.4$ units) and three surface tensions ($0.5$, $0.25$ and $0.05$ units). Periodic boundary conditions are applied on all the boundaries. The initial condition is given by: \begin{align} \phi(x,y,0) = -\mathrm{tanh}\bigg( \frac{R - \sqrt{(x-x_c)^2 + (y-y_c)^2}}{\sqrt{2}\varepsilon} \bigg), \end{align} where $R$ is the radius of the bubble with its centre at $(x_c,y_c)=(2,2)$. The interface thickness parameter is $\varepsilon=0.01$. The time step size is taken as $\Delta t=0.01$s and the pressure difference is measured after $5000$ time steps. The schematic of the computational domain and the representative results are shown in Fig. \ref{Val_1}. The pressure difference in Fig. \ref{Val_1}(b) shows good agreement with the Laplace-Young law. 
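As a simple reference for this test, the sketch below constructs the $\mathrm{tanh}$ bubble profile defined above on a uniform grid and tabulates the theoretical Laplace-Young pressure jumps $\Delta p=\sigma/R$ for the radii and surface tension values considered; it is an illustrative snippet and not part of the solver.
\begin{verbatim}
import numpy as np

eps = 0.01                      # interface thickness parameter
xc, yc = 2.0, 2.0               # bubble centre of the [0,4] x [0,4] domain

def phi0(x, y, R):
    """Initial order parameter of a static bubble of radius R (tanh profile above)."""
    r = np.sqrt((x - xc)**2 + (y - yc)**2)
    return -np.tanh((R - r) / (np.sqrt(2.0) * eps))

# Uniform structured grid of spacing 1/200, as used in the test
x = np.linspace(0.0, 4.0, 801)
X, Y = np.meshgrid(x, x)
phi = phi0(X, Y, R=0.25)        # one representative initial field

# Theoretical Laplace-Young pressure jumps for the tested radii and sigmas
for sigma in (0.5, 0.25, 0.05):
    for R in (0.2, 0.25, 0.3, 0.35, 0.4):
        print(f"sigma={sigma:5.2f}  R={R:4.2f}  dp = sigma/R = {sigma / R:6.3f}")
\end{verbatim}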
This agreement verifies the coupling between the Allen-Cahn and the Navier-Stokes equations at high density ratio. \subsection{Sloshing tank problem} To further validate the coupling, we test a standard sloshing tank problem. A rectangular computational domain $\Omega = [0,1] \times [0,1.5]$ is considered for the simulation. It is discretized with different resolutions characterized by the number of elements in the equilibrium interface thickness denoted by $N_\varepsilon$. The densities and viscosities of the two fluid phases are $\rho_1 = 1000$, $\rho_2 = 1$, $\mu_1 = 1$ and $\mu_2 = 0.01$ with $\boldsymbol{g} = (0, -1, 0)$, which also tests the ability of the solver to handle high density and viscosity ratios. The capillary effects due to surface tension have been neglected in this test case. The initial condition is defined as: \begin{align} \phi(x,y,0) = - \mathrm{tanh}\bigg( \frac{ y - (1.01 + 0.1\mathrm{sin}((x - 0.5)\pi)) }{\sqrt{2}\varepsilon}\bigg) \end{align} A slip boundary condition is set along the walls and a Dirichlet boundary condition is prescribed at the top boundary as $p=0$. The problem set-up is given in Fig. \ref{ST_1}(a) with the contour plot of $\phi$ of the initial condition in Fig. \ref{ST_1}(b). \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \qquad \begin{tikzpicture}[decoration={markings,mark=at position 1.0 with {\arrow{>}}},scale=5] \draw[fill=black!10] plot[smooth] coordinates {(0,0.91) (0.1,0.9149) (0.2,0.9291) (0.3,0.9512) (0.4,0.9791) (0.5,1.01) (0.6,1.0409) (0.7,1.0688) (0.8,1.0909) (0.9,1.1051) (1.0,1.11)} -- (1,0) -- (0,0) -- (0,0.91); \draw (0,0.91) -- (0,1.5)-- (1,1.5) -- (1,1.11); \draw (0.5,0.75) node[anchor=south](A){$\Omega_1$}; \node [below = 0.0cm of A]{$(\rho_1, \mu_1)$}; \draw (0.5,1.25) node[anchor=south](B){$\Omega_2$}; \node [below = 0.0cm of B]{$(\rho_2, \mu_2)$}; \draw (0.2,0.92) node[anchor=south]{$\Gamma$}; \draw[thick,postaction={decorate}] (0,0) to (0.2,0); \draw[thick,postaction={decorate}] (0,0) to (0,0.2); \draw (0.2,0) node[anchor=north]{X}; \draw (0,0.2) node[anchor=east]{Y}; \draw[postaction={decorate}] (1.05,0.5) to (1.05,1.5); \draw[postaction={decorate}] (1.05,0.5) to (1.05,0); \draw (1.01,0) -- (1.1,0); \draw (1.01,1.5) -- (1.1,1.5); \draw (1.05,0.75) node[anchor=west]{$1.5$}; \draw[postaction={decorate}] (0.5,1.55) to (1,1.55); \draw[postaction={decorate}] (0.5,1.55) to (0,1.55); \draw (0,1.51) -- (0,1.6); \draw (1,1.51) -- (1,1.6); \draw (0.5,1.55) node[anchor=south]{$1$}; \draw (0.5,0) node{}; \end{tikzpicture} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 1.0cm 9cm 0.2cm},clip,width=6.5cm]{ST_initial.eps} \caption{} \end{subfigure} \caption{Sloshing in a rectangular tank: (a) Schematic diagram showing the computational domain, (b) Contour plot of the order parameter $\phi$ at $t=0$. In (a), $\Omega_1$ and $\Omega_2$ are the two fluid phases with densities $\rho_1=1000$ and $\rho_2=1$, viscosities $\mu_1=1$, $\mu_2=0.01$ and acceleration due to gravity $\boldsymbol{g}=(0,-1,0)$ and the walls of the tank have slip boundary condition with $p=0$ at the upper boundary. } \label{ST_1} \end{figure} To have a detailed analysis of the problem, we perform a series of experiments to assess: (a) the effectiveness of the PPV technique, (b) the appropriate number of elements ($N_\varepsilon$) required in the equilibrium interface thickness, (c) a proper value of $\Delta t$ to obtain results with sufficient accuracy, and (d) the effect of $\varepsilon$ on the two-phase flow solution.
These convergence and sensitivity studies are hereby presented. The quantification of error in the following analysis is carried out by evaluating the $L^2$ error $e_3$ defined by \begin{align} e_3 = \frac{|| \Phi - \Phi_\mathrm{ref} ||_2}{|| \Phi_\mathrm{ref} ||_2}, \end{align} where $\Phi$ is the temporal solution of the interface elevation at the left boundary, $\Phi_\mathrm{ref}$ is the solution of the finest resolution associated with the respective study and $||\cdot||_2$ is the $L^2$ norm. \subsubsection{Effectiveness of the PPV technique} First, we emphasize the advantage of using the PPV-based technique for the phase-field equation. In the convection- and reaction-dominated regions, the linear stabilized variational formulation results in oscillations near the regions with high gradients, which can result in an unbounded solution. However, the order parameter $\phi$ solved by the Allen-Cahn equation needs to be bounded at all times to ensure that positive values of density and viscosity are transferred to the Navier-Stokes equations. The present variational technique detects the regions of high gradients depending on the residual of the equation and adds diffusion to those regions to eliminate such oscillations. Further detailed analyses and results in one- and two-dimensions can be found in \cite{PPV}. To demonstrate the effect of the PPV technique on the present problem, we simulate two test cases: (a) with the nonlinear PPV stabilization term and (b) without the nonlinear PPV term. $\Delta t = 0.001$ and $\varepsilon=0.01$ with $N_\varepsilon=3$ and $N_\varepsilon=4$ are considered for the tests. The minimum and maximum values of the solution of the interface evolution at the left boundary are evaluated to quantify the bounds of the numerical solution. The values are summarized in Table \ref{table_ppv_comp}. From the table, it is evident that the current formulation reduces the oscillations and preserves the boundedness and positivity in the solution. The solutions obtained are also compared in Fig. \ref{ppv_comp} for $N_\varepsilon=3$. The reference solution is the solution for $N_\varepsilon=8$, $\Delta t=0.001$ and $\varepsilon=0.01$. The percentage error for the non-PPV-based solution is $0.3548 \%$ and that for the PPV-based solution is $0.1905\%$. For the case when the nonlinear PPV term is not used, the solution is unbounded. However, for evaluating the density and viscosity values, we chop off the values larger than $1$ and smaller than $-1$ to get physical values for density and viscosity. We observe from Fig. \ref{ppv_comp} that this chopping leads to a less accurate solution compared to the reference solution, while the PPV-based solution is more accurate. \renewcommand{\arraystretch}{0.5} \begin{table}[H] \caption{Bounds in the solution of interface evolution at the left boundary for different techniques} \centering \begin{tabular}{ M{3cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} N } \hline \centering \textbf{$N_{\varepsilon}$} & \multicolumn{2}{c}{$\mathrm{min(\phi)}$} & \multicolumn{2}{c}{$\mathrm{max(\phi)}$} &\\[10pt] \hline \centering & non-PPV & PPV & non-PPV & PPV &\\[10pt] \hline \centering 3 & -1.0544 & -1.0 & 1.0154 & 0.9999 &\\[10pt] \hline \centering 4 & -1.0065 & -1.0 & 1.0023 & 0.9999 &\\[10pt] \hline \end{tabular} \label{table_ppv_comp} \end{table} \vspace{-0.5cm} \begin{figure}[H] \centering \includegraphics[trim={0cm 0.3cm 0cm 0cm},clip,width=10cm]{ppv_comp2.eps} \caption{Sloshing tank problem: Effect of using the PPV technique on the evolution of the interface.
$\varepsilon = 0.01$, $\Delta t=0.001$ and $N_\varepsilon=3$. The reference solution is obtained at $N_\varepsilon=8$.} \label{ppv_comp} \end{figure} \subsubsection{Effect of the number of elements in the equilibrium interfacial thickness ($N_\varepsilon$)} We systematically perform mesh convergence studies to quantify the resolution needed to obtain an accurate solution. The mesh resolution is characterized by $N_\varepsilon$. Figure \ref{ST_N_eps} shows the evolution of the interface at the left boundary of the domain for different $N_\varepsilon$ with fixed $\varepsilon=0.01$ and $\Delta t=0.001$. The error is quantified in Table \ref{table_N_eps} with the reference solution taken as that with $N_\varepsilon = 8$. \vspace{-0.25cm} \renewcommand{\arraystretch}{0.5} \begin{table}[H] \caption{Error ($e_3$) in the solution for different $N_\varepsilon$} \centering \begin{tabular}{ M{3cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} N } \hline \centering \textbf{$N_{\varepsilon}$}$\rightarrow$ & $3$ & $4$ & $5$ & $6$ &\\[10pt] \hline \centering $e_3 (\times 10^{-3})$ & 2.491 & 1.094 & 0.564 & 0.208 &\\[10pt] \hline \end{tabular} \label{table_N_eps} \end{table} \vspace{-0.5cm} We conclude that $N_\varepsilon$ of 4 or 5 is sufficient to capture the interface. Increasing $N_\varepsilon$ gives a slightly more accurate solution, but the difference is very small; therefore, $N_\varepsilon$ of 4 or 5 is a good compromise between accuracy and computational cost. \vspace{-0.5cm} \begin{figure}[H] \centering \includegraphics[trim={0cm 0.3cm 0cm 0cm},clip,width=10cm]{mesh_conv2.eps} \caption{Sloshing tank problem: Effect of $N_\varepsilon$ on the evolution of the interface. $\varepsilon = 0.01$ and $\Delta t=0.001$ are kept constant.} \label{ST_N_eps} \end{figure} \subsubsection{Effect of the time step size ($\Delta t$)} To observe the effect of $\Delta t$ on the solution, we plot the temporal convergence in Fig. \ref{ST_dt} with $N_\varepsilon = 4$ and $\varepsilon = 0.01$. Time step sizes between $0.1$ and $0.001$ are chosen. The plot shows that the difference in the solution is noticeable until $\Delta t=0.01$, after which further reduction in the time step size has little effect on the accuracy of the solution. The error is shown in Table \ref{table_dt} with the solution at $\Delta t=0.001$ as the reference solution. \vspace{-0.25cm} \renewcommand{\arraystretch}{0.5} \begin{table}[H] \caption{Error ($e_3$) in the solution for different $\Delta t$} \centering \begin{tabular}{ M{3cm} M{1cm} M{1cm} M{1cm} M{1cm} M{1cm} M{1cm} M{1cm} M{1cm} N } \hline \centering \textbf{$\Delta t$}$\rightarrow$ & $0.1$ & $0.08$ & $0.06$ & $0.04$ & $0.02$ & $0.01$ & $0.008$ & $0.004$ &\\[10pt] \hline \centering $e_3 (\times 10^{-3})$ & 10.839 & 6.959 & 3.759 & 1.418 & 0.457 & 0.478 & 0.474 & 0.324 &\\[10pt] \hline \end{tabular} \label{table_dt} \end{table} \vspace{-0.5cm} \begin{figure}[H] \centering \includegraphics[trim={0cm 0.3cm 0cm 0cm},clip,width=10cm]{time_conv2.eps} \caption{Sloshing tank problem: Effect of $\Delta t$ on the evolution of the interface. $\varepsilon = 0.01$ and $N_\varepsilon=4$ are kept constant.} \label{ST_dt} \end{figure} \subsubsection{Effect of the interfacial thickness parameter ($\varepsilon$)} The parameter $\varepsilon$ represents the thickness of the interface. The limit $\varepsilon \to 0$ gives the sharp-interface limit.
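To make the cost implication of $\varepsilon$ concrete, the following sketch (an illustrative estimate only, assuming a uniform structured mesh of the $[0,1]\times[0,1.5]$ tank) relates $\varepsilon$, the equilibrium interface thickness $\varepsilon_\mathrm{eqm}=4.164\varepsilon$ and $N_\varepsilon$ to the required element size and element count.
\begin{verbatim}
# Illustrative estimate (not from the solver): element size and element count
# needed to keep N_eps elements across the equilibrium interface thickness
# eps_eqm = 4.164*eps on a uniform mesh of the [0,1] x [0,1.5] sloshing tank.
Lx, Ly = 1.0, 1.5
N_eps = 4                                   # elements across the interface

for eps in (0.02, 0.01, 0.005):
    eps_eqm = 4.164 * eps                   # equilibrium interface thickness
    h = eps_eqm / N_eps                     # required element size
    n_elems = int(Lx / h) * int(Ly / h)     # uniform-mesh element count
    print(f"eps={eps:6.3f}  h={h:8.5f}  elements ~ {n_elems}")
\end{verbatim}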
Although decreasing $\varepsilon$ will indeed give more accurate representation of the interface, it will increase the computational cost due to the requirement of large number of elements in the equilibrium interfacial thickness region. Figure \ref{ST_eps} shows the evolution of the interface for different $\varepsilon$ values with fixed $\Delta t=0.01$ and $N_\varepsilon = 4$. The error is quantified in Table \ref{table_eps} with the solution at $\varepsilon=0.005$ taken as the reference solution. A selection of $\varepsilon=0.01$ is a good compromise between accuracy and cost of computation. \vspace{-0.25cm} \renewcommand{\arraystretch}{0.5} \begin{table}[H] \caption{Error ($e_3$) in the solution for different $\varepsilon$} \centering \begin{tabular}{ M{3cm} M{2cm} M{2cm} N } \hline \centering \textbf{$\varepsilon$}$\rightarrow$ & $0.02$ & $0.01$ &\\[10pt] \hline \centering $e_3 (\times 10^{-3})$ & 5.469 & 1.269 &\\[10pt] \hline \end{tabular} \label{table_eps} \end{table} \vspace{-0.5cm} \begin{figure}[H] \centering \includegraphics[trim={0cm 0.3cm 0cm 0cm},clip,width=10cm]{eps_conv2.eps} \caption{Sloshing tank problem: Effect of $\varepsilon$ on the evolution of the interface. $\Delta t=0.01$ and $N_\varepsilon=4$ are kept constant.} \label{ST_eps} \end{figure} \vspace{-0.5cm} Finally, we compare and validate the results obtained from the current simulation considering $N_\varepsilon=4$, $\Delta t=0.01$ and $\varepsilon=0.01$ with those obtained using XFEM in \cite{Fries_1}. The method in \cite{Fries_1} employs the local enrichment of the pressure interpolation shape functions and solves the level-set equation to capture the interface with reinitialization procedure. The present method is much simpler to implement without performing any reinitialization or geometric reconstruction. From Fig. \ref{ST_comp}, we observe that sufficiently accurate results can be obtained by applying the diffuse-interface approach. Furthermore, the proposed algorithm consisting of single-pass explicit partitioned staggered coupling discussed in Section \ref{imp_details} helps to reduce the computational time without compromising with the accuracy and stability of the solution. For demonstrating the ability of the solver to handle topological changes of the interface, we simulate the two- and three-dimensional dam break problem for unstructured meshes. \vspace{-0.25cm} \begin{figure}[H] \centering \includegraphics[trim={0cm 0.3cm 0cm 0cm},clip,width=10cm]{comp_ref2.eps} \caption{Comparison of the solution considering $N_\varepsilon = 4$, $\Delta t=0.01$ and $\varepsilon = 0.01$ with XFEM-based level set approach in \cite{Fries_1}.} \label{ST_comp} \end{figure} \subsection{Two-dimensional dam break problem} In this subsection, we test the two-dimensional dam break problem to assess the ability of the solver to handle breaking and merging of the interface for the two-phase flow problem. A rectangular computational domain $[0,0.584]\times [0,0.438]$ shown in Fig. \ref{db_2D}(a) is considered for the present study. A water column of size $0.146 \times 0.292$ units is placed at the left boundary of the domain at time $t=0$. 
\begin{figure}[H] \centering\hspace{-0.6cm} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[decoration={markings,mark=at position 1.0 with {\arrow{>}}},scale=11] \draw (0,0) -- (0.584,0)-- (0.584,0.438) -- (0,0.438) -- cycle; \draw[fill={rgb:black,1;white,2}, fill opacity=0.4] (0,0)--(0.146,0)--(0.146,0.252) arc (0:90:0.04)--(0,0.292)--(0,0); \draw (0.073,0.146) node(A){$\Omega_1$}; \draw (0.292,0.146) node{$\Omega_2$}; \draw[thick,postaction={decorate}] (0,0) to (0.05,0); \draw[thick,postaction={decorate}] (0,0) to (0,0.05); \draw (0.05,0) node[anchor=north]{X}; \draw (0,0.05) node[anchor=east]{Y}; \draw[postaction={decorate}] (0.624,0.219) to (0.624,0.438); \draw[postaction={decorate}] (0.624,0.219) to (0.624,0); \draw (0.594,0) -- (0.654,0); \draw (0.594,0.438) -- (0.654,0.438); \draw (0.624,0.219) node[anchor=west]{$0.438$}; \draw[postaction={decorate}] (0.292,0.478) to (0.584,0.478); \draw[postaction={decorate}] (0.292,0.478) to (0,0.478); \draw (0,0.448) -- (0,0.508); \draw (0.584,0.448) -- (0.584,0.508); \draw (0.292,0.478) node[anchor=south]{$0.584$}; \draw[postaction={decorate}] (0.166,0.146) to (0.166,0); \draw[postaction={decorate}] (0.166,0.146) to (0.166,0.292); \draw (0.156,0.292) -- (0.176,0.292); \draw (0.166,0.146) node[anchor=west]{$b$}; \draw[postaction={decorate}] (0.073,0.312) to (0.146,0.312); \draw[postaction={decorate}] (0.073,0.312) to (0,0.312); \draw (0.146,0.302) -- (0.146,0.322); \draw (0.073,0.312) node[anchor=south]{$a$}; \end{tikzpicture} \caption{} \end{subfigure}\hspace{0.5cm \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=7.25cm]{db_2D_0_new.eps} \caption{} \end{subfigure} \caption{Two-dimensional dam break problem: (a) schematic diagram showing the computational domain, and (b) the contour plot of the order parameter $\phi$ at $t=0$. In (a), $\Omega_1$ and $\Omega_2$ are the two phases with $\rho_1=1000$, $\rho_2=1$, $\mu_1=10^{-3}$ and $\mu_2=10^{-5}$, acceleration due to gravity is taken as $\mathbf{g}=(0,-9.81,0)$ and slip boundary condition is imposed on all the boundaries.} \label{db_2D} \end{figure} The initial condition is given by: \begin{align} \phi(x,y,0) = \begin{cases} -\mathrm{tanh}\big( \frac{y-b}{\sqrt{2}\varepsilon} \big),\ \mathrm{for}\ x \leq (a - r), y \geq (b-r) \\ \\ -\mathrm{tanh}\big( \frac{x-a}{\sqrt{2}\varepsilon} \big),\ \mathrm{for}\ x > (a-r), y < (b-r) \\ \\ \mathrm{tanh}\big( \frac{r - \sqrt{(x-(a-r))^2 + (y-(b-r))^2} }{\sqrt{2}\varepsilon} \big),\ \mathrm{for}\ x \geq (a-r), y \geq (b-r) \\ \\ 1,\ \mathrm{elsewhere} \end{cases} \end{align} where $a=0.146$, $b=0.292$ and $r=0.04$ are the width, height and the radius of the curve of the water column respectively, $\phi=1$ and $\phi=-1$ correspond to the order parameter on $\Omega_1$ and $\Omega_2$ respectively. A non-uniform mesh consisting of about 13,500 nodes and 13,300 four node quadrilaterals is employed for the study. The interfacial thickness parameter $\varepsilon$ was selected as $0.005$. The total computational time for simulating $1000$ time steps with step size of $\Delta t=0.001$ was $0.453$ hour on $4$ CPUs. The interface location at the left and bottom boundaries are tracked with time for validation with experiments \cite{Martin_Moyce, Ubbink_thesis} and interface tracking simulation \cite{Walhorn_thesis}. The temporal variation of the interface location based on the non-dimensional water column width/height is compared with the results from the literature in Fig. \ref{db_2D_val} where a good agreement is found. 
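For reference, the piecewise initial condition above can be assembled on a grid as in the following sketch (a vectorized NumPy illustration, independent of the finite element solver; the sampling grid is only an example).
\begin{verbatim}
import numpy as np

a, b, r, eps = 0.146, 0.292, 0.04, 0.005   # column width, height, corner radius

def phi0(x, y):
    """Piecewise initial order parameter of the 2D dam break (vectorized)."""
    s2e = np.sqrt(2.0) * eps
    corner = np.sqrt((x - (a - r))**2 + (y - (b - r))**2)
    conds = [
        (x <= a - r) & (y >= b - r),        # above the column, flat top
        (x >  a - r) & (y <  b - r),        # right of the column, flat side
        (x >= a - r) & (y >= b - r),        # rounded corner region
    ]
    vals = [
        -np.tanh((y - b) / s2e),
        -np.tanh((x - a) / s2e),
         np.tanh((r - corner) / s2e),
    ]
    return np.select(conds, vals, default=1.0)   # phi = 1 in the bulk water

# Example: sample the field on a uniform grid of the tank
x = np.linspace(0.0, 0.584, 293)
y = np.linspace(0.0, 0.438, 220)
X, Y = np.meshgrid(x, y)
phi = phi0(X, Y)
\end{verbatim}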
The evolution of the height of the water column is captured very well. However, the expansion of the water column (in width) in the experiment is slower than what is predicted by the simulation. This delay has been pointed out in \cite{Fries_1} to be due to the time required to remove the partition which holds the water column in its initial profile in the experiment. \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{x_a.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{y_b.eps} \caption{} \end{subfigure} \caption{Two-dimensional dam break problem: temporal evolution of non-dimensional water column (a) width and (b) height. The results are in good agreement with the literature.} \label{db_2D_val} \end{figure} The profiles of the interface evolution are shown in Fig. \ref{db_2D_profile}. They are in good agreement with the profiles obtained in the literature. Some variations can be found which may be due to the closed domain condition in the numerical study. \begin{figure}[H] \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=5cm]{db_2Dwo_0_b.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=5cm]{db_2Dwo_0_2_b.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=5cm]{db_2Dwo_0_4_b.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=5cm]{db_2Dwo_0_6_b.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=5cm]{db_2Dwo_0_8_b.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width=5cm]{db_2Dwo_1_b.eps} \caption{} \end{subfigure} \caption{Interface profiles for two-dimensional dam break problem: (a) $t=0.0$, (b) $t=0.2$, (c) $t=0.4$, (d) $t=0.6$, (e) $t=0.8$ and (f) $t=1.0$. The profile shown depicts the contour of the order parameter cut off below $0$, i.e., it only shows the domain consisting of water.} \label{db_2D_profile} \end{figure} \subsection{Three-dimensional dam break with obstacle} To further assess the solver, we simulate the three-dimensional dam break problem with a rectangular obstacle. The computational domain is a cuboid of size $[0,3.22] \times [0,1] \times [0,1]$ with a water column of size $1.228 \times 0.55 \times 1$ units. The rectangular cuboid obstacle is of size $0.161 \times 0.161 \times 0.403$. The dimensions of the domain are given in Fig. \ref{db_3D}. To extract the pressure for comparison with the experiment, two probes are placed at the following locations (with respect to the axis orientation given in Fig. \ref{db_3D}): \begin{itemize} \item $(2.3955, 0.021, -0.5255)$: Pressure value (P1) \item $(2.3955, 0.061, -0.5255)$: Pressure value (P2) \end{itemize} The notation in the parentheses is the name of the probe in the experiment conducted by MARIN. To obtain the variation in the water level (or interface), the height of the water is tracked at two locations, viz., $(X,Z)=(2.724,-0.5)$ and $(X,Z)=(2.228,-0.5)$ corresponding to the H1 and H2 probes of the experiment, respectively. The experimental data are taken from \cite{Issa}. The computational domain is discretized into $430,000$ nodes with $2.5$ million tetrahedral elements.
The interfacial thickness parameter is chosen as $\varepsilon=0.02$. The total computational time for the simulation of $1200$ time steps with $\Delta t=0.005$ was $2.2$ hours on $24$ CPUs. The comparison between the results obtained from the present simulation and the experiment is depicted in Fig. \ref{db_3D_val_p} and \ref{db_3D_val_h}. The change in the iso-contours of water $(\phi>0)$ have been plotted in Fig. \ref{db_3D_visual} which shows a good qualitative agreement with the results from the literature \cite{Elias,Akkerman_LS,Kleefsman}. \begin{figure}[!htbp] \centering \begin{subfigure}[b]{\textwidth} \qquad \begin{tikzpicture}[decoration={markings,mark=at position 1.0 with {\arrow{>}}},scale=2.5,every node/.style={scale=0.833}] \draw (0,0) -- (0,1)-- (4,2) -- (4,1) -- cycle; \draw (0,0) -- (-1,0.5) -- (-1,1.5) -- (0,1); \draw (-1,1.5) -- (3,2.5) -- (4,2); \draw[dashed] (3,2.5) -- (3,1.5)--(4,1); \draw[dashed] (3,1.5)--(-1,0.5); \draw[fill={rgb:black,1;white,2}, fill opacity=0.4] (0,0)--(-1,0.5)--(-1,1)--(0,0.5)--(0,0); \draw[fill={rgb:black,1;white,2}, fill opacity=0.4] (0,0)--(0,0.5)--(1.2,0.8)--(1.2,0.3)--(0,0); \draw[fill={rgb:black,1;white,2}, fill opacity=0.4] (0,0.5)--(-1,1)--(0.2,1.3)--(1.2,0.8)--(0,0.5); \draw (2.4,0.8)--(2.4,1)--(2.6,1.05)--(2.6,0.85)--(2.4,0.8); \draw (2.4,0.8)--(2,1)--(2,1.2)--(2.4,1); \draw (2,1.2)--(2.2,1.25)--(2.6,1.05); \draw[dotted] (-1,0.5)--(-1.3,0.425); \draw[dotted] (0,0)--(-0.3,-0.075); \draw[<->] (-0.25,-0.0625) -- (-1.25,0.4375)node[midway,sloped,above]{$1.0$}; \draw[dotted] (0,0)--(0.5,-0.25); \draw[dotted] (4,1)--(4.5,0.75); \draw[<->] (0.5,-0.25) -- (4.5,0.75)node[midway,sloped,above]{$3.22$}; \draw[dotted] (4,2)--(4.2,2.05); \draw[dotted] (4,1)--(4.2,1.05); \draw[<->] (4.1,2.025) -- (4.1,1.025)node[midway,sloped,above]{$1.0$}; \draw[dotted] (1.2,0.8)--(1.2,1.0); \draw[<->] (0,0.6) -- (1.2,0.9)node[midway,sloped,above]{$1.228$}; \draw[dotted] (1.2,0.8)--(1.35,0.8375); \draw[<->] (1.3,0.825) -- (1.3,0.325)node[midway,sloped,above]{$0.55$}; \draw[dotted] (2.4,0.8)--(2.6,0.7); \draw[<->] (2.5,0.75) -- (2.7,0.8) node[midway,sloped,below]{$0.161$}; \draw[dotted] (2.6,1.05)--(2.75,1.0875); \draw[dotted] (2.6,0.85)--(2.75,0.88); \draw[<->] (2.7,1.075) -- (2.7,0.875)node[midway,sloped,above]{$0.161$}; \draw[dotted] (2.4,0.8)--(2.25,0.7625); \draw[dotted] (2,1)--(1.85,0.9625); \draw[<->] (1.9,0.975) -- (2.3,0.775)node[midway,sloped,below]{$0.403$}; \draw[dotted] (2.6,0.85) -- (3.2,0.55); \draw[<->] (0.25,-0.125) -- (3.1,0.6)node[midway,sloped,above]{$2.5565$};; \draw (0.6,0.34) node{$\Omega_1$}; \draw (1,1.5) node{$\Omega_2$}; \draw[thick,postaction={decorate}] (0,0) to (0.2,0.05); \draw[thick,postaction={decorate}] (0,0) to (0,0.2); \draw[thick,postaction={decorate}] (0,0) to (0.2,-0.1); \draw (0.2,0.2) node[anchor=north]{X}; \draw (0.15,0.2) node[anchor=east]{Y}; \draw (0.3,-0.04) node[anchor=east]{Z}; \end{tikzpicture} \end{subfigure} \caption{Schematic diagram showing the computational domain for three-dimensional dam break with obstacle. 
$\Omega_1$ and $\Omega_2$ are the two phases with $\rho_1=1000$, $\rho_2=1.225$, $\mu_1=1.002\times 10^{-3}$ and $\mu_2=1.983\times 10^{-5}$, acceleration due to gravity is taken as $\mathbf{g}=(0,-9.81,0)$ and a slip boundary condition is imposed on all the boundaries.} \label{db_3D} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{P1_1.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{P2_1.eps} \caption{} \end{subfigure} \caption{Three-dimensional dam break problem: temporal evolution of the pressure at the probe points: (a) P1 and (b) P2. The results are in good agreement with the literature.} \label{db_3D_val_p} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{H1_1.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={10cm 0 11.8cm 0cm},clip,width=7cm]{H2_1.eps} \caption{} \end{subfigure} \caption{Three-dimensional dam break problem: temporal evolution of the height at the probe points: (a) H1 and (b) H2. The results are in good agreement with the literature.} \label{db_3D_val_h} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 2.5cm 0.2cm 0.2cm},clip,width=7cm]{0_125.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 2.5cm 0.2cm 0.2cm},clip,width=7cm]{0_75.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 2.5cm 0.2cm 0.2cm},clip,width=7cm]{1_125.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 2.5cm 0.2cm 0.2cm},clip,width=7cm]{2.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 2.5cm 0.2cm 0.2cm},clip,width=7cm]{2_5.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 2.5cm 0.2cm 0.2cm},clip,width=7cm]{3.eps} \caption{} \end{subfigure} \caption{Three-dimensional dam break problem: temporal evolution of the iso-contours of water $(\phi>0)$ at time: (a) $0.125$ s, (b) $0.75$ s, (c) $1.125$ s, (d) $2$ s, (e) $2.5$ s and (f) $3$ s.} \label{db_3D_visual} \end{figure} These tests confirm the ability of the solver to capture the air-water interface with topological changes. After assessing the present formulation for the Allen-Cahn equation and its coupling with the Navier-Stokes equations, we next demonstrate its application to the wave-structure interaction problem. \section{Application to wave-structure interaction} \label{RU_test} Before applying the phase-field two-phase solver to the wave run-up problem, we first briefly present the background theory of ocean waves. The propagation of free-surface waves is a very complex phenomenon due to the irregular nature of the waves and nonlinear effects, and it is very challenging to develop an extensive mathematical formulation to predict it. The simplest model is the linear or first-order wave theory, where it is assumed that the fluid is incompressible, inviscid and irrotational, the density is uniform throughout the fluid, and the waves are planar and monochromatic. The nonlinear boundary conditions at the free-surface are linearized to further simplify the model \cite{Sorensen, Chakrabarti}.
A more accurate model involving the nonlinearities of the wave phenomenon is the second-order Stokes wave theory developed in \cite{Stokes}. The notations used in the description of the wave theory are shown in Fig. \ref{notations_WT}. Apart from the notations shown, $T$ is the time period of the wave, $\omega = 2\pi/T$ is the angular frequency and $k=2\pi/\lambda$ is the wave number. We summarize the results obtained by the second-order Stokes wave theory. The free-surface profile of the wave is given as \begin{align} \label{interface_profile} \eta(x,t) = \frac{H}{2}\mathrm{cos}(kx-\omega t) + \frac{\pi H^2}{8\lambda}\frac{(\mathrm{cosh}(kd))(2+\mathrm{cosh}(2kd))}{\mathrm{sinh}^3(kd)} \mathrm{cos}[2(kx-\omega t)]. \end{align} The horizontal and vertical components of the velocity to generate the above profile are given as \begin{align} u(x,z,t) &= \frac{Hgk}{2\omega}\frac{\mathrm{cosh}[k(d+z)]}{\mathrm{cosh}(kd)}\mathrm{cos}(kx-\omega t) + \frac{3H^2\omega k}{16}\frac{\mathrm{cosh}[2k(d+z)]}{\mathrm{sinh}^4(kd)}\mathrm{cos}[2(kx-\omega t)], \label{u_eqn}\\ w(x,z,t) &= \frac{Hgk}{2\omega}\frac{\mathrm{sinh}[k(d+z)]}{\mathrm{cosh}(kd)}\mathrm{sin}(kx-\omega t) + \frac{3H^2\omega k}{16}\frac{\mathrm{sinh}[2k(d+z)]}{\mathrm{sinh}^4(kd)}\mathrm{sin}[2(kx-\omega t)]. \label{w_eqn} \end{align} \begin{figure}[H] \centering \begin{tikzpicture}[decoration={markings,mark=at position 1.0 with {\arrow[scale=1]{>}}},every node/.style={scale=0.9},scale=0.9] \draw[-,black] (-2,0) node[anchor=north,black]{} to (12,0); \draw[-,line width=2pt,black] (-2,-4) node[anchor=north,black]{} to (12,-4); \draw (1,-1) cos (2,0) sin (3,1) cos (4,0) sin (5,-1) cos (6,0) sin(7,1) cos(8,0) sin(9,-1); \draw (11,0) node(A){}; \node[above=0.35cm of A,draw,minimum size=0.1cm, regular polygon, regular polygon sides=3, rotate=180] (11,0.5){}; \draw[line width=1.5pt,postaction={decorate}] (-1,0) to (0.5,0); \draw[line width=1.5pt,postaction={decorate}] (-1,0) to (-1,1.5); \draw (0.5,0) node[anchor=north]{X}; \draw (-1,1.5) node[anchor=east]{Z}; \draw[->] (11.5,-2) -- (11.5,0); \draw[->] (11.5,-2) -- (11.5,-4); \draw (11.5,-2) node[anchor=west] {$d$}; \draw[->] (7,0.5) -- (7,1); \draw[->] (7,0.5) -- (7,0); \draw (7,0.5) node[anchor=west]{$A$}; \draw[->] (10,0) -- (10,1); \draw[->] (10,0) -- (10,-1); \draw[-,dotted] (5,-1) -- (10,-1); \draw[-,dotted] (7,1) -- (10,1); \draw (10.25,0.1) node[anchor=south]{$H$}; \draw[->] (4,-2) -- (2,-2); \draw[->] (4,-2) -- (6,-2); \draw[-,dotted] (2,0) -- (2,-2); \draw[-,dotted] (6,0) -- (6,-2); \draw (4,-2) node[anchor=north]{$\lambda$}; \draw (3,1) node[anchor=south]{$\eta(x,t)$}; \node[right=1cm of A] {Mean water level}; \draw (5,-4) node[anchor=south]{Wall}; \draw[->] (3,2) -- (4,2) ; \draw (3.5,2) node[anchor=south]{Direction of wave motion}; \end{tikzpicture} \caption{Notations and coordinate system in the description of the second-order Stokes wave theory: $\eta(x,t)$ is the profile of the free-surface, $d$ is the water depth, $A$ is the amplitude of the wave, $H=2A$ is the height of the wave and $\lambda$ is the wavelength.} \label{notations_WT} \end{figure} With the description of the second-order Stokes waves, we set up a numerical wave tank to demonstrate the wave run-up problem across a vertical truncated cylinder. The vertical truncated cylinder is one of the most common structural members in many offshore structures, e.g., gravity-based structures, semi-submersibles and tension-leg platforms. 
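For completeness, the second-order Stokes kinematics of Eqs.~(\ref{interface_profile})--(\ref{w_eqn}) can be evaluated with the short Python sketch below. The wave height used here is an illustrative placeholder, while the depth and wavelength anticipate the wave-tank setup described in the next subsection.
\begin{verbatim}
import numpy as np

# Second-order Stokes wave kinematics (free surface eta and velocities u, w).
# H is an illustrative placeholder; d and lam follow the wave-tank setup below.
g = 9.81
d, lam, H = 1.2, 3.0208, 0.1
k = 2.0 * np.pi / lam
T = np.sqrt(2.0 * np.pi * lam / (g * np.tanh(k * d)))  # finite-depth dispersion
omega = 2.0 * np.pi / T

def eta(x, t):
    """Free-surface elevation eta(x, t)."""
    c1 = 0.5 * H * np.cos(k * x - omega * t)
    c2 = (np.pi * H**2 / (8.0 * lam)) \
         * np.cosh(k * d) * (2.0 + np.cosh(2.0 * k * d)) / np.sinh(k * d)**3 \
         * np.cos(2.0 * (k * x - omega * t))
    return c1 + c2

def velocity(x, z, t):
    """Horizontal (u) and vertical (w) velocity components at elevation z."""
    u = (H * g * k / (2.0 * omega)) * np.cosh(k * (d + z)) / np.cosh(k * d) \
        * np.cos(k * x - omega * t) \
        + (3.0 * H**2 * omega * k / 16.0) * np.cosh(2.0 * k * (d + z)) \
        / np.sinh(k * d)**4 * np.cos(2.0 * (k * x - omega * t))
    w = (H * g * k / (2.0 * omega)) * np.sinh(k * (d + z)) / np.cosh(k * d) \
        * np.sin(k * x - omega * t) \
        + (3.0 * H**2 * omega * k / 16.0) * np.sinh(2.0 * k * (d + z)) \
        / np.sinh(k * d)**4 * np.sin(2.0 * (k * x - omega * t))
    return u, w
\end{verbatim}
Such helper functions can be used, for instance, to prescribe the inlet velocity of a numerical wave tank or to compare a computed surface elevation against the analytical profile.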
The wave run-up on a submerged structure is measured by the run-up ratio $R/A$, defined as the ratio of the highest free-surface elevation of the fluid (water in this case) at the front face of the cylinder to the amplitude of the incident wave. The run-up depends on two crucial non-dimensional parameters: the wave steepness $kA$ and the wave scattering parameter $ka$, which are given by \begin{align} kA = \frac{2\pi A}{\lambda},\\ ka = \frac{2\pi a}{\lambda}, \end{align} where $A$ is the amplitude of the incident wave, $a$ is half of the cross-sectional width of the submerged structure (radius of the cylinder in this case) and $\lambda$ is the wavelength of the incoming wave. If the steepness of the incoming wave is increased ($kA$ is increased) or if the cross-sectional width of the structure is increased ($ka$ is increased), leading to more resistance to the flow, the run-up ratio increases. The quantification of the run-up ratio with $kA$ and $ka$ is of particular interest in ocean engineering. Some of the experimental and numerical works dealing with the run-up ratio of different cross-sectional structures are discussed in \cite{Morris_thesis, Thiagarajan}. For the present demonstration, we consider a truncated circular cylinder as the structure being impinged by the incoming waves. We keep the parameter $ka$ constant and quantify the run-up on the cylinder by varying $kA$ values. The computational setup is similar to that employed in \cite{Thiagarajan} and is depicted as a schematic in Fig. \ref{RU}. The size of the computational domain is $24$m$\times$$2$m$\times$$2$m, i.e., $L=24$m and $W=2$m. The depth of the water is $d=1.2$m and the draft of the submerged cylinder is $0.4$m. The diameter of the cylinder is $2a=0.2$m and its total height is $h_c = 0.8$m. The centre of the cylinder is at a distance of $L_c=3.6$m from the left boundary. The boundary conditions employed in the simulation are as follows: a velocity inlet according to Eqs.~(\ref{u_eqn}) and (\ref{w_eqn}) to simulate Stokes waves at the left boundary, a stress-free boundary condition with zero pressure at the top boundary and a slip boundary condition at the two sides at $y=-1$m and $y=1$m. All other boundaries (bottom and right) have a no-slip boundary condition. The order parameter $\phi$ is initialized with the interface at $z=0$m, with the $\Omega_1$ phase depicting water ($\rho_1=1000, \mu_1=1.002\times 10^{-3}$) and $\Omega_2$ representing air ($\rho_2=1.225, \mu_2=1.983\times 10^{-5}$). The interface thickness parameter for the Allen-Cahn solver is taken as $\varepsilon=0.02$. Initial numerical tests suggested that employing a much sharper interface by taking a smaller $\varepsilon$ does not affect the solution for this large scale problem. The time-step size is taken as $\Delta t=0.0025$ and the total number of time steps is $5000$. The capillary effects due to surface tension have been neglected for the modeling of the free-surface waves.
\begin{figure}[H] \centering \begin{tikzpicture}[very thick,decoration={markings,mark=at position 0.5 with {\arrow{>}}},,every node/.style={scale=1.0},scale=1.0] \draw[fill=black!10] (0,3.5) -- (1,4.5) -- (13,4.5) -- (12,3.5) -- (0,3.5); \draw[fill=black!10] (0,0) -- (0,3.5) -- (12,3.5) -- (13,4.5) -- (13,1) -- (12,0); \draw (0,0) node[left]{} -- (0,6) node[right]{} -- (12,6) node[above]{} -- (12,0) node[above]{} -- cycle; \draw[black,dotted] (1,1) to (1,7); \draw[black] (1,7) to (13,7); \draw[black] (13,7) to (13,1); \draw[black,dotted] (13,1) to (1,1); \draw[black] (0,6) to (1,7); \draw[black] (12,6) to (13,7); \draw[black] (12,0) to (13,1); \draw[black,dotted] (0,0) to (1,1); \draw[->](-2.5,4) to (-0.5,4); \draw (-1.5,4) node[anchor=north]{$\mathrm{u}=u(x,z,t)$}; \draw (-1.5,3.5) node[anchor=north]{$\mathrm{v}=0$}; \draw (-1.5,3) node[anchor=north]{$\mathrm{w}=w(x,z,t)$}; \draw (4,6) node[anchor=south]{$\sigma_\mathrm{xx}=0,$}; \draw (5.7,6) node[anchor=south]{$\sigma_\mathrm{zx}=0,$}; \draw (7.4,6) node[anchor=south]{$\sigma_\mathrm{yx}=0,$}; \draw (8.8,6) node[anchor=south]{$p=0$}; \draw (13.2,1) to (13.6,1); \draw (13.2,4.5) to (13.6,4.5); \draw[<->] (13.4,1) to (13.4,4.5); \draw (13.4,2.75) node[anchor=west]{$d$}; \draw (3.1,5) to (3.3,5); \draw (3.1,3) to (3.3,3); \draw[<->] (3.2,5) to (3.2,3); \draw (3.2,4) node[anchor=west]{$h_c$}; \draw (2.5,5) -- (2.5,4); \draw[dotted] (2.5,4) -- (2.5,3); \draw[dotted] (3,3) -- (3,4); \draw (3,4) -- (3,5); \draw (2.75,5) ellipse (0.26cm and 0.13cm); \draw[dotted] (2.75,3) ellipse (0.26cm and 0.13cm); \draw (2.75,4) ellipse (0.26cm and 0.13cm); \draw[->] (2,2.7) to (2.5,2.7); \draw[->] (3.5,2.7) to (3,2.7); \draw (2.5,2.8) -- (2.5,2.5); \draw (3,2.8) -- (3,2.5); \draw (2.75,2.6) node[]{$2a$}; \draw (6,3) node[anchor=north]{$\Omega_1$}; \draw (6,2) node[anchor=south]{$(\rho_1, \mu_1)$}; \draw (6,5.7) node[anchor=north]{$\Omega_2$}; \draw (6,4.7) node[anchor=south]{$(\rho_2, \mu_2)$}; \draw (0,-0.2) to (0,-1.2);; \draw (12,-0.2) to (12,-1.2); \draw[<->] (0,-1) to (12,-1); \draw (6,-1) node[anchor=north]{$L$}; \draw[<->] (0,-0.4) to (2.75,-0.4); \draw (2.75,-0.2) to (2.75,-0.6); \draw (1.375,-0.4) node[anchor=north]{$L_c$}; \draw (12.2,0) to (12.6,0); \draw[<->] (12.4,0) to (13.4,1); \draw (12.9,0.4) node[anchor=west]{$W$}; \draw[->,red] (0.5,4) to (1.5,4); \draw (1.5,4) node[anchor=west,red]{X}; \draw[->,red] (0.5,4) to (0.5,5); \draw (0.5,5) node[anchor=west,red]{Z}; \draw[->,red] (0.5,4) to (1.2,4.7); \draw (1.2,4.7) node[anchor=west,red]{Y}; \end{tikzpicture} \caption{A schematic of the wave-structure problem. The computational setup and boundary conditions are shown for the Navier-Stokes equations. Here, $\boldsymbol{u} = (\mathrm{u},\mathrm{v},\mathrm{w})$ denotes the components of the fluid velocity which are given by Eqs.~(\ref{u_eqn}) and (\ref{w_eqn}) with $\mathrm{v}=0$. Stress-free boundary condition with zero pressure is given at the top surface of the wave tank. Slip boundary condition is imposed at the X-Z plane at $y=-1$ and $y=1$. All other boundaries have the no-slip condition. Moreover, zero Neumann boundary condition is imposed for the order parameter $\phi$ on all the boundaries.} \label{RU} \end{figure} Based on the experimental campaign in \cite{Morris_thesis}, we select the $ka$ value of $0.208$ which gives a wavelength of $\lambda=3.0208$m. For the Stokes waves, the time period is a function of wavelength as $\lambda = \frac{gT^2}{2\pi}\mathrm{tanh}\big(\frac{2\pi d}{\lambda}\big)$, which gives $T = 1.40045$s. 
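These values can be cross-checked directly from $ka$ and the dispersion relation; a minimal check (assuming $g=9.81$\,m/s$^2$):
\begin{verbatim}
import numpy as np

# Recover the quoted wave parameters from ka = 0.208 and the finite-depth
# dispersion relation lambda = (g T^2 / 2 pi) tanh(2 pi d / lambda).
g, d, a, ka = 9.81, 1.2, 0.1, 0.208
lam = 2.0 * np.pi * a / ka                                    # ~3.0208 m
T = np.sqrt(2.0 * np.pi * lam / (g * np.tanh(2.0 * np.pi * d / lam)))
print(lam, T)                                                 # ~3.021, ~1.400 s
\end{verbatim}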
We vary the $kA$ values by changing the amplitude of the wave and quantify the run-up ratio $R/A$ for the truncated cylinder. An unstructured finite element mesh is constructed for the numerical wave tank via the open-source mesh generator \cite{gmsh}. A gradually coarsening mesh from mesh size $\delta=0.01$ at $z=0$ to $\delta =0.02$ is used to capture the interface region from $z=-0.2$ to $z=0.2$. This resolution also ensures that at least $4$ elements lie within the equilibrium interfacial region to capture the interface properties accurately. Furthermore, the resolution of the mesh is increased to $\delta=0.01$ in a region enveloping the cylinder. The total number of grid nodes in the mesh is around $5.6$ million, with approximately $34$ million unstructured tetrahedral elements. The mesh is depicted in Fig. \ref{Mesh_1}. The simulations are performed using 600 CPU cores with MPI parallelism, and the total time taken to complete the $5000$ time steps is approximately $6.5$ hours, with approximately $5$ seconds needed to complete a minimum of $3$ nonlinear iterations per time step. \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip,width=7.5cm]{Mesh_1a_image.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip,width=8cm]{Mesh_3_image.eps} \caption{} \end{subfigure} \caption{Three-dimensional computational mesh and contours of order parameter for the run-up problem: (a) refined interfacial region with unstructured finite element mesh, (b) contour plot of order parameter $\phi$ superimposed with the unstructured mesh.} \label{Mesh_1} \end{figure} The free-surface wave elevation is recorded in time at two locations: one at $x=2$m and the other at $x=3.5$m (the front face of the cylinder). The wave elevation is obtained by linearly interpolating the order parameter and finding the interface where $\phi=0$. The wave amplitude is calculated as the amplitude of the first harmonic of the free-surface elevation of the incident wave at the first probe point at $x=2$m by taking the Fourier transform of the elevation $\eta$ as $A = A(\omega_1)$, where $\omega_1$ is the first harmonic frequency of the incident wave. The wave run-up on the front face of the cylinder at $x=3.5$m is calculated as the mean of the maximum amplitude from each wave cycle as $R = \frac{1}{M} \displaystyle\sum_\mathrm{n=1}^M \eta_\mathrm{n}$, where $\eta_\mathrm{n}$ is the peak amplitude of the free-surface elevation $\eta$ at $x=3.5$m of the $\mathrm{n}$th wave cycle and $M$ is the total number of wave cycles, which is equal to $5$ in this case. The wave run-up ratio is evaluated as the ratio of the wave run-up $R$ on the front face of the cylinder to the incident wave amplitude $A$. All the post-processing is performed in the time interval $t/T \in [4,9]$ to exclude any initial transient solutions. The data is post-processed in a similar manner to the experimental campaign \cite{Morris_thesis}. We simulate cases for a range of $kA \in [0.04, 0.2282]$. The iso-contours of the free-surface at $\phi=0$ colored by the free-surface elevation are plotted in Fig. \ref{Plt_2} for two different $kA$ values at $t=12.5$. The time history of the incident wave and run-up for different $kA$ values is plotted in Fig. \ref{Plt_1}(a-c). We observe a secondary kink in the wave run-up at high $kA$ values in the figures, similar to the findings in the literature.
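The post-processing chain described above can be summarized by the following sketch; the time-series names and the simple cycle-wise peak search are illustrative assumptions rather than the exact pipeline.
\begin{verbatim}
import numpy as np

# Sketch of the run-up post-processing.  eta_in and eta_cyl are the recorded
# free-surface elevations at x = 2 m and x = 3.5 m, sampled with step dt;
# names and the cycle-wise peak search are illustrative.
def incident_amplitude(eta_in, dt, T):
    """Amplitude A of the first harmonic of the incident elevation."""
    eta_in = eta_in - np.mean(eta_in)
    spec = np.fft.rfft(eta_in)
    freqs = np.fft.rfftfreq(len(eta_in), d=dt)
    i1 = np.argmin(np.abs(freqs - 1.0 / T))   # bin closest to the wave frequency
    return 2.0 * np.abs(spec[i1]) / len(eta_in)

def wave_runup(eta_cyl, dt, T, n_cycles=5):
    """Mean of the per-cycle peak elevation at the cylinder front face."""
    n = int(round(T / dt))
    peaks = [np.max(eta_cyl[i * n:(i + 1) * n]) for i in range(n_cycles)]
    return np.mean(peaks)

# run_up_ratio = wave_runup(eta_cyl, dt, T) / incident_amplitude(eta_in, dt, T)
\end{verbatim}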
The wave run-up ratio is compared with the results from the literature in Fig. \ref{Plt_1}(d). Our results are in good agreement with the theoretical and experimental predictions. \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[trim={0.2cm 4cm 0.2cm 0.2cm},clip,width=7cm]{H_0_27_contour.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[trim={0.2cm 4cm 0.2cm 0.2cm},clip,width=7cm]{H_0_33_contour.eps} \caption{} \end{subfigure} \caption{Water wave run-up on a truncated cylinder: Snapshots of iso-contours $\phi=0$ colored by the free-surface elevation for two representative wave steepness parameters: (a) $kA=0.1837$ and (b) $0.2146$ at $t=12.5$.} \label{Plt_2} \end{figure} \vspace{-0.5cm} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[trim={10cm 0.2cm 10cm 0.2cm},clip,width=7cm]{kA_0_0779_TH.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[trim={10cm 0.2cm 10cm 0.2cm},clip,width=7cm]{kA_0_1381_TH.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[trim={10cm 0.2cm 10cm 0.2cm},clip,width=7cm]{kA_0_2146_TH.eps} \caption{} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[trim={10cm 0.2cm 10cm 0.2cm},clip,width=7cm]{RU_plot_1.eps} \caption{} \end{subfigure} \caption{Water wave run-up on a truncated cylinder: Time histories of the free-surface elevations at the probe locations for three representative wave steepness parameters: (a) $kA=0.0779$, (b) $0.1381$, (c) $0.2146$, and (d) comparison of the wave run-up ratio with the results from the literature at $ka=0.208$: The first- and second-order results are from frequency domain potential theory, the experimental results are from the campaign in \cite{Morris_thesis} and the ALE results are from the free-surface ALE solver in \cite{OMAE_RKJ_NWT}. Time and free-surface elevation are non-dimensionalized by time period $T$ and wave amplitude $A$ respectively.} \label{Plt_1} \end{figure} \section{Conclusions} \label{conclusion} In this work, a novel positivity preserving variational method has been developed for the accurate, general and robust phase-field modeling of incompressible two-phase flows involving high density ratios. A one-fluid formulation has been utilized under the diffuse-interface description to represent the interface between the two immiscible phases. The interface is evolved by solving the Allen-Cahn equation under the variational framework, which minimizes the free energy functional. The mass conservation and energy stability properties are imparted to the underlying scheme in a relatively simple manner. While the mass is conserved by a Lagrange multiplier term, the discrete energy stability is established by considering the mid-point approximation of the spatial part of the Lagrange multiplier. It is found that the mid-point approximation helps to ensure the energy stability of the underlying scheme. We want to emphasize the advantage of the partitioned staggered single-pass coupling, which reduces the computational time while providing accuracy and robustness in the solution. Standalone convergence tests of the Allen-Cahn phase-field solver showed second-order accuracy in space and better than first-order accuracy for the temporal discretization. The positivity preserving property of the scheme is demonstrated by considering the sloshing tank problem.
In contrast to the oscillatory solutions from linear stabilized methods, the numerical solution is found to be bounded if we include the PPV-based nonlinear stabilization terms. We also conclude that about 4-5 elements in the equilibrium interfacial region are sufficient to obtain an accurate solution of two-phase flows. Furthermore, selecting a smaller interface thickness parameter $\varepsilon$ increases the accuracy of the solution, but this advantage comes with a higher computational cost. Tests on the dam break problem demonstrate the advantage of the presented interface capturing technique in cases with breaking and merging of the interface. Finally, the ability of the solver to handle a practical problem of wave-structure interaction is demonstrated by analyzing the water run-up on a vertical truncated circular cylinder. Application of the present formulation to the fluid-structure interaction of floating bodies is a topic of future study. {\bf{Acknowledgements}} The first author would like to acknowledge the financial support from the National Research Foundation through the Keppel-NUS Corporate Laboratory. The conclusions put forward reflect the views of the authors alone, and not necessarily those of the institutions.
\section{Introduction} Among assessment methods, the job interview remains the most common way to evaluate candidates. The interview can be done via phone, live video, face to face, or, more recently, an asynchronous video interview. For the latter, candidates connect to a platform and record themselves while answering a set of questions chosen by the recruiter. The platform then allows several recruiters to evaluate the candidate, to discuss among themselves and possibly to invite the candidate to a face-to-face interview. Recruiters choose to use these platforms because they give them access to a larger pool of candidates and speed up the application processing time. In addition, they allow candidates to do the interview whenever and wherever it suits them best. However, a large number of these asynchronous interviews may quickly become unmanageable for recruiters. The highly structured nature of asynchronous video interviews (same questions, same amount of time per candidate) enhances their predictive validity and reduces inter-recruiter variability \cite{Schmidt2016TheFindings}. Moreover, recent advances in Social Signal Processing (SSP) \cite{Vinciarelli2014MoreComputing} have enabled automated candidate assessment \cite{Chen2017}, and companies have already started deploying solutions serving that purpose. However, previous studies used corpora of simulated interviews with limited sizes. The work proposed in this paper relies on a corpus that has been built in collaboration with a company and that consists of more than 7000 real job interviews for 475 open positions. The size of this corpus enables the exploration of emerging models such as deep learning models, which are known to be difficult to deploy in Social Computing because of the difficulty of obtaining large amounts of annotated social behavior data. Based on those facts, we propose HireNet, a new hierarchical attention neural network for the purpose of automatically classifying candidates into two classes: \textit{hirable} and \textit{not hirable}. Our model aims to assist recruiters in the selection process. It does not aim to make any automatic decision about candidate selection. First, this model was built to mirror the sequential and hierarchical structure of an interview assessment: recruiters watch a sequence of questions and answers, which are themselves sequences of words or behavioral signals. Second, the HireNet model integrates the context of the open position (questions during the interview and job title) in order both to determine the relative importance between question-answer pairs and to highlight important behavioral cues with regard to a question. Third, HireNet attention mechanisms enhance the interpretability of our model for each modality. In fact, they provide a way for recruiters to validate and trust the model through visualization, and possibly for candidates to locate their strengths or areas of improvement in an interview.\\ In this paper, we first present an overview of the related works on automatic video interview assessment. Then, we go through the construction and the underlying hypotheses of HireNet, our neural model for asynchronous video interview assessment. Next, we discuss the binary classification results of our model compared to various baselines and show salient interview slices highlighted by the integrated attention mechanisms. Finally, we conclude and discuss future directions of our study.
\section{Related Work} \subsection{Databases} To the best of our knowledge, only one corpus of interviews with real open positions has been collected and is subject to automatic analysis \cite{Nguyen2014HireBehavior}. This corpus consists of face-to-face job interviews for a short marketing assignment, whose candidates are mainly students. There are video corpora of face-to-face mock interviews that include two corpora built at the Massachusetts Institute of Technology \cite{Hoque2016Mach:Coach,Naim2018AutomatedPerformance}, and a corpus of students in services related to hospitality \cite{Muralidhar2016TrainingHospitality}. Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees \cite{Chen2016AutomatedParadigm}, a corpus of students from Bangalore University \cite{Rasipuram2017AutomaticInterviews} and a corpus collected through the use of crowdsourcing tools \cite{Chen2017}. Some researchers are also interested in online video resumes and have built a corpus of video CVs from YouTube \cite{Nguyen2016HirabilityResumes}. A first impressions challenge dataset was also supplemented by hirability annotation \cite{Escalante2017ChaLearnOverview}. Some corpora are annotated by experts or students in psychology \cite{Chen2016AutomatedParadigm,Chen2017,Nguyen2014HireBehavior,Rupasinghe2017ScalingAnalysis}. Other corpora have used crowdsourcing platforms or naive observers \cite{Rasipuram2017AutomaticInterviews} for annotation. Table\,\ref{previousCorpora} contains a summary of the corpora of job interviews used in previous works. \begin{table*} \centering \begin{tabular}{|l|c|c|c|} \hline Works & Interview & Real open position & Number of candidates\\ \hline \hline \cite{Nguyen2014HireBehavior} & Face to Face & Marketing short assignment & 36 \\ \hline \cite{Muralidhar2016TrainingHospitality} & Face to Face & None & 169 \\ \hline \cite{Naim2018AutomatedPerformance} & Face to Face & None & 138 \\ \hline \cite{Chen2016AutomatedParadigm} & Asynchronous Video & None & 36 \\ \hline \cite{Rasipuram2017AutomaticInterviews} & Asynchronous Video & None & 106 \\ \hline \cite{RaoS.B2017AutomaticStudy} & Asynchronous Video & None & 100 \\ \hline \cite{Rupasinghe2017ScalingAnalysis} & Asynchronous Video & None & 36 \\ \hline \cite{Chen2017} & Asynchronous Video & None & 260 \\ \hline This Study & Asynchronous Video & Sales positions & 7095 \\ \hline \end{tabular}% \caption{Summary of job interview databases} \label{previousCorpora} \end{table*} \subsection{Machine learning approaches for automatic analysis of video job interviews} \textbf{Features} Recent advances in SSP have offered toolboxes to extract features from audio \cite{Eyben2016TheComputing} and video streams \cite{Baltrusaitis2018OpenFaceToolkit}. As asynchronous job interviews are videos, features from each modality (verbal content, audio and video) have to be extracted frame by frame in order to build a classification model. Audio cues consist mainly of prosody features (fundamental frequency, intensity, mel-frequency cepstral coefficients, etc) and speaking activity (pauses, silences, short utterances, etc) \cite{Nguyen2015IMinute,RaoS.B2017AutomaticStudy}. Features derived from facial expressions (facial action units, head rotation and position, gaze direction, etc) constitute the most extracted visual cues \cite{Chen2017}. Finally, advances in automatic speech recognition have enabled researchers to use the verbal content of candidates.
In order to describe the verbal content, researchers have used lexical statistics (number of words, number of unique words, etc), dictionaries (Linguistic Inquiry Word Count) \cite{RaoS.B2017AutomaticStudy}, topic modeling \cite{Naim2018AutomatedPerformance}, bag of words or, more recently, document embeddings \cite{Chen2016AutomatedParadigm}. \textbf{Representation} Once features are extracted frame by frame, the problem of temporality has to be addressed. The most common approach is to simplify the temporal aspect by collapsing the time dimension using statistical functions (\textit{e.g.} mean, standard deviation, etc). However, the lack of sequence modeling can lead to the loss of some important social signals such as emphasis by raising one's eyebrows followed by a smile \cite{Janssoone2016UsingStance}. Moreover, co-occurrences of events are not captured by this representation. Thus, a distinction between a fake smile (activation of action unit 12) and a true smile (activation of action units 2, 4 and 12) is impossible \cite{Ekman1990TheII} without modeling co-occurrences. To solve the problem of co-occurrences, the representation of visual words, audio words or visual audio words has been proposed \cite{Chen2017,Chen2016AutomatedParadigm,RaoS.B2017AutomaticStudy}. The idea is to consider the snapshot of each frame as a word belonging to a specific dictionary. In order to obtain this codebook, an algorithm of unsupervised clustering is used to cluster common frames. Once we obtain the clusters, each class represents a "word" and we can easily map an ensemble of extracted frames to a document composed of these words. Then, the task is treated as a document classification task. Additionally, the representation is not learned jointly with the classification models, which can cause a loss of information. \textbf{Modeling attempts and classification algorithms} As video job interviews have multiple levels, an architectural choice has to be made accordingly. Some studies tried to find the most salient moments during an answer to a question \cite{Nguyen2015IMinute}, the most important questions \cite{Naim2018AutomatedPerformance} or to use all available videos independently \cite{Chen2017} in order to predict the outcome of a job interview. Finally, when a sufficient representation is built, a classification or a regression model is trained. Regularized logistic regression (LASSO or Ridge), Random Forest and Support Vector Machines are the most widely used algorithms. From a practical point of view, manually annotating thin slices of videos is time consuming. On the other hand, considering each answer with the same label as the outcome of the interview is considerably less expensive, though some examples could be noisy. Indeed, a candidate with a negative outcome could have performed well on some questions. Furthermore, all of these models fail to take into account the sequentiality of social signals or questions. \subsection{Neural networks and attention mechanisms in Social Computing} Neural networks have proven to be successful in numerous Social Computing tasks. Multiple architectures in the field of neural networks have outperformed hand crafted features for emotion detection in videos \cite{Zadeh2018}, facial landmarks detection \cite{Baltrusaitis2018OpenFaceToolkit}, and document classification \cite{Yang2016}. These results are explained by the capability of neural networks to automatically perform useful transformations on low level features.
Moreover, some architectures such as Recurrent Neural Networks were especially tailored to represent sequences. In addition, attention mechanisms have proven to be successful in highlighting salient information, enhancing the performance and interpretability of neural networks. For example, in rapport detection, attention mechanisms make it possible to focus only on important moments during dyadic conversations \cite{Yu2017TemporallyContent}. Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms \cite{Zadeh2017TensorAnalysis,Zadeh2018}. \section{Model} \subsection{HireNet and underlying hypotheses} We propose here a new model named HireNet, as in a neural network for hirability prediction. It is inspired by work carried out on neural networks for natural language processing, and in particular by the HierNet \cite{Yang2016}, which aims to model a hierarchy in a document. Following the idea that a document is composed of sentences and words, a job interview can be decomposed as a sequence of answers to questions, and the answers as sequences of low level descriptors describing each answer. The model architecture (see Figure~\ref{fig:HireNet}) is built relying on four hypotheses. The first hypothesis (\textbf{H1}) is the importance of the information provided by the sequentiality of the multimodal cues occurring in the interview. We thus choose to use a sequential model such as a recurrent neural network. The second hypothesis (\textbf{H2}) concerns the importance of the hierarchical structure of an interview: the decision to hire should be performed at the candidate level, the candidates answering several questions during the interview. We thus choose to introduce different levels of hierarchy in HireNet, namely the candidate level, the answer level and the word (or frame) level. The third hypothesis (\textbf{H3}) concerns the existence of salient information or social signals in a candidate's video interview: questions are not equally important and not all the parts of the answers have an equal influence on the recruiter's decision. We thus choose to introduce attention mechanisms in HireNet. The last hypothesis (\textbf{H4}) concerns the importance of contextual information such as questions and job titles. Therefore, HireNet includes vectors that encode this contextual information. \begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{graphiqueGhazi.png} \caption{HireNet} \label{fig:HireNet} \end{figure} \subsection{Formalization} We represent a video interview as an object composed of a job title $J$ and $n$ question-answer pairs \( \left \{ \left \{ Q_1,A_1 \right \},\left \{ Q_2,A_2 \right \}, \ldots,\left \{ Q_n,A_n \right \} \right \} \). In our model, the job title $J$ is composed of a sequence of $l_J$ words \( \left \{ w^J_1, w^J_2, \ldots, w^J_{l_J} \right \} \) where $l_J$ denotes the length of the job title. In the same way, the $i$-th question $Q_i$ is a sequence of $l_{Q_i}$ words \( \left \{ w_1^{i}, w_2^{i}, \ldots, w^{i}_{l_{Q_i}} \right \} \) where $l_{Q_i}$ denotes the number of words in the question $i$. $A_i$ denotes the sequence of low level descriptors \( \left \{ x_1^{i}, x_2^{i}, \ldots, x^{i}_{l_{A_i}} \right \} \) describing the $i$-th answer. In our study, these low level descriptors could be embedded words, features extracted from an audio frame, or features extracted from a video frame.
$l_{A_i}$ denotes the length of the sequence of low level descriptors of the $i$-th answer. \subsubsection{Gated Recurrent Unit Encoder} We decided to use a Gated Recurrent Unit (GRU) \cite{cho2014learning} to encode information from the job title, the questions and the answers. A GRU is able to encode sequences. It uses two mechanisms to solve the vanishing gradient problem, namely the reset gate, controlling how much past information is needed, and the update gate, determining how much past information has to be kept and the amount of new information to add. For formalization, we will denote by $h_{t}$ the hidden state of the GRU at timestep $t$ of the encoded sequence. \subsubsection{Low level encoder} This part of the model aims to encode the sequences of low level descriptors. As mentioned before, the sequences can represent a text, an audio stream or a video stream. A bidirectional GRU is used to obtain representations from both directions for each element of the sequence $X$. It contains the forward $\overrightarrow{GRU}$ which reads the sequence from left to right and the backward $\overleftarrow{GRU}$ which reads the sequence from right to left: \begin{equation} \overrightarrow{h_{t}^i} = \overrightarrow{GRU}(x_t^{i}) , t \in [1,l_{A_i} ] \end{equation} \begin{equation} \overleftarrow{h_{t}^i} = \overleftarrow{GRU}(x_t^{i}) , t \in [l_{A_i}, 1] \end{equation} In the same way, an encoding for a given low level descriptor $x_t^{i}$ is obtained by concatenating forward hidden states and backward hidden states: \begin{equation} h_{t}^i = [ \overrightarrow{h_{t}^i}, \overleftarrow{h_{t}^i}] \end{equation} Encoding sequences in a bidirectional fashion ensures the same amount of previous information for each element of $(A_i)_{1\leq i\leq n}$. Using a simple forward encoder could lead to biased attention vectors focusing only on the latest elements of the answers. \subsubsection{Local context encoder} In this study, the local context information corresponds to the questions $(Q_i)_{1\leq i\leq n}$. In order to encode these sentences, we use a simple forward GRU. \begin{equation} \overrightarrow{h_{t}^{Q_i}} = \overrightarrow{GRU}(w_t^{i}) , t \in [1,l_{Q_i} ] \end{equation} The final representation of a question is the hidden state of the last word in the question $Q_i$ (\textit{i.e.} $h_{l_{Q_i}}^{Q_i}$). \subsubsection{Low level attention} In order to obtain a better representation of the candidate's answer, we aim to detect elements in the sequence which were salient for the classification task. Moreover, we hypothesize that the local context is highly important: different behavioral signals can occur depending on the question type, and the question type can also influence the way recruiters assess their candidates \cite{Roulin2015HonestEvaluations}. An additive attention mechanism is proposed in order to extract the importance of each moment in the sequence representing the answer. \begin{equation} u_{t}^{i} = \tanh(W_A h_{t}^i + W_Q h_{l_{Q_i}}^{Q_i}+b_Q) \end{equation} \begin{equation} \alpha^{i}_t = \frac{\exp(u_p^\top {u_{t}^{i}} )}{\sum\nolimits_{t'} \exp(u_p^\top {u_{t'}^{i}})} \end{equation} \begin{equation} a_i = \sum\nolimits_{t} \alpha^{i}_t h_{t}^i \end{equation} where $W_A$ and $W_Q$ are weight matrices, $u_p$ and $b_Q$ are weight vectors and $u_p^\top$ denotes the transpose of $u_p$. \subsubsection{High level encoder} In order to have the maximum amount of information, we concatenate, at the second level, the representation of the local context and the answer representation.
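This low level block (bidirectional encoding, question-conditioned additive attention, and the concatenation just mentioned) can be summarized by a compact PyTorch-style sketch; the layer sizes and tensor names below are illustrative assumptions and do not correspond to the actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class LowLevelBlock(nn.Module):
    """Bidirectional GRU over per-frame descriptors followed by additive
    attention conditioned on the encoded question.  Sizes are illustrative."""
    def __init__(self, feat_dim, hid_dim, q_dim, att_dim):
        super().__init__()
        self.bigru = nn.GRU(feat_dim, hid_dim, batch_first=True,
                            bidirectional=True)
        self.W_A = nn.Linear(2 * hid_dim, att_dim, bias=False)
        self.W_Q = nn.Linear(q_dim, att_dim)           # carries the bias b_Q
        self.u_p = nn.Linear(att_dim, 1, bias=False)   # context vector u_p

    def forward(self, answer, q_code):
        # answer: (batch, frames, feat_dim); q_code: (batch, q_dim)
        h, _ = self.bigru(answer)                      # (batch, frames, 2*hid_dim)
        u = torch.tanh(self.W_A(h) + self.W_Q(q_code).unsqueeze(1))
        alpha = torch.softmax(self.u_p(u).squeeze(-1), dim=1)
        a = (alpha.unsqueeze(-1) * h).sum(dim=1)       # attended answer a_i
        return torch.cat([q_code, a], dim=-1), alpha   # [h_Q, a_i] and weights
\end{verbatim}
The returned pair plays the role of the $[\,h_{l_{Q_i}}^{Q_i}, a_i\,]$ vectors fed to the second level, and the attention weights $\alpha^{i}_t$ are the quantities visualized later in the analysis.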
Moreover, we think that given the way video interviews work, the more questions a candidate answers during the interview, the more he adapts and gets comfortable. In the light of this, we decided to encode question-answer pairs as a sequence. Given \( \left \{ [ \ h_{l_{Q_1}}^{Q_1}, a_1 ] \ ,[ \ h_{l_{Q_2}}^{Q_2}, a_2 ] \ , \ldots, [ \ h_{l_{Q_n}}^{Q_n}, a_n ] \ \right \} \), we can use the same representation scheme as that of the low level encoder: \begin{equation} \overrightarrow{h_i} = \overrightarrow{GRU}([ h_{l_{Q_i}}^{Q_i}, a_i ]) , i \in [1,n ] \end{equation} \begin{equation} \overleftarrow{h_i} = \overleftarrow{GRU}([ h_{l_{Q_i}}^{Q_i}, a_i ]) , i \in [n,1 ] \end{equation} We will also concatenate forward hidden states and backward hidden states: \begin{equation} h_i = [ \overrightarrow{h_i}, \overleftarrow{h_i}] \end{equation} \subsubsection{Global context encoder} We encode the job title the same way we encode the questions: \begin{equation} \overrightarrow{h_{t}^{J}} = \overrightarrow{GRU}(w_t^{J}) , t \in [1,l_J] \end{equation} As done for the representation of the question, the final representation of the job title is the hidden state of the last word of $J$ (\textit{i.e.} $h_{l_J}^{J}$). \subsubsection{High level attention} The importance of a question depends on the context of the interview, and specifically, on the type of job the candidate is applying for. For instance, a junior sales position interview could accord more importance to social skills, while an interview for a senior position could be more challenging on the technical side.\\ Like low level attention, high level attention is composed of an additive attention mechanism: \begin{equation} u_{i} = \tanh(W_P h_i + W_J h_{l_J}^{J}+b_J) \end{equation} \begin{equation} \alpha_i = \frac{\exp(u_J^\top {u_{i}} )}{\sum\nolimits_{i'} \exp(u_J^\top {u_{i'}})} \end{equation} \begin{equation} v = \sum\nolimits_{i} \alpha_i h_i \end{equation} where $W_P$ and $W_J$ are weight matrices, $u_J$ and $b_J$ are weight vectors and $u_J^\top$ denotes the transpose of $u_J$. Finally, $v$ summarizes all the information of the job interview. \subsubsection{Candidate classification} Once $v$ is obtained, we use it as a representation in order to classify candidates: \begin{equation} \widetilde{y} = \sigma(W_v v + b_v) \end{equation} where $W_v$ is a weight matrix and $b_v$ a weight vector. As the problem we are facing is that of a binary classification, we chose to minimize the binary cross-entropy computed between $\widetilde{y}$ and the true labels of candidates $y$. \section{Experiments} \subsection{Dataset} We have decided to focus on only one specific type of job: sales positions. After filtering based on specific job titles from the ROME Database\footnote{https://www.data.gouv.fr/en/datasets/repertoire-operationnel-des-metiers-et-des-emplois-rome/}, a list of positions was selected and verified by the authors and an expert from Human Resources (HR). Finally, in collaboration with an HR industry actor, we have obtained a dataset of French video interviews comprising more than 475 positions and 7938 candidates. As they watch candidates' videos, recruiters can like, dislike, shortlist candidates, evaluate them on predefined criteria, or write comments. To simplify the task, we set up a binary classification: candidates who have been liked or shortlisted are considered part of the \textit{hirable} class and others part of the \textit{not hirable} class.
If multiple annotators have annotated the same candidates, we proceed with a majority vote. In case of a draw, the candidate is considered \textit{hirable}. It is important to note that the videos are quite different from what could be produced in a laboratory setup. Videos can be recorded from a webcam, a smartphone or a tablet., meaning noisy environments and low quality equipment are par for the course. Due to these real conditions, feature extraction may fail for a single modality during a candidate's entire answer. One example is the detection of action units when the image has lighting problems. We decided to use all samples available in each modality separately. Some statistics about the dataset are available in Table \ref{table:datasets}. Although the candidates agreed to the use of their interviews, the dataset will not be released to public outside of the scope of this study due to the videos being personal data subject to high privacy constraints. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Modality & Text & Audio & Video \\ \hline \hline Train set & 6350 & 6034 & 5706 \\ \hline Validation set & 794 & 754 & 687 \\ \hline Test set & 794 & 755 & 702 \\ \hline \hline \begin{tabular}[c]{@{}l@{}}Questions per\\ interview (mean)\end{tabular} & 5.05 & 5.10 & 5.01 \\ \hline Total length & 3.82\,M\,words & 557.7\,h & 508.8\,h \\ \hline \begin{tabular}[c]{@{}l@{}}Length per \\ question (mean)\end{tabular} & 95.2\,words & 52.19\,s & 51.54\,s \\ \hline \begin{tabular}[c]{@{}l@{}}\textit{Hirable} label\\ proportion\end{tabular} & 45.0\,\% & 45.5\,\% & 45.4\,\% \\ \hline \end{tabular} \caption{Descriptive table of the dataset: number of candidates in each set and overall statistics of the dataset.} \label{table:datasets} \end{table} \subsection{Experimental settings} \begin{table*}[t!] \centering \begin{tabular}{|l|c|c|c||c|c|c||c|c|c|} \hline & \multicolumn{3}{c||}{Text} & \multicolumn{3}{c||}{Audio} & \multicolumn{3}{c|}{Video}\\ \hline Model & $Precision$ & $Recall$& $\textbf{F1}$ & $Precision$ & $Recall$& $\textbf{F1}$& $Precision$ & $Recall$& $\textbf{F1}$\\ \hline Non-sequential& 0.553 & 0.285 & 0.376 &0.590 & 0.463& 0.519& 0.507 & 0.519& 0.507\\ \hline Bo*W&0.656 & 0.403& 0.499& 0.532 & 0.402& 0.532&0.488 & 0.447& 0.467\\ \hline \hline Bidirectional GRU&0.624& 0.510& 0.561& 0.539& 0.596& 0.566& 0.559& 0.500& 0.528\\ \hline HN\_AVG&0.502& 0.800& 0.617&0.538& 0.672& 0.598&0.507& 0.550& 0.528\\ \hline HN\_SATT& 0.512& 0.803& 0.625& 0.527& 0.736& 0.614& 0.490& 0.559& 0.522\\ \hline HireNet &0.539& 0.797& \textbf{0.643}&0.576& 0.724& \textbf{0.642}&0.562& 0.655& \textbf{0.605} \\ \hline \end{tabular} \caption{Results for Monomodal models} \label{ResultsTable} \end{table*} The chosen evaluation metrics are precision, recall and F1-score of \textit{hirable} class. They are well suited for binary classification and used in previous studies \cite{Chen2017}. We split the dataset into a training set, a validation set for hyper-parameter selection based on the F1-score, and a test set for the final evaluation of each model. Each set constitutes respectively 80\%, 10\% and 10\% of the full dataset. \subsection{Extraction of social multimodal features} For each modality, we selected low-level descriptors to be used as per-frame features, and sequence-level features to be used as the non-sequential representation of a candidate's whole answer for our non-sequential baselines. 
\textbf{Word2vec}: Pretrained word embeddings are used for the BoTW (Bag of Text Words, presented later in this section), and the neural networks. We used word embeddings of dimension 200 from \cite{fauconnier_2015} pretrained on a French Wikipedia corpus. \textbf{eGeMAPS}: Our frame-level audio features are extracted using OpenSmile \cite{Eyben2013b}. The configuration we use is the same one used to obtain the eGeMAPS \cite{Eyben2016TheComputing} features. GeMAPS is a well-known minimalistic set of features selected for their saliency in Social Computing, and eGeMAPS is its extended version. We extract the per-frame features prior to the aggregations performed to obtain the eGeMAPS representation. \textbf{OpenFace}: We extract frame-level visual features with OpenFace \cite{Baltrusaitis2018OpenFaceToolkit}, a state-of-the-art visual behavioral analysis software that yields various per-frame meaningful metrics. We chose to extract the position and rotation of the head, the intensity and presence of action units, and the gaze direction. As different videos have different frame-rates, we decided to smooth values with a time-window of 0.5\,s and an overlap of 0.25\,s. The duration of 0.5\,s is frequently used in the Social Computing literature \cite{Varni2018ComputationalInteractions} and has been validated in our corpus as a suitable time-window size by annotating segments of social signals in a set of videos. \subsection{Baselines} First, we compare our model with several vote-based methods: \textit{i)} \textbf{Random vote baseline} (One thousand random draws respecting the train dataset label balance were made. The F1-score is then averaged over those one thousand samples); \textit{ii)} \textbf{Majority Vote} (This baseline is simply the position-wise majority label. Since our model could just be learning the open position each candidate comes from and its corresponding majority vote, we decided to include this baseline to show that our model reaches beyond those cues).\\ Second, we compare our model with non-sequential baselines: \textit{i)-a} \textbf{Non-sequential text} (we train a Doc2vec \cite{DBLP:conf/icml/LeM14} representation on our corpus, and we use it as a representation of our textual inputs); \textit{i)-b} \textbf{Non-sequential audio} (we take the eGeMAPS audio representation as described in \cite{Eyben2016TheComputing}. That representation is obtained by passing the above descriptors into classical statistical functions and hand-crafted \emph{ad hoc} measures applied over the whole answer. The reason we chose GeMAPS features is also that they were designed to ease comparability between different works in the field of Social Computing); \textit{i)-c} \textbf{Non-sequential video} (our low-level video descriptors include binary descriptors and continuous descriptors. The mean, standard deviation, minimum, maximum, sum of positive gradients and sum of negative gradients have been successfully used for a behavioral classification on media content in \cite{Ryoo2015PooledVideos}. We followed that representation scheme for our continuous descriptors. As for our discrete features, we chose to extract the mean, the number of active segments, and the active segment duration mean and standard deviation); \textit{ii)} \textbf{Bag of * Words} (We also chose to compare our model to \cite{Chen2017}'s Bag of Audio and Video Words: we run a K-means algorithm on all the low-level frames in our dataset.
Then we take our samples as documents, and our frames' predicted classes as words, and use a "Term Frequency-inverse Document Frequency" (TF-iDF) representation to model each sample). \\ For each modality, we use the non-sequential representations mentioned above in a monomodal fashion as inputs to three classic learning algorithms (namely SVM, Ridge regression and Random Forest) with respective hyperparameter searches. The best of the three algorithms is selected. As these models do not have a hierarchical structure, we will train them to yield answer-wise labels (as opposed to the candidate-wise labeling performed by our hierarchical model). At test time we average the output value of the algorithm for each candidate on the questions he answered.\\ Third, the proposed sequential baselines aim at checking the four hypotheses described above: \textit{i)} comparing the \textbf{Bidirectional-GRU model} with previously described non-sequential approaches aims to validate \textbf{H1} on the contribution of sequentiality in an answer-wise representation; \textit{ii)} the Hierarchical Averaged Network (HN\_AVG) baseline adds the hierarchy in the model in order to verify \textbf{H2} and \textbf{H3} (we replace the attention mechanism by an averaging operator over all of the non-zero bidirectional GRU outputs); \textit{iii)} the Hierarchical Self Attention Network (HN\_SATT) is a self-attention version of HireNet which aims to see the actual effect of the added context information (\textbf{H4}). \subsection{Multimodal models} Given the text, audio, and video trained versions of our HireNet, we report two basic models performing multimodal inference, namely an early fusion approach and a late fusion approach. In the early fusion, we concatenate the last layer $v$ of each modality as a representation, and proceed with the same test procedure as our non-sequential baselines. For our late fusion approach, the decision for a candidate is carried out using the average decision score $\widetilde{y}$ between the three modalities. \section{Results and analyses} \begin{table}[htb!] \centering \begin{tabular}{|l|c|c|c|} \hline Model & $Precision$ & $Recall$& $\textbf{F1}$ \\ \hline Random vote& 0.459 & 0.452 & 0.456\\ \hline Majority vote&0.567 & 0.576 & 0.571\\ \hline \hline Early Fusion&0.587& 0.705 & 0.640\\ \hline Late Fusion&0.567 & 0.748 & \textbf{0.645}\\ \hline \end{tabular} \caption{Results for Multimodal models and vote-based baselines} \label{ResultsTable2} \end{table} First of all, Tables \ref{ResultsTable} and \ref{ResultsTable2} show that most of our neural models fairly surpass the vote-based baselines. \\ In Table \ref{ResultsTable}, the F1-score has increased, going from the non-sequential baselines to the Bidirectional-GRU baselines for all the modalities, which supports \textbf{H1}. We can also see that HN\_AVG is superior to the Bidirectional-GRU baselines for audio and text, validating \textbf{H2} for those two modalities. This suggests that sequentiality and hierarchy are adequate inductive biases for a job interview assessment machine learning algorithm. As for \textbf{H3}, HN\_SATT did show better results than HN\_AVG for text and audio. In the end, our HireNet model surpasses HN\_AVG and HN\_SATT for each modality. Consequently, a fair amount of useful information is present in the contextual frame of an interview, and this information can be leveraged through our model, as stated in \textbf{H4}. Audio and text monomodal models display better performance than video models.
The same results were obtained in \cite{Chen2017}.\\ Our attempts at fusing the multimodal information synthesized in the last layer of each HireNet model only slightly improved on the single modality models. \subsection{Attention visualization} \textbf{Text} In order to visualize the different words on which attention values were high, we computed new values of interest, as has been done in \cite{Yu2017TemporallyContent}. As the sentence length changes between answers, we multiply every word's attention value ($\alpha_t^i$) by the number of words in the answer, resulting in the relative attention of the word with respect to the sentence. In the same way, we multiply each question attention by the number of questions, resulting in the relative attention of the question with respect to the job interview. Then, similarly to \cite{Yang2016}, we compute $\sqrt{p_q}p_w$ where $p_w$ and $p_q$ are respectively the values of interest for word $w$ and question $q$. The list of the 20 most important words contains numerous names of banks and insurance companies (Natixis, Aviva, CNP, etc) and job knowledge vocabulary (mortgage, brokerage, tax exemption, etc), which means that their occurrence in candidates' answers plays an important role in hirability prediction. \begin{figure}[htb!] \centering \includegraphics[width=\columnwidth]{video_attention_lastgood.png} \caption{Example of salient moments detected with peaks of attention on the video modality} \label{video} \end{figure} \textbf{Video} In order to visualize which moments were highlighted by attention mechanisms in a video, we display an example of the attention values for an answer in Figure \ref{video}. In this figure, the higher the attention value, the more the corresponding frames are considered task-relevant by the attention mechanism. As we can see, some peaks are present. Three thin slices with high attention values are presented. Some social signals that are important in a job interview are identified. We hypothesize that the smile detected in Frame 1 could be part of a tactic to please the interviewer known as deceptive ingratiation \cite{Schneider2015CuesInterview}. In addition, Frames 2 and 3 are representative of stress signals from the candidate. In fact, lip suck was suggested to be linked to anxiety in \cite{Feiler2016BehavioralAnxiety}. \textbf{Audio} The same visualization procedure used for video has been investigated for audio. As the audio signal is harder to visualize, we decided to describe the general pattern of audio attention weights. In most cases, when the prosody is homogeneous throughout the answer, attention weights are distributed uniformly and show no peaks, as opposed to what was observed for video. However, salient moments may appear, especially when candidates produce successive disfluencies. Thus, we have identified peaks where false starts, filler words, and repeated or restarted sentences occur. \textbf{Questions} We aim to explore the attention given to the different questions during the same interview. For this purpose, we randomly picked one open position from the test dataset comprising 40 candidates. Questions describing the interview and the corresponding averaged attention weights are displayed in Figure \ref{questionsatt}. First, it seems that attention weight variability between questions is higher for the audio modality than for the text and video modalities. Second, the decrease in attention for Questions 5 and 6 could be explained by the fact that those questions are designed to assess "soft skills".
Third, peaks of attention weight for the audio modality on Questions 2 and 4 could be induced by the fact that these questions are job-centric. Indeed, it could be possible that disfluencies tend to appear more in job-centric questions or that prosody is more important in first impressions of competence. \begin{figure} \centering \includegraphics[width=0.77\columnwidth]{QuestionAttention.png} \caption{Questions describing the randomly picked open position and their respective attention values} \label{questionsatt} \end{figure} \section{Conclusion and future directions} HR industry actors nowadays offer tools to automatically assess candidates undergoing asynchronous video interviews. However, no studies have been published regarding these tools and their predictive validity. The contribution of this work is twofold. First, we evaluate the validity of previous approaches in real conditions (\textit{e.g.} in-the-wild settings, true applications, real evaluations, etc). Second, we used deep learning methods in order to faithfully model the structure of asynchronous video interviews. In that sense, we proposed a new version of Hierarchical Attention Networks, called HireNet, that is aware of the interview's contextual elements (questions and job title) and has shown better performance than previous approaches. First basic experiments on multimodal fusion have also been performed (early and late fusion). In future work, the obtained multimodal performance could be improved by leveraging more sophisticated multimodal fusion schemes. HireNet was evaluated on a corpus containing interviews for various jobs -- 475 different positions -- in the domain of sales positions. Theoretical findings from industrial and organizational psychology suggest that some dimensions are common across different positions \cite{Huffcutt2001IdentificationInterviews.}. However, we would like to extend the corpus to domains other than sales in order to i) validate the relevance of our model for other types of positions, ii) determine which competencies are common or not across jobs. In that sense, the use of multi-domain models \cite{Liu2017AdversarialClassification} could be of great help. Our model currently considers two labels (“hirable” and “not hirable”). Extending our annotations to more fine-grained information (communication skills, social effectiveness, etc) could provide useful insights about the profile of a candidate and its potential fit with the position in question. Through the use of attention mechanisms, we aimed to highlight salient moments and questions for each modality, which contributes to the transparency and the interpretability of HireNet. Such transparency is very important for Human Resources practitioners to trust an automatic evaluation. Further investigations could be conducted on the proposed attention mechanisms: i) to confirm the saliency of the selected moments using the discipline of Industrial and Organizational psychology; ii) to assess the influence of the slices deemed important. This way, a tool to help candidates train for interviews could be developed. Last but not least, ethics and fairness are important considerations that deserve to be studied. In that sense, detection of individual and global bias should be prioritized in order to give useful feedback to practitioners. Furthermore, we are considering using adversarial learning, as in \cite{Zhang2018}, in order to ensure fairness during the training process.
\section{Acknowledgments} This work was supported by the company EASYRECRUE, from whom the job interview videos were collected. We would like to thank Jeremy Langlais for his support and his help. We would also like to thank Valentin Barriere for his valuable input and for the name given to the model, and Marc Jeanmougin and Nicolas Bouche for their help with the computing environment. Finally, we thank Erin Douglas for proofreading the article. \bibliographystyle{aaai}
\section{Introduction} \label{sect:1} \hspace*{0.5cm} Hadron decays of the tau lepton provide a prime scenario to study the hadronization of QCD currents in an energy region populated by many resonances. This task has a twofold significance. First, the study of branching fractions and spectra of those decays is a major goal of the asymmetric B factories (BABAR, BELLE). These are supplying an enormous amount of quality data owing to their large statistics, and the same is planned for the near future at tau-charm factories such as BES-III. Second, the required hadronization procedures involve QCD in a non-perturbative energy region ($E \stackrel{<}{_\sim} M_{\tau} \sim 1.8 \, \mbox{GeV}$) and, consequently, these processes are a clean benchmark, not spoiled by an initial hadron state, where we can learn about the treatment of strong interactions when driven by resonances. \par Analyses of tau decay data involve matrix elements that convey the hadronization of the vector and axial-vector currents. At present there is no determination from first principles of those matrix elements, as they involve strong interaction effects in the non-perturbative regime. Therefore we have to rely on models that parameterize the form factors that arise from the hadronization. A relevant one is the so-called K\"uhn-Santamar\'{\i}a model (KS) \cite{Pich:1987qq}, which, essentially, relies on the construction of form factors in terms of Breit-Wigner functions weighted by unknown parameters that are extracted from phenomenological analyses of data. This procedure, which has proven to be successful in the description of the $\pi \pi \pi$ final state, has been employed in the study of many two- and three-hadron tau decays \cite{KS3,KS5,KS6,Gomez-Cadenas:1990uj}. The ambiguity related to the choice of Breit-Wigner functions \cite{Pich:1987qq,GS} is currently being exploited to estimate the errors in the determination of the free parameters. The measurement of the $K K \pi$ spectrum by the CLEO Collaboration \cite{Liu:2002mn} has shown that the parameterization provided by the KS model cannot reproduce the experimental features while keeping, at the same time, consistency with the underlying strong interaction theory \cite{Portoles:2004vr}. The solution provided by CLEO, based on the introduction of new parameters, spoils the normalization of the Wess-Zumino anomaly, i.e. a specific prediction of QCD. Indeed, arbitrary parameterizations are of little help in the procedure of obtaining information about non-perturbative QCD. They may fit the data but do not provide us with hints on the hadronization procedures. The key point in order to uncover the inner structure of hadronization is to guide the construction of the relevant form factors with the use of known properties of QCD. \par The TAUOLA library \cite{tauola} is, at present, a key tool that handles analyses of tau decay data. Though originally it included only assorted versions of the KS model, it has been opened to the introduction of matrix elements obtained with other models. Hence it has become an excellent tool in which theoretical models confront experimental data. This and analogous libraries are appropriate benchmarks in which to apply the results of our research. 
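\par For orientation, the KS-type parameterizations mentioned above amount, schematically, to building a form factor as a weighted combination of Breit-Wigner functions. A minimal numerical sketch of this construction is given below; the masses, widths and the weight $\beta$ are placeholders for illustration only and do not correspond to the values used in any published fit.
\begin{verbatim}
# Schematic sketch of a KS-type parameterization: a form factor built as a
# weighted combination of Breit-Wigner functions.  All numerical values below
# are placeholders for illustration, not fitted parameters.
import numpy as np

def breit_wigner(s, M, Gamma):
    """Fixed-width Breit-Wigner normalized to 1 at s = 0."""
    return M**2 / (M**2 - s - 1j*M*Gamma)

def ks_like_form_factor(s, beta=-0.1, M1=0.775, G1=0.149, M2=1.465, G2=0.40):
    return (breit_wigner(s, M1, G1) + beta*breit_wigner(s, M2, G2)) / (1.0 + beta)

s = np.linspace(0.1, 3.0, 5)     # GeV^2
print(np.abs(ks_like_form_factor(s)))
\end{verbatim}
Such a combination is normalized to one at $s=0$ by the $1/(1+\beta)$ factor but, as stressed above, it does not automatically satisfy the chiral and asymptotic constraints of QCD.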
\par At very low energies ($E \ll M_{\rho}$, $M_{\rho}$ being the mass of the $\rho(770)$ resonance) the chiral symmetry of massless QCD rules the construction of an effective field theory that allows a perturbative expansion in momenta ($p$) and light quark masses ($m$), as $(p^2, M_{\pi}^2) / \Lambda_{\chi}^2$, with $\Lambda_{\chi} \sim 4 \pi F \sim M_{\rho}$ the scale of chiral symmetry breaking; here $M_{\pi}$ is the pion mass and $F$ is the decay constant of the pion. Indeed Chiral Perturbation Theory ($\chi PT$) \cite{Weinberg:1978kz} drives the hadronization of QCD currents into the lightest multiplet of pseudoscalar mesons, $\pi$, $K$ and $\eta$. The application of this framework to the study of pion decays of the tau lepton was carried out in Ref.~\cite{Colangelo:1996hs}, though, obviously, it can only describe a tiny region of the available phase space. It is clear that, whatever the structure given to the form factors in the region of resonances, it should match the chiral constraint in its energy domain. In fact the parameterizations with Breit-Wigner functions that are at the center of the KS model fail to fulfill that condition already at ${\cal O}(p^4)$ in the chiral expansion \cite{Portoles:2000sr,GomezDumm:2003ku}. \par Our knowledge of QCD in the energy region of the resonances is rather poor. Contrary to the very low energy domain, we do not know how to construct a dual effective field theory of strong interactions for $E \sim M_{\rho}$. There is a tool, though, that could shed light on the appropriate structure of a Lagrangian theory that we could use. This is yielded by the large-$N_C$ limit of $SU(N_C)$ QCD \cite{'tHooft:1973jz}, which introduces an expansion in inverse powers of the number of colours $N_C$. The essential idea relevant for our goal that comes out of that setting is that, at leading order in the expansion, i.e. $N_C \rightarrow \infty$, any amplitude is given by the tree-level diagrams generated by a local Lagrangian with an infinite spectrum of zero-width states. This framework, as we will see, can be used to establish a starting point in the study of the hadron resonance region and, consequently, in the hadron decays of the tau lepton. The setting recalls the role of the resonance chiral theory \cite{Ecker:1988te,Donoghue:1988ed}, which can be better understood in the light of the large-$N_C$ limit \cite{Peris:1998nj,Pich:2002xy,Cirigliano:2006hb}. \par At high energies ($E \gg M_{\rho}$), where the light-flavoured continuum is reached, perturbative QCD is the appropriate framework to deal with the description of the strong interaction of partons. In particular, a well-known feature of form factors of QCD currents is their smooth behaviour at high transfer of momenta \cite{Brodsky}; thus it is reasonable to expect that the form factors match this behaviour above the energy region of the resonances. Another related tool is the study of the Operator Product Expansion (OPE) of Green functions of QCD currents that are order parameters of the chiral symmetry breaking. It is possible to evaluate these Green functions within a resonance theory, and then perform a matching with the leading term of their OPE at high transfers of momenta~\cite{Moussallam:1997xx,Knecht:2001xc,Cirigliano:2004ue,RuizFemenia:2003hm,Cirigliano:2005xn,Mateu:2007tr,Ecker:1989yg}. In general, the information coming from high energies is important to constrain a resonance Lagrangian. 
It is reasonable to assume that the effective couplings collect information coming from the energy region above the resonances, hence the described procedure should help to determine the corresponding coupling constants. Indeed, this approach has proven to be capable of that task \cite{Ecker:1989yg}. \par In Ref.~\cite{GomezDumm:2003ku} we considered all mentioned steps in order to analyse the $\pi \pi \pi$ hadron final state in the decay of the tau lepton. Here we continue that undertaking by considering the $K K \pi$ channels that, as mentioned above, do not fit well within the KS model and the present TAUOLA setup. Contrarily to the $\pi \pi \pi$ final state, which is dominated by the hadronization of the axial-vector current, $\tau\to K K \pi\nu_\tau$ decays receive contributions from both vector and axial-vector currents. Indeed, one of the goals of our work is to find out the relative weight of those contributions. Fortunately we will be assisted in this task by the recent analysis of $e^+ e^- \rightarrow K K \pi$ cross-section by BABAR \cite{Aubert:2007ym} where a separation between isoscalar and isovector channels has been performed. Hence we will be able to connect both processes through CVC. \par In Section~2 we introduce the observables to be considered and the framework settled by the procedure sketched above. Then the amplitudes for $K K \pi$ decay channels are evaluated in Section~3. An analysis of how we can get information on the resonance couplings appearing in the hadronization of the currents is performed in Section~4. Finally we explain our results in Section~5, and our conclusions are pointed out in Section~6. Four technical appendices complete our exposition. \section{Theoretical framework} \label{sect:2} \hspace*{0.5cm}The hadronization of the currents that rule semileptonic tau decays is driven by non-perturbative QCD. As mentioned in the Introduction, our methodology stands on the construction of an action, with the relevant degrees of freedom, led by the chiral symmetry and the known asymptotic behaviour of form factors and Green functions driven by large $N_C$ QCD. We will limit ourselves to those pieces of the action that are relevant for the study of decays of the tau lepton into three pseudoscalar mesons. Hence we will need to include both even- and odd-intrinsic parity sectors. \par The large $N_C$ expansion of $SU(N_C)$ QCD implies that, in the $N_C \rightarrow \infty$ limit, the study of Green functions of QCD currents can be carried out through the tree level diagrams of a Lagrangian theory that includes an infinite spectrum of non-decaying states \cite{'tHooft:1973jz}. Hence the study of the resonance energy region can be performed by constructing such a Lagrangian theory. The problem is that we do not know how to implement an infinite spectrum in a model-independent way. However, it is well known from the phenomenology that the main role is always played by the lightest resonances. Accordingly it was suggested in Refs.~\cite{Ecker:1988te,Donoghue:1988ed} that one can construct a suitable effective Lagrangian involving the lightest nonets of resonances and the octet of Goldstone bosons states ($\pi$, $K$ and $\eta$). This is indeed an appropriate tool to handle the hadron decays of the tau lepton. The guiding principle in the construction of such a Lagrangian is chiral symmetry. When resonances are integrated out from the theory, i.e. 
one tries to describe the energy region below such states ($E \ll M_{\rho}$), the remaining setting is that of $\chi$PT, to which now we turn. \par The very low-energy strong interaction in the light quark sector is known to be ruled by the $SU(3)_L\otimes SU(3)_R$ chiral symmetry of massless QCD implemented in $\chi$PT. The leading even-intrinsic-parity ${\cal O}(p^2)$ Lagrangian, which carries the information of the spontaneous symmetry breaking of the theory, is~: \begin{equation} \label{eq:op2} {\cal L}_{\chi {\rm PT}}^{(2)}=\frac{F^2}{4}\langle u_{\mu} u^{\mu} + \chi _+ \rangle \ , \end{equation} where \begin{eqnarray} u_{\mu} & = & i [ u^{\dagger}(\partial_{\mu}-i r_{\mu})u- u(\partial_{\mu}-i \ell_{\mu})u^{\dagger} ] \ , \nonumber \\ \chi_{\pm} & = & u^{\dagger}\chi u^{\dagger}\pm u\chi^{\dagger} u\ \ \ \ , \ \ \ \ \chi=2B_0(s+ip) \; \; , \end{eqnarray} and $\langle \ldots \rangle$ is short for a trace in the flavour space. The Goldstone octet of pseudoscalar fields \begin{equation} \label{eq:phi_matrix} \Phi(x) = \left( \begin{array}{ccc} \displaystyle\frac{1}{\sqrt 2}\,\pi^0 + \displaystyle\frac{1}{\sqrt 6}\,\eta_8 & \pi^+ & K^+ \\ \pi^- & - \displaystyle\frac{1}{\sqrt 2}\,\pi^0 + \displaystyle\frac{1}{\sqrt 6}\,\eta_8 & K^0 \\ K^- & \bar{K}^0 & - \displaystyle\frac{2}{\sqrt 6}\,\eta_8 \end{array} \right) \ , \end{equation} is realized non--linearly into the unitary matrix in the flavour space \begin{equation} u(\varphi)=\exp \left\{ \frac{i}{\sqrt{2}\,F} \Phi(x) \right\} \; \; \; , \end{equation} which under chiral rotations transforms as \begin{equation} u(\varphi) \to g_R\, u(\varphi)\, h(g,\varphi)^\dagger = h(g,\varphi)\, u(\varphi)\, g_L^\dagger \; \; , \end{equation} with $g \equiv (g_L,g_R) \, \in \, SU(3)_{\mathrm{L}} \otimes SU(3)_{\mathrm{R}}$ and $h(g,\varphi)\,\in \, SU(3)_V$. External hermitian matrix fields $r_{\mu}$, $\ell_{\mu}$, $s$ and $p$ promote the global $SU(3)_{\mathrm{L}} \otimes SU(3)_{\mathrm{R}}$ symmetry to a local one. Thus, interactions with electroweak bosons can be accommodated through the vector $v_{\mu} = (r_{\mu} + \ell_{\mu}) / 2$ and axial--vector $a_{\mu} = (r_{\mu} - \ell_{\mu}) / 2$ fields. The scalar field $s$ incorporates explicit chiral symmetry breaking through the quark masses taking $s = {\cal M} \, + \ldots$, with ${\cal M} = \mathrm{diag}(m_u,m_d,m_s)$ and, finally, at lowest order in the chiral expansion $F = F_{\pi} = 92.4$~MeV is the pion decay constant and $B_0 F^2 = - \langle 0 | \bar{\psi}\psi | 0 \rangle_0$. \par The leading action in the odd-intrinsic-parity sector arises at ${\cal O}(p^4)$. This is given by the chiral anomaly \cite{Wess:1971yu} and explicitly stated by the Wess-Zumino-Witten ${\cal Z}_{WZ}[v,a]$ functional that can be read in Ref.~\cite{Ecker:1994gg}. This contains all anomalous contributions to electromagnetic and semileptonic meson decays. \par It is well known \cite{Ecker:1988te,Cirigliano:2006hb} that higher orders in the chiral expansion, i.e.\ even-intrinsic-parity ${\cal L}_{\chi PT}^{(n)}$ with $n>2$, bring in the information of heavier degrees of freedom that have been integrated out, for instance resonance states. As our next step intends to include the latter explicitly, to avoid double counting issues we will not consider higher orders in $\chi$PT. As we comment below, in order to fulfill this procedure ---at least, up to ${\cal O}(p^4)$--- it is convenient to use the antisymmetric tensor representation for the $J=1$ fields. 
Analogously, additional odd-intrinsic-parity amplitudes arise at ${\cal O}(p^6)$ in $\chi$PT, either from one-loop diagrams using one vertex from the Wess-Zumino-Witten action or from tree-level operators \cite{Bijnens:2001bb}. However, we will assume that the latter are fully generated by resonance contributions \cite{RuizFemenia:2003hm} and, therefore, they will not be included in the following. \par The formulation of a Lagrangian theory that includes both the octet of Goldstone mesons and $U(3)$ nonets of resonances is carried out through the construction of a phenomenological Lagrangian \cite{Coleman:1969sm} where chiral symmetry determines the structure of the operators. Given the vector character of the Standard Model (SM) couplings of the hadron matrix elements in $\tau$ decays, form factors for these processes are ruled by vector and axial-vector resonances. Nevertheless, those form factors are given, in the $\tau \rightarrow PPP \nu_{\tau}$ decays, by a four-point Green function in which other quantum numbers might play a role, namely scalar and pseudoscalar resonances \cite{Jamin:2000wn}. However their contribution should be minor for $\tau \rightarrow K K \pi \nu_{\tau}$. Indeed the lightest scalar\footnote{As we assume the $N_C \rightarrow \infty$ limit, the nonet of scalars corresponding to the $f_0(600)$ is not considered. This multiplet is generated by rescattering of the lightest pseudoscalars and is therefore subleading in the $1/N_C$ expansion.}, namely $f_0(980)$, couples dominantly to two pions, and therefore its role in the $K \overline{K} \pi$ final state should be negligible. Heavier flavoured or unflavoured scalars and pseudoscalars are at least suppressed by their masses, being heavier than the axial-vector meson $a_1(1260)$ (like the $K_0^*(1430)$, which couples to $K \pi$). In addition the couplings of unflavoured states to $K \overline{K}$ (scalars) and $K \overline{K} \pi$ (pseudoscalars) seem to be very small \cite{PDG2008}. Thus in our description we include $J=1$ resonances only \footnote{If the study of these processes requires a more accurate description, additional resonances could also be included in our scheme.}, and this is done by considering a nonet of fields \cite{Ecker:1988te}~: \begin{equation} R \,\equiv \, \frac{1}{\sqrt{2}} \, \sum_{i=1}^{9} \lambda_i \, \phi_{R,i}\; , \end{equation} where $R = V, A$ stands for the vector and axial-vector resonance states. Under the $SU(3)_L \otimes SU(3)_R$ chiral group, $R$ transforms as~: \begin{equation} \label{eq:rtrans} R \, \rightarrow \, h(g,\varphi) \, R \, h(g,\varphi)^{\dagger} \; . \end{equation} The flavour structure of the resonances is analogous to that of the Goldstone bosons in Eq.~(\ref{eq:phi_matrix}). We also introduce the covariant derivative \begin{eqnarray} \nabla_\mu X & \equiv & \partial_{\mu} X + [\Gamma_{\mu}, X] \; \; , \\ \Gamma_\mu & = & \frac{1}{2} \, [ \, u^\dagger (\partial_\mu - i r_{\mu}) u + u (\partial_\mu - i \ell_{\mu}) u^\dagger \,] \; \; ,\nonumber \end{eqnarray} acting on any object $X$ that transforms as $R$ in Eq.~(\ref{eq:rtrans}), like $u_{\mu}$ and $\chi_{\pm}$. 
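\par As a simple numerical cross-check of the conventions introduced above, one can verify that the matrix $\Phi(x)$ of Eq.~(\ref{eq:phi_matrix}) is traceless and hermitian and that $u(\varphi)$ is unitary. The minimal sketch below does so for arbitrary placeholder field values; it is meant only as an illustration of the nonlinear realization, not as part of any computation performed in this work.
\begin{verbatim}
# Minimal cross-check of the nonlinear realization u(phi) = exp(i Phi/(sqrt(2) F)).
# The field values are arbitrary placeholders (GeV), chosen real and symmetric
# so that Phi is hermitian.
import numpy as np
from scipy.linalg import expm

F = 0.0924   # GeV, pion decay constant at lowest order

def goldstone_matrix(pi0, eta8, pip, pim, kp, km, k0, k0bar):
    """Octet matrix Phi(x) with the flavour structure quoted in the text."""
    return np.array([
        [pi0/np.sqrt(2) + eta8/np.sqrt(6), pip,                               kp],
        [pim,                              -pi0/np.sqrt(2) + eta8/np.sqrt(6), k0],
        [km,                               k0bar,                             -2*eta8/np.sqrt(6)],
    ])

phi = goldstone_matrix(0.01, 0.02, 0.03, 0.03, 0.015, 0.015, 0.005, 0.005)
u = expm(1j*phi/(np.sqrt(2)*F))

print(np.isclose(np.trace(phi), 0.0))           # True: Phi is traceless
print(np.allclose(u.conj().T @ u, np.eye(3)))   # True: u is unitary
\end{verbatim}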
The kinetic terms for the spin-1 resonances in the Lagrangian read~: \begin{equation} \label{eq:lag0} {\cal L}_{ \rm kin}^R = -\frac{1}{2} \langle \, \nabla^\lambda R_{\lambda\mu} \nabla_\nu R^{\nu\mu} \, \rangle + \frac{M_R^2}{4} \, \langle \, R_{\mu\nu} R^{\mu\nu}\, \rangle \; \; \; , \; \; \; \; \; R \, = \, V,A \; , \end{equation} $M_V$, $M_A$ being the masses of the nonets of vector and axial--vector resonances in the chiral and large-$N_C$ limits, respectively. Notice that we describe the resonance fields through the antisymmetric tensor representation. With this description one is able to collect, upon integration of resonances, the bulk of the low-energy couplings at ${\cal O}(p^4)$ in $\chi$PT without the inclusion of additional local terms \cite{Ecker:1989yg}. In fact it is necessary to use this representation if one does not include the ${\cal L}_{\chi PT}^{(4)}$ in the Lagrangian theory. Though analogous studies at higher chiral orders have not been carried out, we will assume that no ${\cal L}_{\chi PT}^{(n)}$ terms with $n=4,6,...$ in the even-intrinsic-parity and $n=6,8,...$ in the odd-intrinsic-parity sectors need to be included in the theory. \par The construction of the interaction terms involving resonance and Goldstone fields is driven by chiral and discrete symmetries with a generic structure given by~: \begin{equation} {\cal O}_i \, \sim \, \langle \, R_1 R_2 ... R_j \, \chi^{(n)}(\varphi) \, \rangle \, , \end{equation} where $\chi^{(n)}(\varphi)$ is a chiral tensor that includes only Goldstone and auxiliary fields. It transforms like $R$ in Eq.~(\ref{eq:rtrans}) and has chiral counting $n$ in the framework of $\chi$PT. This counting is relevant in the setting of the theory because, though the resonance theory itself has no perturbative expansion, higher values of $n$ may give rise to violations of the proper asymptotic behaviour of form factors or Green functions. As a guide we will include at least those operators that, while contributing to our processes, are leading when the resonances are integrated out. In addition we do not include operators with higher-order chiral tensors, $\chi^{(n)}(\varphi)$, which would violate the QCD asymptotic behaviour unless their couplings are severely fine-tuned to ensure the needed cancellations of large momenta. In the odd-intrinsic-parity sector, which gives the vector form factor, this amounts to including all $\langle R \chi^{(4)} \rangle$ and $\langle RR \chi^{(2)} \rangle$ terms. In the even-intrinsic-parity couplings, giving the axial-vector form factors, these are the terms $\langle R \chi^{(2)} \rangle$. However previous analyses of the axial-vector contributions \cite{GomezDumm:2003ku, Cirigliano:2004ue} show the relevant role of the $\langle RR \chi^{(2)} \rangle$ terms, which, accordingly, are also considered here~\footnote{Operators $\langle R \chi^{(4)} \rangle$, which are non-leading and have a worse high-energy behaviour, are not included in the even-intrinsic-parity contributions, as they have not played any role in previous related analyses.}. \par We also assume exact $SU(3)$ symmetry in the construction of the interacting terms, i.e. at the level of the couplings. Deviations from exact symmetry in hadronic tau decays have been considered in Ref.~\cite{Moussallam:2007qc}. However, we do not include $SU(3)$-breaking couplings because we are not considering next-to-leading order corrections in the $1/N_C$ expansion either. \par The lowest order interaction operators linear in the resonance fields have the structure $\langle R \chi^{(2)}(\varphi)\rangle$. 
There are no odd-intrinsic-parity terms of this form. The even-intrinsic-parity Lagrangian includes three coupling constants \cite{Ecker:1988te}~: \begin{eqnarray} \label{eq:lag1} {\cal L}_2^{\mbox{\tiny V}} & = & \frac{F_V}{2\sqrt{2}} \langle V_{\mu\nu} f_+^{\mu\nu}\rangle + i\,\frac{G_V}{\sqrt{2}} \langle V_{\mu\nu} u^\mu u^\nu\rangle \; \, , \nonumber \\ {\cal L}_2^{\mbox{\tiny A}} & = & \frac{F_A}{2\sqrt{2}} \langle A_{\mu\nu} f_-^{\mu\nu}\rangle \;, \end{eqnarray} where $f_\pm^{\mu\nu} = u F_L^{\mu\nu} u^\dagger \pm u^\dagger F_R^{\mu\nu} u$ and $F_{R,L}^{\mu \nu}$ are the field strength tensors associated with the external right- and left-handed auxiliary fields. All coupling parameters $F_V$, $G_V$ and $F_A$ are real. \par The leading odd-intrinsic-parity operators, linear in the resonance fields, have the form $\langle R \chi^{(4)}(\varphi)\rangle$. We will need those pieces that generate~: i) the vertex with one vector resonance and three pseudoscalar fields; ii) the vertex with one vector resonance, a vector current and one pseudoscalar. The minimal Lagrangian with these features is~: \begin{equation} \label{eq:l4odd} {\cal L}_4^{\mbox{\tiny V}} = \sum_{i=1}^5 \, \frac{g_i}{M_V} \, {\cal O}^i_{\mbox{\tiny VPPP}} \; + \; \sum_{i=1}^{7} \,\frac{c_i}{M_{V}}\,{\cal O}^i_{\mbox{\tiny{VJP}}} \, , \end{equation} where $g_i$ and $c_i$ are real adimensional couplings, and the operators read \begin{itemize} \item[1/] VPPP terms \begin{eqnarray} \label{eq:omegappp} {\cal O}_{\mbox{\tiny VPPP}}^1 & = & i \, \varepsilon_{\mu\nu\alpha\beta} \, \left\langle V^{\mu\nu} \, \left( \, h^{\alpha\gamma} u_{\gamma} u^{\beta} - u^{\beta} u_{\gamma} h^{\alpha\gamma} \right) \right\rangle \, , \nonumber \\ [2mm] {\cal O}_{\mbox{\tiny VPPP}}^2 & = & i \, \varepsilon_{\mu\nu\alpha\beta} \, \left\langle V^{\mu\nu} \, \left( \, h^{\alpha\gamma} u^{\beta} u_{\gamma} - u_{\gamma} u^{\beta} h^{\alpha\gamma} \, \right) \right\rangle \, , \nonumber \\ [2mm] {\cal O}_{\mbox{\tiny VPPP}}^3 & = & i \, \varepsilon_{\mu\nu\alpha\beta} \, \left\langle V^{\mu\nu} \, \left( \, u_{\gamma} h^{\alpha\gamma} u^{\beta} - u^{\beta} h^{\alpha\gamma} u_{\gamma} \, \right) \right\rangle \, , \nonumber \\ [2mm] {\cal O}_{\mbox{\tiny VPPP}}^4 & = & \varepsilon_{\mu\nu\alpha\beta} \, \left\langle \left\lbrace \,V^{\mu\nu} \,,\, u^{\alpha}\, u^{\beta}\, \right\rbrace \,{\cal \chi}_{-} \right\rangle \, , \nonumber \\ [2mm] {\cal O}_{\mbox{\tiny VPPP}}^5 & = & \varepsilon_{\mu\nu\alpha\beta} \, \left\langle \,u^{\alpha}\,V^{\mu\nu} \, u^{\beta}\, {\cal \chi}_{-} \right\rangle \, , \label{eq:VPPP} \end{eqnarray} with $h_{\mu \nu} = \nabla_{\mu} u_{\nu} + \nabla_{\nu} u_{\mu}$, and \item[2/] VJP terms \cite{RuizFemenia:2003hm} \begin{eqnarray} {\cal O}_{\mbox{\tiny VJP}}^1 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{V^{\mu\nu},f_{+}^{\rho\alpha}\} \nabla_{\alpha}u^{\sigma}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VJP}}^2 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{V^{\mu\alpha},f_{+}^{\rho\sigma}\} \nabla_{\alpha}u^{\nu}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VJP}}^3 & = & i\,\varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{V^{\mu\nu},f_{+}^{\rho\sigma}\}\, \chi_{-}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VJP}}^4 & = & i\,\varepsilon_{\mu\nu\rho\sigma}\, \langle \, V^{\mu\nu}\,[\,f_{-}^{\rho\sigma}, \chi_{+}]\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VJP}}^5 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{\nabla_{\alpha}V^{\mu\nu},f_{+}^{\rho\alpha}\} 
u^{\sigma}\,\rangle \; \; ,\nonumber\\[2mm] {\cal O}_{\mbox{\tiny VJP}}^6 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{\nabla_{\alpha}V^{\mu\alpha},f_{+}^{\rho\sigma}\} u^{\nu}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VJP}}^7 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{\nabla^{\sigma}V^{\mu\nu},f_{+}^{\rho\alpha}\} u_{\alpha}\,\rangle \;\; . \label{eq:VJP} \end{eqnarray} \end{itemize} Notice that we do not include analogous pieces with an axial-vector resonance, that would contribute to the hadronization of the axial-vector current. This has been thoroughly studied in Ref.~\cite{GomezDumm:2003ku} (see also Ref.~\cite{shortp}) in the description of the $\tau \rightarrow \pi \pi \pi \nu_{\tau}$ process and it is shown that no $\langle A \chi^{(4)}(\varphi) \rangle$ operators are needed to describe its hadronization. Therefore those operators are not included in our minimal description of the relevant form factors. \par In order to study tau decay processes with three pseudoscalar mesons in the final state one also has to consider non-linear terms in the resonance fields. Indeed the hadron final state in $\tau \rightarrow PPP \nu_{\tau}$ decays can be driven by vertices involving two resonances and a pseudoscalar meson. The structure of the operators that give those vertices is $\langle R_1 R_2 \chi^{(2)}(\varphi) \rangle$, and has been worked out before \cite{GomezDumm:2003ku,RuizFemenia:2003hm}. They include both even- and odd-intrinsic-parity terms~: \begin{equation} \label{eq:lag21} {\cal L}_2^{\mbox{\tiny RR}} \, = \, \sum_{i=1}^{5} \, \lambda_i \, {\cal O}^i_{\mbox{\tiny VAP}} \; \; + \; \sum_{i=1}^{4} \,d_i\,{\cal O}^i_{\mbox{\tiny{VVP}}}\; , \end{equation} where $\lambda_i$, and $d_i$ are unknown real adimensional couplings. The operators ${\cal O}^i_{\mbox{\tiny RRP}}$ are given by~: \begin{itemize} \item[1/] VAP terms \begin{eqnarray} \label{eq:VAP} {\cal O}^1_{\mbox{\tiny VAP}} & = & \langle \, [ \, V^{\mu\nu} \, , \, A_{\mu\nu} \, ] \, \chi_- \, \rangle \; \; , \nonumber \\ [2mm] {\cal O}^2_{\mbox{\tiny VAP}} & = & i\,\langle \, [ \, V^{\mu\nu} \, , \, A_{\nu\alpha} \, ] \, h_\mu^{\;\alpha} \, \rangle \; \; , \\ [2mm] {\cal O}^3_{\mbox{\tiny VAP}} & = & i \,\langle \, [ \, \nabla^\mu V_{\mu\nu} \, , \, A^{\nu\alpha}\, ] \, u_\alpha \, \rangle \; \; , \nonumber \\ [2mm] {\cal O}^4_{\mbox{\tiny VAP}} & = & i\,\langle \, [ \, \nabla^\alpha V_{\mu\nu} \, , \, A_\alpha^{\;\nu} \, ] \, u^\mu \, \rangle \; \; , \nonumber \\ [2mm] {\cal O}^5_{\mbox{\tiny VAP}} & = & i \,\langle \, [ \, \nabla^\alpha V_{\mu\nu} \, , \, A^{\mu\nu} \, ] \, u_\alpha \, \rangle \nonumber \; \; . \end{eqnarray} \item[2/] VVP terms \begin{eqnarray} {\cal O}_{\mbox{\tiny VVP}}^1 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{V^{\mu\nu},V^{\rho\alpha}\} \nabla_{\alpha}u^{\sigma}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VVP}}^2 & = & i\,\varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{V^{\mu\nu},V^{\rho\sigma}\}\, \chi_{-}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VVP}}^3 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{\nabla_{\alpha}V^{\mu\nu},V^{\rho\alpha}\} u^{\sigma}\,\rangle \; \; , \nonumber\\[2mm] {\cal O}_{\mbox{\tiny VVP}}^4 & = & \varepsilon_{\mu\nu\rho\sigma}\, \langle \, \{\nabla^{\sigma}V^{\mu\nu},V^{\rho\alpha}\} u_{\alpha}\,\rangle \; \; . 
\label{eq:VVP} \end{eqnarray} \end{itemize} We emphasize that ${\cal L}_2^{\mbox{\tiny RR}}$ is a complete basis for constructing vertices with only one pseudoscalar meson; for a larger number of pseudoscalars additional operators might be added. As we are only interested in tree-level diagrams, the equation of motion arising from ${\cal L}_{\chi PT}^{(2)}$ in Eq.~(\ref{eq:op2}) has been used in ${\cal L}_4^{\mbox{\tiny V}}$ and ${\cal L}^{\mbox{\tiny RR}}_2$ to eliminate superfluous operators. \par Hence our theory is given by the Lagrangian~: \begin{equation} \label{eq:ourtheory} {\cal L}_{R \chi T} \, = \, {\cal L}_{\chi PT}^{(2)} \, + \, {\cal L }_{\mbox{\tiny kin}}^{\mbox{\tiny R}} \, + \, {\cal L}_2^{\mbox{\tiny A}} \, + \,{\cal L}_2^{\mbox{\tiny V}} \, + \, {\cal L}_4^{\mbox{\tiny V}} \, + \, {\cal L}_2^{\mbox{\tiny RR}} \, . \end{equation} It is important to point out that the resonance theory constructed above is not a theory of QCD for arbitrary values of the couplings in the interaction terms. As we will see later on, these constants can be constrained by imposing well accepted dynamical properties of the underlying theory. \section{Vector and axial-vector currents in $\tau \rightarrow K K \pi \, \nu_{\tau}$ decays} \label{sect:3} \hspace*{0.5cm} The decay amplitude for the $\tau \rightarrow KK \pi\nu_{\tau}$ decays can be written in the Standard Model as \begin{equation} {\cal M} \, = \, - \, \frac{G_F}{\sqrt{2}} \, V_{ud} \, \overline{u}_{\nu_{\tau}} \, \gamma^{\mu} \left( 1 \, - \, \gamma_5 \right) \, u_{\tau} \, T_{\mu} \; , \end{equation} where $V_{ud}$ is an element of the Cabibbo-Kobayashi-Maskawa matrix and $T_{\mu}$ is the hadron matrix element of the participating $V_{\mu} - A_{\mu}$ QCD current~: \begin{equation} T_{\mu} \, = \, \langle K(p_1) \, K(p_2) \, \pi(p_3) \, | \, \left( V_{\mu} - A_{\mu} \right) e^{i {\cal L}_{QCD}} \, | \, 0 \rangle \, . \end{equation} The hadron tensor can be written in terms of four form factors $F_1$, $F_2$, $F_3$ and $F_4$ as \cite{Kuhn:1992nz}~: \begin{equation} \label{eq:t3} T^{\mu} \, = \, V_1^{\mu} \, F_1 \, + \, V_2^{\mu} \, F_2 \, + \, V_3^{\mu} \, F_3 \, + \, Q^{\mu} \, F_4 \; , \end{equation} where $Q^{\mu} = p_1^{\mu} + p_2^{\mu} + p_3^{\mu}$ and \begin{eqnarray} \label{eq:v123} V_{1 \, \mu} & = & \left( \, g_{\mu \nu} - \frac{Q_{\mu} Q_{\nu}}{Q^2} \right) \, \left( p_2 - p_1 \right)^{\nu} \; , \; \; \; \; \; \; \; \; \; \; V_{2 \, \mu} \; = \; \left( \, g_{\mu \nu} - \frac{Q_{\mu} Q_{\nu}}{Q^2} \right) \, \left( p_3 - p_1 \right)^{\nu} \, , \nonumber \\[5mm] V_{3 \, \mu} &= & i \, \varepsilon_{\mu \nu \varrho \sigma} \, p_1^{\nu} \, p_2^{\varrho} \, p_3^{\sigma} \;. \end{eqnarray} There are three different charge channels for the $KK\pi$ decays of the $\tau^-$ lepton, namely $K^+ (p_+) \, K^-(p_-) \, \pi^-(p_{\pi})$, $K^0(p_0) \, \overline{K}^0(\overline{p}_0) \, \pi^-(p_{\pi})$ and $K^-(p_-) \, K^0(p_0) \, \pi^0(p_{\pi})$. The definitions of Eq.~(\ref{eq:v123}) correspond to the choice $p_3 = p_{\pi}$ in all cases, and~: $(p_1,p_2) = (p_+,p_-)$ for the $K^+ \, K^-$ case, $(p_1,p_2) = (\overline{p}_0,p_0)$ for $K^0 \, \overline{K}^0$ and $(p_1,p_2) = (p_-,p_0)$ for $K^- \, K^0$. In general, form factors $F_i$ are functions of the kinematical invariants~: $Q^2$, $s=(p_1+p_2)^2$ and $t=(p_1+p_3)^2$. $F_1$ and $F_2$ originate from the axial-vector current, while $F_3$ follows from the vector current. All of them correspond to spin-1 transitions. 
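\par The transversality of $V_{1\,\mu}$ and $V_{2\,\mu}$ with respect to $Q^{\mu}$, and the relation $Q^{\mu} V_{3\,\mu}=0$ that follows from $Q=p_1+p_2+p_3$, can be checked numerically. A minimal sketch is given below; the three-momenta are arbitrary placeholders, and the metric signature and Levi-Civita convention are simply one possible choice.
\begin{verbatim}
# Minimal numerical check of the covariants V_1, V_2, V_3 defined above:
# V_1, V_2 are transverse to Q by construction, and Q.V_3 = 0 since Q = p1+p2+p3.
# Momenta are arbitrary placeholders (GeV); signature (+,-,-,-), eps_{0123} = +1.
import itertools
import numpy as np

mK, mpi = 0.4937, 0.1396
g = np.diag([1.0, -1.0, -1.0, -1.0])

def onshell(m, p):                 # four-momentum with p^2 = m^2
    p = np.asarray(p, dtype=float)
    return np.concatenate(([np.sqrt(m**2 + p @ p)], p))

def mdot(a, b):                    # Minkowski product a.b
    return a @ g @ b

eps = np.zeros((4, 4, 4, 4))       # Levi-Civita symbol
for perm in itertools.permutations(range(4)):
    inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1.0)**inv

p1 = onshell(mK,  [0.10, 0.20, -0.05])
p2 = onshell(mK,  [-0.15, 0.05, 0.30])
p3 = onshell(mpi, [0.05, -0.10, 0.10])
Q  = p1 + p2 + p3
Q2 = mdot(Q, Q)

def transverse(p):                 # covariant components of (g - Q Q / Q^2) p
    return g @ p - (g @ Q) * mdot(Q, p) / Q2

V1 = transverse(p2 - p1)
V2 = transverse(p3 - p1)
V3 = 1j * np.einsum('abcd,b,c,d->a', eps, p1, p2, p3)

print([abs(Q @ V) < 1e-12 for V in (V1, V2, V3)])   # [True, True, True]
\end{verbatim}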
The $F_4$ pseudoscalar form factor stems from the axial-vector current, and corresponds to a spin-0 transition. It is seen that this form factor vanishes in the chiral limit; therefore its contribution is expected to be heavily suppressed, and both the spectrum and the branching ratio of tau decays into three pseudoscalar mesons are dominated by $J=1$ transitions, especially in the Cabibbo-allowed modes. \par The $Q^2$-spectrum is given by~: \begin{equation} \frac{d \,\Gamma}{d \,Q^2} \, = \, \frac{G_F^2 \, |V_{ud}|^2}{128 \, (2 \pi)^5 \, M_{\tau}} \, \left( \frac{M_{\tau}^2}{Q^2}-1 \right)^2 \, \int \, ds \, dt \, \left[ W_{SA} + \frac{1}{3} \left( 1 + 2 \frac{Q^2}{M_{\tau}^2} \right) \left( W_A + W_B \right) \right] \, , \end{equation} where the hadron structure functions, introduced in Ref.~\cite{Kuhn:1992nz}, are~: \begin{eqnarray} W_A & = & - ( V_1^{\mu} F_1 + V_2^{\mu} F_2 ) ( V_{1\mu} F_1 + V_{2 \mu} F_2 )^* \, , \nonumber \\ [3mm] W_B & = & \frac{1}{4} \left[ \, s \, t \, u \, + \, (m_K^2 -m_{\pi}^2) \, ( Q^2-m_K^2) \, s \, + \, m_K^2 (2 m_{\pi}^2 - Q^2) \, Q^2 \, - \, m_K^2 m_{\pi}^4 \, \right] \, | F_3 |^2 \, , \nonumber \\[3mm] W_{SA} & = & (Q^{\mu} \, F_4) (Q_{\mu} F_4^*) \, = \, Q^2 \, |F_4|^2 \,, \end{eqnarray} where $u=Q^2 - s - t + 2 m_K^2 + m_{\pi}^2$. The phase-space integrals extend over the region spanned by the hadron system with a center-of-mass energy $\sqrt{Q^2}$~: \begin{equation} \label{eq:phaspace1} \int \, ds \, dt \, \equiv \, \int_{4 \, m_K^2}^{(\sqrt{Q^2}-m_{\pi})^2} ds \, \int_{t_{-}(s)}^{t_+(s)} dt \,, \end{equation} with \begin{equation} \label{eq:phaspace2} t_{\pm}(s) \, = \, \frac{1}{4 \, s} \left\{ \left(Q^2-m_{\pi}^2 \right)^2 \, - \, \left[ \lambda^{1/2}\left( Q^2, s, m_{\pi}^2 \right) \, \mp \, \lambda^{1/2} \left( m_K^2, m_K^2, s \right) \right]^2 \right\} \, , \end{equation} and $\lambda(a,b,c) = (a+b-c)^2-4ab$. We have neglected here the $\nu_{\tau}$ mass, and exact isospin symmetry has been assumed. \par The general structure of the form factors, within our model, arises from the diagrams displayed in Fig.~\ref{fig:feynman}. \begin{figure} \begin{center} \includegraphics[scale=0.9]{Figure1.eps} \caption[]{\label{fig:feynman} Topologies contributing to the final hadron state in $\tau \rightarrow K K \pi \, \nu_{\tau}$ decays in the $N_C \rightarrow \infty$ limit. A crossed circle indicates the QCD vector or axial-vector current insertion. A single line represents a pseudoscalar meson ($K$, $\pi$) while a double line stands for a resonance intermediate state. Topologies $b)$ and $e)$ only contribute to the axial-vector driven form factors, while diagram $d)$ arises only (as explained in the text) from the vector current.} \end{center} \end{figure} This provides the following decomposition~: \begin{equation} F_i \, = \, F_i^{\chi} \, + \, F_i^{\mbox{\tiny R}} \, + \, F_i^{\mbox{\tiny RR}} \;, \; \; \, i=1,... \, , \end{equation} where $F_i^{\chi}$ is given by the $\chi$PT Lagrangian [topologies $a)$ and $b)$ in Fig.~\ref{fig:feynman}], and the rest are the contributions of one [Fig.~1$c)$, $d)$ and $e)$] or two resonances [Fig.~1$f)$]. \subsection{Form factors in $\tau^- \rightarrow K^+ K^- \, \pi^- \, \nu_{\tau}$ and $\tau^- \rightarrow K^0 \,\overline{K^0} \, \pi^- \, \nu_{\tau}$} \hspace*{0.5cm} In the isospin limit, form factors for the $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ and $\tau^- \rightarrow K^0 \overline{K^0} \pi^- \nu_{\tau}$ decays are identical. 
The explicit expressions for these are~: \begin{eqnarray} \label{eq:f11} F_1^{\chi} &= & - \frac{\sqrt{2}}{3 \, F} \, , \nonumber \\ F_1^{\mbox{\tiny R}}(s,t) & = & - \, \frac{\sqrt{2}}{6} \, \frac{F_V\,G_V}{F^3} \, \left[ \, \frac{A^{\mbox{\tiny R}}(Q^2,s,u,m_K^2,m_{\pi}^2,m_K^2)}{M_{\rho}^{2}-s} \, + \, \frac{B^{\mbox{\tiny R}}(s,u,m_K^2,m_{\pi}^2)}{M_{K^{*}}^{2}-t} \,\right] , \, \\[4mm] F_1^{\mbox{\tiny RR}}(s,t) & = & \frac{2}{3} \, \frac{F_A G_V}{F^3} \, \frac{Q^2}{M_{a_1}^2-Q^2} \, \, \left[ \, \frac{A^{\mbox{\tiny RR}}(Q^2,s,u,m_K^2,m_{\pi}^2,m_K^2) }{M_{\rho}^2-s} \, \right. \nonumber \\ & & \qquad \qquad \qquad \qquad \qquad\left. + \, \frac{B^{\mbox{\tiny RR}}(Q^2,s,u,t,m_K^2,m_{\pi}^2,m_K^2)}{M_{K^*}^2-t} \, \right] \, \, , \nonumber \end{eqnarray} where the functions $A^{\mbox{\tiny R}}$, $B^{\mbox{\tiny R}}$, $A^{\mbox{\tiny RR}}$ and $B^{\mbox{\tiny RR}}$ are defined in Appendix~\ref{ap:1}. The dependence of the form factors with $t$ follows from the relation $u = Q^2 - s - t + 2 m_K^2 + m_\pi^2$. Moreover resonance masses correspond to the lowest states, $M_{\rho} = M_{\rho (770)}$, $M_{K^*} = M_{K^*(892)}$ and $M_{a_1} = M_{a_1(1260)}$\footnote{ Resonance masses and widths within our approach are discussed in subsect.~3.3.}. \par Analogously the $F_2$ form factor is given by~: \begin{eqnarray} \label{eq:f21} F_2^{\chi} &= & F_1^{\chi} \, , \nonumber \\[3.5mm] F_2^{\mbox{\tiny R}}(s,t) & = & - \, \frac{\sqrt{2}}{6} \, \frac{F_V\,G_V}{F^3} \, \left[ \, \frac{B^{\mbox{\tiny R}}(t,u,m_K^2,m_K^2)}{M_{\rho}^{2}-s} \, + \, \frac{A^{\mbox{\tiny R}}(Q^2,t,u,m_K^2,m_K^2,m_{\pi}^2)}{M_{K^{*}}^{2}-t} \,\right] , \, \\[4mm] F_2^{\mbox{\tiny RR}}(s,t) & = & \frac{2}{3} \, \frac{F_A G_V}{F^3} \, \frac{Q^2}{M_{a_1}^2-Q^2} \, \, \left[ \, \frac{B^{\mbox{\tiny RR}}(Q^2,t,u,s,m_K^2,m_K^2,m_{\pi}^2) }{M_{\rho}^2-s} \, \right. \nonumber \\ & & \qquad \qquad \qquad \qquad \qquad\left. + \, \frac{A^{\mbox{\tiny RR}}(Q^2,t,u,m_K^2,m_K^2,m_{\pi}^2) }{M_{K^*}^2-t} \, \right] \, \, . \nonumber \end{eqnarray} The $F_3$ form factor arises from the chiral anomaly and the non-anomalous odd-intrinsic-parity amplitude. We obtain~: \begin{eqnarray} \label{eq:f31} F_3^{\chi} & = & - \frac{N_C \, \sqrt{2}}{12 \, \pi^2 \, F^3} \, , \nonumber \\[3.5mm] F_3^{\mbox{\tiny R}}(s,t) & = & - \frac{4 \, G_V}{M_V \, F^3} \, \left[ \, C^{\mbox{\tiny R}}(Q^2,s,m_K^2,m_K^2,m_{\pi}^2) \, \left( \sin^2 \theta_V \frac{1+ \sqrt{2} \cot \theta_V}{M_{\omega}^2-s} \right.\right. \nonumber \\ & & \qquad \qquad \; \; \left. + \, \cos^2 \theta_V \frac{1- \sqrt{2} \tan \theta_V }{M_{\phi}^2-s} \right) \, + \, \frac{C^{\mbox{\tiny R}}(Q^2,t,m_K^2,m_\pi^2,m_K^2)}{M_{K^*}^2-t} \nonumber \\ & & \qquad \qquad \; \; \left. \, - \, \frac{2 \, F_V}{G_V} \, \frac{D^{\mbox{\tiny R}}(Q^2,s,t)}{M_{\rho}^2-Q^2} \right] \, , \\[3.5mm] F_3^{\mbox{\tiny RR}}(s,t) & = & 4 \sqrt{2} \frac{F_V \, G_V}{F^3} \, \frac{1}{M_{\rho}^2 - Q^2} \, \left[ C^{\mbox{\tiny RR}}(Q^2,s,m_{\pi}^2) \, \left( \sin^2 \theta_V \frac{1+ \sqrt{2} \cot \theta_V}{M_{\omega}^2-s} \right. \right. \nonumber \\ && \qquad \qquad \qquad \qquad \qquad \; \left. \left. 
+ \, \cos^2 \theta_V \frac{1- \sqrt{2} \tan \theta_V }{M_{\phi}^2 -s} \right) \, + \, \frac{C^{\mbox{\tiny RR}}(Q^2,t,m_K^2)}{M_{K^*}^2-t} \right] \, , \nonumber \end{eqnarray} where $C^{\mbox{\tiny R}}$, $D^{\mbox{\tiny R}}$ and $C^{\mbox{\tiny RR}}$ are defined in Appendix~\ref{ap:1}, and $\theta_V$ is the mixing angle between the octet and singlet vector states $\omega_8$ and $\omega_0$ that defines the mass eigenstates $\omega(782)$ and $\phi(1020)$~: \begin{equation} \left( \begin{array}{c} \phi \\ \omega \end{array} \right)\, = \, \left( \begin{array}{cc} \cos \theta_V & - \sin \theta_V \\ \sin \theta_V & \cos \theta_V \end{array} \right) \; \left( \begin{array}{c} \omega_8 \\ \omega_0 \end{array} \right)\, . \end{equation} For numerical evaluations we will assume ideal mixing, i.e.\ $\theta_V = \tan^{-1}(1/\sqrt{2})$. In this case the contribution of the $\phi(1020)$ meson to $F_3$ vanishes. \par Finally, though we have not dwelled on specific contributions to the $F_4$ form factor, we quote for completeness the result obtained from our Lagrangian. Its structure is driven by the pion pole~: \begin{eqnarray} \label{eq:f41} F_4 & = & F_4^{\chi} + F_4^{\mbox{\tiny R}} \, , \nonumber \\[3mm] F_4^{\chi}(s,t) & = & \frac{1}{\sqrt{2} \, F} \frac{m_{\pi}^2}{m_{\pi}^2 - Q^2} \, \left( 1+ \frac{m_K^2 - u}{Q^2} \right) \, , \nonumber \\[3mm] F_4^{\mbox{\tiny R}}(s,t) & = & \frac{G_V^2}{\sqrt{2} \, F^3} \frac{m_{\pi}^2}{Q^2 (m_{\pi}^2 - Q^2)} \, \left[ \frac{s(t-u)}{M_{\rho}^2 -s} + \frac{t(s-u) - (m_K^2 - m_\pi^2)(Q^2-m_K^2)}{M_{K^*}^2-t} \right] \, . \end{eqnarray} \subsection{Form factors in $\tau^- \rightarrow K^- \,K^0 \, \pi^0 \, \nu_{\tau}$} The diagrams contributing to the $\tau^- \rightarrow K^- \,K^0 \, \pi^0 \, \nu_{\tau}$ decay amplitude are also those in Fig.~\ref{fig:feynman}, hence once again we can write $F_i \, =\, F_i^{\chi} \, + \, F_i^{\mbox{\tiny R}} \, + \, F_i^{\mbox{\tiny RR}} \, + \, \dots$. However, the structure of the form factors for this process does not show the symmetry observed in $\tau \rightarrow K \overline{K} \pi \nu_{\tau}$. We find~: \begin{eqnarray} \label{eq:f12} F_1^{\chi} & = & -\frac{1}{F} \, , \nonumber \\[3mm] F_1^{\mbox{\tiny R}}(s,t) & = & - \frac{1}{6} \frac{F_V G_V}{F^3} \, \left[ \, \frac{B^{\mbox{\tiny R}}(s,u,m_K^2,m_{\pi}^2)}{M_{K^{*}}^{2}-t} \, + \, 2 \; \frac{A^{\mbox{\tiny R}}(Q^2,s,u,m_K^2,m_\pi^2,m_K^2)}{M_\rho^{2}-s} \, \right. \nonumber \\ & & \left. \qquad\qquad\qquad + \, \frac{A^{\mbox{\tiny R}}(Q^2,u,s,m_\pi^2,m_K^2,m_K^2)}{M_{K^{*}}^{2}-u} \, \right] \, , \nonumber \\ [3.5mm] F_1^{\mbox{\tiny RR}}(s,t) & = & \frac{\sqrt{2}}{3} \frac{F_A G_V}{F^3} \frac{Q^2}{M_{a_1}^2-Q^2} \, \left[ \, \frac{B^{\mbox{\tiny RR}}(Q^2,s,u,t,m_K^2,m_{\pi}^2,m_K^2)}{M_{K^{*}}^{2}-t} \right. \nonumber \\ & & \qquad\qquad\qquad\qquad\qquad + \, 2 \, \frac{A^{\mbox{\tiny RR}}(Q^2,s,u,m_K^2,m_\pi^2,m_K^2)}{M_\rho^{2}-s} \, \nonumber \\ & & \left.\qquad\qquad\qquad\qquad\qquad + \, \frac{A^{\mbox{\tiny RR}}(Q^2,u,s,m_\pi^2,m_K^2,m_K^2)}{M_{K^{*}}^{2}-u} \, \right] \, , \end{eqnarray} \begin{eqnarray} \label{eq:f22} F_2^{\chi} & = & 0 \, , \nonumber \\ [3mm] F_2^{\mbox{\tiny R}}(s,t) & = & - \frac{1}{6} \frac{F_V G_V}{F^3} \, \left[ \, \frac{A^{\mbox{\tiny R}}(Q^2,t,u,m_K^2,m_K^2,m_{\pi}^2)}{M_{K^{*}}^{2}-t} \, + \, 2 \; \frac{B^{\mbox{\tiny R}}(t,u,m_K^2,m_K^2)}{M_\rho^{2}-s} \, \right. \nonumber \\ & & \left. 
\qquad\qquad\qquad - \, \frac{A^{\mbox{\tiny R}}(Q^2,u,t,m_K^2,m_K^2,m_\pi^2)}{M_{K^{*}}^{2}-u} \, \right] \, , \nonumber \\ [3.5mm] F_2^{\mbox{\tiny RR}}(s,t) & = & \frac{\sqrt{2}}{3} \frac{F_A G_V}{F^3} \frac{Q^2}{M_{a_1}^2-Q^2} \, \left[ \, \frac{A^{\mbox{\tiny RR}}(Q^2,t,u,m_K^2,m_K^2,m_{\pi}^2)}{M_{K^{*}}^{2}-t} \right. \nonumber \\ & & \qquad\qquad\qquad\qquad\qquad + \, 2 \; \frac{B^{\mbox{\tiny RR}}(Q^2,t,u,s,m_K^2,m_K^2,m_\pi^2)}{M_\rho^{2}-s} \, \nonumber \\ & & \left. \qquad\qquad\qquad\qquad\qquad - \, \frac{A^{\mbox{\tiny RR}}(Q^2,u,t,m_K^2,m_K^2,m_\pi^2)}{M_{K^{*}}^{2}-u} \, \right] \, . \end{eqnarray} \par The form factor driven by the vector current is given by~: \begin{eqnarray} \label{eq:f32} F_3^{\chi} & = & 0 \, \nonumber \\ [3mm] F_3^{\mbox{\tiny R}}(s,t) & = & \frac{2 \sqrt{2} \, G_V}{M_V \, F^3} \!\!\left[ \frac{C^{\mbox{\tiny R}}(Q^2,t,m_K^2,m_\pi^2,m_K^2)}{M_{K^*}^2-t} - \frac{C^{\mbox{\tiny R}}(Q^2,u,m_K^2,m_\pi^2,m_K^2)}{M_{K^*}^2-u} - \frac{2 F_V}{G_V} \frac{E^{\mbox{\tiny R}}(t,u)}{M_{\rho}^2-Q^2} \right] \, , \nonumber \\ [3mm] F_3^{\mbox{\tiny RR}}(s,t) & = & - 4 \frac{F_V G_V}{F^3} \frac{1}{M_{\rho}^2-Q^2} \left[ \frac{C^{\mbox{\tiny RR}}(Q^2,t,m_K^2)}{M_{K^*}^2-t} - \frac{C^{\mbox{\tiny RR}}(Q^2,u,m_K^2)}{M_{K^*}^2-u} \right] \, , \end{eqnarray} with $E^{\mbox{\tiny R}}$ defined in Appendix~\ref{ap:1}. \par Finally for the pseudoscalar form factor we have~: \begin{eqnarray} \label{eq:f42} F_4^{\chi}(s,t) & = & \frac{1}{2 \, F} \frac{m_{\pi}^2 \, (t-u)}{Q^2 ( m_{\pi}^2 - Q^2 )} \, , \nonumber \\ [3mm] F_4^{\mbox{\tiny R}}(s,t) & = & \frac{1}{2} \frac{G_V^2}{F^3} \frac{m_{\pi}^2}{Q^2 (m_{\pi}^2 - Q^2 )} \left[ \frac{t(s-u)-(m_K^2-m_{\pi}^2)(Q^2-m_K^2)}{M_{K^*}^2-t} + \frac{2 \, s (t-u)}{M_{\rho}^2-s} \right. \nonumber \\ [3mm] & & \qquad \qquad \qquad \qquad \; \; \; \,\left. - \frac{u(s-t)-(m_{K}^2-m_{\pi}^2)(Q^2-m_K^2)}{M_{K^*}^2-u} \right] \, . \end{eqnarray} \subsection{Features of the form factors} Several remarks are needed in order to understand our previous results for the form factors related with the vector and axial-vector QCD currents analysed above~: \begin{itemize} \item[1/] Our evaluation corresponds to the tree level diagrams in Fig.~\ref{fig:feynman} that arise from the $N_C \rightarrow \infty$ limit of QCD. Hence the masses of the resonances would be reduced to $M_V=M_{\rho}=M_{\omega}=M_{K^*}=M_{\phi}$ and $M_A=M_{a_1}$ as they appear in the resonance Lagrangian (\ref{eq:lag0}), i.e. the masses of the nonet of vector and axial-vector resonances in the chiral and large-$N_C$ limit. However it is easy to introduce NLO corrections in the $1/N_C$ and chiral expansions on the masses by including the {\em physical} ones~: $M_{\rho}$, $M_{K^*}$, $M_{\omega}$, $M_{\phi}$ and $M_{a_1}$ for the $\rho(770)$, $K^*(892)$, $\omega(782)$, $\phi(1020)$ and $a_1(1260)$ states, respectively, as we have done in the expressions of the form factors. In this setting resonances also have zero width, which represents a drawback if we intend to analyse the phenomenology of the processes~: Due to the high mass of the tau lepton, resonances do indeed resonate producing divergences if their width is ignored. Hence we will include energy-dependent widths for the $\rho(770)$, $a_1(1260)$ and $K^*(892)$ resonances, that are rather wide, and a constant width for the $\omega(782)$. This issue is discussed in Ref.~\cite{shortp}. 
\par In summary, to account for the inclusion of NLO corrections we perform the substitutions~: \begin{equation} \frac{1}{M_R^2-q^2} \; \; \longrightarrow \;\; \frac{1}{M_{phys}^2-q^2- \, i \, M_{phys} \, \Gamma_{phys}(q^2)} \; , \end{equation} where $R=V,A$, and the subscript {\em phys} on the right-hand side stands for the corresponding {\em physical} state depending on the relevant Feynman diagram. \item[2/] If we compare our results with those of Ref.~\cite{KS5}, evaluated within the KS model, we notice that the structure of our form factors is fairly different and much more intricate. This is due to the fact that the KS model, i.e.\ a model resulting from combinations of {\em ad hoc} products of Breit-Wigner functions, does not meet the higher-order chiral constraints enforced in our approach. \item[3/] As commented above, the pseudoscalar form factor $F_4$ vanishes in the chiral limit. Indeed the results of Eqs.~(\ref{eq:f41}, \ref{eq:f42}) show that it is proportional to $m_{\pi}^2$, which is tiny compared with any other scale in the amplitudes. Hence the contribution of $F_4$ to the structure of the spectra is actually marginal. \end{itemize} \section{QCD constraints and determination of resonance coupling constants} \label{sect:4} \hspace*{0.5cm} Our results for the form factors $F_i$ depend on several combinations of the coupling constants in our Lagrangian ${\cal L}_{R \chi T}$ in Eq.~(\ref{eq:ourtheory}), most of which are in principle unknown parameters. Now, if our theory offers an adequate effective description of QCD at hadron energies, the underlying theory of the strong interactions should give information on those constants. Unfortunately the determination of the effective parameters from first principles is still an open problem in hadron physics. \par A fruitful procedure when working with resonance Lagrangians has been to assume that the resonance region, even when one does not include the full phenomenological spectrum, provides a bridge between the chiral and perturbative regimes \cite{Ecker:1989yg}. The chiral constraints supply information on the structure of the interaction but do not provide any hint on the coupling constants of the Lagrangian. Indeed, as in any effective theory \cite{Georgi:1991ch}, the couplings encode information from high-energy dynamics. Our procedure amounts to matching the high-energy behaviour of Green functions (or related form factors) evaluated within the resonance theory with the asymptotic results of perturbative QCD. This strategy has proven to be phenomenologically sound \cite{Moussallam:1997xx,Knecht:2001xc,RuizFemenia:2003hm,Cirigliano:2004ue,Cirigliano:2005xn,Mateu:2007tr, Ecker:1989yg,Amoros:2001gf}, and it will be applied here in order to obtain information on the unknown couplings. \par Two-point Green functions of vector and axial-vector currents $\Pi_{V,A}(q^2)$ were studied within perturbative QCD in Ref.~\cite{Floratos:1978jb}, where it was shown that both spectral functions go to a constant value at infinite transfer of momenta~: \begin{equation} \Im m \, \Pi_{V,A}(q^2) \, \mapright{}{\; \; \; q^2 \rightarrow \infty \; \; \; } \, \, \frac{N_C}{12 \, \pi} \, . \end{equation} In a local duality interpretation, the imaginary part of the quark loop can be understood as the sum of infinitely many positive contributions of intermediate hadron states. Now, if the infinite sum is to behave like a constant as $q^2 \rightarrow \infty$, it is heuristically sound to expect that each one of the contributions vanishes in that limit. 
This deduction stems from the fact that vector and axial-vector form factors should behave smoothly at high $q^2$, a result previously put forward from parton dynamics in Ref.~\cite{Brodsky}. Accordingly in the $N_C \rightarrow \infty$ limit this result applies to our form factors evaluated at tree level in our framework. \par Other hints involving short-distance dynamics may also be considered. The analyses of three-point Green functions of QCD currents have become a useful procedure to determine coupling constants of the intermediate energy (resonance) framework \cite{Moussallam:1997xx,Knecht:2001xc,RuizFemenia:2003hm,Cirigliano:2004ue,Cirigliano:2005xn}. The idea is to use those functions (order parameters of the chiral symmetry breaking), evaluate them within the resonance framework and match this result with the leading term in the Operator Product Expansion (OPE) of the Green function. \par In the following we collect the information provided by these hints on our coupling constants, attaching always to the $N_C \rightarrow \infty$ case \cite{Pich:2002xy} (approximated with only one nonet of vector and axial-vector resonances)~: \begin{itemize} \item[i)] By demanding that the two-pion vector form factor vanishes at high $q^2$ one obtains the condition $F_V \, G_V = F^2$ involving the couplings in Eq.~(\ref{eq:lag1}) \cite{Ecker:1989yg}. \item[ii)] The first Weinberg sum rule \cite{Weinberg:1967kj} leads to $F_V^2 - F_A^2 = F^2$, and the second Weinberg sum rule gives $F_V^2 \, M_V^2 \, = \, F_A^2 \, M_A^2$ \cite{Ecker:1988te}. \item[iii)] The analysis of the VAP Green function \cite{Cirigliano:2004ue} gives for the combinations of couplings defined in Eq.~(\ref{eq:lambdias}) the following results~: \begin{eqnarray} \label{eq:lambres} \lambda' & = & \frac{F^2}{2 \, \sqrt{2} \, F_A \, G_V} \; = \; \frac{M_A}{2 \, \sqrt{2} \, M_V} \,, \nonumber \\[3.5mm] \lambda'' & = & \frac{2 \, G_V \, - F_V}{2 \, \sqrt{2} \, F_A} \; = \; \frac{M_A^2 - 2 M_V^2}{2 \, \sqrt{2} \, M_V \, M_A} \, , \nonumber \\[3.5mm] 4 \, \lambda_0 & = & \lambda' + \lambda'' \; , \end{eqnarray} where, in the two first relations, the second equalities come from using relations i) and ii) above. Here $M_V$ and $M_A$ are the masses appearing in the resonance Lagrangian (\ref{eq:lag0}). Contrarily to what happens in the vector case where $M_V$ is well approximated by the $\rho(770)$ mass, in Ref.~\cite{Mateu:2007tr} it was obtained $M_A = 998 (49) \, \mbox{MeV}$, hence $M_A$ differs appreciably from the presently accepted value of $M_{a_1}$. It is worth to notice that the two first relations in Eq.~(\ref{eq:lambres}) can also be obtained from the requirement that the $J=1$ axial spectral function in $\tau \rightarrow 3 \pi \nu_{\tau}$ vanishes for large momentum transfer \cite{GomezDumm:2003ku}. \item[iv)] Both vector form factors contributing to the final states $K \overline{K} \pi^-$ and $K^- K^0 \pi^0$ in tau decays, when integrated over the available phase space, should also vanish at high $Q^2$. Let us consider $H_{\mu \nu}^3(s,t,Q^2) \equiv T_{\mu}^3 T_{\nu}^{3 \, *}$, where $T_{\mu}^3$ can be inferred from Eq.~(\ref{eq:t3}). 
Then we define $\Pi_V(Q^2)$ by~: \begin{equation} \int \, \mbox{d}\Pi_3 \, H_{\mu \nu}^3(s,t,Q^2) \, = \, \left( Q^2 g_{\mu \nu} \, - \, Q_{\mu} Q_{\nu} \right) \, \Pi_V(Q^2) \, , \end{equation} where \begin{eqnarray} \int \mathrm{d}\Pi_3 \, & = & \, \int \frac{d^3 p_1}{2 E_1} \frac{d^3 p_2}{2 E_2} \frac{d^3 p_3}{2 E_3} \delta^4\left( Q-p_1-p_2-p_3\right) \delta \left( s-(Q-p_3)^2 \right) \delta \left( t-(Q-p_2)^2 \right) \, \nonumber \\ \, & = & \, \frac{\pi^2}{4 \,Q^2} \, \int ds \, dt \; . \end{eqnarray} Hence we find that \begin{equation} \Pi_V(Q^2) \, = \, \frac{\pi^2}{12 \, Q^4} \, \int \, \mathrm{ds \, dt} \, g^{\mu \nu} \, H_{\mu \nu}^3(s,t,Q^2) \, , \end{equation} where the limits of integration are those of Eq.~(\ref{eq:phaspace1}, \ref{eq:phaspace2}), should vanish at $Q^2 \rightarrow \infty$. This constraint determines several relations on the couplings that appear in the $F_3$ form factor, namely~: \begin{eqnarray} \label{eq:1c} c_1 \, - \, c_2 \, + \, c_5 \, & = & 0 \, , \\ \label{eq:2c} c_1 \, - \, c_2 \, - \, c_5 \, + \, 2 c_6 \, & = & - \, \frac{ \,N_C}{96 \, \pi^2} \, \frac{F_V \, M_V}{\sqrt{2} \, F^2} \, , \\ \label{eq:3c} d_3 & = & - \frac{N_C}{192 \, \pi^2} \, \frac{M_V^2}{F^2} \, , \\ \label{eq:4c} g_1 \, + \, 2 g_2 \, - g_3 & = & 0 \, , \\ \label{eq:5c} g_2 & = & \frac{N_C}{192 \,\sqrt{2} \, \pi^2} \, \frac{M_V}{F_V} \, . \end{eqnarray} If these conditions are satisfied, $\Pi_V(Q^2)$ vanishes at high transfer of momenta for both $K \overline{K} \pi^-$ and $K^- K^0 \pi^0$ final states. We notice that the result in Eq.~(\ref{eq:1c}) is in agreement with the corresponding relation in Ref.~\cite{RuizFemenia:2003hm}, while Eqs.~(\ref{eq:2c}) and (\ref{eq:3c}) do not agree with the results in that work. In this regard we point out that the relations in Ref.~\cite{RuizFemenia:2003hm}, though they satisfy the leading matching to the OPE expansion of the $\langle VVP \rangle$ Green function with the inclusion of one multiplet of vector mesons, do not reproduce the right asymptotic behaviour of related form factors. Indeed it has been shown \cite{Knecht:2001xc,Mateu:2007tr} that two multiplets of vector resonances are needed to satisfy both constraints. Hence we will attach to our results above, which we consider more reliable~\footnote{One of the form factors derived from the $\langle VVP \rangle$ Green function is ${\cal F}_{\pi \gamma^* \gamma}(q^2)$, that does not vanish at high $q^2$ with the set of relations in Ref.~\cite{RuizFemenia:2003hm}. With our conditions in Eqs.~(\ref{eq:2c},\ref{eq:3c}) the asymptotic constraint on the form factor can be satisfied if the large-$N_C$ masses, $M_A$ and $M_V$, fulfill the relation $2 M_A^2 = 3 M_V^2$. It is interesting to notice the significant agreement with the numerical values for these masses mentioned above.}. \item[v)] An analogous exercise to the one in iv) can be carried out for the axial-vector form factors $F_1$ and $F_2$. We have performed such an analysis and, using the relations in i) and ii) above, it gives us back the results provided in Eq.~(\ref{eq:lambres}) for $\lambda'$ and $\lambda''$. Hence both procedures give a consistent set of relations. \end{itemize} After imposing the above constraints, let us analyse which coupling combinations appearing in our expressions for the form factors are still unknown. We intend to write all the information on the couplings in terms of $F$, $M_V$ and $M_A$. 
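\par For orientation, the constraints collected above can already be evaluated numerically once reference values for $F$, $M_V$ and $M_A$ are chosen. The minimal sketch below uses $F=92.4$~MeV, $M_V \simeq M_{\rho(770)}$ and $M_A \simeq 998$~MeV (the value quoted above from Ref.~\cite{Mateu:2007tr}); the numbers it produces are illustrative only and are not used as fit results anywhere in this work.
\begin{verbatim}
# Minimal numerical sketch of the short-distance relations quoted above:
# the Weinberg sum rules together with F_V G_V = F^2, the lambda', lambda''
# relations, and the constraints fixing d_3 and g_2.  Inputs are reference
# values used for orientation only, not the result of a fit.
import numpy as np

NC = 3
F, MV, MA = 0.0924, 0.775, 0.998          # GeV

FV = F * MA / np.sqrt(MA**2 - MV**2)      # from the two Weinberg sum rules
FA = F * MV / np.sqrt(MA**2 - MV**2)
GV = F**2 / FV                            # from F_V G_V = F^2

lam_p  = MA / (2.0*np.sqrt(2.0)*MV)                        # lambda'
lam_pp = (MA**2 - 2.0*MV**2) / (2.0*np.sqrt(2.0)*MV*MA)    # lambda''
d3 = -NC * MV**2 / (192.0 * np.pi**2 * F**2)               # constraint on d_3
g2 =  NC * MV / (192.0 * np.sqrt(2.0) * np.pi**2 * FV)     # constraint on g_2

print(FV, FA, GV)
print(lam_p, lam_pp, d3, g2)
\end{verbatim}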
From the relations involving $F_V$, $F_A$ and $G_V$ we obtain~: \begin{eqnarray} \label{eq:fvfagv} \frac{F_V^2}{F^2} & = & \frac{M_A^2}{M_A^2-M_V^2} \, , \nonumber \\ \frac{F_A^2}{F^2} & = & \frac{M_V^2}{M_A^2-M_V^2} \, , \nonumber \\ \frac{G_V^2}{F^2} & = & 1 \, - \, \frac{M_V^2}{M_A^2} \, . \end{eqnarray} Moreover we know that $F_V$ and $G_V$ have the same sign, and we will assume that it is also the sign of $F_A$. Together with the relations in Eq.~(\ref{eq:lambres}) this determines completely the axial-vector form factors $F_{1,2}$. Now from Eqs.~(\ref{eq:1c}-\ref{eq:5c}) one can fix all the dominant pieces in the vector form factor $F_3$, i.e. those pieces that involve factors of the kinematical variables $s$, $t$ or $Q^2$. The unknown terms, that carry factors of $m_{\pi}^2$ or $m_K^2$, are expected to be less relevant. They are given by the combinations of couplings~: $c_1 + c_2 + 8 \, c_3 - c_5$, $d_1 + 8 \, d_2$, $c_4$ , $g_4$ and $g_5$. However small they may be, we will not neglect these contributions, and we will proceed as follows. Results in Ref.~\cite{RuizFemenia:2003hm} determine the first and the second coupling combinations. As commented above the constraints in that reference do not agree with those we have obtained by requiring that the vector form factor vanishes at high $Q^2$. However, they provide us an estimate to evaluate terms that, we recall, are suppressed by pseudoscalar masses. In this way, from a phenomenological analysis of $\omega \rightarrow \pi^+ \pi^- \pi^0$ (see Appendix~\ref{ap:3}) it is possible to determine the combination $2 \, g_4 + g_5$. Finally in order to evaluate $c_4$ and $g_4$ we will combine the recent analysis of $\sigma \left( e^+ e^- \rightarrow K K \pi \right)$ by BABAR \cite{Aubert:2007ym} with the information from the $\tau \rightarrow K K \pi \nu_{\tau}$ width. \subsection{Determination of $c_4$ and $g_4$} \label{sect:41} The separation of isoscalar and isovector components of the $e^+ e^- \rightarrow K K \pi$ amplitudes, carried out by BABAR \cite{Aubert:2007ym}, provides us with an additional tool for the estimation of the coupling constant $c_4$ that appears in the hadronization of the vector current. Indeed, using $SU(2)_I$ symmetry alone one can relate the isovector contribution to $\sigma \left( e^+ e^- \rightarrow K^- K^0 \pi^+ \right)$ with the vector contribution to $\Gamma \left( \tau^- \rightarrow K^0 K^- \pi^0 \nu_{\tau} \right)$ through the relation~: \begin{equation} \label{eq:CVC} \frac{\mathrm{d}}{\mathrm{d} \, Q^2} \, \Gamma \left( \tau^- \rightarrow K^0 K^- \pi^0 \nu_{\tau} \right) \Bigg|_{F_3} \, = \, f(Q^2) \; \sigma_{I=1} \left( e^+ e^- \rightarrow K^- K^0 \pi^+ \right) \; , \end{equation} where $f(Q^2)$ is given in Appendix~\ref{ap:4}. In this Appendix we also discuss other relations similar to Eq.~(\ref{eq:CVC}) that have been used in the literature and we point out the assumptions on which they rely. \par Hence we could use the isovector contribution to the cross-section for the process $e^+ e^- \rightarrow K_S K^{\pm} \pi^{\mp}$ determined by BABAR and Eq.~(\ref{eq:CVC}) to fit the $c_4$ coupling that is the only still undetermined constant in that process. However we have to take into account that our description for the hadronization of the vector current in the tau decay channel does not, necessarily, provide an adequate description of the cross-section. 
Indeed the completely different kinematics of the two observables suppresses the high-energy behaviour of the bounded tau decay spectrum, while this suppression does not occur in the cross-section. Accordingly, our description of the latter away from the energy threshold can be much poorer. As can be seen in Fig.~\ref{fig:ee} there is a clear structure in the experimental points of the cross-section that is not provided by our description. \begin{figure}[!t] \begin{center} \vspace*{0.2cm} \includegraphics[scale=0.45,angle=-90]{Figure2.eps} \caption[]{\label{fig:ee} \small{Comparison of the experimental data \cite{Aubert:2007ym} with the theoretical prediction for the cross-section of the isovector component of the $e^+ e^- \rightarrow K^*(892) K \rightarrow K_S K^{\pm} \pi^{\mp}$ process, for different values of the $c_4$ coupling. The $\chi^2$ values are associated with the first 6 data points only. }} \end{center} \end{figure} \par Taking into account the input parameters quoted in Eq.~(\ref{eq:set1}) we obtain~: $ c_4 \, = \, -0.047 \pm 0.002 $. The fit has been carried out for the first 6 bins (up to $E_{cm} \sim 1.52 \, \mbox{GeV}$). This result corresponds to $\chi^2 /dof = 0.3$ and the displayed error comes only from the fit. \par We now take into consideration the measured branching ratios for the $K K \pi$ channels of Table~\ref{tab:52} in order to extract information both from $c_4$ and $g_4$. We notice that it is not possible to reconcile our predictions for the branching ratios of $\tau \rightarrow K \overline{K} \pi \nu_{\tau}$ and $\tau \rightarrow K^- K^0 \pi^0 \nu_{\tau}$ with both measurements simultaneously, in spite of the noticeable size of the errors shown in Table~\ref{tab:52}. Considering that the second process was measured long ago and that the $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ decay has been the focus of both CLEO III and BABAR, we choose to fit the branching ratio of the latter. For the parameter values~: \begin{eqnarray} \label{eq:c4g4I} c_4 & = & -0.07 \pm 0.01 \, , \nonumber \\ g_4 & = & -0.72 \pm 0.20 \, , \end{eqnarray} we find a good agreement with the measured widths $\Gamma(\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau})$ and $\Gamma(\tau \rightarrow K^- K^0 \pi^0 \nu_{\tau})$ within errors (see Table~\ref{tab:52}). Notice that the value of $|c_4|$ is larger than that obtained from the fit to the $e^+ e^- \rightarrow K_S K^{\pm} \pi^{\mp}$ data explained above. In Fig.~\ref{fig:ee} we show the first 8 bins in the isovector component of $e^+ e^- \rightarrow K_S K^{\pm} \pi^{\mp}$ and the theoretical curves for different values of the $c_4$ coupling. As our preferred result we choose the larger (in magnitude) value of $c_4$, that of Eq.~(\ref{eq:c4g4I}), since it provides a better agreement with the present measurement of $\Gamma(\tau^- \rightarrow K^- K^0 \pi^0 \nu_{\tau})$. Actually, one can expect a large uncertainty in the splitting of isospin amplitudes in the $e^+ e^- \rightarrow K_S K^{\pm} \pi^{\mp}$ cross-section (see Appendix~\ref{ap:4}). Taking into account this systematic error, it may well be that the theoretical curve with $c_4 = -0.07$ falls within the error bars for the first data points. \section{Phenomenology of $\tau \rightarrow K K \pi \nu_{\tau}$~: Results and their analysis} \label{sect:5} Asymmetric B-factories pursue an ambitious $\tau$ programme that includes the determination of the hadron structure of semileptonic $\tau$ decays such as the $K K \pi$ channel.
As commented in the Introduction the latest study of $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ by the CLEO III Collaboration \cite{Liu:2002mn} showed a disagreement between the KS model, included in TAUOLA, and the data. Experiments with higher statistics such as BABAR and Belle should clarify the theoretical settings. \par For the numerics in this Section we use the values in Appendix~\ref{ap:input}. At present no spectra for these channels is available and the determinations of the widths are collected in Table~\ref{tab:52}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline &&& \\[-2.5mm] Source & $\Gamma ( \tau^-\to K^+ K^- \pi^- \nu_\tau) \,$ & $ \Gamma ( \tau^-\to K^0 \overline{K}^0 \pi^- \nu_\tau) \,$ & $\Gamma ( \tau^-\to K^- K^0 \pi^0 \nu_\tau) \,$ \\ [1.5mm] \hline &&&\\ [-2.5mm] PDG \cite{PDG2008} & $3.103\,(136)$& $3.465 \, (770) $& $3.262 \, (521)$ \\ [1.9mm] BABAR \cite{:2007mh} &$3.049 \, (85)$ & & \\ [1.9mm] CLEO III \cite{Liu:2002mn}&$3.511 \, (245)$ & & \\ [1.9mm] Belle \cite{:2008sg} & $3.465 \, (136)$ && \\ [1.9mm] Our prediction & $3.4^{+0.5}_{-0.2}$ & $3.4^{+0.5}_{-0.2}$ & $2.5^{+0.3}_{-0.2}$ \\ [2mm] \hline \end{tabular} \caption{\small{Comparison of the measurements of partial widths (in units of $10^{-15} \, \mbox{GeV}$) with our predictions for the set of values in Eq.~(\ref{eq:c4g4I}). For earlier references see \cite{PDG2008}.}} \label{tab:52} \end{center} \end{table} We also notice that there is a discrepancy between the BABAR measurement of $\Gamma(\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau})$ and the results by CLEO and Belle. Within $SU(2)$ isospin symmetry it is found that $\Gamma(\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}) = \Gamma(\tau^- \rightarrow K^0 \overline{K}^0 \pi^- \nu_{\tau})$, which is well reflected by the values in Table~\ref{tab:52} within errors. Moreover, as commented above, the PDG data \cite{PDG2008} indicate that $\Gamma(\tau^- \rightarrow K^- K^0 \pi^0 \nu_{\tau})$ should be similar to $\Gamma(\tau^- \rightarrow K \overline{K} \pi \nu_{\tau})$. It would be important to obtain a more accurate determination of the $\tau^-\rightarrow K^- K^0 \pi^0 \nu_{\tau}$ width (the measurements quoted by the PDG are rather old) in the near future. \par In our analyses we include the lightest resonances in both the vector and axial-vector channels, namely $\rho(775)$, $K^*(892)$ and $a_1(1260)$. It is clear that, as it happens in the $\tau \rightarrow \pi \pi \pi \nu_{\tau}$ channel (see Ref.~\cite{shortp}), a much lesser role, though noticeable, can be played by higher excitations on the vector channel. As experimentally only the branching ratios are available for the $K K \pi$ channel we think that the refinement of including higher mass resonances should be taken into account in a later stage, when the experimental situation improves. \par In Figs.~\ref{fig:53} and \ref{fig:54} we show our predictions for the normalized $M_{K K \pi}^2-$spectrum of the $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ and $\tau^- \rightarrow K^-K^0\pi^0\nu_{\tau}$ decays, respectively. As discussed above we have taken $c_4 = -0.07\pm 0.01$ and $g_4 = -0.72 \pm 0.20$ (notice that the second process does not depend on $g_4$). 
We conclude that the vector contribution ($\Gamma_V$) dominates over the axial-vector one ($\Gamma_A$) in both channels~: \begin{equation} \frac{\Gamma_A}{\Gamma_V} \, \Big|_{K \overline{K} \pi} = \, 0.16 \pm 0.05\; , \; \; \; \; \frac{\Gamma_A}{\Gamma_V} \, \Big|_{K^- K^0 \pi} = \, 0.18 \pm 0.04\; , \; \; \; \frac{\Gamma(\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau})}{\Gamma(\tau^- \rightarrow K^- K^0 \pi^0 \nu_{\tau})} \, = \, 1.4 \pm 0.3 \; , \end{equation} where the errors estimate the slight variation due to the range in $c_4$ and $g_4$. These ratios translate into a ratio of the vector current to all contributions of $f_v = 0.86 \pm 0.04$ for the $K \overline{K} \pi$ channel and $f_v = 0.85 \pm 0.03$ for the $K^- K^0 \pi$ one, to be compared with the result in Ref.~\cite{Davier:2008sk}, namely $f_v(K\overline{K}\pi) = 0.20 \pm 0.03$. Our results for the relative contributions of vector and axial-vector currents deviate strongly from most of the previous estimates, as one can see in Table~\ref{tab:3}. Only Ref.~\cite{Gomez-Cadenas:1990uj} pointed already to vector current dominance in these channels, although enforcing just the leading chiral constraints and using experimental data at higher energies. \begin{figure}[!t] \begin{center} \vspace*{0.2cm} \includegraphics[scale=0.45,angle=-90]{Figure3.eps} \caption[]{\label{fig:53} \small{Normalized $M_{K K \pi}^2$-spectra for $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$. Notice the dominance of the axial-vector current at very low values of $Q^2$.}} \end{center} \end{figure} We conclude that for all $\tau \rightarrow K K \pi \nu_{\tau}$ channels the vector component dominates by far over the axial-vector one, though, as can be seen in the spectra in Figs.~\ref{fig:53},\ref{fig:54}, the axial-vector current is the dominant one in the very-low $Q^2$ regime. \begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|} \hline & \\[-2.5mm] Source & $\Gamma_V / \Gamma_A$ \\ [1.5mm] \hline & \\ [-2.5mm] Our result & $6 \pm 2$ \\ [1.9mm] KS model \cite{KS5} & $0.6 - 0.7$ \\ [1.9mm] KS model \cite{Finkemeier:1996hh} & $0.4 - 0.6$ \\ [1.9mm] Breit-Wigner approach \cite{Gomez-Cadenas:1990uj} & $\sim 9$ \\ [1.9mm] CVC \cite{Davier:2008sk} & $0.20 \pm 0.03$ \\ [1.9mm] Data analysis \cite{Liu:2002mn} & $1.26 \pm 0.35$ \\[1.9mm] \hline \end{tabular} \caption{\small{Comparison of the ratio of vector and axial-vector contribution for $\tau \rightarrow KK\pi\nu_{\tau}$ partial widths. The last two lines correspond to the $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ process only. Results in Ref.~\cite{Finkemeier:1996hh} are an update of Ref.~\cite{KS5}. The result of Ref.~\cite{Davier:2008sk} is obtained by connecting the tau decay width with the CVC related $e^+ e^- \rightarrow K_S K^{\pm} \pi^{\mp}$ (see Appendix~\ref{ap:4}). The analysis in \cite{Liu:2002mn} was performed with a parameterization that spoiled the chiral normalization of the form factors. }} \label{tab:3} \end{center} \end{table} \begin{figure}[!ht] \begin{center} \vspace*{0.2cm} \includegraphics[scale=0.45,angle=-90]{Figure4.eps} \caption[]{\label{fig:54} \small{Normalized $M_{K K \pi}^2$-spectra for $\tau^- \rightarrow K^-K^0\pi^0\nu_{\tau}$. Notice the dominance of the axial-vector current at very low values of $Q^2$.}} \end{center} \end{figure} \par Next we contrast our spectrum for $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ with that one arising from the KS model worked out in Refs.~\cite{KS5,Finkemeier:1996hh}. 
This comparison is by no means straightforward because in these references a second and even a third multiplet of resonances are included in the analysis. As we consider that the spectrum is dominated by the first multiplet, in principle we could start by switching off heavier resonances. However we notice that, in the KS model, the $\rho(1450)$ resonance plays a crucial role in the vector contribution to the spectrum. This feature depends strongly on the value of the $\rho(1450)$ width, which has been changed from Ref.~\cite{KS5} to Ref.~\cite{Finkemeier:1996hh}~\footnote{Moreover within Ref.~\cite{KS5} the authors use two different sets of values for the $\rho(1450)$ mass and width, one of them in the axial-vector current and the other in the vector one. This appears to be somewhat misleading.}. In Fig.~\ref{fig:55} we compare our results for the vector and axial-vector contributions with those of the KS model as specified in Ref.~\cite{Finkemeier:1996hh} (here we have switched off the seemingly unimportant $K^*(1410)$). As can be seen, there are large differences between the structures of the two approaches. Noticeably there is a large shift in the peak of the vector spectrum owing to the inclusion of the $\rho(1450)$ and $\rho(1700)$ states in the KS model together with their strong interference with the $\rho(770)$ resonance. In our scheme, including the lightest resonances only, the $\rho(1450)$ and $\rho(1700)$ information has to be encoded in the values of the $c_4$ and $g_4$ couplings (which we have extracted in Subsection~\ref{sect:41}), and such an interference is not feasible. It will be a task for the experimental data to settle this issue. \par \begin{figure}[!ht] \begin{center} \vspace*{0.2cm} \includegraphics[scale=0.45,angle=-90]{Figure5.eps} \caption[]{\label{fig:55} \small{Comparison between the normalized $M_{K K \pi}^2$-spectra for the vector and axial-vector contributions to the $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ channel in the KS model \cite{Finkemeier:1996hh} and in our approach.}} \end{center} \end{figure} In Fig.~\ref{fig:56} we compare the normalized full $M_{K K \pi}^2$ spectrum for the $\tau \rightarrow K \overline{K} \pi \nu_{\tau}$ channels in the KS model \cite{Finkemeier:1996hh} and in our scheme. The most important feature is the large effect of the vector contribution in our case compared with the leading role of the axial-vector part in the KS model, as can be seen in Fig.~\ref{fig:55}. This is the main reason for the differences between the shapes of $M_{K K \pi}^2$ spectra observed in Fig.~\ref{fig:56}. \begin{figure}[!ht] \begin{center} \vspace*{0.2cm} \includegraphics[scale=0.45,angle=-90]{Figure6.eps} \caption[]{\label{fig:56} \small{Comparison between the normalized $M_{K K \pi}^2$-spectra for $\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau}$ in the KS model \cite{Finkemeier:1996hh} and in our approach. }} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \vspace*{0.2cm} \includegraphics[scale=0.45,angle=-90]{Figure7.eps} \caption[]{\label{fig:57} \small{Comparison between the normalized $M_{K K \pi}^2$-spectra for $\tau^- \rightarrow K^- K^0 \pi^0 \nu_{\tau}$ in the KS model \cite{Finkemeier:1996hh} and in our approach. }} \end{center} \end{figure} \section{Conclusions} Hadron decays of the tau lepton are an all-important tool in the study of the hadronization process of QCD currents, in a setting where resonances play the leading role.
In particular the final states of three mesons are the simplest ones where one can test the interplay between different resonance states. At present there are three parameterizations implemented in the TAUOLA library to describe the hadronization process in tau decays. Two are based on experimental data. The other alternative, namely the KS model, though successful in accounting for the $\pi \pi \pi$ final state, has proven to be unsuitable \cite{Liu:2002mn} when applied to the decays into $K K \pi$ hadron states. Our procedure, guided by large $N_C$, chiral symmetry and the asymptotic behaviour of the form factors driven by QCD, was already employed in the analysis of $\tau \rightarrow \pi \pi \pi \nu_{\tau}$ in Refs.~\cite{GomezDumm:2003ku} and \cite{shortp}, which only concern the axial-vector current. Here we have applied our methodology to the analysis of the $\tau \rightarrow K K \pi \nu_{\tau}$ channels where the vector current may also play a significant role. \par We have constructed the relevant Lagrangian involving the lightest multiplets of vector and axial-vector resonances. Then we have proceeded to the evaluation of the vector and axial-vector currents in the large-$N_C$ limit of QCD, i.e.\ at tree level within our model. Though the widths of resonances are a next-to-leading effect in the $1/N_C$ counting, they have to be included in the scheme since the resonances do indeed resonate due to the high mass of the decaying tau lepton. We have been able to estimate the values of the relevant new parameters appearing in the Lagrangian with the exception of two, namely the couplings $c_4$ and $g_4$, which happen to be important in the description of $\tau \rightarrow K K \pi \nu_{\tau}$ decays. The range of values for these couplings has been determined from the measured widths $\Gamma(\tau^- \rightarrow K^+ K^- \pi^- \nu_{\tau})$ and $\Gamma(\tau^- \rightarrow K^- K^0 \pi^0 \nu_{\tau})$. \par In this way we provide a prediction for the ---still unmeasured--- spectra of both processes. We conclude that the vector current contribution dominates over the axial-vector current, in fair disagreement with the corresponding conclusions from the KS model \cite{Finkemeier:1996hh} with which we have also compared our full spectra. On the other hand, our result is also at variance with the analysis in Ref.~\cite{Davier:2008sk}. There are two all-important differences that emerge from the comparison. First, while in the KS model the axial-vector contribution dominates the partial width and spectra, in our results the vector current is the one that rules both spectrum and width. Second, the KS model points out a strong interference between the $\rho(770)$, the $\rho(1450)$ and the $\rho(1700)$ resonances that strongly modifies the peak and shape of the $M_{KK \pi}$ distribution, depending crucially on the included spectrum of states. Not having a second multiplet of vector resonances in our approach, we cannot provide this feature. The overwhelming role of the $\rho(1450)$ and $\rho(1700)$ states seems strange to us, but it is up to the experimental measurements to settle this issue. \par Even if our model provides a good set of tools for the phenomenological analyses of observables in tau lepton decays, it may seem that our approach is not able to carry the large amount of input present in the KS model, as the latter easily includes many multiplets of resonances. In fact, this is not the case, since the Lagrangian can be systematically extended to include whatever spectra of particles are needed.
If such an extension is carried out the determination of couplings could be cumbersome or just not feasible, but, on the same footing as the KS model, our approach would provide a parameterization to be fitted by the experimental data. The present stage, however, has its advantages. By including only one multiplet of resonances we have a setting where the procedure of hadronization is controlled from the theory. This is very satisfactory if our intention is to use these processes to learn about QCD and not only to fit the data to parameters whose relation with the underlying theory is unclear when not directly missing. \par We intend to follow our approach to analyse further relevant three pseudoscalar channels along the lines explained in this article. \paragraph{Acknowledgements} $\,$ \\ \label{Acknowledgements} $\,$ \\ \hspace*{0.5cm} We wish to thank S.~Eidelman, H.~Hayashii, B.~Malaescu, O.~Shekhovtsova and Z.~Was for their interest in this work and many useful discussions on the topic of this article. P.~Roig has been partially supported by a FPU contract (MEC), the DFG cluster of excellence 'Origin and Structure of the Universe' and a Marie Curie ESR Contract (FLAVIAnet). This work has been supported in part by the EU MRTN-CT-2006-035482 (FLAVIAnet), by MEC (Spain) under grant FPA2007-60323, by the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042) and by Generalitat Valenciana under grant PROMETEO/2008/069. This work has also been supported by CONICET and ANPCyT (Argentina), under grants PIP6009, PIP02495, PICT04-03-25374 and PICT07-03-00818.
\section*{Acknowledgment} This material is based on research sponsored by DARPA and Air Force Research Laboratory (AFRL) under agreement number FA8750-16-2-0173. Hardware support was generously provided by the NVIDIA Corporation. We also thank the financial support of FAPESP (Grant 2017/12646-3, D\'ej\`aVu Project), CAPES (DeepEyes Grant) and CNPq (Grant 304472/2015-8). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Experimental Setup} \label{sec:expsetup} Here we describe the experimental setup, including the datasets (Sec.~\ref{sec:exp_datasets}), metrics (Sec.~\ref{sec:exp_metrics}), and the parametric values employed for provenance image filtering (Sec.~\ref{sec:exp_p1}) and provenance graph construction (Sec.~\ref{sec:exp_p2}). \subsection{Datasets} \label{sec:exp_datasets} \subsubsection{NIST Dataset} As a part of the \emph{Nimble Challenge 2017}~\cite{nist2017dataset}, NIST released a dataset specifically curated for the tasks of provenance image filtering and graph construction. Named \emph{NC2017-Dev1-Beta4}, it contains 65 queries and 11,040 images that comprise samples related to the queries as well as distractors. As a consequence, the dataset makes available a complete groundtruth that is composed of the 65 expected image ranks as well as the 65 expected provenance graphs related to each query. The provenance graphs were manually created and include images resulting from a wide range of transformations, such as splicing, removal, cropping, scaling, rotation, translation and color correction. Aiming to enlarge NC2017-Dev1-Beta4 towards a more realistic scenario, we extend its set of distractors by adding nearly one million images randomly sampled from the Nimble \emph{NC2017-Eval-Ver1} dataset~\cite{nist2017dataset}. The NC2017-Eval-Ver1 dataset is the latest NIST evaluation set for measuring the performance of diverse image-manipulation detection tasks. However, no complete provenance ground truth is available for this set, leading us to use NC2017-Dev1-Beta4 in conjunction with NC2017-Eval-Ver1. As a result, we end up with what we call the \textit{NIST dataset}, which comprises the 65 provenance graphs from NC2017-Dev1-Beta4 and more than one million distractors from both datasets. Following NIST suggestions in~\cite{nist2017dataset}, we perform both \emph{end-to-end} and \emph{oracle-filter} provenance analysis over the NIST dataset. On the one hand, the end-to-end analysis includes performing the provenance image filtering task first, and then submitting the obtained image rank to the provenance graph construction step. On the other hand, the oracle-filter analysis focuses on the provenance graph construction task; it assumes that a perfect image filtering solution is available. Therefore, only the graph construction step is evaluated. \vspace{0.2cm} \subsubsection{Professional Dataset} Oliveira et al.~\cite{Oliveira_2016} introduced a multiple-parent phylogeny dataset, which comprises composite forgeries that always have two direct ancestors, namely the \emph{host} (which is used for defining the background of the composite) and the \emph{donor} (which they call \emph{alien}, and which donates a local portion, such as an object or person, to define the foreground of the composite).
Each phylogeny case comprises 75 images, of which three represent the composite, the host, and the donor, and the remaining 72 represent transformations (\textit{e.g.}, cropping, rotation, scale, and color transformations) over those three images. As a consequence, each case is a provenance graph composed of three independent phylogeny trees (one for the host, one for the donor, and one for the composite) that are connected through the composite and its direct parents (the host and the donor, as expected). Although our approach is not directly comparable to the one of Oliveira et al.~\cite{Oliveira_2016} (since they used different metrics and addressed a different problem of finding the correct original images --- the graph sources --- rather than the quality of the coverage of the complete provenance graph) we make use of their dataset for the reason of the composites being the work of a professional artist that tried to make the images as credible as possible. Therefore, we are assessing the metrics defined in Sec.~\ref{sec:exp_metrics} and reporting the results over the 80 test cases found within the dataset. In order to adapt it to our provenance graph building pipeline, however, we are choosing a random image inside the provenance graph as a query for each one of the 80 experimental cases. Finally, we do not extend the professional dataset with distractors; hence we perform only oracle-filter analysis over it. \vspace{0.2cm} \subsubsection{Reddit Dataset} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{figures/Reddit_Inference.pdf} \caption[]{A visualization of how provenance graphs are automatically inferred from a Reddit Photoshop battle instance. The parent-child behavior of comments (right) can be leveraged to infer the structure of the ground truth provenance graph (left). The colors of each comment correspond to their respective edge in the graph.} \label{fig:redditViz} \end{figure} To supplement the experimental data with even more realistic examples, we have collected a new provenance dataset from image content posted to the online Reddit community known as \emph{Photoshop battles}~\cite{reddit2017photoshopbattles}. This community provides a medium for professional and amateur image manipulators to experiment with image doctoring in an environment of friendly competition. Each ``battle'' begins with a single root image submitted by a user. Subsequent users then post different modifications, usually humorous, of the image as comments to the original post. Due to the competitive nature of the community, many image manipulations build off one another, as users try to outdo each other for comic effect. This results in manipulation provenance trees with both wide and deep chains. We use the underlying comment structure of these battles to automatically infer the ground truth provenance graph structure, as shown in Fig.~\ref{fig:redditViz}. Because these images are real examples of incremental manipulations, the Reddit dataset accurately represents manipulations and operations performed on images in the wild. In total, the Reddit dataset contains 184 provenance graphs, which together sum up to 10,421 original and composite images. It will be made available to the public upon the publication of this work. Similar to the Professional dataset, we are not extending the Reddit dataset with distractors; we perform only oracle-filter analysis over it. 
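As a rough illustration of the inference depicted in Fig.~\ref{fig:redditViz}, the sketch below builds a provenance graph from the parent-child structure of a battle thread; the record fields used here (id, parent id, image link) are hypothetical names introduced only for this example, not the actual Reddit API schema.
\begin{verbatim}
# Illustrative reconstruction of a ground-truth provenance graph from the
# comment hierarchy of a "Photoshop battle" thread. Each record is assumed
# to carry an id, the id of the post it replies to, and an image link;
# the field names below are hypothetical.
def infer_provenance_graph(records):
    nodes, edges = set(), set()
    image_of = {r["id"]: r["image_url"] for r in records if r.get("image_url")}
    for r in records:
        child = image_of.get(r["id"])
        parent = image_of.get(r.get("parent_id"))
        if child:
            nodes.add(child)
        if child and parent:
            # the edge points from the source image to its manipulated child
            edges.add((parent, child))
    return nodes, edges
\end{verbatim}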
\subsection{Evaluation Metrics} \label{sec:exp_metrics} In this work, we adopt the metrics proposed by NIST in~\cite{nist2017dataset} for both the provenance image filtering and graph construction tasks. In the case of provenance image filtering, we report (for each image query) the CBIR recall of the expected images at three particular cut-off ranks: $R@50$ (the recall considering the top-50 images of the retrieved image rank), $R@100$ (recall for the top-100 images), and $R@200$ (recall for the top-200 images). Given that recall expresses the percentage of relevant images that are being effectively retrieved, the solution delivering higher recall is considered preferable. In the case of provenance graph construction, we assess, for each provenance graph that is computed for each query, the $F_1$-measure (\textit{i.e.}, the harmonic mean of precision and recall) of the \emph{retrieved nodes} and of the \emph{retrieved edges} (called \emph{vertex overlap} ($VO$) and \emph{edge overlap} ($EO$), respectively). Additionally, we report the \emph{vertex and edge overlap} ($VEO$), which is the $F_1$-measure of retrieving both nodes and edges, simultaneously~\cite{Papadimitriou_2010}. The aim of using such metrics is to assess the overlap between the groundtruth and the constructed provenance graph. The higher the values of $VO$, $EO$, and $VEO$, the better the quality of the solution. Finally, in the particular case of $EO$ (and consequently $VEO$), we report the overlap both for directed edges (which are assumed to be the regular situation, and therefore kept for $EO$ and $VEO$), and for undirected edges (when an edge is considered to overlap another one if they connect analogous pairs of nodes, in spite of their orientations). All aforementioned metrics are assessed through the NIST \emph{MediScore} tool~\cite{nist2017plan}. \subsection{Filtering Setup} \label{sec:exp_p1} In all provenance filtering experiments, we either start describing the images with regular 64-dimensional SURF~\cite{Bay:CVIU:2008} interest points, or the distributed approach explained in Sec.~\ref{sec:prop_spread_kp} combined with the SURF detector (namely \emph{DSURF}). For the regular SURF detector, depending on the experiment, we either extract the top-$2,000$ most responsive interest points (namely \emph{SURF2k}), or the top-$5,000$ most responsive ones (\emph{SURF5k}). DSURF, in turn, is always described with 5,000 64-dimensional interest points, of which 2,500 regard the top-$2,500$ most responsive ones, and the remaining 2,500 are obtained avoiding overlap, as explained in~Sec.~\ref{sec:prop_spread_kp}. For the sake of comparison, besides reporting results of the IVFADC system (explained in Sec.~\ref{sec:prop_indxing}), we also report results of the KD-Forest system discussed by Pinto~et~al.~\cite{pinto2017filtering} over the same set of images. Because the work in~\cite{pinto2017filtering} is not easily scalable beyond 2,000 interest points, with respect to memory footprint, we combine it with SURF2k only (namely \emph{KDF-SURF2k}). Focusing on the IVFADC approach, we provide combinations of it with all the available low-level descriptor approaches, hence obtaining \emph{IVFADC-SURF2k} (for comparison with KDF-SURF2k), \emph{IVFADC-SURF5k}, and \emph{IVFADC-DSURF}. 
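For reference, a minimal sketch of the plain top-$k$ SURF extraction underlying the SURF2k and SURF5k configurations is shown below; it assumes OpenCV's contributed \texttt{xfeatures2d} module is available, and the Hessian threshold is an illustrative value (the DSURF variant additionally enforces the non-overlap rule of Sec.~\ref{sec:prop_spread_kp}).
\begin{verbatim}
# Illustrative extraction of the top-k most responsive 64-d SURF features,
# in the spirit of the SURF2k (k=2000) and SURF5k (k=5000) configurations.
import cv2

def top_k_surf(image_path, k=5000):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=100, extended=False)
    keypoints, descriptors = surf.detectAndCompute(image, None)
    # keep the k keypoints with the strongest Hessian responses
    order = sorted(range(len(keypoints)),
                   key=lambda i: keypoints[i].response, reverse=True)[:k]
    return [keypoints[i] for i in order], descriptors[order]
\end{verbatim}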
Regardless of the descriptors, we are always performing IVFADC with a codebook set size of 32 codes and sub-codebook set size of 96; both values were learned from preliminary experiments as revealing an acceptable trade-off between index building time and size, and final system recall. Finally, aiming at evaluating the impact of using iterative filtering (explained in Sec.~\ref{sec:prop_iterative_filtering}), we evaluate variations of the two most robust filtering solutions (namely IVFADC-SURF5k and IVFADC-DSURF) by adding iterative filtering (IF), hence obtaining the \emph{IVFADC-SURF5k-IF} and \emph{IVFADC-DSURF-IF} variations. All filtering methods are tested over the NIST dataset, for each one of its 65 queries. \subsection{Graph Construction Setup} \label{sec:exp_p2} As explained in Sec.~\ref{sec:prop:graph_building}, the graph construction task always starts with a given query and its respective rank of potentially related images. For computing both the GCM-based and MI-based dissimilarity matrices (all explained in Sec.~\ref{sec:prop_dismat}), we either detect and match the top-5,000 most responsive SURF interest points per image, for each image pair, or the top-5,000 largest MSER regions per image, again for each image pair. As a consequence, we have available four types of dissimilarity matrices, namely \emph{GCM-SURF} and \emph{GCM-MSER} (both symmetric) and \emph{MI-SURF} and \emph{MI-MSER} (both asymmetric). The reason for choosing SURF and MSER is related to their potential complementarity: while SURF detects blobs of interest~\cite{Bay:CVIU:2008}, MSER detects the stable complex image regions that are tolerant to various perspective transformations~\cite{Matas_2004}. Thus, the two methods end up delivering very different sets of interest points. For extracting feature vectors from both SURF and MSER detected interest points, we compute the 64-dimensional SURF features proposed in~\cite{Bay:CVIU:2008}. In the particular case of MSER, we compute the SURF features over the minimum enclosing circles that contain each one of the detected MSER image regions. During the GCM feature matching, we match only interest points of the same type (\textit{i.e.}, we match SURF blobs with only SURF blobs, as well as MSER regions with only MSER regions). In the end, we construct the provenance graphs from each one of the four types of dissimilarity matrices using either Kruskal's algorithm over the symmetric GCM-based instances (therefore obtaining undirected graphs), or the herein proposed clustered provenance graph expansion approach over the asymmetric MI-based instances (obtaining directed graphs). All graph construction methods are tested over the NIST (65 queries), Professional (80 queries), and Reddit (184 queries) datasets. In the particular case of the NIST dataset, we report both end-to-end and oracle-filter analyses. Regarding end-to-end analysis, we start with the best top-100 image ranks that were obtained in the former set of provenance image filtering experiments. As expected, these ranks contain distractors, as well as miss some images related to the query that should be part of the final provenance graph. With respect to the oracle-filter analysis, the ranks will only contain images related to the query. \section{Conclusions} \label{sec:conc} The determination of image provenance is a difficult task to solve. The complexity increases significantly when considering an end-to-end, fully-automatic provenance pipeline that performs at scale. 
This is the first work, to our knowledge, to have proposed such a technique, and we consider these experiments an important demonstration of the feasibility of large-scale provenance systems. Our pipeline included an image indexing scheme that utilizes a novel iterative filtering and distributed interest point selection to provide results that outperform the current state-of-the-art found in \cite{pinto2017filtering}. We also proposed methods for provenance graph building that improve upon the methods of previous work in the field, and provided a novel clustering algorithm for further graph improvement. To analyze these methods, we utilized the NIST Nimble Challenge~\cite{Nimble_2017} and the multiple-parent phylogeny Professional dataset~\cite{Oliveira_2016} to generate detailed performance results. Beyond utilizing these datasets, we committed to real-world provenance analysis by building our own dataset from Reddit~\cite{reddit2017photoshopbattles}, consisting of unique manipulation scenarios that were generated in an unconstrained environment. This is the first work of its kind to analyze fully in-the-wild provenance cases. Upon scrutinizing the results from the three differently sourced datasets, we observed that the proposed approaches perform decently well in connecting the correct set of images (with reported vertex overlaps of nearly 0.8), but still struggle when inferring edge directions --- a result that highlights the difficulty of this problem. Directed edges are dependent on whether the transformations are reversible or can be inferred from pixel information. In this attempt to perform provenance analysis, we found that although image content is the most reliable source of information connecting related images, other external information may be required to supplement the knowledge obtained from pixels. This external information can be obtained from file metadata, object detectors and compression factors, whenever available. Work in this field is far from complete. The problem of unconstrained, fully-automatic image provenance analysis is not solved. For instance, this work does not currently utilize previous work found in the Blind Digital Image Forensics (BDIF) field. Significant improvements in region localization, provenance edge calculation, and even edge direction estimation could be performed by using systems already created in the BDIF field. We plan to explore the benefits of integrating splicing and copy-move detectors, along with Photo Response Non-Uniformity (PRNU), and Color Filter Array (CFA) models into our pipeline for detecting image inconsistencies and building higher accuracy dissimilarity matrices. While this work is a significant first step, we hope to spur others on to further investigate fully-automatic image forensics systems. As the landscapes of social and journalistic media change, so must the field of image forensics adapt with them. News stories, cultural trends, and social sentiments flow at a fast pace, often fueled by unchecked viral images and videos. There is a pressing need to find new solutions and approaches to combat forgery and misinformation. Further, the dual-use nature of such systems makes them useful for other applications, such as cultural analytics, where image provenance can be a primary object of study. We encourage researchers to think broadly when it comes to image provenance analysis. 
\section{Results} \label{sec:results} In this section, we report the experimental results concerning the tasks of provenance image filtering (in Sec.~\ref{sec:res_1}) and of provenance graph construction (in Sec.~\ref{sec:res_2}). \subsection{Image Filtering} \label{sec:res_1} Table~\ref{tab:filtering} contains the results of provenance image filtering over the 65 queries of the NIST dataset, following the setup detailed in Sec.~\ref{sec:exp_p1}. The best solution is IVFADC-DSURF-IF, which reaches an R@50 value of 0.907, meaning that, if we use the respective top-50 rank as input to the provenance graph construction task, an average of 90.7\% of the images directly and indirectly related to the query will be available for graph construction. As one might observe, the IVFADC-based solutions presented better recall values when compared to KDF-SURF2k, even when the same number of interest points was used for describing the images of the dataset. That is the case, for instance, of the use of IVFADC-SURF2k, which provided an increase of approximately 17\% in R@50 over its KDF-based counterpart (KDF-SURF2k). IVFADC makes use of OPQ, a state-of-the-art CBIR technique, which appears to be more effective than KD-trees for indexing image content. In addition, the GPU-amenable scalability provided by IVFADC allowed us to increase the number of 64-dimensional SURF interest points from 2,000 to 5,000 features per image (reaching around five billion feature vectors for the entire dataset). With more interest points, the dataset is better described, leading, for example, to an increase of nearly 23\% in R@50 for IVFADC-SURF5k over IVFADC-SURF2k. \begin{table}[t] \renewcommand{\arraystretch}{1.5} \caption{Results of provenance image filtering over the NIST dataset.} \centering \begin{tabular}{R{2.3cm}C{1.5cm}C{1.5cm}C{1.5cm}} \hline Solution & R@50$^\dagger$ & R@100$^\dagger$ & R@200$^\dagger$ \\ \hline KDF-SURF2k~\cite{pinto2017filtering} & 0.609 & 0.633 & 0.649 \\ IVFADC-SURF2k & 0.713 & 0.722 & 0.738 \\ IVFADC-SURF5k & 0.876 & 0.881 & 0.883 \\ IVFADC-DSURF & 0.882 & 0.895 & 0.899 \\ IVFADC-SURF5k-IF & 0.895 & 0.901 & 0.919 \\ \textbf{IVFADC-DSURF-IF} & \textbf{0.907} & \textbf{0.912} & \textbf{0.923} \\ \hline \end{tabular} \vspace{0.1cm}\\ $\dagger$: We report the average values on the provided 65 queries.\\ In bold, the solution with highest recall values. \label{tab:filtering} \end{table} The use of DSURF also increased the recall values. Its application was responsible for a further improvement in R@50 (from 0.876 to 0.882), when we compare IVFADC-SURF5k and IVFADC-DSURF, at the expense of adding one more hour to the time required to construct the index for the entire dataset. This extra hour is related to the additional step of avoiding interest point overlaps, which is part of the DSURF detection solution. Finally, the use of IF made the recall values approach 0.9, even considering R@50. For example, the use of IVFADC-DSURF-IF yielded an improvement of nearly 3\% in R@50 over IVFADC-DSURF. That happened, however, at the expense of a significant increase in search time, due to the iterative re-querying nature of IF; IVFADC-DSURF-IF takes four times as long as IVFADC-DSURF. However, in certain scenarios where time is not a constraint, the increase of 3\% in recall may justify the deployment of such an approach. \subsection{Graph Construction} \label{sec:res_2} We organize the results of graph construction according to the adopted dataset (either NIST, Professional, or Reddit).
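Since all tables in this subsection report $VO$, $EO$, and $VEO$, the short sketch below illustrates how such overlaps can be computed from a ground-truth and a reconstructed graph, each represented by its sets of nodes and directed edges. It is only an illustration of the $F_1$-based definitions of Sec.~\ref{sec:exp_metrics}, not the MediScore implementation.
\begin{verbatim}
# Illustrative F1-based overlaps between ground-truth and reconstructed graphs.
def f1_overlap(truth, predicted):
    hits = len(truth & predicted)
    if hits == 0:
        return 0.0
    precision, recall = hits / len(predicted), hits / len(truth)
    return 2 * precision * recall / (precision + recall)

def provenance_overlaps(gt_nodes, gt_edges, pr_nodes, pr_edges):
    vo = f1_overlap(gt_nodes, pr_nodes)    # vertex overlap
    eo = f1_overlap(gt_edges, pr_edges)    # (directed) edge overlap
    veo = f1_overlap(gt_nodes | gt_edges, pr_nodes | pr_edges)
    return vo, eo, veo

# Toy example: a three-node chain versus a reconstruction with one flipped edge.
print(provenance_overlaps({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')},
                          {'a', 'b', 'c'}, {('a', 'b'), ('c', 'b')}))
# prints (1.0, 0.5, 0.8)
\end{verbatim}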
Table~\ref{tab:nist} shows the performance of the proposed approach over the NIST dataset. Results are grouped into end-to-end and oracle-filter analysis. In the particular case of end-to-end analysis, top-100 rank lists were obtained with IVFADC-DSURF-IF filtering, the best approach reported in Table~\ref{tab:filtering}. As a consequence, the respective provenance graphs are built, on average, without almost $9\%$ of the image nodes, which are not retrieved in the filtering step ($R@100 = 0.912$, in the case of IVFADC-DSURF-IF). Oracle-filter analysis, in turn, starts from a perfect rank of images, containing all and only the graph image nodes. That explains the higher values of VO in such group, at the expense of reducing EO. The reduction of EO is explained by the availability of more related images in the step of graph construction, which increases the number of possible edges and misconnections. It means that the present solutions are good at removing distractors, but there is still room to improve the effective connection of sharing-content images. The best end-to-end solution is MI-SURF, retrieving, on average, directed provenance graphs with $0.613$ ground truth-graph coverage (VEO). The best oracle-filter solution, in turn, is GCM-SURF, with $0.609$ undirected graph coverage. \begin{table}[!htbp] \renewcommand{\arraystretch}{1.5} \caption{Results of provenance graph construction over the NIST dataset. We report the average values on the provided 65 queries.} \centering \begin{tabular}{C{1.3cm}R{1.8cm}C{1.1cm}C{1.1cm}C{1.1cm}} \hline & Solution & VO & EO & VEO\\ \hline \multirow{4}{1.3cm}{End-to-end analysis} & GCM-SURF~\cite{bharati2017uphy} & 0.638 & 0.429$^\dagger$ & 0.537$^\dagger$ \\ & GCM-MSER & 0.257 & 0.140$^\dagger$ & 0.199$^\dagger$ \\ & \textbf{MI-SURF} & \textbf{0.853} & \textbf{0.353} & \textbf{0.613} \\ & MI-MSER & 0.835 & 0.312 & 0.585 \\ \hline \multirow{4}{1.3cm}{Oracle-filter analysis} & \textbf{GCM-SURF}~\cite{bharati2017uphy} & \textbf{0.933} & \textbf{0.256}$^\dagger$ & \textbf{0.609}$^\dagger$ \\ & GCM-MSER & 0.902 & 0.239$^\dagger$ & 0.585$^\dagger$ \\ & MI-SURF & 0.931 & 0.124 & 0.546 \\ & MI-MSER & 0.892 & 0.123 & 0.525 \\ \hline \end{tabular} \vspace{0.1cm}\\ $\dagger$: Values for undirected edges. In bold, the solutions with the best VEO. \label{tab:nist} \vspace{0.1cm} \end{table} \begin{table}[!htbp] \renewcommand{\arraystretch}{1.5} \caption{Results of provenance graph construction over the Professional dataset. We report the average values on the 80 queries belonging to the test set.} \centering \begin{tabular}{R{1.9cm}C{1.5cm}C{1.5cm}C{1.5cm}} \hline Solution & VO & EO & VEO\\ \hline \textbf{GCM-SURF}~\cite{bharati2017uphy} & \textbf{0.985} & \textbf{0.218}$^\dagger$ & \textbf{0.604}$^\dagger$ \\ GCM-MSER & 0.663 & 0.087$^\dagger$ & 0.377$^\dagger$ \\ MI-SURF & 0.975 & 0.102 & 0.541 \\ MI-MSER & 0.604 & 0.043 & 0.326 \\ \hline \end{tabular} \vspace{0.1cm}\\ $\dagger$: Values for undirected edges. In bold, the solution with the best VEO. \label{tab:prof} \vspace{0.1cm} \end{table} \begin{table}[!htbp] \renewcommand{\arraystretch}{1.5} \caption{Results of provenance graph construction over the Reddit dataset. 
We report the average values on the provided 184 queries.} \centering \begin{tabular}{R{1.9cm}C{1.5cm}C{1.5cm}C{1.5cm}} \hline Solution & VO & EO & VEO\\ \hline GCM-SURF~\cite{bharati2017uphy} & 0.884 & 0.156$^\dagger$ & 0.523$^\dagger$ \\ \textbf{GCM-MSER} & \textbf{0.924} & \textbf{0.121}$^\dagger$ & \textbf{0.526}$^\dagger$ \\ MI-SURF & 0.757 & 0.037 & 0.401 \\ MI-MSER & 0.509 & 0.027 & 0.271 \\ \hline \end{tabular} \vspace{0.1cm}\\ $\dagger$: Values for undirected edges. In bold, the solution with the best VEO. \label{tab:reddit} \end{table} In Table \ref{tab:prof}, we present results of the proposed approaches on the Professional dataset. In comparison to the NIST dataset, the same solutions recognize fewer of the correct provenance graph edges. This happens due to the larger 75-node provenance graphs, which contain a number of near duplicates that were created through reversible operations. As a consequence, altered image nodes can be reached using different sequences of image transformations, leading to ambiguous dissimilarity values, and multiple plausible paths, within the provenance graph. The methods herein discussed are solely based on image content and do not consider any extra information, thus operating with data from only the pixel domain. Indeed, in previous image phylogeny work reporting results on the Professional dataset, the solutions made use of data from the JPEG compression tables of the images. We speculate that if information regarding the compression factor is included in the present approaches, some confusion regarding the edges could be eliminated. That would not impact the NIST dataset, though, since only a small fraction of its images are available in JPEG format. Here, the best solution is GCM-SURF, retrieving, on average, undirected provenance graphs with $0.604$ ground truth-graph coverage (VEO). Table~\ref{tab:reddit} reports results on the Reddit dataset. As one might observe, this dataset is the most challenging one, with low directed edge coverage (namely $EO$, in the case of MI-SURF and MI-MSER solutions). Since its whimsical content is the product of a diverse community, the Reddit dataset presents realistic, yet frustratingly complex, cases. As a consequence, it is not uncommon to find among the 184 collected provenance graphs suppressed ancestral images, as well as descendant images whose parental connections are defined more by very particular and contextual semantic reasons (for instance, an arbitrary person resembling another in the parent image) than by strictly shared visual content. That ends up impacting our results. The best solution is GCM-MSER, which retrieves, on average, undirected provenance graphs with $0.526$ ground truth-graph coverage (VEO). \subsection{Provenance Graph Construction} \label{sec:prop:graph_building} As one can observe in Fig.~\ref{fig:solution}, the provenance graph construction task builds upon the image rank that is obtained by the provenance image filtering task, and ends up with the provenance graph. Therefore, at this point, we can assume that (in the best scenario) all images directly and indirectly related to the query are available for constructing the provenance graph, as well as some \emph{distractors} (images that should not be present in the provenance graph, because they are not related to any of the images within it). The presence of distractors at this step is more of a matter of design.
Taking into consideration that, in~\cite{bharati2017uphy}, experiments show that distractors do not impact the provenance graph construction too much, and aiming to keep the provenance image filtering part as simple as possible, we give the subsequent dissimilarity matrix calculation task the duty of removing distractors. Therefore, the input is a set containing the $k$ top-retrieved images and the query, which are then used for building dissimilarity matrices. \vspace{0.2cm} \subsubsection{Calculation of Dissimilarity Matrices} \label{sec:prop_dismat} Similar to~\cite{Dias_2012}, given the set $I$ containing the $k$ top-retrieved images and the query, a dissimilarity matrix $D$ is a $(k+1)\times(k+1)$~matrix whose elements $d_{ij}$ describe the dissimilarity between images $I_i$ and $I_j$, respectively the $i$-th and $j$-th images of $I$. Depending on how the values $d_{ij}$ are calculated, $D$ can be either symmetric or asymmetric. In this work, following the solution proposed in~\cite{bharati2017uphy}, we neither make any strong assumptions with respect to the transformations that might have been used to generate the elements of $I$, nor impose limitations on the presence of near duplicates, semantically similar images, or multi-donor composites. Instead, we focus on analyzing the shared visual content between every pair of images $(I_i, I_j)$ through two ways of calculating $d_{ij}$. In the first one, we set $d_{ij}$ as the inverse of the number of geometrically-consistent interest-point matches (GCM) between images $I_i$ and $I_j$; in this particular case, the matrix $D$ is symmetric. In the second one, we set $d_{ij}$ as the \emph{mutual information} (MI) between a color transformation $T_j(I_i)$ of image $I_i$ towards image $I_j$ and the image $I_j$ itself; in this case, the matrix $D$ is asymmetric. Both methods are described below. \vspace{0.2cm} \subsubsection*{GCM-based dissimilarity} Provenance graph construction starts with the detection of interest points over each one of the $k+1$ images that belong to $I$. At this step, different interest point detectors can be applied, such as SURF~\cite{Bay:CVIU:2008} or Maximally Stable Extremal Regions (MSER)~\cite{Matas_2004}, with each one yielding a particular dissimilarity matrix. Once the interest points are available and properly described through feature vectors (\textit{e.g.}, SURF features~\cite{Bay:CVIU:2008}), we find correspondences among them for every pair of images $(I_i, I_j)$. Let $P_i$ be the set of feature vectors obtained from the interest points of image $I_i$, and $P_j$ be the set of features obtained from $I_j$. For each feature belonging to $P_i$, the two best matching features are found inside $P_j$ using Euclidean distance (the closer the features, the better the match). Inspired by Nearest-Neighbor-Distance-Ratio (NNDR) matching quality~\cite{lowe2004distinctive}, we ignore all the features whose ratio between the distances to the first and to the second best matching features is larger than a threshold $t$, since they might present a poor distinctive quality. The remaining features are then kept and finally matched to their closest counterparts. Even with the use of NNDR, it is not uncommon to gather \emph{geometrically inconsistent matches}, \textit{i.e.}, contradictory interest-point matches that, if together, cannot represent plausible content transformations of image $I_i$ towards image $I_j$, and vice-versa.
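For concreteness, a minimal sketch of the ratio-test matching step just described is given below, assuming the SURF descriptors of the two images are already available as \texttt{float32} arrays; the threshold value is illustrative. The geometric-consistency check described next further prunes the matches returned by this step.
\begin{verbatim}
# Illustrative NNDR-style matching between the descriptor sets of two images;
# desc_i and desc_j are float32 arrays of shape (n_points, 64).
import cv2

def nndr_matches(desc_i, desc_j, t=0.8):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_i, desc_j, k=2)  # two best matches each
    kept = []
    for pair in candidates:
        if len(pair) < 2:
            continue
        best, second = pair
        # discard ambiguous features whose two best distances are too similar
        if best.distance < t * second.distance:
            kept.append(best)
    return kept
\end{verbatim}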
To get rid of these matches, we adopt a solution that is able to build a geometrically-consistent model of expected interest-point positions from any pair of matches between images $I_i$ and $I_j$. For example, consider two arbitrary matches $m_1$ and $m_2$, which respectively connect points $p_1 \in I_i$ and $q_1 \in I_j$, and points $p_2 \in I_i$ and $q_2 \in I_j$. Based upon the positions, the distance $l_p$, and the angle $\alpha_p$ between points $p_1$ and $p_2$ (both from image $I_i$), as well as upon the positions, the distance $l_q$, and angle $\alpha_q$ between points $q_1$ and $q_2$ (both from image $I_j$), we estimate the scale, translation, and rotation matrices that make $p_1$ and $p_2$ respectively coincide with $q_1$ and $q_2$. With these matrices, we transform every matched interest point of $I_i$ onto the space of $I_j$. As one might expect, points that do not coincide with their respective peers after the transformations have their matches removed from the set of geometrically consistent matches. Finally, we compute the dissimilarity matrix $D$ by setting every one of its $d_{ij}$ elements as the inverse of the number of found geometrically-consistent matches between images $I_i$ and $I_j$. In this case, the dissimilarity matrix is symmetric. \vspace{0.2cm} \subsubsection*{MI-based dissimilarity} The mutual-information (MI)-based dissimilarity matrix is an extension of the GCM-based alternative (see Fig.~\ref{fig:solution}). After finding the geometrically consistent interest-point matches for each pair of images $(I_i, I_j)$, the obtained interest points are used for estimating the homography $H_{ij}$ that guides the registration of image $I_i$ onto image $I_j$, as well as the homography $H_{ji}$ that analogously guides the registration of image $I_j$ onto image $I_i$. In the particular case of $H_{ij}$, for calculating $d_{ij}$, after obtaining the transformation $T_j(I_i)$ of image $I_i$ towards $I_j$, $T_j(I_i)$ and $I_j$ are properly registered, with $T_j(I_i)$ presenting the same size of $I_j$, and the matched interest points relying on the same position. We thus compute the bounding boxes that enclose all the matched interest points, within each image, obtaining two correspondent patches $R_1$, within $T_j(I_i)$, and $R_2$, within $I_j$. As in~\cite{bharati2017uphy}, the distribution of the pixel values of $R_1$ is matched to the distribution of $R_2$, prior to calculating the pixel-wise amount of residual between them with MI. From the point of view of information theory, MI is the amount of information that one random variable contains about another. From the point of view of probability theory, it measures the statistical dependence of two random variables. In practical terms, assuming each random variable as respectively the aligned and color-corrected patches $R_1$ and $R_2$, the value of MI is given by the entropy of discrete random variables: \begin{equation} \begin{aligned} & MI(R_1, R_2) = \\ & \sum_{x \in R_1}^{ }{\sum_{y \in R_2}^{ }{p(x, y)}} \log \left ( \frac{p(x, y)}{\sum_{x}^{ } {p(x, y)} \sum_{y}^{ } {p(x, y)}} \right ), \end{aligned} \end{equation} \noindent where $x \in [0,\ldots,255]$ refers to the pixel values of $R_1$, and $y \in [0,\ldots,255]$ refers to the pixel values of $R_2$. The $p(x, y)$ value regards the joint probability distribution function of $R_1$ and $R_2$. 
As explained in~\cite{Costa_2017}, it can be approximated by: \begin{equation} p(x, y) = \frac{h(x, y)}{\sum_{x, y}^{ } {h(x, y)}}, \end{equation} \noindent where $h(x, y)$ is the joint histogram that counts the number of occurrences for each possible value of the pair $(x, y)$, evaluated on the corresponding pixels for both patches $R_1$ and $R_2$. As a consequence, MI is directly proportional to the similarity of the two patches. Back to $H_{ji}$, it is calculated in an analogous way to $H_{ij}$. However, instead of $T_j(I_i)$, $T_i(I_j)$ is manipulated for transforming $I_j$ towards $I_i$. Further, the size of the registered images, the format of the matched patches, and the matched color distributions are different, leading to a different value of MI for setting $d_{ji}$. As a consequence, the resulting dissimilarity matrix $D$ is asymmetric, since $d_{ij}\ne d_{ji}$. \vspace{0.2cm} \subsubsection*{Avoiding distractors} As we have mentioned before, the image rank given to the provenance graph construction step may contain distractors, which need to be removed during the dissimilarity matrix calculation step. When computing the dissimilarity matrix $D$, the solution proposed by Bharati et al.~\cite{bharati2017uphy} establishes matches between every pair of available images, including distractors. By interpreting $D$ as the adjacency matrix of a multi-graph whose nodes are the images, they identify distractors as the nodes weakly connected (\textit{i.e.,} that present a small number of matches, down to none) to the minimum spanning tree that contains the query. Assuming $(k + 1)$ as the number of image nodes, they perform $(k^2 + k) / 2$ operations to populate $D$. In this work, we improve that process by means of an iterative approach, which starts from the node of the query and then computes the geometrically consistent matches with the remaining $k$ images. A set with only the strongly connected nodes is thus saved for the next iteration. In the following iterations, the algorithm keeps trying to establish matches starting from the last set of strongly matched images, up to the point where no more strong matches are found. Although simple, this solution may provide a significant improvement in the runtime of the dissimilarity matrix calculation. Let $d\leq k$ be the number of distractors inside the image rank. We avoid $(d^2 - d) / 2$ operations by applying the iterative solution. In the case of a rank with 50 images ($k=50$), for instance, and 40 distractors ($d=40$, indicating that the provenance graph contains only ten images), the number of operations is reduced from $1,275$ to $495$, significantly speeding up the runtime in the case of small graphs. \vspace{0.2cm} \subsubsection{Clustered Provenance Graph Construction} \label{sec:prop_cluster} Once the GCM- and MI-based dissimilarity matrices are available, we rely on both for constructing the final provenance graph, by means of a novel algorithm, named \emph{clustered provenance graph expansion}. The main idea behind such a solution is to group the available images in a way that only near duplicates of a common image are added to the same cluster. Starting from the image query $I_q$, the remaining images are sorted according to the number of geometrically consistent matches shared with $I_q$, from the largest to the smallest. The solution then clusters probable near duplicates around $I_q$, as long as they share \emph{enough content}, which is decided based upon the number of matches.
After automatically adding the first image of the sorted set to the cluster of $I_q$, the solution iteratively analyzes the remaining available images. For deciding if the $i$-th candidate image $I_i$ (where $i>1$) is a near duplicate, the algorithm keeps track of the number of matches $m_i$ between $I_i$ and the last image $I_{i-1}$ added to the cluster. Let $\mu_{i-1}$ be the average number of matches of the cluster, and $\sigma_{i-1}$ be the standard deviation. $I_i$ is connected to $I_{i-1}$ in the final provenance graph if $m_i \in [\mu_{i-1} - \sigma_{i-1},\ \mu_{i-1} + \sigma_{i-1}]$; in such a case, $I_i$ is added to the cluster by affinity, and new values of $\mu_i$ and $\sigma_i$ are calculated, for evaluating the next candidate $I_{i+1}$. Otherwise, the current cluster is considered finished up to $I_{i-1}$. As a consequence, the obtained clusters have their images sequentially connected into a single path, without branches. That makes sense in scenarios involving sequential image edits where one near duplicate is obtained on top of the other, as in~\cite{nist2017dataset}. To determine the direction of the entire path, we assume the dominant direction within all the edges that are part of the path. To determine the direction of a single edge, we rely on the mutual information. Let $D$ be the MI-based dissimilarity matrix, and consider two images $I_i$ and $I_j$, whose respective $D$ elements are $d_{ij}$ and $d_{ji}$. As explained in~\cite{Oliveira_2016}, an observation of $d_{ij} > d_{ji}$ means that $I_i$ probably generated $I_j$. Finally, whenever a cluster is finished and there are still disconnected available images, we find the image already added to the provenance graph whose number of matches with the remaining ones is largest. This image is then assumed as the new query $I_q'$, over which the aforementioned clustering algorithm is executed, considering only the yet disconnected images. As a result, the final provenance graph sees a branch rising from $I_q'$ as an orthogonal path containing new images. \subsection{Provenance Image Filtering} \label{sec:prop:filtering} The problem of image filtering for the provenance task is different from the typical image retrieval task: a given query image may fulfill one or both of the following conditions: \begin{itemize} \item The query may have a relationship to various \emph{near duplicates}. The near duplicates may be \emph{hosts} of the query (in the case of the query being a composite that inherits the background from a near duplicate) or the query itself may be a host, as in the case of the query donating a background to the near duplicates. \item The query may be a composite with a relationship to one or more donors, whose content may be entirely disjoint. Donors can even be composites themselves, with their own hosts and donors. \end{itemize} In such scenarios, the retrieval method must return as many of the directly and indirectly related images as possible. These aspects define a unique image retrieval and filtering problem, known as \emph{Provenance Image Filtering}~\cite{pinto2017filtering, nist2017plan}, which is different from more typical \emph{near-duplicate} or \emph{semantically similar} image retrieval. In this work, we assume that a ground-up system must be deployed for search, retrieval, and filtering, instead of relying on currently available resources such as \emph{Google}~\cite{barroso2003web} or \emph{TinEye}~\cite{jacquicheng}.
\vspace{0.2cm} \subsubsection{Distributed Interest Point Selection} \label{sec:prop_spread_kp} Due to the nature of the manipulations seen in tampered images, it is important to build a filtering system that is tolerant to a wide range of image transformations. Hence, we adopt a low-level image representation that is based on interest points and local features, since they are reportedly tolerant to transformations such as scaling, rotation, and contrast adjustment~\cite{Bay:CVIU:2008}. Nevertheless, while regular interest points are mostly designed to identify corners and blobs in the image, we also want to describe and index homogeneous areas, which have a low detector response and consequently yield only a sparse set of interest points, so that images with the same type of content can still be retrieved. Although one could use a dense sampling approach to extract interest points within those regions, this is computationally prohibitive in the context of searching millions of images~\cite{pinto2017filtering}. Therefore, we introduce a new method, called \emph{distributed interest point selection}, that keeps the sampling sparse while still providing interest points inside low-response areas. For that, we extend Hessian-based detectors (such as Speeded-Up Robust Features (SURF)~\cite{Bay:CVIU:2008}) in the following way. Instead of employing a threshold $t$ to collect interest points whose local Hessian values are greater than $t$, we define a parameter $p$ that expresses the fixed number of interest points we want to extract from each target image. Within these $p$ interest points, $m < p$ interest points are the regions with the $m$ strongest Hessian responses. The remaining $n = p - m$ are extracted from the set of post-top-$m$ interest points, which is also sorted according to the Hessian response. Starting from the $(m+1)$-th strongest interest point, we only add the current interest point if it does not \emph{overlap} with an already selected interest point; otherwise, we try to add the next strongest interest point, up to the point of obtaining the $n$ additional interest points. \begin{figure}[!t] \centerline{ \subfloat[]{\frame{\includegraphics[width=4cm]{figures/rsurf.png}}} \hfil \subfloat[]{\frame{\includegraphics[width=4cm]{figures/dsurf.png}}} } \caption{Effects of using the approach of distributed interest point selection. In (a), the result of a regular SURF interest point detection. In (b), the result of the distributed approach over the same image, with many more points over homogeneous regions, such as the skin of the wrist.} \label{fig:disitributed_kp} \end{figure} Fig.~\ref{fig:disitributed_kp} depicts the effect of using the distributed approach along with SURF. Fig.~\ref{fig:disitributed_kp}~(a) depicts a regular SURF interest point detection, while Fig.~\ref{fig:disitributed_kp}~(b) depicts the distributed version over the same image. Fig.~\ref{fig:disitributed_kp}~(b) presents more points over the skin of the wrist and the background (which are more homogeneous regions) than Fig.~\ref{fig:disitributed_kp}~(a).
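A minimal sketch of this selection rule is given below. The overlap test (a fixed minimum distance between points) and the tuple format of the keypoints are simplifying assumptions made only for illustration; the actual method operates on the detector's own keypoint structures.
\begin{verbatim}
def distributed_selection(keypoints, p, m, min_dist=16.0):
    """Pick p keypoints: the m strongest by Hessian response, plus
    up to (p - m) more that do not overlap already selected ones.
    keypoints: iterable of (x, y, response) tuples."""
    ranked = sorted(keypoints, key=lambda k: k[2], reverse=True)
    selected = ranked[:m]
    for x, y, resp in ranked[m:]:
        if len(selected) == p:
            break
        # crude overlap test: keep the point only if it is far enough
        # from every point chosen so far
        if all((x - sx) ** 2 + (y - sy) ** 2 >= min_dist ** 2
               for sx, sy, _ in selected):
            selected.append((x, y, resp))
    return selected
\end{verbatim}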
\vspace{0.2cm} \subsubsection{Database Indexing} \label{sec:prop_indxing} The next step is to build the image index structure. After interest point detection and feature extraction, we are left with $p$ description vectors per image. For an image collection $C$: \begin{equation} C_i,\quad s.t.\quad i \in \mathcal{I}_{img} = \{0, 1, \ldots, |C|\}, \end{equation} \noindent our subsequent feature collection is: \begin{equation} F_i \quad s.t.\quad i \in \mathcal{I}_{ind} = \{0, 1, \ldots, |C| \times p\}, \end{equation} \noindent where $\mathcal{I}_{img}$ denotes the numbered index set of full images within $C$, and $\mathcal{I}_{ind}$ indicates the subsequent numbered index set assigned to individual features in $F$. We transform $F$ to a new space using \emph{Optimized Product Quantization} (OPQ)~\cite{ge2013optimized} to make the feature space well-posed for coarse \emph{Product Quantization} (PQ). We refer to this new rotated feature set as $F_{r}$. From a random sample of $F_{r}$, a coarse set of representative centroids $S$ is generated using PQ. A subsequent \emph{Inverted File System with Asymmetric Distance Computation} (IVFADC)~\cite{jegou2011product} is generated from $S$, allowing for fast and efficient search. \vspace{0.2cm} \subsubsection{Image Search} \label{sec:prop_search} Once the database images are indexed, a search procedure can be performed via feature-wise queries. For a query image $Q$, a set of $p$ distributed SURF features $F_{d}$ is extracted and submitted to the system. Each query returns a matrix $R$ of indices of \emph{Approximate Nearest Neighbors} (ANN), of size $(p \times K)$. The $R_{ij}$ value is computed using \emph{Asymmetric Distance Computation} (ADC)~\cite{jegou2011product}, where $i$ denotes the $i$-th query feature of $Q$, and $j$ denotes the $j$-th ANN index of the $i$-th query feature within $F_{d}$: \begin{equation} \begin{aligned} & R_{i,j}=\zeta(F_{d_{i}})_{j} \in \mathcal{I}_{ind},\ s.t.\\ & i \in \{0,1,\ldots,|F_{d_{i}}|\}\ \text{and}\ j \in K, \end{aligned} \end{equation} \noindent where $\zeta$ signifies a single query on the filtering system, and $K$ is the parameter of the K-nearest neighbors for the system to return. Once the set $R$ is calculated, we map $R$ from the $\mathcal{I}_{ind}$ space to the $\mathcal{I}_{img}$ space. The number of unique image indices is computed as: \begin{equation} R_{img}=\xi(R,\mathcal{I}_{ind},\mathcal{I}_{img}). \end{equation} Once $R_{img}$ is obtained, a sorted set of votes is calculated for representing the final global query results of $Q$: \begin{equation} \begin{aligned} & V_{i+1}=argmax_{x}\{\phi(x,R_{img})-V_{i}\},\ s.t. \\ & x \in \theta(R_{img})=\{R_{img}\}\ \text{and}\ V_0 = 0. \end{aligned} \end{equation} The function $\xi(R,\mathcal{I}_1,\mathcal{I}_2)$ maps index values in $R$ to the $\mathcal{I}_{img}$ index domain, allowing each $R_{ij}$ to represent the image it belongs to. The $\phi(x,R)$ value is an accumulator that returns the tally of all values of $x$ within $R$. The $\theta(R)$ value represents the set of distinct values within $R$. Using this scheme, we are able to retrieve images that only partially match $Q$, even in the presence of many noisy matches. Small objects will have high chances of accumulating votes, while spurious interest points will not.
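The vote accumulation described above can be pictured with a short sketch. The container names and the feature-to-image lookup are placeholders standing in for the bookkeeping performed by $\xi$, $\phi$, and $\theta$; the sketch is illustrative rather than the system's actual code.
\begin{verbatim}
from collections import Counter

def rank_images(R, feature_to_image):
    """Turn feature-level ANN hits (rows of R) into image-level votes
    and return image ids sorted by how many query features voted
    for them."""
    votes = Counter(feature_to_image[idx] for row in R for idx in row)
    return votes.most_common()  # [(image_id, vote_count), ...], best first
\end{verbatim}
Images that share even a small region with the query can accumulate enough votes to surface in the rank, while isolated spurious matches rarely do.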
\vspace{0.2cm} \subsubsection{Iterative Filtering} \label{sec:prop_iterative_filtering} Once a first rank of images is retrieved through the search algorithm, we iteratively refine the results to add images that are not directly related to the query, but are still related in some way to its provenance. In contrast to the approach described by Pinto et al.~\cite{pinto2017filtering}, which employs a two-tiered search to retrieve the small donors of the query after masking the regions that diverge between the query and the first images of the retrieved rank, in this work we employ the \emph{reciprocal condition matching measure} (RCMM) proposed in~\cite{brogan2017spotting} to identify and suppress the near duplicates of the query. Given that a large RCMM value between two arbitrary images indicates that they are probably near duplicates, we suppress the retrieved images whose RCMM values with the query are large. The non-suppressed (and therefore non-near-duplicate) images of the current rank are then provided as new queries to the next search iteration, which is performed using the same method explained in Sec.~\ref{sec:prop_search}. By applying the above process for a number of iterations, we search with various sets of non-near-duplicate queries (which are potentially donors) and end up with a set of ranks, which are then flattened and re-ranked using RCMM. In the end, we obtain a less query-centric rank of images, which contains not only images directly related to the query, but also indirectly related ones (\textit{e.g.}, ancestors of the donors of the query). As will be demonstrated in Sec.~\ref{sec:results}, such a strategy improves the recall of the provenance image filtering task. \vspace{0.2cm} \subsubsection{Large-Scale Infrastructure} \label{sec:prop_scaling} Fig.~\ref{fig:filterPipeline} shows the proposed full pipeline for index \emph{training} and \emph{construction} (previously explained in Sec.~\ref{sec:prop_indxing}). Index training refers to the process of learning the OPQ rotations and PQ codebooks from a sampling of the local features that are extracted from the target dataset. Index construction, in turn, refers to the computation of the inverted file indices, after properly rotating the previously extracted local features. The learning of OPQ rotations and PQ codebooks can be done in advance on a CPU, but the construction of indices is well suited to the capabilities of graphical processing units (GPU), allowing for faster computation. \begin{figure}[t] \centering \includegraphics[width=9cm]{figures/Indexing_pipeline.pdf} \caption{Filtering pipeline infrastructure. The orange area (left) shows computations that are performed on a CPU. The purple area (right) shows the index ingestion steps that are performed on a GPU.} \label{fig:filterPipeline} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=9cm]{figures/Fileread_pipeline.pdf} \caption{Producer-consumer index ingestion. Each file contains features for an image. These file locations are pre-loaded into cache via a rate-limited ``touch'' thread, and are read on a producer-consumer multi-threaded basis.} \label{fig:touchPipeline} \end{figure} Besides employing GPUs to efficiently build and search an index of over 1 million high-resolution images, additional steps must be taken to increase the pipeline speed. To date, most indexing algorithms require single large files containing all features to be ingested at once \cite{flann_pami_2014,Attach:binary_matching_crv2012}, either due to implementation choices or algorithm limitations. The operation of concatenating all features from a set of images into a single file is prohibitively time consuming when dealing with more than a few million interest points. Because our scenarios require the ingestion of multiple billions of interest points, a different solution must be adopted, in order to avoid the need for file concatenation. For that, we propose a multi-threaded producer-consumer setup, as shown in Fig.~\ref{fig:touchPipeline}. In our pipeline, we provide a single feature file per image. The pipeline begins with the ``touch'' thread, which systematically loads image feature file locations into the computer's file system cache, for faster retrieval in later stages. Then, a reading thread takes touched files and loads them into memory. A third thread takes sets of loaded feature files and produces feature batches of size $B$ that are optimized in size for GPU ingestion. The fourth thread applies the initial OPQ pre-processing rotations to the feature set, before sending the final batch to the GPU. Using this method, we are able to process billions of features from high-resolution image datasets orders of magnitude faster than previous methods.
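The four-thread flow just described can be sketched with standard Python queues. The helper callables (\texttt{load\_features}, \texttt{rotate\_opq}, \texttt{send\_to\_gpu}), the queue capacities, and the batching granularity are placeholders; this is a schematic of the producer-consumer idea, not the deployed implementation.
\begin{verbatim}
import queue, threading

def run_pipeline(feature_files, batch_size, load_features,
                 rotate_opq, send_to_gpu):
    touched = queue.Queue(1024)   # paths warmed in the file-system cache
    loaded = queue.Queue(1024)    # per-image feature arrays in memory
    batches = queue.Queue(8)      # GPU-sized groups of feature arrays

    def toucher():                # warm the cache, throttled by queue size
        for path in feature_files:
            open(path, "rb").close()
            touched.put(path)
        touched.put(None)

    def reader():                 # load per-image feature files
        while (path := touched.get()) is not None:
            loaded.put(load_features(path))
        loaded.put(None)

    def batcher():                # group loaded files into batches
        batch = []
        while (feats := loaded.get()) is not None:
            batch.append(feats)
            if len(batch) == batch_size:
                batches.put(batch)
                batch = []
        if batch:
            batches.put(batch)
        batches.put(None)

    def ingester():               # OPQ-rotate, then hand off to the GPU index
        while (batch := batches.get()) is not None:
            send_to_gpu(rotate_opq(batch))

    threads = [threading.Thread(target=t)
               for t in (toucher, reader, batcher, ingester)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
\end{verbatim}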
\section{Provenance Analysis Methodology} \label{sec:prop} As described in Sec.~\ref{sec:intro}, the task of image provenance analysis is divided into two major steps, namely \emph{Provenance Image Filtering} and \emph{Provenance Graph Construction}. Fig.~\ref{fig:solution} depicts an overview of the proposed solution in this context. \input{./sections/03_prop_01_filtering.tex} \input{./sections/03_prop_02_gbuilding.tex} \section{Related Work} \label{sec:rw} \textit{Content-based image retrieval (CBIR).} In recent years, research advances in the domain of CBIR have included optimizing the memory footprint of indexing techniques and employing graphical processing units (GPU) for parallel search. A recent technique proposed by Johnson~et~al.~\cite{johnson2017billion} utilizes state-of-the-art image indexing (Optimized Product Quantization (OPQ)~\cite{ge2013optimized}) and runtime optimization to perform similarity search on the order of a billion images. Such approaches can be directly applied to perform image filtering for provenance analysis. However, as they follow the traditional CBIR inverted-file index pipeline~\cite{Zisserman_2003}, they will not generalize to all cases due to the nature of the problem. While regular CBIR will probably retrieve good host candidates for the query, in the face of compositions (which are fairly common in provenance analysis), small donors will not be highly ranked (or will not even be retrieved) without adaptations to the base approach. The work of Pinto et al.~\cite{pinto2017filtering} improves the retrieval of donors related to a query in the scope of provenance analysis. The paper introduces a two-tiered search approach. The first tier constitutes a typical CBIR pipeline, while the second tier provides a context-aware query-masking technique, which selects the regions from the query that make it divergent from hosts previously obtained in the first tier. With such regions as evidence, a second search is performed, this time avoiding hosts and retrieving additional potential donor images. Although such an approach does improve the retrieval of donors, it adopts a very ``query-centric'' point of view with respect to the problem of provenance analysis. It only finds the hosts and donors that directly share content with the query, ignoring the other descendants and the ancestors of such hosts and donors, which are indirectly related to the query.
\textit{Image processing for image associations.} In our proposed workflow, the filtering step yields relevant images, and then provenance graph construction is performed. The provenance graph construction step involves finding diverse types of associations among images based on their similarities and/or dissimilarities. For that reason, it is related to tasks such as visual object recognition~\cite{russakovsky_2015}, scene recognition~\cite{zhou_2017}, place recognition~\cite{lowry_2016}, object tracking~\cite{cehovin_2016}, near-duplicate detection~\cite{winkler_2013}, and image phylogeny~\cite{Dias_2012}, since they all rely on the comparison of two or more images. Some visual association tasks may be general, as they relate images based on the common characteristics that optimally make them related. This is the case, for instance, for object recognition. For example, a query image that implicitly requests ``retrieve all the images containing dogs'' may also be assumed to be generalized (any breed, color, or size). Scene recognition (\textit{e.g.}, ``retrieve all the images depicting bedrooms'') may also include generalized queries. In such situations, a high content diversity among the related images is usually desired~\cite{Deselaers_2009}. By contrast, some image association tasks may be specialized, in the sense that they aim at extracting the specific characteristics that aid in the visual identification of a sample in a particular setting. That is the case of place recognition (\textit{e.g.}, retrieve all the images of Times Square), and object tracking (\textit{e.g.}, segment the target vehicle plate across the frames of a street surveillance video). Techniques for associating images in a general way include comparing global image representations~\cite{Deselaers_2010, oliva_2006}, employing bags of visual features~\cite{avila_2013, nanni2013heterogeneous} and using convolutional neural networks (CNNs)~\cite{szegedy_2015, krizhevsky_2017, zhou2014learning, wang2017knowledge}. Techniques for associating images in a specialized way include assessing local feature matching~\cite{maresca_2013, yang2015ransac, joly_2007, huang_2008,silva_2015}, image patch matching~\cite{milford_2014}, and evaluating the quality of image registration, color matching, and mutual information~\cite{Costa_2017, bharati2017uphy}. Particularly, provenance analysis is by definition closer to the specialized tasks; for that reason, in this work, we benefit more from techniques mentioned in the latter group. Although one can adapt deep CNNs to provenance analysis by optimizing them for specialization rather than generalization at training time, such a procedure is --- at the present time --- only accomplishable at the expense of prohibitive training times, the need for a reasonably large cluster of GPUs for model screening via hyperparameter optimization, and a sufficiently large amount of available training data~\cite{chollet_2017}. In addition, making such a solution perform at scale at inference time is also challenging. After running benchmark experiments using CNN-based approaches for finding image associations and noting long run-times, we have intentionally chosen to pursue faster alternatives to deep learning in this work. \textit{Image phylogeny trees.} Provenance analysis is related to the simpler task of image phylogeny, which seeks to recover a tree of relationships. 
Kennedy and Chang~\cite{Kennedy_2008} were the first to point out the possibility of relying on the color information of pixels and local features for gathering clues about plausible parent-child relationships among images. Based upon the pixel colors and local features, they suggest detecting a closed set of directed manipulations between pairs of content-related images (namely copy, scaling, color change, cropping, content insertion, and overlay detection). Rather than exhaustively modeling all of the possible manipulations between near-duplicate images, Dias et al.~\cite{Dias_2011} suggest having a good dissimilarity function that can be used for building a pairwise image dissimilarity matrix $D$. Accordingly, they introduce \emph{oriented Kruskal}, an algorithm that processes $D$ to output an \emph{image phylogeny tree}, a data structure that expresses the probable evolution of the near duplicates at hand. In subsequent work, Dias et al.~\cite{Dias_2012} formally present the dissimilarity-calculation protocol that is widely used in the related literature for computing $D$. They then go on to conduct a large set of experiments with this methodology, considering a family of six possible transformations, namely scaling, cropping, affine warping, brightness, contrast, and lossy content compression~\cite{Dias_2013_large}. Finally, in~\cite{Dias_2013}, Dias et al. replace oriented Kruskal with other phylogeny tree building methods: best Prim, oriented Prim, and Edmonds' optimum branching~\cite{edmonds_1967}, with the last solution consistently yielding improved results. \begin{figure*}[!htbp] \centering \includegraphics[width=18.5cm]{figures/pipeline2.pdf} \caption{Proposed pipeline for end-to-end provenance analysis. The sequence of activities is divided into two parts, which address the tasks of \emph{image filtering} (left panel) and of \emph{graph construction} (right panel). IVFADC stands for \emph{Inverted File System with Asymmetric Distance Computation}. RCMM stands for \emph{Reciprocal Condition Matching Measure}. The value of $n$, within IVFADC, is the number of images in the database. The value of $k$ is a parameter of the solution and is related to the size of the RCMM rank used for building the provenance graph. $^\ast$Reported values are merely illustrative. } \label{fig:solution} \end{figure*} \textit{Image phylogeny forests.} The image phylogeny solutions mentioned up to this point were conceived to handle near duplicates; they do not work in the presence of semantically similar images. Aware of such limitations, Dias et al.~\cite{Dias_2013_fsi} extend the oriented Kruskal solution to \emph{automatic oriented Kruskal}, an algorithm that finds a family of disjoint phylogeny trees (a phylogeny forest) from a given set of near duplicates and semantically similar images, such that each tree describes the relationships of a particular group of near duplicates. Analogously, Costa et al.~\cite{Costa_2014} provide two extensions to the optimum branching algorithm, namely \emph{automatic optimum branching} and \emph{extended automatic optimum branching}, both based on automatically calculated cut-off points. Alternatively, Oikawa et al.~\cite{oikawa_2016} propose the use of clustering techniques for finding the various phylogeny trees; the idea is to group images coming from the same source, while placing semantically similar images in different clusters. 
Finally, Costa et al.~\cite{Costa_2017} improve the creation of the dissimilarity matrices, regardless of the graph algorithm used for constructing the trees. \textit{Multiple parenting phylogeny trees.} Although previous phylogeny work established preliminary analysis strategies and algorithms to understand the evolution of images, the key scenario of image composition, in which objects from one image are spliced into another, was not addressed. Compositions were first addressed within the phylogeny context by Oliveira et al.~\cite{Oliveira_2016}. The solution presented by these authors assumes two parents (one host and one donor) per composite. Extended automatic optimum branching is thus applied for the construction of ideally three phylogeny trees: one for the near duplicates of the host, one for the near duplicates of the donor, and one for the near duplicates of the composite. Even though this work is very relevant to ours herein, it has a couple of limitations. First, it does not consider the possibility of more than two images donating content towards one composite image (such as the composite with sharks in Panel B of Fig.~\ref{fig:teaser}). Second, Oliveira et al. require all images to be in JPEG format. \textit{Provenance graphs.} To date, the entire image phylogeny literature has made use of metrics that focus on finding the root of the tree, rather than evaluating the phylogeny tree as a whole, considering every image transformation path in the case of provenance. Aware of such limitations and aiming to foster more research on the topic, NIST has recently introduced new terminology, metrics, and datasets, coining the term \emph{image provenance} to express a broader notion of image phylogeny, and suggesting directed acyclic \emph{provenance graphs}, instead of trees, as the data structure that describes the provenance of images~\cite{nist2017plan}. They also suggest the use of a \emph{query} as the starting point for provenance analysis. Following this, Bharati et al.~\cite{bharati2017uphy} introduced a more generalized method of provenance graph construction, which does not assume anything about the images and transformations. A content-based method for the construction of undirected provenance graphs is proposed, which relies upon the extraction and geometrically-consistent matching of interest points. Utilizing this information to build the dissimilarity matrix, the method uses Kruskal's algorithm to obtain the provenance graph. The approach performs well over small cases, even in the presence of distractors (\textit{i.e.}, images that are not related to the query). \section{Introduction} \label{sec:intro} Algorithms for the detection of manipulated content in digital images have reached a stage of maturity that is sufficient for understanding the transformations that were applied to individual images in many cases~\cite{farid2009image,Rocha:CSUR:2011,farid2017detect}. A logical next step is to develop an approach that allows us to ask more complicated questions about the relationships between related images after sequences of transformations have been applied --- a problem that is not well studied in the image processing literature. In this article, we consider the \textit{Provenance Analysis} task~\cite{Dias_2012,Dias_2013}, in which the objective is to recover the graph of relationships between plausibly connected images. 
These relationships may be expressed as undirected edges (\textit{i.e.}, neighboring transformations are identified) or directed edges (\textit{i.e.}, the order of neighboring transformations is expressed). The development of techniques to recover such graphs combines ideas from the areas of image retrieval, digital image forensics, and graph theory, making this an interesting interdisciplinary endeavour within image processing and computer vision. \begin{figure}[t] \centering \vspace{-0.3cm} \includegraphics[width=8.7cm]{figures/teaser7.pdf} \caption[Image Provenance Analysis]{Image Provenance Analysis workflow. Panel A depicts the first step of Image Provenance Analysis, namely Provenance Image Filtering, in which filters are applied to a large image database to retrieve those images that are related to a given query image. Panel B depicts the second step, namely Provenance Graph Construction, in which the filtered images are linked to each other in a way that expresses the sequences of manipulation and/or compositions (\textit{i.e.}, the provenance history of the images).} \label{fig:teaser} \end{figure} To illustrate the provenance analysis task, consider the set of example images in Panel A of Fig.~\ref{fig:teaser}, which were collected from the popular ``Photoshop battles'' forum on the social media site Reddit~\cite{reddit2017photoshopbattles}. On this forum, amateur artists begin with source images and employ image manipulation tools to generate results for humorous effect. The first step in provenance analysis is \textit{Provenance Image Filtering}, which consists of searching a potentially large pool of images for those that are most closely related to a given query image. Related images might be {\em semantically similar} ({\em i.e.}, the same scene may be present from slightly different view points or at nearby points in time), or they might be {\em near duplicates} related by minor transformations such as exposure and saturation adjustments, or cropping and re-sizing, or they might be {\em image compositions}, which contain elements of two or more different source images. In most cases, the query will be an image that has been manipulated in some way. The second step is \textit{Provenance Graph Construction}, where the objective is to understand the relationships between images yielded by provenance image filtering. A \textit{Host Image} provides the source of background content for subsequent manipulations. In Fig.~\ref{fig:teaser}, the host is the photo of the man holding a shovel in the leftmost part of Panel B. A \textit{Donor Image} provides some amount of content that will be inserted into a host image. In Fig.~\ref{fig:teaser}, three donor images are the original images of the sharks and the paddle board in the bottom half of Panel B. They provide image content that has been inserted into the image they are linked to. Sequences of manipulations are common, and they can be expressed as a directed graph representing the order in which they were applied. This can be seen in the graph of Panel B, where the depth of the central path containing the host and the query leads to three different levels of manipulations. Our goal is to develop an algorithm that can generate such graphs in an automated fashion. We do not make strong assumptions that either the original host or donor images are available during analysis. For instance, the paddle board, flying carpet, and extra people might not necessarily be harvested at the image filtering step. 
Provenance analysis is important to image processing and computer vision. It has direct applications in a number of different fields. The most immediate application is forensics, where the detection of manipulated images spans traditional policing to analysis for strategic intelligence. The question of the origins of suspect images has taken a prominent role recently, with the rise of so-called ``fake news" on the Internet. While not a new problem\footnote{The computer hacker group Cult of the Dead Cow warned of the devastating potential of widespread online media manipulation as early as 1999~\cite{cdc}.}, concern about fake news reached new heights on the heels of the 2016 American presidential election. The rapid evolution of the online social media landscape has provided new, free media channels with which even amateur bloggers and news outlets can reach massive audiences with little effort, and even less regulation. Recent instances of fake news often involve questionable images propagating through social media. For example, in early 2017, the New York Times reported on the creation of a false story about the discovery of pre-marked ballots in Ohio that appeared a couple of months before the election~\cite{NYT1}. The image accompanying the story was the product of a mirrored image that was selectively blacked-out in local regions~\cite{PP1}. This is a real-life case with multiple manipulations where provenance analysis could be applied to trace the origins of the fabrication. Beyond the important application domain of forensics, image provenance analysis can form a powerful framework for academic research in other fields. Cultural analytics has emerged as a distinct sub-discipline within the digital humanities~\cite{manovich2009cultural,yamaoka2011cultural} that is concerned with combining quantitative methods from social science and computer science to answer humanistic questions about cultural trends. An example of this (which we have already touched upon in Fig.~\ref{fig:teaser}) is the study of Internet \textit{memes} --- cultural artifacts meant to be widely transmitted and evolve over time. Memes are an interesting object of cultural study, in that they encapsulate facets of popular entertainment, political moods, and novel elements of humor. Meme aggregators like the website \emph{knowyourmeme.com} have done a good job at archiving such content, but a more exhaustive quantitative study of the provenance of individual memes has yet to emerge. Tracing the source(s) of modified meme images helps us unpack the underlying cultural trends that can tell us something meaningful about the community that generated the content. Both of the application domains mentioned also motivate the need for any developed techniques to be {\em scalable}. Specialized algorithmic components are necessary to solve the problem at hand. First, one needs an accurate and scalable image retrieval algorithm that is able to operate over very large collections of images (realistically, on the order of millions of images) to find related candidates. Such an algorithm also has to address the particularities of the provenance image filtering task: it must perform well at retrieving the near-duplicate host images that are highly related to the query (a well-known problem in the image retrieval literature), but also perform well at retrieving donors (images that potentially donated small portions to the query) and the donors' respective near duplicates (which might not be directly related to the query). 
Second, the identification of likely image transformations that explain how each retrieved image might have been used to generate the others is required, as it is used to create the ordering of the images in the provenance graph. And third, methods from graph theory are necessary to organize the relationships between images, yielding a directed graph that is human-interpretable. All of these components must be integrated as a coherent and scalable processing pipeline. This work introduces, for the first time, a fully automated large-scale end-to-end pipeline that starts with the step of provenance image filtering (over millions of images) and ends with the construction of provenance graphs. The following new contributions are introduced in this work: \vspace{0.2cm} \begin{enumerate} \item \emph{Distributed interest point selection}: a novel interest point selection strategy that aims at spatially diversifying the image regions used for indexing within the provenance image filtering task. \item \emph{Iterative Filtering}: a novel querying strategy that iteratively retrieves images that are directly or indirectly related to the query, considering all possible hosts, donors, composites, and their respective near duplicates. \item \emph{Clustered Provenance Graph Construction}: a novel graph construction algorithm that clusters images according to their content (joining near duplicates into the same clusters), prior to establishing their intra- and inter-cluster relationship maps. \item State-of-the-art results on the provenance analysis benchmark released by the American National Institute of Standards and Technology (NIST)~\cite{nist2017dataset}. \item A new dataset of real-world scenarios containing composite images from Photoshop battles held on the Reddit website~\cite{reddit2017photoshopbattles}. Experiments performed over this dataset highlight the real-world applicability of the approach. \end{enumerate}
\chapter[IMB, Kamiokande and Super-K]{Neutrino Astronomy with IMB, Kamiokande and Super Kamiokande\footnotemark}\footnotetext{To be published in Neutrino Physics and Astrophysics, edited by F. W. Stecker, in the Encyclopedia of Cosmology II, edited by G. G. Fazio, World Scientific Publishing Company, Singapore, 2022.} \author[J. LoSecco]{John M. LoSecco} \address{The University of Notre Dame du Lac} \begin{abstract} Some of the earliest work on neutrino astronomy was accomplished by a class of underground detectors primarily designed for particle physics goals. These detectors used inexpensive water to obtain the large masses needed to observe the very low interaction rates expected from neutrinos. They exploited the relatively long light attenuation length and the index of refraction of water to achieve a very low cost per thousand tons of detector. The results obtained from these pioneering neutrino detectors have included real time observation of solar neutrinos, supernova neutrinos, and atmospheric neutrinos. Searches for neutrino point sources, dark matter and primordial magnetic monopoles were also made using them. \end{abstract} \newpage \tableofcontents \newpage \section{Introduction} Neutrino astrophysics is now a well established scientific discipline. Neutrinos have been observed from a broad selection of natural sources, including the sun, supernovae, cosmic ray interactions in the atmosphere and other astrophysical sources. Future possibilities include relic supernova neutrinos, neutrinos from the Big Bang, and perhaps neutrinos from dark matter annihilation. Neutrinos have even left a footprint on the structure of the cosmic microwave background radiation. The growth of neutrino astrophysics to maturity has taken decades of work. For the most part this is because theoretical neutrino astronomy was well ahead of the technology. The goal of using neutrinos to study and understand astrophysical processes required a deeper understanding of how neutrinos are classified and how they propagate. The class of detectors discussed here has played key roles in neutrino astrophysics discoveries. \section{History} \subsection{Requirements} Neutrinos have no electromagnetic or strong nuclear interactions. They are normally observed via the weak interaction. A great deal can be inferred from observations of the final state of such neutrino interactions. The final state carries information about the direction, the energy and the type of the neutrino involved. There are three known kinds of neutrinos, called ``flavors''. Neutrinos can interact via two different manifestations of the weak interaction. The dominant charged current interaction converts the neutrino into a charged particle that can be observed. The neutral current weak interaction has a neutrino both entering and leaving the reaction, which makes it harder to analyze. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{zeller_total_numode.jpg} \caption{The neutrino nucleon cross section on an isoscalar target divided by the neutrino energy in GeV. Figure from \cite{Fo+Ze}.} \label{fig:zellertotalnumode} \end{figure} Figure \ref{fig:zellertotalnumode} makes it clear why neutrinos are hard to observe. The vertical scale displays the neutrino nucleon cross section divided by the neutrino energy (in GeV) as a function of the neutrino energy. The horizontal axis is a log scale running from 0.1 to 100 GeV. The cross section divided by the energy never rises above about $10^{-38}$ cm$^{2}$ GeV$^{-1}$ per nucleon (proton or neutron), a level reached near 1 GeV.
Water has a density of about 1000 kg m$^{-3}$, i.e., about $\rho = 6.02 \times 10^{23}$ nucleons per cubic centimeter. This gives a scattering length for a 1 GeV neutrino in water of about $(\rho \sigma)^{-1} = 1.7 \times 10^{14}$ cm. However, because of the energy dependence of the neutrino cross section, many neutrinos of interest to astrophysics have much lower energies and hence much lower interaction probabilities. The rate of observed neutrino interactions depends on several factors. As mentioned above, the cross section is an increasing function of the neutrino energy. The interaction event rate also depends on how many neutrinos traverse the detector per unit time and on the number of targets, electrons or nucleons, in the detector. The goal here is to observe neutrinos in nature, so there is no way to control the energy distribution or the flux of the neutrinos; most detector designs therefore concentrate on building the most massive detector possible and maximizing the detection efficiency. Sometimes the choice of target material can make a difference, since some materials offer a higher cross section for particular neutrino reactions. Beyond direct detection of the neutrino interaction, other methods have been used to increase the rate of observed neutrino interactions by looking for indirect evidence. For example, upward going muons are mostly created by neutrino interactions in the Earth below the detector; the muons then penetrate a substantial distance through the rock. The indirect detection method thus extends the effective mass of the neutrino target, at the cost of missing some detailed information about the signal, and one can achieve a much more massive target with a much smaller detector. \subsubsection{Neutrino Electron Scattering} $\nu_{e} + e \rightarrow \nu_{e} + e$ and the similar reactions $\nu_{\mu,\tau} + e \rightarrow \nu_{\mu,\tau} + e$ are very well understood since they only involve leptons\cite{nue}. In most cases the neutrino energy is much higher than the electron mass, so the electron direction is close to the neutrino direction and the reaction can be used for pointing. Defining $y$ to be the fractional energy loss of the neutrino, $y = E_{e}/E_{\nu}$, in the small angle approximation one finds \begin{equation} E_{e} \theta_{e}^{2} = 2 m_{e} (1-y). \end{equation} Therefore, since $0 \le y \le 1$, it follows that $E_{e} \theta_{e}^{2} < 2 m_{e}$. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{NuEAngle.pdf} \caption{The upper limit on the angular distribution for $\nu + e^{-}$ scattering as a function of the electron recoil energy.} \label{fig:nueangle} \end{figure} This reaction is very useful for astrophysics since the recoil electron points back to the neutrino source (see Figure \ref{fig:nueangle}). Also, most astrophysical sources will have a strong electron neutrino component at Earth (see Sections 4 and 5.3). The neutrino cross section\cite{nue} is very small, about $10^{-45}$ cm$^{2}$ at 1 MeV for $\nu_{\mu}$ or $\nu_{\tau}$ and about $10^{-44}$ cm$^{2}$ at 1 MeV for $\nu_{e}$. The differential cross section is \begin{equation} \frac{d\sigma^{\nu_{\mu},\bar{\nu}_{\mu}}}{dy}=\frac{2G_{F}^{2}m_{e}E_{\nu}}{\pi}\left[ g_{L,R}^{2} + g_{R,L}^{2} (1-y)^{2} - g_{L}g_{R} \frac{m_{e}y}{E_{\nu}}\right], \end{equation} where $g_{L}$ and $g_{R}$ are the couplings to left and right handed electrons, with the first (second) subscript applying to neutrinos (antineutrinos). $2g_{L}=g_{V}+g_{A}$ and $2g_{R}=g_{V}-g_{A}$, where $g_{V}$ and $g_{A}$ are the vector and axial vector couplings. There are substantial radiative corrections to this simple result. For $\nu_{e}$ scattering there is also a charged current contribution, which requires replacing $g_{L}$ with $g_{L}+1$.
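As a worked illustration of the angular bound above (our own numbers, chosen only for orientation), consider a recoil electron that carries half the energy of a 10 MeV neutrino, so that $E_{e}=5$ MeV and $y=0.5$: \begin{equation} \theta_{e} \lesssim \sqrt{\frac{2 m_{e}(1-y)}{E_{e}}} = \sqrt{\frac{2\,(0.511~\mathrm{MeV})\,(0.5)}{5~\mathrm{MeV}}} \approx 0.32~\mathrm{rad} \approx 18^{\circ}. \end{equation} Recoil electrons at these energies therefore retain a clear memory of the incoming neutrino direction, which is what makes solar neutrino pointing possible.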
\begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{ReactorXsec.pdf} \caption{The $\bar{\nu}_e + p \rightarrow e^{+} + n$ cross section in units of $10^{-41}$ cm$^{2}$ for antineutrinos in the nuclear reactor energy range.} \label{fig:reactorxsec} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{SNXsec.pdf} \caption{The $\bar{\nu}_e + p \rightarrow e^{+} + n$ cross section in units of $10^{-41}$ cm$^{2}$ for supernova range energies.} \label{fig:reactorxsecSN} \end{figure} A special case of neutrino electron scattering, $\bar{\nu}_{e} + e^{-}$, occurs at very high energies where the center of mass energy is close to the mass of the $W^{-}$ weak boson\cite{GlasRes}. The threshold for this reaction is 6.3 PeV. The cross section is expected to increase by about a factor of 200 near this resonance. The final state, the decay products of the $W^{-}$, is not the simple two body final state of low energy $\nu_{e} + e$ scattering. \subsubsection{Antineutrino Proton Scattering} \begin{figure}[h] \includegraphics[width=0.496\linewidth]{SNRecE.pdf} \includegraphics[width=0.496\linewidth]{SNRecAng.pdf} \caption{The $\bar{\nu}_e + p \rightarrow e^{+} + n$ positron recoil energy in MeV, showing good fidelity with the neutrino energy (left). The mean cosine of the recoil positron angle (right). The positron does not provide useful directional information, being nearly isotropic at these energies.} \label{fig:reactorAngE} \end{figure} The reaction \begin{equation} \bar{\nu}_e + p \rightarrow e^{+} + n, \label{IBD} \end{equation} the charged current scattering of an electron antineutrino from a proton, has been a very effective tool in neutrino physics\cite{nubarp}. This reaction was the basis for the discovery of the neutrino. The presence of both the $e^{+}$ and the $n$ in the final state allows coincidence methods, finding both particles in order to enhance true neutrino signals while rejecting backgrounds that would produce only a single $e^{+}$ or $n$. The process (\ref{IBD}) is known as Inverse Beta Decay, IBD. The cross section for IBD has an approximately quadratic dependence on the neutrino energy (see Figures \ref{fig:reactorxsec} and \ref{fig:reactorxsecSN}). For supernova and reactor antineutrinos this reaction provides good energy resolution, since almost all the energy is carried off by the $e^{+}$ and can be reconstructed (see Figure \ref{fig:reactorAngE}). The recoiling positron lacks directionality (see Figure \ref{fig:reactorAngE}), so it is not useful for locating a source if it is the only available information. Since momentum is conserved, the forward going neutron carries the directional information. Unfortunately the neutron has little kinetic energy and its initial direction is not practically detectable.
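At leading order the kinematics behind this energy fidelity are simple (standard inverse beta decay relations, quoted here for orientation rather than taken from this chapter): \begin{equation} E_{e^{+}} \simeq E_{\bar{\nu}_{e}} - (m_{n}-m_{p}) \simeq E_{\bar{\nu}_{e}} - 1.293~\mathrm{MeV}, \qquad E_{\bar{\nu}_{e}}^{\mathrm{thr}} = \frac{(m_{n}+m_{e})^{2}-m_{p}^{2}}{2m_{p}} \approx 1.806~\mathrm{MeV}, \end{equation} with corrections of order $E_{\bar{\nu}_{e}}/m_{p}$ coming from the small neutron recoil.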
\subsubsection{High Energy Neutrino Interactions} Neutrinos have two properties that make them very appealing for astrophysics. They are electrically neutral, so their path is not bent between the source and the detector and they point back to their origin. Neutrinos are also very penetrating, so they carry information directly from the interior of celestial objects that photons cannot provide. This latter feature makes them difficult to observe, but massive detectors and large fluxes compensate for this weakness. While most neutrinos from low energy astrophysical processes are of the electron type, since those processes basically involve converting protons into neutrons, at higher energies the strong interactions provide an efficient way to make muon type neutrinos. The lightest strongly interacting particle, the pion, decays 99.988\% of the time into a muon and a muon neutrino. Any charged pion not absorbed will make a neutrino. One apparent manifestation of this is the cosmic ray flux at ground level, which is dominated by muons created by the same pion decays that make atmospheric neutrinos. The unstable pion is created by cosmic ray interactions in the diffuse upper atmosphere. It then decays to make muons and muon type neutrinos. Many of the muons also decay, adding to the neutrino flux: $\pi^{+} \rightarrow \mu^{+} + \nu_{\mu}$, $\mu^{+} \rightarrow e^{+} + \nu_{e} +\bar{\nu}_{\mu}$, and the charge conjugate reactions. Intense energetic cosmic neutrino sources are expected to occur whenever energetic pions are produced in a region where pion decay is more likely than absorption. The observation of energetic cosmic gamma ray sources, powered by neutral pion decay, is strong evidence that point neutrino sources should exist. Neutrinos of several hundred MeV and up will interact with normal matter. In particular the protons and neutrons in the nuclei of most matter provide a convenient target. When a charged current neutrino interaction occurs at these energies there will be a charged lepton, an electron, muon or tauon, in the final state, with no apparent incident track since the neutrino is invisible until it interacts. Neutrinos can also interact via the neutral current interaction, in which there is a neutrino in both the initial and final state. Neutral current events have played an important role in astrophysics, for specific reactions like deuterium dissociation, but in general they do not provide enough information about the neutrino energy or the source direction. As shown in Figure \ref{fig:zellertotalnumode} the neutrino cross section increases approximately linearly at these energies. The rise eventually flattens out at the scale of the $W$ boson mass, but for neutrino observations by this class of experiments the linear approximation is reasonable. \subsection{Motivation} Progress in establishing neutrino astrophysics was a consequence of major discoveries in high energy physics in the 1970's. Gauge theories were established as a cornerstone of the field, covering the strong, weak and electromagnetic interactions. A synthesis of these into grand unified gauge theories predicted a quark lepton transition that would make baryons, protons and neutrons, unstable. Protons should decay. The predicted lifetime was quite large, about $10^{28}$ years. This was well above established experimental limits on the proton lifetime. To extend those limits to the predicted lifetime would require massive detectors. The technology for massive particle detectors had been well developed during that time period. Neutrino experiments were critical in establishing the legitimacy of gauge theories, and neutrino experiments require massive detectors. Such detectors would also be needed to establish grand unified theories. Neutrino detectors of that period were either small or very coarse, with poor resolution. Still, some early experiments were based on existing accelerator neutrino experiment technology.
\subsection{Detector Concept} To observe proton decay one needed to see about one proton's mass of energy deposited in the detector, accompanied by very little net momentum. Since most massive detectors are composed of nuclei, the protons and neutrons are bound and so have some Fermi momentum, but it is well below the momentum that would accompany a proton's mass of energy. For example, a neutrino interaction depositing one proton's mass of energy would deposit a comparable momentum. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{IMBwall.jpg} \caption{An underwater view of a wall of photomultiplier tubes in the original IMB detector. The photo illustrates the large absorption length achieved for the water. The distance scale is about 20 meters from wall to wall.} \label{fig:imbwall} \end{figure} To demonstrate a proton decay signal one had to present evidence for the correct energy and momentum deposition. Backgrounds that could be confused with proton decay were intrinsic radioactivity in the detector, cosmic rays and neutrinos present in the environment. Radioactivity can be managed with careful construction, avoiding certain materials. Cosmic rays can be managed by going deep underground. Neutrinos can be recognized since the momentum deposition would be comparable to the energy deposition. A known source of background neutrinos comes from cosmic ray production of unstable particles in the Earth's atmosphere. These are called atmospheric neutrinos and are the major background to many very sensitive experiments on Earth. A major breakthrough in detector design came in 1978, when it was realized that massive water Cherenkov detectors with surface light detectors would give the energy and momentum information needed to distinguish proton decay from backgrounds\cite{sulak}. Surface light detection could be used if the water was very clear, with very little absorption or light scattering. The Cherenkov light rings projected onto the walls are used to reconstruct the direction, length and density of the tracks in the interaction. The diffuseness of the light can be used to classify the kind of track as either showering (electron or photon) or non-showering (muon, pion etc.). Cherenkov light is emitted by a charged particle moving through a transparent material at a speed greater than the speed of light in that medium. For water the light is emitted in a cone at an angle of about 41.2$^{\circ}$ to the charged track. Figure \ref{fig:IMBUpNu} illustrates this and shows an example. \begin{figure} \centering \includegraphics[width=0.495\linewidth]{ChConeSmall.jpg} \includegraphics[width=0.496\linewidth]{IMBNuInteraction.jpg} \caption{A sketch of the production of a Cherenkov cone (left). A computer display of the Cherenkov cone produced by a typical upward going neutrino interaction projected onto the walls of the IMB detector (right).} \label{fig:IMBUpNu} \end{figure} The detector concept can be traced back to DUMAND (Deep Underwater Muon And Neutrino Detector). The DUMAND idea was to utilize naturally occurring massive bodies of water acting as both neutrino target and detector\cite{dumand1}. The light detectors are distributed in the volume of the detector and the optical quality of the water (or ice) is hard to control. At an historic international meeting in Hawaii in 1976 the early DUMAND concept was developed\cite{dumand2}. The Hawaii meeting had strong participation by many of the originators of the IMB detector (John Learned, Fred Reines and Lawrence Sulak). The DUMAND concept has been used in a number of detectors to observe the extremely low flux of very high energy neutrinos in cosmic rays. The concept of an imaging water Cherenkov detector has been used for many astrophysical neutrino observatories, including IMB, Kamiokande, Super-Kamiokande and SNO. More are planned or are under construction. IceCube and related detectors are based on a similar concept, but use ice instead of water.
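The quoted emission angle follows from the standard Cherenkov relation; as a quick check (our arithmetic, with $n \approx 1.33$ for water and $\beta \approx 1$ for relativistic tracks), \begin{equation} \cos\theta_{C}=\frac{1}{n\beta} \quad\Rightarrow\quad \theta_{C} \approx \arccos\left(\frac{1}{1.33}\right) \approx 41^{\circ}, \end{equation} consistent with the value used above for fast tracks in water.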
\section{Detectors} \begin{figure} \centering \includegraphics[width=0.65\linewidth]{Kamiokande89.JPG} \includegraphics[width=0.65\linewidth]{Super-Kamiokande_detector.jpg} \caption{A model of the Kamiokande detector (top) and a drawing of the Super-Kamiokande detector (bottom).} \label{fig:Kam+SK} \end{figure} \subsection{IMB} The first detector of this type was constructed by a collaboration between the University of California at Irvine, the University of Michigan and Brookhaven National Laboratory\cite{IMBDet}. It was called IMB, after the initials of the collaborating institutions. Since the theory to be tested and the detector technique were both speculative, it was built ``on a shoestring''. This was literally true. The light detectors were suspended in the water with nylon filaments, which were inexpensive and durable. The use of the filaments meant that the detector could be maintained. Failing components were lifted out, repaired or replaced. It also allowed the detector to go through two upgrades of the light collection during its operation from 1979 to 1992. The detector could adapt to new physics goals which resulted from its success in achieving its initial goals. The detector had a mass of 8000 tons of water and was viewed by 2048 photomultiplier tubes as light collectors. Figure \ref{fig:imbwall} shows the phototube layout and the clarity of the fluid with an underwater photograph. Another feature of the first detector\cite{IMBDet}, IMB, was multi-hit electronics designed to observe the decay of the proton decay products. The proton decay products would consist only of particles lighter than the proton. Of these the kaon, pion and muon are unstable and decay shortly after the proton. Electrons and photons are stable and so would not give a delayed time signature. The initial proposal emphasized the goal of proton decay, since it was topical and had inspired the original design. But the utility of such a device in studying atmospheric neutrinos and supernova neutrinos was known and was mentioned in the proposal. It was also known that the detector would be sensitive to solar neutrinos. \subsection{Kamiokande} Kamiokande was a similar detector constructed in Japan in 1982. The name comes from its location in the Kamioka mine in western Japan, plus Nucleon Decay Experiment (nde), signifying its original goal. The detector consisted of a cylindrical tank filled with 3000 tons of water viewed by 1000 20 inch photomultiplier tubes (Figure \ref{fig:Kam+SK}). \subsection{Super-Kamiokande} Following the success of IMB and Kamiokande, the two groups joined together to construct a new generation detector. The successes of the first round devices helped focus the design on open physics questions. The detector is 50,000 tons of water in a cylindrical tank 40 meters high and 40 meters in diameter, viewed by 13,000 20 inch photomultiplier tubes on the surface (Figure \ref{fig:Kam+SK}). This review will not cover all the diverse results from this experiment, spanning solar neutrinos, atmospheric neutrinos and neutrinos from a 300 km distant accelerator source.
Many searches have been reported for astrophysical phenomena, but only the sun has yielded a positive signal so far. \subsection{MACRO} The Monopole, Astrophysics and Cosmic Ray Observatory (MACRO) in the Gran Sasso laboratory of Italy was a large underground detector based on a segmented liquid scintillation technology. The detector was 76.6 m long, 12 m wide and 9.3 m high, subdivided into six sections called supermodules. It was installed at a depth of about 3700 hg/cm$^2$.\cite{MACRODet} The detector was composed of three subdetectors: liquid scintillation counters, limited streamer tubes and nuclear track detectors. The lower part of each supermodule had three horizontal planes and two vertical planes of liquid scintillation counters. The active volume of the horizontal scintillators was 11.2 m × 0.73 m × 0.19 m, viewed by two phototubes at each end. The active volume of the vertical planes was 11.1 m × 0.22 m × 0.46 m, viewed by one phototube at each end. The total mass of the 476 scintillators was about 600 tons. The lower part of the detector contained ten horizontal planes of limited streamer tubes. Between the middle eight layers was a rock absorber with a thickness of about 360 g/cm$^{2}$, which set a threshold of about 1 GeV for a crossing muon. Each tube was 12 m long and had a cross sectional area of 3 cm x 3 cm. The angular resolution of the streamer tubes was about 0.2$^{\circ}$. The lower part of the detector contained one horizontal plane of the nuclear track subdetector, with additional planes on the east and north faces. There was a transition radiation detector in the upper part of the device capable of measuring muon energies in the range of $100<E_{\mu}<930$ GeV. Muon energies were also estimated from multiple Coulomb scattering. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{MACRODet.pdf} \caption{A sketch of the MACRO detector in the Gran Sasso Lab. Figure from \cite{GiacoandMarg}.} \label{fig:MACROfig} \end{figure} Figure \ref{fig:MACROfig} is an illustration of the MACRO detector. Unlike the imaging water Cherenkov detectors of the other devices in this section, MACRO was primarily a tracking device designed to detect and identify signals from interactions outside the sensitive volume of the detector. Proton decay and direct neutrino observations were not primary design goals. Instead the detector had a very large sensitive surface area and accurate timing to measure the speed and direction of entering tracks. Some of the physics goals included magnetic monopoles in cosmic rays, atmospheric neutrinos and neutrino oscillations, underground muons, high energy neutrino astronomy and indirect searches for dark matter candidates. Neutrino interactions in the rock were distinguished from penetrating cosmic ray muons by using time of flight to separate upward going muons, caused by neutrino interactions, from the downward going cosmic ray background. \section{Neutrino Oscillations Complications} Experimental neutrino physics relies primarily on the charged current neutrino interaction, both at the source and in the detector. Neutrinos are usually created as electron, muon or tau neutrinos, as identified by the charged lepton involved in their creation. Most observations are via the charged current reaction, where the neutrino is identified by a charged lepton in the final state. Neutral current neutrino interactions are observable, but usually they lack the directional and energy information needed to understand their source.
Neutrino oscillations cause a propagating neutrino to change its flavor from the one tagged at its creation. This puzzled astrophysicists for decades since reliable neutrino sources, such as the sun, could not be observed as expected. It took about 40 years to determine the cause of these discrepancies as due to neutrino oscillations. The detectors discussed in this review were key to resolving and understanding neutrino oscillations. Starting with a muon deficit reported by IMB\cite{NuAnom} to the observation of a distance dependent spectral distortion observed by Super-Kamiokande 15 years later\cite{Sk98}. At astrophysical energies the problem was simple. Electron type neutrinos were created at the source as expected, but on route to the detector some of them transformed into muon or tau neutrinos, which didn't have enough energy to engage in charged current interactions. The possibility of sterile neutrinos, neutrinos with no charged current interaction at all has also been implicated by a number of experiments. Detectors set up to observe neutrinos from the natural world lead the way in understanding neutrino oscillations. These conclusions have been confirmed by terrestrial experiments with man made sources. A full understanding of neutrino oscillations is needed to understand neutrinos from astrophysical sources and must be accounted for. From very large distances the neutrinos propagate as mass eigenstates made up of superpositions of all the neutrino flavors. \section{Results} \subsection{Proton Decay} To date there has been no convincing evidence for proton decay but the search for proton decay continues to be a goal of most massive particle detectors. What had been a primary motivator has now become a secondary goal. The largest lower bound on the proton lifetime come from Super-Kamiokande\cite{SKPDK}. $\tau / Br(p \rightarrow e^{+} \pi^{0})>1.6 \times 10 ^{34}$ years with no candidate events observed and $\tau / Br(p \rightarrow \mu^{+} \pi^{0})>7.7 \times 10 ^{33}$ years with two candidates when 0.87 background were expected. \subsection{Atmospheric Neutrinos} Atmospheric neutrinos, while a source of background to very sensitive experiments also presented an opportunity. Up until that time the goal of neutrino experiments was to maximize the signal. To maximize the signal one wanted the maximum neutrino flux. So the detectors were placed as close to the source as possible. The minimum distance was usually determined by radiation shielding issues since intense neutrino sources were also intense radiation sources. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{gaisf18.eps} \caption{Estimates of the atmospheric flux (times $E^2$) for the various neutrino types. These estimates are made for the Kamioka location but are typical} \label{fig:gaisf18} \end{figure} The atmosphere surrounds the Earth and cosmic rays strike it from all directions. Underground, or surface, detectors are closest to the atmospheric neutrinos created near them but the flux is roughly isotropic. The neutrinos come from all directions. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fwogu-ne-nmu-3.pdf} \caption{The ratio of electron neutrino interactions to muon neutrino interactions in the upward going and downward going hemispheres for maximal neutrino mixing as a function of the absolute neutrino mass difference, $\sqrt{\Delta m_{Atm}^{2}}$ The value rises from 0.5 for small values of $\Delta m_{Atm}^{2}$ to 1 for large values. 
The rise occurs at a much lower value of $\Delta m_{Atm}^{2}$ for upward neutrinos because those neutrinos travel two orders of magnitude further than downward ones. Taken from \cite{Cortez}} \label{fig:fwogu-ne-nmu-3} \end{figure} Atmospheric neutrinos themselves provide little astrophysics information. They are a tertiary result of the impact of cosmic rays on the earth. The primary cosmic rays are charged. Interstellar magnetic fields bend them so they carry little information about their source. The primary motivation to study atmospheric neutrinos is that they create a background source for very sensitive experiments on earth that can not be shielded. The atmospheric neutrino energy spectrum peaks in the energy region expected for proton decay products. The motivation to understand the background to proton decay required an understanding of atmospheric neutrinos. Figure \ref{fig:gaisf18} shows that the atmospheric neutrino flux tends to peak near the 1 GeV energy expected from proton decay. The range of travel of atmospheric neutrinos varies from a few kilometers for those from overhead to the Earth's diameter for those coming up from below. Without moving the detector one can study the time evolution of a neutrino beam over about three orders of magnitude. One can compute the travel time from the direction from which the neutrino has come. \subsection{Neutrino Oscillations} Neutrino oscillations are a property of neutrinos not specific to neutrino astrophysics. But until they were understood neutrino astrophysics was very limited, since oscillations of the neutrino signals en route made it impossible to understand the astrophysics at the source. All of the detectors discussed in this section contributed to resolving this puzzle. Until astronomers tried to observe neutrinos from distant sources physicists had tried to study neutrinos by maximizing the flux by getting very close to the source. Neutrino oscillations are a quantum mechanical time evolution of neutrinos that can not be noticed close to the source, since close to the source neutrinos do not have time to evolve. In retrospect the first evidence for neutrino oscillations came from attempts to observe the $^{8}B$ neutrinos from the sun. Only about $\frac{1}{3}$ of the expected flux was observed. Neutrino oscillations were proposed as a solution but alternate explanations were also popular. To a non-expert it was also possible that the rare side reaction in the sun to make $^{8}B$ might have been overestimated or that the radiochemical method to extract the signal argon from the chlorine might be inefficient. Atmospheric neutrinos, an indirect consequence of cosmic rays in the atmosphere, provide a convenient source of neutrinos at a range of distances from a few kilometers to the diameter of the earth: the same neutrino source can be compared at distances from 10-20 km for those coming from overhead to 13,000 km for those coming up from below, so the travel distance spans about three orders of magnitude without moving the detector and can be computed from the direction from which the neutrino has come.
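As a concrete illustration of that last statement, the propagation distance follows from simple geometry (the production altitude of roughly 15 km used here is a typical air shower height, an assumption rather than a number quoted above). For a detector near the surface of the Earth, of radius $R_\oplus\approx 6371$ km, a neutrino produced at altitude $h$ and arriving at zenith angle $\theta$ has traveled
\[
L(\theta)=\sqrt{R_\oplus^{2}\cos^{2}\theta+2R_\oplus h+h^{2}}-R_\oplus\cos\theta ,
\]
which is $\approx h\approx 15$ km for a neutrino from directly overhead ($\theta=0$) and $\approx 2R_\oplus\approx 1.3\times 10^{4}$ km for one coming straight up from below ($\theta=180^{\circ}$), consistent with the range quoted above.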
A simple two component model describes the change in the flavor content of an initially pure beam of muon neutrinos; the probability for a $\nu_{\mu}$ to remain unchanged is: \begin{equation} P(\nu_{\mu} \rightarrow \nu_{\mu}) = 1-\sin^{2}(2 \theta_{Atm})\times \sin^{2}(1.27 \times \frac{\Delta m_{Atm}^{2} L}{E}) \end{equation} where $\theta_{Atm}$ and $\Delta m_{Atm}^{2}$ are parameters, $L$ is the distance traveled measured in km and $E$ is the neutrino energy measured in GeV. $\Delta m_{Atm}^{2}$ is measured in units of eV$^{2}$. \begin{figure}[th] \centering \includegraphics[width=0.85\linewidth]{HirataAtmNuFitPL1992OscFit.pdf} \caption{The Kamiokande 1992 fit to two component neutrino oscillations. The figure on the left considered $\nu_{\mu}\rightarrow\nu_{e}$ oscillations which were subsequently ruled out by a reactor $\bar{\nu}_{e}$ experiment.} \label{fig:hirataatmnufitpl1992oscfit} \end{figure} The atmospheric neutrino energy spectrum has modest directional energy dependence, due to the earth's magnetic field, that can be calculated. The neutrino oscillation parameters can be estimated from the variation of the muon neutrino flux with direction, that is with $L$, to find $\Delta m_{Atm}^{2}$, and from the magnitude of the modulation to find $\theta_{Atm}$. Proton decay experiments studied atmospheric neutrinos to understand the background to proton decay but also realized their potential to be used to study neutrino oscillations\cite{Cortez}. This source of neutrinos is created as a mixture of four kinds of neutrinos, $\nu_{\mu}, \nu_{e}, \bar{\nu}_{\mu}, \bar{\nu}_{e}$, figure \ref{fig:gaisf18}. Given the range of neutrino propagation distances and the neutrino energies only a range of $\Delta m_{Atm}^{2}$, from about $10^{-4}$ to $10^{-2}$ eV$^{2}$, will provide substantial modulation of the neutrino flux. For values much smaller than this the neutrinos will not have time to evolve so the observations will match expectations. For values much greater than this the neutrinos will be fully mixed and will show little modulation. In this second case the neutrino type ratios measured would deviate from those expected from the flux estimates. A simple argument suggests that atmospheric neutrinos should have twice as many muon neutrinos as electron neutrinos. The neutrinos originate from pion decay to a muon and a muon neutrino. The muon is unstable and will decay to a muon type neutrino, an electron type neutrino and an electron or positron. $\pi^{+}\rightarrow\mu^{+}+\nu_{\mu}$ and $\mu^{+}\rightarrow e^{+}+\bar{\nu}_{\mu}+\nu_{e}$. So one gets two muon type neutrinos for every electron neutrino. This argument is a bit sloppy since the neutrinos produced will have different energies and so different interaction rates. The neutrino and antineutrino interaction rates on ordinary matter also differ. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{SK98loe.eps} \includegraphics[width=0.6\linewidth]{SK98kamsk.eps} \caption{The Super-K 1998 L/E distribution for electron and muon events (top). The Super-K 1998 fit of the atmospheric neutrino data to oscillations (bottom). The fit involves 8 parameters in addition to the two for neutrino oscillations.} \label{fig:sk98kamsk} \end{figure} The first evidence that the observations did not match the expected flux came from IMB, where the earliest results showed a deficiency of muon neutrino interactions as a fraction of the total measured sample\cite{BruceBill}. These initial indications were followed up with a thorough search for possible causes for the missing muons.
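Before turning to how the deficit was tracked down, it helps to put numbers into the survival probability above (using a representative $\Delta m_{Atm}^{2}\approx 2.5\times 10^{-3}$ eV$^{2}$, close to the value quoted below, and $E=1$ GeV):
\[
1.27\,\frac{\Delta m_{Atm}^{2}L}{E}\approx 0.05 \quad (L\approx 15\ {\rm km})\ ,\qquad 1.27\,\frac{\Delta m_{Atm}^{2}L}{E}\approx 40 \quad (L\approx 13{,}000\ {\rm km}),
\]
so downward going muon neutrinos ($L\approx 15$ km) arrive essentially unoscillated while, for upward going ones ($L\approx 13{,}000$ km), the rapid oscillations average out and the survival probability approaches $1-\frac{1}{2}\sin^{2}(2\theta_{Atm})$. This up--down asymmetry is what the analyses described below exploit.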
Experimental problems were ruled out since the easily identified sample of muons from cosmic rays that stopped in the detector behaved as expected. An IMB comparison of an upward going sample with a downward going sample set limits on neutrino oscillations\cite{NuOscPRL1985}. A neutrino oscillation analysis using only muon neutrinos and a plot of the neutrino $E/L$ distribution were also published\cite{ICRCIMB1985}. No claims of discovery were made at the time because other experiments using different muon and electron identification methods were indicating no anomaly\cite{NuSex,Koshiba}. In early 1986 the IMB muon neutrino deficit was compared with results of NuSex and Kamiokande in a review talk by LoSecco\cite{LLou}: ``Both the Kamioka detector and the Nusex detector can distinguish $\nu_{e}$ from $\nu_{\mu}$ by shower development. They quote a $\nu_{e}/\nu_{\mu}$ flux ratio of 0.36$\pm$0.08 and 0.28$\pm$0.11 respectively. These are lower than the expected value of 0.64. The IMB group has studied the fraction of their contained events resulting in a muon decay. The 26\% observed can be converted to a $\nu_{e}/\nu_{\mu}$ ratio with a number of assumptions about muon capture in water. If 40\% of the $\nu_{\mu}$ interactions do {\em not} result in a muon decay signal the observed value corresponds to $\nu_{e}/\nu_{\mu}\approx1.3$. The problem of the $\nu_{e}/\nu_{\mu}$ ratio is still under active study.'' The three experiments used different methods to make their measurements. LoSecco had noticed earlier, from a table published by Kamiokande\cite{Koshiba}, that while their official result based on shower development showed the quoted electron deficiency their muon decay rate supported the IMB result. Shortly after the review talk the IMB deficit was published in Physical Review Letters\cite{NuAnom}. In June of 1986 LoSecco traveled to Tokyo to discuss the discrepancy between the two classifications used by Kamiokande. He was told unambiguously that the shower development method, yielding a muon excess, was correct. The full story of the eventual confirmation of the IMB deficit by Kamiokande in 1988\cite{Hirata} can be found in \cite{AnomHist}. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{sk4_17a_standard_q13-fixed_full__single_NHIH_S23M23.pdf} \caption{The Super-Kamiokande IV fit to $\Delta m^{2}_{23}$ and $\sin^{2}{\theta_{23}}$.} \label{fig:sk417astandardq13-fixedfullsinglenhihs23m23} \end{figure} The effect was large. About 40\% of the expected muon signal was missing. Attempts to fit the neutrino oscillation hypothesis were stymied by the absence of any significant variation with distance or energy. The best fit to these data, figure \ref{fig:hirataatmnufitpl1992oscfit}, gave $\Delta m_{Atm}^{2} = 8 \times 10^{-3}$ eV$^{2}$ and $\sin^{2}(2 \theta_{Atm})=0.43$, basically isotropic with 40\% of the muons missing. Since the oscillation probability depends on $\frac{L}{E}$ the absence of dependence on either of these was problematic. The distance traveled $L$ is limited by the size of the earth. On the other hand the energy $E$ was limited by the detector size. If a neutrino interaction entered or exited the detector one only had partial information about its energy. Entering tracks had an additional advantage since these tracks, if going upward, came from neutrino interactions in the rock. The rock represented a much larger mass neutrino target so the neutrino interaction rate would be larger. Still a lot of information was missing.
Attempts to understand the muon deficit using upward going stopped and through going events were limited by modeling of the neutrino source and the interaction volume. Ultimately a larger detector was needed to resolve the question. \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{Proton_proton_cycle.png} \caption{Nuclear reactions yielding neutrinos in the sun. The neutrinos are labeled in red. Branching ratios are shown} \label{fig:snreactions} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{SNfig3_fluxes.pdf} \caption{The solar neutrino fluxes. Taken from \cite{snu}} \label{fig:snfig3fluxes} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.7\linewidth]{SNfig4_oldexpts.pdf} \caption{The solar neutrino observations. Taken from \cite{snu}} \label{fig:snfig3fluxesObs} \end{figure} A strong motivation for the 25 kiloton Super-Kamiokande detector was to extend the energy range over which neutrino energies could be measured. Super-K began observations in 1996 and by 1998 had accumulated enough atmospheric neutrino interactions, almost 5000 events, to make a clear measurement\cite{SuperK98}. \begin{figure}[tb] \centering \includegraphics[width=0.875\linewidth]{KamikandeSolNuAng.pdf} \caption{The Kamiokande results on solar neutrinos, The angular distribution. Taken from Nakamura\cite{KamStat}.} \label{fig:kamikandesolnu} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.7\linewidth]{KamikandeSolNu.pdf} \caption{The Kamiokande results on solar neutrinos. The energy distribution is compared to the standard solar model and two models of neutrino oscillations. Taken from Nakamura\cite{KamStat}.} \label{fig:kamikandesolnuE} \end{figure} The most recent Super-Kamiokande report on atmospheric neutrino oscillations\cite{SuperK19} was based on a 253.9 kton-year exposure and had about 24,000 events. The fit over 515 analysis bins, figure \ref{fig:sk417astandardq13-fixedfullsinglenhihs23m23}, results in $\Delta m_{32}^{2} = 2.63 \pm ^{+0.10}_{-0.21} \times 10^{-3}$ eV$^{2}$ and $\sin^{2}{\theta_{23}}=0.588^{+0.030}_{-0.062}$ or $\sin^{2}{\theta_{23}}=0.425^{+0.051}_{-0.034}$ in the first octant with normal mass hierarchy. The fit includes 118 free parameters in addition to those describing neutrino oscillations. MACRO was able to measure atmospheric $\nu_{\mu}$ interactions via three channels: upward through going muons due to $\nu_{\mu}$ interactions in the rock with a mean neutrino energy of 50 GeV, upward going exiting muons with a neutrino interaction in the lower part of the detector and upward stopping muons due to a lower energy $\nu_{\mu}$ interaction in the rock. The average neutrino energy for the last two channels was 2-3 GeV. The upward through going muons had an angular distribution inconsistent with that expected from no neutrino oscillations. The angular distribution is explained by $\nu_{\mu}\rightarrow\nu_{\tau}$ with parameters consistent with Super-Kamiokande quoted above.\cite{MACRONuOsc} Due to substantial uncertainties from the neutrino flux estimates subsequent analyses using 5 ratios of signal sub-samples were done. 
The ratio of vertical to horizontal high energy muons, the ratio of low energy to high energy through going muons, the ratio of the ratios of data to Monte Carlo for the upward going neutrino events interacting in the detector to the ratio of interacting downward going and upward going stopped events, the ratio of the observed over expected through going muons and the ratio of low energy semi-contained events (starting or ending in the detector) over the expected value. All of these ratios confirmed the shape fit. \subsection{Solar Neutrinos} The sun and most stars derive their energy by converting hydrogen into helium. For this to happen two protons must convert into neutrons, creating a positron and an electron neutrino for each conversion. The neutrinos escape the core of the sun. There are additional nuclear reactions in the sun and stars that extend the neutrino energy well above what one would have with simple hydrogen "burning". The proton-proton cycle converts hydrogen into helium in the suns core\cite{snu}. \[ 2 e^{-} + 4 p \rightarrow ~^{4}He + 2 \nu_{e} + 26.73 MeV\] It produces two $\nu_{e}$ and 26.73 MeV of energy for every He produced. The energy produced thermalizes and the thermal pressure balances the gravitational pressure to provide hydrostatic equilibrium. The energy produced at the center of the sun eventually is radiated from the surface and must be replenished to keep the sun in thermodynamic equilibrium. The neutrinos produced leave the core and can be observed. The full proton proton chain has several steps with intermediate products \[p + p \rightarrow ~^{2}H + e^{+} + \nu_{e}\] creates $~^{2}H$, deuterium. \[p + ~^{2}H \rightarrow ~^{3}He\] creates $~^{3}He$, helium 3 and \[~^{3}He + ~^{3}He \rightarrow ~^{4}He + 2 p\] releases the two intermediate protons. The sun is consuming fuel and so its energy generation is evolving. The time scale for changes is billions of years, so for observational work it may be considered a static source. The neutrino flux from the proton proton reaction, $ p + p \rightarrow ~^{2}H + e^{+} + \nu_{e}$, is about $6.05 \times 10^{10} \nu/cm^{2}/s$ at earth. But the endpoint of this reaction is about 420 keV which makes it hard to observe except with the radiochemical reaction $\nu_{e} + ~^{71}Ga \rightarrow ~^{71}Ge + e^{-}$. The spectral shape is known so an integrated reaction rate can be computed from the flux and cross section above the $~^{71}Ga$ reaction threshold of 233 keV. The reaction $~^{3}He + ~^{4}He \rightarrow ~^{7}Be + \gamma$ opens up several opportunities for nuclear reactions yielding higher energy neutrinos. In particular the reaction $~^{7}Be + p \rightarrow ~^{8}B + \gamma$ provides access to the $~^{8}B$ decay neutrinos $~^{8}B \rightarrow ~^{8}Be + e^{+} + \nu_{e}$ which extend out to energies of about 15 MeV as seen in figure \ref{fig:snfig3fluxes}. The process $~^{7}Be + e^{-} \rightarrow ~^{7}Li + \nu_{e}$ produces two energies at 860 keV (90\%) and 380 keV (10\%). The highest energy neutrinos come from the reaction $~^{3}He + p \rightarrow ~^{4}He + e^{+} + \nu_{e}$ with an endpoint energy of 18.7 MeV. But the reaction is very rare and yields a flux of about $8 \times 10^{3}\nu/cm^{2}/s$ at earth. Many techniques of solar neutrino observations use radiochemical methods such as $\nu_{e} + ~^{37}Cl \rightarrow ~^{37}Ar + e^{-}$ or $\nu_{e} + ~^{71}Ga \rightarrow ~^{71}Ge + e^{-}$ where the unstable final state nucleus is extracted and observed via its own decay. 
Such methods can measure an integrated flux but give no information about the neutrino energy or direction. The $~^{71}Ga$ reaction has a threshold of 233 keV, below the 420 keV endpoint of the proton proton reaction. The produced $~^{71}Ge$ has a half life of 11.43 days and can be observed via its decay. Because $~^{71}Ga$ observes the initial reaction of the main energy production mechanism in the sun one expects about 2.69 reactions with solar neutrinos in 10 tons of natural $Ga$ during the 11.43 day lifetime of the $~^{71}Ge$ Because of their low energy thresholds radiochemical methods have played a critical role in understanding solar neutrinos. But the methods only measure an integral and are insensitive to neutrino flavor. The very low rate of solar neutrino events in $~^{37}Cl$ (figure \ref{fig:snfig3fluxesObs}) was the initial solar neutrino problem which took decades of work to resolve. Water contains oxygen nuclei, free protons and electrons. Electron neutrinos from the sun will not have charged current reactions on the nuclear targets present. Other neutrino flavors also have insufficient energy to react with these nuclei. So the water detectors covered by this review utilize neutrino electron scattering, which provides directional information. IMB considered studying solar neutrino scattering. It was suggested by Tegid W. Jones of University College London when he joined the collaboration and moved temporarily to Michigan. Jones had been a member of the Garagamelle collaboration that had first observed $\nu + e^{-}$ scattering. IMB concluded that at its depth of 600 m, about 1570 meter water equivalent the decay of muon produced radionuclides would make it hard to see a solar signal. The muon rate at IMB was 2.73 muons per second. Observing solar neutrino scattering would have required much more light collection to get down to the 5-10 MeV energy region where the signal would be found. At those energies radioactive contaminants are also significant and would have required significant remedies to eliminate. On the plus side, salt mines are much cleaner radiologically and the mine had a tradition of doing very low background nuclear physics, such as double beta decay. Administratively, proton decay was funded by the high energy community but solar neutrinos was identified with the nuclear physics community. It wasn't clear if the nuclear physics community would support US work, in this manner at that time. \begin{figure}[tb] \centering \includegraphics[width=0.495\linewidth]{dnspeccont3.eps} \includegraphics[width=0.495\linewidth]{globalcont.eps} \caption{The Super-Kamiokande fit to $\nu_{e}$ oscillations, left. Right has the combined fit with SNO and KamLAND data.} \label{fig:dnspeccont3} \end{figure} The Kamiokande detector adopted solar neutrinos as a physics goal shortly after it was clear that the proton was not going to decay at the predicted rate. Kamiokande was smaller and deeper, at 2030 meter water equivalent and so had a muon rate of 0.4 muons per second. To accomplish the solar neutrino goal required changes to the experiment. Radioactivity such as radon had to be removed from the water, the energy threshold needed to be lowered, gamma rays from the surrounding rock needed to be reduced and timing was added to the electronics. The University of Pennsylvania joined the collaboration to provide the electronics. A water purification system was added to remove radon and an instrumented water shield was added to attenuate gamma rays from the rock. 
The upgraded detector was named Kamiokande-II. First results on solar neutrinos were reported in 1987\cite{KamStat}. These were based on a fiducial mass of 680 tons and 450 days of observation. The trigger threshold was 6.7 MeV at 50\% efficiency rising to 90\% at 8.8 MeV. The analysis threshold was taken above 9.3 MeV recoil electron energy. Angular resolution at 10 MeV was 28$^{\circ}$ and the energy resolution at this energy was 22\%. The observations were ($46\pm13\pm8$)\% of the expected value of the $~^{8}B$ neutrino flux of $5.8 \times 10^{6} \nu/cm^{2}/s$. During a similar period the $~^{37}Cl$ experiment reported ($53\pm20$)\% of the expected signal. This was a very important result since it removed doubts about experimental inefficiencies for either project and shifted attention toward possible problems with neutrino propagation or the solar model. Kamiokande-II was improved by doubling the phototube gain. The energy resolution at 10 MeV dropped to 19.5\%. The trigger threshold was lowered to 6.1 MeV at 50\% and 7.9 MeV at 90\%. The analysis threshold was dropped to 7.5 MeV. Based on 288 days of data through April 1989 ($39\pm9\pm6$)\% of the expected flux was reported. These results are summarized in figures \ref{fig:kamikandesolnu} and \ref{fig:kamikandesolnuE}. A major motivation for the construction of Super-Kamiokande was a better measurement of solar neutrinos. The mass used for solar neutrino observations was 22.5 ktons, as compared to the 0.68 ktons for Kamiokande. Super-Kamiokande started with an energy threshold of 4.49 MeV but was eventually able to lower it to 3.49 MeV. The detector started operation in 1996 and went through 4 configurations. The solar neutrino flux, assuming 100\% electron neutrinos, measured in each of those phases\cite{SKSol} is shown in the table below. \begin{center} \begin{tabular}{|cc|} \hline Phase & Flux ($\times 10^{6}/(cm^{2}s)$) \\ \hline SK-I & 2.380 $\pm$ 0.024 $^{+0.084}_{-0.076}$ \\ SK-II & 2.41 $\pm$ 0.05 $^{+0.16}_{-0.15}$\\ SK-III & 2.404$\pm$ 0.039 $\pm$ 0.053\\ SK-IV & 2.308$\pm$ 0.020 $^{+0.039}_{-0.040}$\\ Combined & 2.345 $\pm$ 0.014 $\pm$ 0.036\\ \hline \end{tabular} \end{center} \begin{figure}[tb] \centering \includegraphics[width=0.525\textwidth]{Kotakeanue_spe.pdf} \includegraphics[width=0.525\linewidth]{Rafellt2012fig45.pdf} \caption{Top: Energy spectra of $\bar{\nu}_{e}$ at different times and the Fermi-Dirac distribution with the same average energies \cite{TotaniSatoDalhedWilson98}. The chemical potentials of the Fermi-Dirac distributions are set to zero. Bottom: The cross section per molecule for various interactions of neutrinos on water. The plot spans the expected range of supernova neutrino energies. The interaction rate is clearly dominated by $\bar{\nu}_{e} p$ charged current scattering. Taken from reference \cite{Raf}. \label{fig:anue_spe}} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.65\textwidth]{SNDetEffRaf.pdf} \caption{The detection efficiency for supernova $\bar{\nu}_{e}$ for Kamiokande and IMB. The solid curves are for the positron energy. The dashed curve is for the neutrino energy. Taken from reference \cite{SNNuO}.} \label{fig:rafellt2012fig45} \end{figure} Since the direction of the neutrino is known it is possible, in principle, to reconstruct the solar neutrino energy from just the scattering angle and the energy of the final state electron. Multiple scattering of the recoil electron makes this impractical so fits have been done to the electron recoil energy.
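For reference, the kinematics behind that statement (standard two body scattering, not a formula taken from the analyses themselves): for a neutrino of energy $E_{\nu}$ scattering elastically off an electron at rest, the electron recoil kinetic energy $T$ and scattering angle $\theta$ with respect to the neutrino direction satisfy
\[
\cos\theta=\Bigl(1+\frac{m_{e}}{E_{\nu}}\Bigr)\sqrt{\frac{T}{T+2m_{e}}}\ ,
\]
so in principle a measurement of $T$ and $\theta$ determines $E_{\nu}$ event by event; it is the smearing of $\theta$ by multiple scattering of the low energy electron that makes this impractical in water.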
Solar neutrino oscillations, which are dominated by MSW effects in the source, can also undergo MSW induced oscillations in passing through the earth. A two component neutrino oscillation analysis parameterized by $\Delta m_{12}^{2}$ and $\theta_{12}$ strongly constrains solar neutrino oscillations, figure \ref{fig:dnspeccont3}. Super-Kamiokande results, combined with those from SNO and KamLAND, clearly remove the vacuum oscillation solution; the combined fit is also shown in figure \ref{fig:dnspeccont3}. The results on oscillation parameters were: $\Delta m_{12}^{2} = 4.8 ^{+1.5}_{-0.8} \times 10^{-5}$ eV$^{2}$ and $\sin^{2}(\theta_{12})=0.334 ^{+0.027}_{-0.023}$. \subsection{Supernova Neutrinos} A star that has consumed its nuclear fuel can no longer support itself against gravity. The star contracts and goes through many burning stages until it can not support its mass via thermal pressure. If the star is massive enough there is no other source of support. The star collapses under its own weight. Neutrinos play two roles in the collapse process. A neutronization process in the core converts all electrons and protons to neutrons, which removes the electron Fermi gas pressure in the core and initiates a collapse of the star\cite{HowSN}. The collapse converts massive amounts of gravitational binding energy ($3\times 10^{53}$ ergs) to heat. The heat is trapped by all of the mass above it. The only particles that can leak out of the star are neutrinos. Neutrinos in thermal equilibrium rapidly diffuse and cool the star. The neutronization process also produces one electron neutrino for every neutron created. These neutronization neutrinos can escape before the collapse has made the star opaque to neutrinos. Supernova neutrino signals from near enough supernovae are observable with Earth based neutrino detectors. One expects a transient signal correlated with optical observations. The neutrino burst lasts about 10 seconds and nearby supernovae are rare events. The estimated rate of supernovae in our galaxy is $4.6^{+7.4}_{-2.7}$ per century\cite{ONGS}. The last supernova seen in our galaxy was in 1604, over 400 years ago. Since the light and neutrinos travel at (near) the speed of light there is no way to anticipate a supernova neutrino burst. Observations must be made with massive detectors capable of recording neutrinos of modest energies and operating them as much as possible. Figure \ref{fig:anue_spe} shows an estimated electron antineutrino spectrum from a supernova. The energy stretches from a few MeV to 60 MeV but is peaked at about 15 to 20 MeV. The flux drops as a function of time. Note most of the flux is gone after 15 seconds. The dashed curves in figure \ref{fig:anue_spe} are a Fermi-Dirac distribution with the same average energy. The different neutrino energies have different opacities and so are released at varying equilibrium temperatures, distorting the spectrum. The rate of neutrino interactions depends on the flux, as in figure \ref{fig:anue_spe}, times the cross section, as in figure \ref{fig:reactorxsec}, for $\bar{\nu}_{e}$ charged current reactions on hydrogen. Observation of neutrinos from a supernova burst is usually done through observations of the $\nu_{e}$ and $\bar{\nu}_{e}$ charged current reactions. While all neutrino types are produced, and neutrino oscillations mix all the neutrino types together, neutrinos of the other types are below the charged current threshold and so can only interact via neutral currents.
The neutral current has a smaller cross section and gives only a fraction of the neutrino energy deposition. Neutral currents provide a less useful, more ambiguous, signal for observations. \begin{table}[ht] \begin{center} \begin{tabular}{|cccc|cccc|} \hline Kamiokande: 1987 & & & & IMB: 1987 & & & \\ \hline event & time & energy & angle & event & time & energy & angle \\ & (sec) & (MeV) & (deg) & & (sec) & (MeV) & (deg) \\ 1 & 0.000 & 20.0 $\pm$ 2.9 & 18 $\pm$ 18 & 1 & 0.000 & 38 $\pm$ 7 & 80 $\pm$ 10 \\ 2 & 0.107 & 13.5 $\pm$ 3.2 & 40 $\pm$ 27 & 2 & 0.412 & 37 $\pm$ 7 & 44 $\pm$ 15 \\ 3 & 0.303 & 7.5 $\pm$ 2.0 & 108 $\pm$ 32 & 3 & 0.650 & 28 $\pm$ 6 & 56 $\pm$ 20 \\ 4 & 0.324 & 9.2 $\pm$ 2.7 & 70 $\pm$ 30 & 4 & 1.141 & 39 $\pm$ 7 & 65 $\pm$ 20 \\ 5 & 0.507 & 12.8 $\pm$ 2.9 & 135 $\pm$ 23 & 5 & 1.562 & 36 $\pm$ 9 & 33 $\pm$ 15 \\ 6 & 1.541 & 35.4 $\pm$ 8.0 & 32 $\pm$ 16 & 6 & 2.684 & 36 $\pm$ 6 & 52 $\pm$ 10 \\ 7 & 1.728 & 21.0 $\pm$ 4.2 & 30 $\pm$ 18 & 7 & 5.010 & 19 $\pm$ 5 & 42 $\pm$ 20 \\ 8 & 1.915 & 19.8 $\pm$ 3.2 & 38 $\pm$ 22 & 8 & 5.582 & 22 $\pm$ 5 & 104 $\pm$ 20 \\ 9 & 9.219 & 8.6 $\pm$ 2.7 & 122 $\pm$ 30 & & & & \\ 10 & 10.433 & 13.0 $\pm$ 2.6 & 49 $\pm$ 26 & & & & \\ 11 & 12.439 & 8.9 $\pm$ 1.9 & 91 $\pm$ 39 & & & & \\ \hline \end{tabular} \end{center} Table from \cite{KST} \label{tab:SN1987A} \end{table} \begin{figure}[tb] \centering \includegraphics[width=0.5\linewidth]{Rafellt2012fig47.pdf} \caption{A plot of the energy and time data from table \ref{tab:SN1987A} The gray region shows the energy region below 30\% detection efficiency. Events from the Baksan detector, composed of liquid scintillator are also shown. The time scales are synchronized on the first event. Figure from reference \cite{Raf}} \label{fig:rafellt2012fig47} \end{figure} On February 23, 1987 a neutrino burst originating in the LMC galaxy was observed by Kamiokande, IMB and a liquid scintillation detector in Baksan USSR. The source of the neutrinos was associated with a visible supernova, confirming astrophysical models of the role of neutrinos in stellar collapse. The distance to the source was outside our galaxy, about 160,000 light years away. The eight events reported by IMB and the eleven events reported by Kamiokande are listed in table \ref{tab:SN1987A}. The observable quantities were the arrival time, the energy of the recoiling positron and the angle of the positron with respect to the neutrino direction. The times and energies are plotted in figure \ref{fig:rafellt2012fig47}. To convert these observations into useful physics results needs an understanding of the detection efficiency, figure \ref{fig:rafellt2012fig45} and the fiducial volume, the target mass. The times in table \ref{tab:SN1987A} are all relative to the first event from the burst in that detector. There was no synchronization between the experiments. The IMB time was taken from the WWVB time standard accurate to about $\pm50$ ms. The first IMB event was observed on February 23 at 7:35:41.37 UT. \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{Rafellt2012fig48.pdf} \caption{A fit of the total energy in neutrinos, the neutron star binding energy, $E_{b}$ and the electron antineutrino temperature $T_{\bar{\nu}_{e}}$ for both IMB and Kamiokande data and for the combined sample.} \label{fig:rafellt2012fig48} \end{figure} Analysis of the 1987 supernova data was encouraging at the time. But neutrino oscillations were not yet fully understood. 
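Setting oscillations aside for a moment, a rough consistency check shows why only a handful of events per detector was expected. Using numbers already quoted in this section (a total energy of $\sim 3\times 10^{53}$ ergs shared roughly equally among six neutrino species, an average energy of $\sim 15$ MeV, and a distance of about 160,000 light years $\approx 50$ kpc), together with an assumed low energy inverse beta decay cross section of roughly $9\times 10^{-44}(E_{\nu}/{\rm MeV})^{2}$ cm$^{2}$, the $\bar{\nu}_{e}$ fluence at Earth is
\[
\Phi\approx\frac{(3\times 10^{53}\ {\rm erg})/6}{(15\ {\rm MeV})\, 4\pi (50\ {\rm kpc})^{2}}\approx 7\times 10^{9}\ {\rm cm}^{-2} ,
\]
and with about $6.7\times 10^{31}$ free protons per kiloton of water this gives of order ten $\bar{\nu}_{e}p$ interactions per kiloton before detection efficiency, consistent with the event counts in table \ref{tab:SN1987A} once the fiducial masses and the efficiencies of figure \ref{fig:rafellt2012fig45} are taken into account.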
Since supernovae produce all kinds of neutrinos, and they may have different energy distributions, analysis is complicated by neutrino flavor oscillations effects at the source, in propagation and at detection\cite{SNNuO}. An analysis of the Kamiokande and IMB supernova observations is illustrated in figure \ref{fig:rafellt2012fig48}. The fit\cite{Raf} is in fairly good agreement with the expectations of $3\times10^{53}$ ergs and $3.5$ MeV. \subsection{Point Sources} The presence of cosmic ray photons above about 1 GeV make it clear that there are energetic astrophysical processes in the universe that are capable of producing unstable elementary particles that decay into neutrinos. Since it is much easier to absorb energetic gamma ray photons than neutrinos one expects the neutrino flux to be higher than the gamma ray flux. IMB searched for localized directional sources of neutrinos\cite{Svob1987} based on a sample of 172 upward going muons. The use of upward muons restricted observations to when the source was below the horizon. A search of 10 preselected sources showed no significant excesses. A search using contained events\cite{IMBPS} gave access to the entire sky but also yielded no excess above background for any of 15 preselected sources. Super-K has also looked for point sources\cite{Thane} using 2623 days of data collected from 1996 to 2007. The search was conducted with upward going muons coming from neutrino interactions below the detector. Four searches were conducted, an unbiased search over the whole sky, a search of 16 preselected astrophysical candidates, a search correlated with 27 ultra high energy cosmic rays from Auger and a search for correlations with about 2200 gamma ray bursts in the BATSE, HETE and Swift catalogs. No significant signals were found. Super-K searches for specific sources\cite{SKBlaz} have also not yielded any significant signal. \subsubsection{MACRO} MACRO has reported on searches for astrophysical neutrinos.\cite{MACRO}. The data was recorded from March 1989 to September 1999 with a total live time of about 6.4 years, with varying acceptances. 1100 upward-going muon events were recorded. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{Macro-BatseCorr.pdf} \caption{The time and angular differences between the MACRO 1100 upward muon sample and the 2527 BATSE gamma ray burst sample. The lower plot has a time limit of $\pm$10,000 seconds. The upper plot selects the sample within $\pm$1,000 seconds. Three BATSE bursts are labled.} \label{fig:macro-batsecorr} \end{figure} A search for clusters of upward muon events in a cone of 1.5$^\circ$, 3$^\circ$ or 5$^\circ$ (half angle) of a direction found 60 clusters when 56.3 would be expected from the atmospheric neutrino background. None of the clusters were statistically significant. A search in the direction of 42 potential energetic neutrino sources selected from known energetic photon point-sources obtained a flux limit of $\Phi_{lim}(90\%) = 3.06 \times 10^{-16}$ cm$^{-2}$ s$^{-1}$. They conducted similar searches in the direction of 220 supernova remnants, 181 blazars, 2527 BATSE sources, 271 EGRET sources, 32 BeppoSAX sources and 29 NovaeX sources with varying limits with none above $6.84 \times 10^{-16}$ cm$^{-2}$ s$^{-1}$. They also looked for time correlations with transient events from BATSE, figure \ref{fig:macro-batsecorr} and BeppoSAX. Results were compatible with atmospheric backgrounds. 
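For orientation, the flux limits quoted in this subsection all follow from essentially the same recipe (the symbols $A_{\rm eff}$ and $T$ below are generic, not taken from the MACRO papers): with $N_{90}$ the 90\% confidence Poisson upper limit on the number of signal events ($N_{90}\approx 2.3$ for zero events observed and no expected background), $A_{\rm eff}$ the effective detection area for the source direction and $T$ the live time,
\[
\Phi_{lim}(90\%)\approx\frac{N_{90}}{A_{\rm eff}\,T}\ ,
\]
so larger detectors and longer exposures push the limits down linearly, while a genuine source would instead show up as a cluster of events in direction or in time above the atmospheric neutrino background.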
\subsection{Dark Matter} Strong evidence for dark matter comes from observation of gravitationally bound systems of stars and galaxies and of the cosmic microwave background and the large scale structure (the distribution of galaxies) of the universe\cite{DM}. Conventional understanding of physics suggests that the energy density observed as dark matter is made up of massive particle-like quanta with all of the symmetries expected by quantum field theory. Stable dark matter must have no electric charge or we would find evidence for it in bound states, like atoms, interacting with electromagnetic radiation. If it had electric charge it would be visible, emitting and absorbing light. Dark matter accounts for about 85\% of the matter in the universe, with an energy density of about $2.2\times 10^{-27}$ kg/meter$^3$. Since it is responsible for most of the gravitational structure in the universe its distribution is not expected to be uniform. Observation of the particle manifestation of dark matter would be a major step in understanding its physical nature. It must be present in most regions dominated by gravitational binding. Since it is present in our laboratories substantial effort has gone into {\em direct detection} where one observes an interaction between dark matter particles and a conventional detector. Direct detection is one of three major observational methods that are being pursued. Production methods of dark matter detection use the enormous energies of high energy colliders to produce dark matter out of pure energy. Production methods have been used very effectively to fill in almost all of our knowledge about elementary particles, since almost all of them are unstable, have long since decayed and are no longer present in our environment. The idea of production methods is to create matter and filter out all the components of ordinary matter. What is left would be a dark matter candidate. Indirect methods to observe dark matter take advantage of its ability to clump and the general properties of matter described by relativistic quantum field theories. In particular one expects dark matter to come in matter and antimatter versions. When matter and antimatter meet they convert to pure energy. Even if the dark matter has virtually no particle-like interactions with ordinary matter the energy of annihilation will be detectable. \subsubsection{Annihilation} Indirect methods provide a link to neutrino astrophysics. Annihilation rates depend on the square of the density of dark matter particles. Gravitationally induced clumping of dark matter enhances the annihilation rate. Because dark matter interacts only weakly with normal matter, once the dark matter becomes gravitationally bound it will accumulate inside of celestial objects. Access to the normal matter annihilation products will be blocked from view, except for annihilation products which decay promptly to neutrinos. So bounds on energetic neutrinos from the sun and the earth can be used to discover dark matter. IMB was the first to look for evidence of dark matter annihilation\cite{DM87}. Both contained events and through-going muons were studied. No statistically significant excesses were reported. Model dependent bounds were placed on some forms of dark matter. Super-Kamiokande had multiple searches for evidence of dark matter\cite{SKDM}. They looked for evidence in upward going through-going muons from the direction of the earth center, the sun and the galactic center. No significant excess was found.
Limits on the WIMP dark matter proton cross section in the range of $10^{-41} cm^{2}$ to $10^{-38} cm^{2}$ were set depending on the WIMP mass and the assumed form of the coupling. A later search with a much larger exposure confirmed these limits. \subsubsection{Direct Dark Matter Searches} Many forms of dark matter are expected to be unstable and decay. Most dark matter searches look for the stable, neutral, lightest particle of the dark matter class. There is no reason why unstable dark matter with a very long lifetime would not persist to the present time, like the naturally occurring thorium 232, uranium 238 and potassium 40. The ambient dark matter would then present an unshieldable component in high sensitivity experiments such as proton decay. The signal would not look like proton decay unless there was an unlikely coincidence of mass scales. Unlike the atmospheric neutrino component the decay of dark matter should have no net momentum, but this constraint can be confused by decays to lighter dark matter particles which would carry off momentum unseen. So the decay of long lived metastable dark matter would appear as an isotropic anomalous component of the atmospheric neutrino signal. Such a ``component'' was found\cite{NuAnom} but was eventually identified as a consequence of neutrino oscillations. Super-Kamiokande did a direct search for boosted dark matter coming from the sun or the galactic center with a 161.9 kt yr exposure. The search yielded no significant signal. Boosted dark matter is a form of dark matter which is energetic and may have been produced from cold dark matter. The search was for energetic electrons recoiling from the dark matter as it crossed the detector. They looked for recoiling electrons in the energy range of $0.1<E_{e}<1000$ GeV. Signal rates closely matched estimates due to atmospheric neutrinos. \subsection{Magnetic Monopoles} In addition to proton decay grand unified theories predicted the existence of magnetic monopoles\cite{HoftPoly}. A magnetic monopole is an object with a single magnetic charge. All known magnetic states are composed of magnetic dipoles containing both north and south magnetic charges. The existence of stable magnetic monopoles meant that they would be produced during the Big Bang and would survive until the present day. The nature of grand unified theories also gave these monopoles the power to catalyze proton decay. A grand unified magnetic monopole would cause the baryon violating decay of protons and neutrons with which it came into contact\cite{Rubakov}. The IMB searches for monopole catalysis of proton decay\cite{SE1983} yielded flux limits of $1.0 \times 10^{-15}/(cm^{2} sr sec)$ to $2.7 \times 10^{-15}/(cm^{2} sr sec)$ at the 90\% confidence level. The Super-Kamiokande search for magnetic monopoles\cite{SKMon} used a different method. They looked for evidence for monopole catalyzed proton decay in the sun, which would produce pions which would ultimately decay to neutrinos with energies in the range of 19 to 55 MeV. By searching for a flux of these energetic neutrinos from the sun they set a limit on the monopole flux of $F_{M}(\sigma_{0}/1mb)<6.3 \times 10^{-24} (\beta_{M}/10^{-3}) /(cm^{2} sr sec)$ at the 90\% confidence level. $\sigma_{0}$ is the catalysis cross section at $\beta_{M}= 1$ and $\beta_{M}$ is the monopole speed in units of $c$. MACRO was able to set direct limits on magnetic monopole flux by measuring the ionization and transit time of possible candidates.
No events were found, setting 90\% confidence limits at $< 2 \times 10^{-16}/(cm^{2} sr sec)$ over the range of velocities $ 4\times 10^{-5}<\beta<1$. They also set limits on slow moving GUT monopole catalyzed proton decay of $< 3 \times 10^{-16}/(cm^{2} sr sec)$. \subsection{Muons} Muons present an opportunity to significantly extend the sensitivity of astrophysical neutrino searches. The muon is an unstable elementary particle with $c\tau=659$ meters and a mass of 106 MeV. Virtually all muons observed on earth were created nearby. Muons can be created by the decay of heavier elementary particles, primarily pions, or, much less likely, by neutrino interactions. The muons observed in cosmic rays come primarily from the decay of a charged pion into a muon and a neutrino. The pions are produced via strong interactions of cosmic rays on the atmosphere. Whenever a muon is produced a neutrino is produced, so cosmic rays produce neutrinos in the Earth's atmosphere. These neutrinos can be observed when they interact in the Earth and produce a muon. The first evidence of atmospheric neutrinos was the observation of horizontal and upward going muons in deep mines where all surface muons had been attenuated\cite{atmu}. Many astrophysical neutrino observatories extend the range of their observations to much lower fluxes, at higher energies, by looking at energetic neutrinos emerging from the surrounding rock. Some information is lost. The interaction vertex can not be observed so energy deposited near the interaction point is lost. The range of the muon in the rock is also not known so the target mass can only be estimated with a model of the interaction spectrum. The muon is a charged particle that has no strong interaction and a relatively long lifetime. Time dilation gives high energy muons a much larger lifetime in our reference frame. Because of these two factors muons are the dominant source of cosmic rays found in underground laboratories. Except for radioactive decay of materials in the underground environment they are the dominant source of radiation underground. Several experiments have attempted to extract evidence for an astrophysical source from downward going muons in underground experiments. These muons can be associated with the direction and timing of known high energy astrophysical sources such as AGN. Since the muon is electrically charged, so would be deflected by magnetic fields en route, and is unstable, so would decay en route, such muons could be associated with a source only if they originated from the interaction of a long lived neutral particle coming from that source. IMB\cite{IMBCygX3} searched for a signal from the direction of Cygnus X3, which is a known variable source of periodic, energetic, high energy gamma rays. Searches were made at times of reported activity from both surface and underground detectors. No significant results were observed. Kamiokande\cite{KamCygX3} has reported similar results. Super-Kamiokande\cite{SKCR} has shown evidence for anisotropy in the high energy cosmic ray flux by measuring the flux of downward going muons from a large portion of the sky. \section{Future Opportunities} \subsection{Hyper K} Hyper-Kamiokande consists of a cylindrical tank of 260,000 tons of water, with a height of 60 m and a diameter of 74 m, surrounded by 40,000 phototubes at a depth of 650 mwe. The goals of Hyper-Kamiokande are similar to those for Super-Kamiokande. The additional mass will give it more events for solar, atmospheric and supernova neutrinos.
The additional mass will also increase the event rate for the accelerator neutrino beam giving it access to the neutrino mass hierarchy and neutrino CP violation. \subsection{JUNO} JUNO consists of 20,000 metric tons of liquid scintillator observed by 15,000 phototubes. The experiment plans to use reactor antineutrino proton scattering to resolve the neutrino mass hierarchy question but will also have a strong astrophysics program. They plan to have an energy resolution of better than 3\% and employ a delayed coincidence method to suppress backgrounds. \subsection{DUNE} DUNE plans to study neutrinos interacting on 40,000 metric tons of liquid argon in four 10,000 ton modules. One of the primary goals of the experiment is to make observations of the electron neutrino component (as distinct from the electron antineutrino component) of a supernova neutrino burst\cite{dunesn}. Argon has a relatively large cross section for electron neutrinos. The neutronization process that forces electrons into protons to make a neutron star is a key part of initiating the stellar collapse mechanism and it produces an electron neutrino pulse preceding the much larger pulse of thermal neutrinos powered by the neutron star binding energy. Observation of neutrinos in DUNE should be complementary to other detectors' observations of the antineutrino content via the antineutrino proton scattering mechanism. \section{Conclusions} Neutrino astrophysics is a thriving subject with many established discoveries. Many neutrino mysteries have been solved by massive detectors, which discovered and measured neutrino oscillations and thereby enabled correct interpretations of astrophysical observations. Oscillations among three neutrino flavors complicate neutrino astrophysics since the three flavors produce different signals. One needs to work backward from the observations to understand the source. This worked well for the sun. \section{Acknowledgements} I would like to thank John G.~Learned for suggesting this article and helping me expand it. I thank Floyd W.~Stecker for his organization and helpful suggestions to improve the manuscript. This article would not have been possible if not for the contributions of those physicists and astronomers who have created this field. The citations can not do all of them justice. Finally I owe a debt of gratitude to all of my collaborators over the years who have patiently helped me learn and understand the subject.
\section{Introduction} \medskip Given a group or semi-group $\Gamma$ generated (in the sense of semi-groups) by some subset $S\subset \Gamma$, the {\it Cayley graph} of $\Gamma$ with respect to $S$ is the oriented graph having vertices $\gamma\in \Gamma$ and oriented edges $(\gamma,\gamma s)_{s\in S}$. In this paper, we consider generating sets $S$ which are not necessarily finite thus yielding oriented Cayley graphs which are perhaps not locally finite. The above situation gives rise to a homomorphism of semi-groups $$\pi:F_S\longrightarrow \Gamma$$ where $F_S$ denotes the free semi-group on the set $S$ consisting of all finite words $s_1\dots s_l$ with letters in the alphabet $S$. The set $F_S$ is also called the {\it free monoid} on $S$ and is often denoted by $S^*$. We will stick to the notation $F_S$ since our set $S$ will be identified with a field $k$ and the notation $k^*$ would be misleading in this case. This paper deals with the group $\Gamma={\rm SL}_2(k)$ over an arbitrary field $k$ generated by the set of matrices $$S=\{\left(\begin{array}{cc} 0&-1\cr 1&\alpha\end{array}\right)\ \vert\ \alpha\in k\}.$$ We will study the preimage ${\cal A}\subset F_S$ of all finite words with letters in $S$ (whose elements are indexed by the elements of the field $k$) such that $$\pi({\cal A})=\{\left(\begin{array}{cc} a&-b\cr b^{-1}&0\end{array}\right)\ \vert \ a\in k\ ,\ b\in k^*\ \}\ .$$ The set ${\cal A}$ can also be described as follows: Recall that the projective line ${\bf P}^1(k)$ consists of all $1-$dimensional subspaces in $k^2$. We denote by $L(x)$ the subspace (line) spanned by $\left(\begin{array}{c} 1\cr x\end{array}\right)$ and by $L(\infty)$ the subspace spanned by $\left(\begin{array}{c} 0\cr 1\end{array}\right)$ (hence $L(x)$ denotes the unique line in $k^2$ which has slope $x$ and runs through the origin). The group ${\rm SL}_2(k)$ acts on ${\bf P}^1(k)$ (this action goes in fact down to the projective group ${\rm PSL}_2(k)$ which is the quotient of ${\rm SL}_2(k)$ by $\pm \left(\begin{array}{cc}1&0\cr 0&1\end{array}\right)$). The subset $\cal A$ considered above can now be defined as the set of all words $w\in F_S$ such that $\pi(w)L(\infty)=L(0)$. The aim of this paper is the description of some properties of the set ${\cal A}\subset F_k$ and of its complement ${\cal C}=F_S\setminus {\cal A}\subset F_S$. All results will be stated in the next section. Proofs will be given in section 3. \section{Definitions and main results} Consider a field $k$ and the set $$S=\{\left(\begin{array}{cc} 0&-1\cr 1&\alpha\end{array}\right)\ \vert\ \alpha\in k\}\subset {\rm SL}_2(k)\quad .$$ {\bf Lemma 2.1.} {\sl The set $S$ generates ${\rm SL}_2(k)$ as a semigroup.} This lemma shows that every element of ${\rm SL}_2(k)$ can be written in at least one way as a finite word with letters in $S$. Since the elements of $S$ are obviously indexed by $k$ we will only write $\alpha$ instead of $\left(\begin{array}{cc} 0&-1\cr 1&\alpha\end{array}\right)$. We identify hence the free monoid $F_S$ on $S$ with the free monoid $F_k$ on $k$ consisting of the set of all finite words with letters in the field $k$. We recall that we are interested in the subset ${\cal A}\subset F_k$ defined by $${\cal A}=\{w=\alpha_1\dots\alpha_l\in F_k\ \vert \ \pi(w) L(\infty)=L(0)\}$$ (where $L(\infty)=k\left(\begin{array}{c}0\cr 1\end{array}\right)$ and $L(0)=k\left(\begin{array}{c}1\cr 0\end{array}\right)$) and in its complement ${\cal C}=F_k\setminus {\cal A}$. 
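As a first sanity check on these definitions (a small worked example, not taken from the original text), consider words of length one. For a single letter $\alpha\in k$ one has $$\pi(\alpha)\left(\begin{array}{c}0\cr 1\end{array}\right)= \left(\begin{array}{cc} 0&-1\cr 1&\alpha\end{array}\right) \left(\begin{array}{c}0\cr 1\end{array}\right)= \left(\begin{array}{c}-1\cr \alpha\end{array}\right)$$ which spans $L(0)$ exactly when $\alpha=0$. Hence the only word of length one in ${\cal A}$ is $0$, and every letter $\alpha\not=0$ belongs to ${\cal C}$ (this is the trivial observation used again in Remark 2.5 below).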
We denote by $F_k^l$ the subset of all words of length exactly $l$ in $F_k$. We set ${\cal A}^l={\cal A}\cap F_k^l$ and ${\cal C}^l={\cal C}\cap F_k^l$. {\bf Theorem 2.2.} {\sl (i) $w\in {\cal A}^l$ if and only if $\alpha w\in {\cal C}^{l+1}$ and $w\alpha\in {\cal C}^{l+1}$ for every $\alpha\in k$. \smallskip \par \ \ (ii) For any $w\in {\cal C}^l$ there exist unique values $\alpha,\ \beta\in k$ such that $\alpha w,\ w\beta\in {\cal A}^{l+1}$. \smallskip \par \ \ (iii) if $\alpha_1\dots \alpha_l\in{\cal A}^l$ then $\alpha_2\alpha_3\dots \alpha_l$ and $\alpha_1\alpha_2\dots \alpha_{l-1}\in {\cal C}^{l-1}$. \smallskip \par \ \ (iv) $\alpha_1\alpha_2\dots\alpha_{l-1}\alpha_l\in {\cal A}^l$ if and only if $\alpha_l\alpha_{l-1}\dots\alpha_2\alpha_1\in {\cal A}^l$. \smallskip \par \ \ (v) $\alpha_1\dots \alpha_l\in {\cal A}^l$ if and only if $(-\alpha_1)\dots(-\alpha_l)\in {\cal A}^l$.} {\bf Corollary 2.3.} {\sl For $k$ the finite field on $q=p^d$ elements (with $p$ the finite prime characteristic of $k$) and for $l=0,1,2,\dots$ we have $$\sharp({\cal A}^l)=\frac{q^l-(-1)^l}{q+1}\ ,\quad \sharp({\cal C}^l)= \frac{q^{l+1}+(-1)^l}{q+1}\quad .$$ } \par Consider the equivalence relation $\sim$ on $F_k$ with classes $\cal A$ and $\cal C$. Denote by $\epsilon$ the empty word (of length $0$) in $F_k$. Extend the applications $x\longmapsto x+1,\ x\longmapsto x-1$ of the field $k$ to applications of the set $k\cup \{\epsilon\}$ into itself by setting $(\epsilon\pm 1)=\epsilon$. {\bf Proposition 2.4.} {\sl (i) One has $$\begin{array}{l} x00y\sim xy\ ,\cr x\alpha 1\beta y\sim x(\alpha-1)(\beta-1)y\ ,\cr x\alpha (-1)\beta y\sim x(\alpha+1)(\beta+1)y\end{array}$$ where $x,y\in F_k,\ \alpha,\beta\in k\cup \{\epsilon\}$ with $\alpha=\epsilon\Longrightarrow x=\epsilon$ and $\beta=\epsilon\Longrightarrow y=\epsilon$ (i.e. $\alpha$ is the last letter of word $x\alpha$ if $x\alpha$ is non-empty and $\beta$ is the first letter of the word $\beta y$ if $\beta y$ is non-empty). \ \ (ii) One has $$\begin{array}{ll} \alpha\beta x\sim (\beta-\alpha^{-1})x\quad &{\rm if}\ 0\not=\alpha\in k,\ \beta\in k,\ x\in F_k,\cr 0\beta x\sim x\quad& {\rm if}\ \beta\in k,\ x\in F_k\ .\end{array}$$ } {\bf Remarks 2.5.} (i) If $k$ is the field on $2$ or $3$ elements, then assertion (i) of previous proposition characterizes the sets $\cal A$ and $\cal C$ completely: it yields substitutions which replace every word except $0\in {\cal A}$ and $\epsilon\in{\cal C}$ by an equivalent word which is strictly shorter. \ \ (ii) Over any field $k$, assertion (ii) above and the trivial observation $\alpha\sim\epsilon\Longleftrightarrow \alpha\not=0$ for $\alpha\in k$ determine the sets ${\cal A}$ and $\cal C$. Set $${\cal P}^l=\{\alpha_1\dots \alpha_l\in {\cal A}^l\ \vert\ \alpha_1\alpha_2\dots \alpha_h\in {\cal C}^h\quad {\rm for } \ h=1,\dots, l-1\}$$ and ${\cal P}=\cup {\cal P}^l$. \medskip \par {\bf Theorem 2.6.} {\sl (i) (\lq\lq Unique factorization in ${\cal A}$'') We have $w\in {\cal A}$ if and only if $w$ can be written as $$w=p_1\delta_1p_2\delta_2\dots p_n\delta_n p_{n+1}$$ for some $n\geq 0$ with $p_1,\dots p_{n+1}\in {\cal P}$ and $\delta_1,\dots ,\delta_n\in k$. Moreover, such a factorization of $w\in {\cal A}$ is unique. \smallskip \par \ \ (ii) We have for $l\geq 1$ $$\sharp({\cal P}^l)=(q-1)^{l-1}$$ if $k$ is the finite field on $q=p^d$ elements. 
} {\bf Corollary 2.7.} {\sl One has for any natural integer $l$ the identity $$(x+1)\sum_{k=0}^{[l/2]} {l-k\choose k} x^k(x-1)^{l-2k}=x^{l+1}+(-1)^l$$ which is equivalent to the identities $$\sum_{s=0}^k{l-s\choose s}{l-2s\choose k-s}(-1)^s=1$$ for $k=0,1,\dots,l$.} {\bf Remark 2.8.} Theorem 2.6 shows that the vector space (over an arbitrary field) with basis the set $$\{\epsilon\}\cup\{w\alpha\ \vert\ w\in {\cal A},\ \alpha\in k\}$$ can be turned into a graded algebra $\bf A$ (the product is given by extending linearly the concatenation of words in $F_k$ and the grading is induced by the length of words in $F_k$). It has in fact a very simple structure: the algebra $\bf A$ is a free non-commutative algebra (on $q(q-1)^{l-2}$ generators of degree $l=2,3,4,\dots$ if $k$ is the finite field on $q=p^d$ elements). Given two words $w,w'\in F_k^l$ of the form $$w=\alpha_0\alpha_1\dots\alpha_{l-1},\ w'=\alpha_1\dots \alpha_{l-1}\alpha_l$$ we call $w'$ an {\it immediate successor} of $w$ and $w$ an {\it immediate predecessor} of $w'$. {\bf Theorem 2.9.} {\sl Each element $w\in {\cal A}^l$ has a unique immediate successor and a unique immediate predecessor in ${\cal A}^l$.} Given an element $w_0 \in {\cal A}^l$, the previous theorem yields a sequence $$w_0,\ w_1,\ w_2,\ w_3,\dots \in {\cal A}^l$$ with $w_{i+1}$ an immediate successor of $w_i$. Otherwise stated: For each $w\in {\cal A}^l$ there exists an infinite word $$\tilde W=\dots \alpha_{-1}\alpha_0\alpha_1\alpha_2\alpha_3\dots$$ such that $\alpha_1\alpha_2\dots \alpha_l=w$ and all factors $w_iw_{i+1}\dots w_{i+l-1}$ of length $l$ (subwords formed by $l$ consecutive letters) of $\tilde W$ are elements of ${\cal A}^l$. Until the end of this section we assume that $k$ is the finite field with $q=p^d$ elements. In this case ${\cal A}^l$ is finite. Given $w\in {\cal A}^l$ there exists hence a smallest integer $r$ such that the infinite word $\tilde W$ associated to $w$ is $r-$periodic. {\bf Theorem 2.10.} {\sl Let $\tilde W=\dots \alpha_{r-1}\alpha_0\alpha_1\dots \alpha_{r-1}\alpha_0\alpha_1\dots$ be an infinite $r-$periodic word with letters in a finite field $k$. Then there exists a smallest integer $t\leq q^2-1$ (in fact, $t$ is either $q$ or a divisor of $(q^2-1)$) such that all factors of length $tr-1$ in $\tilde W$ belong to $\cal A$.} {\bf Remark 2.11.} It follows (cf. assertion (i) in Lemma 3.1 of section 3) that all factors of length $ltr-1$ ($l\geq 1$) of $\tilde W$ belong also to $\cal A$. One can moreover show that if $m$ is an integer with the property that all factors of length $m$ in $\tilde W$ belong to ${\cal A}^m$, then $m=ltr-1$ for a suitable integer $l\geq 1$ (here $r$ denotes the minimal period length of the infinite periodic word $\tilde W$). {\bf Definition 2.12.} Given a finite set $E$ having $N\geq 2$ elements, a {\it mock parity check set} (MPCS for short) of length $d$ is a subset ${\cal M}\subset E^d$ (words of length $d$ with letters in $E$) such that \par (i) each element $w\in {\cal M}$ has a unique immediate successor and a unique immediate predecessor in ${\cal M}$. \par (ii) ${\cal M}$ consists of exactly $N^{d-1}$ elements. Denote by ${\rm Perm}_E$ the group of permutations of the finite set $E$ and let $\varphi:E^{d-2} \longrightarrow {\rm Perm}_E$ be an application which associates to each element $z\in E^{d-2}$ a permutation $\varphi_z:E\longrightarrow E$. 
{\bf Proposition 2.13.} {\sl The set $${\cal M}=\{\alpha_1\alpha_2\dots\alpha_{d-1}\alpha_d\in E^d\ \vert\ \varphi_{\alpha_2\alpha_3\dots\alpha_{d-1}}(\alpha_1)=\alpha_d\}$$ is a MPCS and every MPCS is of this form.} {\bf Remarks 2.14.} (i) This proposition shows that the set of all MPCS can be endowed with a group structure (the set of functions on $E^{N^{d-2}}$ with values in ${\rm Perm}_E$ has an obvious group structure given by $(\varphi\psi)_z=\varphi_z\circ \psi_z$). \smallskip \par \ \ (ii) A MPCS ${\cal M}\subset E^d$ yields a permutation of its elements: send each $w\in{\cal M}$ to its (unique) successor in ${\cal M}$. Call a MPCS a (generalized) {\it de Brujin sequence} if the associated permutation consists of a unique cycle. One can show that (generalized) de Brujin sequences exist for all integers $N\geq 2$ and $d\geq 1$. {\bf Theorem 2.15.} {\sl Given a finite field $k$, the set $${\cal M}^l={\cal A}^l\cup \{\alpha_1\dots\alpha_l\in {\cal C}^l\ \vert\ \alpha_1\dots \alpha_{l-1}\in {\cal A}^{l-1}\quad {\rm and} \quad \alpha_2\dots\alpha_l\in {\cal A}^{l-1}\}$$ is a MPCS of $k^l$.} \section{Proofs} \par {\bf Proof of Lemma 2.1.} Take $\left(\begin{array}{cc} a&b\cr c&d\end{array}\right) \in {\rm SL}_2(k)$. If $a\not=0$ we have necessarily $d=\frac{1+bc}{a}$ and the computation $$\left(\begin{array}{cc}0&-1\cr 1&\frac{-c-1}{a} \end{array}\right) \left(\begin{array}{cc}0&-1\cr 1&-a \end{array}\right) \left(\begin{array}{cc}0&-1\cr 1&\frac{b-1}{a} \end{array}\right) =\left(\begin{array}{cc} a&b\cr c& \frac{1+bc}{a} \end{array}\right)$$ yields the result. The case $a=0$ is reduced to precedent case by multiplying first with $\left(\begin{array}{cc}0&-1\cr 1&0 \end{array}\right)$ and by remarking that this matrix has order $4$ (or $2$ if the ground field $k$ is of characteristic $2$). \hfill QED {\bf Proof of Theorem 2.2.} One has $$\left(\begin{array}{cc} a&b\cr c&d\end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&x\end{array}\right)= \left(\begin{array}{cc} b&-a+bx\cr d&-c+dx\end{array}\right)$$ which shows that $w\alpha\not\in {\cal A}$ if $w\in \cal A$ (since then $\pi(w)=\left(\begin{array}{cc}a&-b\cr b^{-1}&0\end{array}\right)$). On the other hand, if $w\in {\cal C}$ then $\pi(w)= \left(\begin{array}{cc} a&b\cr c&d\end{array}\right)$ with $d\not= 0$ and the above computation implies the existence of a unique $\beta$ such that $w\beta\in {\cal A}$. This proves half of (i) and (ii). The proof of the remaining half is similar (it is also implied by assertion (iv)). \par In order to prove (iii) one considers $$\begin{array}{c} \displaystyle \pi(\alpha_2\dots \alpha_l)= \pi(\alpha_1)^{-1}\pi(\alpha_1\dots\alpha_l)= \left(\begin{array}{cc} \alpha_1&1\cr -1&0\end{array}\right) \left(\begin{array}{cc} a&-b\cr b^{-1}&0\end{array}\right)\cr \displaystyle =\left(\begin{array}{cc} \alpha_1 a+b^{-1}&-\alpha_1 b\cr -a&b \end{array}\right)\end{array}$$ which shows that $\alpha_2\dots\alpha_l\in {\cal C}^{l-1}$. A similar computation yields $\alpha_1\dots \alpha_{l-1}\in {\cal C}^{l-1}$. 
\par Since $$\left(\begin{array}{cc} 0&1\cr 1&0\end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&\alpha\end{array}\right) \left(\begin{array}{cc} 0&1\cr 1&0\end{array}\right) =\left(\begin{array}{cc} \alpha&1\cr -1&0\end{array}\right)$$ we get by conjugating $\pi(\alpha_1\dots\alpha_l)= \left(\begin{array}{cc} a&b\cr c&d\end{array}\right)$ with $\sigma= \left(\begin{array}{cc} 0&1\cr 1&0\end{array}\right)$ $$\sigma \pi(\alpha_1\dots\alpha_l)\sigma= \left(\begin{array}{cc} \alpha_1&1\cr -1&0\end{array}\right)\cdots \left(\begin{array}{cc} \alpha_l&1\cr -1&0\end{array}\right) =\left(\begin{array}{cc} d&c\cr b&a\end{array}\right)$$ If $\pi(\alpha_1\dots\alpha_l)= \left(\begin{array}{cc} a&-b\cr b^{-1}&0\end{array}\right)$ we get by taking the inverse of $\sigma \pi(\alpha_1\dots\alpha_l)\sigma$ $$\pi(\alpha_l\dots\alpha_1)= \left(\begin{array}{cc} 0&-1\cr 1&\alpha_l\end{array}\right)\cdots \left(\begin{array}{cc} 0&-1\cr 1&\alpha_1\end{array}\right) =\left(\begin{array}{cc} 0&b^{-1}\cr -b&a\end{array}\right)^{-1} =\left(\begin{array}{cc} a&-b^{-1}\cr b&0\end{array}\right)$$ which shows that $\alpha_l\dots\alpha_1\in {\cal A}$ and proves (iv). \par Transposing $\pi(\alpha_1\dots\alpha_l)$ and multiplying by $(-1)^l$ shows that $(-\alpha_l)\dots(-\alpha_1)\in{\cal A}$. Assertion (iv) implies now (v). \medskip \par {\bf Remark 3.1.} The properties of the action of ${\rm SL}_2(k)$ (or ${\rm PSL}_2(k)$) on the projective line ${\bf P}^1(k)$ can be used to get a more conceptual proof of most assertions in Theorem 2.2. \medskip \par {\bf Proof of Corollary 2.3.} Assertion (ii) of Theorem 2.2 shows that $\sharp({\cal A}^{l+1})\geq \sharp ({\cal C}^l)$ and assertion (iii) implies $\sharp({\cal A}^{l+1})\leq \sharp ({\cal C}^l)$ hence establishing $\sharp({\cal A}^{l+1})= \sharp ({\cal C}^l)$. Induction on $l$ (using the obvious identity $\sharp({\cal A}^l)+\sharp ({\cal C}^l)=q^l$) yields now the result. \medskip \par {\bf Proof of Proposition 2.4.} The first line of assertion (i) follows from the identity $$\left(\begin{array}{cc} 0&-1\cr 1&0 \end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&0 \end{array}\right)= \left(\begin{array}{cc} -1&0\cr 0&-1\end{array}\right)\ .$$ If $\alpha$ and $\beta$ are both non-empty, the last two lines of the proposition follow from the identities $$\left(\begin{array}{cc} 0&-1\cr 1&\alpha \end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&1\end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&\beta \end{array}\right)$$ $$=\left(\begin{array}{cc} -1&1-\beta\cr \alpha-1&\alpha\beta-\alpha-\beta \end{array}\right)= \left(\begin{array}{cc} 0&-1\cr 1&\alpha-1 \end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&\beta-1 \end{array}\right)$$ and $$\left(\begin{array}{cc} 0&-1\cr 1&\alpha \end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&-1\end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&\beta \end{array}\right)= \left(\begin{array}{cc} 1&1+\beta\cr -1-\alpha&-\alpha-\beta-\alpha\beta \end{array}\right) $$ $$=-\left(\begin{array}{cc} 0&-1\cr 1&\alpha+1 \end{array}\right) \left(\begin{array}{cc} 0&-1\cr 1&\beta+1 \end{array}\right)$$ (in fact, the last line of Proposition 2.4 is easily deduced from the second one by using assertion (v) of Theorem 2.2). We leave the remaining cases of assertion (i) (with $\epsilon\in\{\alpha,\beta\}$) to the reader (they follow also easily from Theorem 2.6). 
Assertion (ii) follows from the computations $$\left(\begin{array}{cc}0&-1\cr 1&\alpha\end{array}\right) \left(\begin{array}{cc}0&-1\cr 1&\beta\end{array}\right) \left(\begin{array}{cc}\alpha\beta a+\beta c-a&\alpha\beta b+\beta d-b\cr -\alpha a-c&-\alpha b-d\end{array}\right)= \left(\begin{array}{cc}a&b\cr c&d\end{array}\right)$$ and $$\left(\begin{array}{cc}0&-1\cr 1&\gamma\end{array}\right) \left(\begin{array}{cc}\alpha\beta a+\beta c-a&\alpha\beta b+\beta d-b\cr -\alpha a-c&-\alpha b-d\end{array}\right)$$ $$= \left(\begin{array}{cc} \alpha a+c&\alpha b+d\cr \alpha\beta a+\beta c-a-\alpha\gamma a-\gamma c& \alpha\beta b+\beta d-b-\alpha\gamma b-\gamma d\end{array}\right)\ .$$ \medskip \par {\bf Lemma 3.2.} (i) If $w,w'\in {\cal A}$ then $ww'\in {\cal C}$ and $w\alpha w'\in {\cal A}$ for any $\alpha\in k$. \smallskip \par \ \ (ii) If exactly one of $w,w'$ is an element of ${\cal A}$ then $w\alpha w'\in {\cal C}$ for any $\alpha\in k$. \medskip \par {\bf Proof of Lemma 3.2.} The computation $$\left(\begin{array}{cc}a&-b\cr b^{-1}&0\end{array}\right) \left(\begin{array}{cc}0&-1\cr 1&\alpha \end{array}\right) \left(\begin{array}{cc}a'&-b'\cr b'^{-1}&0\end{array}\right) =\left(\begin{array}{cc}-ba'-ab'^{-1}-\alpha bb'^{-1}&bb'\cr -(bb')^{-1}&0 \end{array}\right)$$ shows (i). \par Let us now suppose that $w\in {\cal A},\ w'\in {\cal C}$. This implies $\pi(w\alpha)=\left(\begin{array}{cc}a&b\cr 0&a^{-1} \end{array}\right)$ and $\pi(w')=\left(\begin{array}{cc}a'&b'\cr c'&d' \end{array}\right)$ with $d'\not=0$. We get hence $$\pi(w\alpha w')=\left(\begin{array}{cc}aa'+bc'&ab'+bd'\cr a^{-1}c'&a^{-1}d' \end{array}\right)$$ which shows $w\alpha w'\in {\cal C}$. The case $w\in {\cal C},\ w'\in {\cal A}$ follows using assertion (iv) of Theorem 2.2. \medskip \par {\bf Remark 3.3.} One can also use the more conceptual computation $$\pi(w\alpha w')L(\infty)=\pi(w)\pi(\alpha)\pi(w')L(\infty)= \pi(w)\pi(\alpha)L(0)=\pi(w)L(\infty)=L(0)$$ (for $w,w'\in{\cal A}$) in order to prove assertion (i) of Lemma 3.2. \par Similarly, the case $w\in {\cal C},\ w'\in {\cal A}$ is dealt by $$\pi(w\alpha w')L(\infty)=\pi(w)\pi(\alpha)\pi(w')L(\infty)= \pi(w)\pi(\alpha)L(0)=\pi(w)L(\infty)\not=L(0)$$ and assertion (iv) of Theorem 2.2 completes the proof of assertion (ii) in Lemma 3.2. \medskip \par {\bf Proof of Theorem 2.6.} Assertion (i) follows easily from the previous lemma and the definition of ${\cal P}$. \par Assertion (ii) follows from assertion (ii) of Theorem 2.2. \medskip \par {\bf Proof of Corollary 2.7.} Let $k$ be the finite field with $q=p^d$ elements. An exercice using Theorem 2.6 shows that we have $$\sharp({\cal A}^{l+1})=\sum_{k=0}^{[l/2]} {l-k\choose k} q^k(q-1)^{l-2k} \ .$$ Corollary 2.3 establishes then the result if $x$ is a power of a prime number. The proof follows now from the fact that both sides are polynomials in $x$. \medskip \par {\bf Proof of Theorem 2.9.} Follows from assertions (iii) and (ii) in Theorem 2.2. \medskip \par {\bf Proof of Theorem 2.10.} The elements $$\pi(\alpha_0\alpha_1\dots \alpha_{q-1}),\pi(\alpha_1\dots\alpha_{q-1}\alpha_0),\dots ,\pi(\alpha_{q-1}\alpha_0\dots\alpha_{q-2})\in{\rm SL}_2(k)$$ are all conjugate and have hence a common order $t$ which obviously works. \medskip \par The easy proof of Proposition 2.13 is left to the reader. \medskip \par {\bf Proof of Theorem 2.15.} This result follows readily from Theorem 2.9, assertion (i) of Theorem 2.2 and Corollary 2.3. \bigskip \par I thank J.P. Allouche, P. de la Harpe and J. Helmstetter for useful comments. 
\par I thank also an anonymous referee for the remark that the paper deals in fact with the projective group ${\rm PSL}_2(k)$ and for suggesting Remarks 3.1 and 3.3. \bigskip Roland Bacher, INSTITUT FOURIER, Laboratoire de Math\'ematiques, UMR 5582 (UJF-CNRS), BP 74, 38402 St MARTIN D'H\`ERES Cedex (France), e-mail: [email protected] \end{document}
\section{Introduction} When Onsager solved the square-lattice Ising model in \cite{44Onsager}, the basic structure underlying his derivation was an algebraic structure which is now called the Onsager algebra. He showed that the representation of the original Hamiltonian can be reduced to direct products of two-dimensional representations. This decomposition intrinsically suggests the structure of the free fermion system, although he did not use the word 'fermion' at all in his paper. Later, Kaufman \cite{49Kaufman} rederived the partition function of the model with the use of generators of the Clifford algebra, and Schultz, Mattis and Lieb \cite{64SchultzMattisLieb} subsequently rederived the free energy through a direct transformation to the free fermion system. The transformations in both \cite{49Kaufman} and \cite{64SchultzMattisLieb} were the Jordan-Wigner transformation. In 1982, Dolan and Grady \cite{82DolanGrady} constructed an infinite number of conserved charges for a self-dual Hamiltonian that satisfies a condition which is now called the Dolan-Grady condition. Later, von Gehlen and Rittenberg \cite{85GehlenRittenberg} introduced an $n$-state chiral Potts model with specific coupling constants. They showed that this model is integrable in the sense that it satisfies the Dolan-Grady condition, and hence there exist an infinite number of conserved charges, and they also showed numerically that this model exhibits an Ising-like spectrum. This model was also investigated in \cite{89AlbertiniMcCoyPerkTang} and called ``superintegrable'', because it obeys the Onsager algebra, in addition to showing the structure of commuting transfer matrices. It was pointed out in \cite{89Perk} that specific operators appearing in \cite{82DolanGrady} satisfy the defining relations of the Onsager algebra. Davies showed \cite{91Davies} that a pair of operators recursively generate the Onsager algebra provided that they satisfy two symmetric Dolan-Grady relations; self-duality of the Hamiltonian is not needed in this argument. Let us summarize subsequent progress. The irreducible finite-dimensional representations of the Onsager algebra were obtained, and the general form of the eigenvalues of the associated Hamiltonians $A_{0}+kA_{1}$, where $k$ is the coupling constant, was determined in \cite{90Davies}, and subsequently in \cite{91Roan}. The Lie algebraic structure of the Onsager algebra was investigated in \cite{00DateRoan2}. The Onsager algebra is a subalgebra of $\widehat{sl_{2}}$ \cite{91Davies}\cite{90Davies}\cite{20Stokman}, and is related to a class of Yang-Baxter algebras \cite{18BaseilhacBelliardCrampe}. Integrable lattice models based on the Onsager algebra were derived, and an extension of the Onsager algebra was also considered, in \cite{90AhnShigemoto}. Higher rank generalizations of the Dolan-Grady relations and Onsager algebras have also been investigated, starting from \cite{95UglovIvanov}\cite{04DateUsami} and more recently in \cite{20Stokman}\cite{18BaseilhacBelliardCrampe}. The completely inhomogeneous transverse Ising chain was considered in \cite{95UglovIvanov}. A contracted case of the Dolan-Grady condition and related spin models were considered in \cite{02KlishevichPlyushchay}\cite{03KlishevichPlyushchay}. A q-deformed analogue of Onsager's symmetry was introduced in \cite{05BaseilhacKoizumi}. 
It was shown in \cite{05BaseilhacKoizumi2} that the homogeneous XXZ open spin chain with integrable boundary conditions can be built from the generators of the $q$-Onsager algebra, and eigenstates were investigated in the thermodynamic limit in \cite{06Baseilhac}, its transfer matrix was diagonalized in \cite{07BaseilhacKoizumi}, for a review see also \cite{07Baseilhac}. The Onsager symmetry appears in a kind of $n$-state clock chains \cite{19VernierOBrienFendley} whose $Z_n$ symmetry is enhanced to U(1). It was derived that the Hamiltonian in \cite{19VernierOBrienFendley} with additional terms exhibits \cite{20ShibataYoshiokaKatsura} the scar states. Motivated by the results in \cite{19VernierOBrienFendley}, the spin-1/2 XXZ chain at root of unity was investigated \cite{21MiaoLamersPasquier}, and the existence of the Onsager symmetry at root of unity was conjectured \cite{21Miao}. Dynamics of models where the Hamiltonian is an element of the Onsager algebra were also investigated in \cite{21Lychkovskiy}. \\ There exists another progress on solvable models. Recently, an algebraic generalization of the Jordan-Wigner transformation was introduced \cite{16Minami}. This formula can be summarized as follows: Consider a series of operators $\{\eta_j\}$ $(j=1, 2, \ldots, M)$ that satisfy the following commutation relations \begin{eqnarray} \eta_{j}\eta_{j+1}=-\eta_{j+1}\eta_{j}, \hspace{0.6cm} \eta_{j}\eta_{k}=\eta_{k}\eta_{j} \hspace{0.3cm} (|j-k|>1), \hspace{0.6cm} \eta_{j}^{2}=1. \label{cond2} \end{eqnarray} Then, the Hamiltonian $\displaystyle -\beta{\cal H}=\sum_{j=1}^{M}K_j \eta_j$ is mapped to the free fermion system by the following transformation: \begin{eqnarray} \varphi_j = \frac{1}{\sqrt{2}} e^{i\frac{\pi}{2}(j-1)} \eta_0 \eta_1 \eta_2 \cdots \eta_j \hspace{0.8cm} (0\leq j\leq M-1), \label{transmain} \end{eqnarray} where $\eta_0$ is an initial operator satisfying $\eta_0^2=-1$, $\eta_0\eta_1=-\eta_1\eta_0$, and $\eta_0\eta_k=\eta_k\eta_0$ $(2\leq k\leq M)$. Then we obtain $(-2i)\varphi_{j}\varphi_{j+1}=\eta_{j+1}$, and $\displaystyle \{\varphi_j, \varphi_k\}=\varphi_j\varphi_k+\varphi_k\varphi_j=\delta_{jk}$ for all $j$, $k$. Hence, the Hamiltonian is written as a sum of two-body products of fermion operators $\varphi_{j}$. The transformation (\ref{transmain}) is generated from $\{\eta_j\}$, and only the commutation relation (\ref{cond2}) is needed to obtain the free energy. When we consider the transverse Ising chain, i.e. $\eta_{2j-1}=\sigma_{j}^z$ and $\eta_{2j}=\sigma_{j}^x\sigma_{j+1}^x$, (\ref{transmain}) reduces to the original Jordan-Wigner transformation. In other cases, we obtain other transformations that diagonalize the Hamiltonian. This fermionization formula was applied to one-dimensional quantum spin chains \cite{17Minami}\cite{20Yanagihara}, and the honeycomb-lattice Kitaev model with the Wen-Toric code interactions \cite{19Minami}. The key idea of this transformation was developed into a graph-theoretic treatment \cite{20Ogura}, in which the transformations of operators are expressed as modifications of graphs, and the kernel of its adjacency matrices corresponds to conserved quantities of the system. The condition (\ref{cond2}) was independently considered to introduce models that can be mapped to the free fermion system, and investigated in terms of the graph theory \cite{20ChapmanFlammia} \cite{20ElmanChapmanFlammia}. 
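As a concrete illustration of the formula (\ref{transmain}), the relations (\ref{cond2}) and the resulting fermion algebra can be checked numerically for the transverse Ising realization $\eta_{2j-1}=\sigma_{j}^z$, $\eta_{2j}=\sigma_{j}^x\sigma_{j+1}^x$ on a small open chain. The following NumPy sketch is an illustration only: the choice $\eta_{0}=i\sigma_{1}^x$ is one admissible initial operator satisfying $\eta_{0}^{2}=-1$, $\eta_{0}\eta_{1}=-\eta_{1}\eta_{0}$, $\eta_{0}\eta_{k}=\eta_{k}\eta_{0}$ $(k\geq 2)$, and the variable names are illustrative.
\begin{verbatim}
# Numerical check of (cond2) and of the fermion algebra generated by (transmain)
# for the transverse Ising realization on L spins (open chain),
# using eta_0 = i*sigma^x_1 as one admissible initial operator.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    """Embed a single-site operator at site j (1-based) of an L-site chain."""
    mats = [I2] * L
    mats[j - 1] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L = 3
M = 2 * L - 1                       # eta_1, ..., eta_M
eta = {0: 1j * site_op(sx, 1, L)}   # eta_0^2 = -1, anticommutes with eta_1 only
for j in range(1, L + 1):
    eta[2 * j - 1] = site_op(sz, j, L)
    if j < L:
        eta[2 * j] = site_op(sx, j, L) @ site_op(sx, j + 1, L)

dim = 2 ** L
for j in range(1, M + 1):           # the commutation relations (cond2)
    assert np.allclose(eta[j] @ eta[j], np.eye(dim))
    for k in range(j + 1, M + 1):
        sign = -1 if k == j + 1 else 1
        assert np.allclose(eta[j] @ eta[k], sign * eta[k] @ eta[j])

phi = []                            # phi_j from the transformation (transmain)
for j in range(M):
    prod = eta[0]
    for m in range(1, j + 1):
        prod = prod @ eta[m]
    phi.append(np.exp(1j * np.pi * (j - 1) / 2) / np.sqrt(2) * prod)

for j in range(M):
    for k in range(M):
        anti = phi[j] @ phi[k] + phi[k] @ phi[j]
        target = np.eye(dim) if j == k else np.zeros((dim, dim))
        assert np.allclose(anti, target)       # {phi_j, phi_k} = delta_{jk}
for j in range(M - 1):
    assert np.allclose((-2 * 1j) * phi[j] @ phi[j + 1], eta[j + 1])
print("all checks passed")
\end{verbatim}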
\\ In this short note, we extend Onsager's result and show that there exist an infinite number of interactions that satisfy the Dolan-Grady condition, and hence an infinite number of realizations of (quotients of) the Onsager algebra. We also consider operators that satisfy the conditions \begin{eqnarray} && \eta_{j}\eta_{j+1}=\omega\eta_{j+1}\eta_{j}, \hspace{0.6cm} \eta_{j}\eta_{k}=\eta_{k}\eta_{j} \hspace{0.3cm} (|j-k|>1), \nonumber \\ && \eta_{j}^{n}=1, \hspace{0.6cm} \omega=e^{i\frac{2\pi}{n}}, \label{condn} \end{eqnarray} and exhibit an infinite number of interactions satisfying the Dolan-Grady condition. Note that the replacement $\omega\mapsto\omega^{-1}$ corresponds to the inversion of the indices $j\mapsto M-j+1$. Throughout this short note, the cyclic boundary condition \begin{eqnarray} \eta_{M+j}=\eta_{j}, \label{cyclic} \end{eqnarray} where $M$ is the number of operators, is assumed. The conditions (\ref{cond2}) or (\ref{condn}), together with the cyclic boundary condition (\ref{cyclic}), yield Theorems 1-3 and Corollary 1. When $n=2$, the condition (\ref{condn}) reduces to (\ref{cond2}). Operators that satisfy (\ref{condn}) were considered in \cite{80FradkinKadanoff}, investigated for $n=3$ in connection with the incommensurate phase in \cite{83HowesKadanoffDebNijs}, and considered for arbitrary integer $n$ in \cite{85GehlenRittenberg}. The Hamiltonian in \cite{85GehlenRittenberg} was later obtained from the transfer matrix of the two-dimensional chiral Potts model \cite{87AuYangMcCoyPerkTangYan} \cite{88BaxterPerkAuYang}. Baxter's clock chain \cite{89Baxter_PhysLettA} \cite{89Baxter_JStatPhys61} can also be written in terms of operators that satisfy (\ref{condn}), and can be rewritten, through the Fradkin-Kadanoff transformation \cite{80FradkinKadanoff}, in terms of parafermions, which are $Z_{n}$ generalizations of Majorana fermions. It is easily shown that the Fradkin-Kadanoff transformation can be obtained from the formula (\ref{transmain}). It is now known that Baxter's clock chain can be regarded as a system of `free' parafermions \cite{12Fendley} \cite{14Fendley}. Generalizations of the relation (\ref{condn}) were investigated in \cite{14Fendley}, and in \cite{20AlcarazPimenta}-\cite{20AlcarazPimenta2}, and the corresponding Hamiltonians were shown to have an Ising-like spectrum and to be integrable. In Theorems 1-3 and Corollary 1, we will show that operators composed of $\eta_{j}$'s satisfy the Dolan-Grady condition. Only the algebraic relations are needed to derive the results, and hence any operators that satisfy (\ref{cond2}) or (\ref{condn}) generate operators that satisfy the Dolan-Grady condition, and hence generate a quotient of the Onsager algebra. In Table 1 and Theorem 4, we show explicit examples of operators that satisfy (\ref{condn}), including the interactions of the transverse Ising chain and of the so-called superintegrable chiral Potts model. Finally, we will comment on the fact that an infinite number of models with inhomogeneous interactions also become integrable. \\ \section{Onsager algebra and Theorems} Let us consider series of operators $\{A_{j}\}$ and $\{G_{j}\}$, where $j\in {\bf Z}$. The Onsager algebra is a Lie algebra defined via the relations \begin{eqnarray} [A_j,A_k]=4G_{j-k}, \hspace{0.4cm} [G_m,A_l]=2A_{l+m}-2A_{l-m}, \hspace{0.4cm} [G_j,G_k]=0. 
\label{Onsagerrel} \end{eqnarray} It is known that a pair of operators, $A_{0}$ and $A_{1}$, recursively generate all the $A_{j}$ and $G_{k}$ in (\ref{Onsagerrel}) provided that they satisfy the relations \begin{eqnarray} [\:A_{0}[\:A_{0}[\:A_{0}, A_{1}\:]\:]\:] &=& C[\:A_{0}, A_{1}\:], \label{DG1} \\ \: [\:A_{1}[\:A_{1}[\:A_{1}, A_{0}\:]\:]\:] &=& C[\:A_{1}, A_{0}\:], \label{DG2} \end{eqnarray} where $C$ is a constant. We call (\ref{DG1}) and (\ref{DG2}) the Dolan-Grady condition. When we consider a Hamiltonian \begin{eqnarray} {\cal H} = A_{0}+kA_{1}, \label{HamA0kA1} \end{eqnarray} where $k$ is a constant, it can be derived that ${\cal H}$ belongs to an infinite family of mutually commuting operators, i.e. ${\cal H}$ is integrable. For irreducible finite dimensional representations of the Onsager algebra, the general form of the eigenvalue was obtained in \cite{90Davies} as \begin{eqnarray*} (\alpha+\beta k)+\sum_{j=1}^{n}4m_{j}\sqrt{1+k^{2}+2k\cos\theta_{j}}, \hspace{0.4cm} m_{j}=0, \pm 1, \pm 2, \ldots, \pm s_{j}, \end{eqnarray*} where $\alpha$ and $\beta$ are constants, and $s_{j}$ are positive integers. Onsager introduced, for the purpose to solve the rectangular Ising model, the transfer matrix which is expressed by the operators \begin{eqnarray} A_{0}=\sum_{j=1}^{N}\sigma_{j}^{x}, \hspace{0.4cm} A_{1}=\sum_{j=1}^{N}\sigma_{j}^{z}\sigma_{j+1}^{z}. \label{A0A1Ising} \end{eqnarray} These $A_{0}$ and $A_{1}$ satisfy the Dolan-Grady condition, and in this case (\ref{HamA0kA1}) is the Hamiltonian of the transverse Ising chain \cite{17Perk}. \\ \begin{theorem} Let us introduce \begin{eqnarray} A_{0} = \sum_{j=1}^{N}\eta_{2j-1}, \hspace{0.6cm} A_{1} = \sum_{j=1}^{N}\eta_{2j}, \end{eqnarray} where $N\geq 2$, and $\eta_{j}$ satisfy (\ref{cond2}) and (\ref{cyclic}). Then $A_{0}$ and $A_{1}$ satisfy the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) with $C=16$. \end{theorem} Direct calculations yield Theorem 1. We can also convince \begin{eqnarray*} && A_{2} = \sum_{j=1}^{N}\eta_{2j}\eta_{2j+1}\eta_{2j+2}, \hspace{1.2cm} A_{3} = \sum_{j=1}^{N}\eta_{2j}\eta_{2j+1}\eta_{2j+2}\eta_{2j+3}\eta_{2j+4}, \\ && A_{-1} = \sum_{j=1}^{N}\eta_{2j-3}\eta_{2j-2}\eta_{2j-1}, \hspace{0.6cm} A_{-2} = \sum_{j=1}^{N}\eta_{2j-5}\eta_{2j-4}\eta_{2j-3}\eta_{2j-2}\eta_{2j-1}, \\ && G_{0} = 0, \hspace{1.2cm} G_{1} = \frac{1}{2}\sum_{j=1}^{N} (\eta_{2j}\eta_{2j+1}-\eta_{2j-1}\eta_{2j}), \\ && G_{2} = \frac{1}{2}\sum_{j=1}^{N} (\eta_{2j}\eta_{2j+1}\eta_{2j+2}\eta_{2j+3}-\eta_{2j-1}\eta_{2j}\eta_{2j+1}\eta_{2j+2}), \\ && G_{3} = \frac{1}{2}\sum_{j=1}^{N} (\eta_{2j}\eta_{2j+1}\eta_{2j+2}\eta_{2j+3}\eta_{2j+4}\eta_{2j+5}-\eta_{2j-1}\eta_{2j}\eta_{2j+1}\eta_{2j+2}\eta_{2j+3}\eta_{2j+4}), \end{eqnarray*} and generally \begin{eqnarray*} && A_{l} = \sum_{j=1}^{N}\eta_{2j}\eta_{2j+1}\cdots\eta_{2j+2l-2}, \hspace{0.9cm} A_{-l} = \sum_{j=1}^{N}\eta_{2j-2l-1}\eta_{2j-2l}\cdots\eta_{2j-1}, \end{eqnarray*} and \begin{eqnarray*} && G_{l} = \frac{1}{2}\sum_{j=1}^{N} (\eta_{2j}\eta_{2j+1}\eta_{2j+2}\cdots\eta_{2j+2l-1}-\eta_{2j-1}\eta_{2j}\eta_{2j+1}\cdots\eta_{2j+2l-2}). \end{eqnarray*} It is straightforward to derive that $A_{l}$ and $G_{l}$ satisfy $A_{l+2N}=A_{l}$ and $G_{l+2N}=G_{l}$. The relation $A_{l+2N}=A_{l}$ is the closure relation considered by Davies in [8]. This means that the algebra generated from $A_{0}$ and $A_{1}$ in Theorem 1 is a quotient of the Onsager algebra, i.e. an Onsager algebra with the restriction $A_{l+2N}=A_{l}$. 
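For example, Theorem 1 can be checked numerically for the transverse Ising realization $\eta_{2j-1}=\sigma_{j}^z$, $\eta_{2j}=\sigma_{j}^x\sigma_{j+1}^x$ with the cyclic boundary condition and $N=3$. The following NumPy sketch is an illustration only (the function and variable names are illustrative); it verifies the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) with $C=16$.
\begin{verbatim}
# Numerical check of [A0,[A0,[A0,A1]]] = 16 [A0,A1] and of the relation with
# A0 and A1 exchanged, for the cyclic transverse Ising realization with N = 3.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, N):
    mats = [I2] * N
    mats[j % N] = op          # j is 0-based; the modulus enforces the cyclic condition
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def comm(a, b):
    return a @ b - b @ a

N = 3
A0 = sum(site_op(sz, j, N) for j in range(N))
A1 = sum(site_op(sx, j, N) @ site_op(sx, j + 1, N) for j in range(N))

lhs0 = comm(A0, comm(A0, comm(A0, A1)))
lhs1 = comm(A1, comm(A1, comm(A1, A0)))
print(np.allclose(lhs0, 16 * comm(A0, A1)),   # Dolan-Grady relation (DG1)
      np.allclose(lhs1, 16 * comm(A1, A0)))   # Dolan-Grady relation (DG2)
\end{verbatim}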
Theorem 1 shows existence of an infinite number of Hamiltonians that are expressed by (\ref{HamA0kA1}) and hence governed by the Onsager algebra, because we know examples of operators that satisfy (\ref{cond2}), such as the interactions of the transverse Ising chain: $\eta^{(1)}_{2j-1}=\sigma_{j}^z$ and $\eta^{(1)}_{2j}=\sigma_{j}^x\sigma_{j+1}^x$, those of the Kitaev chain: $\eta^{(2)}_{2j-1}=\sigma_{2j-1}^x\sigma_{2j}^x$ and $\eta^{(2)}_{2j}=\sigma_{2j}^y\sigma_{2j+1}^y$, those of the cluster model: $\eta^{(3)}_{2j-1}=\sigma_{2j-1}^x\sigma_{2j}^z\sigma_{2j+1}^x$ and $\eta^{(3)}_{2j}=\sigma_{2j}^x1_{2j+1}\sigma_{2j+2}^x$, and other infinite number of interactions listed in Table 1 and Table 2 given in \cite{17Minami}. We will consider Theorem 1 in the cases where the condition (\ref{condn}) is satisfied. Examples of operators that satisfy (\ref{condn}) are shown in Table 1, where operators $\eta_{j}$ form one or several series of operators; operators in each series satisfy (\ref{condn}), and operators from different series commute with each other. The operators $Z$, $X$ and $Y$ in Table 1 are defined, with $\omega^{n}=1$, as \begin{eqnarray} && Z = \left( \begin{array}{ccccc} 1&&&&\\ &\omega&&&\\ &&\omega^{2}&&\\ &&&\ddots&\\ &&&&\omega^{n-1} \end{array} \right), \hspace{0.4cm} X = \left( \begin{array}{ccccc} 0&&&&1\\ 1&0&&&\\ &1&0&&\\ &&&\ddots&\\ &&&1&0 \end{array} \right), \nonumber\\ && Y = \omega^{-\frac{1}{2}(n-1)} \left( \begin{array}{ccccc} 0&\omega^{n-2}&&&\\ &0&\omega^{n-3}&&\\ &&0&\ddots&\\ &&&\ddots&1\\ \omega^{n-1}&&&&0 \end{array} \right), \end{eqnarray} and $Z_{j}$, $X_{j}$ and $Y_{j}$ are defined as \begin{eqnarray} Q_{j} = 1\otimes\cdots\otimes 1\otimes\stackrel{\stackrel{j}{\vee}}{Q}\otimes 1\otimes\cdots\otimes 1, \hspace{0.6cm} Q=Z, X, Y. \end{eqnarray} The operators $Z$, $X$, and $Y$ satisfy $ZX=\omega XZ$, $XY=\omega YX$, and $YZ=\omega ZY$. Then we can derive the following corollary. \begin{corollary} Let us consider \begin{eqnarray} A_{0} = \sum_{j=1}^{N}\eta_{2j-1}^{k}, \hspace{0.6cm} A_{1} = \sum_{j=1}^{N}\eta_{2j}^{l}, \end{eqnarray} where $N\geq 2$, and $n$ is even, and $k=l=n/2$, and $\eta_{j}$ satisfy (\ref{condn}) and (\ref{cyclic}). Then $A_{0}$ and $A_{1}$ satisfy the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) with $C=16$ when $n/2$ is odd, and $[A_{0}, A_{1}]=0$ when $n/2$ is even. \end{corollary} \noindent {\rm Proof:} Let $\zeta_{j}=\eta_{j}^{k}$, then $\zeta_{j}^{2}=1$. We find $\zeta_{j}\zeta_{j+1} =\eta_{j}^{k}\eta_{j+1}^{k} =\omega^{k^{2}}\eta_{j+1}^{k}\eta_{j}^{k} =\omega^{k^{2}}\zeta_{j+1}\zeta_{j}$, where $\omega^{k^{2}}=e^{i\frac{2\pi}{n}k^{2}}=e^{i\pi k}=(-1)^{k}$. If $k$ is even, then $[A_{0}, A_{1}]=0$. If $k$ is odd, the operators $A_{0}$ and $A_{1}$ written in terms of $\zeta_{j}$ satisfy the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) since $\zeta_{j}\zeta_{j+1}=-\zeta_{j+1}\zeta_{j}$ and $\zeta_{j}^{2}=1$ \hbox{\rule[-2pt]{3pt}{6pt}} \\ When $n=2$, then $A_{0}=\sum_{j=1}^{N}\eta_{2j-1}$ and $A_{1}=\sum_{j=1}^{N}\eta_{2j}$, and thus Theorem 1 is obtained from Corollary 1. In Corollary 1, the relation $\eta_{j}\eta_{j+1}=\omega\eta_{j+1}\eta_{j}$ is assumed. When we assume $\eta_{j}\eta_{j+1}=\omega_{j}\eta_{j+1}\eta_{j}$, where $\omega_{j}$ depends on $j$ and equals to $\omega$ or $\omega^{-1}$, we obtain $\zeta_{j}\zeta_{j+1} =\omega^{\pm k^{2}}\zeta_{j+1}\zeta_{j} =-\zeta_{j+1}\zeta_{j}$, and the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) is satisfied again. 
\\ \begin{theorem} Let us consider \begin{eqnarray} A_{0} = \sum_{j=1}^{N}(\eta_{2j-1}^{k}-\eta_{2j-1}^{n-k}), \hspace{0.6cm} A_{1} = \sum_{j=1}^{N}(\eta_{2j}^{k}-\eta_{2j}^{n-k}), \end{eqnarray} where $N\geq 2$, and $k=n/3$ is an integer that satisfy $1\leq k\leq n-1$, and $\eta_{j}$ satisfy (\ref{condn}) and (\ref{cyclic}). Then $A_{0}$ and $A_{1}$ satisfy the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) with $C=-27$ when $k=3m-1, 3m-2$ $\:(m\in{\bf N})$, and $[A_{0}, A_{1}]=0$ when $k=3m$. \end{theorem} It is easy to show \begin{eqnarray} \:[\eta_{2j-1}^{k}, \eta_{2j}^{l}] &=& \eta_{2j-1}^{k}\eta_{2j}^{l}-\eta_{2j}^{l}\eta_{2j-1}^{k} = (1-\omega^{-kl})\eta_{2j-1}^{k}\eta_{2j}^{l}, \nonumber \\ \:[\eta_{2j+1}^{k}, \eta_{2j}^{l}] &=& \eta_{2j+1}^{k}\eta_{2j}^{l}-\eta_{2j}^{l}\eta_{2j+1}^{k} = (1-\omega^{kl})\eta_{2j+1}^{k}\eta_{2j}^{l}. \end{eqnarray} The inner derivatives in terms of $\eta_{2j\mp 1}^{k}$, operated to $\eta_{2j}^{l}$, are therefore equivalent to the multiplications of $(1-\omega^{\mp kl})\eta_{2j\mp 1}^{k}$ from the left. Then we will prove the Theorem. \\ \noindent {\rm Proof:} It is easy to show \begin{eqnarray} [\sum_{i=1}^{N}\eta_{2i-1}^{k}, \eta_{2j}^{k}] &=& [\eta_{2j-1}^{k}, \eta_{2j}^{k}]+[\eta_{2j+1}^{k}, \eta_{2j}^{k}] \nonumber \\ &=& \Big((1-\omega^{-k^{2}})\eta_{2j-1}^{k}+(1-\omega^{k^{2}})\eta_{2j+1}^{k}\Big) \eta_{2j}^{k} \nonumber \\ &=& a_{11j}\eta_{2j}^{k}, \\ \:[\sum_{i=1}^{N}\eta_{2i-1}^{n-k}, \eta_{2j}^{k}] &=& [\eta_{2j-1}^{n-k}, \eta_{2j}^{k}]+[\eta_{2j+1}^{n-k}, \eta_{2j}^{k}] \nonumber \\ &=& \Big((1-\omega^{k^{2}})\eta_{2j-1}^{n-k}+(1-\omega^{-k^{2}})\eta_{2j+1}^{n-k}\Big) \eta_{2j}^{k} \nonumber \\ &=& a_{21j}\eta_{2j}^{k}, \end{eqnarray} where \begin{eqnarray} a_{11j}&=&z\eta_{2j-1}^{k}+{\bar z}\eta_{2j+1}^{k}, \hspace{0.6cm} a_{21j}={\bar z}\eta_{2j-1}^{n-k}+z\eta_{2j+1}^{n-k}, \nonumber \\ z&=&1-\omega^{-k^{2}}, \hspace{2.2cm} {\bar z}=1-\omega^{k^{2}}. \end{eqnarray} Then we obtain \begin{eqnarray} [A_{0}, \eta_{2j}^{k}] &=& \Big((z\eta_{2j-1}^{k}+{\bar z}\eta_{2j+1}^{k}) -({\bar z}\eta_{2j-1}^{n-k}+z\eta_{2j+1}^{n-k})\Big)\eta_{2j}^{k} \label{A1etak} \\ &=& (a_{11j}-a_{21j})\eta_{2j}^{k}, \nonumber \end{eqnarray} Similarly, we obtain \begin{eqnarray} [A_{0}, \eta_{2j}^{n-k}] &=& \Big(({\bar z}\eta_{2j-1}^{k}+z\eta_{2j+1}^{k}) -(z\eta_{2j-1}^{n-k}+{\bar z}\eta_{2j+1}^{n-k})\Big)\eta_{2j}^{n-k} \label{A1etank} \\ &=& (a_{12j}-a_{22j})\eta_{2j}^{n-k}, \nonumber \end{eqnarray} where \begin{eqnarray} a_{12j}={\bar z}\eta_{2j-1}^{k}+z\eta_{2j+1}^{k}, \hspace{0.6cm} a_{22j}=z\eta_{2j-1}^{n-k}+{\bar z}\eta_{2j+1}^{n-k}. \end{eqnarray} Since $\eta_{j}$'s with odd $j$ commute with each other, we obtain \begin{eqnarray} [A_{0}, [A_{0}, [A_{0}, \eta_{2j}^{k}]]] = (a_{11j}-a_{21j})^{3}\eta_{2j}^{k}, \label{A1A1A1etak} \end{eqnarray} and \begin{eqnarray} [A_{0}, [A_{0}, [A_{0}, \eta_{2j}^{n-k}]]] = (a_{12j}-a_{22j})^{3}\eta_{2j}^{n-k}. 
\end{eqnarray} We find the following terms appear in (\ref{A1A1A1etak}) \begin{eqnarray} a_{11j}^{3} &=& z^{3}\eta_{2j-1}^{3k}+3z^{2}{\bar z}\eta_{2j-1}^{2k}\eta_{2j+1}^{k} +3z{\bar z}^{2}\eta_{2j-1}^{k}\eta_{2j+1}^{2k}+{\bar z}^{3}\eta_{2j+1}^{3k}, \nonumber \\ a_{21j}^{3} &=& {\bar z}^{3}\eta_{2j-1}^{-3k}+3{\bar z}^{2}z\eta_{2j-1}^{-2k}\eta_{2j+1}^{-k} +3{\bar z}z^{2}\eta_{2j-1}^{-k}\eta_{2j+1}^{-2k}+z^{3}\eta_{2j+1}^{-3k}, \end{eqnarray} and \begin{eqnarray} a_{11j}^{2}a_{21j} &=& 3z^{2}{\bar z}\eta_{2j-1}^{k}+3z{\bar z}^{2}\eta_{2j+1}^{k} +z^{3}\eta_{2j-1}^{2k}\eta_{2j+1}^{-k} +{\bar z}^{3}\eta_{2j-1}^{-k}\eta_{2j+1}^{2k}, \label{a112a21} \\ a_{11j}a_{21j}^{2} &=& 3z{\bar z}^{2}\eta_{2j-1}^{-k}+3z^{2}{\bar z}\eta_{2j+1}^{-k} +z^{3}\eta_{2j-1}^{k}\eta_{2j+1}^{-2k} +{\bar z}^{3}\eta_{2j-1}^{-2k}\eta_{2j+1}^{k}. \label{a11a212} \end{eqnarray} The assumption $n=3k$ yields $\eta_{2j-1}^{3k}=1$, $\eta_{2j+1}^{3k}=1$, and hence $a_{11j}^{3}-a_{21j}^{3}=0$. From $3k=n$, we find $\omega^{3k^{2}}=(\omega^{n})^{k}=1$ and ${\bar z}^{3}+z^{3}=0$, then the last two terms in the right-hand side of (\ref{a112a21}), and also those of (\ref{a11a212}), cancel each other, respectively. Together with (\ref{A1etak}), we obtain \begin{eqnarray} [A_{0}, [A_{0}, [A_{0}, \eta_{2j}^{k}]]] = -9z{\bar z}[A_{0}, \eta_{2j}^{k}]. \label{A1A1A1A0A1etak} \end{eqnarray} Similarly we find \begin{eqnarray} [A_{0}, [A_{0}, [A_{0}, \eta_{2j}^{n-k}]]] = -9z{\bar z}[A_{0}, \eta_{2j}^{n-k}]. \label{A1A1A1A0A1etank} \end{eqnarray} When $k=3m$, then $\omega^{k^{2}}=\omega^{3m\cdot k}=(\omega^{n})^{m}=1$, $z=0$, and from (\ref{A1etak}) and (\ref{A1etank}) we find $[A_{0}, A_{1}]=0$. When $k=3m-1$, then $\omega^{k^{2}}=\omega^{3k\cdot m}\omega^{-k} =(\omega^{n})^{m}(\omega^{k})^{-1} =1\cdot(e^{i\frac{2\pi}{n}\frac{n}{3}})^{-1}=e^{-i\frac{2\pi}{3}}$, and when $k=3m-2$, then $\omega^{k^{2}}=\omega^{3k\cdot m}\omega^{-2k} =(\omega^{n})^{m}(\omega^{k})^{-2} =1\cdot(e^{i\frac{2\pi}{n}\frac{n}{3}})^{-2}=e^{i\frac{2\pi}{3}}$. We thus obtain $z{\bar z}=3$, $[A_{0}, A_{1}]\neq 0$, and the first part of the Dolan-Grady condition (\ref{DG1}) is satisfied with $C=-9z{\bar z}$. The second part of the Dolan-Grady condition (\ref{DG2}) is obtained by the shift of indices $2j\mapsto 2j+1$. \hbox{\rule[-2pt]{3pt}{6pt}} \\ Let $\zeta_{j}=\eta_{j}^{k}$ $(k=n/3)$. Then we find $\zeta_{j}\zeta_{j+1} =\eta_{j}^{k}\eta_{j+1}^{k} =\omega^{k^{2}}\eta_{j+1}^{k}\eta_{j}^{k} =\omega^{k^{2}}\zeta_{j+1}\zeta_{j}$. When $k=3m$, then $\omega^{k^{2}}=1$, and therefore $\zeta_{j}\zeta_{j+1}=\zeta_{j+1}\zeta_{j}$, and we obtain $[A_{0}, A_{1}]=0$. When $k=3m-2$, then $\omega^{k^{2}}=e^{i\frac{2\pi}{3}}$, and we find $\zeta_{j}\zeta_{j+1}=e^{i\frac{2\pi}{3}}\zeta_{j+1}\zeta_{j}$. When $k=3m-1$, then $\omega^{k^{2}}=e^{-i\frac{2\pi}{3}}$, and ${\bar \zeta}_{j}{\bar \zeta}_{j+1} =e^{i\frac{2\pi}{3}}{\bar \zeta}_{j+1}{\bar \zeta}_{j}$, where ${\bar \zeta}_{j}=\zeta_{2N-j+1}$. The case with $\omega=e^{i\frac{2\pi}{3}}$ was already considered in \cite{12FjelstadMansson} though the derivation is different. With the choice of the operators $\displaystyle \zeta_{2j-1}=X_{j}$ and $\displaystyle \zeta_{2j}=Z_{j}Z_{j+1}^{\dag}$, the Hamiltonian $A_{0}+kA_{1}$ with $n=3$ can be written as ${\cal H}_{B}-{\cal H}_{B}^{\dag}$, where ${\cal H}_{B}$ is the Baxter's clock model \cite{89Baxter_PhysLettA}\cite{89Baxter_JStatPhys61}. 
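Theorem 2 can likewise be checked numerically. The following NumPy sketch is an illustration only: it takes $n=3$, $k=1$, the clock matrices $Z$ and $X$ defined above, and the realization $\eta_{2j-1}=Z_{j}$, $\eta_{2j}=X_{j}X_{j+1}^{\dag}$ (which satisfies (\ref{condn}) and (\ref{cyclic}), and is the realization used for the chiral Potts chain below), and verifies $[A_{0},[A_{0},[A_{0},A_{1}]]]=-27[A_{0},A_{1}]$ and the symmetric relation on a cyclic chain of $N=3$ sites.
\begin{verbatim}
# Numerical check of Theorem 2 for n = 3 (k = 1), using the clock matrices Z, X
# and the cyclic realization eta_{2j-1} = Z_j, eta_{2j} = X_j X_{j+1}^dagger, N = 3.
import numpy as np

n, N = 3, 3
w = np.exp(2j * np.pi / n)
Z1 = np.diag(w ** np.arange(n))
X1 = np.roll(np.eye(n), 1, axis=0)      # 1's below the diagonal and in the corner
assert np.allclose(Z1 @ X1, w * X1 @ Z1)

def site_op(op, j):
    """Embed a single-site clock operator at site j (0-based, modulo N)."""
    mats = [np.eye(n, dtype=complex)] * N
    mats[j % N] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def comm(a, b):
    return a @ b - b @ a

eta_odd = [site_op(Z1, j) for j in range(N)]
eta_even = [site_op(X1, j) @ site_op(X1.conj().T, j + 1) for j in range(N)]

A0 = sum(e - e @ e for e in eta_odd)    # eta^k - eta^{n-k} with k = 1, n - k = 2
A1 = sum(e - e @ e for e in eta_even)

lhs0 = comm(A0, comm(A0, comm(A0, A1)))
lhs1 = comm(A1, comm(A1, comm(A1, A0)))
print(np.allclose(lhs0, -27 * comm(A0, A1)),
      np.allclose(lhs1, -27 * comm(A1, A0)))
\end{verbatim}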
\\ \begin{theorem} Let us consider \begin{eqnarray} A_{0}= \sum_{j=1}^{N}\sum_{k=1}^{n-1} \frac{\eta_{2j-1}^{k}}{1-\omega^{-k}}, \hspace{0.6cm} A_{1}= \sum_{j=1}^{N}\sum_{k=1}^{n-1} \frac{\eta_{2j}^{k}}{1-\omega^{-k}}, \end{eqnarray} where $N\geq 2$, and $\eta_{j}$ together with $\omega$ satisfy (\ref{condn}), and the cyclic boundary condition (\ref{cyclic}) is assumed. Then $[A_{0}, A_{1}]\neq 0$, and $A_{0}$, $A_{1}$ satisfy the Dolan-Grady condition (\ref{DG1}) and (\ref{DG2}) with $C=n^{2}$. \end{theorem} For the purpose to prove this Theorem, we use the formula \cite{75Hansen}\cite{85GehlenRittenberg} \begin{eqnarray} \sum_{l=1}^{n-1}\frac{\omega^{\frac{1}{2}(m-1)l}}{1-\omega^{-l}} = \frac{1}{2}(n-m), \hspace{0.6cm} \sum_{l=1}^{n-1}\frac{\omega^{-\frac{1}{2}(m+1)l}}{1-\omega^{-l}} = -\frac{1}{2}(n-m), \label{sumformula} \\ n=2, 3, 4, 5, \ldots,\hspace{0.4cm} m=1, 3, 5, \ldots \nonumber \\ m\leq n \hspace{0.2cm}{\rm and}\hspace{0.2cm}\omega=e^{i\frac{2\pi}{n}}. \nonumber \end{eqnarray} The first formula is equivalent to (2.16) of \cite{85GehlenRittenberg}, and the second is obtained from the first. \\ \noindent {\bf Proof:} \begin{eqnarray} [A_{0}, \eta_{2j}^{k}] &=& [\sum_{i=1}^{N}\sum_{l=1}^{n-1}\frac{\eta_{2i-1}^{l}}{1-\omega^{-l}}, \eta_{2j}^{k}] \nonumber \\ &=& \sum_{l=1}^{n-1}\frac{1}{1-\omega^{-l}}[\sum_{i=1}^{N}\eta_{2i-1}^{l}, \eta_{2j}^{k}] \nonumber \\ &=& \sum_{l=1}^{n-1}\frac{1}{1-\omega^{-l}} \Big( (1-\omega^{-kl})\eta_{2j-1}^{l}+(1-\omega^{kl})\eta_{2j+1}^{l} \Big) \eta_{2j}^{k} \nonumber \\ &=& \Big( \sum_{l=1}^{n-1}c_{l}(kl)\eta_{2j-1}^{l}+\sum_{l=1}^{n-1}c_{l}(-kl)\eta_{2j+1}^{l} \Big) \eta_{2j}^{k}, \label{DolanGrady1-3} \end{eqnarray} where \begin{eqnarray} c_{l}(m)=\frac{1-\omega^{-m}}{1-\omega^{-l}}. \end{eqnarray} The Dolan-Grady relation (\ref{DG1}) is satisfied if \begin{eqnarray} \Big(\Delta_{k}(\eta_{2j-1}, \eta_{2j+1})^{3}-C\Delta_{k}(\eta_{2j-1}, \eta_{2j+1})\Big) \eta_{2j}^{k} =0, \label{DGThe3-1} \end{eqnarray} where \begin{eqnarray} \Delta_{k}(x, y) = \Delta_{k}(x)+\Delta_{-k}(y), \hspace{0.6cm} \Delta_{k}(x) = \sum_{l=1}^{n-1}c_{l}(kl)x^{l}. \end{eqnarray} For the purpose to derive (\ref{DGThe3-1}), it is sufficient to show that \begin{eqnarray} \Delta_{k}(x, y)^{3}-n^{2}\Delta_{k}(x, y)=0 \label{DGThe3-2} \end{eqnarray} as a polynomial, with the condition $x^n=1$ and $y^n=1$. For the purpose to show (\ref{DGThe3-2}), it is sufficient to show that (\ref{DGThe3-2}) is satisfied with independent numbers $x, y=1$, $\omega, \omega^2, \ldots, \omega^{n-1}$, where $\omega=e^{i\frac{2\pi}{n}}$ (that satisfy $\omega^n=1$). It is straightforward to show, with the use of (\ref{sumformula}), which is valid when $m\leq n$, that $\Delta_{k}(\omega^{s})=-k \:\:(k=1, 2,\ldots, s)$, $\Delta_{k}(\omega^{s})=-(n-k)\:\:(k=s+1, \ldots, n-1)$, and $\Delta_{-k}(\omega^{s})=-(n-k) \:\:(k=1, 2,\ldots, s)$, $\Delta_{-k}(\omega^{s})=k\:\:(k=s+1, \ldots, n-1)$, and thus $\Delta_{k}(x, y)$ takes $n$ or $0$ or $-n$, which yields (\ref{DGThe3-1}) with $C=n^{2}$ and $[A_{0}, A_{1}]\neq 0$. Similarly we obtain (\ref{DG2}) \hbox{\rule[-2pt]{3pt}{6pt}} \\ With the choice of the operators $\displaystyle \eta_{2j-1}=Z_{j}$ and $\displaystyle \eta_{2j}=X_{j}X_{j+1}^{\dag}$, the Hamiltonian $A_{0}+kA_{1}$ results in that of the superintegrable chiral Potts chain \cite{85GehlenRittenberg}. Note that in \cite{85GehlenRittenberg}, the Dolan-Grady condition was derived with the use of the explicit matrix representation. 
In our derivation, Theorem 3 is proved using only the algebraic relations, and is thus valid for all operators which satisfy (\ref{condn}). The operators $A_{0}$ and $A_{1}$ in Theorem 1 satisfy a secular equation and the generated algebra is a quotient of the Onsager algebra. It is an important problem to find the secular equations in the cases of Theorems 2 and 3, and to specify the possible form of the eigenvalue in all cases. \\ A list of operators that satisfy (\ref{condn}) can be found in Table 1. The next Theorem shows that once we find a set of operators satisfying (\ref{condn}), we can generate other sets of operators that satisfy (\ref{condn}). \begin{theorem} Let us consider the case $\displaystyle \eta_{j}=\prod_{k=1}^{N}(X_{k}^{x_{jk}}Z_{k}^{z_{jk}})$, where $x_{jk}$ and $z_{jk}$ are non-negative integers. Let $\varphi_{X}$ and $\varphi_{Z}$ be transformations defined by \begin{eqnarray} \varphi_{Z} &:& X_{k}\mapsto X_{k}, \hspace{0.8cm} Z_{k}\mapsto Z_{k}^{-1}, \nonumber \\ \varphi_{X} &:& X_{k}\mapsto X_{k}^{-1}, \hspace{0.6cm} Z_{k}\mapsto Z_{k} \label{transXZ} \end{eqnarray} for all $k$, and $\varphi_{XZ}=\varphi_{X}\circ\varphi_{Z}$. Assume that the operators $\{\eta_{j}\}=\{\eta_{2j-1}\}\cup\{\eta_{2j}\}$ satisfy the condition (\ref{condn}). Then the sets of operators \begin{eqnarray} && \{\varphi_{Z}(\eta_{2j-1})\}\cup\{\varphi_{Z}(\eta_{2j})\}, \hspace{0.6cm} \{\varphi_{X}(\eta_{2j-1})\}\cup\{\varphi_{X}(\eta_{2j})\}, \nonumber \\ && \{\varphi_{XZ}(\eta_{2j-1})\}\cup\{\eta_{2j}\}, \hspace{0.4cm} \{\eta_{2j-1}\}\cup\{\varphi_{XZ}(\eta_{2j})\}, \label{transXZ5} \\ && \{\varphi_{XZ}(\eta_{2j-1})\}\cup\{\varphi_{XZ}(\eta_{2j})\} \nonumber \end{eqnarray} also satisfy the condition (\ref{condn}). \end{theorem} \noindent {\bf Proof:} The first condition in (\ref{condn}), $\eta_{j}\eta_{j+1}=\omega\eta_{j+1}\eta_{j}$ or $\eta_{j}\eta_{j+1}=\omega^{-1}\eta_{j+1}\eta_{j}$, is written as \begin{eqnarray} \sum_{k=1}^{N}(z_{j k}x_{j+1 k}-x_{j k}z_{j+1 k}) &=& 1 \hspace{0.4cm} {\rm or} \hspace{0.4cm} -1. \label{condnel1} \end{eqnarray} The second condition in (\ref{condn}), $\eta_{i}\eta_{j}=\eta_{j}\eta_{i}$ $(|i-j|>1)$, is written as \begin{eqnarray} \sum_{k=1}^{N}(z_{i k}x_{j k}-x_{i k}z_{j k}) &=& 0 \hspace{0.6cm} |i-j|>1. \label{condnel2} \end{eqnarray} The transformations (\ref{transXZ}) result in \begin{eqnarray} \varphi_{Z}: z_{jk}\mapsto -z_{jk}, \hspace{0.6cm} \varphi_{X}: x_{jk}\mapsto -x_{jk} \end{eqnarray} for all $k$. Then it is easy to verify that the conditions (\ref{condnel1}) and (\ref{condnel2}) are also satisfied after the transformations (\ref{transXZ5}) \hbox{\rule[-2pt]{3pt}{6pt}} \\ Uglov and Ivanov \cite{95UglovIvanov} considered a generalization of the original Onsager algebra. They considered a Hamiltonian of the form \begin{eqnarray} -\beta{\cal H} = \sum_{j=1}^{N}K_{j}e_{j} \hspace{0.6cm} (N\geq 3) \label{HamRdm} \end{eqnarray} and derived that if the operators $e_{j}$ satisfy the relations \begin{eqnarray} [e_{i}, [e_{i}, e_{j}]]&=&e_{j} \hspace{0.6cm} (|i-j|=1), \nonumber\\ \: [e_{i}, e_{j}]&=&0 \hspace{0.7cm} (|i-j|>1), \label{UIcond} \end{eqnarray} then there exists an infinite family of integrals $\{I_{m}\}$, where $I_{0}={\cal H}$ and $[I_{m}, I_{n}]=0$ $(m, n\geq 1)$. We would like to note that the operators $\displaystyle \frac{1}{2}\eta_{j}^{k}$ (with $k=n/2$ odd) satisfy the condition (\ref{UIcond}).
A list of operators $\eta_{j}$ with $n=2$ can be found in Table 1 of \cite{17Minami}, where one of the simplest examples is \begin{eqnarray} e_{2j-1}=\frac{1}{2}\sigma^{z}_{j}, \hspace{0.3cm} e_{2j}=\frac{1}{2}\sigma^{x}_{j}\sigma^{x}_{j+1}, \end{eqnarray} which are the interactions of the transverse Ising chain \cite{62Katsura}-\cite{98Minami}. Transverse Ising chains with random interactions and fields have been investigated in \cite{68McCoyWu1}-\cite{11Derzhko}. Here we note that two-dimensional Ising models are equivalent to one-dimensional quantum chains \cite{71Suzuki}, including random cases \cite{14Minami}. We find, for example, that cluster models with random next-nearest-neighbor interactions are integrable. They cannot be diagonalized, even in the case of uniform interactions, through the standard Jordan-Wigner transformation, and the algebraic generalization (\ref{transmain}) is needed to diagonalize them \cite{20Yanagihara}. The author would like to thank Pascal Baseilhac for his valuable comments. This work was supported by JSPS KAKENHI Grant No. JP19K03668.
\section{Introduction} The atomic pair distribution function (PDF) analysis of x-ray and neutron powder diffraction is growing in popularity with the advent of nanoscience and nanotechnology. The technique is more than 70 years old~\cite{debye;pz30,warre;b;xd90,egami;b;utbp03} and was originally applied almost exclusively to the study of glass and amorphous structures~\cite{warre;jpc34,warre;jacers36,frank;ac50,frank;prsla51,wrigh;gpc98}. However, the approach is proving powerful in solving structure on the nanoscale~\cite{billi;cc04}, where traditional crystallographic methods break down~\cite{billi;s07}. In particular, the study of the structure of discrete nanoparticles using the PDF method has recently become a focus \cite{mcken;n92,zhang;n03,gates;jpcb04,gilbe;s04,page;chpl04,korsu;jac05,petko;jmc05,bedfo;jpcc07,ehm;pd07,masad;prb07,pradh;cm07}. The convergence of this new need with the availability of powerful sources of high-energy synchrotron x-rays and spallation neutrons, together with fast computing, is greatly expanding the power and applicability of the method. The PDF, $G(r)$, is defined both in terms of the real-space pair density, $\rho(r)$, and in terms of the reciprocal-space scattering, $F(Q)=Q[S(Q)-1]$, as follows: \begin{equation} G(r) = \frac{2}{\pi}\int_0^\infty F(Q)\sin Qr \mathrm{d} Q \label{eq:grFromfTraditional} \end{equation} and \begin{equation} G(r) = 4\pi r(\rho(r) - \rho_0), \label{eq:grFromRhoBulk} \end{equation} where $\rho_0$ is the number density of the material \cite{kaplo;pr65,kaplo;pr68,klug;b;xdpfpaam74,johns;jncs82,soper;prl82,korsu;jsc85,wrigh;jncs85,nanao;prb87,warre;b;xd90,egami;ferro91,billi;prl94,petko;prb98,petko;jac98,keen;jac01,tucke;jac01a,egami;b;utbp03,soper;jpcm07ii,neder;pssc07,billi;jssc08,dinne;b;pdtap08,gilbe;jac0}. Equation \ref{eq:grFromRhoBulk} in this form works well for bulk materials, but the negatively sloping baseline, $-4\pi r\rho_0$, is no longer valid when the PDF is calculated from finite-sized objects such as discrete nanoparticles \cite{korsu;jac05}. Motivated by the need for a rigorous definition of the form of this baseline, we rederive these equations here. We show the correct form of \eq{grFromRhoBulk} in a number of cases of practical interest, such as discrete nanoparticles and nanoparticle PDFs calculated from bulk models. The important distinction is provided by the small angle scattering (SAS) intensity, and we explicitly relate the commonly used PDF functions to commonly used results from SAS. We also show that the widely used pair of definitions for the PDF above are actually incompatible with each other and that the definition \eq{grFromfTraditional} does not give rise to \eq{grFromRhoBulk} but rather to $R(r)/r$, where $R(r)$ is the radial distribution function. This work therefore resolves a long-standing ambiguity in the PDF literature. Compared to the PDF of a bulk sample~\cite{levas;jcc07}, the PDF of a nanoparticle is attenuated with increasing $r$ by a function that is related to the form of the nanoparticle \cite{guini;b;xdic63}. For simple shapes, such as spheroids, spherical shells, rods and discs, this nanoparticle form factor can be computed analytically~\cite{rayle;prsl14,glatt;b;saxs82,thorp;unpub07,gilbe;jac08} and integral equations exist for more complex shapes~\cite{kodam;aca06}. 
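For the simplest case of a sphere of diameter $d$, this envelope is commonly written as the sphere characteristic function $\gamma_{0}(r)=1-\frac{3r}{2d}+\frac{r^{3}}{2d^{3}}$ for $0\le r\le d$ and zero beyond, which is the function by which a bulk PDF is attenuated in the modeling procedure discussed next. The following short Python sketch (an illustration only; the function name is ours) evaluates it:
\begin{verbatim}
# Characteristic function (PDF envelope) of a sphere of diameter d,
# the simplest of the analytically known nanoparticle form factors.
import numpy as np

def sphere_envelope(r, d):
    """gamma_0(r) = 1 - 3r/(2d) + r^3/(2d^3) for 0 <= r <= d, and 0 beyond."""
    r = np.asarray(r, dtype=float)
    x = r / d
    return np.where(r <= d, 1.0 - 1.5 * x + 0.5 * x**3, 0.0)

if __name__ == "__main__":
    r = np.linspace(0.0, 30.0, 7)
    print(sphere_envelope(r, d=25.0))  # decreases from 1 at r = 0 to 0 at r = d
\end{verbatim}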
This lends itself to a simple nanoparticle modeling procedure where the nanoparticle PDF is calculated from the PDF of a bulk phase analogue by multiplying by the assumed nanoparticle form factor~\cite{guini;b;xdic63,qiu;prl05,masad;prb07}. The approach is successful for extracting precise quantitative structural information about the crystalline core of nanoparticles, including defects and size-dependent bond-lengths~\cite{masad;prb07}, and this functionality has been incorporated in the latest version of the PDF modeling software PDFgui~\cite{farro;jpcm07}. However, the method is not applicable when the nanoparticle structure has no bulk phase analogue. This is the case in general, for example in nanoparticles with surface modifications~\cite{zhang;n03} and inhomogeneous compositions such as core-shell nanoparticles~\cite{lizma;l96}. In these cases, models of discrete nanoparticles must be applied. As we discuss below, this results in an ambiguity about the precise form of the measured correlation function, and therefore how to calculate it. Currently, this is dealt with quite successfully in an \emph{ad hoc} way, as for example in~\cite{korsu;jac05,neder;jpcm05,korsu;jac07,neder;pssc07}. This paper presents a rigorous definition of the form of the baseline in terms of the nanoparticle form factor. In Section \ref{sec:derivation} we rederive the equations giving rise to the PDF to show the precise relationship between the measured correlation function in an x-ray or neutron total scattering experiment and the underlying model. In Section \ref{sec:sas} we make explicit the link between the commonly used PDF and small angle scattering equations. This has implications for calculating PDFs from discrete nanoparticle models for quantitative comparison with data, which are discussed in \sect{methods}. In \sect{sasft} we discuss the conditions under which the PDF can be calculated in real space. Section \ref{sec:summary} contains a brief summary. \section{Derivation of the PDF equations} \label{sec:derivation} To understand the precise relationship between the commonly used PDF equations, nanoparticle structures, and small angle scattering we rederive the PDF equations from the beginning since subtle details of the derivation that are often overlooked have a significant impact on discussion presented here. Furthermore, the full derivation is not reproduced even in many textbooks on the subject~\cite{warre;b;xd90,klug;b;xdpfpaam74,egami;b;utbp03,dinne;b;pdtap08} and so these subtleties are not widely appreciated in the community. We start from the scattering amplitude from a set of $i$ atoms at points $\vec{r_i}$ in the kinematical limit: \begin{equation} \begin{split} \psi(\vec{Q}) &= \sum_i f_i(Q)e^{\mathrm{i}\vec{Q}\cdot\vec{r_i}}\\ &= \sum_i \psi_i \end{split} \end{equation} If the scattering from these atoms were totally incoherent the total intensity would be the sum of the intensities from each atom, \begin{equation} \begin{split} I_{inc} &=\sum_{i} \psi_i^* \psi_i \\ &=\sum_{i} f_i^*(Q)f_i(Q) \\ &=\sum_{\alpha} N_\alpha f_\alpha^*(Q)f_\alpha(Q)\\ &=N \sum_{\alpha} c_\alpha f_\alpha^*(Q)f_\alpha(Q)\\ &=N \langle f^2 \rangle, \label{eq:iinc} \end{split} \end{equation} where the sum over $\alpha$ is now over the different species of atoms in the sample with $N$ being the total number of atoms, $N_\alpha$ being the number of atoms of type $\alpha$ and where the concentration of species $\alpha$ is $c_\alpha = N_\alpha/N$. 
Similarly, we can define the sample-averaged scattering power, $\langle f \rangle = \sum_\alpha c_\alpha f_\alpha$ and \begin{equation} \begin{split} \langle f\rangle ^2 & = \frac{1}{N^2} \sum_{ij} f_j^* f_i \\ & = \sum_{\alpha\beta} c_\alpha c_\beta f_\alpha^* f_\beta. \end{split} \end{equation} The full coherent scattering intensity is given by $\psi^*\psi$ which is \begin{equation} \label{eq:Icoh} \begin{split} I_c &= \sum_i\sum_j f_j^*f_i e^{\mathrm{i}\vec{Q}\cdot (\vec{r_i}-\vec{r_j})}\\ &= \sum_{i,j} f_j^*f_i e^{\mathrm{i}\vec{Q}\cdot\vec{r_{ij}}}. \end{split} \end{equation} Here we have dropped the $Q$-dependence of the atomic scattering factors to simplify the notation, but the $f$s are understood to retain their $Q$-dependence. We can separate out the self-scattering, $i=j$, for which $\vec{r_{ij}}=0$: \begin{equation} \begin{split} I_c &= \sum_i f_i^*f_i + \sum_{i\neq j} f_j^*f_i e^{\mathrm{i}\vec{Q}\cdot \vec{r_{ij}}}\\ &= N\langle f^2\rangle + \sum_{i\neq j} f_j^*f_i e^{\mathrm{i} \vec{Q}\cdot \vec{r_{ij}}}, \label{eq:icoh} \end{split} \end{equation} where we have used \eq{iinc}, resulting in an expression for the discrete scattering intensity for $i\ne j$ as \begin{equation} \begin{split} I_d &= I_c - N\langle f^2\rangle\\ &= \sum_{i\neq j} f_j^*f_i e^{\mathrm{i} \vec{Q}\cdot \vec{r_{ij}}}. \end{split} \end{equation} We want an expression for the total scattering structure function, $S(\vec{Q})$, which is defined as $\frac{I_c}{N\langle f\rangle^2} - \frac{\langle (f - \langle f\rangle)^2\rangle}{\langle f\rangle^2}$. The second term in this definition is the Laue monotonic diffuse scattering that comes about because of the imperfect cancellation of intensity at the destructive interference condition when atomic sites are occupied by atoms of different scattering strength. It results in a monotonic incoherent background even in the case of perfectly coherent scattering. To get $S(\vec{Q})$ from \eq{icoh} we therefore must normalize by the total number of scatterers, $N$, \begin{equation} \begin{split} \frac{I_c}{N} &= \langle f^2\rangle + \frac{1}{N}\sum_{i\neq j} f_j^*f_i e^{\mathrm{i} \vec{Q}\cdot \vec{r_{ij}}}. \end{split} \end{equation} Subtracting the normalized self scattering term to get \begin{equation} \frac{I_c}{N} - \langle f^2\rangle = \frac{1}{N}\sum_{i\neq j} f_j^*f_i e^{\mathrm{i} \vec{Q}\cdot \vec{r_{ij}}}, \end{equation} and then normalizing by $\langle f\rangle^2$, we obtain \begin{equation} \frac{I_c}{N\langle f\rangle^2} - \frac{\langle f^2\rangle}{\langle f\rangle^2} = \frac{1}{N\langle f\rangle^2}\sum_{i\neq j} f_j^*f_i e^{\mathrm{i} \vec{Q}\cdot \vec{r_{ij}}}. \end{equation} Thus, \begin{equation} \label{eq:sofqdebye} \begin{split} S(\vec{Q})-1 &=\frac{I_c}{N\langle f\rangle^2} - \frac{\langle f^2\rangle}{\langle f\rangle^2}\\ &=\frac{I_d}{N\langle f\rangle^2}\\ &= \frac{1}{N\langle f\rangle^2}\sum_{i\neq j} f_j^*f_i e^{\mathrm{i}\vec{Q}\cdot \vec{r_{ij}}}. \end{split} \end{equation} This expression yields precisely $S(\vec{Q})-1$ in terms of scattering from atoms in our sample. For an isotropic sample, e.g., a powder of crystals or nanoparticles, we assume there to be a crystallite with every orientation with equal probability and we can take an orientational average. Place the $\vec{Q}$ along $z$ so that we can express $\vec{Q}\cdot \vec{r_{ij}}= Q r_{ij}\cos \theta$. Then the orientational averaging means that $\theta$ takes all values with equal probability. 
The sample-averaged intensity for a pair of atoms will therefore be \begin{equation} \begin{split} \label{eq:avgexp} \overline{e^{\mathrm{i} \vec{Q}\cdot \vec{r_{ij}}}} &=\frac{\int_0^{2\pi} \mathrm{d}\phi \int_0^{\pi}\mathrm{d}\theta e^{\mathrm{i} Qr_{ij}\cos\theta}r_{ij}^2\sin\theta} {\int_0^{2\pi} \mathrm{d}\phi \int_0^{\pi}\mathrm{d}\theta r_{ij}^2\sin\theta}\\ &=\frac{-2\pi r_{ij}^2\left[ e^{\mathrm{i} Qr_{ij}\cos\theta}\right]_0^{\pi}}{4\pi r_{ij}^2 \mathrm{i} Qr_{ij}}\\ &=\frac{ \left[ e^{\mathrm{i} Qr_{ij}}-e^{-\mathrm{i} Qr_{ij}}\right]}{2\mathrm{i} Qr_{ij}}\\ &=\frac{ \sin (Qr_{ij})}{Qr_{ij}}. \end{split} \end{equation} Using this in \eq{Icoh} gives the average coherent scattering intensity, as expressed originally by Debye~\cite{debye;ap15}. From this we get the total scattering structure function for an isotropic sample, \begin{equation} \begin{split} S(Q)-1 &= \frac{1}{N\langle f\rangle^2}\sum_{i\neq j} f_j^*f_i \frac{ \sin (Qr_{ij})}{Qr_{ij}}. \label{eq:sqminus1} \end{split} \end{equation} Thus, the reduced total scattering structure function, $F(Q)=Q[S(Q)-1]$, is \begin{equation} \label{eq:fqdebye} \begin{split} F(Q) &= \frac{1}{N\langle f\rangle^2}\sum_{i\neq j} f_j^*f_i \frac{ \sin (Qr_{ij})}{r_{ij}}. \end{split} \end{equation} It is convenient at this point to get rid of the $Q$-dependence of the x-ray form-factors. They are assumed to be isotropic so depend only on $Q$ and not $\vec{Q}$, which is a good approximation for scattering from core electrons especially. Write $f(Q)=f(0)\tilde{f}(Q)$, where $\tilde{f}(Q)$ has value 1 at $Q=0$ and contains the $Q$-dependence of the form-factor and $f(0)\approx Z$, where $Z$ is the atomic number that scales the form factor. The Morningstar-Warren approximation~\cite{warre;jacers36} is that the $Q$-dependent part of the form factors can be well approximated by an average $Q$-dependence, $\overline{\tilde{f}(Q)}=\frac{1}{N_{species}}\sum_\alpha c_\alpha \tilde{f}_\alpha(Q)$. In this case the $Q$-dependence, $\overline{\tilde{f}(Q)}^2$, comes out of the double sums in \eq{fqdebye} on the top and the bottom and cancels out. The $f$s that remain are $Q$-independent, and normally replaced by the atomic number (modified by any anomalous scattering factors). The same result holds for neutron scattering where the $f$s are replaced by coherent neutron scattering lengths, $b$. These have no $Q$-dependence and therefore the approximate method for removing the $Q$-dependence is not needed. Now we want to consider the inverse Fourier transform of $F(Q)$. Because $F(Q)$ is an even function, we use the sine-Fourier transform, \begin{equation} \label{eq:fofr} \begin{split} \mathpzc{f}(r) &= \frac{2}{\pi}\int_{0}^\infty F(Q)\sin(Qr) dQ. \end{split} \end{equation} We choose the $\frac{2}{\pi}$ prefactor so that the direct sine transform has a prefactor of $1$. This is precisely the definition of the PDF in \eq{grFromfTraditional}. 
From this we get \begin{equation} \begin{split} \label{eq:frdebye} \mathpzc{f}(r) &= \frac{2}{\pi}\int_{0}^\infty \frac{1}{N\langle f\rangle^2} \sum_{i\neq j} f_j^*f_i \frac{\sin(Qr_{ij})}{r_{ij}}\sin(Qr) dQ\\ &= \frac{2}{\pi N\langle f\rangle^2}\sum_{i\neq j} \frac{f_j^*f_i}{r_{ij}} \int_{0}^\infty \sin (Qr_{ij})\sin(Qr) dQ\\ &= \frac{1}{N\langle f\rangle^2}\sum_{i\neq j} \frac{f_j^*f_i}{r_{ij}} \,[\delta (r-r_{ij})-\delta (r+r_{ij})]\\ &= \frac{1}{r N\langle f\rangle^2}\sum_{i\neq j} {f_j^*f_i} \,[\delta (r-r_{ij})-\delta (r+r_{ij})] \end{split} \end{equation} which, if we confine ourselves to the positive axis only, is \begin{equation} \begin{split} \mathpzc{f}(r)&= \frac{1}{r N\langle f\rangle^2}\sum_{i\neq j} {f_j^*f_i} \,\delta (r-r_{ij}). \end{split} \end{equation} We can interpret $\mathpzc{f}(r)$ in terms of the radial distribution function (RDF). The RDF, denoted $R(r)$, is defined for an elemental system such that for an arbitrary atom $i$ at the origin, $R_i(r)\mathrm{d} r$ gives the number of atoms in a shell of thickness $\mathrm{d} r$ at a distance $r$ from that atom and the total RDF is the average of the partial RDFs over each atom taken at the origin. Thus, the integral of the RDF between two bounds gives the number of atomic pairs per atom with separation within those bounds. Equation \ref{eq:frdebye} yields this behavior if we multiply by $r$. For a solid with $\alpha$ atomic species we get \begin{equation} \begin{split} \int_a^b R(r)\mathrm{d} r &= \int_a^b\frac{1}{N\langle f\rangle^2}\sum_{i\neq j} {f_j^*f_i} \,\delta (r-r_{ij})\mathrm{d} r \\ &= \frac{1}{ N\langle f\rangle^2}\sum_{i}\sum_{j\in S} {f_j^*f_i}\\ &= \frac{1}{\langle f\rangle^2}\sum_\alpha c_\alpha f_\alpha \sum_{j \in S} {f_j^*}, \end{split} \end{equation} where $S$ is the set of atoms with distance from atom $i$ greater than $a$ and less than $b$. In the case of just one atomic species, this reduces to \begin{equation} \begin{split} \int_a^b R(r)\mathrm{d} r &= \frac{f^2}{f^2}\sum_{j\in S}1\\ &= N_a, \end{split} \end{equation} as required. Thus, \begin{equation} \label{eq:frrho} \begin{split} \mathpzc{f}(r)&= \frac{R(r)}{r} \\ &= 4\pi r\rho(r). \end{split} \end{equation} The second expression comes from the relationship between the RDF and the pair density. The pair density is defined such that $\int \mathrm{d} r \mathrm{d}\phi \mathrm{d}\theta r^2 \sin(\theta) \rho(r) = \int R(r) \mathrm{d} r$, so that $4\pi r^2\rho(r) = R(r)$. Comparing Equations~\ref{eq:fofr} and~\ref{eq:frrho} we see that the definition \eq{fofr} does \emph{not} yield $G(r)$ (\eq{grFromRhoBulk}) and strictly $\frac{2}{\pi}\int_0^\infty F(Q)\sin Qr \mathrm{d} Q = R(r)/r \ne G(r).$ However, we see below that in practice it is $G(r)$ and not $R(r)$ that is obtained experimentally in most cases. Finally, reordering \eq{frrho} we find \begin{equation} \label{eq:rho} \begin{split} \rho (r)&= \frac{\mathpzc{f} (r)}{4\pi r } \\ &= \frac{1}{4\pi r^2 N\langle f\rangle^2}\sum_{i\neq j} {f_j^*f_i} \,\delta (r-r_{ij}). \end{split} \end{equation} In reality, $I_{c}(Q)$ is measured down to a minimum $Q$ due to the experimental setup. This means that in general the forward scattering contributions are lost. We will consider the impact of this on the measured real-space function $\mathpzc{f}(r)$. 
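The interpretation of $\mathpzc{f}(r)$ as $R(r)/r$ can be checked directly from a list of pair distances. In the sketch below (Python; a toy structure, with the delta functions of \eq{rho} binned into a finite-width histogram rather than kept as true distributions) the weighted RDF is accumulated pair by pair, and $\rho(r)$ and $\mathpzc{f}(r)$ then follow from \eq{frrho}.
\begin{verbatim}
import numpy as np

def weighted_rdf(positions, f, rmax=10.0, dr=0.02):
    """Histogram approximation to R(r), rho(r) and f(r) = R(r)/r."""
    N = len(f)
    norm = N * f.mean()**2
    edges = np.arange(0.0, rmax + dr, dr)
    R = np.zeros(len(edges) - 1)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = np.linalg.norm(positions[i] - positions[j])
            if rij < rmax:
                R[int(rij / dr)] += f[j] * f[i] / (norm * dr)
    r = 0.5 * (edges[:-1] + edges[1:])
    rho = R / (4.0 * np.pi * r**2)   # pair density, Eq. (rho)
    fr = R / r                       # the function returned by the sine transform
    return r, R, rho, fr
\end{verbatim}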
We rewrite the expression for the experimental $\mathpzc{f}(r)$ as \begin{equation} \begin{split} \label{eq:sasgofr} \mathpzc{f}(r;Q_{min}) &= \frac{2}{\pi} \int_{Q_{min}}^{\infty} F(Q) \sin(Qr) \mathrm{d} Q\\ &= 4\pi r \rho(r) - \frac{2}{\pi} \int_0^{Q_{min}}F(Q) \sin(Qr) \mathrm{d} Q. \end{split} \end{equation} Of course, we have a finite $Q_{max}$ as well, but this will be disregarded during the following discussion, as the effects are well understood~\cite{toby;aca92}. \section{Low angle scattering intensity} \label{sec:sas} We will now consider a number of explicit examples to understand how the missing forward scattering affects the measured $\mathpzc{f}(r;Q_{min})$. Consider a bulk material of uniform density and infinite extent. Since it has no internal structure there is perfect cancellation of all the discrete scattering intensity, $I_d$, everywhere except at $Q=0$ since for all wavelengths it is always possible to find a pair of volume elements where the scattering is exactly out of phase with each other and cancels. The definition of $I_d$ in terms of a double sum over atoms is not entirely appropriate for this case (since there are no atoms!), but it gives an intuitive feeling about the behavior. \begin{equation} I_d(Q) = (N^2-N)\langle f \rangle^2 \delta(Q), \label{eq:Iinfhom} \end{equation} which is a delta-function spike at $Q=0$ sitting on a zero background. The integrated intensity in the delta-function scales like the volume of the sample squared. Strictly speaking the sample has to be infinite in extent, and therefore $N = \infty$, to get a perfect delta-function. From Eqs.~\ref{eq:sofqdebye} and~\ref{eq:Iinfhom} we see that $S(0)-1=N-1 \approx N$ since $\lim_{Q \rightarrow 0} \sin Qr/ Qr = 1$. Now we do this more formally. Assume the sample has a uniform number density, $\rho_0$. Since the scattering length is defined per atom, the scattering amplitude of a volume element $\mathrm{d} \vec{r}$ at position $\vec{r}$ is $\psi= \rho_0\langle f\rangle e^{\mathrm{i} \vec{Q}\cdot\vec{r}}\mathrm{d}\vec{r}$. For an infinite crystal, the intensity is given by \begin{equation} I_c(\vec{Q}) = \rho_0^2\langle f\rangle^2 \int \int e^{\mathrm{i} \vec{Q}\cdot(\vec{r}-\vec{r^\prime})} \mathrm{d} \vec{r} \mathrm{d} \vec{r^\prime}, \end{equation} where the integrals are over all space, which gives a delta-function at $Q=0$. If the material is finite in extent then the integrals are finite. To evaluate the integrals, we can define a shape function $s(\vec{r})$ such that inside the shape $s=1$ and outside the shape, $s=0$. For such a material, \begin{equation} I_c(\vec{Q}) = \rho_0^2\langle f\rangle^2 \int \int s(\vec{r})s(\vec{r^\prime)}e^{\mathrm{i} \vec{Q}\cdot(\vec{r}-\vec{r^\prime})} \mathrm{d} \vec{r} \mathrm{d} \vec{r^\prime}. \end{equation} Let us do a change of variables so that $\vec{r^{\prime\prime}} = \vec{r}-\vec{r^\prime}$, and $\mathrm{d} \vec{r^{\prime\prime}} = \mathrm{d} \vec{r}$, in which case we have \begin{equation} \label{eq:Ishape} \begin{split} I_c(\vec{Q}) &= \rho_0^2\langle f\rangle^2 \int \int s(\vec{r^\prime})s(\vec{r^\prime}+\vec{r^{\prime\prime}})e^{\mathrm{i} \vec{Q}\cdot\vec{r^{\prime\prime}}} \mathrm{d} \vec{r^\prime} \mathrm{d} \vec{r^{\prime\prime}}\\ &= \rho_0^2\langle f\rangle^2 \int \mathrm{d} \vec{r^{\prime\prime}} e^{\mathrm{i} \vec{Q}\cdot\vec{r^{\prime\prime}}}\int s(\vec{r^\prime})s(\vec{r^\prime}+\vec{r^{\prime\prime}})\mathrm{d} \vec{r^\prime}. 
\end{split} \end{equation} The second integral is a self convolution, or autocorrelation function, of the shape function. Let us define \begin{equation} \label{eq:gammadef} \gamma_0(\vec{r})=\frac{1}{V}\int s(\vec{r^\prime})s(\vec{r^\prime}+\vec{r})\mathrm{d} \vec{r^\prime}, \end{equation} where $V = \int s(\vec{r}) \mathrm{d} \vec{r}$ is the volume defined by the shape function. This $\gamma_0(\vec{r})$ is the characteristic function of the shape~\cite{guini;b;sas55}, and has been called the nanoparticle form factor in the PDF literature~\cite{qiu;prl05,kodam;aca06,masad;prb07}. Defined as such, $\gamma_0(\vec{r})$ has the following properties: \begin{equation} \label{eq:gammaprops} \begin{split} \gamma_0(0) &=1 \\ \int \gamma_0(\vec{r}) \mathrm{d} \vec{r} &= V \end{split} \end{equation} This definition, and a convenient dropping of the double-primes gives \begin{equation} \begin{split} I_c(\vec{Q}) &= \rho_0^2\langle f\rangle^2 V \int \gamma_0(\vec{r})e^{\mathrm{i} \vec{Q}\cdot\vec{r}}\mathrm{d} \vec{r}. \end{split} \end{equation} In analogy with the discrete case we want to convert this to $S(\vec{Q})-1 = \frac{I_c}{N\langle f\rangle^2} - \frac{\langle f^2 \rangle}{\langle f\rangle^2}$, and using the fact that $N=\rho_0 V$ we get \begin{equation} \label{eq:sqgamma} \begin{split} S(\vec{Q})-1 &= \frac{1}{N\langle f\rangle^2}\rho_0^2\langle f\rangle^2 V \int \gamma_0(\vec{r})e^{\mathrm{i} \vec{Q}\cdot\vec{r}}\mathrm{d} \vec{r} - \frac{\langle f^2 \rangle}{\langle f\rangle^2}\\ &= \rho_0 \int \gamma_0(\vec{r})e^{\mathrm{i} \vec{Q}\cdot\vec{r}}\mathrm{d} \vec{r}- \frac{\langle f^2 \rangle}{\langle f\rangle^2}. \end{split} \end{equation} The second term, $\frac{\langle f^2 \rangle}{\langle f\rangle^2}$, is very small compared to the first term. It is order unity, where the first term scales as $N=\rho_0 V$, and can safely be ignored in most cases. Now we want to take the orientational average. This must be done with care as, in general, the orientation of the nanoparticle shape and the underlying structure are correlated. For example, the morphology of the particles (plates or needles) depends on easy growth directions of the underlying structure. As discussed by Gilbert~\cite{gilbe;jac08}, this means that the shape function and the internal structure of the particle are not, in general, separable and it is not correct to get the scattered intensity by convolving the reciprocal-space intensity with the Fourier transform of the characteristic function. Things are greatly simplified in the case where the underlying structure, or the nanoparticle shape, or both, are isotropic, or approximately so. Then we can denote the angle-averaged characteristic function as \begin{equation} \label{eq:gammaavg} \begin{split} \overline{\gamma_0(\vec{r})} = \gamma_0(r) = \frac{ \int \mathrm{d} \phi \int \mathrm{d} \theta \sin(\theta) r^2 \gamma_0(\vec{r}) } { \int \mathrm{d} \phi \int \mathrm{d} \theta r^2 \sin(\theta) }. 
\end{split} \end{equation} Using this and \eq{avgexp} we get \begin{equation} \label{eq:sqavg} \begin{split} S(Q)-1 &= \rho_0 \int_{0}^\infty \mathrm{d} r \int_0^{2\pi} \mathrm{d}\phi \int_0^{\pi}\mathrm{d}\theta \overline {\gamma_0(\vec{r}) e^{\mathrm{i} \vec{Q}\cdot\vec{r}}} r^2\sin\theta \\ &= \rho_0 \int_{0}^\infty \mathrm{d} r \int_0^{2\pi} \mathrm{d}\phi \int_0^{\pi}\mathrm{d}\theta \overline {\gamma_0(\vec{r})} \overline{e^{\mathrm{i} \vec{Q}\cdot\vec{r}}} r^2\sin\theta \\ &= \rho_0 \int_{0}^\infty \mathrm{d} r \int_0^{2\pi} \mathrm{d}\phi \int_0^{\pi}\mathrm{d}\theta \gamma_0(r)\frac{\sin Qr}{Qr}r^2\sin\theta \\ &= \rho_0 \int_{0}^\infty \gamma_0(r)\frac{\sin Qr}{Qr} 4 \pi r^2 \mathrm{d} r. \end{split} \end{equation} Since the particles have no preferred orientation in space, we have broken the average of the product in the first line into the product of the averages. This gives \begin{equation} \label{eq:fqgamma} \begin{split} F(Q) &= \int_{0}^\infty 4\pi \rho_0 r \gamma_0(r)\sin(Qr)\mathrm{d} r. \end{split} \end{equation} Noting that this is the direct sine-Fourier transform, we take the inverse transform to get \begin{equation} \label{eq:fhomsolid} \begin{split} \mathpzc{f}_u(r) &= \frac{2}{\pi} \int_{0}^\infty F(Q) \sin (Q r) \mathrm{d} Q\\ &= 4\pi\rho_0 r \gamma_0(r), \end{split} \end{equation} where the subscript $u$ indicates that this result is for a solid of uniform density distribution, $\rho_0$. Next we consider a macroscopic crystal. The difference in $F(Q)$ is at higher $Q$ where, instead of complete cancellation of all the discrete intensity it appears at distinct reciprocal lattice points as sharp Bragg peaks. Importantly, in the region of $Q$ below the first Bragg peak, the distinct scattering is zero except at very low-$Q$ where small angle scattering region is reached. The \emph{small angle} scattering intensity, $I_{sas}$ from the crystal is identical to that from the solid with uniform density: $I^{sas}_u = I^{sas}_{crystal}$. The small and wide angle scattering regions are well separated in $Q$ and $I_{sas}$ decays to zero before $Q_{min}$ is reached in the crystal. Thus, \begin{equation} \label{eq:fsas} \begin{split} \mathpzc{f}_{sas}(r) &= \frac{2}{\pi} \int_0^{Q_{min}}F(Q) \sin(Qr) \mathrm{d} Q \\ &= \mathpzc{f}_u(r) \end{split} \end{equation} and therefore \begin{equation} \begin{split} \frac{2}{\pi} \int_0^{Q_{min}}F(Q) \sin(Qr) \mathrm{d} Q = 4\pi \rho_0 r\gamma_0(r).\\ \label{eq:fsasgamma} \end{split} \end{equation} We are now in a position to understand in detail the nature of the measured PDF $\mathpzc{f}(r;Q_{min})$. Substituting \eq{fsasgamma} into \eq{sasgofr} we get \begin{equation} \label{eq:gofrgamma} \mathpzc{f}(r;Q_{min})=4\pi r \rho (r) - 4\pi r \rho_0 \gamma_0(r). \end{equation} This is similar to the definition of the PDF from \eq{grFromRhoBulk}, except that $\gamma_0(r)$ appears in the sloping baseline term. \section{Calculating $\mathpzc{f}(r;Q_{min})$ from models} \label{sec:methods} We can now consider the calculation of measured PDFs using \eq{gofrgamma} in a number of interesting limits. \subsection{Calculating in real-space for bulk crystals} \label{sec:calcBulk} In the case of bulk crystals, the region of interest in the PDF is usually $r \ll D$, $D$ being the smallest dimension of the crystal. In this region, $\gamma_0(r) \approx 1$. 
Thus, \begin{equation} \begin{split} \label{eq:frbulk} \mathpzc{f}(r;Q_{min})&=G(r)\\ &= 4\pi r (\rho_{bulk} (r) - \rho_0), \end{split} \end{equation} which is the familiar definition of $G(r)$ in \eq{grFromRhoBulk}. The pair density function, $\rho_{bulk} (r)$, is calculated from a model with periodic boundary conditions \cite{billi;b;lsfd98,proff;jac99}, or from a box of atoms that is much larger in extent than the range of $r$ of interest \cite{mcgre;ms88}, using \eq{rho}. The average number density $\rho_0$ is given by the number of atoms per unit volume, which in the case of crystals is the number of atoms in the unit cell divided by the unit cell volume. Two approaches are typically taken to account for thermal and zero-point motion of atoms. One approach is to assume that these motions are well approximated by a Gaussian probability distribution, in which case the delta-functions may be convoluted by Gaussians of finite width to represent the motion. If the motion is anisotropic, Gaussian distributions with different widths in different directions may be used. Because the function being convoluted is a delta-function, from a practical perspective \eq{frbulk} is simply modified so that a Gaussian of the appropriate width is added instead of a delta-function~\cite{proff;jac99,thorp;b;fstpbas02}. In models with many thousands of atoms, which is sometimes the case for reverse Monte Carlo methods, the probability distributions can be built up from the static ensemble itself and no convolution is carried out. In this case no presumption of Gaussian dynamics is made. In practice the Gaussian approximation works very well in most cases, and deviations from Gaussian behavior can be accounted for by introducing disorder in the models, which is a convenient way of separating the harmonic and non-harmonic contributions to the structure. The effects of correlated dynamics are accounted for using an $r$-dependent Gaussian broadening~\cite{jeong;jpc99,thorp;b;fstpbas02,jeong;prb03}. A convolution is also often carried out to account for the termination effects of the Fourier transform. For example, if the data are simply terminated at $Q_{max}$, as is often the case when data are available to high-$Q$, the model PDF must be convoluted with a sinc function~\cite{egami;b;utbp03}. From Eqs.~\ref{eq:fofr} and~\ref{eq:frbulk}, it is clear that the function that must be so convoluted is $\mathpzc{f}(r;Q_{min})$. This may be done with an integral directly in real-space, but it is often quicker to use a fast Fourier transform and do the convolution as a product in reciprocal space, taking advantage of the convolution theorem. Note that, because of the sloping baseline in $\mathpzc{f}(r;Q_{min})$, the two convolutions (thermal motion and termination effects) are applied at different times during the PDF calculation. The thermal convolution is applied to $\rho_{bulk}(r)$ (\eq{rho}) and then this is converted to $G(r)$ (\eq{grFromRhoBulk}), to which is applied the termination convolution. Modeling programs~\cite{proff;jac99,farro;jpcm07} account for this correctly, but when individual peaks are fit in real-space to extract peak positions and intensities, this is often overlooked, though it seems reasonable to ignore the termination effects in most cases. \subsection{Calculating in real-space for nanoparticles modeled as attenuated bulk crystals} In this case $\rho_{bulk} (r)$ is determined using a model of a bulk structure as described in Section~\ref{sec:calcBulk}.
The pair density, $\rho (r)$, in \eq{gofrgamma} is the function for the nanoparticle, which is approximated as $\gamma_0 (r)\rho_{bulk} (r)$ \cite{guini;b;xdic63}. Thus, \begin{equation} \label{eq:grnanofrombulk} \begin{split} \mathpzc{f}(r;Q_{min})&= 4\pi r \gamma_0 (r)(\rho_{bulk} (r) - \rho_0). \end{split} \end{equation} This approach has been implemented in the PDFgui modeling software \cite{farro;jpcm07} and used successfully on rather well ordered CdSe nanocrystals \cite{masad;prb07}. The main shortcoming is that effects that cannot be incorporated in the average structure, such as surface relaxations or core-shell inhomogeneities, cannot be modeled. If there is a distribution of nanoparticle sizes and shapes, the characteristic function, $\gamma_0(r)$, can be replaced with an appropriately averaged characteristic function \begin{equation} \label{eq:gammaensemble} \gamma(r)= \int \gamma_0(r;R_1,R_2,\ldots) p(R_1,R_2,\ldots) \mathrm{d} R_1\mathrm{d} R_2 \ldots\,. \end{equation} Here, $p(R_1,R_2,\ldots)$ is the normalized distribution of nanoparticle shapes parameterized by $R_1,\ R_2,\ldots$. For example, for spherical nanoparticles of radius $R$, $p(R_1,R_2,\ldots)=p(R)$, the distribution of nanoparticle radii. Finally, we replace \eq{grnanofrombulk} with \begin{equation} \label{eq:PDFnano} \begin{split} \mathpzc{f}(r;Q_{min}) &= 4\pi r \gamma (r)(\rho_{bulk} (r) - \rho_0). \end{split} \end{equation} Great care should be taken to ensure that the result is unique when refining a number of nanoparticle morphology parameters beyond one or two. \subsection{Calculating as the Fourier transform of the properly normalized Debye Function} This approach has been successfully used by a number of authors \cite{zhang;n03,cerve;jcc06}. The $F(Q)$ function is evaluated using \eq{fqdebye} and then Fourier transformed to obtain the desired real-space function. To account for thermal and zero-point motion in reciprocal-space calculations, \eq{fqdebye} is replaced with a version that includes Debye-Waller effects, \begin{equation} \label{eq:fqdebyewaller} \begin{split} F(Q) &= \frac{1}{N\langle f\rangle^2}\sum_{i\neq j} f_j^*f_i \left(e^{- \frac{1}{2} \sigma_{ij}^2 Q^2}\right)\frac{ \sin (Qr_{ij})}{r_{ij}}. \end{split} \end{equation} Here, $\sigma_{ij}^2$ is the correlated broadening factor for the atom pair, which is the real-space Gaussian width discussed in the case of bulk crystals. As we show here, for a quantitative comparison with measured data, care must be taken with the Fourier transform so that it is carried out over the same range of $Q$ as the experiment. The main drawback of the reciprocal-space approach is that it can be very slow compared to direct real-space calculation due to the long-range extent of the signal from each pair. It is generally preferred for smaller systems. However, due to recent algorithmic advances, the calculation of the Debye equation for larger systems can be greatly accelerated under certain circumstances~\cite{cerve;jcc06}. Using this method, the termination effects coming from the Fourier transform are implicitly included provided the Fourier transforms to obtain the model and data PDFs are terminated with the same $Q_{min}$ and $Q_{max}$ values. \subsection{Calculating in real-space from discrete nanoparticle models} In this case, \eq{gofrgamma} is used directly, where $\rho (r)$ is calculated from a finite model of the discrete nanoparticle using \eq{rho}. The difficulty arises in determining a correct form for the baseline $-4\pi r\rho_0\gamma_0 (r)$.
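As a concrete point of reference for the discussion that follows, in the idealized case of a spherical particle of diameter $D$ the characteristic function takes the familiar closed form $\gamma_0(r) = 1 - \tfrac{3}{2}\tfrac{r}{D} + \tfrac{1}{2}(\tfrac{r}{D})^3$ for $r\le D$ (and zero beyond), so the baseline can be written down directly. The short sketch below (Python; the diameter and number density are placeholder values) evaluates this baseline and checks the normalization $\int\gamma_0(r)\,4\pi r^2\,\mathrm{d} r = V$ implied by \eq{gammaprops}.
\begin{verbatim}
import numpy as np

def gamma0_sphere(r, D):
    """Characteristic function (shape autocorrelation) of a solid sphere of diameter D."""
    x = np.minimum(r / D, 1.0)
    return np.where(r < D, 1.0 - 1.5 * x + 0.5 * x**3, 0.0)

D    = 50.0      # placeholder particle diameter (angstrom)
rho0 = 0.05      # placeholder average number density (atoms per cubic angstrom)
r    = np.linspace(1e-3, 1.2 * D, 4000)

baseline = -4.0 * np.pi * r * rho0 * gamma0_sphere(r, D)   # baseline term of Eq. (gofrgamma)

# numerical check of the normalization: the integral of gamma0 over all space equals V
V_num   = np.trapz(gamma0_sphere(r, D) * 4.0 * np.pi * r**2, r)
V_exact = np.pi * D**3 / 6.0
print(V_num / V_exact)    # close to 1
\end{verbatim}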
Up until now, the shape of the baseline has been approximated using expansions of \emph{ad hoc} mathematical functions~\cite{korsu;jac05,neder;jpcm05,korsu;jac07,neder;pssc07}. This is successful at approximating the behavior of the baseline. However, in this work we derive the explicit form of the baseline shape in terms of the characteristic function of the nanoparticle, the autocorrelation function of the nanoparticle shape. This suggests a number of approaches to calculate the PDF baseline in a more physical way. If we have accurate small angle scattering data from the samples, from \eq{gofrgamma} we see that we can compute the PDF baseline from the measured SAS via a Fourier transform. However, care must be exercised as the derivation assumes that the sample is made up of discrete nanoparticles. In general, clusters and aggregates of nanoparticles will form, and small angle scattering signals from these structures on different length-scales will be present and must be separated. Also, scattering density fluctuations of any sort in the sample will affect the SAS signal, as discussed elsewhere \cite{cargi;jac71}. None of these effects need be explicitly considered if the SAS signal is not used in the PDF definition, as in \eq{gofrgamma}, though an alternative method is then required to determine the baseline. The inclusion of SAS data has the potential to add significant value to any refinement of nanoparticle models from the PDF. Both small and large angle scattering contain information about the shape and size of nanoparticles, but this information is nearly decoupled from the internal nanoparticle structure in the small angle scattering. This same information is in the PDF, but it can be obscured by structural features, such as when the PDF prematurely attenuates due to a complex or amorphous surface structure. In this case the nanoparticle size obtained from SAS will be larger than the apparent nanoparticle size obtained from the PDF, which reflects the size of the coherent core structure. Even without the inclusion of SAS data into PDF refinements, we can learn much from SAS analysis techniques. PDF nanoparticle refinements usually start with a simple model that includes atomic positions restricted by a shape. Therefore, PDF analysis can benefit from the various {\it ab initio} methods~\cite{sverg;aca91,chaco;bpj98} for determining the shape of a scatterer from the SAS. Without the use of the small angle intensity for determining $\gamma_0 (r)$, we can consider approaches that determine it self-consistently from the model: since it is the autocorrelation of the particle shape, it is directly available from the model itself by determining a ``shrink-wrapping'' of the atomistic model. Conversely, when the internal structure is well known, but the size and shape distribution of nanoparticles is not, the characteristic function can be parameterized and refined to obtain the approximate nanoparticle dimensions, as was done in \cite{masad;prb07}. \section{The extent of small angle scattering} \label{sec:sasft} We have considered the two asymptotic situations here of including or excluding all the SAS. If the SAS is retained in the intensity that is Fourier transformed, the resulting real-space function is $4\pi r\rho(r)=R(r)/r$ (\eq{frdebye}). Excluding it all results in $G(r)$ (\eq{gofrgamma}). We now consider the possibility that some, but not all, of the small angle scattering is included in the Fourier transform.
This might occur in the case of very small nanoparticles, for example, when the SAS extends to wider angles. Here we estimate the circumstances under which a significant amount of small angle intensity will appear in a wide angle PDF experiment for the case of a sphere of uniform density. The scattering intensity is given by~\cite{rayle;prsl14} \begin{equation} \label{eq:firstpeak} I(Q) \propto \frac{9}{(QR)^6} \left[\sin(QR) - QR\cos(QR)\right]^2, \end{equation} where $R$ is the radius of the sphere. By integrating this equation from 0 to $Q_{min}$ and dividing by the total integrated intensity, we get the fraction of the small angle intensity that lies below $Q_{min}$; the fraction remaining above $Q_{min}$ for a given nanoparticle size is then~\cite{rayle;prsl14} \begin{equation} \begin{split} \label{eq:integratedSphereSAS} i(x) &= 1- \frac{1}{2\pi x^5} \left[ \right. (2x^4-x^2+3)\cos(2x) \\ &+ x(x^2+6)\sin(2x) + 4x^5\mathrm{Si}(2x) - (5x^2+3) \left. \right]. \end{split} \end{equation} Here, $x = Q_{min}R$ and $\mathrm{Si}(x)$ represents the sine integral, $\mathrm{Si}(x) = \int_0^x\frac{\sin(x')}{x'}\mathrm{d} x'$. For a very small nanoparticle of radius $5 {\mathrm \AA}$, we see that $i(5) < 0.01$, corresponding to $Q_{min} \approx 1 {\mathrm \AA}^{-1}$, which is typical for a RAPDF~\cite{chupa;jac03} experiment. Thus, for even quite small nanoparticles, practically all small angle intensity is below $Q_{min}=1$\AA$^{-1}$ and \eq{gofrgamma} is appropriate. However, care should be taken not to extend $Q_{min}$ too low in $Q$ in a measurement of a nanoparticulate system. In the few-atom limit, such as the case of discrete small molecules, the small and wide angle scattering are not cleanly separated. To produce a complete real-space signal, one can approximate the small angle scattering from a candidate structure model. This approach is commonly used in the study of small molecules in the gas phase~\cite{hargi;b;saogped88}. The Fourier transform of the estimated scattering approximates $R(r)/r$, the nominal ``experimental'' or ``modified'' RDF. An equivalent method for obtaining the modified RDF from wide angle scattering alone is to add a baseline estimated from a model structure to $\mathpzc{f}(r;Q_{min})$ \cite{ruan;nanol07}. This approach has been successful for calculating $R(r)/r$ for many-atom nanoparticles. \section{Summary} \label{sec:summary} The PDF is a valuable tool for identifying the form and the interior composition of nanoscale materials. Whereas the oscillating component of the PDF gives information about the interatomic distances within the material, the PDF baseline is a function of the characteristic function, a measure of nanoparticle shape which has its origin in the SAS that is usually disregarded in a powder diffraction experiment. This characteristic function goes unnoticed in macroscopic particles, where the PDF is observed at distances that are much smaller than the particle diameter. For nanoparticles, the PDF baseline, and therefore the characteristic function, cannot be disregarded. We have presented a full derivation of the PDF equation taking into account the missing SAS and have reviewed different methods for calculating the PDF for nanoparticles. Given the relationship between the PDF and SAS equations, there is potential benefit in incorporating SAS data and analysis methods into PDF studies. \section{Acknowledgments} The authors would like to thank Phil Duxbury and Pavol Juh\'as for careful proofreading and discussion of this manuscript.
The authors acknowledge enlightening conversations with Matteo Leoni, Reinhard Neder, Thomas Proffen, Chong-Yu Ruan, Paolo Scardi and Mike Thorpe. This work was supported by the US National Science Foundation through Grant DMR-0703940. \bibliographystyle{nar}
\section{Introduction} Quantum key distribution (QKD) is a technology that allows establishing provably secure keys between legitimate parties~\cite{BB84,Gisin2002,Scarani2009,Lo2014,Lo2016}. The QKD security is based on the fundamental laws of quantum physics, so QKD remains secure against any unforeseen developments, such as quantum computing~\cite{Shor1997}. In recent decades the QKD technology has attracted a great deal of interest, which makes it one of the most widely studied research fields in quantum information technologies. Provably secure commercial QKD systems are now available. At the same time, the deployment of QKD systems in real-world conditions faces several challenges~\cite{Lo2016}, which are related to the limitation in distance and secret key generation rates as well as the high cost of such devices. One of the most prominent ways to overcome the distance limitations of QKD systems, which is essentially related to optical losses, is to develop QKD networks~\cite{Pan2018}. QKD networks allow increasing the distance for key distribution and making this infrastructure available for multiple users. Several QKD networks have been deployed around the globe~\cite{Pan2018,Elliott2004,Yeh2005,Peev2009,Stucki2011,Pan2009,Pan2010,Han2010,Zeilinger2011,Zhang2016,Kiktenko2017,Kiktenko2018}. These QKD networks are based on the so-called trusted nodes paradigm. Trusted nodes allow distributing a pair of symmetric keys between two nodes that have no direct link, using the hop-by-hop approach and the one-time pad (OTP) encryption. Building QKD networks without trusted nodes requires quantum repeaters~\cite{Zoller1998}, which are now under active research~\cite{Lukin2020,Gisin2020}, but not yet ready for industrial use. Thus, the problem of optimizing the deployment of QKD networks is of significant importance~\cite{Lutkenhaus2009,Caleffi2017,Su2017,Mosca2018}. \begin{figure}[t!]\centering \includegraphics[width=0.37\textwidth]{topo_star_v2} \put(-210,60){(a)} \\ \vspace{5mm} \includegraphics[width=0.4\textwidth]{topo_line} \put(-210,60){(b)} \caption{\footnotesize Topologies of QKD networks with corresponding optical link lengths between the nodes and the switch, which is controlled by the managing Bob\,1. In (a) the star topology is shown. In (b) the backbone topology is illustrated.} \label{fig:topology} \end{figure} One of the ways of overcoming existing challenges in QKD network development is to use optical switches, which have been tested in several QKD networks~\cite{Toliver2003,Tang2006}. Optical switches allow sharing the resources of the devices in the network, which makes it possible to reduce the number of devices. The idea behind using switches is that the optical channels of the existing telecommunication structure are very heterogeneous in terms of losses, so the key generation rate varies significantly in different segments. Therefore, at least in the case of the backbone quantum network configuration, it makes no sense to organize continuous key generation in all sections: the key generation rate is limited by the slowest link. Thus, the use of optical switches in low-loss segments may help to reduce the overall cost of a quantum network significantly, while the secret key generation rate remains reasonably high. The use of optical switches in QKD networks of various topologies requires further optimization with respect to the session time, which has to be shared between the various parties of the network.
Therefore, the optimization problem of minimizing the cost of the network deployment without a significant reduction of the key generation rate appears to be crucial. In this work, we analyze an architecture of QKD networks that is based on using optical switches to reduce the number of QKD receiver devices, which use single-photon detectors (SPDs). We describe the corresponding modification of the QKD network protocol. We use a realistic model of the performance of QKD devices in realistic experimental conditions. We present a general framework for optimizing switch-based backbone QKD networks. As an example, we provide estimations for a network link between Moscow and Udomlya of a total length of 670\,km consisting of 8 nodes, and show that the switch-based architecture allows achieving significant resource savings of up to 28\%, while the throughput is reduced by 8\% only. \begin{figure}[]\centering \includegraphics[height=0.5\textheight]{block-scheme_2} \caption{\footnotesize Timing diagram of the QKD realization between four nodes in the star-topology network.} \label{fig:block-scheme_2} \end{figure} Our work is organized as follows. In Sec.~\ref{sec:switch}, we describe a switch-based QKD setup. We provide an analysis of its design and present a model for calculating key rates, which is in good agreement with experimental data. In Sec.~\ref{sec:optimization}, we consider a general approach for configuration optimization of switch-based backbone QKD networks. In Sec.~\ref{sec:Landau}, we use our model of the QKD network to optimize the performance of the backbone QKD network between Moscow and Udomlya. We summarize the main results of our work and conclude in Sec.~\ref{sec:conclusion}. \section{Switch-based QKD networks}\label{sec:switch} Here we briefly describe the extraction of the parameters of the QKD setup, which are further needed for the optimization of the network deployment. For this purpose, we develop a model of the performance of realistic QKD devices and compare the predictions of the model with experimental data. We then apply the developed model for estimating and optimizing the performance of a realistic QKD network, using a standard approach for the deployment of (switch-free) backbone QKD networks as a reference. \subsection{Switch-based quantum networks} We use optical switches for deploying QKD networks on the basis of QKD setups that operate using the plug\&play scheme of the BB84 protocol (without decoy states; the scheme is described in Appendix~\ref{sec:app1}). As usual, we refer to the devices where the preparation and measurement of qubits are performed as Alice and Bob modules (or Alice and Bob, for short), respectively. We test two elementary configurations of QKD networks: star and backbone topologies (see Fig.~\ref{fig:topology}). In the first case, all the optical channels are loaded continuously, and the optical switch is used to commute pairs, for example, Alice\,1--Bob\,1 and Alice\,1--Bob\,2 (see Fig.~\ref{fig:topology}). This configuration is suitable for the distribution of secret keys, for example, in a branched urban infrastructure. The alternative topology, the so-called backbone topology, typically serves to increase the distance of QKD. In the case of the backbone topology, optical switches can be used for optimizing the resources of the network. Both elementary configurations can be conveniently interconnected in the necessary order, which ensures scalability.
The use of QKD devices with optical switches requires modifications of the QKD network protocol. To control the switches, we need a command from a single point. Therefore, when developing a protocol for the operation of a quantum network, we selected a ``managing'' Bob; all other nodes are ``regular'' Bobs (which we refer to as Bobs) and functionally indistinguishable Alices. We use the standard commercially available optical switch Thorlabs-OSW22-2x2, which is controlled at the host level by the LabVIEW software (which is used for controlling QKD devices, see Appendix~\ref{sec:app1}). The timing diagram of the QKD realization between four nodes in the QKD network is shown in Fig.~\ref{fig:block-scheme_2}. The exchange is controlled by the ``managing'' Bob\,1, which sends the required information via TCP/IP to the ``regular'' Bob\,2. Bob sets the optical switch and the adjustment parameters (time windows, train period) and requests the corresponding Alice. Alice uses the optical switch, if present, and confirms that the connection is ready. For the connection with each Bob, Alice accumulates the generated key in a separate file. Alice also needs to change the attenuation value of the output signal, depending on the losses in the corresponding quantum channel. We note that Alice does not need to change other parameters: the value of the electric delay of the signal from the synchronization detector depends only on the length of Alice's optical storage line. Further, the QKD sessions occur according to the scheme described above until Bob\,1 switches the interacting nodes. Obviously, such a scheme can be easily scaled to more interacting nodes. We note that the commands in the network protocol can be additionally authenticated (or encrypted). \subsection{Adjusting a model for QKD device performance} \begin{table}[t!] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{link} & \multicolumn{2}{c|}{theory} & \multicolumn{2}{c|}{experiment} \\ \cline{2-5} & $R_{\rm sift}$\,[kbit/s] & QBER\,[$\%$] & $R_{\rm sift}$\,[kbit/s] & QBER\,[$\%$] \\ \hline Alice\,1--Bob\,1 & 2.2 & 2.7 & 1.8 & 2.6 \\ \hline Alice\,2--Bob\,2 & 2.7 & 0.8 & 2.3 & 1.2 \\ \hline \end{tabular} \caption{\footnotesize Comparison of the measured sifted key rate and QBER with theoretical predictions.} \label{tab:R-QBER} \end{center} \end{table} \begin{figure}[t!]\centering \includegraphics[width=0.45\textwidth]{plot_QBER_exp_A1-B1} \caption{\footnotesize Measured QBER fluctuations in time. The length of each block of the sifted key, for which the QBER is computed, is set to be 5000 bytes.} \label{fig:QBER_exp} \end{figure} In Tab.~\ref{tab:R-QBER}, we present our results for the experimental measurement of the sifted key rate and quantum bit error rate (QBER) together with the corresponding theoretical predictions from Appendix~\ref{sec:simulation} for the star topology [see Eqs.~\eqref{eq:R_sift} and \eqref{eq:QBER}]. To test and control the measured QBER over time, we split the accumulated sifted key into blocks of length 5000 bytes each, and compute the QBER for each block. Typical QBER fluctuations, which arise mainly due to the imperfect experimental environment, are shown in Fig.~\ref{fig:QBER_exp}.
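The block-wise QBER monitoring described above amounts to comparing Alice's and Bob's sifted strings over consecutive 5000-byte blocks (in a characterization run where both strings are available). A minimal sketch of this bookkeeping (Python; the bit strings below are synthetic placeholders with a 2.6\% error probability, close to that of the Alice\,1--Bob\,1 link) is the following.
\begin{verbatim}
import numpy as np

BLOCK_BITS = 8 * 5000                     # 5000-byte blocks, as in the text

def blockwise_qber(alice_bits, bob_bits):
    """QBER of consecutive blocks of the sifted key (characterization only)."""
    n_blocks = len(alice_bits) // BLOCK_BITS
    return np.array([np.mean(alice_bits[k*BLOCK_BITS:(k+1)*BLOCK_BITS] !=
                             bob_bits[k*BLOCK_BITS:(k+1)*BLOCK_BITS])
                     for k in range(n_blocks)])

# synthetic example: a 2.6% error channel
rng = np.random.default_rng(0)
alice = rng.integers(0, 2, size=50 * BLOCK_BITS)
flip = rng.random(alice.size) < 0.026
bob = np.where(flip, 1 - alice, alice)
print(blockwise_qber(alice, bob).mean())
\end{verbatim}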
\begin{table*} \begin{center} \begin{tabular}{| l | l | l |} \hline & Alice\,1--Bob\,1 & Alice\,2--Bob\,2 \\ \hline Total link length $\ell$\,[km] & 10.0 & 15.3 \\ \hline Pulse power at Bob's output $P_B$\,[W] & 1.40$\times10^{-6}$ & 0.13$\times10^{-6}$ \\ \hline Total losses in the optical channel $\alpha_{\rm opt}$\,[dB] & 2.0 & 4.0 \\ \hline Internal losses at Alice module $\alpha_A$\,[dB] & 31.7 & 26.5 \\ \hline Internal losses at Bob module $\alpha_B$\,[dB] & 7.5 & 6.1 \\ \hline Detection efficiency $\eta_{\rm det}$\,[\%] & 6 & 15 \\ \hline Dark count rate DCR\,[Hz] & 470 & 110 \\ \hline Dead time $\tau_d$\,[$\mu$s] & 25 & 10 \\ \hline Visibility $V$\,[\%] & 99.3 & 99.3 \\ \hline Afterpulse probability $p_{\rm after}$\,[\%] & 0.1 & 0.1 \\ \hline \end{tabular} \caption{\footnotesize Measured parameters of the links.} \label{tab:pars_exp} \end{center} \end{table*} Despite realistic (non-ideal) conditions, some time after turning on the setup the observed QBER stabilizes, and the fluctuations normally do not exceed 3\% for the link Alice\,1--Bob\,1. For the link Alice\,2--Bob\,2, which uses an SPD with better characteristics, the QBER is found to be about 1\%. The average values are given in Tab.~\ref{tab:R-QBER}. One can see that there is a rather good agreement between the model and experiment. Furthermore, in order to improve the generation rate, one can perform parameter optimization for the current setup by maximizing the lower bound on the secret key rate. For fixed and measured optical channel parameters and laser power, we have the following parameters to optimize: the mean photon number $\mu$, the detector efficiency $\eta_{\rm det}$, the dead time $\tau_d$, and the dark count rate (DCR). We use a finite set of points $\{\eta_{\rm det},\tau_d,{\rm DCR}\}$, extracted from the experimental data, and perform the maximization of the lower bound on the secret key rate with respect to $\mu$ as follows: \begin{equation}\label{eq:R_sec_bound-main} R_{\rm sec} = R_{\rm sift} \big\{ \kappa_1^{(l)} [1 - H(E_1^{(u)})] - f_{\rm ec} H(E_\mu) \big\}, \end{equation} where $R_{\rm sift}$ is the sifted key generation rate, $\kappa_1$ is the fraction of bits in the verified key obtained from single-photon pulses, $E_1$ and $E_\mu$ are the single-photon and overall QBERs, respectively, $f_{\rm ec}$ is the error correction efficiency (here we use $f_{\rm ec}=1.15$ \cite{Trushechkin17}), and $H(\cdot)$ is the binary Shannon entropy. The detailed discussion of Eq.~\eqref{eq:R_sec_bound-main} is presented in Appendix~\ref{sec:processing}.
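For reference, Eq.~\eqref{eq:R_sec_bound-main} is straightforward to evaluate once $R_{\rm sift}$, $\kappa_1^{(l)}$, $E_1^{(u)}$ and $E_\mu$ are available; a minimal sketch (Python; the numerical inputs in the example call are placeholders rather than measured values) is given below.
\begin{verbatim}
import numpy as np

def h2(p):
    """Binary Shannon entropy H(p)."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def secret_key_rate(R_sift, kappa1_l, E1_u, E_mu, f_ec=1.15):
    """Lower bound of Eq. (R_sec_bound-main); a non-positive value means no secure key."""
    return R_sift * (kappa1_l * (1.0 - h2(E1_u)) - f_ec * h2(E_mu))

# placeholder illustration (bit/s)
print(secret_key_rate(R_sift=2.2e3, kappa1_l=0.5, E1_u=0.04, E_mu=0.027))
\end{verbatim}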
In Fig.~\ref{fig:R_sec_opt} we illustrate $R_{\rm sec}(\mu)$ for the fixed SPD parameters listed in Tab.~\ref{tab:pars_exp} (in blue) and for the parameters optimized for each $\mu$-value (in green). From the plot it is clearly seen that for the same $\mu\simeq0.4$ a gain in rate of approximately a factor of 1.5 can be achieved with respect to the current experimental setup (red dot) by just finding an optimal set of $\{\eta_{\rm det},\tau_d,{\rm DCR}\}$. Including $\mu$ in the list of optimization variables as well, we numerically find the maximum for the following parameters: $\eta_{\rm det}=30\%$, $\tau_d=25\,\mu$s, DCR$\simeq990$\,Hz and $\mu\simeq0.64$. The corresponding maximum key rate is $R_{\rm sec}\simeq2.2$\,kbit/s. Thus, this simple example demonstrates a clear advantage of parameter optimization. We note that for our benchmark parameters in Tab.~\ref{tab:pars_exp} the realized standard BB84 protocol can be considered secure, i.e. $R_{\rm sec}>0$, only for losses $\alpha_{\rm opt}$ up to about 5(10)\,dB or, equivalently, for distances up to 25(50)\,km for the link Alice\,1(2)--Bob\,1(2). Therefore, for practical usage at distances $\ell\sim100$\,km one needs an implementation of some other QKD protocol, e.g. the commonly used BB84 with decoy states~\cite{Ma05,Trushechkin21}. \begin{figure}[h]\centering \includegraphics[width=0.45\textwidth]{plot_R-mu_opt_new2} \caption{\footnotesize Lower bound on the secret key generation rate for the BB84 protocol without decoy states as a function of the mean photon number per pulse with non-optimized (in blue) and optimized (in green) single-photon detector parameters $\{\eta_{\rm det},\tau_d,{\rm DCR}\}$. The red dot represents our current experimental setup with parameters given in Tab.~\ref{tab:pars_exp}.} \label{fig:R_sec_opt} \end{figure} \section{Optimizing switch-based backbone QKD networks} \label{sec:optimization} In this section, we describe a general framework for optimizing configurations of switch-based backbone QKD networks. Consider a sequence of $N$ consecutive QKD links with two edge nodes and $N-1$ trusted nodes. Let $R^{(i)}_{\rm sec}$ with $i=1,2,\ldots, N$ be the secret key generation rates achievable at each link [see Fig.~\ref{fig:backbone_example}(a)]. In the case of no switches in the network, i.e., where all trusted nodes are equipped with two devices (either two Alices, or two Bobs, or Alice and Bob), the resulting key generation rate between the edge nodes is given by $\min(R_{\rm sec}^{(1)},\ldots, R_{\rm sec}^{(N)})$, and thus is limited by the slowest link in the network. \begin{figure*} \centering \includegraphics[width=0.75\linewidth]{fig_backbone_example_v2.pdf} \put(-360,190){(a)} \put(-275,110){(b)} \caption{In (a), an example of a switch-based backbone network with the splitting of links into clusters is presented. The ``switch-based'' nodes and ``full'' nodes with two devices are denoted by S and F, respectively. In (b), the operation of a subgroup in the odd and even modes is shown.} \label{fig:backbone_example} \end{figure*} Then, let us consider the case where some trusted nodes are equipped with switches. We refer to these nodes as ``switch-based'' and denote them by ${\bf S}$. In contrast, we call trusted nodes with two devices ``full'', and denote them by ${\bf F}$. Thus, the configuration of the network is defined by a string ${\bf X}=({\bf X}_1,{\bf X}_2,\ldots,{\bf X}_{N-1})$, where ${\bf X}_i\in\{{\bf S}, {\bf F}\}$ describes the type of each of the $N-1$ trusted nodes.
To calculate the rate for a switch-based backbone network, we split the sequence of all rates $(R_{\rm sec}^{(1)}, \ldots, R_{\rm sec}^{(N)})$ into $m$ clusters \begin{multline} (R_{\rm sec}^{(1)}, \ldots, R_{\rm sec}^{(N_{1})}), (R_{\rm sec}^{(N_{1}+1)}, \ldots, R_{\rm sec}^{(N_{2})}), \ldots,\\ (R_{\rm sec}^{(N_{m}+1)},\ldots, R_{\rm sec}^{(N)}) \end{multline} in such a way that $0<N_1<N_2<\ldots<N_m<N$, all trusted nodes connecting links within each cluster are switch-based, and all trusted nodes connecting links between adjacent clusters are full [see an example of splitting links into clusters in Fig.~\ref{fig:backbone_example}(a)]. The basic idea behind the splitting is that since clusters are separated by full trusted nodes, switch-based nodes inside each cluster are able to operate their switches in an independent manner. The resulting secret key generation rate between the edge nodes of the network can then be obtained as the minimal key generation rate among all clusters. Consider a cluster of links with rates $(R_{\rm sec}^{(i_{1})},\ldots,R_{\rm sec}^{(i_{2})})$. Since all the intermediate nodes within the cluster are switch-based, to achieve the maximal key generation rate through the whole cluster, at each time moment it is preferable to turn on either all odd or all even links [see Fig.~\ref{fig:backbone_example}(b)]. Assuming that the time for switching is negligible, the achievable rate is as follows: \begin{equation} \begin{aligned} &\min(p R_{\rm odd}, (1-p) R_{\rm even}),\\ &R_{\rm odd}=\min\limits_{\substack{i_{1}\leq i \leq i_{2}\\i:{\rm odd}}}R_{\rm sec}^{(i)},\\ & R_{\rm even}=\min\limits_{\substack{i_{1}\leq i \leq i_{2}\\i:{\rm even}}}R_{\rm sec}^{(i)}, \end{aligned} \end{equation} where $p\in(0,1)$ is a parameter determining the fraction of time spent in the ``odd'' and ``even'' modes. One can see that the maximal possible rate is given by \begin{equation}\label{eq:subgroup} R_{\rm subgroup}=\frac{R_{\rm odd}R_{\rm even}}{R_{\rm odd}+R_{\rm even}}, \end{equation} achieved for $p$ such that $pR_{\rm odd}=(1-p)R_{\rm even}$. The resulting (effective) key generation rate in the network takes the following form: \begin{equation} \label{eq:backbone-rate} R_{\rm eff}({\bf X})=\min\left(\left\{R_{\rm subgroup}^{(i)}\right\}_{i=1}^m\right), \end{equation} where $R_{\rm subgroup}^{(i)}$ is the maximal rate of the $i$th subgroup calculated via Eq.~\eqref{eq:subgroup}, and the splitting of rates into subgroups is determined by the configuration ${\bf X}$. Using Eq.~\eqref{eq:backbone-rate} one can consider finding the best configuration with a given number of switch-based nodes. Since in practical scenarios the total number of nodes usually does not exceed several dozen, this problem can be solved by an exhaustive search, trying all possible configurations ${\bf X}$ and taking the maximum over all $R_{\rm eff}({\bf X})$. The next problem one can consider is an optimization over the cost of equipment related to a given configuration ${\bf X}$. Let us introduce the following notations: let ${\bf A}$ (${\bf B}$) denote a node with a single Alice (Bob) device, ${\bf SA}$ (${\bf SB}$) denote a switch-based node with an Alice (Bob) device, and ${\bf AA}$, ${\bf AB}$, ${\bf BA}$, ${\bf BB}$ denote a full node with two devices. One can see that the same configuration ${\bf X}$ can be implemented in several ways: e.g., ${\bf X}=({\bf S},{\bf F})$ can be realized as {\bf A}-{\bf SB}-{\bf AB}-{\bf A}, {\bf B}-{\bf SA}-{\bf BA}-{\bf B}, {\bf B}-{\bf SA}-{\bf BB}-{\bf A}, and so on.
Though all implementations provide the same key generation rate, their cost can be different because of the different number of Bob and Alice modules in the network (the costs of the Alice and Bob modules can be very different in practice). In Ref.~\cite{Python-code} we provide an optimization script (available upon reasonable request), which allows one to obtain the cheapest implementation of a backbone network with a maximal key generation rate given a sequence of rates, the number of switch-based nodes, and the cost of each type of node implementation (first, the configuration(s) ${\bf X}$ providing the maximal rate are obtained, and then the cheapest implementation(s) are identified). \section{Optimization of the QKD network deployment for a realistic network}\label{sec:Landau} Here we incorporate our results on the analysis and simulation of QKD devices with switches to optimize QKD network deployments. Below we consider an example of a QKD network (a network link between Moscow and Udomlya). To estimate the order of magnitude of potential key rates for each link of the network, we use our model for the simulation of QKD devices based on the one-way BB84 protocol with two decoy states. For simplicity, we assume that all Bob modules in the network have identical pairs of SPDs with the parameters listed in Tab.~\ref{tab:pars_prom}. We choose these parameters as reference ones since they describe our currently used experimental decoy-state QKD setup and are studied in detail. Although we expect further improvement of SPD parameters, we use state-of-the-art detector parameters in this work. By setting typical order-of-magnitude values for the signal and decoy state intensities, $\mu=0.5$, $\nu_1=0.1$ and $\nu_2=0.01$, respectively, we present in Tab.~\ref{tab:Landau} the sifted/secret key rate and QBER estimations for each link of the network (for details of the $R_{\rm sec}^{(i)}$ estimation, see Appendix~\ref{sec:processing}). We note that the optimization of $\mu$, $\nu_1$ and $\nu_2$ depending on the channel length is beyond the scope of this work, and therefore we assume that the intensities are the same for each link. One can see that the channel Gorodische--Torzhok has the lowest rate and turns out to be the bottleneck of the network. For this reason, one can already conclude that it is not preferable to put switches at the Gorodische and Torzhok nodes.
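The cluster construction of Sec.~\ref{sec:optimization} is easy to evaluate for this line. The sketch below (Python) takes the per-link secret key rates of Tab.~\ref{tab:Landau} and computes the effective rate of Eq.~\eqref{eq:backbone-rate} for a given set of switch-based nodes; it is only an illustration of the procedure and not the optimization script of Ref.~\cite{Python-code}.
\begin{verbatim}
# per-link secret key rates [kbit/s] in the order Moscow -> Udomlya (Tab. Landau)
rates = [2.7, 1.4, 5.9, 2.8, 1.0, 1.5, 4.8]

def effective_rate(rates, switch_nodes):
    """Effective rate of Eq. (backbone-rate); trusted node i (1..N-1) joins links i and i+1."""
    clusters, current = [], [rates[0]]
    for i in range(1, len(rates)):
        if i in switch_nodes:          # switch-based node: link i+1 stays in the cluster
            current.append(rates[i])
        else:                          # full node: close the cluster, start a new one
            clusters.append(current)
            current = [rates[i]]
    clusters.append(current)

    def cluster_rate(c):
        if len(c) == 1:
            return c[0]
        r_odd, r_even = min(c[0::2]), min(c[1::2])   # "odd"/"even" links of the cluster
        return r_odd * r_even / (r_odd + r_even)

    return min(cluster_rate(c) for c in clusters)

# switches at Uvarovka (2), Gagarin (3) and V. Volochek (6): the rate stays at 1.0 kbit/s
print(effective_rate(rates, {2, 3, 6}))
# adding Kubinka (1): ~0.92 kbit/s; switches at all six nodes: ~0.58 kbit/s
print(effective_rate(rates, {1, 2, 3, 6}), effective_rate(rates, {1, 2, 3, 4, 5, 6}))
\end{verbatim}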
\begin{table*}[htp] \begin{center} \begin{tabular}{|l|c|} \hline Pulse repetition frequency $f$ & 312.5\,MHz \\ \hline Gate time window $\tau_{\rm gate}$ & 600\,ps \\ \hline Internal losses at Bob module $\alpha_B$ & 4\,dB \\ \hline Detection efficiency $\eta_{\rm det}$ & 10\,\% \\ \hline Dark count rate DCR & 300\,Hz \\ \hline Dead time $\tau_d$ & 5\,$\mu$s \\ \hline Visibility $V$ & 98\,\% \\ \hline Afterpulse probability $p_{\rm after}$ & 3\,\% \\ \hline \end{tabular} \caption{Parameters of single-photon detectors used for modeling QKD devices.} \label{tab:pars_prom} \end{center} \end{table*} \begin{table*}[htp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Link & Length\,[km] & Optical losses\,[dB] & Sifted key rate\,[kbit/s] & Secret key rate ($R_{\rm sec}^{(i)}$)\,[kbit/s] & QBER\,[\%] \\ \hline Moscow--Kubinka & 86.8 & 19.0 & 16.8 & 2.7 & 4.1 \\ \hline Kubinka--Uvarovka & 115.0 & 22.2 & 8.8 & 1.4 & 4.2 \\ \hline Uvarovka--Gagarin & 74.0 & 14.6 & 35.7 & 5.9 & 4.0 \\ \hline Gagarin--P.\,Gorodische & 98.7 & 18.9 & 17.1 & 2.8 & 4.1 \\ \hline P.\,Gorodische--Torzhok & 125.8 & 23.6 & 6.6 & 1.0 & 4.2 \\ \hline Torzhok--V.\,Volochek & 114.4 & 21.8 & 9.6 & 1.5 & 4.1 \\ \hline V.\,Volochek--Udomlya & 82.5 & 15.9 & 29.2 & 4.8 & 4.0 \\ \hline \end{tabular} \caption{Model parameters for the backbone QKD network between Moscow and Udomlya. The BB84 protocol with signal intensity $\mu=0.5$ and two decoy states with intensities $\nu_1=0.1$ and $\nu_2=0.01$ is considered. Single-photon detector parameters are taken from Tab.~\ref{tab:pars_prom}.} \label{tab:Landau} \end{center} \end{table*} In order to demonstrate how the key generation rates are affected by introducing switches in the network, we first consider the case of a single switch. According to Eq.~\eqref{eq:subgroup}, introducing a switch is equivalent to replacing a pair of adjacent links with secret key generation rates $R_{\rm sec}^{(i)}$ and $R_{\rm sec}^{(i+1)}$ by a single effective link with rate $R_{\rm sec}^{(i)}R_{\rm sec}^{(i+1)}/(R_{\rm sec}^{(i)}+R_{\rm sec}^{(i+1)})$. In Fig.~\ref{fig:R_Landau_chart}, we present a comparison of the original key generation rates of all links and the effective key generation rates that appear after introducing switches at different nodes. Since the resulting rate between the edge nodes is given by the minimum of all rates, putting switches in Uvarovka, Gagarin, or V. Volochek does not decrease the rate, though it reduces the number of employed devices. \begin{figure*}[t!]\centering \includegraphics[width=0.6\textwidth]{plot_R_Landau_2} \caption{\footnotesize Secret key generation rates from Tab.~\ref{tab:Landau} for each link (in blue). Wide coloured bars represent the rate of adjacent links with an intermediate switch. The black dashed line marks the minimum rate in the network.} \label{fig:R_Landau_chart} \end{figure*} It turns out that the resulting effective key rate of 1\,kbit/s does not decrease even in the case of putting switches at all three mentioned nodes (see Tab.~\ref{tab:Landau_2}). With three switches, the total number of Alice and Bob modules equals 11, while for the classic backbone line 14 modules are required. This trivial comparison demonstrates a clear economic benefit of the proposed switch line: if Alice and Bob devices had the same cost, na\"{i}vely the benefit would be 21\%. We also note that the configuration of the network with switches in Uvarovka, Gagarin, and V. Volochek can be realized with different numbers of Alice and Bob modules.
The configurations \begin{equation} \begin{aligned} &{\bf A}-{\bf BA}-{\bf SB}-{\bf SA}-{\bf BA}-{\bf BA}-{\bf SB}-{\bf A},\\ &{\bf A}-{\bf BA}-{\bf SB}-{\bf SA}-{\bf BA}-{\bf BB}-{\bf SA}-{\bf B} \end{aligned} \end{equation} have the same number of switches and provide the same rate; however, the first configuration has fewer Bob modules than the second one. Taking into account that the Bob module contains more complex hardware components, in particular the SPD, which is not only a sophisticated but also a rather sensitive and expensive device, having fewer Bob modules in the network apparently reduces the maintenance cost. A further increase in the number of switches leads to a decrease of the effective key generation rate as well as of the total number of modules and the corresponding equipment cost (see Tab.~\ref{tab:Landau_2}). Note that using four switches, compared to the case of no switches, reduces the number of modules from 14 to 10 ($\approx 28\,\%$ decrease), while decreasing the effective key generation rate by only 8\,\%. Putting switches in all nodes decreases the key generation rate down to 0.58\,kbit/s and reduces the number of employed modules down to 8 units. Thus, we observe a clear tradeoff between the effective key generation rate and the implementation cost. The choice of the number of switches can be made based on the requirements posed on the network by the surrounding infrastructure. \begin{table*}[htp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline & $R_{\rm eff}$\,[kbit/s] & $N_A$ & $N_B$ & $N_{\rm tot}$ & Variant of implementation\\ \hline without switches (backbone) & 1.0 & 7 & 7 & 14 & {\bf A}-{\bf BA}-{\bf BA}-{\bf BA}-{\bf BA}-{\bf BA}-{\bf BA}-{\bf B}\\ \hline switches in Uvarovka, Gagarin \& V.\,Volochek & 1.0 & 6 & 5 & 11 & {\bf A}-{\bf BA}-{\bf SB}-{\bf SA}-{\bf BA}-{\bf BA}-{\bf SB}-{\bf A}\\ \hline +switch in Kubinka & 0.92 & 6 & 4 & 10 & {\bf A}-{\bf SB}-{\bf SA}-{\bf SB}-{\bf AA}-{\bf BA}-{\bf SB}-{\bf A} \\ \hline +switch in P.\,Gorodische & 0.6 & 5 & 4 & 9 & {\bf A}-{\bf SB}-{\bf SA}-{\bf SB}-{\bf AB}-{\bf SA}-{\bf SB}-{\bf A} \\ \hline +switch in Torzhok & 0.58 & 4 & 4 & 8 & {\bf A}-{\bf SB}-{\bf SA}-{\bf SB}-{\bf SA}-{\bf SB}-{\bf SA}-{\bf B}\\ \hline \end{tabular} \caption{\footnotesize Effective key rate $R_{\rm eff}$ in the backbone network, required numbers of Alice ($N_{A}$) and Bob ($N_{B}$) modules, total number of modules ($N_{\rm tot}$), and variants of implementation assuming that the Bob module is more expensive than the Alice one.} \label{tab:Landau_2} \end{center} \end{table*} \section{Conclusion and outlook}\label{sec:conclusion} In the present work, we investigated QKD network deployment based on intermediate nodes with optical switches. As a proof of principle, we realized in practice a laboratory plug\&play BB84 QKD network prototype with two Alice and two Bob nodes connected via an optical switch. Good agreement between the experimental key rates and our theoretical model estimations was found. We then applied our results to make predictions for a QKD network between Moscow and Udomlya, which has 8 nodes and a total length of 670\,km. We investigated possible line configurations with various placements of Alice/Bob/switch devices and proposed several configurations that reduce the maintenance cost of the entire system without an essential loss in the overall rate. In particular, it was demonstrated that one could achieve cost savings of up to 28\,\% while reducing throughput by 8\,\%.
\section*{Acknowledgments} This work is supported by the Russian Science Foundation under project 17-71-20146. We thank PJSC Rostelecom for providing data on the network between Moscow and Udomlya.
\section{Introduction}\label{sec:intro} Long-duration ($t\gtrsim 2~\mathrm{s}$) Gamma Ray Bursts, simply indicated as GRBs throughout this paper, are widely considered powerful tracers of the star formation history of the Universe \citep{bloom2002,chary2007,savaglio09,robertson12}. In fact, their progenitors are established to be short-lived, massive stars \citep{galama1998,hjorth2003,woosley2006,berger2011} and the GRBs and their afterglows are sufficiently luminous to be detected as early as $\sim 500~\mathrm{Myr}$ after the Big Bang ($z\sim 9.4$, see \citealt{salvaterra09,tanvir09,cucchiara11}). Furthermore, the detectability of GRBs is not limited by how faint their host galaxies might be, making them potential tracers of the \emph{total} star formation rate at high-redshift, unlike surveys of Lyman-Break Galaxies (LBGs), which trace star formation only in systems above the survey detection limit \citep{trenti2013}. This advantage of GRBs over LBGs as star formation tracers is highlighted by the non-detection of the majority of GRB host galaxies at $z\gtrsim 5$, which indicates that even the deepest observations of LBGs with the Hubble Space Telescope (HST) are only capturing the tip of the iceberg of star formation during the first billion years after the Big Bang \citep{trenti12,tanvir12}. However, the use of GRBs as star formation tracers presents significant challenges as well. First, the sample size of GRBs at high-$z$ is still very small, introducing significant Poisson noise. For example, there are only $\sim 40$ GRBs at $z\gtrsim 3.5$ and a handful at $z>6.5$\footnote{\url{http://swift.gsfc.nasa.gov/archive/grb\_table/}}, whereas more than 11000 LBGs have been identified at $z\gtrsim 3.5$, and more than 800 at $z\gtrsim 6.5$ \citep{bouwens2014}. Second, several studies suggest that the connection between GRB rate and star formation rate is likely biased by the metallicity of the GRB progenitor, with low-metallicity environments having a higher yield of GRBs. The presence of a metallicity bias has both observational and theoretical support: Several groups found that host galaxies of GRBs appear to lie below the typical mass-metallicity relation observed for star-forming galaxies \citep{fynbo2003,prochaska2004,modjaz2006, fruchter2006,thoene2007,graham2013,wang2014}, and this result fits well within the leading theoretical framework for GRB engine, the \emph{collapsar} model \citep{woosley1993,macfadyen1999}. According to this model, the GRB progenitor is a massive star whose rapidly rotating core collapses to form a black hole, while the outer envelope falls back and forms an hyperaccreting disk (if endowed with sufficient angular momentum), powering an energetic relativistic jet. Since stellar mass losses grow with increasing metallicity, and carry away precious angular momentum from the star, an enhancement of the GRB rate in low-metallicity environments is expected \citep{woosley2006,yoon06,nuza2007,perna2014}, possibly with a sharp cut-off around solar metallicity \citep{stanek2006,yoon06}. However, this scenario for GRB production is at odds with other observational findings of GRB host galaxies that have solar or even super-solar metallicity \citep{berger2007,levesque10c,savaglio12}, and alternative models to the \emph{collapsar} scenario have been proposed, such as progenitors in binaries \citep{fryer05,cantiello2007}, which are less influenced by metallicity. 
Such a complex mix of observational and theoretical results on the presence, and strength, of a metallicity bias arguably represents the most significant limiting factor in the use of GRBs as reliable tracers of star formation (\citealt{stanek2006,kewley2007,modjaz2008,jimenez13}, but see the recent paper by \citealt{hunt2014}). A deeper, and clearer, understanding of how metallicity influences the relation between GRB and star formation rate is needed to be able to account and correct for the bias. In this respect, there are two separate questions to address: \begin{itemize} \item Are GRBs more probable in low-metallicity environments compared to higher metallicity ones? \item Is there a maximum metallicity cut-off? \end{itemize} Both affect the connection between GRB and star formation, with an impact that in general varies over redshift, reflecting the evolution in the chemical enrichment of galaxies throughout cosmic history. GRB host galaxy studies are an ideal tool to understand if a metallicity bias is present, and how strong it is. As such, they have been the subject of extensive studies (we refer the reader to the comprehensive review by \citealt{levesque2014}), { but no clear consensus has been reached in the community. Some investigations point toward a clear presence of a metallicity-dependence in the production of GRBs (e.g., \citealt{boissier2013,lunnan2014}), while others conclude that GRBs appear to be unbiased tracers of star formation (e.g., \citealt{fynbo2008,kohn2015}), and some highlight that discrimination between scenarios is challenging (e.g., \citealt{pontzen2010,laskar2011}). } Ideally, spectroscopic samples are the most suited for the task { of quantifying the bias}, since they allow to establish a direct connection between GRBs and host metallicity. However, measuring the metallicity of galaxies at high redshift is challenging, and generally limited only to the brightest objects. A wider study extending to fainter hosts is possible by characterizing luminosities and stellar masses of GRB host galaxies from broad-band photometry, and then comparing sample properties against those of LBGs. Qualitatively, a GRB preference for low-metallicity environments is reflected into steeper luminosity and mass functions at the faint end compared to a scenario without bias, because of the established existence of a mass/luminosity vs. metallicity relation \citep{Panter03,Panter08,maiolino08,mannucci2010}. Similarly, the presence of a metallicity cutoff would introduce a truncation of the luminosity/mass function for the brightest and most massive hosts. Even in absence of a metallicity bias, the luminosity and mass functions of GRB hosts differ from those of LBGs because the GRB rate is proportional to the star formation rate (SFR), making the host skewed toward inclusion of systems with higher SFR (i.e. luminosity), hence resulting in a shallower luminosity function. If $\phi_{(LBG)}(L)$ is the luminosity function of LBGs, then under the basic assumption that the luminosity density and the star formation rates are proportional and that there is no metallicity bias, it follows that the GRB host luminosity function is proportional to $\phi_{(LBG)}(L)\times L$. There is, though, one second order but subtle and crucial correction that needs to be made to this relation to properly quantify how any deviation from it is connected to the metallicity bias. 
That is proper modeling of dust\footnote{For some discussion on the influence of dust on deriving physical properties of galaxies from their stellar populations see e.g. \citet{vespa}}. In fact, the GRB rate depends on the intrinsic star formation rate, hence on the intrinsic luminosity, not on the observed one. As a result, the scaling for the GRB luminosity function (again with no bias) goes as $\phi_{(GRB)}(L_{obs}) \propto \phi_{(LBG)}(L_{obs}) \times L_{int}$. Since $L_{int}/L_{obs}$ depends on the metallicity (higher dust content is typically associated with higher metallicity environments), this effect may partially mask the presence of a metallicity bias, unless proper dust treatment is taken into account in the data-model comparison. Finally, it is also important to recall that, since all the proposed GRB engines are generically associated with the death of massive stars, it is generally expected that studies which focus on luminosity functions at wavelengths that are good tracers of \emph{recent} star formation are those most suited for the purpose of understanding the connection between GRBs and star formation. In this respect, the two most natural choices are UV or far-IR. In this work, we focus on the first, motivated primarily by the availability of high quality UV luminosity functions for LBGs. To present a comprehensive analysis, we include stellar mass functions in the modeling, but the latter are not as powerful because the connection between star formation and stellar mass is not as tight as it is with UV luminosity. For example, the most massive elliptical galaxies in the local universe have little to no recent star formation, making them highly unlikely to host GRBs. To complement and augment the previous studies of the metallicity bias of GRB host galaxies, we take a novel approach, starting from a successful modeling framework that we developed to investigate how the luminosity function of LBGs evolves with redshift and how it is connected to the underlying dark matter halo mass function. In \citet{trenti10} we constructed a first link between luminosity and dark matter halo mass functions, quantitatively predicting the LBG luminosity function during the epoch of reionization. In particular, the model predicted an accelerated decrease of the luminosity density of galaxies at $z\gtrsim8$ and a steepening of the faint-end slope of the luminosity function, with both predictions recently verified thanks to the array of new Hubble observations of $z\gtrsim 8$ galaxies \citep{bradley12,schmidt2014,ellis2013,mclure13,bouwens2014}. We also applied our luminosity function model to interpret the non-detections of GRB host galaxies at $z>5$ by \citet{tanvir12}, concluding { (see \citealt{trenti12})} that the majority of star formation at $z>6$ is happening in faint systems below the current detection limit (a conclusion deriving naturally from the luminosity function steepening; { see also \citealt{salvaterra13} which supports earlier results}). More recently, we extended our modeling to achieve a comprehensive description of the LBG UV luminosity function evolution from $z\sim 0$ to $z\sim 10$, capturing at the same time the stellar mass density and specific star formation rate evolution \citep{tacchella13}. We hence applied the model to investigate the connection between GRB rate and star formation rate and how it varies with redshift and with the presence of a metallicity bias \citep{trenti2013}. 
Our key conclusions were: (1) the best model for the GRB rate is one that includes production at low metallicity primarily through the collapsar engine ($\sim 75-80\%$), with metallicity bias, combined with a second, metal-independent channel ($\sim 20-25\%$), such as a progenitor star in a binary system; and (2) the metallicity bias becomes negligible to first approximation at $z\gtrsim 4$, since the majority of star forming galaxies have low metallicity $Z\lesssim 0.1 Z_{\odot}$. While the conclusions reached were interesting and shed some new light on the problem, our study was limited by the uncertainty in the comoving GRB rate. In this work, we aim at building upon our previous framework to expand the data-model comparison to UV luminosity functions, stellar mass functions, { and metallicities} of GRB hosts, with the goal of providing a more stringent and rigorous characterization of the metallicity bias. We extend our previous modeling by taking into account other effects, such as dust obscuration, that are likely to influence the data-model comparison. In addition, by presenting predictions for the redshift evolution of GRB hosts luminosity and stellar mass function, we make testable predictions that can guide the design of the next generation of surveys aimed at characterizing their properties. The paper is organized as follows. Section~\ref{sec:model} describes our model, whose results are presented in Section~\ref{sec:results}, and then compared to the current observations in Section~\ref{sec:tough}. Section~\ref{sec:con} summarizes and concludes with an outlook for the future. Throughout the paper we use the latest $\Lambda CDM$ concordance cosmological model with parameters determined by the \emph{Planck} mission: $\Omega_{\Lambda,0}=0.685$, $\Omega_{m,0}=0.315$, $\Omega_{b,0}=0.0462$, $\sigma_8=0.828$, $n_s=0.9585$, $h=0.673$ \citep{planck}. \section{Modeling: GRB rate and host galaxy properties}\label{sec:model} Following the framework developed in \citet{trenti2013}, we base our modeling on linking star formation to the assembly of dark-matter halos. We assume that each halo converts a fraction of its total mass $\xi(M_h)$ into stars ($M_*=\xi(M_h) \times M_h$) { with a constant star formation rate} over the timescale defined by the halo-assembly time $t_{1/2}(M_h,z)$, that is the time needed to grow from $M_h/2$ to $M_h$. $\xi(M_h)$ depends on mass but not on redshift \citep{tacchella13,behroozi13}, while $t_{1/2}$ decreases both for increasing mass and redshift (e.g., see \citealt{lacey93}). The UV luminosity of the galaxies in our model is computed { via Stellar Population models}, assuming a Salpeter initial mass function from $0.1~\mathrm{M_{\sun}}$ to $100~\mathrm{M_{\sun}}$ \citep{bruzual03}. At fixed halo (and stellar) mass, higher $z$ halos form stars on a shorter timescale, hence they have higher UV luminosity { (see \citealt{tacchella13,trenti2013} for further details)}. The star formation efficiency $\xi(M_h)$ is calibrated at one reference redshift, thanks to abundance matching between the galaxy luminosity function and the dark-matter halo mass function. After such calibration, the model has no free parameters, and the evolution of the dark-matter halo mass function and of the halo assembly time fully determines the star formation rate and galaxy luminosity function at all other redshifts. 
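In other words, under the assumptions just listed the star formation rate of a halo of mass $M_h$ at redshift $z$ is simply
\begin{equation}
\dot{M}_*(M_h,z)=\frac{\xi(M_h)\,M_h}{t_{1/2}(M_h,z)},
\end{equation}
so that, at fixed halo mass, the shorter assembly times at higher redshift translate directly into higher star formation rates and hence brighter UV luminosities (this compact restatement is ours; see \citealt{tacchella13,trenti2013} for the full construction).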
As discussed in \citet{tacchella13}, this minimal model with no free parameters is remarkably successful in capturing the evolution from $z\sim 0$ to $z\sim 10$ of (1) the UV luminosity density (star formation rate); (2) the galaxy luminosity function; and (3) the stellar mass density and specific star formation rate. Like in our previous work, we calibrate $\xi(M_h)$ at $z=4$. Here, we use the \citet{sheth99} halo mass function and the latest determination of the observed UV luminosity function [$\phi(L)=\phi^*(L/L^*)^{\alpha}\exp{-(L/L^*)}$] derived by \citet{bouwens2014}: $\phi^*=1.35^{+0.22}_{-0.19}\times10^{-3}~\mathrm{Mpc^{-3}}$; $\alpha=-1.65\pm 0.03$ and $M_{AB}^{(*)}=-21.08$, where $M^{(*)}=-2.5\log_{10}{L^*}$. We assume that the LF extends to fainter than observed luminosity, down to $M_{UV}\leq -11$, which corresponds to $M_h\gtrsim 5\times 10^8~\mathrm{M_{\odot}}$ at $z\sim 4$ (e.g. see \citealt{trenti10,tacchella13}). While this represents a significant extrapolation compared to the typical magnitude limit of Lyman-Break galaxy surveys ($M_{UV}\sim -17$, see \citealt{bouwens07}), observations of LBGs behind gravitational lenses have shown that the LF of LBGs at $z\sim 2$ continues to be a steep power law at $M_{UV} \sim -13$ \citep{alavi13}. Note that star forming sites with UV magnitude above $M_{AB}>-11$ are unlikely to host a GRB in any case, since their star formation rate is so low ($\dot{\rho_*} \lesssim 1.3\times 10^{-3}~\mathrm{M_{\sun}~yr^{-1}}$) that stochastic sampling of the IMF is not expected to lead to the formation of any star with $M\gtrsim 30~\mathrm{M_{\sun}}$ within the typical dynamical time of a molecular cloud ($t_{dyn}\sim10^4$~yr). Since the observed UV luminosity is significantly affected by dust extinction, we include in our modeling dust extinction with an empirically calibrated formula following \citet{bouwens2013} and \citet{meurer99}, which link extinction to the UV-continuum slope $\beta$ (for a spectrum modeled as $f_{\lambda}\sim\lambda^{\beta}$): $A_{\mathrm{UV}}=4.43+1.99\beta$. We model the observations by \citet{bouwens2013} as: \begin{eqnarray} & & <\beta(z,M_{AB})>= \\ & & \nonumber \begin{cases} a(z)\exp{(b(z) M_{AB})} + c & \text{if } M_{AB} \geq M_0 \\ c - a(z)\exp(b(z) M_0)(-1+b(z)(M_0-M_{AB})) & \text{if } M_{AB} < M_0 \\ \end{cases} \end{eqnarray} where $c=-2.276$, $M_0=-22$ and $a(z)$, $b(z)$ are determined from fitting the \citet{bouwens2013} measurements. Assuming (like in \citealt{tacchella13}) a Gaussian distribution for $\beta$ at each $M_{\mathrm{UV}}$ value with dispersion $\sigma_{\beta}=0.34$, then the average extinction $<A_{\mathrm{UV}}>$ is given by: \begin{equation} <A_{\mathrm{M_{UV}}}>=4.43+0.79\ln(10)\sigma_{\beta}^2+1.99<\beta>. \end{equation} This dust extinction framework differs slightly from the one we adopted earlier where we had a linear dependence for $\beta(M_{AB})$. In fact, the latest determination of $\beta$ by \citet{bouwens2013} shows evidence for a curved relation between UV slope and absolute magnitude. We adopt empirically an exponential function, which provides a good fit, and then extrapolate it linearly, like in our previous work, at high luminosities ($M_{AB}<-22$, where there are only limited observations). The use of the exponential fit for faint magnitudes has the advantage of avoiding unphysical negative dust corrections. To model the production of GRBs, we include a metallicity-dependent efficiency as in \citet{trenti2013}. 
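Before specifying the metallicity dependence of the GRB efficiency in the next paragraphs, it is useful to make the dust prescription above concrete. The short Python sketch below is ours and purely illustrative: it evaluates the quoted relation for $\langle A_{\rm UV}\rangle$ and the corresponding ratio between intrinsic and observed luminosity (the factor that enters the GRB-host weighting introduced later); the sample values of $\langle\beta\rangle$ are arbitrary, and clipping negative attenuations to zero is a safeguard of the sketch rather than part of the model.
\begin{verbatim}
import math

SIGMA_BETA = 0.34  # adopted Gaussian scatter of the UV continuum slope beta

def mean_a_uv(mean_beta, sigma_beta=SIGMA_BETA):
    """<A_UV> = 4.43 + 0.79 ln(10) sigma_beta^2 + 1.99 <beta>, clipped at zero."""
    return max(0.0, 4.43 + 0.79 * math.log(10.0) * sigma_beta**2
                    + 1.99 * mean_beta)

def lint_over_lobs(mean_beta):
    """L_int / L_obs = 10**(<A_UV>/2.5), i.e. the dust correction factor."""
    return 10.0 ** (mean_a_uv(mean_beta) / 2.5)

# Illustrative values of <beta>: a blue galaxy is barely attenuated,
# while a redder, luminous one loses about two magnitudes in the UV.
for beta in (-2.2, -1.7, -1.3):
    print(f"<beta> = {beta:+.1f}   <A_UV> = {mean_a_uv(beta):.2f} mag"
          f"   L_int/L_obs = {lint_over_lobs(beta):.2f}")
\end{verbatim}
For $\langle\beta\rangle\simeq-1.3$ the intrinsic luminosity exceeds the observed one by a factor of $\sim6$, which is precisely why neglecting dust can mask a metallicity bias in the host luminosity function.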
Galaxies are assigned a metallicity that depends on redshift and luminosity following the relation derived by \citet{maiolino08}, after we convert our UV luminosity into stellar mass. Specifically, we use their Equation (2) with coefficients given in their Table 5 for the \citet{bruzual03} spectral energy distribution template. Coefficients are linearly interpolated in redshift space among data points. At $z>3.5$ we assume no further evolution of the mass-metallicity relation. { Since our model implies a one-to-one map of stellar mass to star formation rate at fixed redshift, with the assumption of a mass-metallicity relation we are also defining a metallicity-star-formation rate relation. In Section~\ref{section:metal_outcome} we compare our model results to the fundamental relation derived by \citet{mannucci2010} at $z<2.5$. } The efficiency of GRB production versus metallicity $Z$ is based on the idea that we expect two main contributions to long-duration GRB production: a channel broadly based on the evolution of single massive stars, where metallicity plays a crucial role in regulating mass loss via winds (the Collapsar model, e.g. \citealt{yoon06}), plus alternative channels without strong metallicity dependence (e.g., for binary progenitors, \citealt{fryer05}). Combining the output from the \citet{yoon06} Collapsar simulations with a metallicity-independent plateau, we write the total GRB efficiency $\kappa(Z)$ as: \begin{equation}\label{eq:metal_eff} \kappa(Z)=\kappa_0\times\frac{a\log_{10}{Z/Z_{\odot}}+b+p}{1+p}, \end{equation} where $\kappa_0$, $p$, $a$, and $b$ take the same values as in \citet{trenti2013}: For $Z/Z_{\odot}\leq10^{-3}$, $a=0$, $b=1$; for $10^{-3}\leq Z/Z_{\odot}\leq10^{-1}$, $a=-3/8$, $b=-1/8$; for $10^{-1}\leq Z/Z_{\odot}\leq1$, $a=-1/4$, $b=0$; for $Z/Z_{\odot}>1$, $a=0$, $b=0$. A fiducial value of $p=0.2$ is used to construct our reference model. $\kappa_0$ is an overall normalization which has no impact on predictions for the luminosity functions of GRB hosts. The quantity $p/(1+p)$ can be interpreted as the probability that a GRB originates from the metal-independent channel rather than from a collapsar, \emph{in the limit of metallicity of the host galaxy approaching zero}. We emphasize that $p$ is \emph{not} a relative probability but rather a minimum, metal-independent \emph{plateau} value for the efficiency of forming GRBs. The relative probability of having GRBs originating from the two channels can only be computed after taking into account the metallicity distribution of the star forming galaxies, and this quantity generally strongly depends on the redshift because of the evolving mass-metallicity relation of galaxies (see Section~\ref{sec:results} for a more detailed discussion). To take into account both the intrinsic scatter in the mass-metallicity relation, as well as the likely presence of a spread in the metallicity of star forming gas within a given galaxy, we assume that $Z$ follows a log-normal distribution, with $\langle \ln(Z) \rangle$ given by the mass-metallicity relation and $\sigma = 0.4$. This value corresponds to about 0.15 dex of intrinsic scatter in $\log_{10}(Z)$ (e.g., see \citealt{Panter08}) and gives the resulting $\langle \kappa(Z) \rangle$ shown in Figure~\ref{fig:kappa}. We calculate the comoving GRB rate from the model star formation rate, weighted by $\langle \kappa(Z) \rangle$.
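A minimal numerical transcription of Eq.~(\ref{eq:metal_eff}) and of the log-normal averaging described above is the following Python sketch (the sketch is ours and purely illustrative: $\kappa_0$ is set to unity since only the shape matters here, the fiducial $p=0.2$ is adopted, and the Monte Carlo average is just one convenient way of evaluating $\langle \kappa \rangle$):
\begin{verbatim}
import math
import random

P_PLATEAU = 0.2   # fiducial metal-independent plateau p
SIGMA_LNZ = 0.4   # adopted scatter of ln(Z) around the mass-metallicity relation

def kappa(z_rel, p=P_PLATEAU):
    """GRB efficiency kappa(Z)/kappa_0 of Eq. (eq:metal_eff); z_rel = Z/Z_sun."""
    if z_rel <= 1e-3:
        a, b = 0.0, 1.0
    elif z_rel <= 1e-1:
        a, b = -3.0 / 8.0, -1.0 / 8.0
    elif z_rel <= 1.0:
        a, b = -1.0 / 4.0, 0.0
    else:
        a, b = 0.0, 0.0
    return (a * math.log10(z_rel) + b + p) / (1.0 + p)

def mean_kappa(z_mzr_rel, p=P_PLATEAU, sigma=SIGMA_LNZ, n=200_000):
    """<kappa(Z)> for a log-normal Z whose mean ln(Z) is fixed by the
    mass-metallicity relation (simple Monte Carlo average)."""
    mu = math.log(z_mzr_rel)
    return sum(kappa(math.exp(random.gauss(mu, sigma)), p)
               for _ in range(n)) / n

for z_rel in (1e-2, 1e-1, 1.0):
    print(f"Z/Z_sun = {z_rel:7.3f}  kappa = {kappa(z_rel):.3f}"
          f"  <kappa> = {mean_kappa(z_rel):.3f}")
\end{verbatim}
For the fiducial $p=0.2$, a host at one tenth of the solar metallicity is still more than twice as efficient per unit star formation as a solar-metallicity one, while below $Z/Z_{\odot}=10^{-3}$ the efficiency saturates at its maximum value. With this notation, the redshift-dependent fraction of GRBs attributed to the metal-independent channel (shown later in Figure~\ref{fig:channel}) can be written compactly as
\begin{equation}
f_{\rm ind}(z)=\frac{p}{1+p}\,\frac{\int \dot{\rho}_*(L,z)\,dL}{\int \langle\kappa(Z(L,z))\rangle\,\dot{\rho}_*(L,z)\,dL}\,,
\end{equation}
where $\dot{\rho}_*(L,z)$ denotes the intrinsic star formation rate density contributed by hosts of luminosity $L$; this is our shorthand for the construction just described: $f_{\rm ind}$ tends to $p/(1+p)$ at high redshift, where $\langle\kappa\rangle\to1$ for essentially all hosts, and approaches unity at low redshift, where the collapsar term is suppressed in most galaxies.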
\begin{figure} \begin{center} \includegraphics[scale=0.36]{log_kappa.eps} \end{center} \caption{Efficiency of GRB production per unit stellar mass as a function of host-galaxy metallicity for different values of the metal-independent plateau parameter $p$, from $p=0$ (magenta) to $p=+\infty$ (green).}\label{fig:kappa} \end{figure} To demonstrate that our framework is successful in describing the evolution of the properties of the LBG population, we compare model predictions for the star formation rate (luminosity density) and stellar mass density to observations in Fig.~\ref{fig:rates}. Further model validation and discussion of the comparison with observations of the LBG luminosity functions and specific star formation rates from $z\sim 0.3$ to $z\sim 8$ can be found in \citet{tacchella13}. \begin{figure} \begin{center} \includegraphics[scale=0.36]{sfrd_comparison.eps} \includegraphics[scale=0.36]{stellar_mass_density_ref_model.eps} \end{center} \caption{Left panel: Star formation rate versus redshift (black points), inferred from LBG observations in the UV integrated to $M_{AB}=-17.0$ including dust corrections \citep{bouwens2013}. Predictions from our LF model are shown as black shaded area when integrated to the same limiting magnitude as the data, highlighting the ability of our framework to describe the star formation history of the Universe. The red shaded region shows model predictions when integrated to $M_{AB}=-11.0$ (faintest galaxy assumed in the modeling). Right panel: Stellar mass density versus redshift for a compilation of observations from \citet{tacchella13} (black datapoints) and predictions from our model. Both data and model predictions for stellar mass density are integrated for $M_*>10^8~\mathrm{M_{\sun}}$.}\label{fig:rates} \end{figure} To derive predictions for the UV LF of GRB host galaxies, we start from the dust-attenuated (observed) LF of LBGs $\phi_{LBG}(L_{obs})$, and apply a weighting to the LF which takes into account the fact that the galaxies are selected based on the presence of a GRB. First, since the GRB rate is proportional to the star formation rate, more luminous host galaxies are preferentially present in a sample of observations targeted at GRB locations. Second, the metallicity bias must also be accounted for, introducing a further weight by $\langle \kappa(Z(M_{AB}) \rangle$. When introducing these weights, it is crucial to note that the GRB rate is proportional to the intrinsic star formation rate and not to the observed (dust-attenuated) one, therefore the correct weight to use is $L_{int}=10^{-M_{AB}^{(int)}/2.5}$, where $M_{AB}^{(int)} = M_{AB}^{(obs)}-A_{UV} (M_{AB}^{(obs)})$. Hence we can write: \begin{equation} \phi_{GRBhost} (L_{obs}) \propto \phi_{LBG}(L_{obs}) \times L_{int} \times \langle \kappa (Z) \rangle, \end{equation} which can be rewritten as: \begin{equation}\label{eq:phiGRBhost} \phi_{GRBhost} (L_{obs}) \propto \phi_{LBG}(L_{obs})\times L_{obs} \times 10^{A_{UV}(M_{AB}^{(obs)})/2.5} \times \langle \kappa (Z) \rangle. \end{equation} Similarly, we proceed to construct predictions for the stellar mass function of GRB hosts, { and for their metallicity distribution.} \section{Modeling result}\label{sec:results} \subsection{Comoving GRB rate} \begin{figure} \begin{center} \includegraphics[scale=0.36]{rgrb_all.eps} \end{center} \caption{ GRB comoving rate from \citet{wanderman10}, shown as black points with errorbars, compared to predictions of our models depending on the assumed efficiency of the metal-independent channel for GRBs. 
Our reference model ($p\sim0.2$, shown in bold red with shaded region for typical model uncertainty) provides the best description of the data. Models with stronger metal bias are shown in magenta and blue, while a no-bias model is shown in green.}\label{fig:rgb_rate} \end{figure} The comoving GRB rate predicted by our model is shown in Figure~\ref{fig:rgb_rate}, and compared to the observed event rate as derived by \citet{wanderman10}. { Since the rate encompasses information from the full sample of GRBs, it provides the most stringent constraints on the free parameter of our model, the metallicity plateau $p$. Thus we use the inference from the GRB rate to construct a canonical model, and then we test its predictions against observations of GRB hosts luminosity/star formation rate and metallicities, which are available only for small sub-samples.} As expected, the data-model comparison is similar to our previous analysis from \citet{trenti2013}. A small, but non-zero value of $p$ provides the best description of the redshift evolution of the GRB rate. The highest likelihood when the rate is fitted at $z<6$ is given by $p\sim 0.2$ (Figure~\ref{fig:likelihood}), which we assume as our canonical model (red curve with shaded uncertainty region in Figure~\ref{fig:rgb_rate}). Both lower and higher $p$ exhibit systematic differences from the observations, implying that the comoving rate points toward the presence of a metallicity bias in GRBs, but that a non-zero fraction of events needs to originate from a metal-independent channel. This conclusion confirms the findings of \citet{trenti2013}, with a slightly revised quantitative result (the canonical model considered in our earlier work had $p=0.3$) arising because we updated the model calibration using the latest LBG luminosity function and dust content determinations by \citet{bouwens2013} and \citet{bouwens2014}. \begin{figure} \begin{center} \includegraphics[scale=0.36]{revised_likelihood.eps} \end{center} \caption{Likelihood for the value of the metal-independent channel for GRB production ($p$), derived from the comoving GRB rate (red line), from the star formation rates of the TOUGH survey (unbiased sub-sample, green line), { from the metallicity of GRB-DLAs at $z<3.5$ (magenta line, observations from \citealt{cucchiara2014}), and combining the three constraints (blue line).} Dashed horizontal lines denotes the likelihood boundaries at 68\% and 90\% confidence. Because of the small sizes of the TOUGH unbiased subsample { and of the GRB-DLA sample,} the strongest constraint on $p$ originates from comoving rate modeling.}\label{fig:likelihood} \end{figure} Figure~\ref{fig:channel} shows how the production of GRBs in our canonical model switches from predominantly Collapsars (metal-biased) at high-redshift, when most star forming sites have low metallicity, to metal-independent (binary evolution) as the redshift decreases. This happens because Collapsars are progressively suppressed by the increasing metallicity while the metal-independent channel continues to act unaffected, leading to an overall decrease with redshift of the efficiency of GRB production per unit stellar mass in star formation. By redshift $z=0$ our canonical model with plateau $p=0.2$ predicts that over 90\% of the GRBs are produced by the metal-independent channel (see Figure~\ref{fig:channel}). If we assume instead a smaller value for $p$, for example $p=0.04$ in Figure~\ref{fig:channel} (blue line), then we see that the relative fraction of Collapsars is higher. 
In this case Collapsars are still sub-dominant at $z=0$, but they quickly become the major mode of GRB production at $z\gtrsim 1$, where the majority of observed GRBs are located. { Values of $p< 0.04$ further suppress the metal-independent channel, leading to a decrease of the rate of low-$z$ GRBs, while Collapsars become the dominant progenitor channel at all redshifts.} \begin{figure} \begin{center} \includegraphics[scale=0.36]{channel_grb.eps} \end{center} \caption{Fraction of GRB produced by the metal independent channel as a function of redshift for our model with $p=0.2$ (solid red line), compared to the model with $p=0.04$ (solid blue line). At low redshift the majority of GRBs in both models are produced by this channel, while at high-$z$ the production approaches the asymptotic value $p/(1+p)$, and thus most GRBs are expected to originate from Collapsars.}\label{fig:channel} \end{figure} \subsection{UV luminosity functions and star formation rates} Using Equation~(\ref{eq:phiGRBhost}) we construct the GRB-host galaxy luminosity function at different redshifts, and present the results in Figure~\ref{fig:LFhost}. The best-fitting Schechter parameters $M_{AB}^{(*)}$ and $\alpha$ are reported in Table~\ref{tab:LFhost} in the magnitude range $-22.5\leq M_{AB} \leq -17.0$. { The full luminosity function model output is published in the electronic edition of the article, and Table~\ref{tab:modeloutput} shows a portion for guidance regarding its form and content.} In general, we find that a Schechter LF provides a good description of the modeling output in the magnitude range considered, with typical values of $M_{AB}^{(*)}$ similar to those of the LBG LF at the same redshift, that is $-21.2\lesssim M_{AB}^{(*)} \lesssim -19.4$, with higher values at low $z$. The faint-end slope $\alpha$ is also evolving with redshift, from $\alpha(z=0)\sim -0.1$ to $\alpha(z=9)\sim -1.4$. Translating the luminosity functions into the probability of detecting a GRB host galaxy as a function of the limiting magnitude of the observations, we predict that GRB host surveys reaching $M_{AB}=-18.0$ will have greater than 50\% completeness at $z<4$ (see bold red dotted line in Figure~\ref{fig:lf_distribution}). Observations reaching significantly deeper, to $M_{AB}=-16.0$, are needed for the same completeness at $z>7$, as a result of the steepening of the LF (see also \citealt{salvaterra13} for independent modeling of the high-$z$ hosts). \begin{figure} \begin{center} \includegraphics[scale=0.36]{lf_grb_host_revised.eps} \end{center} \caption{Predictions for the UV luminosity function of GRB host galaxies based on our reference model which includes a GRB metallicity bias for different redshifts. The best fitting Schechter parameters associated to the curves shown are given in Table~\ref{tab:LFhost}. The curves have been normalized to have the same volume density at $M_{AB}=-20.0$. { The plot shows that bright GRB host galaxies are rare both at low (black dotted, $z=0.75$) and very high redshift (solid magenta, $z=8$), respectively because of high metallicity (leading to both suppression of GRB production and significant reddening), and intrinsic rarity of massive, luminous galaxies at early times. 
Intermediate redshifts are shown as short dashed blue line ($z=2.75$) and long-dashed red line ($z=5.75$).} As the redshift increases, the LF becomes steadily steeper at the faint end.}\label{fig:LFhost} \end{figure} When compared to the model predictions for the LF of LBGs, we see that for most redshifts in our canonical case of $p=0.2$, the GRB hosts empirically follow an approximate scaling with $\phi_{LBG}(L_{obs}) \times L_{obs}$ \emph{when a metallicity bias is present}. This apparently counter-intuitive result is illustrated in more detail in Figure~\ref{fig:metal_dust_balance} and stems from the presence of dust absorption. In fact, Equation~(\ref{eq:phiGRBhost}) shows that since dust and metallicity correlate, the effect of the metallicity bias is partially countered by the dust absorption term, present in the equation because the GRB rate traces the intrinsic (dust-corrected) luminosity density. This is a key finding of our work: If one were to neglect the impact of dust and only consider the observed LF of LBG, then there would be the expectation that observing a GRB host luminosity function scaling as $\phi_{LBG}(L_{obs})\times L_{obs}$ would mean that no metallicity bias is present. \begin{figure} \begin{center} \includegraphics[scale=0.36]{metal_dust_balance_revised.eps} \end{center} \caption{Illustration of the combined effect of dust reddening and metallicity bias in determining the shape of the GRB host galaxy LF at $z=3.25$. The LF (red solid) is steepened by $\Delta \alpha\sim 0.2$ because of metallicity bias in GRB production compared to a model where the GRB rate traces the star formation rate (blue dotted). Still, because of the difference between intrinsic versus observed luminosity, $L_{int}$ and $L_{obs}$, induced by reddening, the final GRB-host LF \emph{looks} very similar to $\phi(L_{obs})*L_{obs}$ shown as black long dashed line. Thus one could naively, but wrongly, infer that there is no metallicity bias! }\label{fig:metal_dust_balance} \end{figure} Furthermore, because of the mass-metallicity relation of galaxies and of its redshift evolution, the impact of the metallicity bias on the LF shape depends strongly on the redshift. At very low redshift, a significant fraction of GRBs { in the canonical model} originates from the metal-independent channel, since GRB production by collapsar is suppressed in most hosts. In fact, the mass-metallicity relation we use in this work, which is based on \citet{kewley2008} at $z\sim 0$\footnote{Our mass-metallicity relation at $z=0.07$ derives from \citet{kewley2008} with calibration done using the \citet{kewley2002} method, and conversion of stellar masses to account self-consistently for the \citet{bruzual03} initial mass function. We recall we are simply implementing the relation by \citet{maiolino08}, hence further details are available from that work (see, e.g. their Table 5 and Figure 7).}, predicts $Z\geq Z_{\sun}$ (with $Z_{\sun}$ corresponding to $12+\log{(O/H)}=8.7$) for a galaxy with stellar mass $\gtrsim 2\times 10^9~\mathrm{M_{\sun}}$ which has $M_{AB}\lesssim -17$ (after including dust extinction). Then, at an intermediate redshift, there is a transition to a regime where the effect of the metallicity bias is most pronounced in producing a steepening of the GRB host LF. Finally, at very high-$z$, the shape of the LF of GRBs is no longer affected by the metallicity bias since the majority of star forming sites have very low metallicities. 
{ At $z\gtrsim 5$ we thus predict that GRBs become essentially unbiased tracers of the star formation rate, since the majority of star forming sites have low metallicity and thus $\langle \kappa \rangle \to 1$ independent of the value of $p$ (see Figure~\ref{fig:kappa}; see also \citealt{salvaterra13}).} In Figure~\ref{fig:lf_distribution} we show predictions for typical luminosities and star formation rates of GRB hosts for models with different $p$. If we set $p=0$ in Eq.~\ref{eq:metal_eff} (strong metallicity bias), GRBs cannot be hosted in galaxies with metallicity $Z\geq Z_{\odot}$, which introduces a sharp bright-end cut-off in the LF. This is evident by looking at the sharp decrease in all luminosity quantities { at $z<3$ for the $p=0$ model (red lines in the top left panel of Figure~\ref{fig:lf_distribution})}. At higher redshift, a suppression of the host LF at the bright end is still present, but hardly distinguishable from our canonical model. The $p=0$ case is clearly an extreme scenario, already ruled out by observations of the high metallicity of some GRB hosts (e.g., \citealt{levesque10c}), but its analysis is still useful to highlight the impact of a strong metallicity bias on the GRB host LF. A scenario where the metallicity bias is absent and the GRB rate traces the star-formation rate is difficult to discriminate from our canonical model based on $z<1$ observations only, since the two luminosity functions are essentially identical down to the median luminosity of the GRB hosts. The strongest difference between the two scenarios appears instead at $2.5\lesssim z \lesssim6$, when the median host luminosities differ by about one magnitude. This complex situation overall suggests caution when drawing conclusions on metal-bias from a small sample of GRB hosts carrying out a generic comparison with the luminosity and stellar mass functions of star-forming galaxies. For example, \citet{jakobsson2005} conclude that no metallicity bias is present, but they neglect the dust impact discussed above. When cast in our framework, the GRB host LF determined in that study would instead provide a weak-evidence in favor of the presence of a metallicity bias at a strength broadly consistent with our canonical model. In fact, we would expect that in absence of metallicity bias the GRB-host LF is shallower by $\Delta \alpha \sim 0.2$ at the intermediate redshifts analyzed by \citet{jakobsson2005}. \subsection{Stellar Masses of GRB hosts} Figure~\ref{fig:stellar_masses} presents the predictions for the stellar masses of GRB hosts based on our model. Qualitatively, the same trends discussed above for the luminosity functions are present and intermediate redshift hosts appear the most promising to discriminate models with different $p$. This is not surprising, since our model has by construction a one-to-one correspondence between stellar mass and UV luminosity (even though the relation between mass and luminosity changes both with redshift and halo mass because of the evolution of the galaxy assembly time; see Section~\ref{sec:model} and \citealt{tacchella13,trenti2013}). Thus, while our model describes well the redshift evolution of the stellar mass density, as well as the flattening at the low-mass end with decreasing redshift, as observed for example by \citet{ilbert13}, our choice to start from UV luminosity needs to be taken into account in a data model comparison. 
In fact, this suggests to give more weight to analyses of the star formation rates (UV luminosity), rather than the host mass functions. We stress that this is true in general, and it is not resulting just because we are using a simplified model: The GRB rate traces recent star formation, and not the total stellar mass of the host galaxy. No matter how massive and low-metallicity a galaxy is, if its star formation rate is approaching zero, so will be its GRB rate. { Therefore, inference on the metallicity bias based on stellar mass functions, such as that of \citet{boissier2013}, might be affected by higher amount of noise compared to analyses based on star formation rate.} { Figure~\ref{fig:stellar_masses} shows that in case of a very strong metallicity bias ($p=0$), we expect a slight rise in the median stellar mass of the GRB hosts from $z=0$ to $z\sim 2.5$. Models with weaker or no bias predict instead a steady decrease of the host stellar mass with redshift. When these model predictions are compared against observations (within the caveats discussed above), it is fundamental to take into account sample selection as well, since low-mass hosts may be preferentially missed at high redshift. In addition it is important to ensure modeling consistency as well (e.g. with respect to assumptions on the stellar initial mass function). A preliminary comparison with recent stellar mass measurements \citep{perley13,hunt2014} highlights that further modeling and data comparison is needed. In fact, the data, if taken at face value, show an increase of the median stellar mass of GRB hosts with redshift, which would indicate the presence of a strong metal bias in our model. However, the observed stellar masses are generally too high to be consistent with $p\sim 0$, and would rather be suggestive weaker metal bias (plus incompleteness at the low-mass end).} \subsection{Metallicity of GRB hosts}\label{section:metal_outcome} Finally, Figure~\ref{fig:metallicity_distribution} illustrates the model predictions for the metallicity distribution of the GRB hosts. Models with different $p$ have a qualitatively similar trend: host metallicities are expected to decrease with increasing redshift, simply reflecting the underlying mass-metallicity relation that we assume in our model. The detailed choice of $p$ influences the shape of the metallicity distribution, and how rapidly it evolves with redshift. Interestingly, the upper tail (top 5\%) of the distribution is very similar in all cases from $p=0$ to $p=+\infty$, implying that using the maximum metallicity observed for a GRB host bears little insight into the presence, or absence, of a metallicity bias. The median, and the lower 20\% of the distribution have instead markedly different behaviors, which should make it viable to differentiate between models. For example there is a factor 10 difference in the value of the bottom 20\% of the metallicity distribution going from $p=0$ to $p=+\infty$ at $z=0$, with the relative difference in metallicity remaining almost unchanged at high-$z$. { The predictions for the metallicity of the GRB hosts have a direct dependence on the assumed relation between stellar-mass/star formation rate and galaxy metallicity. As discussed in Section~\ref{sec:model}, we use the redshift-dependent mass-metallicity relation of \citet{maiolino08}. 
An alternative possibility would have been to implement the fundamental relation observed at $z\lesssim 2.5$ by \citet{mannucci2010} which links all three quantities together without introducing any redshift dependence (at least in the redshift range considered by the authors). In Figure~\ref{fig:mannucci_vs_maiolino} we compare the metallicity as a function of star formation rate predicted in our model considering the two relations. It is very interesting, and reassuring, to note that at low redshift ($z=0.25$ shown in the figure), the two different assumptions lead essentially to the same relation. As the redshift increases, there is however a marked difference: the \citet{mannucci2010} fit remains at fairly high metallicity, while with the \citet{maiolino08} relation, we predict values that are lower by $\sim 0.5$ dex for high star formation rates (massive and luminous galaxies), with the difference growing significantly for lower star formation rates. Taken at face value, the \citet{mannucci2010} relation is inconsistent with the observations of low metallicities ($Z\sim 10^{-2}~Z_{\sun}$) in GRB-DLAs at $z\sim 2$ which are reported by \citet{cucchiara2014}, since it only predicts $Z\gtrsim 10^{-1}~Z_{\sun}$. In addition, it is not clear how to extrapolate the relation above $z>2.5$, since galaxies at higher redshift start showing systematic deviations, as \citet{mannucci2010} highlight in their Figure 4 (right panel). While these issues prevent us from implementing an analysis of GRB host observations with the \citet{mannucci2010} relation, our Figure~\ref{fig:mannucci_vs_maiolino} may suggest that the extrapolation of the mass-metallicity relation by \citet{maiolino08} is too steep for faint galaxies, possibly introducing systematic uncertainty in our analysis, which could be evaluated if large complete samples of GRBs metallicity measurements/limits were available.} \section{A first application to GRB host observations}\label{sec:tough} { \subsection{Star Formation Rates}} For a proper comparison between model and data, it is fundamental to resort to complete and unbiased samples of follow-up observations. Otherwise, systematics that are hard or even impossible to quantify might affect the inferences obtained. For example, it is likely to expect that observations of GRB hosts with short observations leading to non-detections might be preferentially non reported in the literature compared to a detection of a bright host. This would lead to a luminosity function that is skewed toward higher number density at the bright end, masking the presence of a metallicity bias. Furthermore, dusty hosts might be preferentially missed, and a high dust content might even compromise the success in obtaining a redshift for the GRB from the afterglow, leading in this case to an over-estimation of the impact of the metallicity bias. A complete sample free of selection effects is therefore fundamental. This is the reason why we consider data from the optically unbiased GRB host (TOUGH) survey \citep{hjorth12} to illustrate the potential of a comprehensive data-model comparison as a tool to infer the presence and strength of a metallicity bias in GRB production, { and to test the inference derived from modeling of the GRB comoving rate}. 
Specifically, we restrict to the \emph{complete} sub-sample presented in \citet{michal2012}, which includes UV-inferred star formation rate { (uncorrected for dust)} for all hosts and which we overplot to our predictions in Figure~\ref{fig:lf_distribution}, { which are also referring to \emph{observed} UV luminosity therefore providing an uncorrected star formation rate}. Despite the low number of points (11 hosts) and their relatively low redshift ($z\lesssim 1.1$), the analysis of the observations provides a first constraint on $p$ independent of the global GRB rate modeling. We carried out a Maximum Likelihood analysis of the star formation rates from \citet{michal2012} at varying $p$, and show the results in Figure~\ref{fig:likelihood} (green line). The likelihood strongly excludes $p=0$ and has a peak at $p=0.04$. Unsurprisingly, based on the considerations of Section~\ref{sec:results}, there is a near-plateau of high likelihood values at $p\gtrsim 0.04$: At 68\% confidence, $0.01<p<0.12$, but the interval is broader and highly asymmetric for the 90\% confidence region, with allowed values $0.1<p<0.8$. This is the consequence of the limited discriminating power of a low-$z$ sample. Yet, despite the large uncertainty, it is re-assuring to see that $p$ values inferred independently from the UV luminosity modeling and from the rate modeling (red solid) agree at the $\sim 1.5\sigma$ level. When the two likelihoods are combined together (blue solid), $p\sim 0.2$ remains the most likely solution. { This is essentially because a small sample of 11 datapoints is not competitive against the larger amount of information encoded in the comoving GRB rate, derived from all the known events. A plateau value $p\sim 0.2$} implies a probability $p/(1+p)\sim 0.15$ for GRBs to be produced by metal-independent progenitors \emph{in the limit of very low metallicity}, while the actual fraction of GRBs originating from the two channels as a function of redshift is shown in Figure~\ref{fig:channel}. { \subsection{Metallicity Distribution}} Observations of hosts metallicities can provide a { further} test to validate/reject the modeling outcome { (which we show in Figure~\ref{fig:metallicity_distribution}) for different values of the efficiency parameter $p$.} Unfortunately, and unlike the case for UV luminosity, data are not available for a \emph{complete} sample, complicating a formal likelihood analysis. Still, a qualitative comparison with results from the compilation of GRB hosts with measured metallicities carried out by \citet{savaglio2013}, { and the new recent measurement/re-analysis by \citet{cucchiara2014}} yields good insight: Our canonical model ($p=0.2$, red lines in figure~\ref{fig:metallicity_distribution}) predicts { hosts metallicities} that are too high compared to the observed values { at low redshift (green points in the figure).} { Values $p\lesssim 0.04$ would be needed to improve consistency with these data, pointing toward a lower efficiency of the metal-independent production channel, and thus a stronger presence of the metallicity bias, which has been argued by several studies in this redshift range \citep{savaglio09,castroceron10,perley13}, and also derived by our maximum likelihood analysis of the TOUGH host galaxy luminosity (green curve in Figure~\ref{fig:likelihood}). 
However, this in turn creates tension with the global modeling of the GRB rate, which prefers $p\gtrsim 0.1$ at 90\% confidence.} Of course, this { inference on $p$ from host metallicity} is valid only under the assumption that { the observations we consider have} a distribution that is representative of that of a complete sample. { This is} complicated because of the difficulty of measuring metallicities for GRB hosts both at the high end (e.g. because of possible dust biases), and at the low end (since hosts are intrinsically faint). { The impact of observational biases in the likelihood analysis is clearly seen by analyzing subsamples of the measurements and upper limits on metallicity reported by \citet{cucchiara2014}, and shown as blue points in Figure~\ref{fig:metallicity_distribution}. If we restrict the likelihood analysis at $z\lesssim 3.5$ we have a good agreement between data and our canonical model, with $p=0.2$ providing a solution within the 68\% confidence interval (see magenta line in Figure~\ref{fig:likelihood}). However, inclusion of all data shifts the likelihood toward significantly higher $p$ values and absence of metallicity bias would be the preferred solution (see red line in Figure~\ref{fig:likelihood_appendix}). This possibly originates because $z>3.5$ spectroscopic observations have been carried out primarily at low-resolution, which provide lower limits to metallicity \citep{cucchiara2014}. On the other hand, if one restricts the sample only to high-resolution spectroscopy, then the likelihood analysis yields the opposite result, namely a preference for $p=0$ (blue line in Figure~\ref{fig:likelihood_appendix}). Still, even assuming this likelihood, when combined with the result from GRB rate modeling, the total analysis shifts only marginally away from our canonical $p=0.2$ model which remains within the combined 68\% confidence interval (black line in Figure~\ref{fig:likelihood_appendix}). } For future progress, it would be extremely helpful to build an observational sub-sample { with uniform coverage} which is complete and similar to the TOUGH dataset of star-formation rates considered by \citet{michal2012}, but targeted at higher redshifts, in order to obtain likelihood constraints comparable or even stronger than those inferred from rate modeling. For this, host follow-up of all GRBs presented by \citet{salvaterra12} would be ideal and could solve the tension between the metallicity predictions which favor low $p$, and the comoving GRB rate modeling which seems to favor higher $p$. \section{Conclusions}\label{sec:con} In this paper we have used a minimal but successful model for the redshift evolution of the luminosity function of star forming (Lyman Break) galaxies to investigate production of long-duration GRBs and properties of their host galaxies. We followed the framework introduced in our earlier works on the connection between GRBs and LBGs as complementary probes of star formation across cosmic time \citep{trenti12,trenti2013}. Here, we have made predictions for the luminosity, { stellar mass functions, and metallicity distribution} of GRB hosts, depending on the presence and strength of the GRB metallicity bias. The key findings of our modeling, introduced in Sections~\ref{sec:model}-\ref{sec:results} and discussed in Section~\ref{sec:tough}, are the following: \begin{itemize} \item The luminosity function of GRB hosts is connected to that of star forming galaxies, but changes in the shape are introduced by several factors. 
First, GRBs trace star formation, therefore to first order UV-bright hosts are preferred, with an approximate scaling given by host luminosity ($\Phi_{(GRB)}\propto \Phi_{(LBG)}\times L$). The presence of a metallicity bias qualitatively counteracts, at least partially, this luminosity function flattening, making GRBs more likely to have faint hosts. This established picture is however missing one key ingredient, which we discussed and modeled, namely the impact of dust reddening. Since dust is proportional to metallicity, and dust masks star formation, we highlight that to first approximation the dust has an impact on the GRB host luminosity function that is comparable but opposite to that of a mild metallicity bias (Figure~\ref{fig:metal_dust_balance}). Analysis of GRB host observations need to take that into account properly or else they would risk to draw (partially) incorrect information on the strength of the GRB metallicity bias. \item { The maximum likelihood model derived from analysis of the observed comoving GRB rate} is one that has a moderate metallicity bias, with about 80\% of GRBs in a very low metallicity environment produced by collapsars and the remaining 20\% by a metal-independent channel { (such as, e.g. binaries or magnetar engines; see also \citealt{jimenez13})}. However, this model predicts that the large majority of $z\lesssim 1$ GRBs are produced by the metal independent channel, because low-$z$ hosts have metallicities where there is a preferential suppression of collapsars (Figure~\ref{fig:channel}). At intermediate redshifts ($z\sim 3$) the two channels produce comparable numbers of bursts, offering the most discriminating power between alternative scenarios. Finally, at $z\gtrsim 5$ most of the GRBs are produced in metal poor environments where collapsar efficiency has plateaued, so it becomes again hard to discriminate among models with different $p$. \item We make detailed predictions for the luminosity and stellar mass functions of GRB hosts, as illustrated in the key Figures~\ref{fig:lf_distribution}-\ref{fig:stellar_masses}. Because of the direct relation between UV luminosity and GRB production, a comparison with luminosity/star formation rates of host galaxies is recommended over the use of stellar masses which are only indirectly, and approximatively, linked to recent star formation. To avoid introducing uncontrolled systematic errors (such as preferential reporting of detections over null results), it is also fundamental to use only complete samples of GRB hosts free of selection effects. \item As a first comparison with observations, we present a maximum likelihood analysis of the star formation rates of the complete sub-sample of TOUGH observations of GRB hosts by \citet{michal2012}. The sample size is small (11 hosts) and at low redshift, but nevertheless it provides a preliminary characterization of the metallicity bias from host studies using our framework. The inferred strength of the metallicity bias is { lower ($p=0.04$) but} consistent with that of our canonical model within uncertainties, leading to a combined constraint $0.1<p<0.35$ at 90\% confidence. This means that both the strong metal bias ($p=0$) and the no bias scenarios ($p=+\infty$) are clearly ruled out. \item Our model predicts the metallicity distribution of GRB hosts as well. 
This is shown in Figure~\ref{fig:metallicity_distribution}, which highlights that characterizing the median (and the lower 50\%) of the metallicity distribution of GRB hosts seems to be a powerful indicator of the strength of the metallicity bias. In contrast, the top of the metallicity of GRB hosts is remarkably similar among different models, providing relatively little insight to constrain $p$. The predictions shown in Figure~\ref{fig:metallicity_distribution} for $p=0.2$ appear in conflict with the { low-redshift observations of GRB host metallicities by \citet{savaglio09}}. This suggests that smaller $p$ values smaller or closer to the peak of the likelihood from GRB host luminosities at $p=0.04$ yield an overall better description of the observations. However, we can make this statement only in a qualitative sense, since we do not have a complete sample of GRB host metallicities, { and observational effects can strongly affect a formal maximum likelihood analysis, as shown in Figure~\ref{fig:likelihood_appendix}.} \end{itemize} These conclusions allow us to address the two questions we posed in Section~\ref{sec:intro} as one of the motivations to our study: Are GRBs more probable in low-metallicity environments compared to higher metallicity? Is there a maximum metallicity cut-off? We conclude that there is clear evidence for a metal-dependent relation between GRB and star-formation rate, with low metallicity environments preferentially producing bursts (see also \citealt{jimenez13}). However, a sharp metallicity cut-off is strongly ruled out by the data-model comparison. At the current time, the main limitation of the analysis we presented is given by the lack of a well defined complete sample of GRB host observations at intermediate redshifts. This should be the top priority for further progress in the characterization of the physics of GRB explosions from studies of their host galaxies. { Because our canonical model parameter $p=0.2$, inferred primarily from the GRB rate modeling, appears in tension with metallicity measurements, it is important to focus on acquiring more data on star formation rates and metallicities of GRB hosts.} Future observations should also be able to test directly model consistency, namely the three main ingredients we used to construct predictions: (1) calibration of the luminosity functions starting from LBG observations; (2) empirical, observationally motivated relations for metallicity versus mass/luminosity (with its extrapolation to fainter than observed galaxies) and for dust content of GRB hosts; (3) GRB efficiency versus metallicity described by a simple relation with one free parameter ($p$). For example, ALMA observations of GRB hosts have the potential to characterize star formation rates, and measure directly dust and metal content from molecular line diagnostic, bypassing the dust absorption modeling present in the current work focused on rest-frame UV data. Finally, in a few years, 30m class observatories from the ground, and the \emph {James Webb Space Telescope}, will not only be capable of detecting fainter hosts than those seen with current facilities, but also provide spectra of the hosts seen today, thereby measuring directly the metallicity distribution of the environments in which GRBs explode. \acknowledgements We thank Sandra Savaglio for useful discussions, Nino Cucchiara for sharing data, and an anonymous referee for helpful comments. 
This work was partially supported by the European Commission through the Marie Curie Career Integration Fellowship PCIG12-GA-2012-333749 (MT) and by NSF Grant No. AST 1009396 (RP). RJ acknowledges support from Mineco grant FPA2011-29678-C02-02. { Tabulated model predictions are available in electronic format.}
\section{Introduction} \label{Sec.1} To explain the observed cosmological matter-antimatter asymmetry, the violation of baryon number $B$ is one of the three basic ingredients for an initially symmetrical Universe \cite{Sakharov:1967dj}. The baryon number is necessarily violated in the Grand Unified Theories (GUTs) \cite{Georgi:1974sy,Nath:2006ut}, which can unify the strong, weak and electromagnetic interactions into a single underlying force at a scale of $M_\mathrm{GUT}\simeq 2 \times 10^{16}$~GeV. A general prediction of the GUTs is proton decay. However, no experimental evidence of proton decay, $B$-violating neutron decay or neutron-antineutron oscillation has been found \cite{PDG}. Fortunately, the new generation of underground experiments JUNO \cite{JUNO,JUNO_PPNP}, Hyper-Kamiokande \cite{Hyper-K} and DUNE \cite{DUNE}, with huge target masses and different detection technologies, will continue to search for proton decay and test the GUTs. Among many possible proton decay modes \cite{PDG}, $p\to e^+ \pi^0$ and $p\to \bar{\nu} K^+$\ are the two dominant ones predicted by a majority of GUTs. The first one is expected to be the leading mode in many GUTs, particularly in those non-supersymmetric GUTs which typically predict the lifetime of the proton to be about $10^{35}$ years \cite{Babu}. In comparison, the decay mode $p\to \bar{\nu} K^+$\ is favored by a number of supersymmetric GUTs. For these two decay modes, the best measured lower limits on the proton partial lifetime are $\tau/B(p \to e^+ \pi^0) > 2.4 \times 10^{34}$ years \cite{Takenaka:2020vqy} and $\tau/B(p \to \bar{\nu} K^+) > 5.9 \times 10^{33}$ years \cite{Abe:2014mwa} at $90\%$ C.L. from the Super-Kamiokande (Super-K) experiment, which is a water Cerenkov detector. Compared to water Cerenkov detectors, a liquid scintillator (LS) detector has a distinct advantage in detecting the proton decay mode $p\to \bar{\nu} K^+$\ \cite{Svoboda,LENA, KamLAND,JUNO}. The present paper investigates the sensitivity of the future LS detector JUNO. In an LS detector, the decay gives rise to a three-fold coincidence feature in time, usually composed of a prompt signal from the energy deposit of the $K^+$, a short-delayed signal ($\tau=12.38$ ns) from the energy deposit of the decay daughters of the $K^+$, and a long-delayed signal ($\tau=2.2$ $\mu$s) from the energy deposit of the final Michel electron. Using the time-correlated triple coincidence, the JUNO detector can effectively identify $p\to \bar{\nu} K^+$\ and reject the atmospheric neutrino backgrounds \cite{KamLAND}. Preliminary studies have given a rough estimation of the sensitivity of JUNO to the proton decay mode $p\to \bar{\nu} K^+$\ \cite{JUNO}. In this paper, the JUNO potential is studied with a Monte Carlo (MC) simulation based on a detailed description of the detector performance. Sec.~\ref{Sec.2} briefly introduces the JUNO detector and its expected performance. In Sec.~\ref{Sec.3}, the MC simulation of $p\to \bar{\nu} K^+$\ and the atmospheric $\nu$\ backgrounds is described. In Sec.~\ref{Sec.4}, the multi-pulse fitting method and other selection criteria to discriminate $p\to \bar{\nu} K^+$\ from the backgrounds are investigated. We present the expected sensitivity of JUNO to $p\to \bar{\nu} K^+$\ in Sec.~\ref{Sec.5}. Finally, a conclusion is given in Sec.~\ref{Sec.6}. \section{JUNO Detector} \label{Sec.2} JUNO is a multi-purpose neutrino observatory under construction in South China.
As a low background observatory, it has an overburden of 700 m of rock (1800 m.w.e.) to shield the detector from cosmic muons. Its central detector (CD) is a 12 cm thick acrylic sphere with a diameter of 35.4 m, filled with 20 kton of LS. The CD is immersed in a cylindrical water pool and supported by a stainless steel lattice structure. In addition, the CD is instrumented with 17612 20-inch PMTs (LPMT) and 25600 3-inch PMTs (SPMT), which are uniformly distributed outside the acrylic sphere. 5000 of the LPMT are dynode (DYN) PMTs produced by Hamamatsu Photonics K.K., while the remaining LPMT are Micro Channel Plate (MCP) PMTs manufactured by North Night Vision Technology Co. Ltd. (NNVT) \cite{MCPPMT:2020}. Their transit time spreads (TTS) are 1.1 ns and 5.0 ns in $\sigma$, respectively, according to the result of the PMT mass test \cite{Wen:2019sik}. The total photocathode coverage of the LPMT will be around $75\%$. The SPMT, which contribute another $2.5\%$ photocathode coverage, are also deployed to serve as an additional independent calorimeter. The TTS ($\sigma$) of the SPMT has been measured to be around 1.5 ns \cite{Cao:2021wrq}. For each MeV of energy deposited in the LS by low energy events, around $1.3\times 10^{3}$ photoelectrons (PE) are expected to be received by the LPMT. A VETO system, consisting of a Top Tracker (TT) detector and a water Cerenkov PMT system, is designed to suppress the influence of cosmic muons. The TT detector is a plastic scintillator detector complex which partly covers the water pool and the CD, and helps reject the cosmic muons passing through it. The water Cerenkov PMT system is assembled on the outer surface of the stainless steel lattice structure and measures the Cerenkov light produced by the cosmic muons passing through the water pool. The rejection ratio of cosmic muons is estimated to be more than 99\%. \section{Simulation} \label{Sec.3} To understand the behavior of $p\to \bar{\nu} K^+$\ events and to discriminate them from the backgrounds in the JUNO detector, a Monte Carlo simulation has been performed, which is composed of two steps: generator production and detector simulation. The generator-level events of $p\to \bar{\nu} K^+$\ and its backgrounds are produced with GENIE (version 3.0.2) \cite{GENIE}, in which the primary processes of $p\to \bar{\nu} K^+$\ and the atmospheric $\nu$\ interactions in the LS are simulated. The detector simulation, i.e., the simulation of the final states of $p\to \bar{\nu} K^+$\ and atmospheric $\nu$\ interactions in the JUNO detector, is processed in SNiPER \cite{SNIPER}, which is a Geant4 \cite{Agostinelli:2002hh} based simulation software developed by the JUNO collaboration. All the related optical processes, including the quenching effect, are considered. The profiles of the LS, including the fluorescence times, can be found in Ref. \cite{LStimeprofile}. In total, \SI{10}{k} $p\to \bar{\nu} K^+$\ (PD) events and \SI{160}{k} atmospheric $\nu$\ events are simulated with vertex positions uniformly distributed over the whole LS volume. This study does not yet use a full event reconstruction of the energy, position and hit time information. Instead, these quantities are smeared according to the expectation from the detector Monte Carlo and used as the input of our further analysis. The visible energy is the energy deposition reconstructed from the number of PE received by the LPMT. For a conservative consideration, it is smeared by ${3\%}/{\sqrt{E_{vis} ({\rm MeV})}}$ when the energy deposition is smaller than 60 MeV, and by a resolution of $1\%$ when it is greater \cite{CalibStrategy}.
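For concreteness, the following is a minimal Python sketch of how such a resolution model could be applied to MC truth quantities: the energy resolution quoted above ($3\%/\sqrt{E_{vis}}$ below 60 MeV, $1\%$ above) and the 30 cm Gaussian vertex smearing described in the next paragraph. The function names and the use of NumPy are illustrative assumptions; this is not part of the official JUNO software chain.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)  # fixed seed for reproducibility

def smear_energy(e_true_mev):
    """Assumed visible-energy resolution: 3%/sqrt(E[MeV]) below 60 MeV,
    1% above (conservative setting quoted in the text)."""
    if e_true_mev < 60.0:
        rel_res = 0.03 / np.sqrt(e_true_mev)
    else:
        rel_res = 0.01
    return rng.normal(e_true_mev, rel_res * e_true_mev)

def smear_vertex(xyz_true_cm, sigma_cm=30.0):
    """Smear the energy-deposition center with an isotropic 30 cm Gaussian."""
    return np.asarray(xyz_true_cm) + rng.normal(0.0, sigma_cm, size=3)

# Example: a 275 MeV deposition at the detector center
print(smear_energy(275.0), smear_vertex([0.0, 0.0, 0.0]))
\end{verbatim}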
The position of the event is described by the center of the energy deposition, i.e., the average of the deposition positions weighted by the deposited energy. It is smeared by a Gaussian distribution with a resolution of 30 cm. In this study, the detected times of the photons hitting the cathodes of the SPMT are collected to form a hit time spectrum for each event, after correction for the photon time-of-flight (TOF) relative to the reconstructed deposition center. The TTS of the SPMT is applied randomly according to the measurement results introduced in Sec. \ref{Sec.2}. The reason for not using the LPMT is explained in Sec. \ref{sec:PD}. \subsection{Proton Decay}\label{sec:PD} Based on the JUNO LS components, the initial proton of $p\to \bar{\nu} K^+$\ may come from free protons (in hydrogen) or bound protons (in carbon). In the free proton decay, the final states $\bar{\nu}$ and $K^+$ have fixed kinetic energies of 339 MeV and 105 MeV, respectively. According to a toy MC simulation with the corresponding monochromatic $K^+$ in the JUNO detector, it is found that $92.4\%$ of the $K^+$ deposit all of their kinetic energy within 1.2~ns, which means that a deposition pulse appears in the hit time spectrum immediately. Then, these $K^+$ stay at rest until decaying into their daughter particles after an average of 12.38 ns. The $K^+$ has six main decay channels. The two most dominant channels are $K^+ \to \mu^+ \nu_\mu$ and $K^+ \to \pi^+ \pi^0$ with branching ratios of $63.56 \%$ and $20.67 \%$, respectively \cite{PDG}. In the first channel, the produced $\mu^+$ has a kinetic energy of 152 MeV and decays to a Michel electron with a lifetime of about 2.2 $\mu$s. The $\pi^0$ and $\pi^+$ produced in the second channel decay into two gammas and into a $\mu^+$ and a $\nu_\mu$, respectively, and consequently also produce a Michel electron. All daughter particles deposit their kinetic energies immediately and give a second deposition pulse. After the TOF correction, the hit time spectra of the $K^{+}$ and of the decay particles form an overlapping double pulse pattern. Given the relatively long lifetime of the muon, a later third pulse from the Michel electron, as a delayed feature of $p\to \bar{\nu} K^+$, appears in the hit time spectrum. This triple coincidence, introduced in Sec.~\ref{Sec.1}, is one of the most important features to distinguish a $p\to \bar{\nu} K^+$\ event from the backgrounds; it is illustrated in Fig.~\ref{fig:TimeSpec}. \begin{figure}[h] \centering \includegraphics[width=8cm]{figure/HitTimeDistribution.pdf} \caption{\label{fig:TimeSpec} Illustration of the hit time spectrum of a typical $p\to \bar{\nu} K^+$\ event, containing the deposition pulses of the $K^+$, the decay daughter of the $K^+$ ($\mu^{+}$ in this event) and the Michel electron.} \end{figure} As introduced in Sec.~\ref{Sec.2}, both the LPMT and the SPMT are employed in JUNO. However, as shown in Fig.~\ref{fig:PMTPerformance}, they perform differently in the hit time spectrum collection. When an LPMT is triggered by a hit, the waveform is digitized and recorded by the electronics. For low energy events like the inverse $\beta$ decay (IBD), the reconstruction from the waveform to the hit time of each PE is possible since only a few photons are received by most LPMT. However, a typical $p\to \bar{\nu} K^+$\ event usually has an energy deposition of more than 200 MeV.
In this case, a large number of PEs is received by each LPMT within a few tens of ns (as shown in Fig.~\ref{fig:singlePMT}) and the hit time reconstruction becomes difficult. As shown in Fig.~\ref{fig:3pulses}, the triple coincidence time feature would be smeared if the hit time reconstruction is not carried out. In comparison, since the photosensitive area of an SPMT is around 1/40 of that of an LPMT, most SPMT work in single-hit mode, in which an SPMT is usually hit by at most one PE. Thanks to this, the triple coincidence time feature of $p\to \bar{\nu} K^+$\ is well preserved. Thus, only the SPMT are used in this study to collect the hit time spectrum. \begin{figure}[ht] \subfigure[Waveform of a LPMT acquired from the SNiPER electronics simulation.]{ \includegraphics[height=5cm]{figure/singlePMToutput.pdf} \label{fig:singlePMT} } \subfigure[Comparison of the LPMT and SPMT hit time output from a typical $p\to \bar{\nu} K^+$\ event after TOF correction.]{ \includegraphics[height=5cm]{figure/PMToutput.pdf} \label{fig:3pulses} } \caption{ \label{fig:PMTPerformance} Simulated PMT output of a typical $p\to \bar{\nu} K^+$\ event. The total visible energy of this event is 275 MeV and the $K^+$ decays 13.7 ns after it is produced. Photon hit time reconstruction is not easy to achieve when using the LPMT to detect a hundreds-of-MeV event, so the SPMT are used for the hit time spectrum collection. More details can be found in the text.} \end{figure} The protons bound in carbon nuclei are influenced by nuclear effects \cite{Abe:2014mwa}, including the nuclear binding energy, Fermi motion and nucleon-nucleon correlations. The kinetic energies of the produced $K^+$ are smeared around the 105 MeV value of the free proton case. In addition, the $K^+$ kinetic energy is also changed by the final state interactions (FSI). Before the $K^+$ escapes from the residual nucleus, it may interact with the spectator nucleons and knock one of them out of the remaining nucleus. It can also exchange its charge with a neutron and turn into a $K^0$ via $K^+ + n \to K^0 +p$. Furthermore, the de-excitation of the residual nucleus produces $\gamma$s, neutrons, protons, etc. The FSI and de-excitation processes change the reaction products, which are crucial for our later analysis. The GENIE generator (version 3.0.2) \cite{GENIE} is used to model these nuclear effects. Some corrections have been made to the default GENIE. Firstly, the nuclear shell structure, which is not included in the default nuclear model of GENIE, is taken into account. A spectral function model, which provides a 2-dimensional distribution of momentum $k$ and removal energy $E_R$ for protons in $^{12} {\rm C}$, is applied to describe the initial proton states \cite{Benhar:2005dj}. Then, the initial proton energy is determined by $E_p = m_p - E_R$, where $m_p$ is the mass of the free proton. In this case, about $2.2\%$ of the protons in $^{12} {\rm C}$ cannot decay into $\bar{\nu}$ and $K^+$ since the corresponding proton invariant mass is smaller than the $K^+$ mass \cite{Hu:2021xjz}. Secondly, we turn on the hadron-nucleon model in GENIE. The default GENIE uses the hadron-atom model to evaluate the FSI, which costs less time but does not include the $K^+ + n \to K^0 +p$ interaction. Meanwhile, we modify the target nucleon energy and the binding energy with $m_p - E_R$ (or $m_n - E_R$) and $E_B = E_R - k^2/(2 M_{^{11}\rm{B}})$ \cite{Bodek:2018lmc}, respectively.
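As a rough illustration of the kinematic condition behind the $2.2\%$ figure quoted above, the short Python sketch below checks whether a bound proton with removal energy $E_R$ and Fermi momentum $k$ can decay into $\bar{\nu}K^+$, i.e., whether its invariant mass $\sqrt{(m_p-E_R)^2-k^2}$ reaches the $K^+$ mass. The example $(k, E_R)$ values are placeholders and do not reproduce the actual spectral function of Ref. \cite{Benhar:2005dj}.
\begin{verbatim}
import numpy as np

M_PROTON = 938.272  # MeV
M_KAON   = 493.677  # MeV

def can_decay_to_nu_kaon(k_mev, removal_energy_mev):
    """Kinematic check for a bound proton: effective energy E_p = m_p - E_R,
    momentum k, invariant mass sqrt(E_p^2 - k^2).  The decay p -> nubar K+
    is allowed only if the invariant mass is at least the K+ mass
    (the antineutrino is treated as massless)."""
    e_p = M_PROTON - removal_energy_mev
    m_inv_sq = e_p**2 - k_mev**2
    return m_inv_sq > 0 and np.sqrt(m_inv_sq) >= M_KAON

# Placeholder (k, E_R) values in MeV; only the extreme last case is forbidden:
for k, er in [(0.0, 16.0), (220.0, 40.0), (760.0, 100.0)]:
    print(k, er, can_decay_to_nu_kaon(k, er))
\end{verbatim}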
In addition, the fraction of $K^+$-nucleon charge exchange and elastic scattering interactions is corrected according to the numbers of spectator protons and neutrons in the remaining nucleus. With all these modifications, we finally obtain the distribution of $K^+$ kinetic energies shown in Fig.~\ref{fig:KaonKinetic}. The charge exchange probability is about $1.7\%$ for $p\to \bar{\nu} K^+$\ in $^{12}$C according to the result of the modified GENIE. \begin{figure}[h] \centering \includegraphics[width=7cm]{figure/KaonKinetic.pdf} \caption{\label{fig:KaonKinetic} The $K^+$ kinetic energy distributions for $p\to \bar{\nu} K^+$\ in $^{12}$C with (solid line) and without (dashed line) the FSI from the default (blue) and modified (red) GENIE. } \end{figure} Thirdly, all the residual nuclei in the default GENIE are generated in the ground state, and thus no de-excitation processes are taken into account. The TALYS (version 1.95) software \cite{Koning:2012zqy} is then applied to estimate the de-excitation processes, with the excitation energy $E_x$ as input. The $E_x$ of the residual nucleus can be calculated through $E_x =M_{inv}-M_R$, where $M_{inv}$ and $M_R$ are the corresponding invariant mass and static mass, respectively. For $p\to \bar{\nu} K^+$\ in $^{12} {\rm C}$, $^{11} {\rm B^*}$, $^{10} {\rm B^*}$ and $^{10} {\rm Be^*}$ account for $90.9 \%$, $5.1 \%$ and $3.1 \% $ of the residual nuclei, respectively. Among these residual nuclei, the $^{10} {\rm B^*}$ and $^{10} {\rm Be^*}$ come from the final state interactions between the $K^+$ and one of the nucleons in $^{11} {\rm B^*}$. The de-excitation modes and corresponding branching ratios of the residual nuclei $^{11} {\rm B^*}$, $^{10} {\rm B^*}$ and $^{10} {\rm Be^*}$ have been reported in Ref. \cite{Hu:2021xjz}. According to these results, many de-excitation processes can produce neutrons. In the case of an $s_{1/2}$ proton decay, the dominant de-excitation modes of $^{11} {\rm B^*}$ states, including $n + {^{10} {\rm B}}$, $n + p + {^{9} {\rm Be}}$, $ n + d + {^{8} {\rm Be}}$, $n + \alpha + {^{6} {\rm Li}}$, $2n +p + {^{8} {\rm Be}}$, contribute a combined branching ratio of $45.8 \%$ \cite{Hu:2021xjz}. About $56.5 \%$ of the highly excited $^{11} {\rm B^*}$ states can directly emit one or more neutrons from their exclusive de-excitation modes. In addition, the non-exclusive de-excitation processes, and the de-excitation modes of $d + {^{9} {\rm Be}}$ and $d + \alpha + {^{5} {\rm He}}$, can also produce neutrons \cite{Hu:2021xjz}. Most of these neutrons give a 2.2 MeV $\gamma$ from neutron capture in the JUNO LS, which influences the setting of the selection criteria (introduced in Sec.~\ref{Sec:4.2}). \subsection{Backgrounds} \label{sec:AN} The dominant backgrounds for $p\to \bar{\nu} K^+$\ come from atmospheric $\nu$\ interactions and cosmic muons, since the deposited energy of $p\to \bar{\nu} K^+$\ events is usually larger than 100 MeV. Cosmic muons come from the interactions of cosmic rays with the atmosphere. The produced cosmic muons usually carry very high energy and produce clear Cerenkov light when passing through the water pool outside the JUNO CD. With the VETO system, JUNO is expected to reject more than 99\% of the cosmic muons. The surviving cosmic muons usually clip the corner of the water pool, depositing very little energy and producing little Cerenkov light, and therefore escape detection by the VETO system.
Thus, most VETO-surviving cosmic muons are expected to leave no signal in the CD and will not be a background for the $p\to \bar{\nu} K^+$\ observation. For those muons that survive the VETO but leave entering and exiting signals in the CD, the energy deposition is mainly caused by the energetic primary muon. Consequently, with the visible energy, VETO and volume selections, as well as the expected triple coincidence feature selection, this type of background is expected to be negligible. Therefore, the background mainly discussed in this paper is the atmospheric $\nu$\ background. According to the detector properties, the expected observed number of atmospheric $\nu$\ events is calculated with the help of the atmospheric $\nu$\ fluxes at the JUNO site \cite{Honda:2015fha}, the neutrino cross sections from GENIE \cite{GENIE} and the best-fit values of the oscillation parameters in the case of normal hierarchy \cite{PDG}. The JUNO LS detector is expected to observe about 36 k events in ten years. We use GENIE in its default configuration to generate 160 k atmospheric $\nu$\ events, which corresponds to 44.5 years of JUNO data taking. Each atmospheric $\nu$\ event is assigned a weight, which indicates the probability of this event occurring in a JUNO 200 kton-year exposure, taking neutrino oscillation into account. Then, these atmospheric $\nu$\ events are simulated in SNiPER as our sample database. The atmospheric $\nu$\ events can be classified into the following four categories \cite{Formaggio:2013kya}: the charged current quasi-elastic scattering (CCQE), the neutral current elastic scattering (NCES), the pion production and the kaon production. The categories and their ratios are shown in Table \ref{tab:ANcomponents}. The most dominant backgrounds in the energy range of $p\to \bar{\nu} K^+$\ (sub-GeV) come from the elastic scattering, including the CCQE and the NCES events. The final states of the elastic scattering events usually deposit all their energy immediately and are eventually followed by a delayed signal. Consequently, requiring a triple coincidence feature effectively suppresses these two categories of backgrounds. \begin{table*}[ht] \centering \caption{\label{tab:ANcomponents} The categories of atmospheric $\nu$\ backgrounds. The data are summarized based on the results of GENIE and SNiPER.} \begin{tabular}{c|c|c|c|c} \Xhline{1.2pt} Type & Ratio (\%) & \tabincell{c}{Ratio with $E_{vis}$ in \\ $[100~\rm{MeV},600~\rm{MeV}](\%)$} & Interaction & \tabincell{c}{Signal\\ characteristics} \\ \hline NCES & 20.2 & 15.8 & \tabincell{c}{$\nu+n \to \nu+n$\\$\nu+p \to \nu+p$} & Single Pulse \\ \hline CCQE & 45.2 & 64.2 & \tabincell{c}{$\bar{\nu_{l}}+p \to n+l^{+} $\\$\nu_{l}+n\to p+l^{-} $} & Single Pulse \\ \hline Pion Production & 33.5 & 19.8 & \tabincell{c}{$\nu_l +p \to l^{-}+p+\pi^{+}$\\$\nu+p\to \nu+n+\pi^{+}$} & \tabincell{c}{Approximate Single Pulse\\(Second pulse too low)}\\ \hline Kaon Production & 1.1 & 0.2 & \tabincell{c}{$\nu_l +n\to l^{-}+\Lambda+K^{+}$\\$\nu_l +p\to l^{-}+p+K^{+}$} & Double Pulse \\ \Xhline{1.2pt} \end{tabular} \end{table*} Another significant background is the CC and NC pion production, which is caused by the single pion resonant interaction and the coherent pion interaction. The produced pions decay into muons with an average lifetime of 26 ns. These muons, together with those produced in the CC pion production, consequently produce Michel electrons. Pion-production events can therefore feature a triple coincidence in time similar to that of the $p\to \bar{\nu} K^+$\ search.
However, the muon contributing to the second pulse of the triple coincidence has a kinetic energy of only 4 MeV, which is too small compared to the total energy deposition. The atmospheric $\nu$\ interactions with pion production have a larger probability of producing accompanying nucleons. Some of the created energetic neutrons have a small probability of propagating freely for more than 10 ns in the LS. In this case, the neutron interaction can cause a sufficiently large second pulse. Therefore, pion production events with an energetic neutron, e.g. $\nu + p \to \nu + n + \pi^+$, can mimic the signature of $p\to \bar{\nu} K^+$\ . In fact, $\bar{\nu}_\mu$ CC quasi-elastic scattering $\bar{\nu}_\mu + p \to n + \mu^+$ can also contribute to this kind of background. It should be noted that this type of event was not observed by KamLAND \cite{KamLAND}. However, because of its larger target mass and proton exposure compared to KamLAND, it is possible for JUNO to observe these backgrounds. Since the energetic neutron usually breaks up the nucleus and produces many neutrons, a large number of neutron captures can be used to suppress this kind of background. Another possible source of background is resonant and non-resonant kaon production (with or without a $\Lambda$ among the products). The visible energy distribution of the kaon production is shown in Fig.~\ref{fig:KaonProd}. The NuWro generator \cite{NuWro} is applied to help estimate the non-resonant kaon production, because this type of event is not included in GENIE due to strangeness number conservation. Based on the result of the simulation, this kind of background has a negligible contribution in the relevant energy range (smaller than 600 MeV), which is consistent with the LENA \cite{LENA} and KamLAND \cite{KamLAND} conclusions. \begin{figure}[h] \centering \includegraphics[width=8cm]{figure/KaonProduction.pdf} \caption{\label{fig:KaonProd} Visible energy distribution of the kaon production from atmospheric $\nu$\ backgrounds. According to the plot, the resonant kaon production has a negligible contribution and the non-resonant background can be eliminated with an upper $E_{vis}$ cut at 600~MeV.} \end{figure} \section{Analysis} \label{Sec.4} To quantify the performance of the background discrimination, we design a series of selection criteria to evaluate the detection efficiency of $p\to \bar{\nu} K^+$\ and the corresponding background rate based on the simulated data sample. According to the physics mechanisms introduced in the last section, the key part of the selection is based on the triple coincidence signature in the hit time spectrum. Much useful work on searching for proton decay with an LS detector has been done by the LENA and KamLAND groups \cite{LENA,KamLAND}. However, the situation in JUNO is more challenging because of the much larger detector mass. For the proton exposure accumulated in ten years, the detected number of atmospheric $\nu$\ events would be about 20 times that of the KamLAND experiment. Therefore, more stringent selection criteria have to be defined to suppress the background to a sufficiently low level, at least as low as that of KamLAND. Besides the common cuts on energy, position and temporal features, additional criteria have to be explored. Based on the expected configuration of the JUNO detector, an additional way to distinguish $p\to \bar{\nu} K^+$\ is provided by using the delayed signals, including the Michel electron and the neutron capture gammas.
\subsection{Basic Selections} The basic event selection uses only the most apparent features of the decay signature. The first variable considered is the visible energy of the event. As illustrated in Fig.~\ref{fig:EvisCom}, the visible energy of $p\to \bar{\nu} K^+$\ is mostly concentrated in the range $200\, {\rm MeV} \leq E_{vis} \leq 600 \, {\rm MeV}$, compared to that of the atmospheric $\nu$\ backgrounds. With the $E_{vis}$ cut, nearly half of the atmospheric $\nu$\ events in the data sample can be rejected while the $p\to \bar{\nu} K^+$\ survival rate is more than $94.6\%$. The left and right peaks mainly correspond to the $K^+ \to \mu^+ \nu_\mu$ and $K^+ \to \pi^+ \pi^0$ decay channels, respectively. \begin{figure} \centering \label{EvisPD} \includegraphics[width=8cm]{figure/EvisCompare.pdf} \caption{The visible energy distributions of $p\to \bar{\nu} K^+$\ (PD) and atmospheric $\nu$\ (AN) events.} \label{fig:EvisCom} \end{figure} In the second step, if the CD is triggered, the VETO detector is required to be quiet in two consecutive windows of 1000 ns before and after the prompt signal, respectively. In this way most muons can be removed, while the remaining muons usually pass through the CD near its surface. For the remaining muons surviving the visible energy cut, their track length must be smaller than 3~m, which corresponds to a radius of the energy deposition center of about 17.6 m. So the radius of the volume cut is set to $R_V \leq 17.5$ m, considering the reconstruction error. As shown in Table \ref{tab:eff}, after the basic cuts: \begin{description} \item[(Cut-1)] visible energy $200\, {\rm MeV} \leq E_{vis} \leq 600 \, {\rm MeV}$, \item[(Cut-2-1)] the VETO system is not triggered in the 1000 ns windows before and after the prompt signal, \item[(Cut-2-2)] the volume cut is set as $R_V \leq 17.5$ m, \end{description} the survival rate of $p\to \bar{\nu} K^+$\ in the data sample is $93.7\%$, while 47849 atmospheric $\nu$\ events remain out of the total 160 k. Further selection methods to reduce the atmospheric $\nu$\ background are required. \subsection{Delayed Signals and Event Classification} \label{Sec:4.2} Due to the good energy and time resolution, JUNO can measure the delayed signals of $p\to \bar{\nu} K^+$\ and atmospheric $\nu$\ events, including the Michel electron and the neutron capture. About $95\%$ of the $p\to \bar{\nu} K^+$\ events are followed by a Michel electron, while only 50\% of the background events exhibit a delayed signal after the basic selections. On the other hand, $p\to \bar{\nu} K^+$\ on average has a smaller number of captured neutrons per event than the atmospheric $\nu$\ events. According to the differences in the characteristics of the delayed signals, criteria can be set to further reduce the remaining background after the basic selection. The Michel electron is the product of the muon decay, with a kinetic energy of up to 52.8 MeV, and is produced after an average of 2.2 $\mu$s. For the Michel electron signals, we know from the MC simulation the visible energy $E_M$, the correlated time difference $\Delta T_{M}$ with respect to the prompt signal, and the correlated distance $\Delta L_{M}$ to the deposition center of the prompt signal. Based on the physical properties of $p\to \bar{\nu} K^+$\ and background events, it is assumed that JUNO can fully identify the Michel electron with $10\,{\rm MeV} < E_M < 54$ MeV and $150 \, {\rm ns} < \Delta T_{M} < 10000$ ns.
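For concreteness, a minimal Python sketch of such a tagging predicate is given here; it simply encodes the assumed identification windows quoted above and is not part of the actual JUNO reconstruction or analysis software.
\begin{verbatim}
def is_michel_candidate(e_mev, dt_ns):
    """Assumed Michel-electron identification window:
    10 MeV < E_M < 54 MeV and 150 ns < dT_M < 10000 ns
    relative to the prompt signal."""
    return 10.0 < e_mev < 54.0 and 150.0 < dt_ns < 10000.0

def count_michel(delayed_signals):
    """Count tagged Michel electrons in a list of (energy [MeV], dt [ns]) pairs."""
    return sum(is_michel_candidate(e, dt) for e, dt in delayed_signals)

# Example event with one Michel-like delayed signal and one low-energy hit:
print(count_michel([(35.0, 2200.0), (2.2, 5.0e5)]))  # -> 1
\end{verbatim}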
With these requirements, the efficiency to identify Michel electrons is $89.2\%$. The lower limit on $E_M$ is set to avoid the influence of low energy backgrounds, such as reactor antineutrinos and natural radioactivity. In Fig. \ref{fig:Michel}, the distributions of the number of identified Michel electrons $N_{M}$ and of $\Delta L_{M}$ for $p\to \bar{\nu} K^+$\ and atmospheric $\nu$\ events are shown. About $5.58\%$ of the $p\to \bar{\nu} K^+$\ events have $N_M =2$, which corresponds to the $K^+$ decay channel $K^+ \to \pi^+ \pi^+ \pi^-$. For the $N_M =2$ case, $\Delta L_{M}$ is taken as the average of the two correlated distances. It is clear that the proton decay has a smaller $\Delta L_{M}$ on average than the backgrounds. We can consequently use $\Delta L_{M}$ to reduce the atmospheric $\nu$\ backgrounds by applying the criteria: \begin{description} \item[(Cut-3)] tagged Michel electron number $1 \leq N_M \leq 2$, \item[(Cut-4)] correlated distance $\Delta L_{M} \leq 80$ cm, \end{description} to the remaining proton decay candidates after the basic selection. It can be found that $71.4 \%$ of the $p\to \bar{\nu} K^+$\ and $9.2 \%$ of the atmospheric $\nu$\ events in the data sample survive. \begin{figure}[] \centering \subfigure[Distribution of $N_M$]{ \includegraphics[width=8cm]{figure/MTag.pdf} } \subfigure[Distribution of $\Delta L_{M}$]{ \includegraphics[width=8cm]{figure/MichelR.pdf} } \caption{\label{fig:Michel} The $N_M$ and $\Delta L_{M}$ distributions of identified Michel electrons for $p\to \bar{\nu} K^+$\ and atmospheric $\nu$\ events with the basic selection and the selection on the time and energy properties of the Michel electron. A unit area normalization is used.} \end{figure} \begin{figure}[] \centering \subfigure[Distribution of $N_n$]{ \includegraphics[width=8cm]{figure/CTag.pdf} } \subfigure[Distribution of $\Delta L_{n}$]{ \includegraphics[width=8cm]{figure/CaptureR.pdf} } \caption{\label{fig:Neutron} The $N_n$ and $\Delta L_{n}$ distributions of identified neutron captures for $p\to \bar{\nu} K^+$\ and atmospheric $\nu$\ events with the basic selection. A unit area normalization is used.} \end{figure} Similar to the Michel electron, the neutron capture provides another potential selection criterion. Here we assume that the delayed neutron capture signal can be fully identified by requiring the visible energy $1.9 \,{\rm MeV} \leq E_n \leq 2.5$ MeV and the correlated time difference $1 \,{\rm \mu s} \leq \Delta T_{n} \leq 2.5$ ms. In this way, $89.5\%$ of the neutrons produced by atmospheric $\nu$\ events can be identified. In Fig. \ref{fig:Neutron}, the identified neutron distributions of $p\to \bar{\nu} K^+$\ signals and backgrounds after the basic selections are shown. It can be found that the proton decay events have a smaller $N_n$ on average than the atmospheric $\nu$\ events. So we take the selection cut $N_n \leq 3$ to suppress the background. As shown in Fig. \ref{fig:Neutron}, the distance $\Delta L_{n}$, which is defined similarly to $\Delta L_{M}$, can also be a powerful tool to reduce the backgrounds. Thus, a cut of $\Delta L_{n} \leq 70$ cm is required. Note that these criteria on $N_n$ and $\Delta L_{n}$ can reduce an important class of background, namely events with a high energy neutron in the final states of the primary atmospheric $\nu$\ interaction. Such a high energy neutron has a small probability of propagating for more than 10 ns without depositing energy before it interacts with the LS and gives a second pulse.
If the final states include a $\mu^\pm$ or a $\pi^+$, this background event will mimic the three-fold coincidence of $p\to \bar{\nu} K^+$\ . Since the high energy neutron usually produces more neutrons and a larger $\Delta L_{n}$, we choose the following cuts: \begin{description} \item[(Cut-5)] tagged neutron number $ N_n \leq 3$ for $N_M=1$, \item[(Cut-6)] $\Delta L_{n} \leq 70$ cm if $N_M=1$ and $1 \leq N_n \leq 3$, \end{description} to suppress this kind of background. Based on the above discussions about the delayed signals, we naturally classify the MC events into the following three samples: \begin{description} \item[Sample 1] $N_M=1,\Delta L_{M} \leq \SI{80}{cm}, N_n = 0$ \item[Sample 2] $N_M=1,\Delta L_{M} \leq \SI{80}{cm}, 1 \leq N_n \leq 3, \Delta L_{n} \leq \SI{70}{cm}$ \item[Sample 3] $N_M=2, \Delta L_{M} \leq \SI{80}{cm}$ \end{description} \subsection{Multi-Pulse Fitting} \label{multipulsefitting} \begin{figure*}[] \centering \subfigure[Hit time spectrum of a proton decay event]{ \includegraphics[width=8cm,trim=0 0 0 30,clip]{figure/PD-1-seed9383.pdf} } \subfigure[Hit time spectrum of an atmospheric $\nu$\ event]{ \includegraphics[width=8cm,trim=0 0 0 30,clip]{figure/AN-1-seed43453.pdf} } \caption{\label{fig:fitdemo} Illustration of the multi-pulse fitting of the hit time spectra of a proton decay event (left) and an atmospheric $\nu$\ event (right). The x axis is the hit time after TOF correction. The black dots are the observed spectrum from the simulation. The blue line is the fitting result. The green and red filled histograms are the fitted results of the two components in the hit time spectrum, contributed by the $K^+$ and the $K^+$ decay daughters.} \end{figure*} As introduced in Sec. \ref{sec:PD}, a proton decay event usually has a triple coincidence signature in its hit time spectrum. The first two pulses of the triple coincidence overlap with each other because of the short decay time of the $K^+$, which is a distinctive feature of $p\to \bar{\nu} K^+$\ compared to the AN backgrounds. This means that $p\to \bar{\nu} K^+$\ can be distinguished from the backgrounds according to the characteristics of the overlapping double pulses. Therefore, the hit time spectrum is studied further with the multi-pulse fitting method \cite{KamLAND}, in order to reconstruct the time difference and energy of the $K^+$ and its decay daughters. For each event, its hit time spectrum can be fitted with the templates \begin{eqnarray} \phi_{D}(t) & = &\epsilon_{K} \phi_{K}(t) + \epsilon_{i} \phi_{i}[a(t-\Delta T)], \label{phi_D} \\ \phi_{S}(t) & = & \epsilon_{S} \phi_{AN}[at], \label{phi_S} \end{eqnarray} where: \begin{itemize} \item $t$ is the hit time. \item $\phi_{D}$ and $\phi_{S}$ denote the fitting templates of the double pulse fitting and the single pulse fitting, respectively. \item $\phi_{K}(t)$ is the template spectrum of $K^+$; \item $\phi_{i}(t)$ is the template spectrum of the decay daughter of $K^+$; $i = \mu$ or $\pi$ refers to the two dominant decay channels $K^+ \to \mu^+ \nu_\mu$ and $K^+ \to \pi^+ \pi^0$, respectively. When $E_{vis}\leq \SI{400}{MeV}$, $i=\mu$; otherwise, $i=\pi$. The templates have been TOF corrected. They are produced by the MC simulations in which the particles are processed by SNiPER with their corresponding kinetic energies. \item $T_K = 105$ MeV is the initial kinetic energy of the $K^+$ from the free proton decay. \item $T_\mu = 152$ MeV and $T_\pi = 354$ MeV are the initial kinetic energies of the muon and pion from the $K^{+}$ decay at rest.
\item $\phi_{AN}(x)$ is the template hit time spectrum of the backgrounds. It is the average spectrum of all the atmospheric $\nu$\ events with energy deposition from 200 MeV to 600 MeV. \end{itemize} The definitions of the free factors in Eqs. (\ref{phi_D}) and (\ref{phi_S}) are described in the following: \begin{itemize} \item $\Delta T$ is the correlated time difference of the two components. \item $a$ is a stretch factor to account for the shape deformation of the second pulse caused by the electromagnetic showers. \item $\epsilon _{K}$, $\epsilon _{i}$ and $\epsilon_{S}$ are the corresponding energy factors. \end{itemize} For illustration, we use Eq.~(\ref{phi_D}) to fit two typical events as shown in Fig.~\ref{fig:fitdemo}. After fitting the hit time spectrum with the templates of Eqs. (\ref{phi_D}) and (\ref{phi_S}), we calculate the $\chi^2$ of the double and single pulse fittings using the following formulas: \begin{eqnarray} \chi_D^2&=& \sum \frac{(\phi(x)-\phi_D(x))^2}{\sigma^2 (\phi(x))}\label{eq:chi2-gaussian}, \\ \chi_S^2&=& \sum \frac{(\phi(x)-\phi_S(x))^2}{\sigma^2 (\phi(x))}, \end{eqnarray} where $\sigma$ is the statistical fluctuation of the observed spectrum. The $\chi ^2$ ratio $R_{\chi} \equiv \chi_S^2/\chi_D^2$ is taken as a further selection variable. In the double pulse fitting formulated with Eq. (\ref{phi_D}), the extracted characteristics include the time difference $\Delta T$ and the sub-energies $E_{1}$ and $E_{2}$, which are the energies of the overlapping double pulses (from the deposition of the postulated $K^+$ and its decay daughters) and are defined in Eqs. (\ref{E1}) and (\ref{E2}). \begin{eqnarray} E_{1}&=&\frac{\epsilon_K T_{K}}{\epsilon_K T_{K} + \epsilon_i T_{i}/a} E_{Fit} \label{E1} \\ E_{2}&=&\frac{\epsilon_i T_{i}/a}{\epsilon_K T_{K} + \epsilon_i T_{i}/a} E_{Fit} \label{E2} \end{eqnarray} where $a$ is the stretch factor, $\epsilon_{K}$ and $\epsilon_{i}$ are the energy factors introduced in Eqs. (\ref{phi_D}) and (\ref{phi_S}), and the fitted total energy is defined as $E_{Fit} = E_{vis} - \sum E_M - \sum E_n$, which is the visible energy minus the energies of the Michel electrons and neutron captures. $T_K$ and $T_i$ have also been defined in the descriptions of Eqs. (\ref{phi_D}) and (\ref{phi_S}). The way to select $p\to \bar{\nu} K^+$\ candidates from the atmospheric $\nu$\ backgrounds according to the parameters obtained above is introduced in the following. In Fig. \ref{fig:chi2Ratio}, we plot the $R_{\chi}$ distributions for the proton decay and the atmospheric $\nu$\ events after applying the selections from Cut-1 to Cut-6. It can be found that $R_{\chi}$ is a useful tool to reject the background. In fact, $R_{\chi}$ can be regarded as an indicator of whether the fitted event is more consistent with an overlapping double pulse or with a single pulse. The larger $R_{\chi}$ is, the more likely the event is to have two overlapping pulses in its hit time spectrum. A rough cut of $R_{\chi} >1$ can be applied for the selection: if $R_{\chi} >1$, the fitted event is preliminarily identified as a proton decay candidate; otherwise, it is rejected as a background candidate. However, a single common cut on $R_{\chi}$ is not appropriate for the three samples defined at the end of Sec. \ref{Sec:4.2}. Compared to sample 1, which is composed of the common $p\to \bar{\nu} K^+$\ and atmospheric $\nu$\ events, sample 2 additionally contains the background events with an energetic neutron introduced in Sec.~\ref{sec:AN}.
The second pulse caused by the energetic neutron gives these atmospheric $\nu$\ events a fake overlapping double pulse shape in the hit time spectrum. A stricter requirement on $R_{\chi}$ is consequently necessary to reduce this background. The $K^+$ produced in the $p\to \bar{\nu} K^+$\ events of sample 3 actually decays via $K^{+}\to\pi^{+}\pi^{+}\pi^{-}$, given the requirement of $N_{M}=2$ Michel electrons. As a result, the $p\to \bar{\nu} K^+$\ events with $N_{M}=2$ should be easier to distinguish from the backgrounds. Therefore it is reasonable to set a less stringent cut on $R_{\chi}$ for this sample in order to keep a high detection efficiency. Consequently, the $R_{\chi}$ cuts are set separately for the three samples. In order to sufficiently reject the atmospheric $\nu$\ backgrounds, we require \begin{description} \item[(Cut-7-1)] $R_\chi > 1.1$ for Sample 1, \item[(Cut-7-2)] $R_\chi > 2.0$ for Sample 2, \item[(Cut-7-3)] $R_\chi > 1.0$ for Sample 3. \end{description} \begin{figure}[h] \includegraphics[width=8cm]{figure/Chi2_Ratio.pdf} \caption{\label{fig:chi2Ratio} Distributions of the $\chi ^2$ ratio $R_{\chi} \equiv \chi_S^2/\chi_D^2$ from the $p\to \bar{\nu} K^+$\ (PD) and atmospheric $\nu$\ (AN) events after the basic selection and the delayed signal selection.} \end{figure} \begin{figure}[h] \includegraphics[width=8cm]{figure/DeltaT.pdf} \caption{\label{fig:FitTime} Distribution of the fitted $\Delta T$ (Eq. (\ref{phi_D})) of $p\to \bar{\nu} K^+$\ (PD, in blue) and atmospheric $\nu$\ (AN, red filled and pink) events with different $R_{\chi}$ cuts after the basic selection and delayed signal selection.} \end{figure} The distributions of $\Delta T$ are shown in Fig.~\ref{fig:FitTime}, where a rough cut of $R_{\chi}>1$ is applied to $p\to \bar{\nu} K^+$\ and the backgrounds. From the figure, it can be found that the $\Delta T$ of the remaining backgrounds that are mis-identified as $p\to \bar{\nu} K^+$\ candidates is mostly distributed at small values, because the atmospheric $\nu$\ events usually have a single pulse shape. Consequently, the following requirement is applied: \begin{description} \item[(Cut-8)] the correlated time difference should satisfy $\Delta T \geq 7$ ns. \end{description} Concerning the kinematics of the $K^+$ and its decay daughters, the sub-energy $E_1$ should be distributed from 0 to more than 200 MeV with an average of 105 MeV, while $E_2$ should be fixed around 152 MeV or 354 MeV depending on the decay mode. In Fig.~\ref{fig:E1E2}, we plot the correlated sub-energy distributions of $p\to \bar{\nu} K^+$\ and background events. Two obvious groups can be observed in the left panel, corresponding to the two dominant decay channels of the $K^+$. Only a small group of atmospheric $\nu$\ events is left in the bottom right corner of the right panel of Fig.~\ref{fig:E1E2}, which is contributed by the mis-identification of a tiny second peak. It is clear that a box selection on $E_1$ and $E_2$ can efficiently reject the atmospheric $\nu$\ backgrounds. Therefore the selections \begin{description} \item[(Cut-9-1)] $30 \, {\rm MeV} \leq E_1 \leq 200$ MeV, \item[(Cut-9-2)] $100 \, {\rm MeV} \leq E_2 \leq 410$ MeV, \end{description} are required. The lower boundary of $E_1$ is set to avoid the influence of accidental coincidences with low energy events such as reactor antineutrinos or radioactive backgrounds.
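To make the template fit of Sec.~\ref{multipulsefitting} concrete, the Python sketch below performs a simplified version of the double- and single-pulse fits of Eqs.~(\ref{phi_D}) and (\ref{phi_S}) on a binned hit time spectrum and returns the ratio $R_\chi$. The analytic pulse shape, the binning and the use of \texttt{scipy.optimize.curve\_fit} are illustrative assumptions; the actual analysis uses MC-derived templates rather than the toy shapes used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, tau=5.0):
    """Toy normalized pulse shape standing in for the MC templates."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    mask = t >= 0.0
    out[mask] = np.exp(-t[mask] / tau)
    return out

def phi_double(t, eps_k, eps_i, a, dt):
    # Analogue of Eq. (phi_D): K+ template plus stretched, shifted daughter template
    return eps_k * pulse(t) + eps_i * pulse(a * (t - dt))

def phi_single(t, eps_s, a):
    # Analogue of Eq. (phi_S): single atmospheric-neutrino-like template
    return eps_s * pulse(a * t)

def chi2(y_obs, y_fit):
    sigma2 = np.maximum(y_obs, 1.0)   # Poisson-like uncertainty per bin
    return np.sum((y_obs - y_fit) ** 2 / sigma2)

def r_chi(t_bins, spectrum):
    """Return R_chi = chi2_single / chi2_double for one hit time spectrum."""
    pd, _ = curve_fit(phi_double, t_bins, spectrum, p0=[100.0, 100.0, 1.0, 12.0])
    ps, _ = curve_fit(phi_single, t_bins, spectrum, p0=[200.0, 1.0])
    return chi2(spectrum, phi_single(t_bins, *ps)) / chi2(spectrum, phi_double(t_bins, *pd))

# Toy overlapping double-pulse spectrum (K+ decaying after ~13 ns):
t = np.linspace(0.0, 100.0, 200)
toy = 300 * pulse(t) + 400 * pulse(t - 13.0) + np.random.default_rng(0).poisson(2, t.size)
print(r_chi(t, toy))   # expected to be well above 1 for this double-pulse toy event
\end{verbatim}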
\begin{figure*}[] \centering \subfigure[$p\to \bar{\nu} K^+$\ ]{ \includegraphics[height=8cm]{figure/PD-Esub2D-1.pdf} } \subfigure[atmospheric $\nu$\ ]{ \includegraphics[height=8cm]{figure/AN-Esub2D-1.pdf} } \caption{\label{fig:E1E2} Correlated $E_1$ and $E_2$ distributions for the $p\to \bar{\nu} K^+$\ (a) and atmospheric $\nu$\ (b) events with the basic selection, delayed signal selection, the $R_{\chi}$ cut and the $\Delta T$ cut. Events outside the red boxes are rejected as background. More details can be found in the text.} \end{figure*} \begin{table*}[] \centering \caption{\label{tab:eff} Detection efficiencies of $p\to \bar{\nu} K^+$\ and the number of atmospheric $\nu$\ background events after each selection criterion.} \begin{tabular}{c|c|c|c|c|c|c|c} \Xhline{1.2pt} \multicolumn{2}{c|}{\multirow{2}{*}{Criteria}} & \multicolumn{3}{c|}{Survival rate of $p\to \bar{\nu} K^+$\ (\%)} & \multicolumn{3}{c}{Survival count of atmospheric $\nu$\ } \\ \cline{3-8} \multicolumn{2}{c|}{} & Sample 1 & Sample 2 & Sample 3 & Sample 1 & Sample 2 & Sample 3 \\ \hline \multirow{2}{*}{\tabincell{c}{Basic selection}} & $E_{vis}$ & \multicolumn{3}{c|}{94.6} & \multicolumn{3}{c}{51299} \\\cline{2-8} & $R_V$ & \multicolumn{3}{c|}{93.7} & \multicolumn{3}{c}{47849} \\ \hline \multirow{4}{*}{\tabincell{c}{Delayed \\signal \\selection}} & $N_M$ & \multicolumn{2}{c|}{74.4} & 4.4 & \multicolumn{2}{c|}{20739} & 1143 \\ \cline{2-8} & $\Delta L_M$ & \multicolumn{2}{c|}{67.0} & 4.4 & \multicolumn{2}{c|}{13796} & 994 \\ \cline{2-8} & $N_n$ & 48.4 & 17.9 & -- & 5403 & 6857 & -- \\ \cline{2-8} & $\Delta L_n$ & -- & 16.6 & -- & -- & 4472 & -- \\ \hline \multirow{3}{*}{\tabincell{c}{Time \\character \\selection}} & $R_{\chi}$ & 45.9 & 9.0 & 3.8 & 4326 & 581 & 716 \\ \cline{2-8} & $\Delta T$ & 28.3 & 7.7 & 2.4 & 121 & 18 & 30 \\ \cline{2-8} & $E_{1}, E_{2}$ & 27.4 & 7.3 & 2.2 & 1 & 0 & 0 \\ \hline \multicolumn{2}{c}{Total} & \multicolumn{3}{c}{36.9} & \multicolumn{3}{c}{1} \\ \Xhline{1.2pt} \end{tabular} \end{table*} The detection efficiencies after each selection criterion are listed in Table \ref{tab:eff}, where the numbers of remaining background events are also shown, from which the rejection power of each criterion can be seen. After applying these criteria, the total efficiency for $p\to \bar{\nu} K^+$\ is estimated to be $36.9\%$, while only one event, in sample 1, remains from the simulated 160 k atmospheric $\nu$\ events (corresponding to an exposure time of 44.5 years). The three samples contribute $27.4\%$, $7.3\%$ and $2.2\%$ to the detection efficiency, respectively. Considering the statistical error and the event weights, which account for the oscillation probability, the background level corresponds to 0.2 events when scaled to 10 years of JUNO data taking. \section{Sensitivities and Uncertainties } \label{Sec.5} The detection efficiency uncertainties of $p\to \bar{\nu} K^+$\ are estimated in Table \ref{tab:effuncertainty}. The statistical uncertainty is estimated to be 1.6\% from the MC simulation. So far, we have used an ideal setting for the position reconstruction (a 30 cm uncertainty on the energy deposition center position, without bias). Considering the performance of the vertex reconstruction algorithm, it is assumed that the residual bias of the position reconstruction of $p\to \bar{\nu} K^+$\ is 10~cm. In this case, the uncertainty of the efficiency caused by the 17.5 m volume cut is 1.7\%.
\begin{table}[] \centering \caption{\label{tab:effuncertainty} The detection efficiency uncertainties of the $p\to \bar{\nu} K^+$\ .} \begin{tabular}{cc} \Xhline{1.2pt} Source & Uncertainty \\ \hline Statistical & 1.6\% \\ \hline Position reconstruction & 1.7\% \\ \hline Nuclear model & 6.8\% \\ \hline Energy deposition model & 11.1\% \\ \hline Total & 13.2\% \\ \Xhline{1.2pt} \end{tabular} \end{table} Another important systematic uncertainty of the detection efficiency comes from the inaccuracy of the nuclear model, which influences the fractions of the particles accompanying $p\to \bar{\nu} K^+$\ . To estimate this uncertainty, another $p\to \bar{\nu} K^+$\ sample is simulated with the FSI and de-excitation processes of the residual nucleus disabled. After applying all the criteria, a difference in the detection efficiency of 6.8\% is found, which is taken as the estimate of the uncertainty from the nuclear model. The most dominant uncertainty comes from the energy deposition model. Due to the lack of studies of the behavior of sub-GeV particles, especially the quenching effect of hundreds-of-MeV $K^+$ in LAB-based LS, the simulation of the energy deposition process in the LS detector might be inaccurate. Therefore, the simulated shape of the hit time spectrum might differ from the real one. According to the study of KamLAND \cite{KamLAND}, this kind of uncertainty is estimated as $11.1\%$. We conservatively adopt this value, considering the similar detection method. Therefore, the total uncertainty of the detection efficiency is estimated as $13.2\%$, considering all the sources introduced above. The sensitivity to $p\to \bar{\nu} K^+$\ is expressed as \begin{eqnarray} \tau / B(p \to \bar{\nu} K^+) = \frac{N_p T \epsilon}{n_{90}}, \end{eqnarray} where $N_p = 6.75 \times 10^{33}$ is the total number of exposed protons (including the free and bound protons) in the JUNO central detector, $T$ is the running time, which is assumed to be 10 years to achieve an exposure of 200 kton-years, and $ \epsilon =36.9\%$ is the total signal efficiency. $n_{90}$ is the $90\%$ confidence level upper limit on the number of detected signal events; it depends on the number of observed events and the background level. According to the Feldman-Cousins method \cite{Feldman:1997qc}, $n_{90}$ is estimated as 2.61 when requiring that the observed number of events equals the expected background of 0.2 events in 10 years. Thus, the JUNO sensitivity to $p\to \bar{\nu} K^+$\ at 90\% C.L. with 200 kton-years would be \begin{equation} \tau / B(p \to \bar{\nu} K^+) > 9.6 \times 10^{33} \, {\rm years}. \label{eq:sensitivity} \end{equation} Compared to LENA \cite{LENA}, a representative liquid scintillator detector, the detection efficiency of JUNO for $p\to \bar{\nu} K^+$\ is relatively lower. This is reasonable considering that the present study is based on a full detector simulation of JUNO. Based on the background level of 0.02 events per year, the JUNO sensitivity as a function of running time is plotted in Fig.~\ref{fig:sensitivity}. It is clear that after 6 years of running (120 kton-years), JUNO will surpass the current best limit from the Super-K experiment. Moreover, the proton lifetime sensitivity of JUNO will reach $10^{34}$ years for the first time after 10.5 years of data taking. In case of no event observation after ten years, the $90\%$ C.L. limit on the proton lifetime would reach $1.1 \times 10^{34}$ years. In case of one event observation ($16.4\%$ probability), the corresponding limit would be $6.0 \times 10^{33}$ years.
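The limit in Eq.~(\ref{eq:sensitivity}) follows directly from the numbers quoted above; the short Python sketch below reproduces this arithmetic. The value $n_{90}=2.61$ is taken from the text and is not recomputed with the Feldman-Cousins construction here, and with the rounded inputs the result comes out at about $9.5\times10^{33}$ years rather than the quoted $9.6\times10^{33}$ years.
\begin{verbatim}
N_P   = 6.75e33   # exposed protons in the JUNO central detector
T_RUN = 10.0      # years of data taking (200 kton-years)
EFF   = 0.369     # total signal efficiency
N90   = 2.61      # 90% C.L. upper limit for 0.2 expected background events
                  # (Feldman-Cousins value quoted in the text)

tau_over_b = N_P * T_RUN * EFF / N90
print(f"tau/B > {tau_over_b:.2e} years at 90% C.L.")
\end{verbatim}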
\begin{figure}[] \includegraphics[height=6.5cm]{figure/Sensitivity.pdf} \caption{\label{fig:sensitivity} JUNO sensitivity to $p\to \bar{\nu} K^+$\ as a function of running time.} \end{figure} \section{Conclusion} \label{Sec.6} A simulation study to estimate the performance of the JUNO detector in searching for proton decay via $p\to \bar{\nu} K^+$\ has been presented. It is found that the expected detection efficiency for $p\to \bar{\nu} K^+$\ is 36.9\%, while the background is estimated to be 0.2 events in a ten-year exposure. Assuming that no proton decay events are observed, the sensitivity of JUNO to $p\to \bar{\nu} K^+$\ is estimated to be $9.6 \times 10^{33}$ years at 90\% C.L., based on an exposure of 200 kton-years. This is higher than the current best limit of $5.9\times10^{33}$ years obtained with 260 kton-years by the Super-K experiment \cite{Abe:2014mwa}. It shows that a liquid scintillator detector like JUNO will be competitive with the planned Hyper-Kamiokande \cite{Hyper-K} and DUNE \cite{DUNE} experiments. \section*{Acknowledgement} We are grateful for the ongoing cooperation from the China General Nuclear Power Group. This work was supported by the Chinese Academy of Sciences, the National Key R\&D Program of China, the CAS Center for Excellence in Particle Physics, Wuyi University, and the Tsung-Dao Lee Institute of Shanghai Jiao Tong University in China, the Institut National de Physique Nucl\'eaire et de Physique de Particules (IN2P3) in France, the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, the Italian-Chinese collaborative research program MAECI-NSFC, the Fond de la Recherche Scientifique (F.R.S-FNRS) and FWO under the ``Excellence of Science - EOS'' in Belgium, the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\`ogico in Brazil, the Agencia Nacional de Investigacion y Desarrollo in Chile, the Charles University Research Centre and the Ministry of Education, Youth, and Sports in Czech Republic, the Deutsche Forschungsgemeinschaft (DFG), the Helmholtz Association, and the Cluster of Excellence PRISMA+ in Germany, the Joint Institute of Nuclear Research (JINR) and Lomonosov Moscow State University in Russia, the joint Russian Science Foundation (RSF) and National Natural Science Foundation of China (NSFC) research program, the MOST and MOE in Taiwan, the Chulalongkorn University and Suranaree University of Technology in Thailand, and the University of California at Irvine in USA.
\section{Introduction} Consider the initial value problem \begin{equation} \label{sys} y''(t) = f(t,y(t)),\quad y(0) = y_0,\quad y'(0) = y'_0,\quad t\in [t_0,\, t_0+T] \end{equation} where $f:[t_0,t_0+T]\times \mbox{\rm I\hspace{-.15em}R}\to \mbox{\rm I\hspace{-.15em}R}$ and $f(t,y)$ is continuous with respect to $t$ and satisfies a Lipschitz condition with respect to $y$. For simplicity of presentation we state equation \eqref{sys} in scalar form. However, with a change of notation, the discussion also holds when equation \eqref{sys} is in vector form. One approach to numerically solve equation \eqref{sys} is to rewrite the equation as a system of first-order ODEs and use numerical methods to solve this system. There are many methods for solving systems of first-order ODEs (see, e.g., \cite{Burrage}, \cite{Hairer}). The disadvantage of this approach is that the sizes of the obtained systems are twice as large as the sizes of the original systems. Also, this approach does not take advantage of the special form of equation \eqref{sys}. It has been of interest to develop methods to solve equation \eqref{sys} without rewriting it as a system of first-order ODEs. Numerical methods for solving equation \eqref{sys} directly have been developed extensively in the literature (see, e.g., \cite{Coleman}, \cite{cong99},\cite{cong00}, \cite{cong01}, and \cite{cong91}). Among direct methods for solving \eqref{sys}, Runge-Kutta-Nystr\"{o}m (RKN) methods are preferable. An $s$-stage RKN method is defined by its Butcher-tableau as follows $$ \renewcommand{\arraystretch}{1.2} \begin{array}{c|c} \bm{c} & \bm{A} \\ \hline & \bm{b}^T\\ & \bm{d}^T \end{array},~~ \bm{A} = [a_{ij}] \in \mbox{\rm I\hspace{-.15em}R}^{s\times s},~~ \bm{b} = (b_1, ..., b_s)^T,~~ \bm{d} = (d_1, ..., d_s)^T,~~ \bm{c} = (c_1, ..., c_s)^T. $$ Given $y_{n}$ and $y_n'$, the approximations of $y(t_n)$ and $y'(t_n)$ at the $n$-th step, the approximations of $y(t_{n+1})$ and $y'(t_{n+1})$ at the $(n+1)$-th step defined by the $s$-stage RKN method $(\bm{c},\bm{A},\bm{b},\bm{d})$ are \begin{align} y_{n+1} &= y_n + hy'_n + h^2\sum_{j=1}^{s}b_jf(t_n+c_jh,Y_{n,j}), \label{RKNiterate}\\ y'_{n+1} &= y'_n + h\sum_{j=1}^{s}d_jf(t_n+c_jh,Y_{n,j}),\\ Y_{n,i} &= y_n + c_ihy'_n+ h^2\sum_{j=1}^{s}a_{ij}f(t_n+c_jh,Y_{n,j}),\qquad i=1, ..., s\label{RKNstage}. \end{align} For an implicit RKN method, the matrix $\bm{A}$ is nonsingular and \eqref{RKNstage} is a system of nonlinear equations. In this case, special techniques such as Newton methods or fixed-point iterations are required to solve equation \eqref{RKNstage}. For an explicit RKN method, $\bm{A}$ is strictly lower-triangular and $c_1=0$. The stage values $(Y_{n,i})_{i=1}^s$ can be easily computed from \begin{equation} \label{explicitRKstage} Y_{n,1} = y_n,\quad Y_{n,i} = y_n + hc_iy'_n + h^2\sum_{j=1}^{i-1}a_{ij}f(t_n+c_jh,Y_{n,j}),\quad i=2, ..., s. \end{equation} Recently, much work has been devoted to the study of functionally-fitted methods. These methods are developed to integrate an ODE exactly if the solution of the ODE is a linear combination of certain basis functions (see, e.g., \cite{Coleman}, \cite{Ambrosio}, \cite{Franco2}, \cite{nguyen2}, \cite{nguyen3}, \cite{nguyenfeptrk}, \cite{simos}). For example, trigonometric methods have been developed to solve periodic or nearly periodic problems (see, e.g., \cite{Ambrosio}, \cite{nguyen1}, \cite{simos}).
Numerical experiments have shown that trigonometrically-fitted methods are superior to classical Runge-Kutta methods for solving ODEs whose solutions are periodic or nearly periodic functions with known frequencies (see, e.g., \cite{Franco2}, \cite{nguyen1}, \cite{nguyen2}, \cite{nguyen3}, \cite{nguyenfeptrk}, \cite{Ozawa2}, \cite{simos}). In \cite{nguyenfeptrk}, a general class of functionally-fitted explicit pseudo two-step Runge-Kutta (FEPTRK) methods was developed. These methods can be considered as generalized explicit pseudo two-step Runge-Kutta (EPTRK) methods. Although EPTRK methods were originally developed for parallel computers (cf. \cite{congeptrk99}), it was shown in \cite{nguyenfeptrk} that (F)EPTRK methods work better than the DOPRI45 method, one of the most frequently used Runge-Kutta methods, even in sequential computing environments on non-stiff problems. In \cite{cong99}, a class of explicit pseudo two-step Runge-Kutta-Nystr\"{o}m (EPTRKN) methods for solving non-stiff problem \eqref{sys} was developed. In particular, it was proved in \cite{cong99} that an $s$-stage EPTRKN method has step order $p=s$ for any set of collocation points $(c_i)_{i=1}^s$. For super-convergence, it was shown in \cite{cong99} that an $s$-stage EPTRKN method can attain step accuracy order up to $p=s+2$ if the collocation parameters $(c_i)_{i=1}^s$ satisfy some orthogonality conditions. The accuracy study in \cite{cong99} was done by using a Taylor series expansion technique. It was shown in \cite{cong00} that these methods were more efficient than some of the most commonly used methods for solving non-stiff large-scale systems of ODEs. When solving non-stiff problems, it is the accuracy, not the stability, that controls the stepsizes of numerical methods. In this paper, we study a general class of functionally-fitted explicit pseudo Runge-Kutta-Nystr\"{o}m (FEPTRKN) methods. Similar to the FEPTRK methods in \cite{nguyenfeptrk}, these methods have the advantage of integrating an ODE exactly if the solution of the ODE is a linear combination of the basis functions on which the FEPTRKN methods are developed. Using a similar collocation framework as in \cite{nguyenfeptrk}, we have shown in this paper that an $s$-stage FEPTRKN method has stage order $p=s$ for any set of collocation parameters $(c_i)_{i=1}^s$. When the basis functions are polynomials, we recover the results for EPTRKN methods in \cite{cong99}. Moreover, we have proved in this paper that an $s$-stage FEPTRKN method can attain step order $p=s+3$. A consequence of this result is that an $s$-stage EPTRKN method can have step order $p=s+3$. This superconvergence result is new and allows us to construct methods of higher accuracy orders than those in \cite{cong99,cong01} for a given number of stages $s$. Using this superconvergence result, we have constructed $4$-stage, $5$-stage, and $6$-stage FEPTRKN methods having accuracy orders $7$, $8$, and $9$, respectively. Numerical experiments with the new methods have shown that these methods attain the accuracy orders indicated by the theory. Moreover, these FEPTRKN methods have performed much better than the corresponding EPTRKN methods when the basis functions are suitably chosen.
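To fix ideas before introducing the pseudo two-step schemes, the following Python sketch implements one step of a classical explicit RKN method as given by \eqref{RKNiterate}--\eqref{explicitRKstage} in scalar form. The particular 2-stage tableau used in the example is chosen for illustration only (it satisfies the second-order conditions $\sum_i b_i = 1/2$ and $\sum_i d_i c_i = 1/2$) and is not one of the methods constructed in this paper.
\begin{verbatim}
import numpy as np

def explicit_rkn_step(f, t_n, y_n, yp_n, h, c, A, b, d):
    """One step of an explicit RKN method (c, A, b, d) for y'' = f(t, y).

    A must be strictly lower triangular with c[0] = 0, so the stage
    values Y_i can be computed one after another (no nonlinear solve)."""
    s = len(c)
    F = np.zeros(s)                      # F[j] = f(t_n + c_j h, Y_j)
    for i in range(s):
        Y_i = y_n + c[i] * h * yp_n + h**2 * sum(A[i][j] * F[j] for j in range(i))
        F[i] = f(t_n + c[i] * h, Y_i)
    y_next  = y_n + h * yp_n + h**2 * np.dot(b, F)
    yp_next = yp_n + h * np.dot(d, F)
    return y_next, yp_next

# Example: y'' = -y with y(0) = 1, y'(0) = 0, using a simple 2-stage,
# second-order tableau (illustrative choice).
c = [0.0, 0.5]
A = [[0.0, 0.0], [0.125, 0.0]]
b = [0.0, 0.5]
d = [0.0, 1.0]
y, yp, t, h = 1.0, 0.0, 0.0, 0.01
for _ in range(100):
    y, yp = explicit_rkn_step(lambda t, y: -y, t, y, yp, h, c, A, b, d)
    t += h
print(y, np.cos(t))   # should be close to cos(t) for this small step size
\end{verbatim}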
\section{Explicit pseudo two-step RKN (EPTRKN) methods} \label{sec:eptrk} The iteration scheme of a conventional RKN method based on \eqref{RKNiterate}--\eqref{RKNstage} can be represented compactly as \begin{subequations} \begin{align} \label{eq6a} \bm{Y}_n &= \bm{e}y_n+h\bm{c}y'_n+h^2\bm{A}f(\bm{e}t_n+\bm{c}h,\bm{Y}_n) \in \mbox{\rm I\hspace{-.15em}R}^s,\\ y_{n+1} &= y_n + hy'_n+ h^2\bm{b}^Tf(\bm{e}t_n+\bm{c}h,\bm{Y}_n) \in \mbox{\rm I\hspace{-.15em}R},\\ y'_{n+1} &= y'_n + h\bm{d}^Tf(\bm{e}t_n+\bm{c}h,\bm{Y}_n) \in \mbox{\rm I\hspace{-.15em}R}, \end{align} \end{subequations} where $\bm{Y}_n:=(Y_{n,1},...,Y_{n,s})^T$ and $f(\bm{e}t_n+\bm{c}h,\bm{Y}_n) := (f(t_n+c_1h,Y_1), ..., f(t_n+c_sh,Y_s))^T$. For implicit RKN methods, one has to solve nonlinear equation \eqref{eq6a} for the stage vector $\bm{Y}_n$ and this requires extra computation time. It should be noted that all classical collocation RKN methods are implicit (cf. \cite{cong91}). It is well known that implicit RK(N) methods should only be used for solving stiff problems. For non-stiff problems, explicit methods are computationally cheaper as the stage vector $\bm{Y}_n$ can be computed easily. In \cite{cong99} Cong defined the iteration scheme of an $s$-stage explicit pseudo two-step RKN (EPTRKN) method as \begin{subequations} \label{EPTRKN} \begin{align} y_{n+1} &= y_{n} + hy'_n+ h^2\bm{b}^Tf(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}) \in \mbox{\rm I\hspace{-.15em}R},\\ y'_{n+1} &= y'_{n} + h\bm{d}^Tf(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}) \in \mbox{\rm I\hspace{-.15em}R},\\ \label{EPTRKNc} \bm{Y}_{n+1} &= \bm{e}y_{n+1} + h\bm{c}y'_{n+1}+h^2\bm{A}f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}) \in \mbox{\rm I\hspace{-.15em}R}^s, \end{align} \end{subequations} where, again, $y_n\approx y(t_n)$, $y'_n\approx y'(t_n)$, and $\bm{Y}_n=(Y_{n,1},...,Y_{n,s})^T\approx y(\bm{e}t_n+\bm{c}h)=(y(t_n+c_1h),...,y(t_n+c_sh))^T$. The advantage of EPTRKN methods over implicit RKN methods is: the stage vector $\bm{Y}_{n+1}$ in equation \eqref{EPTRKNc} can be computed explicitly using the values of $y_n$, $y_n'$, and $\bm{Y}_{n}$ from the previous step. The scheme needs $s$ sufficiently accurate starting values to define $\bm{Y}_0$, but these can be obtained by a conventional method. Since the components of $f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n})$ can be evaluated independently in parallel computing environments, EPTRKN methods are ideally suited for parallel computers. Note that on parallel computing environments, EPTRKN methods use only one function evaluation of $f(t_{n}+c_ih,Y_{n,i})$ per step. \section{Functionally-fitted explicit pseudo two-step RKN (FEPTRKN) methods} \label{sec:feptrkn} Let us first give a definition of functionally-fitted explicit pseudo two-step RKN (FEPTRKN) methods. This requires choosing a set of basis functions $\{u_i(t)\}_{i=1}^{s}$ that are sufficiently smooth and linearly independent and satisfy the integration scheme \eqref{EPTRKN} exactly. \begin{defn}[Functionally-fitted EPTRKN] An s-stage EPTRKN method is a functionally-fitted (or generalized collocation) explicit pseudo two-step RKN (FEPTRKN) method with respect to the basis functions $\{u_i(t)\}_{i=1}^s$ if the following relations are satisfied for all $i=1,...,s$: \begin{equation} \label{FEPTRKNdef} \begin{split} u_i(t+h)&=u_i(t) + hu_i'(t) + h^2\bm{b}^T(t,h)u''_i(\bm{e}t+\bm{c}h),\\ u'_i(t+h)&=u'_i(t) + h\bm{d}^T(t,h)u''_i(\bm{e}t+\bm{c}h),\\ u_i(\bm{e}t+\bm{c}h+\bm{e}h)&=\bm{e}u_i(t+h)+h\bm{c}u'_i(t+h) + h^2\bm{A}(t,h)u''_i(\bm{e}t+\bm{c}h). 
\end{split} \end{equation} \end{defn} Here, $\bm{b}^T(t,h)$ and $\bm{d}^T(t,h)$ denote the transposes of $\bm{b}(t,h)$ and $\bm{d}(t,h)$, respectively. This immediately yields linear systems to solve for $\bm{A}$, $\bm{b}$, and $\bm{d}$. These coefficients generally depend on both $t$ and $h$. The parameters $(c_i)_{i = 1}^s$ are distinct and will be chosen later. By construction, FEPTRKN methods are explicit. \subsection{Collocation condition} It is clear that not all bases of functions $\{u_i(t)\}_{i=1}^s$ satisfy \eqref{FEPTRKNdef}, i.e., there are some sets of functions $\{u_i(t)\}_{i=1}^s$ for which one cannot find $\bm{A}$, $\bm{b}$, and $\bm{d}$ so that condition \eqref{FEPTRKNdef} holds for these $\{u_i(t)\}_{i=1}^s$. The {\it collocation condition} defined below guarantees the existence of $\bm{A}$, $\bm{b}$ and $\bm{d}$ satisfying condition \eqref{FEPTRKNdef}, and, therefore, guarantees the existence of the corresponding FEPTRKN method. \begin{defn}[Collocation condition] A set of sufficiently smooth functions $\{u_1(t),u_2(t),...,$ $u_s(t)\}$ is said to satisfy the collocation condition if for any given $t\in [0,T)$, the matrix \begin{align*} \bm{F}(t,h)&:= \bigg{(}u''_{1}(\bm{e}t+\bm{c}h), u''_{2}(\bm{e}t+\bm{c}h),..., u''_{s}(\bm{e}t+\bm{c}h)\bigg{)} \end{align*} is nonsingular for almost every $h$ in the interval $[0,h_{\max}]$, $h_{\max} = const>0$. \end{defn} \begin{rem}\label{sysAb}{\rm Equations in \eqref{FEPTRKNdef} can be written as \begin{equation} \label{sysABeq} \begin{split} \Big{(}u_{1}(t+h)-u_1(t)-hu'_1(t), ..., u_{s}(t+h)-u_s(t)-hu'_s(t)\Big{)} &= h^2\bm{b}^T(t, h)\bm{F}(t, h), \\ \Big{(}u'_{1}(t+h)-u'_1(t), ..., u'_{s}(t+h)-u'_s(t)\Big{)} &= h\bm{d}^T(t, h)\bm{F}(t, h), \\ \Big{(}\bm{v}_1(t,h), ..., \bm{v}_s(t,h)\Big{)} &= h^2\bm{A}(t, h)\bm{F}(t, h), \end{split} \end{equation} where $$ \bm{v}_i(t,h):=u_{i}(\bm{e}t +\bm{e}h+\bm{c}h)-\bm{e}u_i(t+h)-h\bm{c}u'_i(t+h),\qquad i=1,...,s. $$ Hence, the existence of the coefficients $\bm{A}(t,h)$, $\bm{d}(t,h)$, $\bm{b}(t,h)$ at $t\in [t_0,t_0+T]$ and $h>0$ is guaranteed if $\bm{F}(t,h)$ is nonsingular. } \end{rem} \begin{rem}{\rm If $\{u_i(t)\}_{i=1}^s$ satisfies the collocation condition, then the linear function $\alpha + \beta t$ is not in the linear space $\mathop{\rm Span}\{u_1,u_2,...,u_s\}$ for any constants $\alpha$ and $\beta$. In other words, there do not exist constants $(\alpha_i)_{i=1}^s$ so that $\alpha_1u_1(t)+ \alpha_2u_2(t)+\cdots+\alpha_su_s(t) =\alpha + \beta t$ for some constants $\alpha$ and $\beta$. In particular, none of the functions $\{u_i(t)\}_{i=1}^s$ is a linear function. } \end{rem} \begin{rem}\rm When $\{u_1(t), u_2(t), ..., u_s(t)\}=\{t^2,t^3, ..., t^{s+1}\}$, we have $$ \bm{F}(t,h)= \bigg{(}2\bm{e}, 6(\bm{e}t+\bm{c}h),..., (s+1)s(\bm{e}t+\bm{c}h)^{s-1}\bigg{)}. $$ Subtracting between columns in the matrix $\bm{F}(t,h)$, we see that $$ \det \bm{F}(t,h)= \begin{vmatrix} 2 &6c_1h & \cdots &(s+1)s(c_1h)^{s-1}\\ 2 &6c_2h & \cdots &(s+1)s(c_2h)^{s-1}\\ \vdots & \vdots & \ddots & \vdots\\ 2 &6c_sh & \cdots &(s+1)s(c_sh)^{s-1} \end{vmatrix} = s!(s+1)! h^{\frac{s(s-1)}{2}} \begin{vmatrix} 1 &c_1 & \cdots &c_1^{s-1}\\ 1 &c_2 & \cdots &c_2^{s-1}\\ \vdots & \vdots & \ddots & \vdots\\ 1 &c_s & \cdots &c_s^{s-1} \end{vmatrix}. $$ Observe that $\det \bm{F}(t,h)$ is a multiple of the determinant of a Vandermonde matrix and, therefore, is nonzero if $h\not =0$ and $(c_i)_{i=1}^s$ are distinct. 
Hence, the collocation condition is always satisfied when the fitting functions $\{u_i(t)\}_{i=1}^s$ are the monomials. \end{rem} The practical implication of the proposed collocation condition is that for any given $t$ we can control the stepsize $h$ to get a nonsingular $\bm{F}(t,h)$ and then solve for $\bm{A},\bm{b}$, and $\bm{d}$. For a given set of functions $\{u_1(t), ..., u_s(t)\}$ which satisfies the collocation condition and a given $t\in [t_0,t_0+T)$, there will be at most countably many values of $h$ such that $\bm{F}(t,h)$ is singular. Thus, we have the following result. \begin{thm} The coefficients of a FEPTRKN method with respect to a set of basis functions that satisfy the collocation condition are uniquely determined almost everywhere on the integration domain. \end{thm} \subsection{The collocation solution} Let $(c_i)_{i=1}^s$ be distinct and $\{u_i\}_{i=1}^s$ be a basis satisfying the collocation condition. Define $$\bm{H}:= \mathop{\rm Span}\{1,t,u_1,...,u_s\} := \bigg\{ \alpha_0 + a_0 t+ \sum_{i=1}^sa_iu_i(t)\bigg{|} \alpha_0\in \mbox{\rm I\hspace{-.15em}R},\, a_i \in \mbox{\rm I\hspace{-.15em}R}, \, i=0,...,s\bigg\}. $$ Given $y_n$, $y_n'$, and $f(t_n+c_ih,Y_{n,i})$, $i=1,...,s$, we call $u(t)$ the {\em collocation solution} if $u(t)\in \bm{H}$ and the following equations hold \begin{equation} \label{interpol} \begin{split} u(t_n) &= y_n,\\ u'(t_n) &= y'_n,\\ u''(t_n+c_ih) &= f(t_n+c_ih,Y_{n,i}),\qquad i = 1, ..., s. \end{split} \end{equation} If such a $u(t)$ exists, the numerical solution, derivative of the solution, and the stage values at the $(n+1)$-th step are defined by \begin{equation} \label{iteratepoly} y_{n+1} := u(t_{n+1}),\quad y'_{n+1} := u'(t_{n+1}),\quad Y_{n+1,i} := u(t_{n+1}+c_ih),\qquad i=1,...,s, \quad t_{n+1}=t_n+h. \end{equation} Equations \eqref{interpol} and \eqref{iteratepoly} can be called a generalized collocation method for integrating equation \eqref{sys}. When $y_n$, $y'_n$, and $\bm{Y}_n= (Y_{n,1}, ..., Y_{n,s})^T$ are known, $u(t)$ can be constructed explicitly through a direct interpolation involving $y_n$, $y'_n$, and $f(\bm{e}t_n+\bm{c}h,\bm{Y}_n)$. The existence of such collocation solution is made more precise as follows. \begin{lem}\label{newinter} Suppose that the $s+2$ values $z_0,z'_0,z_1,z_2 ..., z_s$ are given and the pair $(t_n,h)$ is such that $\bm{F}(t_n,h)$ is nonsingular, then there exists an interpolation function $\varphi \in\bm{H}$ such that \begin{equation} \label{ae11} \varphi(t_n) = z_0,\quad \varphi'(t_n) = z'_0,\quad \varphi''(t_n+c_ih)=z_i, \qquad i=1,...,s. \end{equation} \end{lem} \begin{proof} Since $\varphi \in \bm{H}$, it can be represented in the form $$ \varphi(t)=\alpha_0+ a_0t+\sum_{i=1}^{s}a_iu_i(t). $$ Let $t_{n,i}:=t_n+c_ih$, $i=1,...,s$. Equation \eqref{ae11} can be written as \begin{equation} \label{mq1} \begin{pmatrix} 1&t_n&u_{1}(t_n) & u_{2}(t_n) & \cdots & u_{s}(t_n)\\ 0&1&u'_{1}(t_n) & u'_{2}(t_n) & \cdots & u'_{s}(t_n)\\ 0&0&u''_{1}(t_{n,1}) & u''_{2}(t_{n,1}) & \cdots & u''_{s}(t_{n,1})\\ \vdots&\vdots&\vdots & \vdots & \ddots & \vdots\\ 0&0&u''_{1}(t_{n,s}) & u''_{2}(t_{n,s}) & \cdots & u''_{s}(t_{n,s}) \end{pmatrix} \begin{pmatrix} \alpha_0\\ a_0\\ a_1\\ \vdots\\ a_s \end{pmatrix} = \begin{pmatrix} z_0\\ z'_0\\ z_1\\ \vdots\\ z_s \end{pmatrix}. 
\end{equation} Since the matrix in the left-hand side of \eqref{mq1} has the same determinant as $\bm{F}(t_n,h)$, the constants $(a_i)_{i=0}^s$ and $\alpha_0$ are uniquely determined from linear system \eqref{mq1} under the assumption that $\bm{F}(t_n,h)$ is nonsingular. Thus, the function $\varphi(t)\in \bm{H}$ satisfying \eqref{ae11} is determined uniquely. \end{proof} \begin{thm} \label{collofuns} The generalized collocation method \eqref{interpol}--\eqref{iteratepoly} is equivalent to the $s$-stage FEPTRKN method with coefficients $(\bm{c}$, $\bm{A}(t_n,h)$, $\bm{b}(t_n,h)$, $\bm{d}(t_n,h))$. \end{thm} \begin{proof} It follows from Lemma~\ref{newinter} that there exists a unique interpolation function $u(t)\in \bm{H}$ such that \begin{equation} \label{mr1} u(t_{n})=y_{n},\qquad u'(t_{n})=y'_{n},\qquad u''(t_{n}+c_ih)=f(t_{n}+ c_ih,Y_{n,i}),\qquad i=1,...,s. \end{equation} Thus, $u(t)$ satisfies \eqref{interpol} and is the collocation solution being searched for. To complete the proof, we have to prove that if we use $u(t)$ to generate the quantities in \eqref{iteratepoly}, then the following relations that define the FEPTRKN method hold true: \begin{equation} \label{eq10} \begin{split} y_{n+1} &= y_{n} +hy'_n+ h^2\bm{b}^T(t_{n},h)f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}),\\ y'_{n+1} &= y'_n+ h\bm{d}^T(t_{n},h)f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}),\\ \bm{Y}_{n+1} &= \bm{e}y_{n+1} +h\bm{c}y'_{n+1} + h^2\bm{A}(t_{n},h)f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}). \end{split} \end{equation} Since $u(t) \in \bm{H}$, it has the following representation \begin{equation} \label{phicomb} u(t)=\alpha_0 + a_0t + \sum_{i=1}^sa_iu_i(t). \end{equation} From the definition of a FEPTRKN method in \eqref{FEPTRKNdef}, the coefficients $(\bm{c}, \bm{A}, \bm{b},\bm{d})$ satisfy \begin{subequations} \begin{align} \label{eq12a} u_i(t_{n+1}) &= u_i(t_{n}) + h u'_i(t_{n})+ h^2\bm{b}^T(t_{n},h)u_i''(\bm{e}t_{n}+\bm{c}h),\\ u'_i(t_{n+1}) &= u'_i(t_{n}) + h\bm{d}^T(t_{n},h)u_i''(\bm{e}t_{n}+\bm{c}h),\\ u_i(\bm{e}t_n+\bm{e}h+\bm{c}h) &= \bm{e}u_i(t_n+h) + \bm{c}hu'_i(t_n+h)+ h^2\bm{A}(t_n,h)u''_i(\bm{e}t_n+\bm{c}h), \end{align} \end{subequations} for all $i = 1, ..., s$. From equation \eqref{eq12a} and the fact that $u(t)$ is a linear combination of functions $u_i(t)$, $1$, and $t$ (cf. \eqref{phicomb}), we obtain \begin{align} \label{cuoi2} u(t_{n+1})=u(t_{n})+hu'(t_{n})+ h^2\bm{b}^T(t_{n},h)u''(\bm{e}t_{n}+\bm{c}h). \end{align} Equation \eqref{iteratepoly}, equation \eqref{mr1}, and equation \eqref{cuoi2} imply that \begin{align*} \label{equivalent1} y_{n+1} = u(t_{n+1}) = y_n + hy'_n + h^2\bm{b}^T(t_{n},h)f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}). \end{align*} Thus, the first equation in \eqref{eq10} holds. The other equations in \eqref{eq10} are obtained similarly. Theorem \ref{collofuns} is proved. \end{proof} \section{Accuracy and stability properties} \label{sec:accuracy} \subsection{Accuracy order} The stage order and step order of a FEPTRKN method are defined as follows (cf. \cite{cong99}). \begin{defn} \label{def4.1} A FEPTRKN method is said to have step order $p=\min\{p_1,p_2\}$ and stage order $r=\min\{p_1,p_2,p_3\}$ if \begin{align*} y(t_{n+1})-y_{n+1}&=O(h^{p_1+1}),\\ y'(t_{n+1})-y'_{n+1}&=O(h^{p_2+1}),\\ y(\bm{e}t_{n+1}+\bm{c}h)-\bm{Y}_{n+1}&=O(h^{p_3+1}), \end{align*} given that $y_{n} = y(t_{n})$, $y'_{n} = y'(t_{n})$, and $y(\bm{e}t_{n}+\bm{c}h)-\bm{Y}_{n}=O(h^{p_3+1})$. \end{defn} \begin{rem}\label{smooth1}\rm We recall this handy result from \cite[Remark 3.1]{nguyen2}. 
If a function $f(t)\in C^{m+n}([t_0,\,t_0+T])$ satisfies $f(t_i)=0,i=1,...,n$, then there exists $g(t)\in C^{m}([t_0,\,t_0+T])$ such that $f(t)=g(t)\prod_{i=1}^n(t-t_i)$. \end{rem} \begin{thm}\label{genorder} An s-stage FEPTRKN method has stage order $r=s$ and step order at least $p=s$. \end{thm} \begin{proof} Without loss of generality we assume that $t_{n}=0$. Let $u(t)\in \bm{H}$ be the collocation solution satisfying \begin{equation} \label{ae9} u(t_{n})=y_{n},\quad u'(t_{n})=y'_{n},\quad u''(\bm{c}h)=f(\bm{c}h,\bm{Y}_{n}). \end{equation} This collocation solution exists by Lemma~\ref{newinter} and we have $$ y_{n+1} = u(t_{n+1}),\quad y'_{n+1}=u'(t_{n+1}),\quad \bm{Y}_{n+1}=u(\bm{e}h+\bm{c}h). $$ From the equation $y''(t) = f(t,y(t))$, $\forall t\in [t_0,t_0 + T]$, the third equation in \eqref{ae9}, the local assumption $\bm{Y}_{n}-y(\bm{c}h) = O(h^{s+1})$, and Taylor expansions of $f(t,y)$ with respect to $y$, one gets \begin{equation} \label{ae10} u''(\bm{c}h)-y''(\bm{c}h)= f(\bm{c}h,\bm{Y}_{n})-f(\bm{c}h,y(\bm{c}h)) = O(h^{s+1}). \end{equation} Let $$ R(t):=u(t)-y(t),\qquad 0\le t\le t_0 + T. $$ From equations \eqref{ae9} and \eqref{ae10} one gets $$ R(0)=0,\qquad R'(0)=0,\qquad R''(c_ih)=u''(c_ih) - y''(c_ih) =O(h^{s+1}),\qquad i=1,...,s. $$ Since $R''(t)$ is sufficiently smooth and $R''(c_ih)=O(h^{s+1})$, $i=1,...,s$, there exists a sufficiently smooth function $w(t)$ such that $$ R''(t)=w(t)+O(h^{s+1}),\qquad w(c_ih)=0,\qquad i=1,..,s. $$ Since $w(c_ih)=0$, $i=1,...,s$, it follows from Remark~\ref{smooth1} that there exists a sufficiently smooth function $g(t)$ such that $$ w(t) = g(t)\prod_{i=1}^s(t-c_ih). $$ Thus, \begin{equation} \label{eq14} R''(t) = g(t)\prod_{i=1}^s(t-c_ih) + O(h^{s+1}). \end{equation} This and the relation $R'(0)=0$ imply $$ R'(x) = \int_0^x R''(t)\, dt = \int_0^x \bigg(g(t)\prod_{i=1}^s(t-c_ih) + O(h^{s+1})\bigg)\, dt. $$ Letting $x = h$ in the above equality and using the substitution $t=\xi h$, we get \begin{equation} \label{eq16} R'(h) = h^{s+1}\int_0^1 g(\xi h)\prod_{i=1}^s(\xi-c_i)\, d\xi + \int_0^{h} O(h^{s+1})\, dt. \end{equation} This and the relation $\int_0^{h} O(h^{s+1})\, dt = O(h^{s+2})$ imply \begin{equation} \label{ae6} y'_{n+1}-y'(t_{n+1}) =u'(t_{n+1})-y'(t_{n+1})=R'(h) = O(h^{s+1}). \end{equation} From equation \eqref{eq14} and the equalities $R(0)=0$ and $R'(0)=0$, one gets \begin{equation*} R(x)=\int_0^x\int_0^t R''(\xi )\,d\xi dt =\int_0^x\int_0^t\bigg{(}g(\xi)\prod_{i=1}^s(\xi -c_ih)+ O(h^{s+1})\bigg{)}\, d\xi dt . \end{equation*} Substituting $x:=\omega h$ in the above equality and using substitutions, we obtain \begin{equation} \label{intererror} \begin{split} u(\omega h)-y(\omega h) = R(\omega h) &= h^{s+2}\int_0^\omega\int_0^t g(\xi h)\prod_{i=1}^s(\xi -c_i)\,d\xi dt +\int_0^{\omega h}\int_0^t O(h^{s+1})\, d\xi dt. \end{split} \end{equation} This and the relation $\int_0^{\omega h}\int_0^t O(h^{s+1})\, d\xi dt= O(h^{s+3})$, $\omega=const$, imply \begin{align} \label{eq14b} \bm{Y}_{n+1}-y(\bm{e}h+\bm{c}h) &=u(\bm{e}h+\bm{c}h)-y(\bm{e}h+\bm{c}h)=O(h^{s+2}),\\ \label{eq14c} y_{n+1}-y(h) &=u(h)-y(h)=O(h^{s+2}). \end{align} Equations \eqref{ae6}, \eqref{eq14b}, \eqref{eq14c}, and Definition \ref{def4.1} imply that the method has step order $p=s$ and stage order $r=s$. Theorem \ref{genorder} is proved. \end{proof} \begin{remark}{\rm \label{remarka4} From \eqref{eq14b} we have \begin{equation} \bm{Y}_n - y(\bm{e}t_n +\bm{c}h) = O(h^{s+2}),\qquad n=1,2,...\,. 
\end{equation} Thus, if $\bm{Y}_0$ is computed with a local accuracy order of $s+2$ at the initial step, then we have \begin{equation} \label{aex1} \bm{Y}_n - y(\bm{e}t_n +\bm{c}h) = O(h^{s+2}),\qquad n=0,1,...\,. \end{equation} } \end{remark} \subsection{Superconvergence} \begin{thm}\label{super1} If an $s$-stage FEPTRKN method has collocation parameters $(c_i)_{i = 1}^s$ that satisfy \begin{equation} \label{eq16a} \int_{0}^1\prod_{i=1}^s(\xi-c_i)\,d\xi = 0, \end{equation} then the method has step order $p=s+1$ and stage order $r=s+1$. \end{thm} \begin{proof}From equation \eqref{eq16} and the relation $\int_0^h O(h^{s+1})\, dt = O(h^{s+2})$, we have $$ y'_{n+1}-y'(t_{n+1}) = u'(t_{n+1})-y'(t_{n+1})=R'(h)=h^{s+1}\int_0^1g(\xi h)\prod_{i=1}^s(\xi -c_i)\,d\xi + O(h^{s+2}). $$ Use the Taylor expansion $g(\xi h)=g(0)+O(\xi h)$ and orthogonality condition \eqref{eq16a} to get \begin{equation} \label{eq16c} y'_{n+1} -y'(t_{n+1})=h^{s+1}g(0)\int_0^1\prod_{i=1}^s(\xi -c_i)\,d\xi + O(h^{s+2}) = O(h^{s+2}). \end{equation} The conclusion of the theorem follows from equations \eqref{eq14b}, \eqref{eq14c}, \eqref{eq16c}, and Definition \ref{def4.1}. \end{proof} \begin{thm}\label{super2} If an $s$-stage FEPTRKN method has collocation parameters $(c_i)_{i = 1}^s$ that satisfy \begin{equation} \label{eq17} \int_{0}^1 \xi^k\prod_{i=1}^s(\xi-c_i)\, d\xi = 0,\qquad k=0,1, \end{equation} then the method has step order $p=s+2$. \end{thm} \begin{proof} From the relation $\bm{Y}_n - y(\bm{c}h)= O(h^{s+2})$ (see Remark \ref{remarka4}) and Taylor expansions of $f(t,y)$, one gets \begin{equation} \label{mq2} f(\bm{c}h,\bm{Y}_{n})-f(\bm{c}h,y(\bm{c}h))= O(h^{s+2}). \end{equation} Let $R(t)$ be defined as in Theorem \ref{genorder}, i.e., $R(t):=u(t) - y(t)$. Then, we have $R(0)=0$ and $R'(0)=0$. It follows from \eqref{mq2} that $$ R''(c_i h) = u''(c_i h) - y''(c_i h) = f(c_ih, Y_{n,i})-f(c_ih,y(c_ih))= O(h^{s+2}), \qquad i=1,...,s. $$ Thus, there exists a sufficiently smooth function $w(t)$ such that $w(c_ih)=0$, $i=1,...,s$, and \begin{equation} R''(t)=w(t) +O(h^{s+2}),\qquad 0\le t \le t_0 +T. \end{equation} Again, there exists a sufficiently smooth function $g(t)$ such that $w(t)=g(t)\prod_{i=1}^s(t-c_ih)$. Therefore, \begin{equation} \label{aex2} R''(t) = g(t)\prod_{i=1}^s(t-c_ih) + O(h^{s+2}),\qquad 0\le t \le t_0 +T. \end{equation} Equation \eqref{aex2} and the relation $R'(0)=0$ imply \begin{equation} \label{ae2} R'(x)=\int_0^xR''(\xi)\,d\xi =\int_0^x\bigg{(} g(\xi)\prod_{i=1}^s(\xi -c_ih)+O(h^{s+2})\bigg{)}\,d\xi. \end{equation} Setting $x:=h$ in the above equality, we obtain \begin{equation} \label{intererror2} u'(h) - y'(h)=R'(h) = h^{s+1}\int_{0}^{1} g(\xi h)\prod_{i=1}^s(\xi -c_i)\,d\xi + \int_0^{h}O(h^{s+2})\, d\xi. \end{equation} Using the Taylor expansion $g(\xi h)=g(0)+\xi hg'(0)+O(h^2)$ and the relation $\int_0^{h}O(h^{s+2})\, d\xi = O(h^{s+3})$, we have \begin{align*} u'(h)-y'(h)&=h^{s+1}g(0)\int_0^{1} \prod_{i=1}^s(\xi -c_i)\,d\xi +h^{s+2}g'(0) \int_0^{1}\xi \prod_{i=1}^s(\xi -c_i)\,d\xi + O(h^{s+3}). \end{align*} This and the orthogonality conditions \eqref{eq17} imply \begin{equation} \label{ae4} y'_{n+1} - y'(t_{n+1}) = u'(h)-y'(h) = O(h^{s+3}). \end{equation} Similarly, from equation \eqref{aex2} and the relations $R(0) = 0$ and $R'(0) = 0$, one gets \begin{equation} \label{eq19} R(x) = \int_0^x\int_0^t R''(\xi) \,d\xi dt= \int_0^x\int_0^t\bigg{(} g(\xi)\prod_{i=1}^s(\xi -c_ih)+O(h^{s+2})\bigg{)}\,d\xi dt.
\end{equation} From equation \eqref{eq19}, the relation $\int_0^h\int_0^t O(h^{s+2})\, d\xi dt = O(h^{s+4})$, Fubini's Theorem, the Maclaurin expansion $g(\xi h) = g(0) + O(h)$, and the orthogonality conditions \eqref{eq17}, one gets \begin{equation} \label{ae7x} \begin{split} y_{n+1} - y(t_{n+1}) =R(h) &= h^{s+2}\int_0^1\int_0^t g(\xi h)\prod_{i=1}^s(\xi -c_i)\,d\xi dt +O(h^{s+4})\\ &= h^{s+2}\int_0^1\int_\xi^1 g(\xi h)\prod_{i=1}^s(\xi -c_i)\,dt d\xi +O(h^{s+4})\\ &= h^{s+2}\int_0^1 g(\xi h)(1-\xi)\prod_{i=1}^s(\xi -c_i)\, d\xi +O(h^{s+4}) \\ &= h^{s+2}g(0)\bigg(\int_0^1 \prod_{i=1}^s(\xi -c_i)\, d\xi - \int_0^1 \xi\prod_{i=1}^s(\xi -c_i)\, d\xi\bigg) +O(h^{s+3}) \\ &= O(h^{s+3}). \end{split} \end{equation} From \eqref{ae4}, \eqref{ae7x}, and Definition \ref{def4.1}, one concludes that the method has step order $p=s+2$. Theorem \ref{super2} is proved. \end{proof} \begin{rem} From \eqref{intererror} with $\omega=1+c_i$, $i=1,...,s$, and the relation $\int_0^{\omega h}\int_0^t O(h^{s+1})\, d\xi dt = O(h^{s+3})$, we have \begin{equation} \begin{split} \bm{Y}_{n+1}-y(\bm{e}t_{n+1}+\bm{c}h) &= u(\bm{e}h +\bm{c}h) - y(\bm{e}h +\bm{c}h) \\ &= h^{s+2}C_{n+1}\int_{\bm{0}}^{\bm{e}+\bm{c}}\int_0^t \prod_{i=1}^s(\xi -c_i)\,d\xi dt + O(h^{s+3}),\qquad n=0,1,...\,. \end{split} \end{equation} If $\bm{Y}_0$ is computed at a local accuracy order of $s+2$ at the initial step of computation, i.e., $\bm{Y}_0 - y(\bm{e}t_0 + \bm{c}h) = O(h^{s+3})$, then we will have \begin{equation} \label{stagerrorx} \bm{Y}_{n}-y(\bm{e}t_{n} + \bm{c}h)= h^{s+2}C_{n}\int_{\bm{0}}^{\bm{e}+\bm{c}}\int_0^t \prod_{i=1}^s(\xi -c_i)\,d\xi dt + O(h^{s+3}),\qquad n=0,1,...\, . \end{equation} \end{rem} \begin{thm} \label{super3} If an $s$-stage FEPTRKN method has collocation parameters $(c_i)_{i = 1}^s$ that satisfy \begin{equation} \label{eq20} \int_{0}^1\int_{0}^{1+x}\int_0^t\prod_{i=1}^s(\xi-c_i)\,d\xi dt dx = 0,\qquad \int_{0}^1 \xi^k\prod_{i=1}^s(\xi-c_i)\,d\xi = 0,\quad k=0,1,2, \end{equation} then the method has step order $p=s+3$. \end{thm} \begin{proof} Again, without loss of generality, we assume that $t_n = 0$. Let $R(t)$ be defined as in the proofs of Theorems \ref{genorder} and \ref{super2}, i.e., $R(t)=u(t) -y(t)$. Using Taylor expansions of $f(t,y)$ and \eqref{stagerrorx}, one gets \begin{equation*} \begin{split} R''(\bm{c}h) = u''(\bm{c}h) - y''(\bm{c}h)= f(\bm{c}h,\bm{Y}_{n})-f(\bm{c}h,y(\bm{c}h)) = h^{s+2} C \int_{\bm{0}}^{\bm{e}+\bm{c}}\int_0^t\prod_{i=1}^s(\xi -c_i)\,d\xi dt + O(h^{s+3}), \end{split} \end{equation*} where $C=C_n\frac{\partial f}{\partial y}(0,y(0))$. Thus, we have $$ R(0) = 0,\qquad R'(0) = 0,\qquad R''(c_ih) = C \int_{0}^{h+c_ih}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt + O(h^{s+3}),\qquad i=1,...,s. $$ Again, since $R''(t)$ is sufficiently smooth, there exists a smooth function $w(t)$ such that \begin{equation} \label{a11} R''(x) = w(x) + C\int_{0}^{h+x}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt + O(h^{s+3}),\qquad w(c_ih) = 0,\qquad i=1,...,s. \end{equation} Since $w(c_ih)=0$, $i=1,...,s$, there exists a sufficiently smooth function $g(x)$ such that (cf. Remark~\ref{smooth1}) $$ w(x) = g(x)\prod_{i=1}^s(x-c_ih). $$ This and \eqref{a11} imply \begin{equation} \label{eq22} \begin{split} R''(x)&= g(x)\prod_{i=1}^s(x -c_ih) + C\int_{0}^{h+x}\int_0^t\prod_{i=1}^s(\xi -c_ih) \,d\xi dt + O(h^{s+3}).
\end{split} \end{equation} Integrate equation \eqref{eq22} and use the relation $R'(0)=0$ to get \begin{equation*} \begin{split} R'(x)&=\int_0^xR''(\tau)d\tau \\ &=\int_0^x\bigg{(} g(\tau)\prod_{i=1}^s(\tau -c_ih) + C\int_{0}^{h+\tau}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt + O(h^{s+3}) \bigg{)}d\tau\\ &=\int_0^x g(\tau)\prod_{i=1}^s(\tau -c_ih)\, d\tau + C\int_0^x\int_{0}^{h+\tau}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt d\tau + \int_0^xO(h^{s+3})\, d\tau. \end{split} \end{equation*} Setting $x:= h$ in the above equality, we obtain \begin{equation} \label{eq23} \begin{split} u'(h)-y'(h) = R'(h) &= h^{s+1}\int_0^1 g(h\xi) \prod_{i=1}^s(\xi -c_i)\, d\xi \\ &\, + Ch^{s+3}\int_0^1\int_{0}^{1+\tau}\int_0^t\prod_{i=1}^s(\xi -c_i)\,d\xi dt d\tau + \int_0^{h}O(h^{s+3})\,d\tau. \end{split} \end{equation} Using the Taylor expansion $g(\xi h)=\sum_{k=0}^2 \frac{g^{(k)}(0)}{k!}(\xi h)^k +O(h^3)$, we get \begin{align} \label{mw1} h^{s+1}\int_0^1g(h\xi) \prod_{i=1}^s(\xi -c_i)\, d\xi = \sum_{k=0}^2 h^{s+1+k}\frac{g^{(k)}(0)}{k!} \int_0^{1}\xi^k \prod_{i=1}^s(\xi -c_i)d\xi + O(h^{s+4}). \end{align} Equations \eqref{eq23}--\eqref{mw1} and the relation $\int_0^{h}O(h^{s+3})\,d\tau=O(h^{s+4})$ imply \begin{equation*} \begin{split} u'(h)-y'(h) &= \sum_{k=0}^2 h^{s+1+k}\frac{g^{(k)}(0)}{k!} \int_0^{1}\xi^k \prod_{i=1}^s(\xi -c_i)d\xi \\ &\, + Ch^{s+3}\int_0^1\int_{0}^{1+\tau}\int_0^t\prod_{i=1}^s(\xi -c_i)\,d\xi dt d\tau + O(h^{s+4}). \end{split} \end{equation*} This and the orthogonality conditions in \eqref{eq20} imply \begin{equation} \label{ae7} y'_{n+1} - y'(t_{n+1}) = u'(h)-y'(h) = O(h^{s+4}). \end{equation} From equation \eqref{eq22} and the relations $R(0)=0$ and $R'(0)=0$, one gets \begin{equation} \begin{split} R(x) &=\int_0^x\int_0^\zeta R''(\tau)d\tau d\zeta \\ &=\int_0^x\int_0^\zeta \bigg{(} g(\tau)\prod_{i=1}^s(\tau -c_ih) + C\int_{0}^{h+\tau}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt + O(h^{s+3}) \bigg{)}d\tau d\zeta. \end{split} \end{equation} This and the relation $\int_0^h\int_0^\zeta O(h^{s+3})\, d\tau d\zeta = O(h^{s+5})$ imply \begin{equation} \label{eq24} \begin{split} R(h) &=\int_0^h \int_0^\zeta g(\tau)\prod_{i=1}^s(\tau -c_ih)\, d\tau d\zeta + C\int_0^h\int_0^\zeta\int_{0}^{h+\tau}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt d\tau d\zeta + O(h^{s+5}). \end{split} \end{equation} By substitutions, one gets \begin{equation} \label{eq25} \int_0^h\int_0^\zeta\int_{0}^{h+\tau}\int_0^t\prod_{i=1}^s(\xi -c_ih)\,d\xi dt d\tau d\zeta = h^{s+4} \int_0^1\int_0^\zeta\int_{0}^{1+\tau}\int_0^t\prod_{i=1}^s(\xi -c_i)\,d\xi dt d\tau d\zeta = O(h^{s+4}). \end{equation} From Fubini's Theorem one obtains \begin{equation} \label{eq26} \begin{split} \int_0^h \int_0^\zeta g(\tau)\prod_{i=1}^s(\tau -c_ih)\, d\tau d\zeta &= \int_0^h \int_\tau^h g(\tau)\prod_{i=1}^s(\tau -c_ih)\, d\zeta d\tau \\ &= \int_0^h g(\tau)(h-\tau)\prod_{i=1}^s(\tau -c_ih)\,d\tau. \end{split} \end{equation} Using the substitution $\tau=h\xi$ and the Maclaurin expansion $g(t) = \beta_0 +\beta_1t + O(t^2)$, one gets \begin{equation} \label{eq27} \begin{split} \int_0^h g(\tau)(h-\tau)\prod_{i=1}^s(\tau -c_ih)\,d\tau = & h^{s+2}\int_0^1 g(\xi h)(1-\xi)\prod_{i=1}^s(\xi -c_i)\,d\xi\\ =& \sum_{k=0}^1 \beta_k h^{s+2+k} \int_0^1 \xi^k\prod_{i=1}^s(\xi -c_i)\,d\xi \\ & - \sum_{k=0}^1 \beta_k h^{s+2+k} \int_0^1 \xi^{k+1}\prod_{i=1}^s(\xi -c_i)\,d\xi + O(h^{s+4}).
\end{split} \end{equation} From equations \eqref{eq26}--\eqref{eq27} and the orthogonality conditions in \eqref{eq20}, one obtains $$ \int_0^h \int_0^\zeta g(\tau)\prod_{i=1}^s(\tau -c_ih)\, d\tau d\zeta = O(h^{s+4}). $$ This and equations \eqref{eq24}--\eqref{eq25} imply \begin{equation} \label{ae8} y_{n+1} - y(t_{n+1}) = u(h) - y(h) = R(h) = O(h^{s+4}). \end{equation} It follows from \eqref{ae7}, \eqref{ae8}, and Definition \ref{def4.1} that the step order of the method is $p = s+3$. Theorem \ref{super3} is proved. \end{proof} \begin{rem}\rm From Fubini's Theorem, we have \begin{align*} \int_{0}^1\int_{0}^{1+x}\int_0^t\prod_{i=1}^s(\xi-c_i)\, d\xi dt dx &= \int_{0}^1\int_{0}^{1+x}\int_\xi^{1+x}\prod_{i=1}^s(\xi-c_i)\, dt d\xi dx \\ &= \int_{0}^1\int_{0}^{1+x}(1+x - \xi)\prod_{i=1}^s(\xi-c_i)\, d\xi dx. \end{align*} If \eqref{eq20} holds, then by Fubini's Theorem one gets \begin{align*} \int_{0}^1\int_{0}^{1}(1+x - \xi)\prod_{i=1}^s(\xi-c_i)\, d\xi dx &= \int_{0}^1\int_{0}^{1}(1+x - \xi)\prod_{i=1}^s(\xi-c_i)\, dx d\xi\\ &= \int_{0}^1\bigg(\frac{3}{2}- \xi\bigg)\prod_{i=1}^s(\xi-c_i)\, d\xi\\ &= \frac{3}{2}\int_{0}^1\prod_{i=1}^s(\xi-c_i)\, d\xi - \int_{0}^1\xi\prod_{i=1}^s(\xi-c_i)\, d\xi = 0. \end{align*} Moreover, using Fubini's Theorem, one gets \begin{align*} \int_{0}^1\int_{1}^{1+x}(1+x - \xi)\prod_{i=1}^s(\xi-c_i)\, d\xi dx &= \int_{1}^2\int_{\xi-1}^{1}(1+x - \xi)\prod_{i=1}^s(\xi-c_i)\, dx d\xi\\ &= \int_{1}^2\frac{(\xi - 2)^2}{2}\prod_{i=1}^s(\xi-c_i)\, d\xi. \end{align*} Therefore, condition \eqref{eq20} is equivalent to \begin{equation} \label{v48} \int_{1}^2(\xi- 2)^2\prod_{i=1}^s(\xi-c_i)\, d\xi = 0,\qquad \int_{0}^1 \xi^k\prod_{i=1}^s(\xi-c_i)\,d\xi = 0,\qquad k=0,1,2. \end{equation} In practice, we use \eqref{v48} instead of \eqref{eq20} to find the collocation parameters $(c_i)_{i=1}^s$. \end{rem} \subsection{Stability} \label{subsec:stability} Applying a FEPTRKN method given by $(\bm{c},\bm{A},\bm{b},\bm{d})$ to the Dahlquist test equation $$ y'' = \lambda y,\qquad y(0) = y_0,\qquad y'(0) = y'_0, $$ we get the iterations \begin{equation} \label{v49} \begin{split} \bm{Y}_{n+1} &= \bm{e}y_{n+1} + \bm{c}hy'_{n+1}+ \lambda h^2\bm{A}(t_n,h)\bm{Y}_n,\\ y_{n+1} &= y_n + hy'_n+ \lambda h^2\bm{b}^T(t_n,h)\bm{Y}_n,\\ y'_{n+1} &= y'_n + \lambda h\bm{d}^T(t_n,h)\bm{Y}_n. \end{split} \end{equation} This implies \begin{equation} \label{v49.1} \begin{pmatrix} \bm{Y}_{n+1}\\ y_{n+1}\\ hy'_{n+1} \end{pmatrix} = \bm{M}_{(t_n,h)}(z) \begin{pmatrix} \bm{Y}_{n}\\ y_{n}\\ hy'_{n} \end{pmatrix},\qquad z = \lambda h^2, \end{equation} where $$ \bm{M}_{(t_n, h)}(z)= \begin{pmatrix} z\big[\bm{A}(t_n,h) + \bm{e}\bm{b}^T(t_n,h) + \bm{c} \bm{d}^T(t_n,h)\big]&\bm{e} & \bm{e}+\bm{c}\\ z\bm{b}^T(t_n,h) &1 & 1\\ z\bm{d}^T(t_n,h) &0 & 1\\ \end{pmatrix}. $$ Observe that the formula of our amplification matrix $\bm{M}_{(t_n,h)}(z)$ differs from that in~\cite{cong99,cong01}, where they instead use the formulation \begin{equation} \label{v50} \begin{pmatrix} \bm{Y}_{n}\\ y_{n+1}\\ hy'_{n+1} \end{pmatrix} = \tilde{\bm{M}}(z) \begin{pmatrix} \bm{Y}_{n-1}\\ y_{n}\\ hy'_{n} \end{pmatrix},\qquad \tilde{\bm{M}}(z) = \begin{pmatrix} z\bm{A}&\bm{e} & \bm{c} \\ z^2\bm{b}^T\bm{A} &1+z\bm{b}^T\bm{e} & 1+z\bm{b}^T\bm{c}\\ z^2\bm{d}^T\bm{A} & z\bm{d}^T\bm{e} & 1+z\bm{d}^T\bm{c}\\ \end{pmatrix},\qquad z=\lambda h^2. \end{equation} Note that in~\cite{cong99,cong01} the coefficients $\bm{A}(t,h)$, $\bm{b}(t,h)$ and $\bm{d}(t,h)$ are independent of $t$ and $h$.
From \eqref{v49}, we have $$ \begin{pmatrix} \bm{Y}_{n+1}\\ y_{n+1}\\ hy'_{n+1} \end{pmatrix} = \begin{pmatrix} z\bm{A} & \bm{e}& \bm{c} &\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \bm{Y}_{n}\\ y_{n+1}\\ hy'_{n+1} \end{pmatrix}. $$ This and equation \eqref{v50} imply \begin{equation} \label{v51} \begin{pmatrix} \bm{Y}_{n+1}\\ y_{n+1}\\ hy'_{n+1} \end{pmatrix} = \begin{pmatrix} z\bm{A} & \bm{e}& \bm{c} &\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \tilde{\bm{M}}(z) \begin{pmatrix} z\bm{A} & \bm{e}& \bm{c} &\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}^{-1} \begin{pmatrix} \bm{Y}_{n}\\ y_{n}\\ hy'_{n} \end{pmatrix}. \end{equation} From \eqref{v49.1} and \eqref{v51}, we have the similarity transformation $$ \bm{M}(z) = \begin{pmatrix} z\bm{A} & \bm{e}& \bm{c} &\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \tilde{\bm{M}}(z) \begin{pmatrix} z\bm{A} & \bm{e}& \bm{c} &\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}^{-1}. $$ This means that although the two formulations lead to different amplification matrices, the spectral radii of these matrices are the same. Note that for conventional EPTRKN methods, we have $\bm{d}^T\bm{e}=1$ and $\bm{b}^T\bm{e}=\frac{1}{2}$ and, therefore, the matrix $\tilde{\bm{M}}(z)$ can be simplified further. However, for general basis functions, the coefficients $\bm{A}$, $\bm{b}$, and $\bm{d}$ of a FEPTRKN method depend on both $t$ and $h$ and we do not have $\bm{d}^T\bm{e}=1$ and $\bm{b}^T\bm{e}=\frac{1}{2}$. The coefficients $\bm{b}(t_{n},h)$, $\bm{d}(t_{n},h)$, and $\bm{A}(t_{n},h)$ are independent of $t_n$ for the so-called class of {\em separable} basis functions $\{u_i\}_{i=1}^s$, i.e., if we have (see~\cite[Section 3]{nguyen14}) $$ \bm{u}(x+y) = \mbox{\boldmath$\cal F$}(y)\bm{u}(x), \quad \forall x,y \in \mbox{\rm I\hspace{-.15em}R} $$ where $\bm{u} = (1,t, u_1, ..., u_s)^T$ and \mbox{\boldmath$\cal F$} is a square matrix of size $(s+2)\times(s+2)$ with entries that are univariate functions. This class includes the most common functions such as polynomial, exponential, and trigonometric functions. In this particular class, the stability region of the method for a given stepsize $h$ can be defined as \begin{equation} \label{stabregion} S_h := \{z\in (-\infty,0]: \rho(\bm{M}_h(z))\leq 1\} \end{equation} where $\rho(\bm{M}_h(z))$ denotes the spectral radius of $\bm{M}_h(z)$ and \begin{equation} \label{ae1} \bm{M}_{(h)}(z)= \begin{pmatrix} z\big[\bm{A}(h) + \bm{e}\bm{b}^T(h) + \bm{c} \bm{d}^T(h)\big]&\bm{e} & \bm{e}+\bm{c}\\ z\bm{b}^T(h) &1 & 1\\ z\bm{d}^T(h) &0 & 1\\ \end{pmatrix}. \end{equation} It becomes possible to characterize the stability region numerically or even analytically for some special cases (cf.~\cite{nguyen1}, \cite{nguyen3}, \cite{nguyenfeptrk}). Furthermore, if we expand the coefficients into Taylor series \begin{align*} a_{ij}&=a_{ij}^{(0)}+a_{ij}^{(1)}h+a_{ij}^{(2)}h^2+...,\\ b_i&=b_i^{(0)}+b_i^{(1)}h+b_i^{(2)}h^2+..., \end{align*} then it can be shown that the leading terms are constant and conform to the coefficients of the conventional EPTRKN method defined by $\bm{c}$ (cf. \cite{Ozawa2}). Thus, the coefficients $\bm{b}(t,h),\bm{d}(t,h),\bm{A}(t,h)$ converge to those of the conventional EPTRKN method when $h\to 0$. Consequently, the amplification matrix $\bm{M}_{(t_n,h)}(z)$ and the stability region of a FEPTRKN method converge to the amplification matrix $\bm{M}(z)$ and the stability region of the corresponding conventional EPTRKN method, respectively. 
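In practice, the stability interval \eqref{stabregion} of a given method can be located by a direct scan over $z\le 0$ using the amplification matrix \eqref{ae1}. The following Python sketch is only meant to illustrate this scan; it assumes that the coefficient arrays \texttt{A}, \texttt{b}, \texttt{d}, \texttt{c} (for the chosen $h$) have already been formed as NumPy arrays, and the function names are ours rather than part of any existing code:
\begin{verbatim}
import numpy as np

def amplification_matrix(z, A, b, d, c):
    """(s+2)x(s+2) amplification matrix M_h(z):
    [[z(A + e b^T + c d^T), e, e+c], [z b^T, 1, 1], [z d^T, 0, 1]]."""
    s = len(c)
    e = np.ones(s)
    M = np.zeros((s + 2, s + 2))
    M[:s, :s] = z * (A + np.outer(e, b) + np.outer(c, d))
    M[:s, s] = e
    M[:s, s + 1] = e + c
    M[s, :s] = z * b
    M[s, s] = 1.0
    M[s, s + 1] = 1.0
    M[s + 1, :s] = z * d
    M[s + 1, s + 1] = 1.0
    return M

def stability_bound(A, b, d, c, z_min=-50.0, n=5000):
    """Scan z in (z_min, 0] and return the first z at which the spectral
    radius of M_h(z) exceeds one, i.e., where stability is lost."""
    for z in np.linspace(0.0, z_min, n):
        rho = max(abs(np.linalg.eigvals(amplification_matrix(z, A, b, d, c))))
        if rho > 1.0 + 1e-10:
            return z
    return z_min
\end{verbatim}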
\section{Extensions} \label{sec:extensions} We now discuss some aspects that are essential to producing competitive numerical codes. \subsection{Variable stepsize} \label{variablestepsize} When the stepsize $h_n$ is accepted and the next stepsize $h_{n+1}$ is suggested, the values in the next step are computed by \begin{equation} \label{variEPTRKN} \begin{split} \bm{Y}_{n+1} &= \bm{e}y_{n+1} + h_{n+1}\bm{c}y'_{n+1}+ h_{n+1}^2\bm{A}(t_{n},h_{n},h_{n+1})f(\bm{e}t_{n}+\bm{c}h_n,\bm{Y}_{n}),\\ y_{n+2} &= y_{n+1} + h_{n+1}y'_{n+1} +h^2_{n+1}\bm{b}^T(t_{n+1},h_{n+1})f(\bm{e}t_{n+1}+\bm{c}h_{n+1},\bm{Y}_{n+1}),\\ y'_{n+2} &= y'_{n+1} +h_{n+1}\bm{d}^T(t_{n+1},h_{n+1})f(\bm{e}t_{n+1}+\bm{c}h_{n+1},\bm{Y}_{n+1}). \end{split} \end{equation} The coefficients $\bm{b}^T(t_{n+1},h_{n+1})$ and $\bm{d}^T(t_{n+1},h_{n+1})$ are computed from the first two equations in \eqref{sysABeq} with $t=t_{n+1}$ and $h=h_{n+1}$. However, we cannot solve for $\bm{A}(t_{n},h_{n},h_{n+1})$ from the third equation in \eqref{sysABeq} if $h_{n+1}\not=h_n$. In particular, the coefficients $\bm{A}(t_{n},h_{n},h_{n+1})$ are obtained by solving the systems \begin{align*} \Big{(}\bm{v}_1(t_{n},h_{n},h_{n+1}), ..., \bm{v}_s(t_{n},h_{n},h_{n+1}) \Big{)} &= h_{n+1}^2\bm{A}(t_{n},h_{n},h_{n+1})\bm{F}(t_{n}, h_{n}), \end{align*} where $$ \bm{v}_i(t_{n},h_{n},h_{n+1}) = u_{i}(\bm{e}t_{n+1}+\bm{c}h_{n+1})-u_i(\bm{e}t_{n+1})-\bm{c}h_{n+1}u'_i(t_{n+1}),\qquad i=1,...,s. $$ Note that even a conventional variable stepsize EPTRKN induces variable coefficients. However, $\bm{b}$ and $\bm{d}$ remain constant and the update for $\bm{A}$ leads to a simple diagonal scaling~\cite{cong01}. The next approximations $y_{n+2} \approx y(t_{n+1}+h_{n+1})$ and $y'_{n+2} \approx y'(t_{n+1}+h_{n+1})$ resulting from the new stepsize $h_{n+1}$ are subject to a local truncation error denoted by LTE which is often estimated by using another embedded method (see Section~\ref{subsec:embedded} below). If the estimated error LTE is smaller than a prescribed tolerance TOL, then $h_{n+1}$ is accepted. Otherwise, it is rejected and a reduced stepsize $\tilde{h}_{n+1}$ is used to recompute $\bm{Y}_{n+1}\approx y(\bm{e}t_{n+1}+\bm{c}\tilde{h}_{n+1})$, $y'_{n+2}\approx y'(t_{n+1}+\tilde{h}_{n+1})$, and $y_{n+2}\approx y(t_{n+1}+\tilde{h}_{n+1})$. This process is repeated until an accepted value of $h_{n+1}$ is found. Varying the stepsize from a collocation perspective means that, from the past accepted values $y_{n}$, $y'_{n}$, and $\bm{Y}_n$ we construct the collocation solution $u(t)$ defined in \eqref{interpol} and then evaluate and store the values $y_{n+1}=u(t_n+h_n)$, $y'_{n+1}=u'(t_n+h_n)$. The next stage values $\bm{Y}_{n+1}$ are generated from this collocation function using an estimated stepsize $h_{n+1}$ (see Section \ref{errorcontrol} below). The acceptance of this $h_{n+1}$ is subject to a local truncation error for computing $y_{n+2}$ using this stepsize (cf. Section \ref{errorcontrol}). Therefore, the collocation function $u(t)$ and values $y_{n+1}$, $y'_{n+1}$ remain the same when $h_{n+1}$ varies. We only adjust the stepsize $h_{n+1}$ for computing $\bm{Y}_{n+1}$, specifically, $\bm{Y}_{n+1} = u(\bm{e}t_{n+1}+\bm{c}h_{n+1})$, see \eqref{iteratepoly}. As a consequence, the following generalization of~\cite[Theorem~2.1]{cong01} holds. \begin{thm} The $s$-stage variable stepsize FEPTRKN method \eqref{variEPTRKN} is of stage order $p=s$ and of step order at least $p=s$ for any set of distinct collocation points $(c_i)_{i=1}^s$. 
It has step order $p=s+q$, $q=1,2$, if the $(c_i)_{i=1}^s$ satisfy the orthogonality conditions $$ \int_0^1\xi^k\prod_{i=1}^s(\xi-c_i)d\xi = 0,\qquad k=0,q-1. $$ \end{thm} \subsection{Interpolation} \label{interpolation} A continuous extension of an $s$-stage FEPTRKN method determined by $(\bm{c},\bm{A}(t_n,h_n),\bm{b}(t_n,h_n),\bm{d}(t_n,h_n))$ is defined as follows (cf. \cite{cong01}, \cite{nguyenfeptrk}) \begin{align} \label{denseoutput} y_{n+\xi} &= y_{n} + \xi h_ny'_n+ (\xi h_n)^2\bm{b}^T(t_n,h_n,\xi) f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}),\\ \label{denseoutput1} y'_{n+\xi} &= y'_{n} + \xi h_n\bm{d}^T(t_n,h_n,\xi) f(\bm{e}t_{n}+\bm{c}h,\bm{Y}_{n}),\qquad 0\leq \xi \leq 1, \end{align} where $y_{n+\xi}\approx y(t_{n+\xi})$ and $y'_{n+\xi}\approx y'(t_{n+\xi})$ with $t_{n+\xi}:=t_n+\xi h_n$, and the coefficients $\bm{b}(t_n,h_n,\xi)$ and $\bm{d}(t_n,h_n,\xi)$ are obtained from the equations \begin{equation} \label{densepara} \begin{split} \Big{(}u_{1}(t_{n+\xi})-u_1(t_{n})-\xi h_nu'_1(t_{n}), ..., u_{s}(t_{n+\xi})-u_s(t_{n})-\xi h_nu'_s(t_{n})\Big{)} &= (\xi h_n)^2\bm{b}^T(t_n,h_n,\xi)\bm{F}(t_n, h_n),\\ \Big{(}u_{1}(t_{n+\xi})-u_1(t_{n}), ..., u_{s}(t_{n+\xi})-u_s(t_{n})\Big{)} &= \xi h_n\bm{d}^T(t_n,h_n,\xi)\bm{F}(t_n, h_n). \end{split} \end{equation} In other words, any $y_{n+\xi}$ and $y'_{n+\xi}$ are retrieved as if $t_{n+\xi}=t_n+\xi h_n$ was the end point. From their definitions one has $y_{n+\xi}=u(t_n+\xi h_n)$ and $y'_{n+\xi}=u'(t_n+\xi h_n)$, where $u(t)$ is the collocation solution defined in \eqref{interpol}. From \eqref{intererror} it is easy to check that $y_{n+\xi}-y(t_n+\xi h_n)=O(h_n^{s+2})$ and $y'_{n+\xi}-y'(t_n+\xi h_n)=O(h_n^{s+1})$, $\forall \xi \in [0,1]$. Hence, the following result holds. \begin{thm} The FEPTRKN method defined by \eqref{denseoutput}--\eqref{denseoutput1} with $\bm{b}$ and $\bm{d}$ defined by \eqref{densepara} produces a continuous FEPTRKN method of order $s$, i.e., $$ y(t_n+\xi h_n)-y_{n+\xi}=O(h_n^{s+2}),\quad y'(t_n+\xi h_n)-y'_{n+\xi}=O(h_n^{s+1}),\qquad 0 \le \xi \le 1. $$ \end{thm} It can further be shown that when the collocation functions are the monomials, the coefficient $\bm{b}(t_n,h_n,\xi)$ depends only on $\xi$ (cf.~\cite{cong01}). \subsection{Embedded methods} \label{subsec:embedded} \def\tilde{c}{\tilde{c}} \def{\tilde{s}}{{\tilde{s}}} \def\tilde{Y}{\tilde{Y}} \def\tilde{y}{\tilde{y}} \def\tilde{u}{\tilde{u}} \def\tbm#1{\tilde{\bm{#1}}} Consider an $s$-stage FEPTRKN method with coefficients $(\bm{c},\bm{A},\bm{b},\bm{d})$. We now wish to have an embedded FEPTRKN method ($\tbm{c},\tbm{A},\tbm{b},\tbm{d})$ paired with the FEPTRKN method $(\bm{c},\bm{A},\bm{b},\bm{d})$ to cheaply estimate the errors and control the stepsizes in practice. Let ${\tilde{s}} < s$, $\{\tilde{c}_1, ..., \tilde{c}_{\tilde{s}}\} \varsubsetneq \{c_1, ..., c_s\}$, $\tbm{c} = (\tilde{c}_1, ..., \tilde{c}_{\tilde{s}})^T$ and $\tbm{e} = (1, ..., 1)^T$ of length ${\tilde{s}}$. 
An embedded pair of FEPTRKN methods, in which another approximation $\tilde{y}_{n+1}$ to $y(t_{n+1})$ can be computed without extra function evaluations, is defined by \begin{equation} \label{embpair} \begin{split} y_{n+1} &= y_{n} +h_ny'_n+ h_n^2\bm{b}^T(t_n,h_n)f(\bm{e}t_{n}+\bm{c}h_n,\bm{Y}_{n}),\\ y'_{n+1} &= y'_{n} + h_n\bm{d}^T(t_n,h_n)f(\bm{e}t_{n}+\bm{c}h_n,\bm{Y}_{n}),\\ \tilde{y}_{n+1} &= y_{n} +h_ny'_n + h_n^2\tbm{b}^T(t_n,h_n)f(\tbm{e}t_{n}+\tbm{c}h_n,\tbm{Y}_{n}),\\ \bm{Y}_{n+1} &= \bm{e}y_{n+1} +\bm{c}h_ny'_{n+1} + h_n^2\bm{A}(t_n,h_n)f(\bm{e}t_{n}+\bm{c}h_n,\bm{Y}_{n}) \end{split} \end{equation} where $\tbm{Y}_{n}=(\tilde{Y}_{n,1},...,\tilde{Y}_{n,{\tilde{s}}})$ and $f(\tbm{e}t_{n}+\tbm{c}h_n,\tbm{Y}_{n}) = (f(t_n+\tilde{c}_1h_n,\tilde{Y}_{n,1}), ..., f(t_n+\tilde{c}_{\tilde{s}} h_n,\tilde{Y}_{n,{\tilde{s}}}))^T $ are defined according to the mapping (which is well defined because the $c_i$ are distinct) \begin{align*} \text{if}\quad \tilde{c}_i=c_j\quad \text{then}\quad \tilde{Y}_{n,i}=Y_{n,j}. \end{align*} The coefficient $\tbm{b}(t_n,h_n)$ is obtained by solving the linear system \begin{align*} \Big{(}\tilde{u}_{1}(t_n+h_n)-\tilde{u}_1(t_{n}) -h_n\tilde{u}'_1(t_{n}), ..., \tilde{u}_{{\tilde{s}}}(t_{n}+h_n)-\tilde{u}_{\tilde{s}}(t_{n})-h_n\tilde{u}'_{\tilde{s}}(t_{n})\Big{)} &= h_n^2\tbm{b}^T(t_n,h_n)\tbm{F}(t_n, h_n) \end{align*} where \begin{align*} \tbm{F}(t_n,h_n)&= \bigg{(}\tilde{u}''_{1}(\tbm{e}t_n+\tbm{c}h_n), ..., \tilde{u}''_{{\tilde{s}}}(\tbm{e}t_n+\tbm{c}h_n)\bigg{)}. \end{align*} In other words, $\tbm{b}$ is the coefficient of the FEPTRKN method generated from $\tbm{c}$ and the basis of functions $\{\tilde{u}_1, ..., \tilde{u}_{\tilde{s}}\} \varsubsetneq \{u_1, ..., u_s\}$. The solution $\tilde{y}_{n+1}$ is computed by combining the subset of past stage values with indices corresponding to $\tbm{c}$. For this definition of $\tbm{b}$ we can assert that $\tilde{y}_{n+1}=y(t_{n+1})+O(h^{{\tilde{s}}+1})$ as a result of Theorem~\ref{genorder}. \begin{thm} An $s$-stage embedded pair of FEPTRKN methods \eqref{embpair} produces solutions $y_{n+1}$ and $\tilde{y}_{n+1}$ that satisfy $$ y_{n+1}-\tilde{y}_{n+1}=O(h^{{\tilde{s}}+1}), $$ for any set of distinct collocation parameters $(c_i)_{i=1}^s$. \end{thm} \subsection{Error control and step-size change} \label{errorcontrol} Let us discuss a strategy for changing stepsizes in the implementation of a FEPTRKN method of order $p$ embedded with a FEPTRKN method of order $\tilde{p}<p$ using the variable stepsize technique in Section \ref{variablestepsize}. At each step we compute a local truncation error LTE as follows: \begin{equation} \label{LTE1} \text{LTE} = \|y_{n+1} - \tilde{y}_{n+1}\| = O(h^{\tilde{p}+1}). \end{equation} Formula \eqref{LTE1} for computing $\text{LTE}$ is simpler than the one in \cite{cong01}, where $\text{LTE}$ is computed by \begin{equation} \label{LTE2} \text{LTE} = \sqrt{\|y_{n+1} - \tilde{y}_{n+1}\|^2 + \|y'_{n+1} - \tilde{y}'_{n+1}\|^2}= O(h^{\tilde{p}+1}). \end{equation} Here $\tilde{y}'_{n+1}$ is computed by $$ \tilde{y}'_{n+1} = y'_{n} + h_n\tbm{d}^T(t_n,h_n)f(\tbm{e}t_{n}+\tbm{c}h_n,\tbm{Y}_{n}), $$ where $\tbm{Y}_{n}$ is defined as in Section \ref{subsec:embedded} and $(\tbm{c},\tbm{A},\tbm{b},\tbm{d})$ are the coefficients of the embedded method. In our experiments we have found that using \eqref{LTE2} instead of \eqref{LTE1} for computing $\text{LTE}$ results in smaller stepsizes, and, therefore, yields smaller errors.
However, having smaller stepsizes makes the FEPTRKN methods use more function evaluations. From numerical experiments, we do not see any advantage of using \eqref{LTE2} over using \eqref{LTE1} for computing $\text{LTE}$. A stepsize $h_n$ is accepted if $\text{LTE}\le\text{TOL}$ and rejected otherwise. If $h_n$ is rejected, then we repeat the computations with the new stepsize $h_n=h_n/2$ until an accepted $h_n$ is found. If $h_n$ is accepted, then the stepsize $h_{n+1}$ in the next step is chosen as $$ h_{n+1} = h_n \min\bigg\{2,\max\big\{0.5,0.8\big(\text{TOL}/\text{LTE}\big)^{1/(\tilde{p}+1)}\big\}\bigg\}. $$ It follows from this formula that the ratio $\frac{h_{n+1}}{h_n}$ always stays in the interval $[0.5,2]$. This stepsize changing technique was also used in \cite{cong01}. \section{Numerical experiments} The experiments were conducted in double precision (machine precision = $0.2\times 10^{-15}$) using MATLAB on a PC with 2\,GB RAM and a 2.8\,GHz processor. \subsection{Derivation of some methods} \label{derivation} We derived some methods with the following bases: \begin{enumerate} \item{\{$x^2,x^3,x^4$\} for eptrkn52.} \item{\{$x^2,\cos(\omega x),\sin(\omega x)$\} for feptrkn52.} \item{\{$x^2,x^3,x^4,x^5$\} for eptrkn73.} \item{\{$\cos(\omega x),\sin(\omega x),\cos(2\omega x),\sin(2\omega x)$\} for feptrkn73.} \item{\{$x^2,x^3,x^4,x^5,x^6$\} for eptrkn84.} \item{\{$x^2,\cos(\omega x),\sin(\omega x),\cos(2\omega x),\sin(2\omega x)$\} for feptrkn84.} \item{\{$x^2,x^3,x^4,x^5,x^6,x^7$\} for eptrkn95.} \item{\{$\cos(\omega x),\sin(\omega x),\cos(2\omega x),\sin(2\omega x),\cos(3\omega x),\sin(3\omega x)$\} for feptrkn95.} \end{enumerate} The coefficients of FEPTRKN methods based on bases 2, 4, 6, and 8 above are independent of $t$ but they are functions of $\omega h$. The coefficients of EPTRKN methods, i.e., methods based on bases 1, 3, 5, and 7 above, are constants. Methods based on bases 1 and 2 are 3-stage methods and are implemented with $$ \bm{c} = \begin{bmatrix} 0.18677613705141 &0.75202972313575 &1.66119413981284 \end{bmatrix}^T. $$ This set of collocation points $(c_i)_{i=1}^3$ is determined to satisfy the orthogonality conditions in \eqref{eq17} and the equation $$\int_0^2\prod_{i=1}^3(\xi-c_i)\, d\xi =0.$$ These methods have the same step order $p=5$ by Theorem \ref{super2} and are denoted by eptrkn52 for basis 1 and feptrkn52 for basis 2. Their embedded methods have the same step order $p=2$. Methods based on bases 3 and 4 are 4-stage methods. We implement these methods with $$ \bm{c} = \begin{bmatrix} 0.10027252023777 &0.46050359576754 &0.86389485661306 & 1.43247188452449 \end{bmatrix}^T $$ which is computed to satisfy the orthogonality conditions in \eqref{eq20}. These methods share the same accuracy order $p=7$ by Theorem \ref{super3} and are denoted by eptrkn73 and feptrkn73 for basis 3 and basis 4, respectively. Their embedded methods are 3-stage methods and have the same accuracy order $p=3$. Methods based on bases 5 and 6 are 5-stage methods. We implement these methods with $$ \bm{c} = \begin{bmatrix} 0.0911311145011 &0.4288524464674 &0.8402456535427 &1.3131095250315 &1.8405501493461 \end{bmatrix}^T $$ which is computed to satisfy the orthogonality conditions in \eqref{eq20} and the equation $$\int_0^2\prod_{i=1}^5(\xi-c_i)\, d\xi =0.$$ These methods have the same accuracy order $p=8$ by Theorem \ref{super3} and are denoted by eptrkn84 for basis 5 and feptrkn84 for basis 6. Their embedded methods are 4-stage methods and have the same accuracy order $p=4$.
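The collocation vectors listed in this subsection can be recomputed directly from the stated conditions: every condition of the form $\int_a^b\xi^k\prod_{i=1}^s(\xi-c_i)\,d\xi=0$ is linear in the coefficients of the monic polynomial $\prod_{i=1}^s(\xi-c_i)$, so the collocation points are the roots of a polynomial obtained from a small linear system. A minimal Python sketch (illustrative only; the weighted condition in \eqref{v48} can be treated in the same way after expanding $(\xi-2)^2$ into monomial moments on $[1,2]$):
\begin{verbatim}
import numpy as np

def collocation_points(s, conditions):
    """Find c_1,...,c_s such that int_a^b xi^k prod_i(xi - c_i) d xi = 0
    for every condition (k, a, b).  The conditions are linear in the
    coefficients a_0,...,a_{s-1} of P(xi) = xi^s + a_{s-1} xi^{s-1} + ... + a_0."""
    M = np.zeros((s, s))
    rhs = np.zeros(s)
    for row, (k, a, b) in enumerate(conditions):
        for j in range(s):   # column j corresponds to the unknown a_j
            M[row, j] = (b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
        rhs[row] = -(b**(k + s + 1) - a**(k + s + 1)) / (k + s + 1)
    coeffs = np.linalg.solve(M, rhs)                  # a_0, ..., a_{s-1}
    return np.sort(np.roots([1.0, *coeffs[::-1]]).real)

# 3-stage case: conditions k = 0, 1 on [0,1] and int_0^2 P(xi) d xi = 0
print(collocation_points(3, [(0, 0, 1), (1, 0, 1), (0, 0, 2)]))
# approximately [0.18677614  0.75202972  1.66119414]
\end{verbatim}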
Methods based on bases 7 and 8 are 6-stage methods. These methods are implemented with $$ \bm{c} = \begin{bmatrix} 0 &0.15981788694649 &0.47315766336506 & 0.80767247891979 &1 &1.55935197076839 \end{bmatrix}^T $$ which is computed so that the orthogonality conditions in \eqref{eq20} hold and the set $(c_i)_{i=1}^s$ contains the two values 0 and 1. Having 0 and 1 in $(c_i)_{i=1}^s$ helps to save one right-hand side function evaluation per step by reusing $f(t_{n+1},y_{n+1})$ computed in the previous step. These methods share the same accuracy order $p=9$ by Theorem \ref{super3} and are denoted by eptrkn95 for basis 7 and feptrkn95 for basis 8. Their embedded methods are 5-stage methods and have the same accuracy order $p=5$. \subsection{Stability regions} As mentioned earlier, the coefficients of FEPTRKN methods converge to those of the corresponding EPTRKN methods when $h$ goes to zero. Thus, the stability regions of FEPTRKN methods converge to those of the corresponding EPTRKN methods when $h$ approaches zero. Using equations \eqref{stabregion}--\eqref{ae1} we plot the stability regions of the feptrkn52, feptrkn73, feptrkn84, and feptrkn95 methods with constant stepsizes, i.e., $h_n=h=const$. Figure \ref{figstab1} plots the stability regions of the feptrkn52 method for $\omega h\in [0,5]$ (left) and of the feptrkn73 method for $\omega h\in [0,4]$ (right). \begin{figure} \centerline{\includegraphics[scale=0.85]{stability1b.eps}} \caption{Plots of stability regions of the feptrkn52 and feptrkn73 methods.} \label{figstab1} \end{figure} It can be seen from the left figure in Figure \ref{figstab1} that the stability region of the feptrkn52 method is larger than the stability region of the eptrkn52 method for $\omega h\in [0,3]$. From the right figure in Figure \ref{figstab1} one concludes that the stability region of the feptrkn73 method is larger than that of the eptrkn73 method when $\omega h\in (0,2.7]$. To have large stability regions, one should not use $\omega h>3.5$ and $\omega h>3$ for the feptrkn52 and feptrkn73 methods, respectively. Figure \ref{figstab2} plots the stability regions of the feptrkn84 method (left) and the feptrkn95 method (right) for $\nu=\omega h\in [0,4]$. \begin{figure} \centerline{\includegraphics[scale=0.85]{stability2b.eps}} \caption{Plots of stability regions of the feptrkn84 and feptrkn95 methods.} \label{figstab2} \end{figure} It can be seen from the left figure in Figure \ref{figstab2} that the stability region of the feptrkn84 method for $\omega h\in [0,3.4]$ is larger than that of the eptrkn84 method. The right figure in Figure \ref{figstab2} indicates that the stability region of the feptrkn95 method when $\omega h\in [0,2.5]$ is larger than that of the eptrkn95 method. Figure \ref{figstab2} suggests that to have large stability regions one should not use $\omega h>3.5$ and $\omega h>2.8$ for the feptrkn84 method and the feptrkn95 method, respectively. Although the stability regions of (F)EPTRKN methods are not as large as those of Runge-Kutta-Nystr\"{o}m methods, they are still sufficiently large for solving nonstiff problems (cf. \cite{cong99}, \cite{cong01}). \subsection{Test problems} Numerical experiments are carried out with the following problems: {\bf BETT} -- Consider the system (see \cite[p.141]{simos}) \begin{align*} y_1'' &= -y_1 + 0.001 \cos t,\qquad y_1(0) = 1,\qquad y_1'(0) = 0,\\ y_2'' &= -y_2 + 0.001 \sin t,\qquad y_2(0) = 0,\qquad y_2'(0) = 0.9995, \end{align*} where the integration domain is $[0,40]$.
The solution to the problem is given by \begin{align*} y_1(t) &= \cos t + 0.0005t\sin t,\\ y_2(t) &= \sin t - 0.0005t\cos t. \end{align*} {\bf NEWT} -- The two-body gravitational problem \begin{align*} y_1'' &= -\frac{y_1}{(y_1^2 + y_2^2)^{3/2}},\qquad y_1(0) = 1-e,\qquad y_1'(0) = 0, \\ y_2'' &= -\frac{y_2}{(y_1^2 + y_2^2)^{3/2}},\qquad y_2(0) = 0,\qquad y_2'(0) = \sqrt{\frac{1+e}{1-e}}, \end{align*} whose exact solution is given by \begin{align*} y_1(t) &= \cos u - e,\\ y_2(t) &= \sqrt{1-e^2}\sin u, \end{align*} where $u$ is the solution of Kepler's equation $u=t+e\sin u$. The integration domain for this problem is $[0,20]$. \subsection{Results and discussion} Table \ref{table1} presents numerical results for solving the BETT problem using the four methods eptrkn52, eptrkn73, eptrkn84, and eptrkn95 implemented with constant stepsizes. The NCD values in Table \ref{table1} are computed by the formula $$ \text{NCD} = \log_{10} \max(E_1,E_2),\qquad E_i:=\max_{0\le t_n\le T} |y_{i}^{comput}(t_n) - y_i(t_n)|. $$ The NCD values can be used to determine the accuracy order of a method in practice. In particular, if a method has an accuracy order $p$, then the error for computing $y_i(t_n)$ is of the form $Ch^p$. Thus, the NCD value for computing $y_i(t_n)$ is about $\log_{10}C + p\log_{10}h$. Therefore, the difference between the NCD values at stepsize $h$ and at stepsize $h/2$ is about $p\log_{10}2\approx 0.3p$. Thus, one can estimate the accuracy order $p$ of the method by taking the differences between the NCD values at stepsizes $h$ and $h/2$ and dividing these differences by $0.3$. By subtracting the NCD values at stepsize $h/2$ from the NCD values at stepsize $h$ and then dividing the results by $0.3$ one can see that the accuracy orders of the eptrkn52, eptrkn73, eptrkn84, and eptrkn95 methods are $p=5$, $p=7$, $p=8$, and $p=9$, respectively. This agrees with the super-convergence results in Theorems \ref{super2} and \ref{super3}. We also see from Table \ref{table1} that the higher the accuracy order of a method, the more accurate it is for the same stepsize $h$. However, higher order methods use more function evaluations in each step. Thus, the data in Table \ref{table1} does not tell us which method is the best in this experiment. The main purpose of using the results in Table \ref{table1} is to verify the super-convergence results in Theorems \ref{super2} and \ref{super3}. \begin{table}[ht] \caption{NCD values for the BETT problem} \label{table1} \centering \small \renewcommand{\arraystretch}{1.2} \begin{tabular}{@{}l@{}|c@{\hspace{2mm}}|c@{\hspace{2mm}}|c@{\hspace{2mm}}|c@{\hspace{2mm}}|c@{\hspace{2mm}}| c@{\hspace{2mm}}c@{\hspace{2mm}}|} \hline &$h$ &eptrkn52 & eptrkn73& eptrkn84 &eptrkn95 \\ \hline &$1/2^1$ &-2.6 &-4.0 &-6.0 &-5.9\\ &$1/2^2$ &-4.1 &-6.3 &-8.2 &-8.7\\ &$1/2^3$ &-5.7 &-8.7 &-10.8 &-11.7\\ &$1/2^4$ &-7.2 &-11.1 &-13.5 &-14.6\\ &$1/2^5$ &-8.7 &-13.5 &-15.1 &-14.3\\ &$1/2^6$ &-10.2 &-15.5 &-15.7 &-14.5\\ &$1/2^7$ &-11.7 &-14.7 &-14.4 &-14.7\\ &$1/2^8$ &-13.2 &-14.3 &-14.3 &-14.4\\ &$1/2^9$ &-14.5 &-14.6 &-14.5 &-14.9\\ \hline \end{tabular} \end{table} Table \ref{table2} presents numerical results for solving the NEWT problem with the four EPTRKN methods eptrkn52, eptrkn73, eptrkn84, and eptrkn95. Again, by subtracting the NCD values at stepsize $h/2$ from the NCD values at stepsize $h$ and then dividing the obtained results by 0.3, we can see that the accuracy orders of the eptrkn52, eptrkn73, eptrkn84, and eptrkn95 methods are $p=5$, $p=7$, $p=8$, and $p=9$, respectively.
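The order-estimation rule just described, namely dividing successive NCD differences by $\log_{10}2\approx 0.3$, is easy to automate. A small illustrative Python sketch (not part of the MATLAB codes used for the experiments), applied to the eptrkn52 column of Table \ref{table1}:
\begin{verbatim}
import numpy as np

# NCD values of the eptrkn52 method for h = 1/2, 1/2^2, ..., 1/2^9 (Table 1)
ncd = np.array([-2.6, -4.1, -5.7, -7.2, -8.7, -10.2, -11.7, -13.2, -14.5])

# halving the stepsize lowers the NCD value by approximately p*log10(2)
estimated_orders = (ncd[:-1] - ncd[1:]) / np.log10(2.0)
print(np.round(estimated_orders, 1))  # close to 5 until rounding errors set in
\end{verbatim}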
\begin{table}[ht] \caption{NCD values for the NEWT problem, $e=0.01$.} \label{table2} \centering \small \renewcommand{\arraystretch}{1.2} \begin{tabular}{@{}l@{}|c@{\hspace{2mm}}|c@{\hspace{2mm}}|c@{\hspace{2mm}}|c@{\hspace{2mm}}|c@{\hspace{2mm}}| c@{\hspace{2mm}}c@{\hspace{2mm}}|} \hline &$h$ &eptrkn52 & eptrkn73& eptrkn84 &eptrkn95 \\ \hline &$1/2$ &-0.9 &-2.2 &-2.6 &-2.9\\ &$1/2^2$ &-2.4 &-4.5 &-6.2 &-6.0\\ &$1/2^3$ &-3.9 &-6.9 &-8.9 &-9.2\\ &$1/2^4$ &-5.4 &-9.2 &-11.5 &-12.1\\ &$1/2^5$ &-6.9 &-11.5 &-13.6 &-13.7\\ &$1/2^6$ &-8.4 &-12.6 &-13.7 &-13.3\\ &$1/2^7$ &-9.9 &-12.8 &-13.1 &-12.8\\ &$1/2^8$ &-11.4 &-12.9 &-13.3 & -12.6\\ &$1/2^9$ &-13.1 &-12.4 &-12.5 &-12.5\\ \hline \end{tabular} \end{table} We also carried out similar experiments for the four FEPTRKN methods feptrkn52, feptrkn73, feptrkn84, and feptrkn95. We found from these experiments that the accuracy orders of the feptrkn52, feptrkn73, feptrkn84, and feptrkn95 methods are $p=5$, $p=7$, $p=8$, and $p=9$, respectively. Again, this agrees with our super-convergence results in Theorems \ref{super2} and \ref{super3}. For simplicity we do not include the numerical results of these experiments in this paper. Figure \ref{figBETT} and Figure \ref{figNEWT} present the numerical results for solving the BETT and NEWT problems using the eight EPTRKN and FEPTRKN methods derived in Section \ref{derivation}. We also include numerical results obtained by the MATLAB function ode45. This function is a MATLAB implementation of the Runge-Kutta method DOPRI45 (see, e.g., \cite{Hairer}). The eight EPTRKN and FEPTRKN methods are implemented with the variable stepsize technique as described in Section \ref{variablestepsize}. In Figures \ref{figBETT} and \ref{figNEWT} we denote by NFE the number of times $f(t_n+c_ih_n,Y_{n,i})$ is evaluated by each of these methods. Note that the main computational cost in solving large-scale problems of the form \eqref{sys} is evaluating the right-hand side function $f$. The errors reported in Figures \ref{figBETT} and \ref{figNEWT} are computed as follows: $$ \text{Error} = \sqrt{\big(y^{\text{comput}}_1(t_{\text{end}}) - y_1(t_{\text{end}})\big)^2 + \big(y^{\text{comput}}_2(t_{\text{end}}) - y_2(t_{\text{end}})\big)^2}. $$ Figure \ref{figBETT} plots the numerical results for the BETT problem. The left figure in Figure \ref{figBETT} plots the numerical results obtained by the methods eptrkn52, eptrkn73, eptrkn84, eptrkn95, and the ode45 function. One can see from this figure that all the EPTRKN methods are better than the ode45 method. The right figure in Figure \ref{figBETT} plots the results obtained by the eight methods derived in Section \ref{derivation} and ode45. Also, it can be seen from the right figure in Figure \ref{figBETT} that all of the FEPTRKN methods are much better than their corresponding EPTRKN methods in this experiment. \begin{figure} \centerline{\includegraphics[scale=0.85]{bett3.eps}} \caption{Numerical results for the BETT problem.} \label{figBETT} \end{figure} Figure \ref{figNEWT} plots the results for the NEWT problem. From the left figure in Figure \ref{figNEWT} we can see that the eptrkn52 method is slightly better than the ode45 method. The other EPTRKN methods are much better than the ode45 method. The feptrkn95 method yields the best result for small TOL. The right figure in Figure \ref{figNEWT} plots the results for the eight methods derived in Section \ref{derivation} and the ode45 function. Again, we see that all functionally-fitted EPTRKN methods are superior to the corresponding EPTRKN methods.
All of the methods derived in Section \ref{derivation} except for the eptrkn52 are much better than the ode45 method when TOL is small. Note that all computations here are done in a sequential computing environment. The new methods will be much more efficient for solving large-scale nonstiff problems when implemented on shared memory computers, as they require only one right-hand side function evaluation per step in parallel computing environments (see, e.g., \cite{cong00}). \begin{figure} \centerline{% \includegraphics[scale=0.85]{newt.eps}} \caption{Numerical results for the NEWT problem when $e=0.01$.} \label{figNEWT} \end{figure} \section{Concluding remarks} A new class of functionally fitted explicit pseudo two-step RKN (FEPTRKN) methods has been developed and studied in this paper. The advantage of functionally fitted methods over classical methods is that we can fine-tune the choice of bases to obtain methods which better capture the special properties of a particular problem that may be known in advance. We have proved a new superconvergence result that allows us to construct $s$-stage (F)EPTRKN methods having accuracy orders $p=s+3$ and have shown that this new superconvergence result can be realized in practice. If we choose the monomials as the basis functions, then we recover EPTRKN methods. Moreover, we can obtain a larger class of methods by choosing various basis functions such as exponential functions, trigonometric polynomials, or mixed algebraic and trigonometric polynomials. Important aspects of producing robust and efficient numerical codes, namely variable stepsizes, error estimation via embedded methods, and continuous extensions, are also considered. While the new methods are primarily designed for parallel computers, numerical comparisons with the MATLAB function ode45 in the paper show that they can work better than one of the most commonly used methods on nonstiff problems in which accuracy (rather than stability) controls the stepsize. The new methods generally have a competitive advantage when the basis functions and the collocation parameters are chosen suitably. Since FEPTRKN methods have the structure of EPTRKN methods, they will be more efficient when implemented on parallel computers.
\section{Introduction} \label{intro} One of the popular scenarios of baryogenesis is the spontaneous baryogenesis (SBG) proposed in papers~\cite{spont-BG-1,spont-BG-2,spont-BG-3}, for reviews see e.g. Refs.~\cite{BG-rev,AD-30}. It is assumed that in the unbroken phase the theory is invariant with respect to the global $U(1)$-symmetry, which ensures conservation of baryonic number. This symmetry is spontaneously broken and in the broken phase the Lagrangian density acquires the term \begin{equation} {\cal L}_{SB} = (\partial_{\mu} \theta) J^{\mu}_B\, , \label{L-SB} \end{equation} where $\theta$ is the Goldstone field and $J^{\mu}_B$ is the baryonic current. Due to the spontaneous symmetry breaking (SSB) this current is not conserved. The next step is the statement that the Hamiltonian density corresponding to ${\cal L}_{SB}$ is simply the Lagrangian density taken with the opposite sign: \begin{equation} {\cal H}_{SB} = - {\cal L}_{SB} = - (\partial_{\mu} \theta) J^{\mu}_B\, . \label{H-SB} \end{equation} For the spatially homogeneous field $\theta = \theta (t)$ this Hamiltonian is reduced to ${\cal H}_{SB} = - \dot \theta\, n_B$, where $n_B\equiv J^4_B$ is the baryonic number density, so it is tempting to identify $\dot \theta$ with the chemical potential, $\mu$, of the corresponding system. If this is the case, then in thermal equilibrium the baryon asymmetry would evolve to: \begin{equation} n_B =\frac{g_S B_Q}{6} \left({\mu\, T^2} + \frac{\mu^3}{ \pi^2}\right) \rightarrow \frac{g_S B_Q}{6} \left({\dot \theta\, T^2} + \frac{\dot \theta ^3}{ \pi^2}\right)\,, \label{n-B-of-theta} \end{equation} where $T$ is the cosmological plasma temperature, and $g_S$ and $B_Q$ are respectively the number of spin states and the baryonic number of quarks, which are supposed to be the bearers of the baryonic number. It is interesting that for successful SBG two of the three Sakharov conditions for the generation of the cosmological baryon asymmetry, namely the breaking of thermal equilibrium and the violation of C and CP symmetries, are unnecessary. This scenario is analogous to baryogenesis in the absence of CPT invariance, if the masses of particles and antiparticles are different. In the latter case the generation of the cosmological baryon asymmetry can also proceed in thermal equilibrium~\cite{ad-zeld-cpt,ad-cpt}. In this work the classical version of spontaneous baryogenesis is studied. The talk is organized as follows. In Section~\ref{s-ssb} the general features of the spontaneous breaking of baryonic $U(1)$-symmetry are described, and the (pseudo)Goldstone mode, its equation of motion, and the baryonic chemical potential are introduced. Next, in Sec.~\ref{s-kin-eq-canon} the standard kinetic equation in a stationary background is presented. In Sec.~\ref{s-kin-eq} we derive the kinetic equation in a time-dependent external field and/or for the case when energy is not conserved because of the finite limits of the integration over time. Several examples for which such a kinetic equation is relevant are presented in Sec.~\ref{s-examples}. Lastly, in Sec.~\ref{s-conclude} we conclude. 
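As a quick numerical check of Eq.~(\ref{n-B-of-theta}), the closed form for a massless fermion species can be compared with the direct momentum integral of the Fermi-Dirac distributions; a short Python sketch with arbitrarily chosen values of $\mu$ and $T$ (purely illustrative, not part of the analysis below) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def n_B(mu, T, gS=2.0, BQ=1.0/3.0):
    # direct momentum integral of n_q - n_qbar for one massless fermion species
    integrand = lambda p: p**2*(1.0/(np.exp((p - mu)/T) + 1.0)
                                - 1.0/(np.exp((p + mu)/T) + 1.0))
    val, _ = quad(integrand, 0.0, np.inf)
    return gS*BQ*val/(2.0*np.pi**2)

mu, T = 0.05, 1.0
closed = 2.0*(1.0/3.0)/6.0*(mu*T**2 + mu**3/np.pi**2)  # g_S B_Q/6 (mu T^2 + mu^3/pi^2)
print(n_B(mu, T), closed)   # the two values coincide to the quadrature accuracy
\end{verbatim}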
\section{Spontaneous symmetry breaking and Goldstone mode \label{s-ssb}} Let us consider the theory of a complex scalar field $\Phi$ interacting with "quarks", $Q$, and "leptons", $L$, with the Lagrangian: \begin{eqnarray} {\cal L }(\Phi) = g^{\mu\nu} \partial_\mu \Phi^* \partial_\nu \Phi - V(\Phi^* \Phi) + \bar Q (i \gamma^\mu \partial_\mu - m_Q)\,Q + \bar L ( i \gamma^\mu \partial_\mu - m_L) L + {\cal L}_{int}(\Phi, Q, L)\, , \label{S-Phi} \end{eqnarray} where ${\cal L}_{int} $ describes the interaction between $\Phi $ and the fermionic fields. In the toy model studied below we take it in the form: \begin{eqnarray} {\cal L}_{int} = \frac{\sqrt 2}{m_X^2} \frac{\Phi}{f}\, (\bar L \gamma_{\mu} Q )(\bar Q^c \gamma_{\mu} Q) + h.c. \, , \label{L-int} \end{eqnarray} where $Q^c$ is the charge-conjugated quark spinor, $m_X$ is a parameter with the dimension of mass, and $f$ is related to the vacuum expectation value of $\Phi $ defined below in Eq.~(\ref{V-of-Phi}). Such an interaction can appear e.g. in the $SU(5)$ Grand Unified Theory. For simplicity, in our toy model we do not take into account the quark colors. A B-nonconserving interaction may have many different forms. The one presented above describes the transition of three quark-type fermions into an (anti)lepton. There may be a transformation of two or three quarks into an equal number of antiquarks; such an interaction describes neutron-antineutron oscillations. There can even be a "quark" transition into three "leptons". Depending on the interaction type, the relation between $\dot\theta$ and the effective chemical potential would have different forms. Note that $Q$ and $L$ can be any fermions, not necessarily quarks and leptons of the standard model. For example, they can be new heavy fermions. They may possess similar or the same quantum numbers as the quarks and leptons of the standard model and may couple to the ordinary quarks and leptons. In Section~\ref{s-kin-eq} we consider another model to study the kinetics of the baryon asymmetry generation, which allows for the transformations $3L \leftrightarrow Q$ or $2 Q \leftrightarrow 2\bar Q$. These are surely not permitted for the standard quarks. However, the process $3 q \leftrightarrow 3\bar q$ is permitted and the kinetics of this process is essentially the same. We denote by $q$ the fermionic field with the same quantum numbers as the usual quarks. The theory (\ref{S-Phi}) considered in this section is invariant under the following $U(1)$-transformations: \begin{eqnarray} \Phi \rightarrow e^{i \alpha} \Phi, ~~~~Q \rightarrow e^{- i \alpha/3} Q, {}~~~~L \rightarrow L \, . \label{phase} \end{eqnarray} In the unbroken symmetry phase this invariance leads to the conservation of the total baryonic number, which includes the baryonic number of $\Phi $, taken to be unity, and that of quarks, equal to $1/3$. In a realistic model the interaction of left- and right-handed fermions may be different, but we neglect this possible difference in what follows. We assume that the global $U(1)$-symmetry is spontaneously broken at the energy scale $f$ in the usual way, e.g. via the potential of the form \begin{eqnarray} V(|\Phi|) = \lambda \left(\Phi^* \Phi - f^2/2 \right)^2 \, . \label{V-of-Phi} \end{eqnarray} The resulting scalar field vacuum expectation value is $\langle \Phi \rangle = f e^{i\phi_0/f}/\sqrt{2}$ with a constant phase $\phi_0$. 
Below the scale $f$ we can neglect the heavy radial mode of $\Phi$ with the mass $m_{radial} = \lambda^{1/2} f$, since being very massive it is frozen out, but this simplification is not necessary and is not essential for the baryogenesis. The remaining light degree of freedom is the variable field $\phi$, which is the Goldstone boson of the spontaneously broken $U(1)$. Up to a constant factor the field $\phi$ is the angle around the bottom of the Mexican hat potential described by Eq.~(\ref{V-of-Phi}). Correspondingly we introduce the dimensionless angular field $\theta \equiv \phi/f$: \begin{eqnarray} \Phi = f e^{i\phi / f}/\sqrt{2} = f e^{i \theta} /\sqrt{2}\, . \label{theta} \end{eqnarray} As a result the following effective Lagrangian for $\theta $ is obtained: \begin{eqnarray} \nonumber {\cal L}_1 (\theta) = {f^2\over 2} \partial_\mu \theta \, \partial^{\mu} \theta + \bar Q_1 (i \gamma^{\mu} \partial_\mu - m_Q) Q_1 + \bar L ( i \gamma^{\mu} \partial_\mu - m_L) L + \\ \left(\frac{e^{i \theta }}{ m_X^2} \, (\bar L \gamma_{\mu} Q_1 )(\bar Q_1^c \gamma_{\mu} Q_1) + h.c.\right) - U(\theta)\, . \label{L-of-theta-1} \end{eqnarray} Here we introduced the potential $U(\theta)$ "by hand"; it may appear due to an explicit symmetry breaking and can lead, in particular, to a nonzero mass of $\theta$. We use the notation $Q_1$ for the quark field to distinguish it from the phase-rotated field $Q_2$ introduced below in Eq.~(\ref{L-of-theta-2}). In a realistic model the quark fields should be (anti)symmetrized with respect to color indices, omitted here for simplicity. If $U(\theta ) = 0$, the theory still remains invariant under the global transformations (i.e. with $\alpha = const$): \begin{eqnarray} Q \rightarrow e^{ - i \alpha/3} Q, ~~~~L \rightarrow L, {}~~~~\theta \rightarrow \theta + \alpha \, . \label{global-trans} \end{eqnarray} If we rotate only the quark field as above but with a coordinate-dependent $\alpha = \theta (t, \bf x)$, introducing the new field $Q_1 = e^{- i \theta /3} Q_2$, then the Lagrangian~(\ref{L-of-theta-1}) is transformed into: \begin{eqnarray} {\cal L}_2 (\theta) = {f^2\over 2} \partial_\mu \theta \partial^{\mu} \theta + \bar Q_2 (i \gamma^{\mu} \partial_\mu - m_Q) Q_2 + \bar L ( i \gamma^{\mu} \partial_\mu - m_L) L + \nonumber \\ \left(\frac{1}{ m_X^2} \, (\bar Q_2 \gamma_{\mu} L )(\bar Q_2 \gamma_{\mu} Q_2^c) + h.c.\right) + (\partial_\mu \theta) J^\mu - U(\theta) \, , \label{L-of-theta-2} \end{eqnarray} where the quark baryonic current is $J_\mu =(1/3) \bar Q \gamma_\mu Q$. Note that the current has the same form in terms of $Q_1$ and $Q_2$. The equation of motion for the quark field $Q_1$ obtained from Lagrangian (\ref{L-of-theta-1}) has the form: \begin{eqnarray} (i \gamma^{\mu} \partial_\mu - m_Q) Q_1 + \frac{e^{-i \theta }}{ m_X^2} \left[ \gamma_{\mu} L (\bar Q_1 \gamma_{\mu} Q_1^c) + 2 \gamma_{\mu} Q_1^c (\bar Q_1 \gamma_{\mu} L ) \right] =0\,. \label{dirac-1} \end{eqnarray} Analogously, the equation of motion for the phase-rotated field $Q_2$ derived from Lagrangian (\ref{L-of-theta-2}) is: \begin{eqnarray} \left(i \gamma^{\mu} \partial_\mu - m_Q + \frac{1}{3}\,\gamma ^\mu \partial_\mu \theta \right) Q_2 + \frac{1}{ m_X^2} \left[ \gamma_{\mu} L (\bar Q_2 \gamma_{\mu} Q_2^c) + 2 \gamma_{\mu} Q_2^c (\bar Q_2 \gamma_{\mu} L ) \right] =0\,. 
\label{dirac-2} \end{eqnarray} The equations for the $\theta$-field derived from these two Lagrangians in flat space-time have, respectively, the forms: \begin{eqnarray} f^2 (\partial _t^2 - \Delta ) \theta + U'(\theta) + \left[ \frac{i \,e^{- i \theta }}{ m_X^2} \, (\bar Q_1 \gamma_{\mu} L )(\bar Q_1 \gamma_{\mu} Q_1^c) + h.c.\right] = 0 \label{d2-theta-1} \end{eqnarray} and \begin{eqnarray} f^2 (\partial _t^2 - \Delta ) \theta + U'(\theta) + \partial _\mu J^\mu_B = 0\, , \label{d2-theta-2} \end{eqnarray} where $U'(\theta) = dU/d\theta $. Using either the equation of motion (\ref{dirac-1}) or (\ref{dirac-2}) we can check that the baryonic current is not conserved. Indeed, its divergence is: \begin{eqnarray} \partial_\mu J^\mu_B = \frac{i \,e^{-i\theta}}{m_X^2} (\bar Q_1 \gamma_\mu Q_1^c) (\bar Q_1 \gamma^\mu L) + h.c. \label{dmu-Jmu} \end{eqnarray} (and similarly for $Q_2$ but without the factor $\exp(-i\theta)$). So the equations of motion for $\theta$ in both cases (\ref{d2-theta-1}) and (\ref{d2-theta-2}) coincide, as expected. In the spatially homogeneous case, when $\partial_\mu J^\mu_B = \dot n_B $ and $\theta = \theta (t)$, and if $U(\theta) = 0$, equation (\ref{d2-theta-2}) can be easily integrated giving: \begin{eqnarray} f^2 \left[\dot \theta (t) - \dot \theta (t_{in})\right] = -n_B (t) + n_B (t_{in}) \,. \label{nB-of-t} \end{eqnarray} It is usually assumed that the initial baryon asymmetry vanishes, $n_B(t_{in}) = 0$. The evolution of $n_B (t)$ is governed by the kinetic equation discussed in Sec.~\ref{s-kin-eq-canon}, which allows us to express $n_B$ through $\theta (t)$ and thus to obtain a closed system of, generally speaking, integro-differential equations. In thermal equilibrium the relation between $\dot\theta$ and $ n_B $ may become an algebraic one, but this is true only in the case when the integration over time is sufficiently long and if $\dot\theta$ is a constant or slowly varying function of time. In the cosmological Friedmann-Robertson-Walker (FRW) background and for space-independent $\theta (t)$, equation (\ref{d2-theta-2}) is transformed to: \begin{eqnarray} f^2 (\partial _t + 3 H ) \dot\theta + U'(\theta) = - (\partial _t + 3 H ) n_B. \label{d2-theta-FRW} \end{eqnarray} We do not include the curvature effects in the Dirac equations because this is not necessary for what follows. Still, we use the expression for the current divergence in the form $\mathcal{D}_\mu J^\mu =\dot n_B + 3 Hn_B$, and not just $\dot n_B$. If particles (fermions) are in thermal equilibrium with respect to baryo-conserving interactions, then their phase space distribution has the form: \begin{eqnarray} f_{eq} =\left[ 1 + \exp (E/T -\xi_B) \right]^{-1}, \label{f-eq-ferm} \end{eqnarray} where the dimensionless chemical potential $\xi_B$ has equal magnitude but opposite signs for particles and antiparticles. The baryonic number density, for small $\xi_B$, is usually given by the expression \begin{eqnarray} n_B = g_S B_Q \xi_B T^3 /6 \label{nB-of-xi} \end{eqnarray} (compare to Eq.~(\ref{n-B-of-theta})). However, the relation (\ref{nB-of-xi}) between the baryonic number density and the chemical potential is true only for the normal relation between the energy and three-momentum, $E=\sqrt{ p^2 + m^2}$. This is not the case if the dispersion relation has the form \begin{eqnarray} E = \sqrt{p^2 +m^2} \pm \dot\theta/3, \label{E-split} \end{eqnarray} derived from the equation of motion (\ref{dirac-2}), where the signs $\pm$ refer to particles or antiparticles respectively, as we will see a little below. 
We should note that the above dispersion relation is derived under the assumption of a constant or slowly varying $\dot\theta$. Otherwise the Fourier-transformed Dirac equation cannot be reduced to an algebraic one. If the baryon number is conserved, $n_B$ remains constant in the comoving volume, which in turn means that $\xi_B = const$ for massless particles. If and when non-conservation of baryons is switched on, $\xi_B$ evolves according to the kinetic equation. Complete thermal equilibrium in the standard theory demands $n_B \rightarrow 0$, but a deviation from thermal equilibrium of the B-nonconserving interaction leads to the generation of non-zero $\xi_B$ and correspondingly to non-zero $n_B$. As we will see in Sec.~\ref{s-kin-eq}, SBG allows for the generation of a nonzero baryonic number in complete thermal equilibrium. In terms of $\xi_B$ and the new function $\eta = \dot \theta / T^3$ equation (\ref{d2-theta-FRW}) takes the same form as eq. (\ref{nB-of-t}): \begin{eqnarray} f^2 \left[ \eta (t) - \eta (t_{in}) \right] = - \frac{ g_S B_Q}{6} \left[ \xi_B (t) - \xi_B (t_{in}) \right] \label{xi-B-of-eta} \end{eqnarray} and thus \begin{eqnarray} f^2 \left[ \frac{\dot\theta (t)}{T^3(t) }- \frac{\dot\theta (t_{in})}{T_{in}^3} \right] = - \frac{ g_S B_Q}{6} \left[ \xi_B (t) - \xi_B (t_{in}) \right] . \label{dot-theta-of-xi} \end{eqnarray} As we have already mentioned, $n_B (t_{in} ) = 0$, so according to Eq.~(\ref{nB-of-xi}) we should also take $\xi_B (t_{in}) = 0$. However, this initial condition for the chemical potential is not true in the theory with the Lagrangian (\ref{L-of-theta-2}) and the Dirac equation (\ref{dirac-2}) for the quark field, though the condition $n_B (t_{in} ) = 0$ is supposed to be always valid. Indeed, the B-nonconserving interaction now conserves energy and thus this process does not split the energies of quarks and antiquarks. However, these energies are split from the very beginning due to relation (\ref{E-split}). Correspondingly, using Eqs.~(\ref{f-eq-ferm}) and (\ref{E-split}), we find in the massless case: \begin{eqnarray} n_B = \int \frac{d^3 p}{(2\pi)^3} (f_B - f_{\bar B} ) = \frac{ g_S B_Q}{6} \left(\xi_B - \frac{\dot\theta}{3T}\right)\,T^3 . \label{nB-of-xi-2} \end{eqnarray} If initially $n_B = 0$, then $\xi_B (t_{in}) = \dot\theta_{in}/(3T_{in})$. In the case of conserved baryonic number, $n_B$ remains zero and thus in equilibrium the relation $\xi_B = \dot\theta /(3T )$ must be true at any time. When the B-nonconserving interaction is on, the chemical potential would evolve and might evolve even down to zero, leading to the generation of a non-zero baryonic density, as is discussed in Sec.~\ref{s-kin-eq}. In the pseudo-Goldstone case, when $U(\theta) \neq 0$, the equations of motion (\ref{d2-theta-2}) or (\ref{d2-theta-FRW}) cannot be so easily integrated, but in thermal equilibrium the system of equations containing $\theta (t)$ and $\xi_B(t)$ can be reduced to ordinary differential equations which are easily solved numerically. Out of equilibrium one has to solve a much more complicated system consisting of the ordinary differential equation of motion for $\theta(t)$ and the integro-differential kinetic equation. It is discussed below in Sec.~\ref{s-kin-eq-canon}. 
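A quick symbolic check of Eq.~(\ref{nB-of-xi-2}), and of the statement that $n_B=0$ corresponds to $\xi_B=\dot\theta/(3T)$, can be done directly with the split dispersion relation (\ref{E-split}). The sketch below is an illustration only and uses Boltzmann statistics for transparency (the Fermi-Dirac case leads to the same conclusion):
\begin{verbatim}
import sympy as sp

p, T, xi, thdot = sp.symbols('p T xi thetadot', positive=True)

# Boltzmann occupation numbers with the split dispersion E = p +/- thetadot/3
f_q    = sp.exp(-(p + thdot/3)/T + xi)    # quarks
f_qbar = sp.exp(-(p - thdot/3)/T - xi)    # antiquarks

# momentum integral of the asymmetry (per spin state, up to 1/(2*pi^2))
asym = sp.integrate(p**2*(f_q - f_qbar), (p, 0, sp.oo))
print(sp.simplify(asym))
# equals 2*T**3*(exp(xi - thetadot/(3*T)) - exp(thetadot/(3*T) - xi)),
# so the asymmetry vanishes exactly at xi = thetadot/(3*T).
\end{verbatim}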
\section{Kinetic equation for time-independent amplitude \label{s-kin-eq-canon}} The temporal evolution of the distribution function of the $i$-th type of particle, $f_i (t,p)$, in an arbitrary process ${ i+Y \leftrightarrow Z}$ in the FRW background is governed by the kinetic equation: \begin{eqnarray} \frac{df_i}{dt} = (\partial_t - H\,p_i \partial_{p_i}) f_i = I_i^{coll}, \label{kin-eq-1} \end{eqnarray} with the collision integral equal to: \begin{eqnarray} &&I^{coll}_i={(2\pi)^4 \over 2E_i} \sum_{Z,Y} \int \,d\nu_Z \,d\nu_Y \delta^4 (p_i +p_Y -p_Z) \nonumber\\ && \left[ |A(Z\rightarrow i+Y)|^2 \prod_Z f \prod_{i+Y} (1\pm f) - |A(i+Y\rightarrow Z)|^2 f_i \prod_Y f\prod_Z (1\pm f)\right], \label{I-coll-1} \end{eqnarray} where $ A( a \rightarrow b)$ is the amplitude of the transition from state $a$ to state $b$, ${ Y}$ and ${ Z}$ are arbitrary, generally multi-particle states, $\left( \prod_Y f \right)$ is the product of the phase space densities of the particles forming the state $Y$, and \begin{eqnarray} d\nu_Y = \prod_Y {\overline {dp}} \equiv \prod_Y {d^3p\over 2E\,(2\pi )^3}. \label{dnuy-1} \end{eqnarray} The signs '+' or '$-$' in ${ \prod (1\pm f)}$ are chosen for bosons and fermions respectively. We neglect the effects of the space-time curvature in the collision integral, which is generally a good approximation. In the lowest order of perturbation theory the amplitude of the transition from an initial state $| in \rangle$ to a final state $| fin \rangle$ is given by the matrix element of the Lagrangian density between these states, integrated over the 4-dimensional space $d^4 x$. The quantum field operators are expanded in terms of creation-annihilation operators with plane wave coefficients: $\sim \exp (-i Et + i {\bf px} ) $. When the amplitude of the process is time-independent, the integration of the product of the exponents over infinite integration limits leads to the energy-momentum conservation factors: \begin{eqnarray} \int dt d^3 x \,e^{-i ( E_{in} - E_{fin} ) t + i (\bf P_{in} - \bf P_{fin} ) \bf x } = (2\pi)^4 \delta (E_{in} - E_{fin})\, \delta({\bf P}_{in} - {\bf P}_{fin} ), \label{int-over-d4x} \end{eqnarray} where $E_{in} $, $E_{fin}$, ${\bf P}_{in}$, and ${\bf P}_{fin}$ are the total energies and 3-momenta of the initial and final states, respectively. The amplitude squared contains delta-functions of zero, which are interpreted as the total time duration, $t_{max}$, of the process and the total space volume, $V$. The probability of the process given by the collision integral is normalized per unit time and volume, so it must be divided by $V$ and $t_{max}$. We are interested in the evolution of the baryon number density, which is the time component of the baryonic current $J^\mu$: $n_B \equiv J^4$. Due to the quark-lepton transitions the current is non-conserved and its divergence is given by Eq.~(\ref{dmu-Jmu}). A similar expression is evidently true in terms of $Q_2$ but without the factor $\exp (- i \theta)$. Let us first consider the latter case, when the interaction is described by the Lagrangian (\ref{L-of-theta-2}), which contains the product of three "quark" and one "lepton" operators, and take as an example the process $q_1 + q_2 \leftrightarrow \bar q + l $. Since the interaction in this representation does not depend on time, the energy is conserved and the collision integral has the usual form with conserved four-momentum. 
Quarks are supposed to be in kinetic equilibrium but probably not in equilibrium with respect to B-nonconserving interactions, so their distribution function has the form: \begin{eqnarray} f_Q = \exp \left( -\frac{E}{T} + \xi_B \right) \,\,\, {\rm and} \,\,\, f_{\bar Q} = \exp \left( -\frac{E}{T} - \xi_B \right) \label{f-B-equil}. \end{eqnarray} Here and in what follows the Boltzmann statistics is used. Since the dispersion relation for quarks and antiquarks~(\ref{E-split}) depends upon $\dot\theta$, the baryon asymmetry in this case is given by eq.~(\ref{nB-of-xi-2}) and the kinetic equation takes the form: \begin{eqnarray} \frac{g_S B_Q}{6}\, \frac{ d}{dt} \left( \xi_B - \frac{\dot\theta}{3T}\right) = - c_1\Gamma \xi_B, \label{dot-n-B-xi} \end{eqnarray} where $c_1$ is a numerical factor of order unity and $\Gamma$ is the rate of baryo-nonconserving reactions. If the amplitude of these reactions has the form presented in Eq.~(\ref{dirac-2}), then $\Gamma \sim T^5/m_X^4$. For a constant or slowly varying temperature the equilibrium solution to this equation is $\xi_B = 0$ and the baryon number density behaves as $n_B \sim \dot\theta T^2 $, with $\dot\theta$ evolving according to Eq.~(\ref{nB-of-t}) with $n_B$ expressed through $\dot \theta$. Let us check now what happens if the dependence on $\theta$ is moved from the quark dispersion relation to the B-nonconserving interaction term (\ref{d2-theta-1}). The expression for the collision integral (\ref{I-coll-1}) is valid only in the absence of an external field depending on coordinates. In our case, when quarks "live" in the $\theta (t)$-field, the collision integral should be modified in the following way. Now we have an additional factor under the integral (\ref{int-over-d4x}), namely, $\exp [ \pm i\theta (t)]$. In the general case this integral cannot be taken analytically, but if we can approximate $\theta (t)$ as $\theta(t) \approx \dot \theta t$ with a constant or slowly varying $\dot \theta$, the integral is easily taken, giving, e.g. for the process of transformation of two quarks into an antiquark and a lepton, $q_1+q_2 \leftrightarrow \bar q + l$, the energy balance condition imposed by $\delta (E_{q_1} + E_{q_2} - E_{\bar q} - E_l - \dot\theta)$. In other words, the energy is non-conserved due to the action of the external field $\theta (t)$. The approximation of linear evolution of $\theta $ with time can be valid if the reactions are fast in comparison with the rate of the $\theta$-evolution. Returning to our case, we can see that the collision integral integrated over the three-momentum of the particle under scrutiny (i.e. particle $i$ in eq.~(\ref{I-coll-1})), e.g. for the process $q_1+q_2 \rightarrow l + \bar q $, turns into: \begin{eqnarray} \nonumber &&\dot n_B +3H n_B \sim \\ &&\int d\tau_{l \bar q} d\tau_{q_1 q_2} |A|^2 \delta (E_{q_1} + E_{q_2} - E_l - E_{\bar q} - \dot \theta) \delta( {\bf P}_{in} - {\bf P}_{fin} ) e^{- E_{in}/T} \left( e^{\xi_L - \xi_B + \dot \theta/T} - e^{2 \xi_B} \right), \label{dot-nB-1} \end{eqnarray} where $d\tau_{l,\bar q} = d^3 p_l d^3 p_{\bar q} /[ 4 E_l E_{\bar q} (2\pi)^6]$. We assumed here that all participating particles are in kinetic equilibrium, i.e. their distribution functions have the form \begin{eqnarray} f = 1/[\exp{(E/T - \xi}) + 1], \label{f-eq} \end{eqnarray} with $\xi = \mu/T$ being the dimensionless chemical potential. 
In expression (\ref{dot-nB-1}) $\xi_B$ and $\xi_L$ denote the baryonic and leptonic chemical potentials, respectively, and the effects of quantum statistics are neglected, but only for brevity of notation. Their effects are not essential in the sense that they do not change the conclusion. The assumption of kinetic equilibrium is well justified because it is enforced by the very efficient elastic scattering. Another implicit assumption is the usual equilibrium relation between the chemical potentials of particles and antiparticles, $\bar \mu = -\mu$, imposed e.g. by the fast annihilation of quark-antiquark or lepton-antilepton pairs into two and three photons. Anyhow, the assumption of chemical equilibrium is one of the cornerstones of spontaneous baryogenesis. The conservation of $(B+L)$ implies the following relation: $\xi_L = - \xi_B/3$. Keeping this in mind, we find \begin{eqnarray} \dot n_B + 3H n_B \approx -\left(1 - e^{ \dot \theta/T - 3\xi_B +\xi_L} \right) I \approx \left( \frac{\dot \theta }{T} - \frac{10}{3}\,\xi_B \right) I, \label{dot-nB-2} \end{eqnarray} where we assumed that $\xi_B$ and $\dot \theta/T$ are small. In a relativistic plasma with temperature $T$ the factor $I$, coming from the collision integral, can be estimated as $I=T^8/m^4$, where $m$ is a constant with the dimension of mass. It differs from $m_X$, introduced in eq.~(\ref{L-of-theta-1}), by a numerical coefficient. The asymmetry between quarks and antiquarks having the distribution (\ref{f-eq}) with chemical potentials equal in magnitude but opposite in sign and with identical dispersion relations is equal to \begin{eqnarray} n_B = C_B {\xi_B T^3} , \label{n-B-eq} \end{eqnarray} where $C_B$ is a constant, see Eq.~(\ref{n-B-of-theta}) in the limit of $\mu \ll T$, because in the realistic case the baryon asymmetry is quite small. For a large factor $I$ we expect the equilibrium solution $\xi_B = (3/10) \dot\theta/T$, so $\dot\theta$, up to a numerical factor, seems to be the baryonic chemical potential, as expected in the usually assumed SBG scenario. The emergence of the factor $3/10$ instead of $1/3$ in the equilibrium expression is due to the conservation law $B+L = const$. However, as we have seen above, the baryonic chemical potential is not always proportional to $\dot\theta (t)$. \section{Kinetic equation for time-varying amplitude \label{s-kin-eq}} If the interaction proceeds in a time-dependent field and/or the time duration of the process is finite, then the energy conservation delta-function in (\ref{dot-nB-1}) does not emerge and the approach described in Sec.~\ref{s-kin-eq-canon} becomes invalid, so one has to perform the time integration with an account of the time-varying background and integrate over the phase space without energy conservation. In what follows we consider a two-body inelastic process with baryonic number non-conservation, with the amplitude obtained from the last term in the Lagrangian (\ref{L-of-theta-1}). At the moment we will not specify the concrete form of the reaction but only say that it is the two-body reaction \begin{eqnarray} a+b \leftrightarrow c+d , \label{ab-go-cd} \end{eqnarray} where $a,b,c$, and $d$ are some quarks and leptons or their antiparticles. The expression for the evolution of the baryonic number density, $n_B$, follows from eq. (\ref{kin-eq-1}) after integration of both of its sides over $d^3 p_i/(2\pi)^3$. 
Thus we obtain: \begin{eqnarray} \dot n_B + 3 H n_B = -\frac{(2\pi)^3}{t_{max}} \int d\nu_{in} d\nu_{fin}\,\delta({\bf P}_{in} -{\bf P}_{fin} )\, |A|^2 \left( f_a f_b - f_c f_d \right)\,, \label{nB-kin-eq} \end{eqnarray} where e.g. $d\nu_{in} = {d^3 p_a d^3 p_b}/[ 4 E_a E_b (2\pi)^6]$ and the amplitude of the process is defined as \begin{eqnarray} A = \left(\int_0^{t_{max}} dt\, e^{i [ (E_c+E_d - E_a -E_b)t + \theta (t) ]} \right) F(p_a,p_b,p_c,p_d)\, , \label{amp} \end{eqnarray} and $F$ is a function of the 4-momenta of the participating particles, determined by the concrete form of the interaction Lagrangian. In what follows we consider two possibilities: $ F= const$ and $F = \psi^4 \,m_X^{-2}$, where in the last case $\psi^4$ symbolically denotes the product of the Dirac spinors of particles $a, b, c$, and $d$. In the case of equilibrium with respect to baryon-conserving reactions the distribution functions have the canonical form, $f_a = \exp (-E_a /T + \xi_a)$, where $\xi_a \equiv \mu_a/T$ is the dimensionless chemical potential. So for constant $F$ the product $|A|^2 (f_a f_b - f_c f_d) $ depends upon the particle 4-momenta only through $E_{in}$ and $E_{fin}$, where \begin{eqnarray} E_{in} = E_a +E_b,\,\,\,\,{\rm and}\,\,\,\, E_{fin} = E_c+E_d . \label{E-in-E-fin} \end{eqnarray} To integrate Eq.~(\ref{nB-kin-eq}) over the phase space it is convenient to change the integration variables according to: \begin{eqnarray} \frac{d^3 p_a}{2 E_a}\,\frac{d^3 p_b}{ 2 E_b } = d^4 P_{in}\, d^4 R_{in} \,\delta (P^2_{in} + R^2_{in} )\, \delta (P_{in}R_{in})\,, \label{pp-PR} \end{eqnarray} where $P_{in} = p_a +p_b$ and $R_{in} = p_a - p_b$, and the masses of the particles are taken to be zero. Analogous expressions are valid for the final state particles. Evidently the time components of the 4-vectors $P$ are the sums of the energies of the incoming and outgoing particles, $P^{(4)}_{in} = E_{in} $ and $P^{(4)}_{fin} = E_{fin} $. Now we can perform all but one of the integrations and finally obtain the kinetic equation in the following form: \begin{eqnarray} \dot n_B + 3 H n_B = - \frac{T^5}{2^5 \pi^6\, t_{max}}\, \int_0^\infty dy \left[ e^{\xi_a + \xi_b} \left( |A_+|^2 + |A_-|^2 e^{-y} \right) - e^{\xi_c+ \xi_d} \left( |A_{ - }|^2 + |A_{+}|^2 e^{-y} \right) \right] , \label{dot-nB-gen} \end{eqnarray} where $y= E_-/T$ is the dimensionless energy, with $E_-$ being the difference between the initial and final energies of the system, $E_- = E_{in} - E_{fin} $, and $A_+$ and $A_-$ are the amplitudes taken at positive and negative $E_-$, respectively. Note that with the substitution $E_- \rightarrow |E_-|$ the only difference between $A_+$ and $A_-$ is that $A_- (\theta) = A_+ (-\theta)$. The equilibrium is achieved when the integral in Eq.~(\ref{dot-nB-gen}) vanishes. Clearly it takes place at \begin{eqnarray} \xi_a+\xi_b -\xi_c -\xi_d = \frac{ \langle |A_+|^2 e^{-y} + |A_-|^2 \rangle } { \langle |A_+|^2 + |A_-|^2 e^{-y} \rangle} -1 , \label{equiv-sol} \end{eqnarray} where the angular brackets mean integration over $dy$ as indicated in Eq.~(\ref{dot-nB-gen}). The results above are obtained for an amplitude which does not depend upon the momenta of the participating particles. The calculations would be somewhat more complicated if this restriction does not hold. For example, if the baryon non-conservation takes place in four-fermion interactions, then the amplitude squared can contain terms of the form $(p_a p_b)^2 /m_X^4$ or $(p_a p_c)^2 /m_X^4$, etc. 
The effect of such terms results in a change of the numerical coefficient in Eq.~(\ref{dot-nB-2}) but the latter is unknown anyhow, and what is more important the temperature coefficient in front of the integral in this equation would change from $T^5$ to $T^9/m_X^4$. \section{Examples of time-varying $\theta$ \label{s-examples}} \subsection{Constant $\dot \theta$ \label{ss-const-dot-tjeta}} This is the case usually considered in the literature and the simplest one. The integral (\ref{amp}) is taken analytically resulting in: \begin{eqnarray} |A|^2 \sim \frac{ 2- 2\cos[ ( \dot\theta - E_-) t_{max}]}{ (\dot\theta - E_-)^2}\, , \label{A-const-dot-theta} \end{eqnarray} where $E_-$ is running over the positive semi-axis. For large $t_{max}$ this expression tends to $\delta (E_--\dot\theta)$, so $ |A_+|^2 = 2\pi \delta (E_- - \dot\theta) t_{max} $ and $ |A_-|^2 = 2\pi \delta (E_- +\dot\theta) t_{max} =0$, if $\dot\theta >0$ and vice versa otherwise. Hence the equilibrium solution is \begin{eqnarray} \xi_a+\xi_b -\xi_c -\xi_d - \dot\theta = 0, \label{equil-1} \end{eqnarray} coinciding with the standard result. The limit of $\dot\theta = const$ corresponds to the energy non-conservation by the rise (or drop) of the energy of the final state in reaction (\ref{ab-go-cd}) exactly by $\dot\theta$. However if $t_{max}$ is not sufficiently large, the non-conservation of energy is not equal to $\dot\theta$ but somewhat spread out and the equilibrium solution would be different. There is no simple analytical expression in this case, so we have to take the integral (\ref{equiv-sol}) over $y$ numerically to find at what values of chemical potentials, $\xi_k$, it vanishes and this point determines the equilibrium values of the chemical potentials in external $\dot\theta $ field. The results of the calculations are presented in Fig.~\ref{fig-1}. In the left panel the values of the r.h.s. of Eq.~(\ref{equiv-sol}) are compared with $\dot\theta/T$ (thick line) for two values of the cut-off in time integration $\tau \equiv t_{max} T = 10$ (dashed line) and $\tau = 3$ (dotted line). In the right panel relative differences between the r.h.s. of Eq.~(\ref{equiv-sol}) and $\dot\theta/T$, normalized to $\dot\theta/T$, as functions of $\dot\theta$ for different maximum time of the integration are depicted. We see that for $\tau = 30$ (thick line) the deviations are less than 10\%, while for $\tau = 3$ (dotted line) the deviations are about 30\%. If we take $\tau$ close to unity, the deviations are about 100\%. The value of $\dot\theta/T$ is bounded from above by 0.3 because at large $\dot\theta/T$ the linear expansion, used in our estimates, is invalid. \begin{figure} \centering \includegraphics[width=.46\textwidth]{Q16-prcd-fig1} \hspace{.2cm} \includegraphics[width=.46\textwidth]{Q16-prcd-fig2} \caption{ {\it Left}: $\dot \theta/T$ for infinite time integration (thick line). Cut-off in time integration $\tau \equiv t_{max} T =$ 10 (dashed); 3 (dotted). 
{\it Right}: relative differences between the equilibrium solutions and $\dot\theta/T$, normalized to $\dot\theta/T$, as functions of $\dot\theta$ for different $t_{max}$: $\tau \equiv t_{max} T =$ 30 (thick); 10 (dashed); 3 (dotted).} \label{fig-1} \end{figure} \subsection{Second order Taylor expansion of $\theta (t)$ \label{ss-Taylor}} Here we assume that $\theta (t)$ can be approximated as \begin{eqnarray} \theta (t) = \dot \theta \,t + \ddot \theta\, t^2/2, \label{theta-Taylor} \end{eqnarray} where $\dot\theta$ and $\ddot\theta$ are supposed to be constant or slowly varying. In this case the integral over time (\ref{amp}) can also be taken analytically, but the result is rather complicated. We need to take the integral \begin{eqnarray} \int_0^{t_{max}} dt \exp [i \theta (t) ] . \label{int-theta-of-t} \end{eqnarray} Its real and imaginary parts are easily expressed through the Fresnel functions. So the amplitude squared is given by functions tabulated in Mathematica, and the position of the equilibrium point can be calculated, as in the previous case, by the numerical calculation of a one-dimensional integral. The r.h.s. of Eq.~(\ref{equiv-sol}) as functions of $\dot\theta $ for different values of $\tau$ are presented in Fig.~\ref{fig-2}, left panel. It is interesting that the dependence on $\tau$ is not monotonic. This can be explained by the fact that at small $\tau$ the effects of $\ddot\theta t^2$ are not essential. To check the dependence on $\ddot\theta$ we calculated the r.h.s. of Eq.~(\ref{equiv-sol}) again, but now as a function of $\ddot\theta$, presented in the right panel of Fig.~\ref{fig-2}. We see that the equilibrium point oscillates as a function of $\ddot\theta$. \begin{figure} \centering \includegraphics[width=.46\textwidth]{Q16-taylor2-dot-theta} \hspace{.2cm} \includegraphics[width=.46\textwidth]{Q16-taylor-ddot-theta} \caption{ Relative differences between equilibrium solutions and $\dot\theta/T$, normalized to $\dot\theta/T$. {\it Left:} as functions of $\dot\theta$ for different $t_{max}$: $\tau \equiv t_{max} T = 30$ (thick); 10 (dashed); 3 (dotted); $\ddot \theta = 0.1$. {\it Right:} as functions of $\ddot\theta$ for different $\dot \theta$: $\dot \theta =$ 0.1 (thick); 0.2 (dashed); 0.3 (dotted); $\tau \equiv t_{max} T= 10$.} \label{fig-2} \end{figure} \section{Conclusion \label{s-conclude}} We argue that in the standard description $\dot\theta$ is not formally the chemical potential, though in thermal equilibrium $\mu_B$ tends to $\dot \theta$ with a numerical, model-dependent coefficient. Moreover, this is not always true but depends upon the chosen representation for the "quark" fields. In the theory described by the Lagrangian (\ref{L-of-theta-1}), which appears "immediately" after the spontaneous symmetry breaking, $\theta (t)$ directly enters the interaction term and in equilibrium indeed $\mu_B \sim \dot\theta$. On the other hand, if we transform the quark field so that the dependence on $\theta$ is shifted to the bilinear product of the quark fields (\ref{L-of-theta-2}), then the chemical potential in equilibrium does not tend to $\dot\theta$ but to zero. Nevertheless, the magnitude of the baryon asymmetry in equilibrium is always proportional to $\dot\theta$. It can be seen, according to the equation of motion of the Goldstone field, that $\dot\theta/T$ drops down in the course of the cosmological cooling as $T^2$, so the baryon number density in the comoving volume decreases in the same way. 
So, to avoid the complete vanishing of $n_B$, the baryo-violating interaction should switch off at some non-zero $T$. This is always the case, but the dependence on the interaction strength is non-monotonic. The assumption of a constant or slowly changing $\dot\theta$, which is usually made in the SBG scenario, may not be fulfilled, and to include the effects of an arbitrary variation of $\theta (t)$ as well as the effects of the finite time integration we transform the kinetic equation in such a way that it becomes operative in the case of non-conserved energy. A shift of the equilibrium value of the baryonic chemical potential due to this effect is numerically calculated. \vspace{.5cm} { \bf Acknowledgement.} EA and AD thank the support of the Grant of the President of the Russian Federation for the leading scientific schools of the Russian Federation, NSh-9022.2016.2. VN thanks the support of the Grant RFBR 16-02-00342.
\section{Introduction} \label{intro} Multi-label learning deals with the problem where each sample is represented by a feature vector and is associated with multiple concepts or semantic labels. For example, in image annotation, an image may be annotated with different scenes; or in text categorization, a document may be tagged with multiple topics. Formally, a data matrix $X \in \mathbb{R}^{d \times n}$ is composed of $n$ samples in a $d$-dimensional feature space. The feature vector $x_{i} \in X$ is associated with a label set $Y_{i}=\left \{ y_{i1},y_{i2},\cdots,y_{ik} \right \}$, where $k$ is the number of labels and $y_{(i,j)} \in \left \{ 0,1 \right \}$ is a logical value indicating whether the associated label is relevant to the instance $x_{i}$. Over the past decade, many strategies have been proposed in the literature to learn from multi-labeled data. Initially, the problem was tackled by learning binary classification models on each label independently~\cite{zhang2013review}. However, this strategy ignores the existence of label correlation. Interestingly, several methods~\cite{braytee2019correlated,wang2019discriminative,wang2020dual} show the importance of considering the label correlation during multi-label learning to improve the classification performance. However, these methods use logical labels, where no manifold exists, and apply traditional similarity metrics such as the Euclidean distance, which is mainly built for continuous data. In this study, contrary to the majority of the methods, in addition to learning the mapping function from the feature space to the multi-label space, we explore the projection function from the label space to the feature space to reconstruct the original feature representations. For example, in image annotation, our novel method is able to reconstruct the scene image using the projection function and the semantic labels. Initially, it is necessary to explore the natural structure of the label space in multi-labeled data. Existing datasets naturally contain logical label vectors which indicate whether the instance is relevant or not relevant to a specific label. For example, as shown in Fig.~\ref{fig:natural}, both images are tagged with the label boat with the same weight equal to $1$ (present). However, to accurately describe the labels in both images, we need to identify the importance of the labels in each image. It is clearly seen that the label boat in image (\ref{fig:boat2}) is more important than that in image (\ref{fig:boat1}). Furthermore, a zero value in the logical label vector may refer to different meanings: the label may be irrelevant, unrepresented, or missing. Using the same example in Fig.~\ref{fig:natural}, the boat and the sun labels in images (\ref{fig:boat1} and \ref{fig:boat2}) are not tagged due to their small contribution (unrepresented). Our method learns a numerical multi-label matrix in a semantic embedding space during the optimization, based on label dependencies. Therefore, representing the importance of labels using numerical values instead of logical labels can improve the multi-label learning process. Importantly, learning the numerical labels is essential to our novel approach that is developed based on the encoder-decoder deep learning paradigm~\cite{ranzato2007unified}. Specifically, the input training data in the feature space is projected into the learned semantic label space (label manifold) as an encoder step. 
In this step, simultaneously through an optimization problem, it learns the projection function and the semantic labels in Euclidean space. Significantly, we also consider the reconstruction task of the original feature representations using the projection matrix as input to a decoder. This step imposes a constraint to ensure that the projection matrix preserves all the information in the original feature matrix. The decoder allows the original features to be recovered using the projection matrix and the learned semantic labels. In the case of image annotation, this process is similar to combining puzzle pieces to create the picture. However, in the case of logical labels, where a label either exists or not, this process is incapable of reconstructing the original visual feature representations. We show that the impact of the decoder in identifying the relevant features can improve multi-label classification performance. This is because the feature coefficients are estimated based on the actual numerical labels and, more importantly, the weights of the relevant features result in a reduction of the reconstruction error. The proposed method is visualized in Fig.~\ref{fig:correlation}. We test the proposed approach on a variety of public multi-label datasets, and clearly verify that it favourably outperforms the state-of-the-art methods in feature selection and data reconstruction. We formulate the proposed approach as a constrained optimization problem to project feature representations into semantic labels with a reconstruction constraint. More precisely, the method is designed by an effective formulation of the encoder and decoder model using a linear projection to and from the learned semantic labels, respectively. This design alleviates the computational complexity of the proposed approach, making it suitable for large-scale datasets. To the best of our knowledge, this is the first attempt to learn the semantic label representation from the training data that can be used for data reconstruction in multi-label learning. In summary, our contributions are: (1) a semantic encoder-decoder model to learn the projection matrix from original features to semantic labels that can be used for data reconstruction; (2) we extend the logical label to a numerical label which describes the relative importance of the label in a specific instance; (3) we propose a novel \textit{Learning Discriminative Features using Multi-label Dual Space (LDFM)} which is able to identify discriminative features across multiple class labels. \begin{figure}[t] \centering \subfloat[]{ \includegraphics[height=0.1\textheight,width=0.3\textwidth]{Figures/a.jpg} \label{fig:boat1}} \subfloat[]{ \includegraphics[height=0.1\textheight,width=0.3\textwidth]{Figures/b.jpg} \label{fig:boat2}} \caption{Two images annotated with labels: sea, sunset, and boat} \label{fig:natural} \end{figure} \section{Related Work} \textbf{Label correlation} \hspace{0.2cm} Over the past decade, it has been shown that label correlation improves the performance of multi-label learning methods. The correlation is considered either between pairs of class labels or between all the class labels, which are known as second-order and high-order approaches, respectively~\cite{zhang2013review,zhu2017multi,che2020novel}. However, in these models, the common learning strategy is to deal with logical labels, which represent whether the label is relevant or irrelevant to an instance. 
The label matrix in the available multi-labeled datasets contains logical values which lack semantic information. Hence, few works reveal that transforming the labels from logical into numerical values improves the learning process. \textbf{Semantic labels} \hspace{0.2cm} The numerical value in the label space carries semantic information, i.e., the value may refer to the importance or the weight of an object in the image. The numerical label matrix in Euclidean space is not explicitly available in the multi-labeled data. A few works have studied the multi-label manifold by transforming the logical label space to the Euclidean label space. For example, \cite{hou2016multi} explore the label manifold in multi-label learning and reconstruct the numerical label matrix using the instance smoothness assumption. Another work~\cite{cai2018multi} incorporates feature manifold learning in the multi-label feature selection method, and \cite{huang2018manifold} select the meaningful features using the constraint Laplacian score in manifold learning. However, our proposed method differs from these by learning an encoder-decoder network to reconstruct the input data using the learned projection matrix along with predicting the semantic labels. \textbf{Autoencoder} \hspace{0.2cm} Several variants use an autoencoder for multi-label learning. \cite{cheng2019multi} learn the unknown labels using the entropy measure from existing labels, then the completed label matrix is used as an input layer feature set in autoencoder architecture. However, our method reconstructs the original input data in the decoder using the learned semantic labels. Further, \cite{law2019multi} propose a stacked autoencoder for feature encoding and an extreme learning machine to improve the prediction capability. However, the authors did not take label correlation into consideration and the original logical labels are used in the learning process. In this paper, we select the discrete features that are important to detect the objects' weights during the encoding phase and simultaneously they are significant to reconstruct the original data in the decoding phase. \section{The Proposed Method} In multi-label learning, as mentioned above, the training set of multi-labeled data can be represented by $\left \{ x_{i} \in X | i=1,\cdots,n \right \}$, the instance $x_{i} \in \mathbb{R}^{d}$ is a $d$-dimensional feature vector associated with the logical label $Y_{i}=\left \{ y_{i1},y_{i2},_{\cdots},y_{ik} \right \}$, where $k$ is the number of possible labels; and the values $0$ and $+1$ represent the irrelevant and relevant label to the instance $x_{i}$ respectively. \subsection{Label Manifold} To overcome the key challenges in logical label vectors, we first propose to learn a new numerical label matrix $\widetilde{Y} \in \mathbb{R}^{k \times n}$ which contains labels with semantic information. According to the label smoothness assumption~\cite{xu2014learning} which states that if two labels are semantically similar, then their feature vectors should be similar, we initially exploit the dependencies among labels to learn $\widetilde{Y}$ by multiplying the original label matrix with the correlation matrix $C \in \mathbb{R}^{k \times k}$. 
Due to the existence of logical values in the original label matrix, we use the \textit{Jaccard index} to compute the correlation matrix as follows \begin{equation} \small \label{eq:newy} \widetilde{Y}=Y^{T}C \end{equation} where the element $\widetilde{Y}_{i,j}=Y_{i,1}^{T} \times C_{1,j} + Y_{i,2}^{T} \times C_{2,j} + \cdots + Y_{i,k}^{T}\times C_{k,j}$ is initially determined as the predictive numerical value indicating how strongly the instance $x_{i}$ is related to the $j$-th label, using the prior information of label dependencies. The following is a simple example to investigate the efficiency of using label correlation to learn the semantic numerical labels. The original logical label vectors $Y$ of images (\ref{fig:boat1}) and (\ref{fig:boat2}) are shown in Fig.~\ref{fig:correlation}. The zero values of the original label matrix point to three different types of information. The grey and red colors in Fig.~\ref{fig:correlation} refer to unrepresented and missing labels respectively. However, the white color means that images (\ref{fig:boat1}) and (\ref{fig:boat2}) are not labeled as ``Grass''. We clearly show that the predictive label space $\widetilde{Y}$ can distinguish between the three types of zero values and it provides the appropriate numerical values which include semantic information. For example, due to the correlation between ``Boat'' and ``Ocean'' and ``Sunset'' and ``Sun'', the unrepresented label information for ``Boat'' and ``Sun'' in image (\ref{fig:boat1}) and ``Sun'' in image (\ref{fig:boat2}) is learned. Further, the missing label of the ``Ocean'' image (\ref{fig:boat2}) is predicted. Interestingly, we can further see in the predictive label matrix that the numerical values of the ``Grass'' label in both images are very small because the ``Grass'' label is not correlated with the other labels. Thus, this perfectly matches the information in the original label matrix that the ``Grass'' object does not exist in images (\ref{fig:boat1}) and (\ref{fig:boat2}) as shown in Fig.~\ref{fig:correlation}. Therefore, based on the above example, we can learn accurate numerical labels with semantic information. The completed numerical label matrix $\widetilde{Y}$ of the training data is learned in the optimization method of the encoder-decoder framework in the next section. \begin{figure} \centering \includegraphics[height=0.35\textheight,width=1\textwidth]{Figures/framework_Corr.png} \caption{Encoder-decoder architecture of our proposed method LDFM. A predicted numerical label matrix $\widetilde{Y}$ is initialized by multiplying the correlation matrix with the original logical label matrix $Y$. The zero values of the images in red, grey and white boxes represent missing, misrepresented, and irrelevant labels, respectively} \label{fig:correlation} \end{figure} \subsection{Approach Formulation} Suppose there is training data $X \in \mathbb{R}^{d \times n}$ with $n$ samples that are associated with $Y \in \mathbb{R}^{k \times n}$ labels. The predictive numerical matrix $\widetilde{Y}$ is initialised using Eq.~\ref{eq:newy}. The intuition behind our idea is that the proposed method is able to capture the relationship between the feature space and the manifold label space. Inspired by the autoencoder architecture, we develop an effective method that integrates the characteristics of both the low-rank coefficient matrix and the semantic numerical label matrix. 
Specifically, our method is composed of an encoder-decoder architecture which tries to learn the projection matrix $W \in \mathbb{R}^{k \times d}$ from the feature space $X$ to the numerical label space $\widetilde{Y}$ in the encoder. At the same time, the decoder can project back to the feature space with $W^{T} \in \mathbb{R}^{d \times k}$ to reconstruct the input training data as shown in Fig.~\ref{fig:correlation}. The objective function is formulated as \begin{equation} \small \label{eq:obj1} \begin{aligned} \min_{W} & \left \| X - W^{T}WX \right \|^{2}_{F} & \text{s.t. } WX=\widetilde{Y} \end{aligned} \end{equation} where $\left \| . \right \|_{F}$ is the Frobenius norm. \subsubsection{Optimization Algorithm} To optimize the objective function in Eq.~\ref{eq:obj1}, we first substitute $WX$ with $\widetilde{Y}$. Then, due to the existence of the constraint $WX=\widetilde{Y}$, it is very difficult to solve Eq.~\ref{eq:obj1}. Therefore, we relax the constraint into a soft one and reformulate the objective function~(\ref{eq:obj1}) as \begin{equation} \small \label{eq:obj2} \min_{W} \left \| X - W^{T}\widetilde{Y} \right \|^{2}_{F} + \lambda \left \| WX - \widetilde{Y}\right \|^{2}_{F} \end{equation} where $\lambda$ is a parameter to control the importance of the second term. Now, the objective function in Eq.~\ref{eq:obj2} is non-convex and it contains two unknown variables $W$ and $\widetilde{Y}$. It is difficult to directly solve the equation. We propose a solution to iteratively update one variable while fixing the other. Since the objective function is convex with respect to each variable when the other is fixed, we compute the partial derivatives of Eq.~\ref{eq:obj2} with respect to $W$ and $\widetilde{Y}$ and set both to zero. \begin{algorithm {\small \KwIn{Training data $X \in \mathbb{R}^{d \times n}$} \myinput{Logical label matrix $Y \in \mathbb{R}^{k \times n}$} \myinput{Parameters: $\lambda$ and MaxIteration} \textbf{Initialization:}\\ $C_{jk}=\frac{\sum_{i=1}^{n}x_{ij}x_{ik}}{\sum_{i=1}^{n}x_{ij} + \sum_{i=1}^{n}x_{ik} - \sum_{i=1}^{n}x_{ij}x_{ik}}; j,k$ are the label vectors\\ $\widetilde{Y}=YC$\\ $t=0$ \While{MaxIteration$>t$}{ Update $W$ by solving Eq.~\ref{eq:sylv_W}\; Update $\widetilde{Y}$ by solving Eq.~\ref{eq:sylv_Y}\; $t= t + 1$\; } \KwOut{Projection matrix $W \in \mathbb{R}^{k \times d}$} Rank features by $\left \| W_{m,:} \right \|_{2}$ in descending order and return the top ranked features \caption{Learning Discriminative Features using Multi-label Dual Space (LDFM)} \label{mainalgo} } \end{algorithm} \begin{itemize} \item \textbf{Update $W$:} \begin{equation} \small \label{eq:W} \begin{aligned} & -\widetilde{Y}\left ( X^{T} - \widetilde{Y}^{T}W \right ) + \lambda\left ( WX-\widetilde{Y} \right )X^{T}=0 \\ & \Rightarrow \widetilde{Y} \widetilde{Y}^{T}W + \lambda WXX^{T}=\widetilde{Y}X^{T} + \lambda \widetilde{Y} X^{T} \\ & \Rightarrow PW + WQ=R \end{aligned} \end{equation} where $P=\widetilde{Y}\widetilde{Y}^{T}$, $Q=\lambda XX^{T}$, and $R=\left ( \lambda + 1 \right )\widetilde{Y}X^{T}$ \item \textbf{Update $\widetilde{Y}$:} \begin{equation} \small \label{eq:Y} \begin{aligned} & -WX+WW^{T}\widetilde{Y} + \lambda\left ( -WX + \widetilde{Y} \right )=0 \\ & \Rightarrow WW^{T}\widetilde{Y} + \lambda \widetilde{Y} = WX + \lambda WX \\ & \Rightarrow A\widetilde{Y}+\widetilde{Y}B=D \end{aligned} \end{equation} where $A=WW^{T}$, $B=\lambda I$, $D=\left ( \lambda + 1 \right )WX$ and $I \in \mathbb{R}^{n \times n}$ is an identity matrix. 
\end{itemize} Eqs.~\ref{eq:W} and \ref{eq:Y} have the form of the well-known Sylvester equation $MX+XN=O$. The Sylvester equation is a matrix equation with given matrices $M$, $N$, and $O$, and it aims to find the unknown matrix $X$. It can be solved efficiently and admits a unique solution. For more explanations and proofs, the reader can refer to~\cite{bartels1972solution}. Using the Kronecker product notation and the vectorization operator $vec$, Eqs.~\ref{eq:W} and \ref{eq:Y} can be written as linear systems, respectively, \begin{equation} \small \label{eq:sylv_W} \left (I_{d} \otimes P + Q^{T} \otimes I_{k} \right )vec(W)=vec(R) \end{equation} where $I_{d} \in \mathbb{R}^{d \times d}$ and $I_{k} \in \mathbb{R}^{k \times k}$ are identity matrices and $\otimes$ is the Kronecker product, and \begin{equation} \small \label{eq:sylv_Y} \left (I_{n} \otimes A + B^{T} \otimes I_{k} \right )vec(\widetilde{Y})=vec(D) \end{equation} where $I_{n} \in \mathbb{R}^{n \times n}$ is an identity matrix. Fortunately, in MATLAB, this equation can be solved with a single line of code, \textit{sylvester}\footnote{https://uk.mathworks.com/help/matlab/ref/sylvester.html}. Now, the two unknown matrices $W$ and $\widetilde{Y}$ can be iteratively updated using the optimization rules above until convergence. The procedure is described in Algorithm~\ref{mainalgo}. Our proposed method learns the encoder projection matrix $W$. Thus, we can embed a new test sample $x^{s}_{i}$ into the semantic label space by $\widetilde{y}_{i}=Wx^{s}_{i}$. Similarly, we can reconstruct the original features using the decoder projection matrix $W^{T}$ by $x^{s}_{i}=W^{T}\widetilde{y}_{i}$. Therefore, $W$ contains the discriminative features needed to predict the real semantic labels. To identify these features, we rank each feature according to the value of $\left \| W_{:,m} \right \|_{2} \left ( m=1,\cdots,d \right )$ in descending order and return the top ranked features. \begin{table} \centering {\scriptsize \begin{tabular}{llccccl} \hline Dataset & Domain & \#Instance & \#Training & \#Test & \#Features & \#Labels \\ \hline Scene &Image & $2407$&$1211$&$1196$&$294$&$6$ scenes\\ Emotions &Audio & $593$&$391$&$202$&$72$&$6$ emotions\\ Reference &Text (Yahoo) & $5000$&$2000$&$3000$&$793$&$33$ topics\\ Computers & Text (Yahoo)& $5000$&$2000$&$3000$&$681$&$33$ topics\\ \hline \end{tabular} } \caption{Characteristics of the evaluated datasets} \label{tbl:ds} \end{table} \section{Experiments} \subsection{Experimental datasets} We open-source our LDFM code for reproducibility of our experiments\footnote{https://github.com/alibraytee/LDFM}. Experiments are conducted on four public multi-label datasets which can be downloaded from the Mulan repository\footnote{http://mulan.sourceforge.net/datasets-mlc.html}. The details of these datasets are summarized in Table~\ref{tbl:ds}. Due to space limitations, we use three evaluation metrics, namely \textit{Hamming loss}, \textit{Average precision}, and \textit{Micro-F1}, which are defined in~\cite{braytee2019correlated}. \subsection{Comparing methods and experiment settings} Multi-label feature selection methods have attracted the interest of researchers over the last decade. In this study, we compare our proposed method LDFM against recent state-of-the-art multi-label feature selection methods, including GLOCAL~\cite{zhu2017multi}, MCLS~\cite{huang2018manifold}, and MSSL~\cite{cai2018multi}.
MCLS and MSSL consider feature manifold learning in their formulations. ML-KNN ($K$=$10$)~\cite{zhang2007ml} is used as the multi-label classifier to evaluate the performance of the identified features. The results are reported for different numbers of selected features, varying from $1$ to $100$. PCA is applied as a preprocessing step, retaining $95\%$ of the data. To ensure a fair comparison, the parameters of the compared methods are tuned to find the optimum values. For GLOCAL, the regularization parameters $\lambda_{3}$ and $\lambda_{4}$ are tuned in $\left \{ 0.0001,0.001,\dots, 1 \right \}$, the number of clusters is searched from $\left \{ 4,8,16,32, 64 \right \}$, and the latent dimensionality rank is tuned in $\left \{ 5,10,\dots, 30 \right \}$. MCLS uses the default settings for its parameters. For MSSL, the parameters $\alpha$ and $\beta$ are tuned by searching the grid $\left \{ 0.001,0.01,\dots,1000 \right \}$. The parameter settings for LDFM are described in the following section. \begin{figure}[ht] \includegraphics[height=0.15\textheight,width=1\textwidth]{Figures/Scene_Res.png} \caption{Comparison of multi-label feature selection algorithms on Scene} \label{fig:scene} \end{figure} \begin{figure}[ht] \includegraphics[height=0.15\textheight,width=1\textwidth]{Figures/Emotions_Res1.png} \caption{Comparison of multi-label feature selection algorithms on Emotions} \label{fig:emotions} \end{figure} \subsection{Results} \subsubsection{Classification results} Several experiments have been conducted to demonstrate the classification performance of LDFM compared to the state-of-the-art multi-label feature selection methods. Figs.~\ref{fig:scene}-\ref{fig:computers} show the results in terms of the \textit{Hamming loss}, \textit{Average precision}, and \textit{Micro-F1} evaluation metrics on the four datasets. In these figures, the classification results are generated based on the top-ranked $100$ features (except for Emotions, which only has $72$ features). Based on the experimental results shown in Figs.~\ref{fig:scene}-\ref{fig:computers}, it is clear that our proposed method achieves a significant classification improvement with an increasing number of selected features, and then remains stable. This observation indicates that it is meaningful to study dimensionality reduction in multi-label learning. Further, it highlights the stability and capability of LDFM to achieve good performance on all the datasets with fewer selected features.
It can be observed that MCLS has the worst results, while MSSL and GLOCAL achieve comparable results, as shown in Figs.~\ref{fig:scene}, \ref{fig:emotions}, and \ref{fig:computers}. In terms of the \textit{Average precision} and \textit{Micro-F1} evaluation metrics, where the larger the values, the better the performance, LDFM generally achieves better results than the compared methods on all datasets. We note that LDFM performs slightly better than MSSL and GLOCAL on the Emotions dataset using the \textit{Micro-F1} metric. We also note that the compared methods produce unstable results on the Reference and Computers datasets using the \textit{Micro-F1} metric. In general, our proposed method demonstrates the great benefits of using label manifolds in an encoder-decoder architecture to identify the discriminative features. Furthermore, using the Friedman test, we investigate whether the results produced by LDFM are significantly different from those of the state-of-the-art. In particular, we apply the Friedman test to LDFM against each compared method for each evaluation metric on the four datasets. The statistical results show that the p-value in all tests is less than $0.05$, which rejects the null hypothesis that the proposed method and the compared methods have equal performance. Finally, we explored the reconstruction capability of the decoder in LDFM using the projection matrix $W$ to reconstruct the original data. Table~\ref{tbl:reconst} reports the reconstruction errors using the training logical labels ($Y$), the training predicted labels ($\widetilde{Y}$), and the logical testing labels. It is observed that the percentage reconstruction error of the original training matrix using the logical training labels only ranges between $4\%$ and $8\%$ on the four datasets, and this error is dramatically decreased to $0.1\%$--$3\%$ by using the predicted numerical label matrix. This observation reveals that the decoder plays an important role in selecting the important features which can be used to reconstruct the original matrix. Further, it supports our argument that the visual images can be reconstructed using the semantic labels and the coefficient matrix. In addition, we report the capability of reconstructing the testing data matrix using the projection matrix and the testing labels with a small error that ranges between $4\%$ and $8\%$. \begin{table} \centering \scriptsize \begin{tabular}{lccc} \hline Dataset & Logical error & Predicted error & Testing error \\ \hline Scene & $0.042$&$0.032$ & $0.043$ \\ Emotions & $0.087$&$0.022$ & $0.086$ \\ Reference & $0.044$&$0.001$&$0.044$ \\ Computers & $0.051$&$0.001$&$0.051$ \\ \hline \end{tabular} \caption{Reconstruction error values using the decoder} \label{tbl:reconst} \end{table} \subsubsection{LDFM results for handling missing labels} In this experiment, to investigate the ability of the proposed method to handle missing labels, we randomly removed different proportions of labels from the samples, from moderate to extreme levels: 20\%, 40\%, 60\%, and 80\%. Table~\ref{tblmissing} shows that LDFM achieved consistent improvement over the baseline (Base), especially at the 20\%, 40\%, and 60\% missing label levels. Further, it is superior across the four datasets using the four different missing label proportions.
\begin{table}[t] \centering \scriptsize \begin{tabular}{l|l|llll|llll} \hline Dataset$\downarrow$ & Evaluation criteria & \multicolumn{4}{c|}{Base } & \multicolumn{4}{c}{\textbf{LDFM} } \\ & Missing label proportion $\rightarrow$ & 20\% & 40\% & 60\% & 80\% & 20\% & 40\% & 60\% & 80\% \\ \hline \multirow{3}{*}{Scene} & Hamming Loss & 0.18 & 0.24 & 0.69 & 0.81 & 0.10 & 0.14 & 0.39 & 0.78 \\ & Average Precision & 0.57 & 0.54 & 0.51 & 0.49 & 0.81 & 0.79 & 0.76 & 0.73 \\ & Micro-F1 & 0.25 & 0.32 & 0.32 & 0.30 & 0.67 & 0.63 & 0.44 & 0.32 \\ \hline \multirow{3}{*}{Computers} & Hamming Loss & 0.04 & 0.05 & 0.18 & 0.91 & 0.03 & 0.04 & 0.17 & 0.91 \\ & Average Precision & 0.58 & 0.56 & 0.55 & 0.55 & 0.61 & 0.59 & 0.57 & 0.56 \\ & Micro-F1 & 0.39 & 0.40 & 0.24 & 0.09 & 0.42 & 0.42 & 0.26 & 0.10 \\ \hline \multirow{3}{*}{Reference} & Hamming Loss & 0.03 & 0.04 & 0.15 & 0.92 & 0.02 & 0.03 & 0.14 & 0.91 \\ & Average Precision & 0.54 & 0.52 & 0.50 & 0.50 & 0.60 & 0.57 & 0.55 & 0.54 \\ & Micro-F1 & 0.40 & 0.41 & 0.20 & 0.07 & 0.42 & 0.45 & 0.23 & 0.07 \\ \hline \multirow{3}{*}{Emotions} & Hamming Loss & 0.23 & 0.33 & 0.57 & 0.65 & 0.21 & 0.28 & 0.51 & 0.64 \\ & Average Precision & 0.76 & 0.74 & 0.70 & 0.68 & 0.79 & 0.76 & 0.75 & 0.72 \\ & Micro-F1 & 0.62 & 0.59 & 0.50 & 0.48 & 0.65 & 0.63 & 0.54 & 0.49 \\ \hline \end{tabular} \caption{Results on four datasets with different missing label proportions} \label{tblmissing} \end{table} \subsection{Parameter sensitivity and convergence analysis} \label{sec:param} In this section, we study the influence of the proposed method's parameters $\lambda$ and $MaxIteration$ on the classification results. First, $\lambda$ controls the contribution of the decoder and encoder in the method, while the second parameter defines the number of iterations required for convergence. The parameters $\lambda$ and $MaxIteration$ are tuned using a grid search over $\left \{ 0.2,0.4,\dots,2 \right \}$ and $\left \{ 1,20,40,\dots,100 \right \}$ respectively. As shown in Figs.~\ref{fig:param}a and \ref{fig:param}b, using the average precision metric on the two datasets, we can observe that with an increasing $\lambda$, the learning performance is improved. Further, we investigate the convergence of the LDFM optimization method. As shown in Figs.~\ref{fig:param}c and \ref{fig:param}d using two datasets, it is clearly seen that our method converges rapidly, within around $10$ iterations, which demonstrates the efficacy and speed of our algorithm. \begin{figure}[htbp] \subfloat[Emotions]{ \includegraphics[height=0.1\textheight,width=0.25\textwidth]{Figures/Parameters_emotions.png} } \subfloat[Reference]{ \includegraphics[height=0.1\textheight,width=0.25\textwidth]{Figures/Parameters_reference.png} } \hfill \subfloat[Emotions]{ \includegraphics[height=0.1\textheight,width=0.2\textwidth]{Figures/Objective_emotions.png} } \subfloat[Reference]{ \includegraphics[height=0.1\textheight,width=0.2\textwidth]{Figures/Objective_reference.png} } \caption{LDFM results on the Emotions and Reference datasets. (a) and (b) the average precision results w.r.t.\ different parameters. (c) and (d) convergence curves} \label{fig:param} \end{figure} \section{Conclusion} This paper proposes a novel semantic multi-label learning model based on an autoencoder. Our proposed method learns the projection matrix to map from the feature space to the semantic space and back. The semantic labels are predicted in the optimization method because they are not explicitly available from the training samples.
We further rank the feature weights in the learned projection matrix for feature selection. The proposed method is simple and computationally fast. We demonstrate through extensive experiments that our method outperforms the state-of-the-art. Furthermore, we demonstrate the efficiency of the proposed method in reconstructing the original data using the predicted labels. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} Topological defects arise in a wide variety of systems ranging from table-top experiments in condensed matter systems\cite{Chuang:1991zz, Hendry:1994,Ruutu:1996,Bauerle:1996,Dodd:1998aan,Carmi:2000zz,Digal:1998ak} to theories of the early Universe, extremely dense stars etc.\cite{Hindmarsh:1994re,Magueijo:2000se,Srivastava:2017itj}. Topological defects mostly form during phase transitions accompanied by spontaneous breaking (SSB) of discrete or continuous symmetries. These defects can also be excited in the SSB phase by external stimulations \cite{Digal:1999ub}. The type of topological defects that are possible depends on the corresponding homotopy group and the spatial dimensions, irrespective of the energetics involved \cite{Zurek:1996sj,Toulouse:1976,Mermin:1979zz}. The processes of formation and evolution of these defects have been extensively studied in the literature. There are many theoretical as well as experimental studies available on the formation of topological defects in condensed matter systems \cite{Srivastava:2001,Chaikin2000}. In the case of high energy physic systems the studies are mostly theoretical. However since topological considerations play a dominant role in their formation and evolution, ideas proposed for high energy physics systems have been tested in condensed matter systems. The theory of formation of topological defects was first proposed in the context of the early Universe by T. W. B. Kibble, known as the Kibble-Mechanism(KM) \cite{Kibble:1976sj}. The Kibble-Mechanism is based on two postulates. In the immediate aftermath of a phase transition the order parameter (OP) in physical space takes values from the order parameter space (OPS). According to the first postulate of KM, immediately after a SSB phase transition the physical space splits into domains, inside of which the OP is roughly uniform. The second postulate, which is also known as the ``Geodesic Rule", states that in between adjacent domains OP field interpolates along the shortest path in the OPS \cite{Kibble:1995aa}. Given an OPS these two postulates can be applied to make definite predictions such as the minimum number of domains required to form a defect, defect densities and defect correlations etc. \cite{Bowick:1992rz,Ray:2001ug}. For example, in two dimensions a minimum of three domains are required to form a vortex or anti-vortex, which is usually located near the junction of these. The probability of formation is estimated to be $0.25$. According to the Kibble Mechanism the formation of defects for a given OPS, spatial dimensions, and the nature of phase transition, does not depend on the energy scales involved, i.e whether in a condensed matter system, in high energy systems or in the early Universe. The Kibble mechanism has been very successful in describing the formation of topological defects during a first order transition. In a first order phase transition, bubbles of SSB phase nucleate in the background of symmetric phase \cite{Srivastava:1991nv}. The bubbles with uniform OP always dominate during the nucleation as these correspond to least action. These bubbles can be identified with the domains proposed in the Kibble Mechanism. These bubbles grow, collide/coalesce with one another converting the symmetric phase into the SSB phase. 
It has been observed in numerical experiments that when two bubbles collide, the OP variation from one bubble to the other interpolates along the shortest path in the OPS validating the geodesic rule of the Kibble mechanism \cite{Srivastava:1991nv,Chakravarty:1992zp}. Though there can be exceptions to this when kinematics allow energetic bubble collisions or there is a small explicit symmetry breaking present apart from the SSB \cite{Copeland:1996jz,Digal:1997ip,Digal:1995vd}. In a second order phase transition formation of topological defects is described by the Kibble-Zurek mechanism \cite{Zurek:1996sj}. In this case it becomes necessary to incorporate the phenomenon of critical slowing down in the Kibble Mechanism which was first pointed out by Zurek. Zurek argued that in a second order phase transition as the system temperature approaches the critical temperature$(T_c)$ from above, the dynamics of the field almost freezes as soon as it enters the regime of critical slowing down $\left[T_c + \epsilon_1,T_c-\epsilon_2\right]$. When the system exits this regime, the field configuration still corresponds to temperature $T_c + \epsilon_1$, even though the temperature is equal to or below $T_c - \epsilon_2$. The field configuration at $T=T_c+\epsilon_1$ is laid out using the two postulates of Kibble mechanism. If the temperature of the system were to remain fixed at $T_c + \epsilon_1$ the domains will keep fluctuating and there will be continuous creation and anihilation of unstable topological defects. Since the system temperature $T \le T_c - \epsilon_2$, i.e the effective potential now has a non trivial OPS, the field inside the domains will roll down to the corresponding nearest point on the OPS. In this process the topological defects will become stable. Though the two postulates of the Kibble Mechanism and the Kibble-Zurek Mechanism, have been very successful in describing the formation and dynamics of topological defects their validity, in particular that of the Geodesic rule has never been tested in the case of $2nd$ order phase transition. It is argued that the Geodesic rule follows from free energy considerations, i.e when the field variation over the distances traces shortest path on the OPS the energy and free energy are minimised. However, the same can be achieved by maximising the entropy. A definitive answer to whether there are deviations to Geodesic rule can come only from the simulations of the partition function. In this work we compute the deviations from the Geodesic rule in the $O(2)-$spin model in two and three spatial dimensions. In this model the spins take values from an unit circle. We also consider $\phi^4$ theory with $U(1)\equiv O(2)$ symmetry in three spatial dimensions to see the effect of radial/magnitude fluctuations on the Geodesic rule. In the relevant temperature range of interest the $\phi$ field takes values from a circular band in the complex $\phi-$plane. The computations involve Monte Carlo simulation of the lattice partition function of both the models. At a given temperature the correlation length $\xi$ is computed. The conventional definition of $\xi$ is not suitable for the Kibble Mechanism as the correlation of the field at $\xi$ separations does not vanish. Therefore we define $\xi_{KM}$ such that field correlation approximately vanishes over $\xi_{KM}$ separations. It is found that $\xi_{KM}$ is significantly larger than $\xi$. 
Once $\xi$ and $\xi_{KM}$ are computed the field configuration over a stretch of $\xi_{KM}$ is mapped onto the field space. Finding length of the image trajectory in the field space requires assumption of the Geodesic rule for variations of the field between nearest neighbour (NN) on the lattice. The simulations results for both the models show significant deviations from the Geodesic rule. The deviation over $\xi$ distances is found to be above $20\%$. Over $\xi_{KM}$ the trajectories no longer prefer the Geodesic path in the field space. Even the maximal violation of the Geodesic rule largely underestimates the defect densities. This is possibly due to dominance of defects arising from thermal fluctuations. These results suggest that even over $\xi_{KM}$ fluctuations dominate the field dynamics. The paper is organised as follows. In section-II and section-III, numerical calculations to determine the validity of geodesic rule are presented for the $O(2)-$spin model and $\phi^4$ theory respectively. The conclusions are presented in section-IV. \section{Geodesic rule in $O(2)-$spin model} The $O(2)-$spin model, also known as the $XY-$model, is a toy spin model on the lattice \cite{Kosterlitz:1973xp} . Each of the spins, $\vec{s}_i$ at the site $i$, is a two component unit vector. The spin $\vec{s}_i$ can also be represented by an angle $\theta_i$ such that $\vec{s}_i = \left(cos\theta_i,sin\theta_i\right)$. The Hamiltonian, \begin{equation} H = -J \sum_{\left<ij\right>} \vec{s}_i.\vec{s}_j, \end{equation} includes ferromagnetic coupling ($J>0$) between the $i-$th and $j-$th spins which are NN's. The $O(2)-$symmetry corresponds to global rotations of the spins which preserve the Hamiltonian. For simplicity, external field breaking the $O(2)$ symmetry explicitly is not considered in the present study. The expression for the partition function is given by, \begin{equation} {\cal{Z}} = \int \prod_i^{N} d\vec{s}_i Exp[\beta \sum_{\left<ij\right>} \vec{s}_i.\vec{s}_j]. \label{prt1} \end{equation} $\beta = J/\kappa T$, where $\kappa$ and $T$ are the Boltzmann constant and temperature respectively. For convenience $T$ is considered in units of $J/\kappa$. It is well known that in three or higher dimensions the system of lattice spins undergoes a $2$nd order ferromagnetic-paramagnetic transition at critical temperature $T_c$. The order parameter for this transition is given by, \begin{equation} \vec{M} = \left<{1 \over N} \sum_i \vec{s}_i\right>. \label{M} \end{equation} The magnitude of $\vec{M}$ acquires non-zero value for temperatures, $T \le T_c$, which leads to SSB of $O(2)$ symmetry. The symmetry broken phase allows for existence of stable topological defects corresponding to the first homotopy group $\pi_1(S^1)\equiv Z$. These defects can exist even above $T_c$ but are unstable. In the case of two physical dimensions there is no magnetisation transition due to dominance of the Goldstone modes. Interestingly the system undergoes the well known Kosterliz-Thouless(KT) transition\cite{Kosterlitz:1973xp}. The spin system consists of a network of vortices at all temperatures. The two phases are characterised by how the vortices interact. The high temperature phase is described by a screened Coulomb gas and a strongly interacting system below the KT transition. Though the two dimensional system is not directly relevant for the formation of topological defects via the Kibble-Zurek mechanism, the distribution of vortices at high temperature serves as an instructive example. 
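As a concrete illustration of how the partition function in Eq.~(\ref{prt1}) can be sampled, the following is a minimal sketch of a Metropolis sweep for the two-dimensional periodic lattice; the lattice size, temperature, number of sweeps and random seed are illustrative choices and this is not the production code used for the results below.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(theta, beta):
    """One sequential Metropolis sweep of the 2D O(2)/XY model.
    theta: (L, L) array of spin angles; beta = J/(kappa*T), J = 1."""
    L = theta.shape[0]
    for i in range(L):
        for j in range(L):
            old = theta[i, j]
            new = rng.uniform(0.0, 2.0 * np.pi)   # proposed angle
            # the energy change involves only the four nearest neighbours
            nn = [theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                  theta[i, (j + 1) % L], theta[i, (j - 1) % L]]
            dE = -sum(np.cos(new - t) - np.cos(old - t) for t in nn)
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                theta[i, j] = new
    return theta

# illustrative parameters: small lattice at a temperature above T_c
L, T = 16, 1.43
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))
for sweep in range(50):
    metropolis_sweep(theta, beta=1.0 / T)
\end{verbatim}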
The results of the deviation from the Geodesic rule in two and three dimensions are very similar. Also there is no reason to expect that the Geodesic rule will not hold in the case of two spatial dimensions. Simulation of the partition function, Eq.\ref{prt1}, involves mainly generating statistically significant spin configurations using Monte Carlo methods at various temperatures $T>T_c$. To generate the configurations the standard Metropolis algorithm\cite{Hasting:1970} is used. In this algorithm the spin $\vec{s}_i$ at site $i$ is rotated by an angle $\theta$ randomly chosen between $0-2\pi$ to obtain $\vec{s}^\prime_i$. The move, $\vec{s}_i \rightarrow \vec{s}_i^\prime$ is accepted if the change in the energy $\Delta E=E^\prime - E < 0$. If $\Delta E >0$ then the move is accepted with probability $Exp(-\beta \Delta E)$. While the spin $\vec{s}_i$ is being updated, the rest of the spins are held fixed. A sweep constitutes the sequential updating of all the spins. Since a new spin configuration is obtained from an old one, there is non-zero autocorrelation between them. To reduce this auto-correlation to a reasonable level a configuration is accepted for computations after about $\sim 10$ sweeps. For simplicity, a square lattice ($N = N_s\times N_s$) and a cubic lattice ($N=N_s\times N_s \times N_s$) are considered in $2$ and $3-$ dimensions respectively. Fig.\ref{fig1} and Fig.\ref{fig2} show a small part of typical 2-dimensional spin configurations generated at temperatures $T=1.43$ and $T=1.25$ respectively. In these figures the purple dots represent vortices and green dots represent anti-vortices. The spin configuration shown in Fig.\ref{fig1} has more fluctuations compared to the configuration in Fig.\ref{fig2} as the temperature is higher for the former. The larger fluctuations lead to higher defect density, so it grows with temperature. \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.49\textwidth} \hskip-1.0cm \includegraphics[width=\textwidth]{b0p7.eps} \caption{A sampled spin configuration\\ at $T=1.6T_c$.} \label{fig1} \end{minipage} \hspace{0.0cm} \begin{minipage}[b]{0.49\textwidth} \hskip-1.0cm \includegraphics[width=\textwidth]{b0p8.eps} \caption{A sampled spin configuration\\ at $T=1.4T_c$.} \label{fig2} \end{minipage} \end{figure} \subsection{Calculation of $\xi$ and $\xi_{KM}$} Conventionally the correlation length $\xi$ is obtained from the correlation function, \begin{equation} c(r)=\left\langle \vec{s}_i\cdot \vec{s}_j\right\rangle - \left\langle \vec{s}_i \right\rangle \cdot \langle\vec{s}_j\rangle,~~r=|{\rm \bf r}_i-{\rm \bf r}_j|, \end{equation} of spins $\vec{s}_i$ and $\vec{s}_j$ located at ${\rm \bf r}_i$ and ${\rm \bf r}_j$ respectively. For large $r$, $c(r)$ is expected to decay as $\sim Exp[-r/\xi(T)]$. Clearly at $r=\xi$, $c(r)\sim 1/e$; consequently $\vec{s}_i$ can not take values independent of $\vec{s}_j$ and vice-versa. For $T=1.18$, $\xi \simeq 4.4$ in lattice units. At $\xi$ separations, the probability that the relative angle $\Delta \theta = \rm{min}\left[|\theta_i-\theta_j|, 2\pi-|\theta_i-\theta_j|\right]$ between $\vec{s}_i$ and $\vec{s}_j$ is below $\pi/2$ is $\sim 65\%$, i.e. $\langle \Delta \theta\rangle \ne \pi/2$ and $\langle \theta_i\theta_j\rangle \ne \pi^2$. According to the Kibble Mechanism $\Delta\theta$, between two adjacent domains, can take any value between $0-\pi$ with uniform probability. Consequently, $\langle \Delta\theta\rangle = \pi/2$ or $\langle \theta_i\theta_j\rangle= \pi^2$.
Therefore $\xi$ does not accurately describe the size of the domains prescribed in the Kibble Mechanism. We define $\xi_{KM}$ as the least physical separation at which the two spins $\vec{s}_i$ and $\vec{s}_j$ are not correlated, i.e. $c(\xi_{KM})\simeq 0$. At $T=1.18$, it so happens that $\xi_{KM}$ is larger than $\xi$ by a factor of $\sim 5$. \subsection{The Geodesic rule over $\xi$ and $\xi_{KM}$ separations} Checking the validity of, or the deviations from, the Geodesic rule requires that a given configuration of the field along a linear stretch in physical space be mapped to a trajectory in the field space. Since the field is defined only on the lattice grid, the image in the field space in the present context will just be a set of points. A continuous path needs to be drawn in the field space such that the points corresponding to spins at NN points in physical space are joined by a continuous line. We assume that the field variation between NN points follows the Geodesic rule and accordingly draw segments of the trajectory in the field space. We expect this assumption to be valid for the temperatures we consider as there is always non-zero correlation between NN points. We mention here that the Geodesic rule is taken into account while discretising a continuum theory on the lattice. In numerical simulations of topological defects this rule is used to locate the defects as well as to compute their windings. In the present case the field space or the OPS is a circle. Note that this circle can be characterised by an angular variable $\theta$ which can take any value in $[0-2\pi]$ with the identification of $\theta=0$ and $\theta=2\pi$. The field configuration along a straight line from ${\rm \bf r}_i$ to ${\rm \bf r}_j$ is mapped to a trajectory on the circle. The spins $\vec{s}_i$ and $\vec{s}_j$ are mapped to the two end points, $\theta_i$ and $\theta_j$ respectively on the OPS. If the Geodesic rule were valid then the length, $\Delta\theta_{ij}$, of this trajectory would be the minimum of $|\theta_i-\theta_j|$ and $2\pi-|\theta_i-\theta_j|$, i.e. $\Delta\theta_{ij}=\rm{min}[|\theta_i-\theta_j|,2\pi-|\theta_i-\theta_j|]$. The length of the trajectory corresponding to the field configuration from ${\rm \bf r}_i$ to ${\rm \bf r}_j$ is given by the absolute value of, \begin{equation} \eta_{ij}(r)=\sum_{k=i}^{j-1} \alpha_k \Delta\theta_{k+1,k}, ~ \Delta\theta_{k+1,k}=\rm{min}\left[|\theta_{k+1}-\theta_k|, 2\pi-|\theta_{k+1}-\theta_k|\right]. \end{equation} $\alpha_k=+1(-1)$, if following the Geodesic rule the trajectory traces a path clockwise(anti-clockwise) in the field space as one goes from the lattice site ${\rm \bf r}_k$ to ${\rm \bf r}_{k+1}$. Note that $\eta_{ij}(r)$ corresponds to the net variation of the angle $\theta$ of the spins over a separation $r$. In case $\Delta\theta_{ij}\ne \eta_{ij}$, it is counted as a deviation from the Geodesic rule. This computation is repeated for different values of $r=2,3,\dots,\xi_{KM}$, resulting in the function $d(r)$ which gives the deviation from the Geodesic rule at separation $r$. In figure \ref{fig3} we show an example of a field configuration along a linear stretch of $\xi_{KM}$ for $T=1.43$. $\xi_{KM}=12$ for this $T$. The corresponding image in the OPS is shown in figure \ref{fig4}. The two end points correspond to $\theta_1 \simeq 0.325\pi$ and $\theta_{13} \simeq 0.292\pi$, with $\Delta\theta_{1,13}=0.033\pi$.
$\eta_{1,13}$ in this case turns out to be $2\pi-\Delta\theta_{1,13}=1.967\pi$, which does not correspond to the shortest path. This configuration is an example where the Geodesic rule is violated. In figure \ref{fig5} another configuration is shown for the same $T$. The corresponding image map is shown in Fig.~\ref{fig6}. The endpoints for this configuration are $\theta_1 \simeq 1.76\pi$ and $\theta_{13} \simeq 0.05\pi$. In this case $\Delta\theta_{1,13}=0.29\pi$ but $\eta_{1,13}$ is found to be $2.29\pi$. This trajectory not only violates the Geodesic rule but also winds the field space, i.e. the OPS, once. This implies that for two given end points in the field space there are many ways in which the Geodesic rule can be violated but only one way in which it is followed. \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{ngoshort.eps} \caption{A configuration of spins over length $\xi_{KM}$ in physical space.} \label{fig3} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{shortngo2.eps} \caption{The corresponding trajectory $\eta_{1,13}$ defined in Eq.(5) on the OPS.} \label{fig4} \end{minipage} \end{figure} \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{ngo.eps} \caption{A configuration of spins over length $\xi_{KM}$ in physical space.} \label{fig5} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{ngeoops.eps} \caption{The corresponding trajectory $\eta_{1,13}$ defined in Eq.(5) on the OPS.} \label{fig6} \end{minipage} \end{figure} Fig.~\ref{fig7} and Fig.~\ref{fig8} show the probability of deviation from the Geodesic rule vs $r$ at various temperatures in the two and three dimensional systems respectively. For simplicity this calculation was carried out along the $x$ and $y$ directions only. The results were found to be independent of direction. The results show that the probability of deviation increases with separation $r$ and with temperature. The upper points on the curves for different temperatures correspond to the deviation at $r=\xi_{KM}$. Note that the deviation at $r=\xi_{KM}$ increases as we approach $T_c$ from above. This is because $\xi_{KM}$ diverges at $T_c$. The larger the $\xi_{KM}$, the more possibilities there are to trace different paths between two end points. It is observed that paths with multiple windings around the OPS are possible. For comparison the deviation at the separation $r=\xi$ is also shown in these figures, indicated by blue dots on the different curves. Though this deviation is smaller than that at $r=\xi_{KM}$, it is still significant. \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{2v.eps} \caption{Geodesic rule deviation in the $2D-$XY model with lattice distance.} \label{fig7} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{3v.eps} \caption{Geodesic rule deviation in the $3D-$XY model with lattice distance.} \label{fig8} \end{minipage} \end{figure} \subsection{ Distribution of vortices in two spatial dimensions} In this section the results on the distribution of vortices are presented for two spatial dimensions. Each of the configurations in the thermal ensemble, generated using the Monte Carlo simulations, is analysed to compute the number of defects and the net winding number inside a $100\times 100$ sub-lattice, which is a quarter of the full system.
Within this sub-lattice each elementary square is scanned for vortices. To determine if there is a defect inside a square the Geodesic rule is applied to specify field variations between NN points. This uniquely determines the winding number of the field along the perimeter. The variation of $\theta$ along a closed path on the lattice is a topological number, i.e. $2\pi n$ for integer $n$. If the $\theta$ of the spins varies by $(-)2\pi$ as the perimeter is traced once in the clockwise direction, the winding is considered to be $(-)1$. The net winding on the full lattice is zero as the lattice is effectively a torus ($T^2$) due to periodic boundary conditions imposed along the $x,y$ directions. Fig.\ref{2_dden} shows $N_D$, the average number of vortices and anti-vortices in the whole lattice. For comparison, we compute the defects inside each elementary square of the three dimensional lattice, ignoring the sign of the winding number. The results are shown in Fig.\ref{3_dden}. In both cases the number of vortices increases with temperature. In Fig.\ref{2_dden} and Fig.\ref{3_dden}, the estimates using the KM are also included, which underestimate $N_D$ by large margins for the temperatures considered here. \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{2ddef.eps} \caption{$N_D$ from Monte Carlo simulations for $2D-$XY model. The lower curve corresponds to estimate from KM. } \label{2_dden} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{3ddef.eps} \caption{$N_D$ from Monte Carlo simulations for $3D-$XY model. The lower curve corresponds to estimate from KM.} \label{3_dden} \end{minipage} \end{figure} The number of vortices for all temperatures $T>T_c$ is way above what is expected from the Kibble Mechanism. It can be easily shown that the probability of having a vortex or anti-vortex inside an equilateral triangle of size $\xi_{KM}$ is $1/4$ \cite{Bowick:1992rz}. If the Geodesic rule is violated by $50\%$ then the probability that there is a defect inside the triangle goes up to $5/8$. This suggests that at finite temperature most of the defects are created via thermal fluctuations. For example, at $T=1.18$, using the Geodesic rule or its violation by $50\%$, the maximum number of defects expected inside the sub-lattice is about $27$ when $\xi_{KM}$ is used for the estimate. The average number of defects in this case is $\sim 700$. This discrepancy reduces when the standard correlation length, $\xi$, and the probability that the Geodesic rule is violated at this temperature are considered. \subsection{Correlation between vortices and anti-vortices} The Kibble Mechanism predicts strong correlation among the vortices \cite{Digal:1998ak}. It is expected that, given a vortex, the probability that the nearest defect is an anti-vortex is higher than the probability that it is a vortex. This can be deduced from the Kibble Mechanism in the following way \cite{Digal:1998ak,Vachaspati:1984dz}. The number of defects within an area $A$ is $N\sim A/\xi^2$. On the other hand the perimeter of this area is $\sim A^{1/2}$. This can be traversed in $N^{1/2}\sim A^{1/2}/\xi$ steps of length $\xi$. Since for each step the average variation of $\theta$ is $\pi/2$, the net variation of $\theta$ along the perimeter can be computed using the method of a random walk of $N^{1/2}$ steps. Hence it is expected that the standard deviation is $\sim N^\nu$ with $\nu=1/4$. For a completely random distribution of vortices $\nu=1/2$.
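Both the defect counting and the net winding number used here rely on applying the Geodesic rule between nearest-neighbour spins; a minimal sketch of this bookkeeping is given below (the helper names and the random test configuration are illustrative assumptions, not the analysis code used for the figures). The function \texttt{geo\_step} returns the signed angle increment obtained from the Geodesic rule, the plaquette winding is the sum of four such steps divided by $2\pi$, and \texttt{eta\_along\_line} gives the net variation $\eta_{ij}$ of Eq.~(5).

\begin{verbatim}
import numpy as np

def geo_step(t1, t2):
    """Signed angle increment from t1 to t2 following the Geodesic rule:
    the shortest way around the circle, returned in (-pi, pi]."""
    d = (t2 - t1) % (2.0 * np.pi)
    return d - 2.0 * np.pi if d > np.pi else d

def plaquette_winding(theta, i, j):
    """Winding number (+1 vortex, -1 anti-vortex, 0 none) of the
    elementary square with lower-left corner (i, j); periodic lattice."""
    L = theta.shape[0]
    corners = [theta[i, j],
               theta[(i + 1) % L, j],
               theta[(i + 1) % L, (j + 1) % L],
               theta[i, (j + 1) % L],
               theta[i, j]]
    total = sum(geo_step(a, b) for a, b in zip(corners[:-1], corners[1:]))
    return int(round(total / (2.0 * np.pi)))

def eta_along_line(thetas):
    """Net angle variation eta_{ij} of Eq. (5) along a linear stretch,
    assuming the Geodesic rule between nearest neighbours."""
    return sum(geo_step(a, b) for a, b in zip(thetas[:-1], thetas[1:]))

# illustrative use on a random configuration
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(16, 16))
n_defects = sum(abs(plaquette_winding(theta, i, j))
                for i in range(16) for j in range(16))
print(n_defects)
\end{verbatim}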
In Fig.\ref{cdist} we plot the distribution of net winding number in the sub-lattice for various temperatures. \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{cd.eps} \caption{The distributions of the net winding number in the sub-lattice at various temperatures.} \label{cdist} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{nu.eps} \caption{The exponent $\nu$ corresponding to the standard deviation in the net winding number distributions.} \label{nuex} \end{minipage} \end{figure} The distributions shown in Fig.\ref{cdist} can be fitted well with gaussians. The exponent $\nu$ is given by the standard deviation of the gaussians. The resulting values of $\nu$ extracted for various temperatures is shown in Fig.\ref{nuex}. The exponent $\nu$ for $T=1.18$ turns out to be $0.18$. For $T=2.5$, $\nu \simeq 0.21$. Note that a constant factor is to be taken into account while comparing these results with the the Kibble Mechanism \cite{Digal:1998ak}. The resulting $\nu$ approaches the value $1/4$ for $T\to \infty$. This makes sense as the geodesic rule is imposed at the scale of lattice spacing and the NN spins are uncorrelated in this limit. In the following we study the Geodesic rule in $\phi^4$ theory. The results are qualitatively similar to the present case of $O(2)-$spins. \section{Geodesic rule in $\phi^4$ theory} The lattice action for the $\phi^4$ theory with $U(1)$ symmetry is taken to be, \begin{equation} S_\phi = -\kappa\sum_{i,\mu}\left( \phi^\dag_i\phi_{i+\hat{\mu}} + h.c \right) + \sum_{i} \left[{1\over 2} \phi^\dag_i\phi_i + \lambda\left(\phi^\dag_i\phi_i-1\right)^2\right] \label{S} \end{equation} where $\mu=x,y,z$. $\hat{\mu}$ represent the unit vector in the $\mu-$th direction. $\lambda$ represents the coupling for the quartic self-interaction of the $\phi$ field. $\lambda$ is fixed to $1$ for the simulations. The NN coupling parameter $\kappa$ plays the role of inverse temperature in this case. The corresponding partition function is given by \begin{equation} {\cal{Z}}= \int \prod_i d\phi_i^* d\phi_i~{\rm Exp}[-S_\phi]. \end{equation} The field $\phi$ being complex can be thought of as a two component vector, whose magnitude can fluctuate unlike in the case of $O(2)-$spins. In the thermodynamic limit the system undergoes a $2nd$ order phase transition at $\kappa_c$. The paramagnetic phase persists upto $\kappa_c$. For $\kappa > \kappa_c$ the system is found to be in the ferromagnetic phase. The equilibrium configurations in this model are generated by using the pseudo heat-bath algorithm. In this algorithm to update $\phi_{i}$ an action $S_i$ is considered, which is obtained from $S_\phi$ by dropping term which does not depend on $\phi_i$. The resulting action $S_i$ is rewritten as a sum of gaussian in $\phi_i$ and a quartic term. The gaussian term is used to generate a new $\phi_i$ which is accepted with probability determined by the change in the quartic term. This process is repeated to update $\phi$ at all the lattice sites. To reduce the auto correlation over relaxation method is used. In addition every $5$th configuration is used for analysis. For details see reference \cite{Bunk:1994xs}. The simulations are carried out for $60\times 60\times 60$ $3D$ lattice. The Fig.\ref{phi4a} and Fig.\ref{phi4b} show two dimensional section of typical field configurations at $\kappa=0.15$ and $\kappa=0.4$ respectively. 
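To illustrate the structure of a local update for the action in Eq.~(\ref{S}), the following sketch evaluates the $\phi_i$-dependent part of the action and performs a plain Metropolis accept/reject step; note that this deliberately replaces the pseudo heat-bath actually used in our simulations with a simpler update, and the lattice size, $\kappa$ value, proposal width and random seed are illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def site_action(phi, i, j, k, kappa, lam=1.0):
    """Terms of the lattice action S_phi (Eq. for S) that depend on the
    complex field at site (i, j, k); periodic boundary conditions."""
    L = phi.shape[0]
    nn_sum = (phi[(i + 1) % L, j, k] + phi[(i - 1) % L, j, k] +
              phi[i, (j + 1) % L, k] + phi[i, (j - 1) % L, k] +
              phi[i, j, (k + 1) % L] + phi[i, j, (k - 1) % L])
    p = phi[i, j, k]
    hop = -2.0 * kappa * np.real(np.conj(p) * nn_sum)
    local = 0.5 * abs(p) ** 2 + lam * (abs(p) ** 2 - 1.0) ** 2
    return hop + local

def metropolis_site(phi, i, j, k, kappa, step=0.5):
    """Plain Metropolis update of one site, accepting with probability
    min(1, exp(-Delta S)); a simplification of the pseudo heat-bath."""
    old = phi[i, j, k]
    s_old = site_action(phi, i, j, k, kappa)
    phi[i, j, k] = old + step * (rng.normal() + 1j * rng.normal())
    s_new = site_action(phi, i, j, k, kappa)
    if s_new > s_old and rng.random() >= np.exp(-(s_new - s_old)):
        phi[i, j, k] = old        # reject the move

# illustrative use: small lattice in the symmetric phase
L, kappa = 8, 0.15
phi = rng.normal(size=(L, L, L)) + 1j * rng.normal(size=(L, L, L))
for i in range(L):
    for j in range(L):
        for k in range(L):
            metropolis_site(phi, i, j, k, kappa)
\end{verbatim}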
\begin{figure}[!tbp] \centering \begin{minipage}[b]{0.49\textwidth} \hskip-1.0cm \includegraphics[width=\textwidth]{p4t2p0.eps} \caption{A sampled field configuration\\ at $T=2T_c$.} \label{phi4a} \end{minipage} \hspace{0.0cm} \begin{minipage}[b]{0.49\textwidth} \hskip-1.0cm \includegraphics[width=\textwidth]{p4t0p76.eps} \caption{A sampled field configuration\\ at $T=0.76T_c$.} \label{phi4b} \end{minipage} \end{figure} At high temperatures, $\kappa < \kappa_c$, the volume and thermal average of $\phi$ is zero; however, $\phi_i=0$ is suppressed due to the vanishing measure. It is found that the image trajectory of any field configuration over a linear stretch $r$ rarely passes through the origin ($\phi=0$) of the field space. Therefore the phase $\theta$ of the field $\phi$ can be used for the computation of the probability of deviation from the Geodesic rule, following the procedures adopted in the case of $O(2)-$spins. $\Delta\theta_{ij}$ and $\eta_{ij}$ are found as functions of the separation $r=|{\bf r}_i-{\bf r}_j|$. As in the case of $O(2)$ spins, whenever $\Delta\theta_{ij}\ne \eta_{ij}$ the Geodesic rule is considered to be violated. Fig.~\ref{fig11} shows the probability of deviation from the Geodesic rule as a function of $r$ for different $\kappa$. The deviation increases with $r$ and with temperature $1/\kappa$, similar to the case of $O(2)-$spins. The red squares on the curves correspond to the deviation at $r=\xi_{KM}$. \begin{figure}[h!] \centering {\rotatebox{360}{\includegraphics[width=0.48\hsize] {rv.eps}} } \caption{Geodesic rule deviation for $\phi^4$ theory.} \label{fig11} \end{figure} \section{Conclusions} We have studied the variations of the field in the $XY-$model in two and three dimensions and in $\phi^4$ theory in three dimensions using Monte Carlo lattice simulations. For two dimensions the lattice size was taken to be $200\times 200$. For three dimensions we consider a $60^3$ lattice. The computations are carried out above the critical temperature to check the validity of the Geodesic rule used in the Kibble-Zurek mechanism of defect formation in 2nd order phase transitions. For this purpose a correlation length $\xi_{KM}$ is defined using the criterion that the field correlation approximately vanishes. This length scale is different from the conventional correlation length $\xi$. At $\xi$ separations the field cannot be considered uncorrelated, which is a crucial requirement for the Geodesic rule. It is found that even at $\xi$ separations the field variation deviates significantly from what is expected from the Geodesic rule. At $\xi_{KM}$ separations non-geodesic paths are found to be equally or more probable than geodesic paths. We also studied the number of vortices and the net winding number in the case of $O(2)-$spins in two dimensions. The Geodesic rule is assumed for field variations between nearest-neighbour spins. This is necessary to probe the location and winding charge of the vortices. The analysis is carried out on a sub-lattice of size $100\times 100$, which is a quarter of the whole lattice. This is necessary as in this case the net winding number is not constrained to be zero. The number of vortices is found to be significantly higher than what is expected from the Kibble-Zurek mechanism. The discrepancy does not improve much even after the deviation from the Geodesic rule is incorporated in the Kibble-Zurek mechanism. The simulation results show that the large number of vortices is a result of thermal production.
This is supported by the results for the distribution of the net winding number. The results show that the corresponding standard deviation is much smaller than what is expected from the consideration of the Geodesic rule. This result suggests that there is significant pairing between the vortices and anti-vortices. It is expected from topological considerations that thermal fluctuations always create vortex--anti-vortex pairs, and at small separations. We mention here that in theories with gauge symmetries there is no satisfactory argument for the Geodesic rule \cite{Rudaz:1992wy}. The trajectory of the field between two domains can be deformed by gauge transformations. The Monte Carlo simulation method presented here can be extended to settle the Geodesic rule in these theories, which we plan to do next. \acknowledgements We thank A. P. Balachandran, Ajit M. Srivastava and Rajarshi Ray for valuable comments. \vspace{17mm} \centerline{\bf REFERENCES}\vskip -20pt
\section{Introduction} Microgels are colloid-size particles made by crosslinked polymer networks generally organized into a dense core and a softer, loose corona~\cite{vlassopoulos2014tunable}. They can be synthesized in a range of diameters going from approximately 50 nm to a few microns~\cite{pelton2004unresolved} and their softness can be tuned for example by varying the number of crosslinkers; this allows to generate a plethora of particles with different behaviour ranging from hard-sphere-like microgels~\cite{schneider2017tuning} for high cross-linkers concentration, to ultrasoft particles when few crosslinks react with monomers~\cite{bachman2015ultrasoft,virtanen2016persulfate}. Besides the tunability in the synthesis protocol, the single most prominent feature of microgels is the ability to adjust their volume to react to a change of the external conditions~\cite{fernandez2011microgel}; this is the case of the widely exploited poly-N-isopropylacrylamide (PNIPAM) microgels which are thermoresponsive and undergo a volume phase transition (VPT) from a swollen to a collapsed state on increasing temperature~\cite{pelton1986preparation, lyon2012polymer,PhysRevE.94.032601}. Depending on the chemistry and the synthesis protocol, microgels can be designed to be responsive also to pH~\cite{nigro2015dynamic}, salt concentration~\cite{lopez2007macroscopically} or external (\textit{e.g} electric) fields~\cite{crassous2015anisotropic}. Thanks to their inner polymeric architecture, microgel colloids thus possess unique properties which make them remarkably suited for practical applications~\cite{galaev2007smart,fernandez2009gels,oh2008development} as well as for fundamental research\cite{yunker2014physics}. Indeed, from the latter point view, the possibility of tuning the volume fraction of the samples \emph{in situ} makes it possible to use microgel particles as a model system to explore fundamental physics problems, as highlighted by several recent works~\cite{seth2006elastic,mattsson2009soft,iyer2009self,peng2015two,rossi2015shape}. The soft nature of microgels can be incorporated, on a first level, by modelling them as partially-penetrable spheres. Indeed, by leveraging the classic Hertzian approach~\cite{landau1986theory}, which describes the elastic response of a medium in the small-deformation regime, it is possible to derive an effective pair potential that is well-suited to describe microgel suspensions which are not too dense~\cite{zhang2009thermal,pamies2009phase,paloli2013fluid,mohanty2014effective}. However this description neglects the polymeric nature of the microgel, thus failing to describe the behaviour of the suspension at high densities, where strong deformations, interpenetration and entanglement effects play an important role in the interactions among particles~\cite{mohanty2017interpenetration,conley2017jamming}. In this respect, computer simulations represent a viable tool to test the range of validity, and hence the limits, of the Hertzian approach. Indeed, a numerical model of microgel particles that incorporates the proper degree of detail would allow to probe the realistic effective interactions beyond the simple elastic repulsion. 
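For orientation, the Hertzian effective pair potential commonly adopted in the works cited above has the form
\begin{equation*}
V_{\rm H}(r)=\epsilon_{\rm H}\left(1-\frac{r}{\sigma_{\rm H}}\right)^{5/2}\Theta(\sigma_{\rm H}-r),
\end{equation*}
where $\sigma_{\rm H}$ is the (state-dependent) particle diameter, $\epsilon_{\rm H}$ an elastic energy scale set by the particle's Young modulus and Poisson ratio, and $\Theta$ the Heaviside step function; the symbols used here are generic and should not be confused with the monomer units introduced below.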
Attempts in this direction have been made only recently~\cite{Ahuali2017} by leveraging microgels obtained from coarse-grained spherical polymer networks that mimic the swelling behaviour of experimental microgels~\cite{claudio2009comparison,jha2011study,doi:10.1021/la400033s,kobayashi2014structure,ghavami2016internal,Ahuali2017,kobayashi2017polymer}. Unfortunately, most of these models are based on unrealistic polymer networks, free from entanglements and made by polymer chains of the same length. In fact, in addition to the qualitative agreement with the experimental microgel behaviour, a full understanding of inter-microgel interactions requires models that are able to account for the inner topology of particles. For instance, non-trivial features of microgels that are deemed to be important are a non-homogeneous distribution of crosslinkers, a continuous distribution of polymer chain-lengths and, arguably, the internal degree of entanglement. In a recent work~\cite{gnan2017silico}, we have proposed a numerical protocol to design neutral microgel particles based on the self-assembly of gel-forming monomers under spherical confinement. Once the disordered network is fully assembled, monomers are simulated using classical polymer interactions. With our method we find that the radius of the initial spherical confinement controls to a high degree the internal architecture of the single particle while maintaining the number of crosslinkers and of monomers constant. In particular, large confinement radii give rise to open and less intertwined polymer networks with larger gyration radii; on the other hand small confinement radii generate small and compact microgel particles. Consequently, the different internal structures, obtained at fixed crosslinker concentration and number of monomers, give rise to different swelling behaviour of the microgels themselves. Using the radius of confinement as an extra parameter of the numerical synthesis we are able to quantitatively reproduce the experimental swelling behaviour of small microgels~\cite{gnan2017silico}. In this work we exploit the developed method to investigate how the percentage of crosslinkers employed in the numerical synthesis influences the structure and the collapse of the microgel across the VPT. We find that the chain-length distribution across the network, for fixed crosslinker concentration, is not influenced by the confinement but is an intrinsic property of the gel-forming monomers employed for the network assembly. In addition, we demonstrate that the chain-size distribution and the average number of monomers per chain can be predicted with an heuristic argument based on the Flory theory. Finally, we show that the effect of the crosslinkers is to increase the compactness of microgels with consequences on the swelling behaviour of the resulting particles. We find that, in all the cases investigated, the internal structure of microgels can be well described in terms of the fuzzy-sphere model, confirming that the assembled particles are made by a compact core surrounded by a lower-density corona, in agreement with experiments~\cite{conley2016superresolution}. \section{Materials and Methods} The initial microgel configuration is built by performing simulations of a binary mixture of patchy particles under spherical confinement, as described in Ref.~\cite{gnan2017silico}. The two species of patchy particles can form up to two (species $A$) and four (species $B$) bonds to mimic the behaviour of monomers and crosslinkers, respectively. 
Bonds are possible only between $A$--$A$ and $A$--$B$ pairs of particles. The microscopic patchy model we employ takes advantage of a recently developed mechanism that greatly enhances equilibration, making it possible to easily generate fully-bonded configurations~\cite{Sciortino2017}. We build microgels made by about $41000$ particles with three different values of the crosslinker concentration $c$, namely $c = 1.4\%$, $3.2\%$ and $5.0\%$, and several values of the radius of confinement $Z$, ranging from $Z = 30\sigma$ to $Z = 70\sigma$ where $\sigma$ is the monomer size. Once the initial configurations are generated, the confinement is removed and the resulting topology is frozen-in by replacing the patchy force field with the Kremer-Grest set of interactions~\cite{kremer1990dynamics}, which models polymers in a good solvent. Within this framework, the covalent bonds are modelled through a finite extensible non-linear elastic (FENE) term as given by \begin{eqnarray} \label{eq:V_FENE} V_{\rm FENE}(|\vec r_{ij}|)= \begin{cases} -\varepsilon \, k_{F}R_0^2 \ln(1-(\frac{r_{ij}}{R_0\sigma})^2) & {\rm if} \quad |\vec r_{ij}|< R_0\sigma \\[0.5em] 0 & {\rm otherwise.} \end{cases} \end{eqnarray} \noindent where $\sigma$ is taken as the unit of length, $\epsilon$ controls the energy scale, $R_{0}=1.5$ sets the maximum extension of the bond and $k_F=15$ is the spring constant. The generic monomer-monomer steric repulsion is given by a Weeks-Chandler-Andersen~\cite{wca} term which reads \begin{eqnarray} V_{\rm WCA}(r) = \begin{cases} 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^6 \right] + \epsilon & {\rm if} \quad r \leq 2^\frac{1}{6}\sigma \\[0.5em] 0 & {\rm if} \quad r > 2^\frac{1}{6}\sigma \end{cases} \label{eq:WCA} \end{eqnarray} The possibility of tuning the quality of the solvent is provided by an additional interaction term, $V_{\alpha}$, acting between all monomer pairs. The strength of this term is controlled by the parameter $\alpha$, which implicitly sets the solvophobicity of the monomers and plays the role of an inverse temperature~\cite{soddemann2001generic,verso2015simulation}. This attractive term reads, \begin{eqnarray} \label{eq:V_alpha} V_{\alpha}(r) = \begin{cases} -\varepsilon\alpha & {\rm if} \quad r \leq 2^{1/6} \sigma\\[0.5em] \frac{1}{2}\alpha\varepsilon [\cos(\gamma (r/\sigma)^2+\beta) -1] & {\rm if} \quad 2^{1/6}\sigma < r\leq R_{0} \sigma\\[0.5em] 0 & {\rm otherwise} \end{cases} \end{eqnarray} \noindent where $\gamma=\pi (2.25-2^{1/3})^{-1}$ and $\beta=2\pi-2.25\gamma$~\cite{soddemann2001generic}. We simulate single microgels by means of molecular dynamics simulations performed in the isothermal ensemble at constant reduced temperature, $T^*=k_BT/\epsilon=1.0$ where $k_B$ is the Boltzmann constant and $k_B/\epsilon=1$. The temperature is kept fixed by a Nos\`{e}-Hoover thermostat and the integration is carried out with a leap-frog scheme with a reduced time step $\delta t^*=\delta t\sqrt{\epsilon/ (m \sigma^2 ) }=0.001$, where $m$ is the mass of the monomer which is set to $m=1$. We vary the solvophobic parameter from $\alpha=0$ (where the system is in a good solvent) to $\alpha=1.5$, for which a complete collapse of the microgel is observed. We average all the investigated quantities over a number of distinct initial configurations, depending on the radius of the initial confinement. \section{Results} \begin{figure}[h!] 
\includegraphics[width=0.5\textwidth,clip]{sketch.png} \caption{\label{fig:sketch} A simulation snapshot of a microgel generated with $c=1.4\%$ and $Z=50\sigma$. The bottom right-hand corner shows a slab of thickness $20 \sigma$, cut out to showcase the radial heterogeneity of microgels. In this part, monomers are coloured according to their distance $d$ from the microgel centre of mass: for this specific case, grey monomers with $d \lesssim 34\sigma$ belong to the core region, green monomers with $34 \sigma \lesssim d \lesssim 52 \sigma$ are part of the corona, while violet monomers with $d\gtrsim 52\sigma$ compose the so-called dangling chains.} \end{figure} Figure~\ref{fig:sketch} provides a visual representation of a portion of a microgel generated \textit{in silico} with the method described above. The disordered nature of the polymer network is evident. Monomers are coloured according to their distance $d$ from the microgel centre of mass to illustrate the presence of (at least) three different regions inside the microgel structure. In the centre there is a uniformly dense core, followed by a corona, whose average density decreases as the distance from the centre of mass increases. Finally, there is an extremely diluted outermost layer composed of particles that are part of long chains or loops that stick out of the corona. These \textit{dangling ends} are usually not accounted for by the models employed to describe microgels, such as for example the widely used fuzzy sphere model described below, but they are important for determining a number of properties of the microgels, such as their hydrodynamic radius $R_H$ or their effective interactions~\cite{boon2017swelling,gnan2017silico}. In what follows we will quantitatively analyse the composition of the polymer network, its shape and size and how these are affected by a change in the quality of the solvent. \subsection{Number of chains, average chain-length and size distribution} The behaviour of a polymer network is highly sensitive to its topology~\cite{rubinstein2003polymer}. We therefore start the analysis of the structure of single microgels by looking at the properties of the chains that make up the polymer networks. To this end we take advantage of the self-assembly process of the employed binary mixture of patchy particles to build the initial network. Following Flory~\cite{flory1953principles}, in a binary mixture of $N_A$ bifunctional particles and $N_B$ particles with functionality $f_B>2$, all having identical reactive sites, the distribution $N_l$ of the size $l$ of the chains that connect the branching points can be written as \begin{equation} \label{eq:csd_flory} N^{\rm Flory}_l = N_A (1 - p_A p_b)^2 (p_A p_b)^{l - 1} \end{equation} where $p_A = \frac{2 N_A}{2 N_A + f_B N_B}$ is the fraction of patches of type $A$ and $p_b$ is the bonding probability (in the language of Flory, $p_b$ is the probability that a randomly-chosen site has reacted). $N_l^{\rm Flory}$ is normalised such that $\sum_{l=1}^\infty l N_l = N_A$ and $\sum_{l=1}^\infty N_l = N_c$, where $N_c$ is the total number of chains. Consequently, the average chain length in the Flory approach $\langle l \rangle^{\rm Flory}$ is given by \begin{equation} \langle l \rangle^{\rm Flory} = \frac{\sum_{l=1}^\infty l N_l}{\sum_{l=1}^\infty N_l} = \frac{N_A}{N_c}.
\end{equation} \noindent In the fully-bonded limit, $p_b \to 1$ and hence $N_l^{\rm Flory} \to N_A (1 - p_A)^2 (p_A)^{l - 1}$, $N_c \to N_A (1 - p_A) = N_A p_B$ and $\langle l \rangle^{\rm Flory} \to p_B^{-1}$, where $p_B = 1 - p_A$ is the fraction of patches of type $B$~\cite{bianchi_FS}. The same result can be obtained by considering that, for $p_b \to 1$, the number of $B$-sites connected to $A$-sites, which is, by definition, also the number of chain ends, is $N_e = f_B N_B p_A = 2 N_A p_B$. Since each chain has two ends, $\langle l \rangle^{\rm Flory} \to N_A/N_c = 2 N_A/N_e = p_B^{-1}$. In the system considered here, bonding between $B$ sites is forbidden and hence the Flory approach does not hold. However, we can still compute the total number of chain ends in a fully-bonded system by using a similar procedure. Indeed, since $B$-sites can only be bonded to $A$-sites, then $N_e = f_B N_B$. It follows that, for $p_b \to 1$, \begin{equation} \label{eq:asympt_avg_length} \langle l \rangle^{\rm asympt}= \frac{2N_A}{f_B N_B} = \frac{p_A}{p_B}. \end{equation} \noindent Since $\langle l \rangle^{\rm asympt}= \sum_{l=1}^\infty l N_l / \sum_{l=1}^\infty N_l$ and $\sum_{l=1}^\infty l N_l = N_A$, if we assume that, following Eq.~\eqref{eq:csd_flory}, the chain-size distribution has an exponential form, we obtain \begin{equation} \label{eq:asympt_csd} N_{l}^{\rm asympt}= N_A \left( \frac{p_B}{p_A} \right)^2 \left( \frac{p_A - p_B}{p_A} \right)^{l-1} \end{equation} \begin{figure}[h!] \includegraphics[width=0.5\textwidth,clip]{avg_chain_length.pdf} \caption{\label{fig:avg_length}Average chain length $\langle l \rangle$ for microgels with different crosslinker concentrations as a function of the initial radius of confinement. Open circles are simulation data, filled light squares and dark diamonds are the asymptotic (Eq.~\eqref{eq:asympt_avg_length}) and non-asymptotic (Eq.~\eqref{eq:phenomenological_avg_length}) theoretical values, respectively. Simulation data are always almost completely hidden behind the results of the heuristic theory, showcasing the very good agreement between the two.} \end{figure} Figure~\ref{fig:avg_length} shows the average chain length $\langle l \rangle$ for all the generated microgels comparing the numerical results with the theoretical ones. In particular, the asymptotic ($p_b \to 1$) values, computed according to Eq.~\eqref{eq:asympt_avg_length}, are always a few percent larger than the simulation data. The agreement between theory and data, which is already very good, can be further improved by reintroducing the dependence of $N_l$ on $p_b$, which is always very close to but not exactly one ($p_b \gtrsim 0.998$). This operation can be heuristically performed by noting that, in the original Flory expression, Eq.~\ref{eq:csd_flory}, taking the $p_b \to 1$ limit is effectively equivalent to substituting $p_A p_b$ with $p_A$. By making the opposite substitution in Eq.~\ref{eq:asympt_csd} we obtain the following $p_b$-dependent expressions for $N_l$ and $\langle l \rangle$ for our model: \begin{align} \label{eq:phenomenological_csd} N_l & = N_A \left( \frac{1 - p_Ap_b}{p_Ap_b} \right)^2 \left( \frac{2p_Ap_b - 1}{p_Ap_b} \right)^{l-1}\\ \label{eq:phenomenological_avg_length} \langle l \rangle & = \frac{p_Ap_b}{1 - p_Ap_b}. \end{align} \noindent We stress that the above relations were not formally derived and therefore should be considered heuristic in nature.
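Despite their heuristic character, Eqs.~\eqref{eq:asympt_avg_length}--\eqref{eq:phenomenological_avg_length} are straightforward to evaluate. The short Python sketch below does so; note that the conversion of the crosslinker concentration into $N_A$ and $N_B$ (here simply $c=N_B/(N_A+N_B)$ with tetravalent crosslinkers, $f_B=4$) and the chosen values of $N$ and $p_b$ are assumptions made for this illustration only, not a description of the actual network-generation code.
\begin{verbatim}
# Minimal sketch: chain statistics predicted by the expressions above.
# Assumptions (not fixed by the text): c = N_B/(N_A+N_B), crosslinkers are
# the tetravalent species (f_B = 4), N ~ 41000 monomers, p_b ~ 0.998.

def chain_statistics(N=41000, c=0.014, f_B=4, p_b=0.998, l_max=10):
    N_B = round(c * N)                       # number of crosslinkers
    N_A = N - N_B                            # bifunctional monomers
    p_A = 2 * N_A / (2 * N_A + f_B * N_B)    # fraction of A-type patches
    p_B = 1 - p_A

    l_asympt = p_A / p_B                     # fully-bonded limit
    l_pheno = p_A * p_b / (1 - p_A * p_b)    # p_b-dependent estimate

    # p_b-dependent chain-size distribution; for p_b -> 1 it reduces to
    # the asymptotic exponential form
    x = p_A * p_b
    N_l = [N_A * ((1 - x) / x) ** 2 * ((2 * x - 1) / x) ** (l - 1)
           for l in range(1, l_max + 1)]
    return l_asympt, l_pheno, N_l

for c in (0.014, 0.032, 0.050):
    la, lp, _ = chain_statistics(c=c)
    print(f"c = {c:.3f}: <l>_asympt = {la:.2f}, <l>(p_b = 0.998) = {lp:.2f}")
\end{verbatim}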
Nonetheless, we find a nearly perfect overlap between simulation data and values calculated from Eqs.~(\ref{eq:phenomenological_csd})-(\ref{eq:phenomenological_avg_length}), as shown in Fig.~\ref{fig:avg_length}. \begin{figure}[h!] \includegraphics[width=0.5\textwidth,clip]{csd_R40.pdf} \caption{\label{fig:csd_R40}Simulation (points) and theoretical (lines) chain-size distributions $N_l$ for microgels with different crosslinker concentrations generated with an initial radius of confinement $Z = 40\sigma$. The theoretical curves come from Eq.~\ref{eq:asympt_csd}.} \end{figure} We can go one step further and compare the theoretical and numerical chain-size distributions. Fig.~\ref{fig:csd_R40} shows $N_l$ for microgels with different crosslinker concentrations and $Z = 40\sigma$, comparing the simulation data (points) with the theoretical expression given by Eq.~\eqref{eq:asympt_csd}. We note that the use of Eq.~\eqref{eq:asympt_csd} or Eq.~\eqref{eq:phenomenological_csd} to calculate $N_l$ yields indistinguishable results within the statistical noise of the data. We thus find that the chain-size distributions are always exponential, supporting the initial assumption used in the derivation of the theoretical expression. Moreover, the agreement between simulation data and theory is excellent in the whole range of sizes observed in simulation. We note in passing that the same qualitative picture (exponential $N_l$, excellent agreement with Eq.~\eqref{eq:asympt_csd}) holds true for all the investigated microgels. These results confirm that the properties of the chains do not depend on the choice of the confinement radius $Z$. Thus, $Z$ is a useful parameter that allows one to control the size and the swelling behaviour of the microgels, but it does not affect the connectivity properties of the chains. However, the network topology and thus the number of entanglements can be varied in this way, and we will consider their characterisation in future work. \subsection{Form factors and density profiles in the swollen regime} Experimentally, the structure of microgels is most often probed with neutron or X-ray scattering techniques. The output of such experiments is the form factor $P(q)$, which provides a description of the microscopic structure in reciprocal space. The form factor itself is then usually fitted to a model with a minimal number of adjustable parameters to obtain the structure in real space. In the case of microgels, the most used model is the so-called \textit{fuzzy sphere} model~\cite{stieger2004small}, which assumes a constant-density core of radius $R'$ and a smearing parameter $\sigma_{\rm surf}$ that is linked to the size of the outer corona. The static and dynamic inhomogeneities of the underlying polymer networks are accounted for by a Lorentzian term of amplitude $I(0)$ and correlation length $\xi$. All in all, the fuzzy-sphere model gives the expression \begin{equation}\label{eq:fuzzy} P(q)=\left[\frac{3 [ \sin(qR') -qR' \cos(qR')] }{(qR')^3} \exp\left(\frac{-(q\sigma_{\rm surf})^2}{2}\right)\right]^2 + \frac{I(0)}{1+\xi^2 q^2}. \end{equation} \begin{figure}[h!]
\includegraphics[width=0.5\textwidth,clip]{pq_alpha0.pdf} \caption{\label{fig:pq_alpha0} Form factors $P(q)$ directly calculated from simulations (points) and obtained from a fuzzy-sphere model (Eq.~\eqref{eq:fuzzy}) fit (lines) for microgels of different crosslinker concentrations at three different values of the radius of confinement $Z$.} \end{figure} In numerical simulations, the form factor can be directly calculated as $P(q)=(1/N)\sum_{ij}\langle\exp(i\vec{q}\cdot\vec{r}_{ij})\rangle$, where the sum runs over all the monomer couples $i,j$ and the angular brackets represent an average taken over an ensemble of configurations. Figure~\ref{fig:pq_alpha0} shows the form factors of numerical microgels at all the investigated crosslinker concentrations and for three values of the radius of confinement. The $P(q)$ is averaged over two to four different realizations of microgels for each studied case. At first glance we already see that the position and number of the peaks, which are linked to the size and softness of the microgel, respectively, depend on both $c$ and $Z$. The effect of varying either of these parameters is qualitatively similar: microgels generated with higher $c$ or smaller $Z$ are more compact and more structured, and \textit{vice versa}. The effect of varying $c$ and $Z$ can be quantified by fitting the numerical $P(q)$ to Eq.~\eqref{eq:fuzzy}. The fit allows us to extract the parameters $R'$ and $\sigma_{\rm surf}$, which are needed in order to predict the density profile of the microgel, which reads~\cite{stieger2004small}: \begin{eqnarray} \label{eq:profiles} \frac{\rho(r)}{\rho_0}= \begin{cases} 1 & {\rm if} \quad r < R_c \\[0.5em] 1 - \frac{(r - R' + 2\sigma_{\rm surf})^2}{8\sigma_{\rm surf}^2} & {\rm if} \quad R_c\leq r < R' \\[0.5em] \frac{(R' - r + 2\sigma_{\rm surf})^2}{8\sigma_{\rm surf}^2} & {\rm if} \quad R'\leq r< R'' \\[0.5em] 0 &{\rm if} \quad r\geq R'' \end{cases} \end{eqnarray} \noindent where $R_c=R'-2\sigma_{\rm surf}$ is the radius of the constant-density part of the particle, $\rho_0$ is its number density and $R''=R'+2\sigma_{\rm surf}$ is the total radius, including the fuzzy shell. The solid lines in Figure~\ref{fig:density_profile_alpha0} are obtained from Eq.~\eqref{eq:profiles} using the parameters extracted from the form factors. The agreement is excellent for all the microgels over the whole investigated range. The oscillatory character of the density profile in the core region for $Z=70\sigma$ highlights the importance of averaging over several realizations, particularly for the case of large confinement radii. In all cases, a core-corona structure is retained with a sharper variation of the profiles when the confining radius is smaller and with increasing crosslinker concentration. \begin{figure}[h!] \includegraphics[width=0.55\textwidth,clip]{density_profiles_alpha0.pdf} \caption{\label{fig:density_profile_alpha0}Simulation (points) and fitted (lines) density profiles $\rho(r)$ for microgels of different crosslinker concentrations at two different values of the radius of confinement $Z$.} \end{figure} \subsection{Swelling behaviour} The ability to respond to a change of the external conditions is the most important property of microgels. The response to external stimuli depends sensitively on a large number of parameters, from the chemical composition of the particle to the synthesis protocol as well as the physical nature of the conditions that are being changed.
Here we investigate how \textit{in-silico} microgels generated with different crosslinker concentrations and radii of confinement change their size when the quality of the solvent varies. This is achieved in our model by the variation of the control parameter $\alpha$. \begin{figure}[h!] \includegraphics[width=0.5\textwidth,clip]{swelling.pdf} \caption{\label{fig:swelling}Swelling curves of microgels generated with different crosslinker concentrations and $Z = $ (left) $30\,\sigma$, (middle) $50\,\sigma$ and (right) $70\,\sigma$.} \end{figure} Figure~\ref{fig:swelling} shows the so-called swelling curves for all investigated values of $c$ and for three selected values of $Z$ as a function of $\alpha$. These plots show the gyration radius $R_g$ of the particles, normalised by $R_g^{\rm max} \equiv R_g(\alpha = 0)$, its value in the maximally swollen conditions. For all cases considered the behaviour is qualitatively similar: all curves start from unity and decrease monotonically as $\alpha$ increases. The volume phase transition temperature (VPTT), which for PNIPAM microgels is around $T_{VPT}\sim 32\degree$C, can be identified in our model with the inflection point that develops at $\alpha \simeq 0.6$. For large values of $\alpha$, that is, when the polymer network is in a bad solvent, microgels collapse on themselves and their size plateaus at some asymptotic value. This value is known to be dependent on the crosslinker concentration as well as on the synthesis protocol~\cite{fernandez2011microgel}. Here we are able to reproduce both effects. Indeed, the change in relative size between the swollen and collapsed states is less marked as the crosslinker concentration increases or the extent of the initial confinement decreases. Therefore, the two parameters we can control, $c$ and $Z$, have the same qualitative effect on the swelling behaviour. From a quantitative standpoint, however, their effect is markedly different. Figure~\ref{fig:swelling} shows that increasing $c$ by a factor of two or three yields only a $10$ -- $20\%$ difference in the swelling behaviour, whereas a similar change in $Z$ produces microgels that are almost twice as soft. The swelling behaviour can be characterised in more detail by looking at the internal structure of microgels as $\alpha$ increases. \begin{figure}[h!] \includegraphics[width=0.5\textwidth,clip]{{pq_alpha0_6}.pdf} \caption{\label{fig:pq_alpha0_6}Simulation (points) and fitted (lines) form factors $P(q)$ for microgels of different crosslinker concentrations at three different values of the radius of confinement $Z$, close to the volume phase transition ($\alpha = 0.6$).} \end{figure} Figure~\ref{fig:pq_alpha0_6} shows the form factors, and the accompanying fuzzy-sphere-model fits, of selected microgels for $\alpha = 0.6$, i.e. close to the VPT (see Fig.~\ref{fig:swelling}). From a qualitative standpoint, the enhanced solvophobicity of the monomers does not change the trends with $c$ and $Z$ observed for the $\alpha = 0$ case, as seen in Fig.~\ref{fig:pq_alpha0}. Quantitatively, we observe a slight increase in the number and in the sharpness of the peaks, especially at small values of the initial confinement. The behaviour of $P(q)$ is again well-described by the fuzzy-sphere model for all the values of $\alpha$ considered here, making it possible to closely follow the evolution of the values of the fitting parameters through the whole volume phase transition, from the swollen to the collapsed state. \begin{figure}[h!]
\includegraphics[width=0.5\textwidth,clip]{R_fit.pdf}\\ \includegraphics[width=0.5\textwidth,clip]{sigma_fit.pdf}\\ \includegraphics[width=0.5\textwidth,clip]{xi_fit.pdf} \caption{\label{fig:fit_parameters}The fuzzy-sphere fitting parameters core radius $R'$, smearing parameter $\sigma_{\rm surf}$ and network correlation length $\xi$ of microgels generated with different crosslinker concentrations and three different values of $Z$. Lines are guides to the eye.} \end{figure} Figure~\ref{fig:fit_parameters} shows the core radius $R'$, the smearing parameter $\sigma_{\rm surf}$ and the network correlation length $\xi$ obtained by fitting the form factors of the microgels to the fuzzy-sphere model as the quality of the solvent varies. The effect of increasing $\alpha$ is, in all cases, monotonic: the sizes of the core and of the corona (linked to $R'$ and $\sigma_{\rm surf}$, respectively) decrease, while $\xi$ increases (within numerical noise). From a qualitative standpoint, Figure~\ref{fig:fit_parameters} shows once again that the effect of the crosslinker concentration and of the initial radius of confinement is similar: smaller $Z$ and larger $c$ tend to produce smaller (and hence less swellable) microgels and \textit{vice versa}. The behaviour of the two quantities linked to the particle size, $R'$ and $\sigma_{\rm surf}$, is qualitatively similar to the behaviour of $R_g$, as seen in Figure~\ref{fig:swelling}. However, here the contributions to the swelling given by the core and by the corona are decoupled and can be analysed separately. On one hand, the extent of the variation of $R'$ with $\alpha$ closely resembles the one of $R_g$ for all investigated values on $c$ and $Z$. On the other hand, the relative change in size of $\sigma_{\rm surf}$ due to the worsening of the solvent quality is much stronger: for the softest microgel ($c=1.4\%$, $Z = 70$), there is a sevenfold difference between the values of the smearing parameter in the swollen and collapsed states. This striking dependence on $\alpha$ originates from the low density of the corona, which is mainly composed of loosely crosslinked chains which, in the swollen state, tend to maximise the available volume by expanding, thus increasing their entropy. However, as the quality of the solvent worsens, the balance between entropy and enthalpy starts to favour the latter. As a result, the density of the outer part increases much faster than the density of the core, greatly reducing the extent of the corona, as signalled by the relative drop of $\sigma_{\rm surf}$. The other meaningful parameter that can be extracted from the fits is the network correlation length $\xi$. It is commonly linked to the spatial heterogeneity of the polymer network over intermediate length scales. We note that $\xi$ is a phenomenological quantity which is implicitly defined by the fitting function and thus cannot be directly computed starting from the equilibrium configurations obtained in simulations. The bottom panel of Figure~\ref{fig:fit_parameters} shows that $\xi$ is a monotonically-increasing function of $\alpha$ and depends very weakly on $c$, while it is more susceptible to changes in $Z$. We note that the commonly observed behaviour of $\xi$ in experiment is to decrease with temperature, rather than increase~\cite{shibayama1992,nigro_xi}. More work is required to understand the origin of such a difference. 
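For reference, the two expressions entering these fits, the form factor of Eq.~\eqref{eq:fuzzy} and the density profile of Eq.~\eqref{eq:profiles}, can be evaluated directly. The Python sketch below does so for placeholder parameters (in units of $\sigma$) chosen only for illustration; they do not correspond to any of the fitted values discussed in the text.
\begin{verbatim}
import numpy as np

def fuzzy_sphere_Pq(q, Rp, sigma_surf, I0, xi):
    """Fuzzy-sphere form factor plus Lorentzian fluctuation term."""
    qR = q * Rp
    core = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3
    return (core * np.exp(-(q * sigma_surf) ** 2 / 2)) ** 2 \
           + I0 / (1 + xi**2 * q**2)

def fuzzy_sphere_profile(r, Rp, sigma_surf):
    """Piecewise density profile rho(r)/rho_0 associated with the fit."""
    Rc = Rp - 2 * sigma_surf      # end of the constant-density core
    Rtot = Rp + 2 * sigma_surf    # outer edge of the fuzzy shell
    return np.where(r < Rc, 1.0,
           np.where(r < Rp, 1 - (r - Rp + 2 * sigma_surf) ** 2
                                / (8 * sigma_surf**2),
           np.where(r < Rtot, (Rp - r + 2 * sigma_surf) ** 2
                              / (8 * sigma_surf**2), 0.0)))

# illustrative evaluation with placeholder parameters (units of sigma)
q = np.logspace(-2, 0, 200)
r = np.linspace(0.0, 60.0, 300)
Pq = fuzzy_sphere_Pq(q, Rp=35.0, sigma_surf=6.0, I0=1e-4, xi=2.0)
rho = fuzzy_sphere_profile(r, Rp=35.0, sigma_surf=6.0)
\end{verbatim}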
At very large $Z$ we observe a non-monotonic behaviour with $c$ which we ascribe to the lack of structure in these very loosely confined microgels which makes the fitting (and the interpretation of the parameters) less obvious. At high values of $\alpha$ we do not see the collapse of all the curves (regardless of $Z$ and $c$) on a single master value, as seen for the cases of $R'$ and $\sigma_{\rm surf}$. Looking at Figure~\ref{fig:pq_alpha0_6}, we see that, as the confinement gets looser and looser, form factors retain their overall shape but become less structured, displaying shallower dips and fewer oscillations. While we have not directly probed the internal structure of the collapsed microgels, we ascribe such a difference in the form factors, and hence in the values of the fitting parameter $\xi$, to an amplification of the difference in the heterogeneities of the structure which occurs during the collapse. The source of the initial difference in heterogeneity, signalled by the different values of $\xi$ at $\alpha=0$, is due to the different network structure generated during the assembly stage. Indeed, networks assembled by patchy-like particles are known to have structure factors that exhibit steeper and steeper low-$q$ behaviours as the density decreases~\cite{demichele2006dynamics,rovigatti2012structural}. \begin{figure}[h!] \includegraphics[width=0.5\textwidth,clip]{RhvsRSAXS_LR.pdf} \caption{\label{fig:RhvsRg}Fuzzy-sphere ($R''$) and hydrodynamic ($R_H$) radii as functions of $\alpha$ for microgels generated with $c = 1.4\%$, $3.2\%$ and $5\%$ and three different values of $Z$, as indicated at the right and top of the plot, respectively.} \end{figure} As a last point, we investigate how the total fuzzy-sphere radius $R''$ and the hydrodynamic radius $R_H$, which is defined as the radius at which $\rho(r)/\rho(0) = 10^{-3}$, as done in Ref.~\cite{gnan2017silico}, change with $c$, $Z$ and $\alpha$. We use $R''$ and $R_H$ as proxies for the SANS and DLS radii measured experimentally, respectively. These quantities are known to differ, with the latter being always larger than the former~\cite{fernandez2011microgel}. Figure~\ref{fig:RhvsRg} shows $R''$ and $R_H$ as functions of the solvent quality for microgels generated with different $c$ and $Z$. Both quantities monotonically decrease with $\alpha$ and $c$ and monotonically increase with $Z$. Interestingly, the hydrodynamic radius exhibits a larger variation across the volume phase transition. In a good solvent, \textit{i.e.} $\alpha \to 0$, dangling ends or very long closed loops can stick out of the corona and increase the effective size of particle, thereby boosting the value of $R_H$. When the presence of these loosely connected chain portions is higher, as it is the case for larger values of $Z$ and smaller values of $c$, the effect is enhanced. However, in the collapsed state, the dangling ends are mostly attached to the particle surface and thus do not contribute significantly to the value of $R_H$. As a result, $R_H$ approaches $R''$ as $\alpha$ increases. \section{Conclusions} In this work we have investigated the internal structure of numerical microgels across the volume phase transition as the crosslinker concentration $c$ and the initial assembly conditions vary. 
The flexible protocol we have recently introduced to generate these microgels is based on the self-assembly of a confined mixture of bivalent and tetravalent patchy monomers that gives rise to an almost fully bonded network assembled in a spherical confinement ($95\%-99\%$ of particles belong to the largest cluster). For the present study, we design microgels with a total number of monomers $N\sim 41000$; by assuming each coarse-grained monomer to have a size comparable with the Kuhn length, the resulting microgels are the numerical analogue of experimental PNIPAM microgels of small size. A direct comparison with experimental data, made possible by the small size of the employed microgels, is discussed in a recent work~\cite{gnan2017silico}. We plan to draw similar comparisons with other small microgels of different crosslinker concentrations in the near future. We have investigated three crosslinker concentrations, namely $c=1.4\%, 3.2\%, 5.0\%$, and several initial confining radii $Z$, ranging from $Z=30\sigma$ to $Z=70\sigma$. We find that the internal architecture of microgel particles is controlled qualitatively in the same way by both $c$ and $Z$; large $c$ and small $Z$ values generate compact microgels, while large confining radii and small $c$ values enhance the formation of more heterogeneous and softer networks. However, while $Z$ acts on the degree of entanglement of the polymer chains in the network, the crosslinker concentration modifies the chain-length distribution. We have shown that the latter can be derived theoretically from Flory theory, in which we have included the requirement that bonds among crosslinkers are absent. The effect of $Z$ and $c$ on the microgel structure is also reflected in the numerical form factors: we find that strongly confined microgels or microgels with high $c$ display form factors with several sharp peaks, an indication of the presence of a structured crosslinked network. On the other hand, small crosslinker concentrations and large confining radii result in form factors with few shallow peaks. This also holds for larger values of the solvophobic parameter $\alpha$, which plays the role of a temperature and controls the degree of swelling of the microgel particles. By fitting the form factors to the fuzzy-sphere model we have extracted the density profiles of the microgels, finding that they are in good agreement with the numerical density profiles directly calculated from simulations. Interestingly, we observe from such fits that, in the collapsed state beyond the VPT, a difference between microgels generated at large and small $Z$ still persists, even if the radius of gyration is almost the same for all $Z$ and $c$ values. This difference could be related to non-equilibrium effects occurring in the monomer dynamics at high values of $\alpha$; indeed, during the collapse monomers could get trapped in a disordered arrested state whose microscopic structure would depend on the initial conditions of the microgel structure, and hence on $Z$. Further investigation of the monomer dynamics is needed to better understand this point. Finally, we have also investigated the difference between the total radius of the fuzzy sphere and its hydrodynamic radius. The latter accounts for the presence of dangling ends: a large difference between $R_H$ and $R''$ indicates the presence of several of these loosely connected portions of the outer corona. We have found that $R_H$ and $R''$ differ more for large confining radii and small crosslinker concentration.
However, in the collapsed state the long chains, which for small $\alpha$ were free to move outside the corona radius, collapse onto themselves and attach onto the core of the microgel, thus almost cancelling the difference between $R_H$ and $R''$. The analysis proposed in this work allows us to better understand the structural properties of microgels generated with our novel synthesis protocol, in an effort towards the design of realistic microgels that encode the correct properties of experimental microgel particles. This effort is ultimately aimed at extracting reliable effective interactions and at going beyond the widely used, but over-simplified, Hertzian model~\cite{mohanty2014effective}. \section*{Acknowledgements} LR and NG contributed equally to this work. We acknowledge support from the European Research Council (ERC Consolidator Grant 681597, MIMIC).
\section{\bf Introduction } Resonant scattering amplitudes near inelastic thresholds cannot be accurately described by the Breit-Wigner formula. About 30 years ago S. M. Flatt\'e proposed the following parameterization of the $S$-wave $\pi\eta$ and the $\rm K \overline{K}$ production amplitudes, which are dominated by the $a_0(980)$ resonance \cite{Flatte}: \begin{equation} A_j \sim \frac{M_R \sqrt{\Gamma_0 \Gamma_j}}{M_R^2-E^2-iM_R(\Gamma_1+\Gamma_2)},~~~j=1,2. \label{Flatte} \end{equation} Here $M_R$ is the resonance mass, $E$ is the effective mass (c.m. energy) and $\Gamma_j=g_j k_j$ are the channel widths, $k_1$ is the pion (or $\eta$) c.m. momentum, $k_2$ is the kaon c.m. momentum and $g_j$ are the channel coupling constants. Below the $\rm K \overline{K}$ threshold $\Gamma_2$ is imaginary: $\Gamma_2= i g_2 p_2$, where $ p_2=\sqrt{m_K^2-\frac{E^2}{4}}$, $m_K$ being the kaon mass. At the $\rm K \overline{K}$ threshold the energy $E_0=2 m_K$, $k_1=q$ and $\Gamma_0=g_1 q$. Above the threshold $E=2 (k_2^2+m_K^2)^{\frac{1}{2}}$. From (\ref{Flatte}) we see that, apart from the normalization factors, the Flatt\'{e} production amplitudes (\ref{Flatte}) depend on three real parameters: the resonance mass $M_R$ and two coupling constants $g_1$ and $g_2$. One can, however, easily demonstrate that in the presence of an inelastic coupling to the $\pi \eta$ channel the three parameters are not sufficient to describe the behaviour of the $\rm K \overline{K}$ elastic scattering amplitude $T_{22}$ near its threshold. If this coupling is switched off then the following threshold formula, called the effective range approximation, can be used: \begin{equation} T_{22}=\frac{1}{k_2 \cot \delta_2 -i k_2},~~~~~~~~~~~~ k_2 \cot \delta_2 \approx\frac{1}{a} +\frac{1}{2} r~ k_2^2. \label{eff1} \end{equation} Here $\delta_2$ is the $\rm K \overline{K}$ phase shift and the two real parameters $a$ and $r$ denote the $\rm K \overline{K}$ scattering length and the effective range, respectively. However, if the interchannel coupling is nonzero then one has to introduce the inelasticity parameter $\eta$ and to replace the real parameters $a$ and $r$ by the complex ones, $A$ and $R$: \begin{equation} T_{22}=\frac{1}{2ik_2} ( \eta e^{2 i \delta_2} - 1) \approx \frac{1}{\frac{1}{A} - i~ k_2 +\frac{1}{2}~ R~ k_2^2}. \label{eff2} \end{equation} This means that one needs at least four real parameters to describe the $\rm K \overline{K}$ scattering near its threshold. These parameters should also appear in the formulae for the production amplitudes $A_1$ and $A_2$. This can be achieved by introducing a new complex constant $N$ in the denominator $W$ of the production amplitudes $A_i$ in (\ref{Flatte}): \begin{equation} W= M_R^2 - E^2-i M_R g_1 q -i M_R g_2 k_2 +N~k_2^2. \label{Wu} \end{equation} Dividing $W$ by the product $M_R g_2$ we get \begin{equation} \frac{W}{M_R g_2}= \frac{1}{A} - i k_2 +\frac{1}{2} R k_2^2, \label{WMR} \end{equation} where the inverse of the scattering length $A$ and the effective range $R$ are written in terms of the three Flatt\'e parameters $M_R, g_1$, $g_2$ and of the new parameter $N$: \begin{equation} \frac{1}{A}=\frac{M_R^2 - E_0^2}{M_R g_2}-i\frac{g_1}{g_2}~q,~~~~~~~~~ R=\frac{2 N-8}{M_R g_2}. \label{AR} \end{equation} In the Flatt\'{e} approximation $N=0$, hence $Re~ R=-8/(M_R g_2)$ and $Im~ R \equiv 0$. The zero value of the imaginary part of the effective range is an essential limitation of the Flatt\'{e} formula.
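To make the content of Eq.~(\ref{AR}) explicit, the short Python sketch below converts a set of Flatt\'e-type parameters $(M_R,g_1,g_2,N)$ into the complex scattering length $A$ and the effective range $R$. The coupling constants and the meson masses used here are illustrative placeholders only (natural units, with the conversion to fm done through $\hbar c$); they are not fitted values.
\begin{verbatim}
import numpy as np

hbarc = 197.327   # MeV fm

def two_body_momentum(E, m1, m2):
    """c.m. momentum of a two-particle state of invariant mass E (MeV)."""
    return np.sqrt((E**2 - (m1 + m2)**2) * (E**2 - (m1 - m2)**2)) / (2 * E)

def threshold_parameters(M_R, g1, g2, N=0.0,
                         m_pi=139.57, m_eta=547.86, m_K=495.0):
    """Scattering length A and effective range R from Eq. (AR);
    N = 0 reproduces the pure Flatte case with Im R = 0."""
    E0 = 2 * m_K                              # K Kbar threshold energy
    q = two_body_momentum(E0, m_pi, m_eta)    # pi-eta momentum at threshold
    inv_A = (M_R**2 - E0**2) / (M_R * g2) - 1j * (g1 / g2) * q
    R = (2 * N - 8) / (M_R * g2)
    return hbarc / inv_A, hbarc * R           # both converted to fm

A, R = threshold_parameters(M_R=980.0, g1=0.3, g2=0.5)   # placeholder inputs
print(f"A = {A.real:.2f} {A.imag:+.2f}i fm,  R = {R:.2f} fm")
\end{verbatim}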
\section{\bf Scattering Amplitudes} The first channel elastic scattering amplitude $T_{11}$ can be parameterized in terms of five parameters. The four of them: $Re A, Im A, Re R$ and $Im R$ are the same as those defined in (\ref{eff2}) for the second channel amplitude. The fifth parameter is the first channel phase shift $\delta_0$ determined at the $\rm K \overline{K}$ threshold. Using the two-channel unitarity one can derive the following approximate formula for $T_{11}$: \begin{equation} T_{11} \approx \frac{e^{i\delta_0}}{k_1} \frac{\sin \delta_0~ +~i~ Im~(e^{-i\delta_0}~A)~k_2~-~\frac{1}{2}~Im~(e^{-i\delta_0}~A~R)~ k_2^2}{1-~i~ A~k_2~+~\frac{1}{2}~A~R ~k_2^2}\cdot \label{T11new} \end{equation} Below the $\rm K \overline{K}$ threshold one has to replace $k_2$ by $ip_2$. In the Flatt\'{e} approximation $\delta_0$ equals to the phase of the complex scattering length $A$. The transition amplitude $T_{12}$ from the first to the second channel in the new approach is given by \begin{equation} T_{12} \approx \frac{1}{\sqrt{k_1}}~ e^{i \delta_0} \frac{\sqrt{Im~ A~ -~\frac{1}{2}~ |A|^2 ~Im~R~ k_2^2}}{1-~i~ A~k_2~+~\frac{1}{2}~A~R ~k_2^2}\cdot \label{T12new} \end{equation} Let us remark that all the three amplitudes, given by Eqs. (\ref{eff2}), (\ref{T11new}) and (\ref{T12new}), have a common denominator \begin{equation} D(k_2)=1-~i~ A~k_2~+~ \frac{1}{2}~A~R ~k_2^2 \cdot \label{D} \end{equation} The complex zeroes $z_{1,2}$ of $D(k_2)$ are related to $A$ and $R$ by \begin{equation} z_{1,2} = \frac{i}{R} \pm \sqrt{-\frac{1}{R^2}-\frac{2}{AR}}\cdot \label{z1z2} \end{equation} In the Flatt\'{e} approximation $Re~z_1~=- Re~z_2$ which limits possible positions of the amplitude poles at the energies $E_{1,2}=\sqrt{E_0^2+4z_{1,2}^2}$. These pole positions are the most essential quantities characterizing the resonances. \begin{figure}[pb] \includegraphics[height=.213\textheight]{delta12cz.eps}~~~ \includegraphics[angle=0,height=.215\textheight]{eta.eps} \vspace*{8pt} \caption{Comparison of phase shifts and inelasticities of the scattering amplitudes\label{f1}} \end{figure} \section{\bf Phenomenological Studies of the $a_0(980)$ Threshold Parameters} It is instructive to show numerical results concerning the threshold parameters of the $a_0(980)$ resonance, situated close to the $\rm K \overline{K}$ threshold. One can use a coupled channel formalism for the separable meson-meson interactions in two or three channels. It has been developed in Ref.~\refcite{LL} and further applied to study the $a_0$ resonances in the $\pi \eta$ and the $\rm K \overline{K}$ channels \cite{AFLL}. The model parameters were fixed using the data of the Crystal Barrel and of the E852 Collaborations. The following threshold parameters have been presently calculated: $Re~ A= 0.17$ fm, $Im ~ A=0.41$ fm, $Re ~R = -11.32$ fm, and $Im ~R = -3.18$ fm. Let us stress here that the imaginary part of the effective range is nonzero and cannot be neglected. In Fig. 1 we see differences between the phase shifts and inelasticities calculated using the theoretical model of Ref.~\refcite{AFLL} and the Flatt\'e approximation in which $Im R \equiv 0$. They are sufficiently large to create deviations of the order of hundred percent between the squares of the moduli of the scattering amplitudes $|T_{11}|^2$ or $|T_{22}|^2$ already at the distance of 50 MeV away from the $\rm K \overline{K}$ threshold (see Fig. 1 of Ref.~\refcite{Lizbona}). 
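Given these threshold parameters, the pole positions that follow from Eq.~(\ref{z1z2}) can be evaluated directly, as in the sketch below. The kaon mass entering $E_0=2m_K$ is set to an isospin-averaged illustrative value, which is an assumption of this example rather than an input quoted from the model of Ref.~\refcite{AFLL}.
\begin{verbatim}
import numpy as np

hbarc = 197.327                  # MeV fm
A = (0.17 + 0.41j) / hbarc       # scattering length, fm -> MeV^-1
R = (-11.32 - 3.18j) / hbarc     # effective range,  fm -> MeV^-1
E0 = 2 * 495.0                   # K Kbar threshold, illustrative m_K (MeV)

# complex zeroes of D(k2) = 1 - i A k2 + (1/2) A R k2^2, Eq. (z1z2)
root = np.sqrt(-1 / R**2 - 2 / (A * R))
for z in (1j / R + root, 1j / R - root):
    E = np.sqrt(E0**2 + 4 * z**2)       # pole energy E = (E0^2 + 4 z^2)^(1/2)
    print(f"z = {z.real:7.1f} {z.imag:+7.1f}i MeV   "
          f"E = {E.real:7.1f} {E.imag:+6.1f}i MeV")
\end{verbatim}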
Another discrepancy between the Flatt\'e formula and the above model is a shift of the pole complex energies $E_1$ and $E_2$. For example, the $Re E_1$ shift is larger than 10 MeV and exceeds the experimental energy resolution of many present experiments (see Fig. 2 of Ref.~\refcite{Lizbona}). \section{\bf New Formulae for the Production Amplitudes} Let us assume that there are no initial state strong interactions. Then one can generalize the Watson theorem, which is satisfied below the inelastic threshold \begin{equation} Im~ A_1 = k_1~ T_{11}~ A_1^*~, \label{Watson} \end{equation} to a form valid for the two coupled channels above this threshold: \begin{eqnarray} Im~ A_1= k_1~ T_{11}~ A_1^*~+k_2~ T_{12}~ A_2^*,\\ Im~ A_2= k_2~ T_{22}~ A_2^*~+~k_1 ~T_{21}~ A_1^*. \label{Watson2} \end{eqnarray} One can propose the following new parameterization of the production amplitudes: \begin{equation} A_1= f_1~T_{11}~+~f_2~T_{12},~~~~~A_2= f_1~T_{12}~+~f_2~T_{22}~. \label{A1A2} \end{equation} Here $f_1$, $f_2$ are real functions of energy (or momentum $k_2$) which near the threshold can be approximated by \begin{equation} f_1 \approx n_1+\beta_1 k_2^2,~~~ f_2 \approx n_2+\beta_2 k_2^2. \label{n1n2} \end{equation} In (\ref{n1n2}) $n_1$ and $n_2$ are normalization constants, $\beta_1$ and $\beta_2$ are additional real coefficients. The formula given by (\ref{Flatte}) is finally replaced by (\ref{A1A2}). The parameters to be fitted from experiments are: complex $A,R$ and real $\delta_0,n_1,n_2,\beta_1,\beta_2$. A generalization of the above formulae to a case where the particle masses in the second channel are different ($m_a$ and $m_b$) is simple. Above the inelastic threshold one defines the momentum $ k_2=\frac{1}{2E} \{[E^2-(m_{a}+m_{b})^2][E^2-(m_{a}-m_{b})^2]\}^{\frac{1}{2}}$. Below the threshold (for $E< E_0=m_{a}+m_{b}$) $k_2$ is replaced by $i p_2$, where $p_2=\frac{1}{2E} \{[(m_{a}+m_{b})^2-E^2] [E^2-(m_{a}-m_{b})^2]\}^{\frac{1}{2}}$. The new formulae can be applied in numerous analyses of present and future experiments (for example Belle, BaBar, CLEO, BES, KLOE, COSY, Tevatron, LHCb, JLab, PANDA ...) and also to reanalyse older experiments in order to update our information on meson spectroscopy and on reaction mechanisms.\\
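As a compact illustration of the parameterization proposed above, the sketch below gathers Eqs.~(\ref{eff2}), (\ref{T11new}), (\ref{T12new}) and (\ref{A1A2})--(\ref{n1n2}) into a single function. All numerical values shown are placeholders, and consistent units are assumed (momenta in inverse femtometres if $A$ and $R$ are given in femtometres); the sketch is only meant to show how the amplitudes are assembled from the fitted parameters, not to reproduce any fit.
\begin{verbatim}
import numpy as np

def amplitudes(k1, k2, A, R, delta0, n1, n2, b1, b2):
    """Near-threshold amplitudes and production amplitudes A1, A2
    above the inelastic threshold (k1, k2 real and positive)."""
    D = 1 - 1j * A * k2 + 0.5 * A * R * k2**2            # common denominator
    T22 = 1.0 / (1.0 / A - 1j * k2 + 0.5 * R * k2**2)
    T11 = (np.exp(1j * delta0) / k1) * (
          np.sin(delta0)
          + 1j * np.imag(np.exp(-1j * delta0) * A) * k2
          - 0.5 * np.imag(np.exp(-1j * delta0) * A * R) * k2**2) / D
    T12 = (np.exp(1j * delta0) / np.sqrt(k1)) * np.sqrt(
          np.imag(A) - 0.5 * abs(A) ** 2 * np.imag(R) * k2**2 + 0j) / D
    f1, f2 = n1 + b1 * k2**2, n2 + b2 * k2**2             # real functions
    return f1 * T11 + f2 * T12, f1 * T12 + f2 * T22       # A1, A2

# single-point evaluation with placeholder parameters
A1, A2 = amplitudes(k1=1.2, k2=0.3, A=0.2 + 0.4j, R=-11.0 - 3.0j,
                    delta0=0.6, n1=1.0, n2=0.5, b1=0.0, b2=0.0)
print(A1, A2)
\end{verbatim}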
\section{Introduction, Definitions and Notations} \bigskip Recently, $q$-analysis has served as a structure between mathematics and physics. Therefore, there is a significant increase of activity in the area of the $q$-analysis due to applications of the $q -analysis in mathematics, statistics and physics. $p$-adic numbers also play a vital and important role in mathematics. $p -adic numbers were invented by the German mathematician Kurt Hensel \cit {Hensel}, around the end of the nineteenth century. In spite of their being already one hundred years old, these numbers are still today enveloped in an aura of mystery within the scientific community. The $p$-adic $q$-integral (or $q$-Volkenborn integral) are originally constructed by Kim \cite{Kim 4}. The $q$-Volkenborn integral is used in Mathematical Physics for example the functional equation of the $q$-Zeta function, the $q$-Stirling numbers and $q$-Mahler theory of integration with respect to the ring \mathbb{Z} _{p}$ together with Iwasawa's $p$-adic $q$-$L$ function. T. Kim, by using $q$-Volkenborn integral, introduced a novel Lebesgue-Radon-Nikodym type theorem. He is given some interesting properties concerning Lebesgue-Radon-Nikodym theorem. After, Jang \cite{Jang} defined q $-extension of Hardy-Littlewood-type maximal operator by means of $q -Volkenborn integral on \mathbb{Z} _{p}.$ Next, Kim, Choi and Kim \cite{Kim 7} defined weighted Lebesgue-Radon-Nikodym theorem. They also gave some interesting properties of this type theorem. By the same motivation of the above studies, in this paper, we construct weighted $q$-Hardy-littlewood-type maximal operator by the means of $p$-adic $q$-integral on \mathbb{Z} _{p}$. We also give some interesting properties of this type operator. Imagine that $p$ be a fixed prime number. Let \mathbb{Q} _{p}$ be the field of $p$-adic rational numbers and let \mathbb{C} _{p}$ be the completion of algebraic closure of \mathbb{Q} _{p}$. Thus, \begin{equation*} \mathbb{Q} _{p}=\left\{ x=\sum_{n=-k}^{\infty }a_{n}p^{n}:0\leq a_{n}\leq p-1\right\} . \end{equation*} Then \mathbb{Z} _{p}$ is an integral domain, which is defined by \begin{equation*} \mathbb{Z} _{p}=\left\{ x=\sum_{n=0}^{\infty }a_{n}p^{n}:0\leq a_{n}\leq p-1\right\} , \end{equation*} or \begin{equation*} \mathbb{Z} _{p}=\left\{ x\in \mathbb{Q} _{p}:\left\vert x\right\vert _{p}\leq 1\right\} . \end{equation*} In this paper, we assume that $q\in \mathbb{C} _{p}$ with $\left\vert 1-q\right\vert _{p}<1$ as an indeterminate. The $p$-adic absolute value $\left\vert .\right\vert _{p}$, is normally defined by \begin{equation*} \left\vert x\right\vert _{p}=\frac{1}{p^{r}}\text{,} \end{equation*} where $x=p^{r}\frac{s}{t}$ with $\left( p,s\right) =\left( p,t\right) =\left( s,t\right) =1$ and $r\in \mathbb{Q} $. A $p$-adic Banach space $B$ is a \mathbb{Q} _{p}$-vector space with a lattice $B^{0}$ ( \mathbb{Z} _{p}$-module) separated and complete for $p$-adic topology, ie., \begin{equation*} B^{0}\simeq \lim_{\overleftarrow{n\in \mathbb{N} }}B^{0}/p^{n}B^{0}\text{.} \end{equation*} For all $x\in B$, there exists $n\in \mathbb{Z} $, such that $x\in p^{n}B^{0}$. 
Define \begin{equation*} v_{B}\left( x\right) =\sup_{n\in \mathbb{N} \cup \left\{ +\infty \right\} }\left\{ n:x\in p^{n}B^{0}\right\} \text{.} \end{equation*} It satisfies the following properties \begin{eqnarray*} v_{B}\left( x+y\right) &\geq &\min \left( v_{B}\left( x\right) ,v_{B}\left( y\right) \right) \text{,} \\ v_{B}\left( \beta x\right) &=&v_{p}\left( \beta \right) +v_{B}\left( x\right) \text{, if }\beta \in \mathbb{Q} _{p}\text{.} \end{eqnarray*} Then, $\left\Vert x\right\Vert _{B}=p^{-v_{B}\left( x\right) }$ defines a norm on $B,$ such that $B$ is complete for $\left\Vert .\right\Vert _{B}$ and $B^{0}$ is the unit ball. A measure on \mathbb{Z} _{p}$ with values in a $p$-adic Banach space $B$ is a continuous linear ma \begin{equation*} f\mapsto \int f\left( x\right) \mu =\int_ \mathbb{Z} _{p}}f\left( x\right) \mu \left( x\right) \end{equation*} from $C^{0}\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $, (continuous function on \mathbb{Z} _{p}$) to $B$. We know that the set of locally constant functions from \mathbb{Z} _{p}$ to \mathbb{Q} _{p}$ is dense in $C^{0}\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $ so. Explicitly, for all $f\in C^{0}\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $, the locally constant functions \begin{equation*} f_{n}=\sum_{i=0}^{p^{n}-1}f\left( i\right) 1_{i+p^{n \mathbb{Z} _{p}}\rightarrow \text{ }f\text{ in }C^{0}\text{.} \end{equation*} Now if ~$\mu \in \boldsymbol{D}_{0}\left( \mathbb{Z} _{p} \mathbb{Q} _{p}\right) $, set $\mu \left( i+p^{n \mathbb{Z} _{p}\right) =\int_ \mathbb{Z} _{p}}1_{i+p^{n \mathbb{Z} _{p}}\mu $. Then $\int_ \mathbb{Z} _{p}}f\mu $ is given by the following \textquotedblleft Riemann sums\textquotedblright \begin{equation*} \int_ \mathbb{Z} _{p}}f\mu =\lim_{n\rightarrow \infty }\sum_{i=0}^{p^{n}-1}f\left( i\right) \mu \left( i+p^{n \mathbb{Z} _{p}\right) \text{.} \end{equation*} T. Kim defined $\mu _{q}$ as follows \begin{equation*} \mu _{q}\left( \xi +dp^{n \mathbb{Z} _{p}\right) =\frac{q^{\xi }}{\left[ dp^{n}\right] _{q}} \end{equation*} and this can be extended to a distribution on \mathbb{Z} _{p}$. This distribution yields an integral in the case $d=1$. So, $q$-Volkenborn integral was defined by T. Kim as follows \begin{eqnarray} &&I_{q}\left( f\right) =\int_ \mathbb{Z} _{p}}f\left( \xi \right) d\mu _{q}\left( \xi \right) \label{equation 5} \\ &=&\lim_{n\rightarrow \infty }\frac{1}{\left[ p^{n}\right] _{q}}\sum_{\xi =0}^{p^{n}-1}f\left( \xi \right) q^{\xi }\text{, (for details, see \cite{Kim 4}, \cite{Kim 5}). } \notag \end{eqnarray} Where $\left[ x\right] _{q}$ is a $q$-extension of $x$ which is defined b \begin{equation*} \left[ x\right] _{q}=\frac{1-q^{x}}{1-q}\text{,} \end{equation*} note that $\lim_{q\rightarrow 1}\left[ x\right] _{q}=x$ cf. \cite{Kim 2}, \cite{Kim 3}, \cite{Kim 4}, \cite{Kim 5}, \cite{Jang}. Let $d$ be a fixed positive integer with $\left( p,d\right) =1$. We now se \begin{eqnarray*} X &=&X_{d}=\lim_{\overleftarrow{n} \mathbb{Z} /dp^{n \mathbb{Z} , \\ X_{1} &= \mathbb{Z} _{p}, \\ X^{\ast } &=&\underset{\underset{\left( a,p\right) =1}{0<a<dp}}{\cup }a+d \mathbb{Z} _{p}, \\ a+dp^{n \mathbb{Z} _{p} &=&\left\{ x\in X\mid x\equiv a\left( \func{mod}p^{n}\right) \right\} , \end{eqnarray*} where $a\in \mathbb{Z} $ satisfies the condition $0\leq a<dp^{n}$. For $f\in UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $ \begin{equation*} \int_ \mathbb{Z} _{p}}f\left( x\right) d\mu _{q}\left( x\right) =\int_{X}f\left( x\right) d\mu _{q}\left( x\right) , \end{equation*} (for details, see \cite{Kim 8}). 
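Purely as an illustration of the defining ``Riemann sums'' in (\ref{equation 5}), the following Python sketch evaluates the truncated sum $\frac{1}{\left[ p^{n}\right] _{q}}\sum_{\xi =0}^{p^{n}-1}f\left( \xi \right) q^{\xi }$ with exact rational arithmetic, for a rational $q$ satisfying $\left\vert 1-q\right\vert _{p}<1$ (here $q=1+p$). The limit in (\ref{equation 5}) is of course a $p$-adic one, so these finite sums only illustrate the definition; no archimedean convergence is claimed.
\begin{verbatim}
from fractions import Fraction

def q_bracket(x, q):
    """[x]_q = (1 - q^x) / (1 - q)."""
    return (1 - q**x) / (1 - q)

def q_volkenborn_sum(f, p, n, q):
    """Truncated Riemann sum (1/[p^n]_q) * sum_{x < p^n} f(x) q^x."""
    total = sum(Fraction(f(x)) * q**x for x in range(p**n))
    return total / q_bracket(p**n, q)

p = 3
q = Fraction(1 + p)              # |1 - q|_p = |p|_p < 1
for n in (1, 2, 3):
    # for f = 1 the sum equals 1 exactly, since sum_{x<p^n} q^x = [p^n]_q
    print(n, q_volkenborn_sum(lambda x: 1, p, n, q),
             q_volkenborn_sum(lambda x: x, p, n, q))
\end{verbatim}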
By the meaning of $q$-Volkenborn integral, we consider below strongly $p -adic $q$-invariant distribution $\mu _{q}$ on \mathbb{Z} _{p}$ in the form \begin{equation*} \left\vert \left[ p^{n}\right] _{q}\mu _{q}\left( a+p^{n \mathbb{Z} _{p}\right) -\left[ p^{n+1}\right] _{q}\mu _{q}\left( a+p^{n+1 \mathbb{Z} _{p}\right) \right\vert <\delta _{n}, \end{equation*} where $\delta _{n}\rightarrow 0$ as $n\rightarrow \infty $ and $\delta _{n}$ is independent of $a$. Let $f\in UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $, for any $a\in \mathbb{Z} _{p}$, we assume that the weight function $\omega \left( x\right) $ is defined by $\omega \left( x\right) =\omega ^{x}$ where $\omega \in \mathbb{C} _{p}$ with $\left\vert 1-\omega \right\vert _{p}<1$. We define the weighted measure on \mathbb{Z} _{p}$ as follows \begin{equation} \mu _{f,q}^{\left( \omega \right) }\left( a+p^{n \mathbb{Z} _{p}\right) =\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\xi }f\left( \xi \right) d\mu _{q}\left( \xi \right) \label{equation 2} \end{equation} where the integral is the $q$-Volkenborn integral. By (\ref{equation 2}), we easily note that $\mu _{f,q}^{\left( \omega \right) }$ is a strongly weighted measure on \mathbb{Z} _{p}$. That is \begin{eqnarray*} &&\left\vert \left[ p^{n}\right] _{q}\mu _{f,q}^{\left( \omega \right) }\left( a+p^{n \mathbb{Z} _{p}\right) -\left[ p^{n+1}\right] _{q}\mu _{f,q}^{\left( \omega \right) }\left( a+p^{n+1 \mathbb{Z} _{p}\right) \right\vert _{p} \\ &=&\left\vert \sum_{x=0}^{p^{n}-1}\omega ^{x}f\left( x\right) q^{x}-\sum_{x=0}^{p^{n}}\omega ^{x}f\left( x\right) q^{x}\right\vert _{p} \\ &\leq &\left\vert \frac{f\left( p^{n}\right) \omega ^{p^{n}}q^{p^{n}}}{p^{n} \right\vert _{p}\left\vert p^{n}\right\vert _{p} \\ &\leq &Cp^{-n} \end{eqnarray*} Thus, we get the following proposition. \begin{proposition} For $f,g\in UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $, we have \begin{equation*} \mu _{\alpha f+\beta g,q}^{\left( \omega \right) }\left( a+p^{n \mathbb{Z} _{p}\right) =\alpha \mu _{f,q}^{\left( \omega \right) }\left( a+p^{n \mathbb{Z} _{p}\right) +\beta \mu _{g,q}^{\left( \omega \right) }\left( a+p^{n \mathbb{Z} _{p}\right) \text{.} \end{equation* where $\alpha ,\beta $ are positive constants. Moreover \begin{equation*} \left\vert \left[ p^{n}\right] _{q}\mu _{f,q}^{\left( \omega \right) }\left( a+p^{n \mathbb{Z} _{p}\right) -\left[ p^{n+1}\right] _{q}\mu _{f,q}^{\left( \omega \right) }\left( a+p^{n+1 \mathbb{Z} _{p}\right) \right\vert \leq Cp^{-n} \end{equation* where $C$ is positive constant. \end{proposition} Let $UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $ be the space of uniformly differentiable functions on \mathbb{Z} _{p}$ with supnorm \begin{equation*} \left\Vert f\right\Vert _{\infty }=\underset{x\in \mathbb{Z} _{p}}{\sup }\left\vert f\left( x\right) \right\vert _{p}. \end{equation*} The difference quotient $\Delta _{1}f$ of $f$ is the function of two variables given by \begin{equation*} \Delta _{1}f\left( m,x\right) =\frac{f\left( x+m\right) -f\left( x\right) }{ },\text{ for all }x\text{, }m\in \mathbb{Z} _{p}\text{, }m\neq 0\text{.} \end{equation*} A function $f \mathbb{Z} _{p}\rightarrow \mathbb{C} _{p}$ is said to be a Lipschitz function if there exists a constant $M>0$ \left( \text{the Lipschitz constant of }f\right) $ such tha \begin{equation*} \left\vert \Delta _{1}f\left( m,x\right) \right\vert \leq M\text{ for all m\in \mathbb{Z} _{p}\backslash \left\{ 0\right\} \text{ and }x\in \mathbb{Z} _{p}. 
\end{equation*} The \mathbb{C} _{p}$ linear space consisting of all Lipschitz function is denoted by Lip\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $. This space is a Banach space with the respect to the norm \left\Vert f\right\Vert _{1}=\left\Vert f\right\Vert _{\infty }\tbigvee \left\Vert \Delta _{1}f\right\Vert _{\infty }$ (for more informations, see \cite{Kim 1}, \cite{Kim 2}, \cite{Kim 3}, \cite{Kim 4}, \cite{Kim 5}, \cit {Kim 6}, \cite{Jang}). The main aim of this paper is to define weighted $q -extension of Hardy Littlewood type maximal operator. Moreover, we show the boundedness of the weighted $q$-Hardy-littlewood-type maximal operator in the $p$-adic integer ring. \section{\qquad The weighted $q$-Hardy-littlewood-type maximal operator} In view of (\ref{equation 2}) and the definition of $p$-adic $q$-integral on \mathbb{Z} _{p}$, we now start with the following theorem. \begin{theorem} Let $\mu _{q}^{\left( w\right) }$ be a strongly $p$-adic $q$-invariant in the $p$-adic integer ring and $f\in UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $. Then for any $n\in \mathbb{Z} $ and any $\xi \in \mathbb{Z} _{p}$, we have \end{theorem} $(1)$ $\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\frac{\xi }{p^{n}}}f\left( \xi \right) q^{-\frac{\xi }{p^{n} }d\mu _{q^{p^{-n}}}\left( \xi \right) =\frac{\omega ^{\frac{a}{p^{n}}}} \left[ p^{n}\right] _{q^{p^{-n}}}}\int_ \mathbb{Z} _{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) q^{-\xi }d\mu _{q}\left( \xi \right) ,$ $(2)$ $\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\frac{\xi }{p^{n}}}d\mu _{q^{p^{-n}}}\left( \xi \right) =\frac \left( 1-q\right) \omega ^{\frac{a}{p^{n}}}q^{\frac{a}{p^{n}}}}{\left( 1-\omega q\right) \left[ p^{n}\right] _{q^{p^{-n}}}}\left( 1+\frac{\log \omega }{\log q}\right) .$ \begin{proof} (1) We see use of (\ref{equation 5}) and (\ref{equation 2} \begin{eqnarray*} &&\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\frac{\xi }{p^{n}}}f\left( \xi \right) q^{-\frac{\xi }{p^{n} }d\mu _{q^{p^{-n}}}\left( \xi \right) \\ &=&\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m+n}\right] _{q^{p^{-n}}} \sum_{\xi =0}^{p^{m}-1}\omega ^{\frac{a+p^{n}\xi }{p^{n}}}f\left( a+p^{n}\xi \right) q^{-\frac{a+p^{n}\xi }{p^{n}}}q^{\frac{a+p^{n}\xi }{p^{n}}} \\ &=&\omega ^{\frac{a}{p^{n}}}\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{n \right] _{q^{p^{-n}}}\left[ p^{m}\right] _{q}}\sum_{\xi =0}^{p^{m}-1}\omega ^{\xi }q^{-\xi }f\left( a+p^{n}\xi \right) q^{\xi } \\ &=&\frac{\omega ^{\frac{a}{p^{n}}}}{\left[ p^{n}\right] _{q^{p^{-n}}}}\int_ \mathbb{Z} _{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) q^{-\xi }d\mu _{q}\left( \xi \right) . 
\end{eqnarray*} (2) By the same method of (1), we easily see the followin \begin{eqnarray*} &&\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\frac{\xi }{p^{n}}}d\mu _{q^{p^{-n}}}\left( \xi \right) \\ &=&\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m+n}\right] _{q^{p^{-n}}} \sum_{\xi =0}^{p^{m}-1}\omega ^{\frac{a+\xi p^{n}}{p^{n}}}q^{\frac{a+\xi p^{n}}{p^{n}}} \\ &=&\frac{\omega ^{\frac{a}{p^{n}}}q^{\frac{a}{p^{n}}}}{\left[ p^{n}\right] _{q^{p^{-n}}}}\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m}\right] _{q} \sum_{\xi =0}^{p^{m}-1}\omega ^{\xi }q^{\xi } \\ &=&\frac{\omega ^{\frac{a}{p^{n}}}q^{\frac{a}{p^{n}}}\left( 1-q\right) } \left[ p^{n}\right] _{q^{p^{-n}}}\left( 1-\omega q\right) \lim_{m\rightarrow \infty }\frac{1-\left( \omega q\right) ^{p^{m}}} 1-q^{p^{m}}} \\ &=&\frac{\omega ^{\frac{a}{p^{n}}}q^{\frac{a}{p^{n}}}\left( 1-q\right) } \left[ p^{n}\right] _{q^{p^{-n}}}\left( 1-\omega q\right) }\left( 1+\frac \log \omega }{\log q}\right) \end{eqnarray*} Since $\underset{m\rightarrow \infty }{\lim }q^{p^{m}}=1$ for $\left\vert 1-q\right\vert _{p}<1,$ our assertion follows. \end{proof} Now, we are ready to give definition of weighted $q$-extension of Hardy-littlewood-type maximal operator related to $p$-adic $q$-integral on \mathbb{Z} _{p}$ with a strong $p$-adic $q$-invariant distribution $\mu _{q}$ in the $p -adic integer ring. \begin{definition} Let $\mu _{q}^{\left( \omega \right) }$ be a strongly $p$-adic $q$-invariant distribution in the $p$-adic integer ring and $f\in UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $. Then weighted $q$-extension of the Hardy-littlewood-type maximal operator with respect to $p$-adic $q$-integral on $a+p^{n \mathbb{Z} _{p}$ is defined by \begin{equation*} M_{p,q}^{\left( \omega \right) }f\left( a\right) =\underset{n\in \mathbb{Z} }{\sup }\frac{1}{\mu _{1,q^{p^{-n}}}^{\left( w^{p^{-n}}\right) }\left( x+p^{n \mathbb{Z} _{p}\right) }\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\frac{x}{p^{n}}}q^{-\frac{x}{p^{n}}}f\left( x\right) d\mu _{q^ \frac{1}{p^{n}}}}\left( x\right) \end{equation* for all $a\in \mathbb{Z} _{p}$. \end{definition} We recall that famous Hardy-littlewood maximal operator $M_{\mu }$ is defined by \begin{equation} M_{\mu }f\left( a\right) =\underset{a\in Q}{\sup }\frac{1}{\mu \left( Q\right) }\int_{Q}\left\vert f\left( x\right) \right\vert d\mu \left( x\right) , \label{equation 3} \end{equation} where $f \mathbb{R} ^{k}\rightarrow \mathbb{R} ^{k}$ is a locally bounded Lebesgue measurable function, $\mu $ is a Lebesgue measure on $\left( -\infty ,\infty \right) $ and the supremum is taken over all cubes $Q$ which are parallel to the coordinate axes. Note that the boundedness of the Hardy-Littlewood maximal operator serves as one of the most important tools used in the investigation of the properties of variable exponent spaces (see \cite{Jang}). The essential aim of Theorem 1 is to deal with the weighted $q$-extension of the classical Hardy-Littlewood maximal operator in the space of $p$-adic Lipschitz functions on \mathbb{Z} _{p}$ and to find the boundedness of them. By the meaning of Definition 1, we get the following theorem. 
\begin{theorem} Let $f\in UD\left( \mathbb{Z} _{p} \mathbb{C} _{p}\right) $ and $x\in \mathbb{Z} _{p}$, we get \end{theorem} \textit{(1)} $M_{p,q}^{\left( \omega \right) }f\left( a\right) =\frac 1-\omega q}{1-q}\frac{\log q}{\log \left( \omega q\right) }\underset{n\in \mathbb{Z} }{\sup }q^{-\frac{x}{p^{n}}}\int_ \mathbb{Z} _{p}}\omega ^{\xi }f\left( x+p^{n}\xi \right) q^{-\xi }d\mu _{q}\left( \xi \right) $, \textit{(2)} $\left\vert M_{p,q}^{\left( \omega \right) }f\left( a\right) \right\vert _{p}\leq \underset{n\in \mathbb{Z} }{\sup }\left\vert \frac{1-wq}{q^{\frac{x}{p^{n}}}}\frac{\log q}{\log \left( \omega q\right) }\right\vert _{p}\left\Vert f\right\Vert _{1}\left\Vert \left( \frac{q}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}}$, \textit{where} $\left\Vert \left( \frac{q}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}}=\int_ \mathbb{Z} _{p}}\left( \frac{q}{\omega }\right) ^{-\xi }d\mu _{q}\left( \xi \right) $. \begin{proof} (1) Because of Theorem 1 and Definition 1, we se \begin{eqnarray*} M_{p,q}^{\left( \omega \right) }f\left( a\right) &=&\underset{n\in \mathbb{Z} }{\sup }\frac{1}{\mu _{1,q^{p^{-n}}}^{\left( w^{p^{-n}}\right) }\left( x+p^{n \mathbb{Z} _{p}\right) }\int_{a+p^{n \mathbb{Z} _{p}}\omega ^{\frac{x}{p^{n}}}q^{-\frac{x}{p^{n}}}f\left( x\right) d\mu _{q^ \frac{1}{p^{n}}}}\left( x\right) \\ &=&\underset{n\in \mathbb{Z} }{\sup }\frac{\left( 1-\omega q\right) \left[ p^{n}\right] _{q^{p^{-n}}}\log q}{\left( 1-q\right) \omega ^{\frac{x}{p^{n}}}q^{\frac{x}{p^{n}}}\log \left( \omega q\right) }\frac{\omega ^{\frac{x}{p^{n}}}}{\left[ p^{n}\right] _{q^{p^{-n}}}}\int_ \mathbb{Z} _{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) q^{-\xi }d\mu _{q}\left( \xi \right) \\ &=&\frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega q\right) }\underset{r\in \mathbb{Z} }{\sup }\frac{1}{q^{\frac{x}{p^{n}}}}\int_ \mathbb{Z} _{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) q^{-\xi }d\mu _{q}\left( \xi \right) \end{eqnarray*} (2) On account of (1), we can derive the followin \begin{eqnarray*} \left\vert M_{p,q}^{\left( \omega \right) }f\left( a\right) \right\vert _{p} &=&\left\vert \frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega q\right) }\underset{n\in \mathbb{Z} }{\sup }\frac{1}{q^{\frac{x}{p^{n}}}}\int_ \mathbb{Z} _{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) q^{-\xi }d\mu _{q}\left( \xi \right) \right\vert _{p} \\ &\leq &\left\vert \frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega q\right) }\right\vert _{p}\underset{n\in \mathbb{Z} }{\sup }\left\vert q^{-\frac{x}{p^{n}}}\int_ \mathbb{Z} _{p}}f\left( a+p^{n}\xi \right) \left( \frac{q}{\omega }\right) ^{-\xi }d\mu _{q}\left( \xi \right) \right\vert _{p} \\ &\leq &\left\vert \frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega q\right) }\right\vert _{p}\underset{n\in \mathbb{Z} }{\sup }\left\vert q^{-\frac{x}{p^{n}}}\right\vert _{p}\int_ \mathbb{Z} _{p}}\left\vert f\left( a+p^{n}\xi \right) \right\vert _{p}\left\vert \left( \frac{q}{\omega }\right) ^{-\xi }\right\vert _{p}d\mu _{q}\left( \xi \right) \\ &\leq &\left\vert \frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega q\right) }\right\vert _{p}\underset{n\in \mathbb{Z} }{\sup }\left\vert q^{-\frac{x}{p^{n}}}\right\vert _{p}\left\Vert f\right\Vert _{1}\int_ \mathbb{Z} _{p}}\left\vert \left( \frac{q}{\omega }\right) ^{-\xi }\right\vert _{p}d\mu _{q}\left( \xi \right) \\ &=&\left\vert \frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega 
q\right) }\right\vert _{p}\underset{n\in \mathbb{Z} }{\sup }\left\vert q^{-\frac{x}{p^{n}}}\right\vert _{p}\left\Vert f\right\Vert _{1}\left\Vert \left( \frac{q}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}}. \end{eqnarray*} Thus, we complete the proof of the theorem. \end{proof} We note that Theorem 2 (2) shows the supnorm inequality for the weighted $q$-Hardy-Littlewood-type maximal operator in the $p$-adic integer ring; in other words, Theorem 2 (2) shows the following inequality \begin{equation} \left\Vert M_{p,q}^{\left( \omega \right) }f\right\Vert _{\infty }=\underset{x\in \mathbb{Z} _{p}}{\sup }\left\vert M_{p,q}^{\left( \omega \right) }f\left( x\right) \right\vert _{p}\leq K\left\Vert f\right\Vert _{1}\left\Vert \left( \frac{q}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}} \label{equation 4} \end{equation} where $K=\left\vert \frac{\left( 1-\omega q\right) \log q}{\left( 1-q\right) \log \left( \omega q\right) }\right\vert _{p}\underset{n\in \mathbb{Z} }{\sup }\left\vert q^{-\frac{x}{p^{n}}}\right\vert _{p}$. By equation (\ref{equation 4}), we get the following corollary, which states the boundedness of the weighted $q$-Hardy-Littlewood-type maximal operator in the $p$-adic integer ring. \begin{corollary} $M_{p,q}^{\left( \omega \right) }$ is a bounded operator from $UD\left( \mathbb{Z} _{p},\mathbb{C} _{p}\right) $ into $L^{\infty }\left( \mathbb{Z} _{p},\mathbb{C} _{p}\right) $, where $L^{\infty }\left( \mathbb{Z} _{p},\mathbb{C} _{p}\right) $ is the space of all $p$-adic supnorm-bounded functions with the norm \begin{equation*} \left\Vert f\right\Vert _{\infty }=\underset{x\in \mathbb{Z} _{p}}{\sup }\left\vert f\left( x\right) \right\vert _{p}\text{,} \end{equation*} for all $f\in L^{\infty }\left( \mathbb{Z} _{p},\mathbb{C} _{p}\right) $. \end{corollary}
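For orientation only, the real-variable prototype recalled in (\ref{equation 3}) can be sketched in one dimension as follows. This brute-force Python illustration of the classical uncentered Hardy-Littlewood maximal function on a uniform grid is included purely as a point of comparison; it is not an implementation of the $p$-adic weighted operator $M_{p,q}^{\left( \omega \right) }$ studied above.
\begin{verbatim}
import numpy as np

def hardy_littlewood_maximal(f, dx=1.0):
    """Uncentered maximal function of a sampled function f on a 1D grid:
    M f(a) = sup over intervals containing a of the average of |f|."""
    n = len(f)
    absf = np.abs(np.asarray(f, dtype=float))
    csum = np.concatenate(([0.0], np.cumsum(absf) * dx))   # prefix sums
    Mf = np.zeros(n)
    for a in range(n):                    # brute force over all intervals
        best = 0.0
        for i in range(a + 1):
            for j in range(a, n):
                avg = (csum[j + 1] - csum[i]) / ((j - i + 1) * dx)
                best = max(best, avg)
        Mf[a] = best
    return Mf

x = np.linspace(-3.0, 3.0, 61)
Mf = hardy_littlewood_maximal(np.exp(-x**2), dx=x[1] - x[0])
\end{verbatim}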
\section{Introduction} Top quarks are produced in pairs via the strong interaction either from the fusion of two incoming gluons or from the annihilation of an incoming quark and anti-quark at leading order (LO). The interference effects between the Born diagram and the box diagram at next-to-leading order (NLO), as well as between initial- and final-state radiation, correlate the direction of the produced top quarks (antiquarks) with the direction of the incoming quarks (antiquarks). At the LHC, quarks in the initial state are mostly valence quarks, while the initial-state antiquarks always come from the sea. As valence quarks on average carry a harder momentum spectrum than sea quarks, top quarks have a preference to be produced in forward directions. The rapidity distribution of top quarks is therefore predicted to be broader than that of the more centrally produced top antiquarks. The difference of the absolute values of the rapidity of the top quark and antiquark in an event, $\Delta|y|$, can be used to measure the asymmetry effect. \section{The $t\bar{t}$ charge asymmetry in lepton+jets events} The CMS collaboration\cite{Chatrchyan:2008aa} performs inclusive and differential measurements of the charge asymmetry in lepton+jets events using data corresponding to an integrated luminosity of 19.7 fb$^{-1}$~\cite{Khachatryan:2015oga}. The observed distributions of $\Delta|y|$ are corrected back to parton level by applying an unfolding technique to allow for a comparison of the measurement and the prediction from theory. The differential charge asymmetry is also measured as a function of kinematic variables of the $t\bar{t}$ system, the invariant mass $m_{t\bar{t}}$, the absolute value of the rapidity $|y_{t\bar{t}} |$, and the transverse momentum $p_{T,{t\bar{t}}}$ of the $t\bar{t}$ system. Figure~\ref{fig::semi} shows the differential results as a function of $m_{t\bar{t}}$. The inclusive measurement yields an asymmetry of A$_{C}$ = 0.0010 $\pm$ 0.0068(stat.)$ \pm$ 0.0037(syst.). As an alternative, the measurements are also performed in a fiducial phase space, yielding an integrated result of -0.0035 $ \pm$ 0.0072 (stat.) $\pm$ 0.0031 (syst.). The CMS collaboration also reports the most precise measurement of the inclusive charge asymmetry using a template fit to $\Upsilon_{t\bar{t}} = \tanh\Delta|y|$ as the sensitive variable~\cite{Khachatryan:2015mna}. The measured charge asymmetry is A$_C$ = 0.0033 $\pm$ 0.0026(stat.) $\pm$ 0.0033(syst.), which is consistent with SM predictions. \begin{figure}[htb] \begin{center} \subfloat[]{\includegraphics[width=0.48\textwidth , height=0.35\textwidth]{87.pdf}} \subfloat[]{\includegraphics[width=0.5\textwidth , height=0.38\textwidth]{154.pdf}} \caption{(a) Measured asymmetry as a function of $m_{t\bar{t}}$ in ATLAS~\cite{Aad:2015noh}, and (b) in CMS~\cite{Khachatryan:2015oga} using lepton+jets events, compared to predictions for the SM, as well as to the predictions of BSM in different sectors.\label{fig::semi}} \end{center} \end{figure} The ATLAS collaboration\cite{Aad:2008zzm} analyzes data corresponding to an integrated luminosity of 20.3 fb$^{-1}$~\cite{Aad:2015noh}. To measure the charge asymmetry, the observed $\Delta|y|$ distribution is unfolded to parton level using the Fully Bayesian Unfolding (FBU) technique. In addition to the inclusive measurement, differential measurements are performed as a function of the invariant mass, transverse momentum and longitudinal boost of the $t\bar{t}$ system.
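To make explicit how a $\Delta|y|$-based asymmetry is formed from a set of (unfolded) top-quark pairs, the toy Python sketch below applies the commonly used counting definition $A_C=[N(\Delta|y|>0)-N(\Delta|y|<0)]/[N(\Delta|y|>0)+N(\Delta|y|<0)]$ to randomly generated rapidities. The Gaussian widths are arbitrary illustration values and are not meant to model the measured spectra or any detector effects.
\begin{verbatim}
import numpy as np

def charge_asymmetry(y_top, y_antitop):
    """Counting asymmetry based on Delta|y| = |y_t| - |y_tbar|."""
    dy = np.abs(y_top) - np.abs(y_antitop)
    n_pos, n_neg = np.sum(dy > 0), np.sum(dy < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

rng = np.random.default_rng(1)
y_t    = rng.normal(0.0, 1.1, 100000)   # toy: slightly broader top rapidity
y_tbar = rng.normal(0.0, 1.0, 100000)   # toy antitop rapidity
print(f"A_C (toy) = {charge_asymmetry(y_t, y_tbar):.4f}")
\end{verbatim}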
The inclusive $t\bar{t}$ charge asymmetry is found to be A$_C$ = 0.009 $\pm$ 0.005. \section{The $t\bar{t}$ charge asymmetry in dilepton events} The CMS collaboration applies the same unfolding approach as in the semileptonic channel to measure the charge asymmetry in dilepton events with 19.7 fb$^{-1}$ of accumulated data~\cite{Khachatryan:2016ysn}. The main benefit of the lepton-based asymmetry measurement is the absence of distorting effects in the $t\bar{t}$ reconstruction, yielding a smaller uncertainty after the unfolding procedure. The lepton asymmetry is measured using $\Delta|\eta|$ of the two leptons. Figure~\ref{fig::dilep}(upper row) shows the charge asymmetry (right) and the lepton asymmetry (left) as a function of the invariant mass of the $t\bar{t}$ system. The measured inclusive asymmetries are A$_C$ = 0.011 $\pm$ 0.011(stat.) $\pm$ 0.007(syst.) and A$_{C}^{lep}$ = 0.003 $\pm$ 0.006(stat.) $\pm$ 0.003(syst.). \begin{figure}[ht] \begin{center} \subfloat[]{\includegraphics[width=0.4\textwidth , height=0.3\textwidth]{2D_AfbVsmtt_unfolded_lepChargeAsym_all}} \subfloat[]{\includegraphics[width=0.4\textwidth , height=0.31\textwidth]{2D_AfbVsmtt_unfolded_rapiditydiffMarco_all}}\\ \subfloat[]{\includegraphics[width=0.4\textwidth , height=0.3\textwidth]{fig_05b}} \subfloat[]{\includegraphics[width=0.4\textwidth , height=0.3\textwidth]{fig_06b}} \caption{(a,b): Dependence of the $t\bar{t}$ and leptonic charge asymmetries on $m_{t\bar{t}}$ (CMS)~\cite{Khachatryan:2016ysn}. (c,d): Summary of the measurements for the $t\bar{t}$ and leptonic asymmetries (ATLAS)~\cite{Aad:2016ove}.\label{fig::dilep}} \end{center} \end{figure} The ATLAS collaboration reports the inclusive and differential asymmetries as a function of the invariant mass, transverse momentum and longitudinal boost of the $t\bar{t}$ system (Figure~\ref{fig::dilep}, lower row)~\cite{Aad:2016ove}. The inclusive asymmetries are measured to be A$_C$ = 0.008 $\pm$ 0.006 for the lepton asymmetry and A$_C$ = 0.021 $\pm$ 0.016 for the $t\bar{t}$ asymmetry. \section{The $t\bar{t}$ charge asymmetry in boosted events} In addition to the previous measurements, the ATLAS Collaboration analyzes lepton+jets events with a boosted topology, where the hadronic top-quark decay is reconstructed as a single large-radius jet~\cite{Aad:2015lgx}. The charge asymmetry is measured in a fiducial region with $m_{t\bar{t}} > $ 0.75 TeV and an absolute rapidity difference within $-2 < |y_{t}| - |y_{\bar{t}}| < 2$. \begin{figure}[htb] \begin{center} \subfloat[]{\includegraphics[width=0.45\textwidth , height=0.4\textwidth]{summary_incl_diff_final}} \subfloat[]{\includegraphics[width=0.45\textwidth , height=0.4\textwidth]{AFB-ATLAS-2015-750final_06092}} \caption{(a) A summary of the charge asymmetry measurements in $m_{t\bar{t}}$ bins. (b) Predictions from a number of extensions of the SM for the forward-backward asymmetry, and high-mass charge asymmetry measurements at the LHC~\cite{Aad:2015lgx}.\label{fig::boosted}} \end{center} \end{figure} The measured inclusive asymmetry is A$_C$ = 0.042 $\pm$ 0.032(stat $\oplus$ syst). Results of a differential measurement performed in three $m_{t\bar{t}}$ bins are shown in Figure~\ref{fig::boosted}(left). Figure~\ref{fig::boosted} (right) compares the results of measurements of the asymmetry at the Tevatron and the LHC with their uncertainties. \section{The CP violation asymmetry at CMS} The first measurement of CP violation asymmetries in top quark pair production and decay is performed by CMS with lepton+jets events~\cite{Khachatryan:2016ngh}.
The analysis uses data from 8 TeV pp collisions corresponding to a total integrated luminosity of 19.7 fb$^{-1}$. Several new observables, as proposed in~\cite{Khachatryan:2016ngh}, are included to measure the CP violation asymmetries. These observables are odd under the T transformation, i.e., CP(O$_{i}$) = $-$O$_{i}$. The measured A$_{CP}$ values are found to be consistent with zero, within their uncertainties, in agreement with the standard model prediction, as shown in Figure~\ref{fig::CMSCP}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth , height=0.45\textwidth]{Figure_005} \caption{Summary of the uncorrected (corrected) CP asymmetries $A_{CP}(A_{CP}^{'})$ for the observables defined in~\cite{Khachatryan:2016ngh}.\label{fig::CMSCP}} \end{center} \end{figure} \section{Charge and CP asymmetries in b-hadron decays} The ATLAS collaboration measures the same- and opposite-sign charge asymmetries in lepton+jets $t\bar{t}$ events where a b-hadron decays semileptonically to a soft muon~\cite{Aaboud:2016bmk}. The charge asymmetries are formed based on the charge of the lepton from the top-quark decay and the charge of the soft muon from the semileptonic decay of a b-hadron. These asymmetries are measured in a fiducial region corresponding to the experimental acceptance. The data are categorized into same-top-like and different-top-like soft-muon-tagged (SMT) muons by a kinematic likelihood fitter (KLFitter)~\cite{Erdmann:2013rxa}. Given the charge asymmetries, four CP asymmetries (one mixing and three direct) are also measured. \section{Summary} The latest results of $t\bar{t}$ asymmetry measurements performed by the CMS and ATLAS collaborations are presented. All results are compatible with the predictions of the SM, and no hint of new physics beyond the standard model is found.
1,116,691,501,190
arxiv
\section{Introduction}\label{introduction} The unprecedented adoption of mobile handhelds has created a host of new services that require accurate localization, even in GPS-denied environments. How to accurately localize without satellite positioning has been an active research topic. Various cooperative localization methods have been developed that are suitable for high-noise scenarios, e.g., indoors \cite{Patwari:2005kc,Wymeersch:2009hv}. \subsection{The Forgotten Challenge}\label{motivation} The motivation behind this work is the assumption, made throughout the distributed cooperative localization literature, of a universally known {\em global coordinate system (GCS)}. Even in anchor-free algorithms, such as those in \cite{Ihler:2004jq,priyantha2003anchor,shang2004improved}, where nodes localize to a relative {\em local coordinate system (LCS)} only, anchors with shared GCS knowledge are required to transform the local coordinates to global ones. Otherwise, nodes would have no idea of the whereabouts of the global origin. In the case of distributed ad-hoc networks, where the anchors are simply nodes with a good estimate of their GPS coordinates, achieving a shared GCS between them is non-trivial. Presumably, in this case, the anchors would have to communicate with each other to agree on a GCS with a common origin and pass that information to the rest of the network nodes. In addition, up-to-date information between nodes would have to be maintained, inducing an increased communication overhead. The solution we suggest is to use a GCS directly for all calculations, thereby eliminating any need for an LCS and all of the described issues. The first obvious choice of a GCS would be to use GPS as a GCS common to all nodes. Unfortunately, using GPS coordinates in message passing operations would easily make calculations underflow due to the small distances inherent in indoor localization, and would require scaling and normalization at each node. This means that every node would have to convert incoming messages to an LCS, suitable for message passing operations, and then convert them back to the GCS for transmission, increasing the computational cost of the algorithm. Instead, we propose a grid-based GCS that can be used directly to conduct message passing operations and at the same time hugely decrease the computational cost. In this manner, we both remove any requirement for the network to agree on a GCS and achieve very low complexity. \subsection{Our Contributions} This correspondence proposes a novel scheme that uses a GCS system suitable for message passing algorithms in cooperative localization. As a real-life example, we use NATO's military grid reference system (MGRS), cf. \cite{ngamgrs}.\footnote{Even though this letter uses NATO's MGRS coordinate system, any grid-based coordinate system can be used with trivial changes.} The approach no longer requires GCS coordination between anchors. Also, it has inspired the use of parametric representations using multinomial probability mass functions (pmfs), allowing for a fast, robust, and accurate cooperative localization algorithm that elegantly resolves the GCS knowledge requirement. In summary, we have made the following contributions: \begin{itemize} \item We propose a grid-based GCS solution, i.e., mapping GPS coordinates to unique grid identifiers, solving the common reference issue in all distributed localization techniques.
\item Parametric approximations to the pmfs are proposed to overcome the computational bottleneck of non-parametric belief propagation (BP) used in cooperative localization. \item Simulation results illustrate that the proposed grid-based BP method, which is referred to as Grid-BP, provides similar accuracy with low computational cost when compared to common techniques with {\em ideal} reference. \end{itemize} \section{Problem Formulation}\label{problemformulation} We consider a network of nodes in a 2D environment which consists of $N$ agents and $M$ anchors, where $M\geq 4$ and $N \gg M$. Let the space be subdivided into a square grid where each square ``bucket'' has a unique identifier, namely an ID. Then let $\bm X=[{X}_1,\dots,{X}_i,\dots,{X}_{N+M}] $ be the locations of all nodes, with ${X}_{i}$ representing the unique identifier of node $i$ and $X_i \in \{x_1,x_2, \ldots x_k \}$, where $k$ iterates over all possible IDs. Also, let $\bm Z$ denote the coordinates of all nodes, with $\bm{Z}_{i}$ representing the coordinates of node $i$, and the domain of $\bm{Z}_i$ is $\Re^2$. The nodes communicate wirelessly and it is assumed that the maximum communication range for each node is $R_{\max}$. Time is slotted and time slots are denoted by the time index superscript $(t)$ for $t=1,2,\dots,\infty$. We represent the problem as a joint pmf. Let $p^{(t)}(X_{i})$ be the pmf, i.e., the belief that node $i$ has about its location at time $t$. We model $p^{(t)}(X_{i})$ as a multinomial distribution with parameters $\theta_k$, where $\theta_{k}$ is the probability of node $i$ being in ID $x_{k}$ and $\sum_{k}\theta_{k}=1$. In addition, let the set of all nodes $j$ within range of node $i$ be denoted as the neighbourhood ${\cal N}_{i}$. Initially, the belief for the agents can be a non-informative uniform pmf over the grid, while the anchors' pmfs are focused in the IDs close to the real position, e.g., within $10{\rm m}$. Anchors can obtain their IDs either by directly mapping them from satellite data, cf. \cite{ngamgrs}. Node $i$ receiving a message from node $j$ at time slot $t$ can derive, using time-of-arrival (ToA) measurements,\footnote{The assumption of using ToA is not restrictive on the proposed algorithm because it can easily be used with other measurement models.} a noisy estimate $r^{(t)}_{j\rightarrow i}$ of the distance between them. For convenience, we assume $r^{(t)}_{j\rightarrow i}=r^{(t)}_{i \rightarrow j}= r^{(t)}_{ji}$. Thus, as in \cite{Buehrer:2010ew} for ToA distance measurements, we define the random variable $R_{ji}$ with its value $r_{ji}$ modelled as \begin{equation} r_{ji}=\| \bm{z}_i-\bm{z}_j\|+\eta_{ji}, \end{equation} where $\eta_{ji}$ is a Gaussian noise with variance $\sigma^{2}_{ji}=K_{e}\|\bm{z}_i -\bm{z}_j\|^{\beta_{ji}}$ in which $K_{e}$ is a proportionality constant capturing the combined physical layer and receiver effect, and $\beta_{ji}$ denotes the path loss exponent. In the case of line-of-sight (LoS), $\eta_{ji}$ is assumed zero mean, and $\beta_{ji}=2$, i.e., $\eta_{ji} \sim \mathcal{N}(0, \sigma^{2}_{ji})$. In this work we assume only LoS, but it would be easy to extend the algorithm with NLoS mitigation, by using e.g. \cite{oikonomou2011hybrid,Yin:2015}. 
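As a concrete illustration of the ranging model above, the following is a minimal Python sketch of a single LoS ToA measurement; the function name and the default values of $K_e$ and $\beta_{ji}$ are illustrative assumptions, not part of the measurement model specification.
\begin{verbatim}
import numpy as np

def toa_range(z_i, z_j, K_e=0.1, beta=2.0, rng=np.random.default_rng()):
    """Simulate a LoS ToA range estimate r_ji = ||z_i - z_j|| + eta_ji,
    where eta_ji ~ N(0, K_e * ||z_i - z_j||**beta)."""
    d = np.linalg.norm(np.asarray(z_i) - np.asarray(z_j))
    sigma = np.sqrt(K_e * d**beta)   # noise std deviation grows with distance
    return d + rng.normal(0.0, sigma)

# Example: one noisy range between two nodes 10 m apart
r_ji = toa_range([0.0, 0.0], [10.0, 0.0])
\end{verbatim}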
We define the likelihood of node $i$ and node $j$ measuring distance $R_{ji}=r_{ji}$ between them at time $t$, given $X_i,X_j$, as \begin{align} p^{(t)}(R_{ji}=r_{ji}\mid X_i, X_j) \propto \exp\left(-\left(\frac{r_{ji}- ||C_i-C_j ||_2}{h}\right)^2\right) \end{align} where $h$ controls the steepness, and $C_i$ and $C_j$ are the coordinates of the centers of the grid squares $X_i$ and $X_j$, respectively. Therefore, our objective is to find the maximum a posteriori (MAP) estimate, i.e., the values that maximize $p(\bm{X}|\bm{R})$ given distance measurements $\bm{R}=[R_{ji}]$. For a specific node $i$, we have \begin{equation} \hat{{X}}_{i} = \arg\max_{{X}_{i}}p^{(t)}({X}_i|\bm{R}_{i}). \end{equation} Thus, $p^{(t)}({X}_i|\bm{R}_i)$ can be evaluated using Bayes' rule as \begin{align} p^{(t)}({X}_i|\bm{R}_{i})&\propto p^{(t)}({X}_{i}) \prod_{j \in {\cal N}_i} p^{(t)}(R_{ji}|{X}_{i})\notag\\ &\propto p^{(t)}({X}_{i}) \prod_{j \in {\cal N}_i}\int p^{(t)}(R_{ji}|{X}_i,{X}_j) p^{(t)}({X}_{j})\text{d}{X}_j,\label{eq:product_marginal} \end{align} in which the sign ``$\propto$'' means ``is proportional to'', and normalization should be done to obtain the pmf. \section{The Proposed Grid-BP Algorithm}\label{mgrs-bpalgorithm} Cooperative localization can be viewed as running inference on a probabilistic graphical model (PGM) \cite{Wymeersch:2009hv}, where messages are representations of probability functions. Here, the proposed grid-based localization algorithm will be presented. First, the use of a cluster graph and BP will be motivated, and the PGM used will be analysed. Then efficient approximations for the marginalization and product operations will be given. \vspace{-.1in} \subsection{Belief Message Passing}\label{beliefmessagepassing} The network can be modelled as a cluster graph and a loopy belief message passing algorithm can be used. We adopt a Bethe cluster graph \cite{Koller:2009tn}. The lower factors are composed of univariate potentials $\psi(X_{i})$. The upper region is composed of factors equal to $\psi( X_{i},X_{j}, R_{ji})$, e.g., see Fig.~\ref{fig:clustergraph}. \begin{figure}[!ht] \centering \includegraphics[scale=0.5]{bethe.pdf} \caption{The cluster graph. Lower row factors denote the node position beliefs. Upper row factors denote the ranging interactions between the nodes.}\label{fig:clustergraph} \end{figure} The lower factors are set to the initial beliefs for the given time slot $(t)$, and the upper factors to the corresponding conditional pmfs: \begin{align} \psi(X_{i})&=p^{(t)}(X_{i}),\\ \psi( X_{i},X_{j},R_{ji}= r^{(t)}_{ji})&= p^{(t)}(R_{ji}=r^{(t)}_{ji}|X_{i}, X_{j}). \end{align} Messages are then passed between nodes for multiple iterations until the node beliefs have converged. The message from node $j$ to node $i$ at BP iteration $(s+1)$ is calculated by \begin{equation}\label{eq:messageupper} \mu^{(s+1)}_{j \rightarrow i}(X_{i}) = \int \psi(X_{i},X_{j},R_{ji}=r^{(t)}_{ji}) \frac{b^{(s)}_{j\rightarrow i}(X_{j})}{\mu^{(s)}_{i\rightarrow j}(X_{j}) }\text{d} X_{j}, \end{equation} where intuitively, a message \eqref{eq:messageupper} is the belief that node $j$ has about the location of node $i$, and $r^{(t)}_{ji}$ is the observed value of the distance between the nodes, at time slot $t$. Then the belief of node $i$ is updated as \begin{equation} b^{(s+1)}_{i}(X_{i})= \lambda\psi(X_{i})\prod\limits_{k \in {\cal N}_{i}} \mu^{(s+1)}_{k \rightarrow i}(X_{i}) + (1-\lambda)b^{(s)}_{i}(X_{i}), \label{eq:messagelower} \end{equation} where $\lambda$ is a dampening factor used to facilitate convergence.
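To make the damped update \eqref{eq:messagelower} concrete, here is a minimal Python sketch over a fixed, shared list of grid IDs; it assumes that the local potential, the incoming messages, and the previous belief have already been aligned to the same ID ordering, which is an illustrative simplification.
\begin{verbatim}
import numpy as np

def update_belief(psi_i, incoming, b_prev, lam=0.5):
    """Damped belief update b_i^(s+1) over a fixed list of grid IDs.

    psi_i    : local potential psi(X_i), a 1-D pmf over the IDs
    incoming : list of messages mu_{k->i}(X_i), 1-D pmfs over the same IDs
    b_prev   : previous belief b_i^(s)
    lam      : dampening factor lambda
    """
    prod = np.array(psi_i, dtype=float)
    for mu in incoming:            # elementwise product of all incoming messages
        prod *= np.asarray(mu)
    prod /= prod.sum()             # renormalize to a pmf
    return lam * prod + (1.0 - lam) * np.asarray(b_prev)
\end{verbatim}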
BP continues until convergence, or if convergence is not guaranteed, $s$ reaches a maximum number of iterations $I_{\text{max}}$. Then the beliefs, representing approximations to the true marginals, for each node are found by \eqref{eq:messagelower}, i.e., $p^{(t+1)}(X_i)=b^{(s+1)}_{i}(X_{i})$. The proposed Grid-BP is given as Algorithm \ref{alg:HSM}. Each node needs to perform a marginalization operation \eqref{eq:messageupper}, and a product operation \eqref{eq:messagelower}. Approximations are required for both complex operations. In Grid-BP, we take advantage of the multinomial parametric form which we discuss next. \begin{algorithm} \caption{Grid-BP} \begin{algorithmic}[1] \label{alg:HSM} \STATE Initialize beliefs $p^{(0)}(X_{i}) ~\forall i \in $ Nodes \FOR{$t=0$ to $T$} \FORALL{ $i \in$ Nodes} \STATE Broadcast current belief $p^{(0)}(X_{i})$ \FORALL{$j \in {\cal N}_{i}$ } \STATE Collect distance estimates $r^{(t)}_{ji}$ \ENDFOR \ENDFOR \STATE Initialize $\psi(X_{i}) =p^{(t)}(X_{i})$ \STATE Initialize $\psi(X_{i}, X_{j}, R_{ij})= p^{(t)}(R_{ij}= r^{(t)}_{ji} \mid X_{i}, X_{j})$ \REPEAT \FORALL{ $i \in$ Nodes} \FORALL{$j \in {\cal N}_{i}$ } \STATE Receive $b^{(s)}_{j}(X_{j}) $ \STATE Calculate $\mu^{(s+1)}_{j \rightarrow i}(X_{i}) $, using \eqref{eq:messageupper} using Gibbs sampling (i.e., Algorithm \ref{alg:HSMgibbs}) \ENDFOR \STATE Calculate $b^{(s+1)}_{i}(X_{i})$, using \eqref{eq:messagelower}. \STATE Check for convergence \STATE Send $b^{(s+1)}_{i}(X_{i}) $ \ENDFOR \UNTIL{convergence or $s$ reaches $I_{\text{max}}$} \STATE Update belief $p^{(t+1)}(X_{i})$, using \eqref{eq:messagelower}. \ENDFOR \end{algorithmic} \end{algorithm} \vspace{-.1in} \subsection{Marginalization Operation \eqref{eq:messageupper}}\label{marginilizationoperation} The calculation of \eqref{eq:messageupper} gives the belief node $j$ has about node $i$. To understand this, let us assume the case where all energy in $b^{(s)}_{j}(X_{j}) $ is concentrated at a single ID $x_j$, then node $j$ would believe that node $i$ is located in one of the IDs that approximate a ``circle '' with centre $x_j$ and radius $ r^{(t)}_{ji}$. Hence, to get $\mu^{(s+1)}_{j \rightarrow i}(X_{i}) $, first we draw $L$ particles from $x_j^{(l)} \sim b^{(s)}_{j}(X_{j}) $. Then we draw $L$ samples from $\phi^{(l)} \sim {\cal U}(0, 2\pi)$ and $L$ samples from $\hat{r}^{(l)}_{ji}\sim \mathcal{N}(r^{(t)}_{ji}, h)$. The Gibbs sampling algorithm is provided as Algorithm \ref{alg:HSMgibbs}. We repeat Algorithm \ref{alg:HSMgibbs} for all incoming messages and we will get $\{x_j^{(l)},\hat{r}_{ji}^{(l)},\phi^{(l)}\}_{l=1,j=1}^{L,\mid {\cal N}_i \mid}$. 
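The sampling steps just described can be sketched in Python as follows; the helper \texttt{map\_dm\_to\_id} stands in for the ${\sf MAP}\text{-}{\sf DMtoID}$ routine defined below, and the data structures used here are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

def sample_message_particles(ids, theta_j, r_ji, h, L, map_dm_to_id,
                             rng=np.random.default_rng()):
    """Draw L candidate IDs for node i from one neighbour's belief.

    ids          : list of grid IDs carrying the neighbour's belief
    theta_j      : multinomial parameters of b_j^(s), aligned with ids
    r_ji         : measured range between nodes j and i
    h            : ranging spread parameter (used here as a std deviation)
    map_dm_to_id : callable (id_j, r, phi) -> id_i, placeholder for MAP-DMtoID
    """
    samples = []
    for _ in range(L):
        x_j = ids[rng.choice(len(ids), p=theta_j)]   # x_j ~ b_j(X_j)
        phi = rng.uniform(0.0, 2.0 * np.pi)          # phi ~ U[0, 2*pi]
        r_hat = rng.normal(r_ji, h)                  # r_hat ~ N(r_ji, h)
        samples.append(map_dm_to_id(x_j, r_hat, phi))
    return samples
\end{verbatim}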
\begin{algorithm} \caption{Grid Gibbs Sampling} \begin{algorithmic}[1] \label{alg:HSMgibbs} \STATE Set ${\cal D}_{ X_{i}}$ to empty \FORALL{$j\in{\cal N}_{i}$} \STATE Sample $ x^{(l)}_{j}\sim \mu_{j \rightarrow i}( X_{j})$ which is a multinomial pmf \STATE Sample $\phi^{(l)} \sim {\cal U}[0, 2\pi]$ \STATE Sample $\hat{r}^{(l)}_{ji}\sim \mathcal{N}(r^{(t)}_{ji}, h)$ \STATE $ x^{(l)}_{i} ={\sf MAP}\text{-}{\sf DMtoID}(x^{(l)}_{j} , \hat{r}^{(l)}_{ji},\phi^{(l)})$ which maps the distance metric to IDs \STATE Add $ \{ x^{(l)}_{i} \}_{l=1}^{L}$ to ${\cal D}_{ X_{i}}$ \ENDFOR \RETURN ${\cal D}_{ X_{i}}$ \end{algorithmic} \end{algorithm} It is important to note that in order to combine the distance metric with the sampled IDs $\{x_j^{(l)}\}$ and get the set ${\cal D}_i=\{x_i^{(l)}\}$, we use a mapping function which we define as \begin{equation} x_i^{(l)}={\sf MAP}\text{-}{\sf DMtoID}(x_j^{(l)},\hat{r}_{ji}^{(l)},\phi^{(l)}). \end{equation} Intuitively, for each sampled ID $x_j^{(l)}$ we count the number of IDs to the east and to the north at which node $i$ will lie, given the measured distance samples $\hat{r}_{ji}^{(l)}$ normalized by the grid resolution $D$, as \begin{equation}\label{eq:steps} \left[\begin{array}{c} d_H\\ d_V \end{array}\right]={\rm int}\left(\frac{\hat{r}_{ji}^{(l)}}{D} \left[\begin{array}{c} \cos(\phi^{(l)})\\ \sin(\phi^{(l)}) \end{array}\right]\right). \end{equation} Then we map the displacement $(d_H, d_V)$ to a new ID and return it as $x_i^{({l})}$. The set ${\cal D}_{ X_{i}}$ of all samples obtained from all incoming messages is used to find \eqref{eq:messagelower}. The distance to ID mapping function is given as Algorithm \ref{alg:MAP-DMtoID}. In the case of MGRS IDs, the horizontal and vertical mappings are done by adding $d_H$ and $d_V$ to the easting and northing components of the ID $x_j^{({l})}$, as discussed in Section \ref{militarygridreferencesystem}. There is no need to do any reverse mapping as we directly get the new IDs. \begin{algorithm} \caption{${\sf MAP}\text{-}{\sf DMtoID}$} \begin{algorithmic}[1] \label{alg:MAP-DMtoID} \STATE Calculate horizontal and vertical steps using \eqref{eq:steps} \STATE Map horizontal ID $ x^{(l)}_{j}\rightarrow b^{(l)}_{j}$ \STATE $b^{(l)}_{h}=b^{(l)}_{j} + d_H$ \STATE Inverse horizontal mapping $b^{(l)}_{h} \rightarrow x^{(l)}_{h}$ \STATE Map vertical ID $x^{(l)}_{h}\rightarrow b^{(l)}_{v}$ \STATE $b^{(l)}_{i}=b^{(l)}_{v} + d_V$ \STATE Inverse vertical mapping $b^{(l)}_{i} \rightarrow x^{(l)}_{i}$ \RETURN ${ x}^{(l)}_{i}$ \end{algorithmic} \end{algorithm} \subsection{Product Operation \eqref{eq:messagelower}}\label{productoperation} To obtain \eqref{eq:messagelower}, we will first convert the samples calculated for each message $\mu^{(s+1)}_{j \rightarrow i}(X_{i})$, \eqref{eq:messageupper}, to parametric multinomial pmfs and then calculate their product. We assume that the parameters of each multinomial are random variables $\bm \Theta_i$ with a uniform Dirichlet prior with parameters $\bm \alpha =[\alpha_1 \cdots \alpha_K]$, where $K$ denotes the number of unique IDs in the pmf, i.e., $\mu^{(s+1)}_{j \rightarrow i}(X_{i})\approx p^{(s+1)}_{j \rightarrow i}(X_{i}|\bm \Theta_i; \bm \alpha ) $. First, we use the samples from each incoming message as observations and get the MAP estimate $ \hat{\bm\theta}_i=[\hat{\theta}_{i,1}\cdots \hat{\theta}_{i,K}]$ of the parameters $\bm \Theta_i$ of each $p^{(s+1)}_{j \rightarrow i}(X_{i}|\bm \Theta_i; \bm \alpha ) $. We also assume that all incoming messages have the same prior distribution.
This allows us to efficiently create parametric forms of the incoming messages as multinomial pmfs. These parameters are calculated as \begin{equation} \hat{\bm\theta_i}= {\mathbb E}\left[P(\bm\Theta_i | {\cal D}_{X_i})\right]= {\mathbb E} \left[p({\cal D}_{X_i}| \bm\Theta_i)p(\bm\Theta_i)\right], \end{equation} which gives \begin{equation}\label{eq:likelihood} \hat{\theta}_{i,k}=\frac{M_{k}+| {\cal N}_i | \alpha_{k}}{| {\cal N}_i |\sum\limits_{k}\left(M_{k}+\alpha_{k}\right)}, \end{equation} where $M_{k}$ is number of particles $x_k $ in ${\cal D}_{X_i}$. The algorithm is presented in Algorithm \ref{alg:multiproduct}. For clarity, the quantities of each ID in the samples are shown as being found by a count function but in practice it can be done during the Gibbs sampling step allowing for a more efficient algorithm. Finally, with the parametric forms of each \eqref{eq:messageupper} in hand, we can easily calculate \eqref{eq:messagelower} as the dot product of the $\bm \theta_i$ parameters of all incoming messages and the node's own belief. \begin{algorithm} \caption{MAP Parameter Estimation} \begin{algorithmic}[1] \label{alg:multiproduct} \STATE Let $|X_{i}|$ be the number of unique IDs in $p( X_{i}|\bm\Theta_{i})$ \STATE Calculate $M_{k}={\rm count}( x_{k}, {\cal D}_{ X_{i}} ) ~\forall k \in | X_{i}|$ \FORALL {$k\in| X_{i}|$} \STATE Calculate $\hat{\theta}_{i,k}$, with \eqref{eq:likelihood} \ENDFOR \RETURN$ {\hat{\bm \theta}_{i}} $ \end{algorithmic} \end{algorithm} \subsection{Message Filtering}\label{messagefiltering} As it makes no sense to keep all the possible IDs on the planet wide GCS, we consider that each node constructs pmfs with the IDs only within a specific range $ R_{\text{grid}}$, e.g., within $100$m, of the IDs it receives in the first iteration. To reduce the IDs further, we propose a simple filter to only keep the most probable IDs summing up to an energy threshold. Simulation results (not included in this paper) suggest that by keeping $\sim80\%$ of the total energy of the pmf, the size of the messages is decreased by $\sim 90\%$ with no increase in localization error. Thus, assuming that each message covers a $100 \times 100{\rm m}^2$ grid, the total number of IDs used without the filter would be $10^4$. After the filter, only $\sim100$ IDs will be transmitted. \subsection{Complexity}\label{complexity} Most message passing cooperative localization algorithms tend to use a variant of Gibbs sampling to calculate \eqref{eq:messageupper} with similar computational costs \cite{Lien:2012bh}. Hence, the complexity cost tends to be bounded by the product operation \eqref{eq:messagelower} cost. As Grid-BP multiplies parametric form multinomials, the operation is bounded by $\bar{K} (| {\cal{N}}_{i} |+1)$, where $\bar{K}$ is the average number of IDs used in the calculations. The computational cost for the product operation of the compared algorithms is summarized in Table \ref{table:productcomplexity}, where $L$ is the number of particles and $| {\cal{N}}_{i} |$ is the number of messages involved and $I_{\text{Hybrid-BP}}$ is the number of iterations the product algorithm in \cite{Caceres:2011wx} is run. 
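For illustration, the MAP estimate \eqref{eq:likelihood} and the subsequent product of parametric messages can be sketched in Python as follows; the variable names and the value of the uniform Dirichlet prior are assumptions made for the example only.
\begin{verbatim}
import numpy as np

def map_multinomial_params(counts, n_neighbors, alpha=1.0):
    """MAP estimate of theta_{i,k} from the particle counts M_k,
    mirroring the formula above with a uniform Dirichlet prior."""
    M = np.asarray(counts, dtype=float)
    a = np.full_like(M, alpha)
    return (M + n_neighbors * a) / (n_neighbors * np.sum(M + a))

def combine_messages(theta_list):
    """Combine parametric messages by an elementwise product of
    their multinomial parameters, renormalized to a pmf."""
    prod = np.ones_like(np.asarray(theta_list[0], dtype=float))
    for theta in theta_list:
        prod *= np.asarray(theta, dtype=float)
    return prod / prod.sum()
\end{verbatim}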
\begin{table}[H] \caption{Comparison of complexity costs of \eqref{eq:messagelower}.$^\S$} \centering \label{table:productcomplexity} \begin{tabular}{c|c|c} Approach & Algorithm & Complexity\\ \hline Non-parametric & NBP & $L_{\text{NBP}}^2(| {\cal{N}}_{i} |$+1)\\ Non-parametric & HEVA-BP & $L_{\text{HEVA}}^2(| {\cal{N}}_{i} |$+1)\\ Parametric & Grid-BP & $\bar{K}(| {\cal{N}}_{i} |$+1)\\ Parametric & Hybrid-BP & $I_{\text{Hybrid-BP}} L_{\text{NBP}}| {\cal{N}}_{i} |(| {\cal{N}}_{i} |+1)$\\ \end{tabular}\\ $^\S$Note that $\bar{K}\simeq L_{\text{HEVA}}\ll L_{\text{NBP}}$, and typically $I_{\text{Hybrid-BP}}\approx 100$. \end{table} \section{Review of MGRS}\label{militarygridreferencesystem} In this correspondence, as a grid based system, we employ MGRS, which is the geo-coordinate standard used by the NATO military for locating points on the planet \cite{ngamgrs} and is a combination of the universal transverse mercator (UTM) grid system and the universal polar stereographic (UPS) grid system, with a different labelling convention. It is essentially a global mesh grid that assigns a unique ID to each grid square. An example ID is ${\rm 10QCG12345678}$, where the first part ``${\rm 10Q}$'' is called the grid zone designator (GZD), the second part ``${\rm CG}$'' is the $100,000$-meter-square identifier, and finally the last numerical part gives the easting (first half digits) and northing (second half digits) inside the square identifier. Every two digits used (for a minimum of $2$ and a maximum of $10$) increase the resolution by a factor of $10$, down to a resolution of $1{\rm m}^2$ grid squares. Map coordinates are read from west to east first (easting), then from south to north (northing), i.e., left-right, down-up. In cases where part of the ID is common to all neighbouring nodes, the common part can be dropped and only the rest needs to be transmitted or used. For details on the specifics of MGRS, readers are referred to \cite{ngamgrs}. \section{Simulation Results} To assess the performance of Grid-BP, we conducted $300$ Monte-Carlo simulations and the root-mean-square (RMS) localization error was calculated for various noise levels. In each simulation, $100$ nodes with $20$ anchors are placed randomly in a $100{\rm m}\times100{\rm m}$ area, the communication range is limited to $12{\rm m}$, and $|{\mathcal N}_i|_{\text{avrg}}=4.03$. Anchor locations are modelled as multivariate Gaussian pdfs with an identity variance matrix. The grid resolution for Grid-BP is $D=1{\rm m}$. We compare Grid-BP with HEVA-BP \cite{oikonomou2011hybrid}, a computationally cheaper variation of non-parametric BP (NBP) \cite{Ihler:2005be}. We also compare it with a parametric belief propagation algorithm, namely Hybrid-BP \cite{Caceres:2011wx}.\footnote{For Hybrid-BP \cite{Caceres:2011wx}, we did not use information from satellites.} In addition, the maximum number of message passing iterations is set to $I_{\text{max}}=15$. The experiments were run for different noise levels, with the noise factor ranging over $K_e \in [0.0, 0.3]$ and the number of particles set to $200$. Furthermore, to showcase the increased complexity of using GPS coordinates as a common GCS, a variant of HEVA-BP that uses GPS coordinates (HEVA-BP-GPS) was also evaluated. In HEVA-BP-GPS, messages contain GPS coordinates that every node converts to an LCS before calculating \eqref{eq:messageupper} and \eqref{eq:messagelower}. Afterwards, the updated beliefs are converted back to GPS coordinates and transmitted.
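Returning to the ID structure described in Section \ref{militarygridreferencesystem}, the following toy Python sketch shows how the numerical easting/northing part of an MGRS-like ID can be shifted by an integer displacement $(d_H, d_V)$, as required by ${\sf MAP}\text{-}{\sf DMtoID}$; it deliberately ignores grid-zone and $100$\,km-square boundary crossings, which a full implementation would have to handle.
\begin{verbatim}
def shift_mgrs_numeric(mgrs_id, d_h, d_v, prefix_len=5):
    """Shift the easting/northing digits of an MGRS-like ID by (d_h, d_v) squares.

    mgrs_id    : e.g. '10QCG12345678' (GZD + square letters + 4+4 digits)
    d_h, d_v   : integer displacements in grid squares (east and north)
    prefix_len : length of the GZD plus 100 km square identifier (assumed fixed)
    """
    prefix, digits = mgrs_id[:prefix_len], mgrs_id[prefix_len:]
    half = len(digits) // 2
    easting = int(digits[:half]) + d_h      # read west to east first
    northing = int(digits[half:]) + d_v     # then south to north
    return prefix + f"{easting:0{half}d}" + f"{northing:0{half}d}"

# Example: one square east and two squares north
# shift_mgrs_numeric('10QCG12345678', 1, 2) -> '10QCG12355680'
\end{verbatim}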
\begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{GBP-draft-noise-p200.pdf} \caption{The RMS error versus the range estimation noise factor $K_{e}$ when the average node connectivity is $4.03$.}\label{fig:noise} \end{figure} In Fig.~\ref{fig:noise}, the RMS localization error of all the algorithms as the noise coefficient increases is shown. We see that HEVA-BP and HEVA-BP-GPS have a better accuracy than both Grid-BP and Hybrid-BP. Grid-BP has a slightly higher RMS error. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{GBP-draft-time-vs-particles.pdf} \caption{The average simulation time [m] versus the number of particles used.}\label{fig:time} \end{figure} In Fig.~\ref{fig:time}, the average simulation time against the number of particles is shown. The improvement in computational cost by Grid-BP is observed, especially as the number of particles increases. It should also be noted that for higher particle numbers the cost of GPS scaling becomes almost insignificant compared to the cost of the message passing operations. Note that both HEVA-BP and Hybrid-BP rely on the strong assumption of shared knowledge of the GCS origin, while Grid-BP and HEVA-BP-GPS do not (the realistic scenario). Even though HEVA-BP-GPS performs as well as HEVA-BP, it incurs an increased computational cost due to the mapping of the GPS coordinates to a local Cartesian reference frame (as can be observed in the mean simulation times in Fig.~\ref{fig:time}). As the number of particles gets higher, the relative computational efficiency of Grid-BP can also be seen. Note that all the simulations shown were run on an Intel i7 2.6GHz, using Python for scientific computing \cite{Oliphant:2007dm}. \section{Conclusion} This correspondence has proposed a novel parametric BP algorithm for cooperative localization that uses a grid-based system. The resulting Grid-BP algorithm combines a grid-based GCS that alleviates the hidden issue of requiring shared reference knowledge, and a parametric representation which allows quick and efficient inference. Simulation results showed that Grid-BP achieves accuracy similar to that of other BP algorithms, which additionally rely on a known GCS, at a significantly lower computational cost. Grid-BP can also easily be extended to mobile applications and to include the non-LoS mitigation filter proposed in \cite{oikonomou2011hybrid}, making it a versatile and reliable choice in both military and civilian applications.
1,116,691,501,191
arxiv
\section{\@ifstar{\starsection}{\nostarsection}} \newcommand\nostarsection[1] {\sectionprelude\origsection{#1}\sectionpostlude} \newcommand\starsection[1] {\sectionprelude\origsection*{#1}\sectionpostlude} \newcommand\sectionprelude{% \vspace{1em} } \newcommand\sectionpostlude{} \makeatother \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\weight}[2]{#1{(#2)}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{S}{S} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{Y}}{\mathcal{Y}} \newcommand{\mathcal{Z}}{\mathcal{Z}} \DeclareMathOperator{\im}{im} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\Sym}{\mathrm{Sym}} \newtheorem{proposition}{Proposition}[section] \newtheorem{thm}[proposition]{Theorem} \newtheorem{lemma}[proposition]{Lemma} \newtheorem{corollary}[proposition]{Corollary} \theoremstyle{definition} \newtheorem{defi}[proposition]{Definition} \newtheorem{notation}[proposition]{Notation} \theoremstyle{remark} \newtheorem*{remark}{Remark} \title{Betti numbers of configuration spaces of surfaces} \author{Gabriel C. Drummond-Cole} \thanks{The first author was supported by IBS-R003-D1} \address{Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 37673, Republic of Korea} \author{Ben Knudsen} \thanks{The second author was supported by NSF award 1606422} \address{Department of Mathematics, Harvard University, Cambridge 02138, USA} \subjclass[2010]{55R80, 57N65, 17B56} \begin{document} \begin{abstract} We give explicit formulas for the Betti numbers, both stable and unstable, of the unordered configuration spaces of an arbitrary surface of finite type. \end{abstract} \maketitle \section{Introduction} In this paper, we study the configuration space of $k$ unordered points in a surface $\Sigma$ of finite type, which is defined as the quotient \[B_k(\Sigma) =\{(x_1,\ldots, x_k)\in \Sigma^k | x_i\ne x_j \text{ if } i\neq j\}/S_k,\] where $S_k$ denotes the symmetric group on $k$ letters. Our goal is the explicit computation of the Betti number $\beta_i(B_k(\Sigma))$ for every choice of $i$, $k$, and $\Sigma$. This computation is divided among the following results according to the nature of $\Sigma$:\vspace{.1in} \begin{center} {\def1.5{1.2} \begin{tabular}{ c c } Type of surface & Computation \\ \hline Closed nonorientable & Proposition \ref{prop:closed nonorientable} \\ Open nonorientable & Proposition \ref{prop: open nonorientable} \\ Open orientable & Proposition \ref{prop: punctured surface Betti numbers} \\ Closed orientable & Corollaries \ref{cor:unstable k+1}, \ref{cor: second unstable betti number}, and \ref{cor: stable numbers}\\ \end{tabular} }\end{center}\vspace{.1in} Some of these computations are known. The case of the plane is treated by Arnold in \cite{Arnold:CRCBG}, the case of a once-punctured orientable surface by B\"{o}digheimer--Cohen in \cite{BodigheimerCohen:RCCSS}, the case of the sphere by Salvatore in \cite{Salvatore:CSSHLS} (see also \cite{RandalWilliams:TCHCSS}), the case of the real projective plane by Wang in \cite{Wang:OBGRP2}, and the case of a general closed nonorientable surface by the second author in \cite{Knudsen:BNSCSVFH}. In addition, during the writing of this work, the authors learned of two independent computations in the case of the torus, due to Maguire \cite{Maguire:CCCS} and Schiessl \cite{Schiessl:BNUCST}, respectively. We include all of these computations below for the sake of completeness. The principal contribution of this work is the case of a general closed surface $\Sigma_g$ of genus $g$. 
In this most difficult case, the resulting formulas are rather complicated, and we complement them with two asymptotic results characterizing the behavior of $\beta_i(B_k(\Sigma_g))$ viewed alternately as a function of $i$ or of $g$. These results appear below as Corollaries \ref{cor:genus stability} and \ref{cor: fixed g polynomials}. According to unpublished work of Dan Petersen, the former result, which one might characterize as a kind of ``homological stability in genus,'' has a conceptual explanation in terms of representation stability for the symplectic groups. In order to describe our approach, we recall a general method for understanding the rational homology of configuration spaces of manifolds. To a manifold $M$, there corresponds a graded Lie algebra $\mathfrak{g}_M$ built from the cohomology of $M$, and the graded vector space $H_*(B_k(M);\mathbb{Q})$ coincides with a particular summand of the Lie algebra homology of $\mathfrak{g}_M$. This Lie algebra homology may be computed by means of the \emph{Chevalley--Eilenberg complex} (see Definition \ref{defi: CE complex}), and the work of this paper consists in the application of algebraic and combinatorial techniques to this complex. The Chevalley--Eilenberg complex has been a ubiquitous presence in the study of the unordered configuration spaces of manifolds (as well as the ordered---see \cite{Getzler:RMHMOCS}). Prominent examples of its appearance include the work of B\"{o}digheimer--Cohen--Taylor \cite{BodigheimerCohenTaylor:OHCS}, B\"{o}digheimer--Cohen \cite{BodigheimerCohen:RCCSS}, and F\'{e}lix--Thomas \cite{FelixThomas:RBNCS}, building on McDuff's foundational work \cite{McDuff:CSPNP}; and the work of F\'{e}lix--Tanr\'{e} \cite{FelixTanre:CAUCS} following Totaro \cite{Totaro:CSAV}. In order to make the identification with Lie algebra homology, each of these works requires assumptions about the background manifold---orientability, for example. More recently, the identification was established in full generality by the second author in \cite{Knudsen:BNSCSVFH} using the theory of \emph{factorization homology} developed by Ayala--Francis \cite{AyalaFrancis:FHTM} and Lurie \cite[Ch. 5]{Lurie:HA}, among others. Although the Chevalley--Eilenberg complex has appeared widely in the past, it has often been the practice to deal with it one subcomplex at a time---to treat separately the computations for the configuration spaces of five and of twenty points, say. Our approach is to treat the complex as a whole, performing the computation for $B_k(\Sigma)$ simultaneously for all $k$. It is this simultaneity that renders the computations feasible in practice. \subsection*{Acknowledgements} The first author thanks Christoph Schiessl for helpful conversation. The second author thanks Jordan Ellenberg, Benson Farb, and Joel Specter. \section{Recollections} \subsection{General conventions}\label{subsec: conventions} We work throughout with graded vector spaces or (co)chain complexes over the ground ring $\mathbb{Q}$. Degree is understood homologically; that is, the differential of a chain complex decreases degree while that of a cochain complex increases degree. In addition, our chain complexes will often carry an auxiliary grading, called \emph{weight}. Degree is generically indexed by $i$ and weight by $k$. We denote the dimension of the degree $i$ summand of a graded vector space $V$ by $\dim_i$. If $V$ is weighted, we denote the dimension of the degree $i$ and weight $k$ summand by $\dim_{i,k}V$. 
For $X$ a topological space, the Betti number $\beta_i(X)$ is $\dim_i H(X;\mathbb{Q})$. The $n$th \emph{suspension} of the graded vector space $V$ is the graded vector space $V[n]$ with $V[n]_i=V_{i-n}$, and the element of $V[n]$ corresponding to $x\in V$ is denoted $\sigma^nx$. Vector spaces are identified with graded vector spaces concentrated in degree 0; for example, \[\mathbb{Q}[n]_i=\begin{cases} \mathbb{Q}&\quad i=n\\ 0&\quad \text{else.} \end{cases}\] The degree of a homogeneous element $x$ is written $|x|$. Graded vector spaces will typically be finite dimensional in each degree. In this situation, it is convenient to collect the dimensions of the summands of $V$ into its \emph{Poincar\'{e} series} \[ \mathcal{P}_V(t)=\sum_{i\in\mathbb{Z}} \dim_i(V)t^i. \] For example, we have the equalities \begin{align*}\mathcal{P}_{\Lambda[x]}(t)&=1+t^{|x|}\\ \mathcal{P}_{\mathbb{Q}[x]}(t)&=\sum_{n\geq0}t^{n|x|}=\frac{1}{1-t^{|x|}}, \quad |x|\neq0. \end{align*} Poincar\'{e} series are additive under direct sum and, modulo convergence issues for unbounded complexes, multiplicative under tensor product. Moreover, for a complex $(V,d)$, we have the equality \[ \mathcal{P}_{H(V,d)}(t) = \mathcal{P}_{\ker(d)}(t) + t^{|d|}(\mathcal{P}_{\ker(d)}(t) - \mathcal{P}_V(t)).\] The symmetric algebra $\Sym(V)$ is understood in the graded sense; that is, \[\Sym(V):=\mathbb{Q}[V_{\mathrm{even}}]\otimes \Lambda[V_{\mathrm{odd}}].\] If $V$ is weighted, then $\Sym(V)$ inherits a weight grading by specifying that the inclusion of $V$ be weight preserving and that weights add under multiplication. Note that it is the homological degree alone that determines whether a homogeneous element is a polynomial or an exterior generator. We will typically work with $\Sym(V)$ where $V$ is a weighted complex concentrated in weights 1 and 2. In these situations, we employ the convention that a variable without a tilde has weight one, while a variable with a tilde has weight two. \subsection{Configuration spaces and Lie algebras} Recall that a graded Lie algebra is a graded vector space $\mathfrak{g}$ equipped with a linear map $[-,-]:\mathfrak{g}^{\otimes2}\to \mathfrak{g}$ satisfying the graded antisymmetry and graded Jacobi identities. \begin{defi}\label{defi: CE complex} Let $\mathfrak{g}$ be a graded Lie algebra. The \emph{Chevalley--Eilenberg complex} of $\mathfrak{g}$ is the chain complex $CE(\mathfrak{g})$ whose underlying graded vector space is $\Sym(\mathfrak{g}[1])$, which carries the structure of the cofree conilpotent cocommutative coalgebra on the graded vector space $\mathfrak{g}[1]$, and whose differential is the unique coderivation $D$ of that coalgebra structure such that \[D(\sigma x\cdot \sigma y)=(-1)^{|x|}\sigma[x,y].\] \end{defi} \begin{remark} The complex $CE(\mathfrak{g})$ computes the Lie algebra homology of $\mathfrak{g}$; the reader interested in more may consult any of many expositions, for instance~\cite[XIII]{CartanEilenberg:HA}, \cite[7]{Weibel:IHA}, or \cite[22]{FelixHalperinThomas:RHT}.
We note for the sake of completeness that the differential is given explicitly by the formula \[D(\sigma x_1\cdots \sigma x_n)=\sum_{1\leq i<j\leq n}(-1)^{|x_i|}\epsilon(x_1,\ldots, x_n)\sigma[x_i,x_j]\cdot\sigma x_1\cdots \widehat{\sigma x_i}\cdots \widehat{\sigma x_j}\cdots \sigma x_n,\] where the sign $\epsilon$ is determined by \[\sigma x_1\cdots \sigma x_n=\epsilon(x_1,\dots, x_n)\sigma x_i\cdot\sigma x_j\cdot\sigma x_1\cdots \widehat{\sigma x_i}\cdots \widehat{\sigma x_j}\cdots \sigma x_n.\] We shall not make use of this formula, as a more convenient sign convention, made explicit at the close of the section, is available in the case of interest. \end{remark} Now, let $\Sigma$ be a surface of finite type. We write $\mathbb{Q}^w$ for the orientation sheaf of $\Sigma$ and recall that there is an isomorphism $(\mathbb{Q}^w)^{\otimes 2}\cong\mathbb{Q}$ inducing a cup product from twisted to ordinary cohomology. We write $\mathfrak{g}_\Sigma$ for the graded Lie algebra given additively by \[\mathfrak{g}_\Sigma=H_c^{-*}(\Sigma;\mathbb{Q}^w)[1]\oplus H_c^{-*}(\Sigma;\mathbb{Q})[2],\] where $H_c$ denotes compactly supported cohomology, with the nonzero components of the bracket determined by the cup product according to the equation \[[\sigma\alpha,\sigma\beta]=(-1)^{|\beta|}\sigma^2(\alpha\smile\beta).\] \begin{remark} If $\Sigma$ is orientable, then $\mathfrak{g}_\Sigma$ is the tensor product of the cohomology of $\Sigma$ and the free graded Lie algebra on a single generator of degree 1, equipped with the canonical Lie algebra structure on the tensor product of a commutative algebra and a Lie algebra. If $\Sigma$ is nonorientable, there is an analogous characterization as the super tensor product of a commutative superalgebra with a Lie superalgebra. \end{remark} The filtration of $\mathfrak{g}_\Sigma$ by bracket length is canonically split, and we regard $\mathfrak{g}_\Sigma$ and thereby $CE(\mathfrak{g}_\Sigma)$ as weight graded according to the induced grading. We have the following result, which we do not state in the greatest possible generality. \begin{thm}\label{thm:Knudsen}\cite{Knudsen:BNSCSVFH} There is an equality $$\beta_i(B_k(\Sigma))= \dim_{i,k}H(CE(\mathfrak{g}_\Sigma)).$$ \end{thm} We close this section with a remark on signs. From the definitions, we have \[CE(\mathfrak{g}_\Sigma)=\Sym\big(H_c^{-*}(\Sigma;\mathbb{Q}^w)[2]\oplus H_c^{-*}(\Sigma;\mathbb{Q})[3]\big),\] and $D$ is determined as a coderivation by specifying that \[D(\sigma^2\alpha\cdot \sigma^2\beta)=(-1)^{|\alpha|+|\beta|+1}\sigma^3(\alpha\smile\beta).\] All of the calculations below are performed with a differential, also called $D$, that omits the sign $(-1)^{|\alpha|+|\beta|+1}$. This omission is justified by a linear change of variables; indeed, multiplication by $-1$ in the even part of $H_c^{-*}(\Sigma;\mathbb{Q})[3]$ eliminates the sign. \section{Nonorientable and open surfaces}\label{sec: non-orientable and open surfaces} In this section, we compute the Betti numbers of $B_k(\Sigma)$, where $\Sigma$ is either nonorientable or both orientable and open. The bulk of the former computation was carried out in \cite{Knudsen:BNSCSVFH} and the bulk of the latter in \cite{BodigheimerCohen:RCCSS}; nevertheless, we include them here for the sake of completeness and as a warmup for the more involved computations to come. \subsection{Nonorientable surfaces} Let $N_{h}\cong\#_h\mathbb{RP}^2$ denote the nonorientable surface of Euler characteristic $2-h$, and let $N_{h,n}=N_h\setminus S$, where $|S|=n$.
Using Poincar\'{e} duality and elementary algebraic topology, one finds that \[ H_c^{-*}(N_{h,n};\mathbb{Q}^w)\cong \mathbb{Q}[-1]^{h+n-1}\oplus \mathbb{Q}[-2]\] and \[H_c^{-*}(N_{h,n};\mathbb{Q})\cong \begin{cases} \mathbb{Q}\oplus \mathbb{Q}[-1]^{h-1}&\quad n=0\\ \mathbb{Q}[-1]^{h+n-2}&\quad\text{else.} \end{cases} \] Thus, the twisted cup product $H_c^{-*}(N_{h,n};\mathbb{Q}^w)^{\otimes 2}\to H_c^{-*}(N_{h,n};\mathbb{Q})$ vanishes for degree reasons, and we conclude that the Chevalley--Eilenberg differential vanishes. In the closed case $n=0$, we have the following: \begin{proposition}\label{prop:closed nonorientable} For any $k\geq0$, \[ \beta_i(B_k(N_{h})) =\left\{ \begin{array}{ll} \binom{h+i-2}{h-2} + \binom{h+i-5}{h-2}&i\leq k\\ \binom{h+i-5}{h-2}& i=k+1\\ 0&\text{else}. \end{array} \right. \] where $\binom{-1}{-1}\coloneqq 1$. \end{proposition} \begin{proof} Since $\mathfrak{g}_{N_h}$ is Abelian, there is no differential in $CE(\mathfrak{g}_{N_h})$, so the $i$th Betti number of $B_k(N_{h})$ is given by the dimension of the summand of degree $i$ and weight $k$ in \[ CE(\mathfrak{g}_{N_h})\cong \mathbb{Q}[p,\tilde{u}_1,\ldots, \tilde{u}_{h-1}]\otimes \Lambda[\tilde{v},u_1,\ldots, u_{h-1}] \] where $|p|=0$, $|u_j|=1$, $|\tilde{u}_j|=2$, and $|\tilde{v}|=3$ (this is a slight abuse of notation as the $u_j$ classes are from twisted cohomology groups of $N_{h}$ while the $\tilde{u}_j$ classes are from untwisted cohomology groups of $N_{h}$). Decomposing this expression using the elements $\tilde u_{h-1}^ru_{h-1}^s$ for $r\geq0$ and $s\in \{0,1\}$ yields the recursive formula $$\beta_i(B_k(N_h))= \sum_{j=0}^i\dim H_{i-j}(B_{k-j}(N_{h-1});\mathbb{Q})$$ for $h\geq2$, together with the base case $$\beta_i(B_k(N_1))=\begin{cases} 1&\quad i\in\{0,3\}\\ 0&\quad\text{else} \end{cases}$$ for $k>1$. An easy induction on $h$ using the identity \[ \sum_{j=0}^i \binom{n+j}{n}=\binom{n+i+1}{n+1} \] (valid for $n\ge -1$) completes the proof. \end{proof} In the punctured case, the calculation is even simpler: \begin{proposition}\label{prop: open nonorientable} For any $k\geq0$ and $n\geq1$, \[\beta_i(B_k(N_{h,n}))=\left\{ \begin{array}{ll} \binom{h+n+i-3}{h+n-3} + \binom{h+n+i-4}{h+n-3}&i\leq k\\ 0&\text{else}. \end{array} \right.\] \end{proposition} \begin{proof} As before, there is no differential in the Chevalley--Eilenberg complex, so the $i$th Betti number of $B_k(N_{h,n})$ is given by the dimension of the summand of degree $i$ and weight $k$ in $$CE(\mathfrak{g}_{h,n})\cong\mathbb{Q}[p, \tilde u_1,\ldots, \tilde u_{h+n-2}]\otimes\Lambda[u_1,\ldots, u_{h+n-1}],$$ where $|p|=0$, $|u_j|=1$, and $|\tilde u_j|=2$ (the same caveat about naming applies), and we again have the recursive formula $$\beta_i(B_k(N_{h,n}))= \sum_{j=0}^k\dim H_{i-j}(B_{k-j}(N_{h-1,n});\mathbb{Q})$$ for $h\geq2$, together with the base case $$\beta_i(B_k(N_{1,1}))=\begin{cases} 1&\quad i\in\{0,1\}\\ 0&\quad\text{else} \end{cases}$$ for $k\geq1$. Since $\beta_i(B_k(N_{h,n}))=\beta_i(B_k(N_{h+1,n-1}))$ for $n\geq2$, it suffices to verify the case $n=1$, which follows as before by induction on $h$. \end{proof} \subsection{Open orientable surfaces} We turn our attention to $\Sigma_{g,n}$, the $n$-punctured orientable surface of genus $g$, with $n>0$. As a graded vector space, the Lie algebra $\mathfrak{g}_{\Sigma_{g,n}}$ is given by \[ H_c^{-*}(\Sigma_{g,n};\mathbb{Q})[1] \oplus H_c^{-*}(\Sigma_{g,n};\mathbb{Q})[2]. 
\] One knows that \[ H_c^{-*}(\Sigma_{g,n};\mathbb{Q})\cong \mathbb{Q}[-1]^{2g+n-1}\oplus \mathbb{Q}[-2], \] and the only nonzero brackets take the paired classes in the lefthand copy of $H_c^1(\Sigma_{g,n})$ to their cup product in the righthand copy of $H^2_c(\Sigma_{g,n})$. Denoting by $\mathfrak{h}$ the Lie subalgebra spanned by these $2g+1$ classes, we may decompose $\mathfrak{g}_{\Sigma_{g,n}}$ as a sum of $\mathfrak{h}$ and an Abelian Lie algebra spanned by the remaining classes. This sum passes to a tensor product at the level of Chevalley--Eilenberg complexes, and we obtain the decomposition \[ CE(\mathfrak{g}_{\Sigma_{g,n}})\cong CE(\mathfrak{h})\otimes W_{g,n}. \] Here $W_{g,n}=\Sym(p, \tilde{a}_r,\tilde{b}_r, u_s,\tilde u_s)$, $|p|=0$, $|u_s|=1$, $|\tilde u_s|=2$, $|\tilde{a}_r|=|\tilde b_r|=2$, and $(1,1)\leq (r,s)\leq (g,n-1)$. The Lie algebra homology of $\mathfrak{h}$ was computed in dual form in~\cite{BodigheimerCohen:RCCSS}. \begin{thm}\label{thm:BodigheimerCohen}\cite[Theorem D.]{BodigheimerCohen:RCCSS} $$ \dim_{i,k}H(CE(\mathfrak{h}))=\begin{cases} \binom{2g}{i}-\binom{2g}{i-2}&\quad 0\leq i\leq g,\, i=k\\ \binom{2g}{i-1}-\binom{2g}{i+1}&\quad g+1\leq i\leq 2g+1,\, i=k-1\\ 0&\quad \text{else.} \end{cases}$$ \end{thm} As for $W_{g,n}$, we have the following. \begin{lemma}\label{lemma: punctured_oriented_lemma} For any $g,k\geq0$ and $n\geq1$, \[ \dim_{i,k}(W_{g,n})=\begin{cases}\sum_{l=0}^{\lfloor\frac{i}{2}\rfloor}\binom{n+i-2l-2}{n-2}\binom{2g+l-1}{2g-1}&\quad i\leq k\\ 0&\quad\text{else.} \end{cases} \] \end{lemma} \begin{proof} Write $W_*=\mathbb{Q}[\tilde a_r, \tilde b_r]$ and $W_+=\mathbb{Q}[\tilde{u}_s]\otimes \Lambda[u_s]$. Then for $i\le k$, \begin{align*} \dim_{i,k}(W_{g,n})&=\dim_{i,k}(\mathbb{Q}[p]\otimes W_*\otimes W_+)\\ &=\sum_{j=0}^k\dim_{i,k-j}(W_*\otimes W_+)\\ &=\dim_{i}(W_*\otimes W_+), \end{align*} since $\dim_{i,k}(W_*\otimes W_+)=0$ whenever $i\neq k$. We also have \[\dim_{i}(W_*)=\begin{cases} \binom{2g+l-1}{2g-1}&\quad i=2l\\ 0&\quad\text{else,} \end{cases}\] since the displayed binomial coefficient is the number of ways of writing $l$ as the sum of $2g$ nonnegative integers, while \[ \dim_{i}(W_+)= \binom{n+i-2}{n-2}\] since the displayed binomial coefficient is the number of ways of writing $i$ as the sum of $n-1$ nonnegative integers. With these calculations in hand, the claim follows from the equality \[\dim_{i}(W_*\otimes W_+) = \sum_{2l+m=i}\dim_{2l}(W_*)\dim_{m}(W_+).
\] \end{proof} Combining these calculations, we deduce the following: \begin{proposition}\label{prop: punctured surface Betti numbers} For $g,k\geq0$ and $n\geq1$, the Betti number $\beta_i(B_k(\Sigma_{g,n}))$ is: {\small\[ \beta_i(B_k(\Sigma_{g,n}))=\begin{cases} {\displaystyle\sum_{j=0}^g}\left[\binom{2g}{j}-\binom{2g}{j-2}\right]{\displaystyle\sum_{l=0}^{\lfloor\frac{i-j}{2}\rfloor}}\binom{2g+l-1}{2g-1}\bigg[\\\qquad{}\qquad{}\binom{n+i-j-2l-2}{n-2}+\binom{n+i+j-2g-2l-3}{n-2}\bigg] &i\le k-1\\ {\displaystyle\sum_{j=0}^g}\left[\binom{2g}{j}-\binom{2g}{j-2}\right]{\displaystyle\sum_{l=0}^{\lfloor\frac{i-j}{2}\rfloor}}\binom{2g+l-1}{2g-1}\binom{n+i-j-2l-2}{n-2}&i=k\\ \displaystyle 0&\text{else.} \end{cases} \] } \end{proposition} \begin{proof} For $i\leq k-1$, we have \begin{multline*} \beta_i(B_k(\Sigma_{g,n})) = \sum_{j=0}^g\sum_{l=0}^{\lfloor\frac{i-j}{2}\rfloor} \left[\binom{2g}{j}-\binom{2g}{j-2}\right] \binom{n+i-j-2l-2}{n-2} \binom{2g+l-1}{2g-1} \\ + \sum_{j=g+1}^{2g+1}\sum_{l=0}^{\lfloor\frac{i-j}{2}\rfloor} \left[\binom{2g}{j-1}-\binom{2g}{j+1}\right] \binom{n+i-j-2l-2}{n-2} \binom{2g+l-1}{2g-1}. \end{multline*} Indexing the first sum in the second expression by $r=2g+1-j$ instead of $j$, the difference of binomial coefficients becomes \[\binom{2g}{2g-r}-\binom{2g}{2g-r+2}=\binom{2g}{r}-\binom{2g}{r-2},\] which matches the difference in the first sum. The expression becomes \begin{align*} \sum_{j=0}^g \bigg[ \binom{2g}{j}&-\binom{2g}{j-2} \bigg] \bigg[\sum_{l=0}^{\lfloor \frac{i-j}{2}\rfloor} \binom{n+i-j-2l-2}{n-2}\binom{2g+l-1}{2g-1} \\&\qquad+ \sum_{l=0}^{\lfloor\frac{i+j-2g-1}{2}\rfloor} \binom{n+i+j-2g-2l-3}{n-2}\binom{2g+l-1}{2g-1} \bigg]. \end{align*} Now, if $l>\lfloor\frac{i+j-2g-1}{2}\rfloor$, then $2l>i+j-2g-1$, so that the first binomial vanishes. Thus we may combine the sums to obtain the desired expression. \end{proof} \section{Closed orientable surfaces}\label{sec:closedsurfaces} \subsection{Technical setup} Let $\Sigma=\Sigma_g$ denote the closed surface of genus $g$. Since \[ H_c^{-*}(\Sigma;\mathbb{Q})\cong \mathbb{Q}\oplus \mathbb{Q}[-1]^{2g}\oplus \mathbb{Q}[-2], \] we may write the underlying graded vector space of the Chevalley--Eilenberg complex as \[ CE(\mathfrak{g}_{\Sigma})\cong\mathbb{Q}[p,\tilde{a}_1,\ldots,\tilde{a}_g,\tilde{b}_1,\ldots,\tilde{b}_g,v] \otimes \Lambda[\tilde{p},{a}_1,\ldots,{a}_g,{b}_1,\ldots,{b}_g,\tilde{v}] \] where the variables are in the following degrees and weights: \vspace{10pt} \begin{center} \begin{tabular}{l|cccc} &degree $0$ & degree $1$ & degree $2$ & degree $3$\\ \hline weight $1$&$p$&$a_i,b_i$ & $v$ &\\ weight $2$&&$\tilde{p}$&$\tilde{a}_i,\tilde{b}_i$&$\tilde{v}$ \end{tabular} \end{center} \vspace{10pt} The differential $D$ is specified as a coderivation by a map from this symmetric coalgebra to the cogenerators, the nonzero components of which are $D(a_ib_i)=\tilde{p}$ and $D(vx) = \tilde{x}$ for $x$ of weight 1. To give a general formula for $D$, we introduce the following operators, which will play a key role in the remainder of the paper. \begin{defi} On $CE(\mathfrak{g}_{\Sigma})$ we define the operators \[ \Delta = \sum_j\partial_{b_j}\partial_{a_j},\qquad \delta = \sum_{j;\,c\in\{a,b\}} \tilde{c}_j\partial_{c_j},\] where $\partial_u$ is the degree $-1$ operator defined by formal differentiation with respect to $u$. \end{defi} Note that $\delta$ is a differential while $\Delta$ is not; moreover, the two commute.
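For concreteness, here is a small genus-one illustration of these operators (with the signs suppressed, as discussed above); it is included only as a sanity check and is not used in what follows. On $\mathbb{Q}[\tilde a_1,\tilde b_1]\otimes\Lambda[a_1,b_1]$ one has
\[
\delta(a_1)=\tilde a_1,\qquad \delta(b_1)=\tilde b_1,\qquad \Delta(a_1b_1)=\pm1,\qquad \delta(a_1b_1)=\tilde a_1\, b_1\pm a_1\,\tilde b_1,
\]
so the intersection $\ker\delta\cap\ker\Delta$ is spanned by $1$ in degree $0$, vanishes in degree $1$, and is spanned by $\tilde a_1$ and $\tilde b_1$ in degree $2$, in agreement with the low-degree dimensions $1$, $0$, and $2g$ recorded in Theorem \ref{thm: Kg calculation} below.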
Since $\Delta$ is even, we can be relatively cavalier about applying it to elements, but it is necessary to be more careful with the odd operator $\delta$. In terms of these operators, we have the formula \[D=\tilde p\Delta+\delta\partial_{v}+\frac{\tilde v}{2}\partial^2_v+\tilde p\partial_p\partial_v.\] Our strategy in making this formula more comprehensible will be to eliminate the last two terms using contracting homotopies. In order to state the end result, we require some terminology. First, as we will see below, the Betti number $\beta_i(B_k(\Sigma_g))$ is zero for $i>k+1$ and independent of $k$ for $i<k$ (indeed, this latter fact is a general phenomenon; see \cite{Church:HSCSM} and \cite{RandalWilliams:HSUCS}). We write $\beta_i^\mathrm{st}(B(\Sigma_g))=\beta_i(B_k(\Sigma_g))$ for any $i<k$ and refer to this number as the $i$th \emph{stable Betti number}. In order to keep track of the ranks of graded spaces, we will use Poincar\'{e} series, our conventions regarding which may be found in Section \ref{subsec: conventions}. We write \begin{align*}\mathcal{P}_{\mathrm{st}}(t)&=\sum_{i=0}^\infty\beta_i^\mathrm{st}(B(\Sigma_g))t^i\\ \mathcal{P}_{0}(t)&=\sum_{i=0}^\infty \beta_i(B_i(\Sigma_g))t^i\\ \mathcal{P}_{1}(t)&=\sum_{i=1}^\infty \beta_i(B_{i-1}(\Sigma_g))t^i \end{align*} for the Poincar\'{e} series recording the Betti numbers of interest. We further define the following two subspaces of $CE(\mathfrak{g}_{\Sigma})$: \begin{align*} \mathcal{X}_g&\coloneqq \mathbb{Q}[\tilde a_i, \tilde b_i] \otimes \Lambda[{a}_i,{b}_i] \\ \mathcal{K}_g&\coloneqq \left(\ker\delta|_\mathcal{X} \cap \ker\Delta|_{\mathcal{X}}\right), \end{align*} where $i$ runs from 1 to $g$. Since $g$ is often fixed, we will abbreviate the names of these spaces to $\mathcal{X}$ and $\mathcal{K}$ when confusion is unlikely, but we will have need of the subscript when we allow $g$ to vary. We also use the notation $X$ and $K$ for $\mathcal{P}_\mathcal{X}(t)$ and $\mathcal{P}_{\mathcal{K}}(t)$, respectively. We note the easily verified equalities \[ X_g=\frac{1}{(1-t)^{2g}}=\sum_{i=0}^\infty \binom{2g+i-1}{i}t^i. \] Our first main technical result, whose proof occupies Section \ref{section:theoremproof}, provides formulas for the Poincar\'{e} series of interest in terms of these auxiliary subspaces. \begin{thm}\label{thm:summary} Poincar\'e series for the Betti numbers of the configuration spaces of $\Sigma$ are given by \begin{align*} \mathcal{P}_{\mathrm{st}}(t)&= \frac{1+t^3}{t^2} \big[(1+t)K +(-t+t^2)X - 1\big] \\ \mathcal{P}_0(t)&=\frac{1}{t}\big[ (1+t^2+t^3)K - 1+t-t^2 + (-t^3+t^4)X \big] \\ \mathcal{P}_{1}(t)&=t^2K. \end{align*} \end{thm} These are the formulas we use to make our computations. As an illustration of their efficiency, we present the case $g=0$. \begin{corollary}\label{cor:sphere} For any $k\geq0$, $$\beta_i(B_k(S^2))=\begin{cases} 1 &\quad i=0 \text{ or } i=3\leq k \text{ or } i=2=k+1\\ 0 &\quad \text{else} \end{cases}$$ \end{corollary} \begin{proof} Since $g=0$, $\mathcal{X}=\mathbb{Q}$, and $\delta=\Delta=0$. Thus \begin{align*} \mathcal{P}_{\mathrm{st}}(t)&=1+t^3\\ \mathcal{P}_0(t)&=1+t^3\\ \mathcal{P}_{1}(t)&=t^2. 
\end{align*} Thus, when $i=0$, there is a one-dimensional contribution from the first series for all $k>0$ and from the second series when $k=0$; when $i=2$, there is a one-dimensional contribution from the third series when $k=1$; when $i=3$, there is a one-dimensional contribution from the first series for all $k>3$ and from the second series when $k=3$; and there are no contributions for $i\notin\{0,2,3\}$. \end{proof} Our second main technical result is a computation of the graded dimension of the auxiliary space $\mathcal{K}_g$. \begin{thm} \phantomsection \label{thm: Kg calculation} \begin{enumerate} \item\label{item:Kg dimensions} For $i\ge 3$ and $g\ge 0$, the graded dimensions of $\mathcal{K}_g$ are given by \[ \dim_i(\mathcal{K}_g)=\sum_{j=0}^{g-1}\sum_{m=0}^j (-1)^{g+j+1}\frac{2j-2m+2}{2j-m+2} \binom{\frac{6j+2i+2g-2m-1-3(-1)^{i+j+g+m}}{4}}{m,2j-m+1}. \] The special cases for $i\le 2$ are \begin{align*} \dim_i(\mathcal{K}_g)=& \left\{ \begin{array}{ll} 1&i=0\\ 0&i=1\\ 2g&i=2. \end{array}\right. \end{align*} \item\label{item:high g Kg dimensions} In the range $3\le i\le g+2$, there is the simplification \[\dim_i(\mathcal{K}_g)=\binom{2g+i-3}{i-1}.\] \end{enumerate} \end{thm} Proving this result will be the object of Section \ref{section:lemmasproof}; for the moment, we concentrate on exploiting it. \subsection{Results} The three corollaries that follow provide explicit formulas for all Betti numbers of configuration spaces of closed surfaces. The proofs are immediate from Theorems \ref{thm:summary} and \ref{thm: Kg calculation}, together with the formula for $X$ given above. When the degree $i$ is at least five, the situation is ``generic'' and each formula is merely the sum of the formula for $K_g$ given in Theorem \ref{thm: Kg calculation}(\ref{item:Kg dimensions}), suitably reindexed, with a simplification of any summands involving $X$. When $i$ is small, the calculation is easily carried out by hand, in some cases with the aid of Theorem \ref{thm: Kg calculation}(\ref{item:high g Kg dimensions}). \begin{corollary}\label{cor:unstable k+1} For $i\ge 5$ and $g\ge 0$ the unstable Betti number $\beta_i(B_{i-1}(\Sigma_g))$ is \[ \beta_i(B_{i-1}(\Sigma_g)) = \sum_{j=0}^{g-1}\sum_{m=0}^j (-1)^{g+j+1}\frac{2j-2m+2}{2j-m+2} \binom{\frac{6j+2i+2g-2m-5-3(-1)^{i+j+g+m}}{4}}{m,2j-m+1}. \] The special cases for $i<5$ are: \begin{align*} \beta_i(B_{i-1}(\Sigma_g))= \left\{ \begin{array}{ll} 0&i=1\\ 1&i=2\\ 0&i=3\\ 2g&i=4. \end{array}\right. \end{align*} \end{corollary} \begin{corollary}\label{cor: second unstable betti number} For $i\ge 5$ and $g\ge 0$, the unstable Betti number $\beta_i(B_{i}(\Sigma_g))$ is \begin{multline*} -\binom{2g+i-4}{2g-2}+\sum_{j=0}^{g-1}\sum_{m=0}^j (-1)^{g+j+1}\frac{2j-2m+2}{2j-m+2} \Bigg[ \binom{\frac{6j+2i+2g-2m+1+3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} \\+\binom{\frac{6j+2i+2g-2m-3+3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} +\binom{\frac{6j+2i+2g-2m-5-3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} \Bigg]. \end{multline*} The special cases for $i<5$ are: \begin{align*} \beta_i(B_{i}(\Sigma_g))= \left\{ \begin{array}{ll} 1&i=0\\ 2g&i=1\\ 2g^2-g&i=2\\ 4&i=3,g=1\\ (4g^3-g+3)/3&i=3,g\ne 1\\ 0&i=4,g=0\\ 4&i=4,g=1\\ 24&i=4,g=2\\ (4g^4+4g^3-g^2+11g)/6&i=4,g>2. \end{array}\right. 
\end{align*} \end{corollary} \begin{corollary}\label{cor: stable numbers} For $i\ge 5$ and $g\ge 0$, the stable Betti number $\beta_i^{\mathrm{st}}(B(\Sigma_g))$ is \begin{multline*} -\binom{2g+i-1}{2g-2}-\binom{2g+i-4}{2g-2}+\sum_{j=0}^{g-1}\sum_{m=0}^j (-1)^{g+j+1}\frac{2j-2m+2}{2j-m+2} \Bigg[ \\\binom{\frac{6j+2i+2g-2m+3-3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} +\binom{\frac{6j+2i+2g-2m+1+3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} \\+\binom{\frac{6j+2i+2g-2m-3+3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} +\binom{\frac{6j+2i+2g-2m-5-3(-1)^{i+j+g+m}}{4}}{m,2j-m+1} \Bigg]. \end{multline*} The special cases for $i<5$ are: \begin{align*} \beta_i^{\mathrm{st}}(B(\Sigma_g))= \left\{ \begin{array}{ll} 1&i=0\\ 2g&i=1\\ 0&i=2,g=0\\ 3&i=2,g=1\\ 2g^2-g&i=2,g>1\\ 1&i=3,g=0\\ 5&i=3,g=1\\ 16&i=3,g=2\\ (4g^3-g+3)/3&i=3,g>2\\ 0&i=4,g=0\\ 7&i=4,g=1\\ 28&i=4,g=2\\ 90&i=4,g=3\\ (4g^4+4g^3-g^2+11g)/6&i=4,g>3. \end{array}\right. \end{align*} \end{corollary} Because the summations over genus in the formulas of Corollaries~\ref{cor:unstable k+1},~\ref{cor: second unstable betti number}, and~\ref{cor: stable numbers} are rather messy, we complement them with two further results demonstrating asymptotic behavior for high genus and for high degree, along with a handful of explicit computations in low genus and/or degree. We begin with a kind of genus stability result, which is immediate from Theorems \ref{thm:summary} and \ref{thm: Kg calculation}(\ref{item:high g Kg dimensions}): \begin{corollary}\label{cor:genus stability} In the range $5\le i\le g$, we have: \[ \beta_i(B_k(\Sigma_g))=\left\{ \begin{array}{rl} {}\displaystyle\binom{2g+i-2}{i}+\binom{2g+i-5}{i-3}&i\le k \\\\ \displaystyle\binom{2g+i-5}{i-3}&i=k+1 \\\\ \displaystyle 0&i>k+1. \end{array} \right. \] \end{corollary} Next, there is the following consequence of the formulas of Corollaries~\ref{cor:unstable k+1},~\ref{cor: second unstable betti number}, and~\ref{cor: stable numbers}. \begin{corollary}\label{cor: fixed g polynomials} For fixed $g$, there are polynomials with rational coefficients of degree $2g-1$ in one variable, $p^{\mathrm{st}}_g$, $q^{\mathrm{st}}_g$, $p_g^{0}$, $q_g^0, p_g^1$, and $q_g^1$, such that the Betti numbers in genus $g$ are given by: \begin{align*} \beta_i^{\mathrm{st}}(B(\Sigma_g))&=\left\{\begin{array}{ll} p^{\mathrm{st}}_g(i) & i\ge 5\text{, odd} \\ q^{\mathrm{st}}_g(i) & i\ge 6\text{, even.} \end{array}\right.\\ \beta_i(B_i(\Sigma_g))&=\left\{\begin{array}{ll} p^0_g(i) & i\ge 5\text{, odd} \\ q^0_g(i) & i\ge 6\text{, even.} \end{array}\right. \\ \beta_i(B_{i-1}(\Sigma_g))&=\left\{\begin{array}{ll} p^1_g(i) & i\ge 5\text{, odd} \\ q^1_g(i) & i\ge 6\text{, even.} \end{array}\right. \end{align*} \end{corollary} \begin{proof} Inside the summations calculating the Betti number, the only factor with $i$ dependence is of the form $\binom{\frac{i}{2}+a+(-1)^ib}{k,2j-k+1}$ which is a degree $2j+1\le 2g-1$ polynomial in $i$ for fixed parity. Outside the summation, the stable and lower unstable Betti numbers also have a contribution which is polynomial of degree $2g-2$ in $i$. \end{proof} Unfortunately, the presentation in the corollaries is not particularly readable, and one might hope for a simplification giving the coefficients of these polynomials in a more comprehensible manner, or at least for a discernable pattern of some sort. One example of such a possible pattern is the observation that for $2\le g\le 20$, the coefficients of $p^{\mathrm{st}}_g$, $q^{\mathrm{st}}_g$, $p^0_g$, and $q^0_g$ are all nonnegative. 
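As a quick sanity check of Corollary~\ref{cor:genus stability}, the following minimal Python sketch evaluates the closed form in the range $5\le i\le g$ (for $i\le k$) and compares it with a few corresponding entries of Figure~\ref{fig: chart of stable values}; the hard-coded reference values are simply copied from that chart.
\begin{verbatim}
# Sketch: check the closed form of the genus stability corollary in the range
# 5 <= i <= g against entries of the chart of stable Betti numbers.
from math import comb

def stable_betti(i, g):
    # beta_i(B_k(Sigma_g)) for 5 <= i <= g and i <= k
    return comb(2 * g + i - 2, i) + comb(2 * g + i - 5, i - 3)

# (i, g) -> value copied from the chart of stable Betti numbers below
reference = {(5, 5): 1332, (5, 6): 3069, (6, 6): 8294}

for (i, g), expected in reference.items():
    assert stable_betti(i, g) == expected, (i, g)
print("closed form agrees with the tabulated stable Betti numbers")
\end{verbatim}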
These polynomials are given for $g\le 3$ in Figure~\ref{fig: table of polynomials in low genus}. To illustrate the growth of the integers involved, we note that $q^{\mathrm{st}}_{5}(i)$, the polynomial for even degree stable Betti numbers in genus $5$, is: \[\frac{i^{9}}{368640}+\frac{7i^{8}}{81920}+\frac{227i^{7}}{161280}+\frac{1393i^{6}}{92160}+\frac{319i^{5}}{2880}+\frac{24947i^{4}}{46080}+\frac{9331i^{3}}{5760}+\frac{7811i^{2}}{2880}+\frac{407i}{140}+2. \] Figure~\ref{fig: chart of stable values} contains a chart of values of stable Betti numbers for $g\le 6$ and $i\le 43$. We emphasize that with the formulas of this paper, the calculation of such polynomials or Betti numbers for much larger $g$ and $i$ is a computationally trivial task; these charts are included purely to provide convenient examples. \begin{figure} \[\arraycolsep=1.4pt\def1.5{2.2} \begin{array}{rcccc} &g=0&g=1&g=2&g=3\\ \hline p^{\mathrm{st}}_g& 0 & 2i-1 & \frac{2i^{3}+3i^{2}+10i+9}{8} & \frac{2i^{5}+15i^{4}+80i^{3}+198i^{2}+302i+363}{192} \\ q^{\mathrm{st}}_g & 0 & 2i-1 & \frac{2i^{3}+3i^{2}+10i}{8} & \frac{2i^{5}+15i^{4}+80i^{3}+252i^{2}+464i+192}{192} \\ p^0_g& 0 & \frac{3i-1}{2} & \frac{3i^{3}+i^{2}+13i+31}{16} & \frac{3i^{5}+22i^{4}+162i^{3}+500i^{2}+603i+630}{384} \\ q^0_g& 0 & \frac{3i-4}{2} & \frac{3i^{3}+4i^{2}+28i}{16} & \frac{3i^{5}+19i^{4}+120i^{3}+500i^{2}+1200i+384}{384} \\ p^1_g& 0 & \frac{i-3}{2} & \frac{i^{3}+i^{2}-9i-9}{16} & \frac{i^{5}+4i^{4}-2i^{3}+32i^{2}+i-420}{384} \\ q^1_g& 0 & \frac{i}{2} & \frac{i^{3}-2i^{2}+16}{16} & \frac{i^{5}+7i^{4}-8i^{3}-76i^{2}+112i+384}{384} \end{array} \] \caption{The first few polynomials from Corollary~\ref{cor: fixed g polynomials}. These give both stable and unstable Betti numbers for $i\ge 5$.}\label{fig: table of polynomials in low genus} \end{figure} \begin{figure} \[ \begin{array}{lccccccc} i\backslash g & 0 & 1 & 2 & 3 & 4 & 5 & 6\\\hline\hline 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 0 & 2 & 4 & 6 & 8 & 10 & 12\\ 2 & 0 & 3 & 6 & 15 & 28 & 45 & 66\\ 3 & 1 & 5 & 16 & 36 & 85 & 166 & 287\\ 4 & 0 & 7 & 28 & 90 & 218 & 505 & 1013\\ 5 & 0 & 9 & 48 & 169 & 532 & 1332 & 3069\\ 6 & 0 & 11 & 75 & 335 & 1098 & 3300 & 8294\\ 7 & 0 & 13 & 114 & 569 & 2289 & 7227 & 20878\\ 8 & 0 & 15 & 162 & 979 & 4187 & 15587 & 47762\\ 9 & 0 & 17 & 225 & 1531 & 7748 & 30294 & 105963\\ 10 & 0 & 19 & 300 & 2396 & 13034 & 58860 & 216281\\ 11 & 0 & 21 & 393 & 3520 & 22079 & 105118 & 436150\\ 12 & 0 & 23 & 501 & 5151 & 34866 & 188319 & 818752\\ 13 & 0 & 25 & 630 & 7211 & 55223 & 315369 & 1530869\\ 14 & 0 & 27 & 777 & 10039 & 82965 & 529718 & 2693703\\ 15 & 0 & 29 & 948 & 13529 & 124690 & 842884 & 4736380\\ 16 & 0 & 31 & 1140 & 18125 & 179921 & 1343826 & 7912036\\ 17 & 0 & 33 & 1359 & 23689 & 259302 & 2050653 & 13221792\\ 18 & 0 & 35 & 1602 & 30784 & 361900 & 3132029 & 21159269\\ 19 & 0 & 37 & 1875 & 39236 & 504021 & 4615128 & 33879846\\ 20 & 0 & 39 & 2175 & 49741 & 684067 & 6800508 & 52294099\\ 21 & 0 & 41 & 2508 & 62085 & 926002 & 9727432 & 80742936\\ 22 & 0 & 43 & 2871 & 77111 & 1227304 & 13904838 & 120830579\\ 23 & 0 & 45 & 3270 & 94561 & 1622011 & 19387707 & 180821641\\ 24 & 0 & 47 & 3702 & 115439 & 2106363 & 27001767 & 263434743\\ 25 & 0 & 49 & 4173 & 139439 & 2727348 & 36822006 & 383668154\\ 26 & 0 & 51 & 4680 & 167740 & 3479594 & 50140352 & 545978070\\ 27 & 0 & 53 & 5229 & 199984 & 4426415 & 67056804 & 776480287\\ 28 & 0 & 55 & 5817 & 237539 & 5560388 & 89530551 & 1082270541\\ 29 & 0 & 57 & 6450 & 279991 & 6965069 & 117692377 & 1507214918\\ 30 & 0 & 59 & 7125 & 328911 & 8630475 & 
154433796 & 2062327850\\ 31 & 0 & 61 & 7848 & 383825 & 10664900 & 199922976 & 2818996389\\ 32 & 0 & 63 & 8616 & 446521 & 13055217 & 258327002 & 3793935067\\ 33 & 0 & 65 & 9435 & 516461 & 15939574 & 329858905 & 5100110873\\ 34 & 0 & 67 & 10302 & 595664 & 19301036 & 420398901 & 6762379052\\ 35 & 0 & 69 & 11223 & 683524 & 23313381 & 530213298 & 8955099361\\ 36 & 0 & 71 & 12195 & 782305 & 27955117 & 667445528 & 11714562366\\ 37 & 0 & 73 & 13224 & 891329 & 33442128 & 832424580 & 15303928783\\ 38 & 0 & 75 & 14307 & 1013119 & 39747526 & 1036240128 & 19775312348\\ 39 & 0 & 77 & 15450 & 1146921 & 47136517 & 1279294291 & 25517963824\\ 40 & 0 & 79 & 16650 & 1295531 & 55575883 & 1576461699 & 32605592304\\ 41 & 0 & 81 & 17913 & 1458115 & 65388148 & 1928229150 & 41603514029\\ 42 & 0 & 83 & 19236 & 1637756 & 76532730 & 2354275836 & 52614541111\\ 43 & 0 & 85 & 20625 & 1833536 & 89398287 & 2855185938 & 66446126460\\ \end{array} \] \caption{Stable Betti numbers for low degree and genus}\label{fig: chart of stable values} \end{figure} \section{Proof of Theorem \ref{thm:summary}}\label{section:theoremproof} Recall from Section~\ref{sec:closedsurfaces} that the differential on $CE(\mathfrak{g}_\Sigma)$ is given by the formula \[D=\tilde p\Delta+\delta\partial_{v}+\frac{\tilde v}{2}\partial^2_v+\tilde p\partial_p\partial_v.\] Our first simplification is to eliminate the term $\frac{\tilde v}{2}\partial^2_v$ using a contracting homotopy. First, we recall that the spaces of interest are \begin{align*} \mathcal{X}_g&\coloneqq \mathbb{Q}[\tilde a_i, \tilde b_i] \otimes \Lambda[{a}_i,{b}_i] \\ \mathcal{K}_g&\coloneqq \left(\ker\delta|_\mathcal{X} \cap \ker\Delta|_{\mathcal{X}}\right), \end{align*} with Poincar\'{e} series $X$ and $K$, respectively. We further define two auxiliary subspaces of $\mathcal{X}_g$: \begin{align*}\mathcal{Y}_g&\coloneqq \mathbb{Q}[p,\tilde a_i, \tilde b_i] \otimes \Lambda[{a}_i,{b}_i]\\ \mathcal{Z}_g&\coloneqq \mathbb{Q}[p,\tilde a_i, \tilde b_i] \otimes \Lambda[\tilde p,{a}_i,{b}_i].\end{align*} As before, $i$ runs from 1 to $g$, and we omit the subscript when the genus is unambiguous. \begin{lemma}\label{lemma:chainhomotopy} Define \begin{enumerate} \item a retraction $CE(\mathfrak{g}_\Sigma)\to \mathcal{Z}\oplus v\mathcal{Z}$, which, for $q\in \mathcal{Z}$, takes $\tilde{v}q$ to \[-2v\delta(q)-2v\tilde{p}\partial_p q,\] and \item a degree $1$ chain homotopy from $CE(\mathfrak{g}_M)$ to itself, which, for $q$ in $\mathcal{Z}$, takes $v^n\tilde vq$ to \[ \frac{2n!}{(n+2)!}v^{n+2}q.\] \end{enumerate} Then the inclusion of $\mathcal{Z}\oplus v\mathcal{Z}$ into $CE(\mathfrak{g})$ along with the retraction and chain homotopy constructed above constitute the data of a deformation retraction. \end{lemma} The proof is a direct computation. \begin{remark} It follows from this step alone that $H_i(B_k(\Sigma_g);\mathbb{Q})=0$ for $i>k+1$. \end{remark} Next, we eliminate the term $\tilde p\partial_p\partial_v$ from the differential by deforming $\mathcal{Z}\oplus v\mathcal{Z}$ onto the subspace $\mathcal{Y}\oplus v\tilde{p}\mathcal{Y}\oplus v\mathcal{X}$ endowed with a twisted differential. See Figure~\ref{fig: inclusion diagram} for a graphical summary of the differentials and one of the maps used in the deformation retraction. 
\begin{lemma}\label{lemma:chainhomotopy2} Define \begin{enumerate} \item a degree $-1$ linear operator $d$ on $\mathcal{Y}\oplus v\tilde{p}\mathcal{Y}\oplus v\mathcal{X}$ by \begin{align*} d|_\mathcal{Y}(q)&= -\frac{p\Delta}{n+1}(v\tilde{p}\Delta + \delta)q \\ d|_{v\tilde{p}\mathcal{Y}}(q)&= -\frac{p\Delta}{n+1}\delta(q) \\ d|_{v\mathcal{X}}(q)&=\left(\frac{1}{v}\delta+\tilde p\Delta\right)q, \end{align*} where $n=n(q)$ is the largest nonnegative integer such that $q=p^nq'$; \item a linear map $f:\mathcal{Y}\oplus v\tilde p\mathcal{Y}\oplus v\mathcal{X}\to \mathcal{Z}\oplus v\mathcal{Z}$ by \begin{align*} f|_\mathcal{Y}(q)&=\left(1-\frac{p}{n+1}v\Delta\right)q\\ f|_{v\tilde p\mathcal{Y}}(q)&=\left(1-\frac{p}{\tilde p(n+1)}\delta\right)q\\ f|_{v\mathcal{X}}(q)&=q; \end{align*} \item a linear map $g:\mathcal{Z}\oplus v\mathcal{Z}\to \mathcal{Y}\oplus v\tilde p\mathcal{Y}\oplus v\mathcal{X}$ by \begin{align*} g|_{\mathcal{Y}\oplus v\tilde p \mathcal{Y}}(q)&=q\\ g|_{v\mathcal{Y}}(q)&=\begin{cases} q&\quad n=0\\ 0&\quad \text{else} \end{cases}\\ g|_{\tilde p \mathcal{Y}}(q)&=-\frac{p}{n+1}\left(v\Delta-\frac{1}{\tilde p}\delta\right)q \end{align*} \item a degree $1$ linear operator on $\mathcal{Z}\oplus v\mathcal{Z}$ by stipulating that \begin{align*} H|_{\tilde p\mathcal{Y}}&=\frac{p}{\tilde p(n+1)}v \end{align*} and extending by zero. \end{enumerate} Then $d$ is a differential, $f$ and $g$ are chain maps, and $H$ is a chain homotopy $\id\sim fg.$ \end{lemma} \begin{proof} The proof is a direct, albeit tedious, computation. We leave to the reader all but the verification that $g$ is a chain map. For this, we have \begin{align*} dg|_{\mathcal{Y}}&=-\frac{p\Delta}{n+1}(v\tilde{p}\Delta + \delta)\\ dg|_{v\tilde p\mathcal{Y}}&=-\frac{p\Delta}{n+1}\delta\\ dg|_{v\mathcal{Y}}&=\begin{cases} \frac{1}{v}\delta+\tilde p\Delta&\quad n=0\\ 0&\quad \text{else} \end{cases}\\ dg|_{\tilde p \mathcal{Y}}&=\frac{p\Delta}{n+2}\delta\frac{p}{n+1}v\Delta -\frac{p\Delta}{n+2}(v\tilde{p}\Delta+\delta)\frac{p}{n+1}\frac{1}{\tilde p}\delta \\ &=\frac{p^2\Delta}{(n+2)(n+1)}(\delta v\Delta-v\tilde{p}\Delta\frac{1}{\tilde p}\delta) \\ &=0\\ gD|_{\mathcal{Y}}&=-\frac{p}{n+1}\left(v\Delta-\frac{1}{\tilde p}\delta\right)\tilde p\Delta\\ &=-\frac{p\Delta}{n+1}(v\tilde p\Delta+\delta)\\ gD|_{v\tilde p\mathcal{Y}}&=-\frac{p}{n+1}\left(v\Delta-\frac{1}{\tilde p}\delta\right)\frac{\delta}{v}\\ &=-\frac{p\Delta}{n+1}\delta\\ gD|_{v\mathcal{Y}}(q)&=\left(\frac{1}{v}\delta+\tilde p\Delta\right)q-\frac{p}{n+1}\left(v\Delta-\frac{1}{\tilde p}\delta\right)\frac{\tilde p}{v}\partial_pq\\ &=\left(\frac{1}{v}\delta+\tilde p\Delta\right)\left(1-\frac{p}{n+1}\partial_p\right)q\\ &=\begin{cases} (\frac{1}{v}\delta+\tilde p\Delta)q&\quad n=0\\ 0&\quad\text{else} \end{cases}\\ gD|_{\tilde p\mathcal{Y}}&=0, \end{align*} where we have made use of the fact that if $n=0$, then $\partial_pq=0$, while if $n\ne 0$, we have \[\left(\frac{p}{n+1}\partial_p\right)q=\frac{p}{n(\partial_pq)+1}\partial_pq=\frac{p}{n(q)}\partial_pq= q.\] Thus, $gD=dg$. 
\end{proof} \begin{figure} \[ \xymatrix{ &&v\mathcal{X}\ar[dll]_{\frac{\delta}{v}}\ar[drr]^{\tilde{p}\Delta}\ar@{=>}[ddd]|(.36)\hole^{\id} \\ \mathcal{Y}\save!L(.5)\ar@(dl,ul)^{\frac{-p\delta\Delta}{n+1}}\restore\ar[rrrr]_(.30){\frac{-vp\tilde{p}\Delta^2}{n+1}}\ar@{=>}[ddrr]_{\frac{-vp\Delta}{n+1}}\ar@{=>}[ddd]_{\id}&&&&v\tilde{p}\mathcal{Y}\ar@{=>}[ddll]^{\frac{-p\delta}{(n+1)\tilde{p}}}\ar@{=>}[ddd]^{\id}\save!R(.7)\ar@(dr,ur)_{\frac{-p\delta\Delta}{n+1}}\restore \\\\ &&v\mathcal{Y}\ar[dll]_{\frac{\delta}{v}}\ar[drr]^{\tilde{p}\Delta}\ar[dd]^{\frac{\tilde{p}\partial_p}{v}} \\ \mathcal{Y}\ar[drr]_{\tilde{p}\Delta}&&&&v\tilde{p}\mathcal{Y}\ar[dll]^{\frac{\delta}{v}} \\ &&\tilde{P}\mathcal{Y} } \] \caption{The inclusion $f$ of $\mathcal{Y}\oplus v\tilde{p}\mathcal{Y}\oplus v\mathcal{X}$ with the twisted differential into $\mathcal{Z}\oplus v\mathcal{Z}$. Single arrows indicate differentials in the domain and codomain of $f$ and double arrows indicate components of $f$.} \label{fig: inclusion diagram} \end{figure} \begin{corollary}\label{corollary:stable complex} For $i<k$, \[ H_i(B_k(\Sigma_g);\mathbb{Q})\cong H_i(\mathcal{X}\oplus \mathcal{X}[3],\,d_\mathrm{st}) \] where $d_\mathrm{st}$ has components $\Delta^2[3]: \mathcal{X}\to \mathcal{X}[3]$, $\delta\Delta: \mathcal{X}\to \mathcal{X}$, and $-\delta\Delta:\mathcal{X}[3]\to \mathcal{X}[3]$. In particular, the dimension of this vector space is independent of $k$. \end{corollary} \begin{proof} The degree less than weight subspace of $\mathcal{Y}\oplus v\tilde p\mathcal{Y}\oplus v\mathcal{X}$ is $$p\mathcal{Y}\oplus pv\tilde p\mathcal{Y}\cong \bigoplus_{n=1}^\infty p^n\mathcal{X}\oplus p^nv\tilde p \mathcal{X}.$$ It is clear from the formulas that the respective dimensions of the kernel and image of $$d:p^n\mathcal{X}\oplus p^nv\tilde p\mathcal{X}\to p^{n+1}\mathcal{X}\oplus p^{n+1}v\tilde p \mathcal{X}$$ are independent of $n$ and therefore of $k$. Thus, for example, the Betti numbers of $B_k(\Sigma_g)$ for $i<k$ and any $k$ are computed by the complex $$ p\mathcal{X}\oplus p\mathcal{X}[3]\overset{-\frac{p}{2}d_\mathrm{st}}{\longrightarrow} p^2\mathcal{X}\oplus p^2\mathcal{X}[3]\overset{-\frac{p}{3}d_\mathrm{st}}{\longrightarrow}\cdots, $$ which is isomorphic to $(\mathcal{X}\oplus \mathcal{X}[3],d_\mathrm{st})$ after the change of variables $p^n\mapsto (-1)^n\frac{p^n}{n!}$. \end{proof} Corollary~\ref{corollary:stable complex} asserts that we may calculate stable Betti numbers using the complex \[ \xymatrix{ \vdots\ar[d]\ar[dr]&\vdots\ar[d] \\ \mathcal{X}_n\ar[d]_{\delta\Delta}\ar[dr]^{\Delta^2}& \mathcal{X}[3]_{n}\ar[d]^{-\delta\Delta} \\ \mathcal{X}_{n-1}\ar[d]\ar[dr]&\mathcal{X}[3]_{n-1}\ar[d] \\ \vdots&\vdots } \] We will use two more lemmas for the proof of Theorem~\ref{thm:summary}. We write $\mathcal{X}_{>0}\subset \mathcal{X}$ for the subspace of polynomials with no constant term. \begin{lemma}\label{lemma: acyclic} The cochain complex $(\mathcal{X}_{>0},\delta)$ is acyclic. More specifically, there is an operator $H$ on $\mathcal{X}$ that restricts to a cochain nullhomotopy of $(\mathcal{X}_{<0},\delta)$, also called $H$, and this operator has the property that $\Delta H\Delta H$ and $H\Delta H\Delta$ differ by an invertible linear transformation. 
\end{lemma} \begin{proof} It is easily seen that the operator \[h=\sum_{c}c\partial_{\tilde c}\] on $\mathcal{X}$ has the property that $(\delta h+h\delta)\alpha$ is a nonzero scalar multiple of $\alpha$ for every nonzero monomial $\alpha\in\mathcal{X}_{>0}$, where $c$ runs over the set $\{a_1,\ldots, a_g,b_1,\ldots, b_g\}$. We obtain $H$ by setting $H(\alpha)=\frac{h(\alpha)}{\deg \alpha}$ where $\deg \alpha$ is the polynomial degree of the monomial $\alpha$ (not its homological degree), and $H|_{\mathcal{X}_0}=h|_{\mathcal{X}_0}=0$. To see the desired behavior with respect to $\Delta$, first calculate that \[ [h,\Delta]=\sum_j \partial_{a_j}\partial_{\tilde{b}_j}-\partial_{b_j}\partial_{\tilde{a}_j}. \] Then both $\Delta$ and $[h,\Delta]$ are partial differentiation operators so $[[h,\Delta],\Delta]=0$, or in other words $\Delta h\Delta = \frac{1}{2}(\Delta^2 h + h\Delta^2)$. Then since $h^2=0$, we have \[ \Delta h\Delta h = \frac{1}{2}(h\Delta^2 h + \Delta^2 h^2)=\frac{1}{2}(h\Delta^2 h + h^2\Delta^2 ) = h\Delta h\Delta. \] In polynomial degree at most $4$, $h\Delta h\Delta$ is manifestly zero since $h$ kills anything of nonpositive polynomial degree, and so the same is true for $\Delta h\Delta h$. Then $H\Delta H\Delta = 0 = \Delta H\Delta H$ on elements of $\mathcal{X}$ of degree at most $4$. In polynomial degree greater than $4$, since $\Delta$ lowers degree by two, applying $\Delta H\Delta H$ to a monomial of degree $p$ yields $\frac{p-4}{p}H\Delta H\Delta$. \end{proof} Acyclicity will be used immediately; the specific commutation relation between $H$ and $\Delta$ will be used below in the proof of Lemma~\ref{lemma: good recurrence}. \begin{lemma} There is an isomorphism $\ker d_\mathrm{st}\cong \ker(\delta\Delta\oplus\delta\Delta[3])$ of bigraded vector spaces. \end{lemma} \begin{proof} We have a canonical isomorphism $\mathcal{X}\cong\mathcal{X}_{>0}\oplus\mathbb{Q}$, which we write as $q\mapsto (f(q),c(q))$. Clearly $\im\delta\subset\mathcal{X}_{>0}$, and we fix a section $s:\im \delta\to \mathcal{X}_{>0}$ of $\delta$. Now, observe that if $(q,r)\in \mathcal{X}\oplus \mathcal{X}[3]$ is annihilated by $d_\mathrm{st}$, then $\Delta q\in\ker\delta$, and \begin{align*}\delta\Delta(sf(\Delta q)-r) &=\Delta\delta sf(\Delta q)-\delta\Delta r\\ &=\Delta f(\Delta q)-\delta\Delta r\\ &=\Delta^2q-\Delta c(\Delta q)-\delta\Delta r\\ &=\Delta^2 q-\delta\Delta r \end{align*} which is the projection to the second factor of $d_\mathrm{st}(q,r)$ and hence zero. Thus, the assignment $(q,r)\mapsto (q, sf(\Delta q)-r)$ defines a linear map $\ker d_\mathrm{st}\to \ker (\delta\Delta\oplus\delta\Delta[3])$. This map is an isomorphism with inverse given by the same formula. \end{proof} \begin{lemma}\label{lemma: ker dD in terms of K} The Poincar\'e series for $\ker\delta\Delta$ on $\mathcal{X}$ satisfies \[ \mathcal{P}_{\ker\delta\Delta}(t)= \frac{1}{t(1+t)}\big[ (1+t)K - 1 + t^2X \big] \] \end{lemma} \begin{proof} Consider $x\in \ker\delta\Delta\subset \mathcal{X}$. Then $\delta x$ is in $\ker\Delta\cap \im\delta$ which differs from $\ker\Delta\cap \ker\delta$ only in that the latter also contains the class of $1$. Furthermore, if $\delta x =\delta x'$ then $x-x'$ is in $\ker\delta$. Then \[ \mathcal{P}_{\ker\delta\Delta}(t)= \frac{1}{t}(K-1) + \mathcal{P}_{\ker \delta}(t). 
\] Lemma~\ref{lemma: acyclic} implies that $\mathcal{P}_{H(\mathcal{X},\delta)}(t)=1$, which implies by the formula for the Poincar\'e series of the homology of a complex that \[ \mathcal{P}_{\ker\delta}(t)=\frac{1+tX}{1+t}, \] and then expansion completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:summary}] We have \begin{align*}\mathcal{P}_\mathrm{st}(t)&=\mathcal{P}_{H(\mathcal{X}\oplus\mathcal{X}[3],d_\mathrm{st})}(t)\\&=\mathcal{P}_{H(\mathcal{X}\oplus\mathcal{X}[3],\delta\Delta\oplus\delta\Delta[3])}(t)\\&=\mathcal{P}_{H(\mathcal{X},\delta\Delta)}(t)+\mathcal{P}_{H(\mathcal{X}[3],\delta\Delta)}(t)\\&=(1+t^3)\mathcal{P}_{H(\mathcal{X},\delta\Delta)}(t).\end{align*} Then combining this formula with \[ \mathcal{P}_{H(\mathcal{X},\delta\Delta)}(t)=\frac{1+t}{t}\mathcal{P}_{\ker\delta\Delta}(t)-\frac{1}{t}X \] and Lemma~\ref{lemma: ker dD in terms of K} yields the desired result. The unstable Betti number $\beta_i(B_i(\Sigma_g))$ is computed by taking kernel modulo image in the diagram $$\xymatrix{ &v\mathcal{X}\ar[dl]_-{\frac{1}{v}\delta}\ar[dr]^-{\tilde p\Delta}\\ \mathcal{X}\ar[d]_-{\delta\Delta}\ar[drr]^-{\Delta^2}&&v\tilde p\mathcal{X}\ar[d]^-{-\delta\Delta}\\ p\mathcal{X}&&pv\tilde p\mathcal{X} }$$ so that \begin{align*} \mathcal{P}_{0}(t)&=\mathcal{P}_{\ker d_\mathrm{st}}(t)-\mathcal{P}_{\im (\frac{1}{v}\delta\oplus\tilde p\Delta)}(t)\\ &=\mathcal{P}_{\ker \delta\Delta\oplus \ker\delta\Delta[3]}(t)-t^{-1}(t^2X-t^2K)\\ &=(1+t^3)\mathcal{P}_{\ker\delta\Delta}(t)-t(X-K). \end{align*} In this case, applying Lemma~\ref{lemma: ker dD in terms of K} and conducting algebraic simplifications yields the result. Since the unstable Betti number $\beta_i(B_{i-1}(\Sigma_g))$ is computed as the kernel of $\frac{1}{v}\delta\oplus\tilde p\Delta$ acting on $v\mathcal{X}$, the third claim is immediate. \end{proof} \section{Proof of Theorem~\ref{thm: Kg calculation}}\label{section:lemmasproof} The purpose of this section is to calculate the Poincar\'e series of $\mathcal{K}$, the simultaneous kernel of $\delta$ and $\Delta$, which is used in Section~\ref{sec:closedsurfaces} to calculate the stable and unstable Betti numbers in the case of a closed surface. For $g\geq0$ and $n>0$ we write \[\mathcal{V}(g,n)=\{(q,r)\in \mathcal{X}_g\oplus \mathcal{X}_g[3]:\Delta^n q = \delta\Delta^{n-1}r,\,\Delta^n r = 0\},\] and we adopt the convention that $\mathcal{V}(g,0)=0$. We also set $\mathcal{S}=\mathbb{Q}[\tilde a,\tilde b],$ where $\tilde a$ and $\tilde b$ have degree 2. We write $V_{g,n}$ and $S$ for the respective Poincar\'{e} series. These spaces are related to the spaces of interest as follows: \begin{lemma}\label{lemma:exact_sequence} For $g\geq0$, there is an exact sequence \[0\longrightarrow \mathcal{K}_{g+1}[1]\longrightarrow \mathcal{S}\otimes \mathcal{V}(g,1)[1]\longrightarrow \mathcal{K}_{g},\] where the rightmost arrow is the composite \[\mathcal{S}\otimes \mathcal{V}(g,1)[1]\xrightarrow{\tilde a,\tilde b= 0}\mathcal{V}(g,1)[1]\xrightarrow{(q,r)\mapsto \delta q}\mathcal{K}_g.\] \end{lemma} The proof (as well as the definition of the leftmost map) uses a substantial amount of new notation and is deferred to the end of the section. For the present, we derive the following consequence: \begin{lemma}\label{lemma: good recurrence} For $g\geq0$, the Poincar\'e series $K_{g+1}$ satisfies the recurrence relation \[ tK_{g+1}=1+ t^3 + tSV_{g,1}- K_{g}. 
\] \end{lemma} \begin{proof} In light of Lemma \ref{lemma:exact_sequence}, the bulk of the lemma will be established after showing that the map \[\mathcal{V}(g,1)[1]\xrightarrow{(q,r)\mapsto \delta q}\mathcal{K}_g\] is surjective in degrees 4 and higher. For this, we recall from Lemma~\ref{lemma: acyclic} that there is a cochain nullhomotopy $H$ of $(\mathcal{X}_{>0},\delta)$ such that $\Delta H\Delta H$ and $H\Delta H\Delta$ differ by an invertible linear transformation. Now, given $x\in \mathcal{K}_g$ of degree at least 4, the pair $(Hx,H\Delta Hx)$ lies in $\mathcal{V}(g,1).$ Indeed, both $x$ and $\Delta Hx$ lie in $\mathcal{X}_{>0}$ by our assumption on the degree of $x$, so we have \begin{align*}\delta H\Delta Hx&=\Delta Hx - H\delta\Delta Hx\\ &=\Delta Hx - H\Delta\delta Hx\\ &=\Delta Hx - H\Delta x+H\Delta H\delta x\\ &=\Delta Hx, \end{align*} and \[\Delta x=0\implies H\Delta H\Delta x=0\implies \Delta H\Delta Hx=0.\] Since we also have \[\delta Hx=x - H\delta x=x,\] we conclude that $x$ lies in the image of the indicated map, establishing surjectivity. This in turn implies that the rightmost map in the exact sequence of Lemma \ref{lemma:exact_sequence} is surjective, which proves the claimed equality of Poincar\'{e} series above degree 3. From Lemma~\ref{lemma: acyclic}, it follows $\mathcal{K}_g$ is spanned in degree 2 by the elements of the form $\tilde c_i$, so that $(c_i,0)\in \mathcal{V}(g,1)$ is a preimage and we have surjectivity in this degree as well. Thus, the claimed equality holds in degree 2. Since the kernel $\mathcal{K}_g$ vanishes in degree $1$ and is one dimensional in degree $0$, it remains to establish the equality in degree 3. For this, we make use of the modified sequence \[0\longrightarrow\mathcal{K}_{g+1}[1]\longrightarrow \mathcal{S}\otimes\mathcal{V}(g,1)[1]\longrightarrow\mathcal{K}_g/(\tilde a_1 b_1- a_1\tilde b_1).\] If $(q,r)$ is degree $2$ in $\mathcal{V}(g,1)$ and $\delta q=\tilde a_1 b_1- a_1\tilde b_1$, then $q=a_1b_1$ up to the kernel of $\delta$. In degree $2$ this kernel is spanned by $\tilde{a}_i$ and $\tilde{b}_i$, which are also in the kernel of $\Delta$. Then $\delta r = \Delta q = \Delta(a_1b_1)=1$, which is impossible, and we conclude that this modified sequence is still exact. Thus, it suffices to show surjectivity in degree 3 of the map \[\mathcal{V}(g,1)[1]\xrightarrow{(q,r)\mapsto \delta q}\mathcal{K}_g/(\tilde a_1 b_1- a_1\tilde b_1). \] But it is easy to see that, in degree 3, $\mathcal{K}_g=\ker \delta$ is spanned by elements of the form $\tilde c_ic_j-c_i\tilde c_j$, so that the quotient is spanned by classes of the form $\tilde c_ic_j-c_i\tilde c_j$ with $i\neq j$ and $\tilde a_ib_i-a_i\tilde b_i-\tilde a_1b_1+a_1\tilde b_1$. The proof is completed by noting that $(c_ic_j,0)\in \mathcal{V}(g,1)$ is a preimage in the former case and $(a_ib_i-a_1b_1,0)\in \mathcal{V}(g,1)$ is a preimage in the latter case. \end{proof} The final missing ingredient is an explicit description of $V_{g,1}$, which we again obtain recursively. \begin{lemma}\label{lemma:Vrecurrence} The Poincar\'e series $V_{g,n}$ satisfies the recurrence relation: \[ V_{g+1,n} = S{}( V_{g,n+1}+ 2tV_{g,n}+ t^2V_{g,n-1}) \] for $n\ge 1$ and $g\ge 0$. \end{lemma} The proof of this recurrence is also deferred to the end of the section. \begin{corollary}\label{cor:Vsolution} For $g,n\ge 0$, we have \[ V_{g,n} = S{}^g(1+t^3)\left( \sum_{j=0}^{g+n-1} t^j\left(\binom{2g}{j}-\binom{2g}{j-2n} \right) \right). 
\] \end{corollary} \begin{proof} Note that the recursion of Lemma~\ref{lemma:Vrecurrence} may be extended to nonpositive $n$ if we formally set $V_{g,-n}=-t^{-2n}V_{g,n}$. Using the recursion, we write \[ V_{g,n} = S{}^g\sum_{j=0}^{2g} t^j \binom{2g}{j}V_{0,n+g-j} .\] Next we note that $V_{0,n}=1+t^3$ for $n>0$ and thus $V_{g,n}$ can be calculated as \[ V_{g,n} =S{}^g\left(\sum_{j=0}^{g+n-1}(1+t^3) t^j\binom{2g}{j} - \sum_{j=g+n+1}^{2g} (1+t^3)t^{2g+2n-j}\binom{2g}{2g-j}\right).\] Change of index yields the result. \end{proof} This expression admits a convenient simplification when the genus is high relative to the degree. \begin{corollary}\label{cor: stable value of Vg1} There is the congruence \[ V_{g,1}\equiv \frac{(1+t^3)(1-t^2)}{(1-t)^{2g}}\pmod{t^{g+2}}. \] \end{corollary} \begin{proof} Modulo $t^{g+2}$, we have \begin{align*} V_{g,1}&\equiv S^g(1+t^3)\left(\sum_{j=0}^\infty t^j\left(\binom{2g}{j}-\binom{2g}{j-2}\right)\right) \\ &\equiv\frac{(1+t^3)(1-t^2)(1+t)^{2g}}{(1-t^2)^{2g}}\pmod{t^{g+2}}, \end{align*} as desired (note that for the $t^{g+1}$ coefficient, $\binom{2g}{g+1}-\binom{2g}{g-1}$ is identically zero). \end{proof} Finally, we are ready to calculate the coefficients of the Poincar\'e series $K_g$ explicitly. \begin{proof}[Proof of Theorem~\ref{thm: Kg calculation}] To prove (\ref{item:Kg dimensions}), we use the recurrence relation of Lemma~\ref{lemma: good recurrence} repeatedly, picking up factors of $1+t^3$ and $tSV_{i,1}$: \begin{align*} K_g &= \frac{1+t^3}{t}+ SV_{g-1,1} -\frac{K_{g-1}}{t} \\ &= \frac{1+t^3}{t}+ SV_{g-1,1} - \frac{1+t^3}{t^2} - \frac{SV_{g-2,1}}{t}+\frac{K_{g-2}}{t^2} \\&=\cdots \end{align*} Ending this expansion with $K_1=SV_{0,1}$, we can combine the $1+t^3$ terms and achieve \[ K_{g}=(1-t+t^2)(1 -(-t)^{1-g})+\sum_{j=0}^{g-1}\frac{SV_{j,1}}{(-t)^{g-1-j}} \] valid for $g>0$. The first term has order at most $t^2$, so for $i\ge 3$, $\dim_i(\mathcal{K}_g)$ is the degree $i$ term of \[ \sum_{j=0}^{g-1}\frac{SV_{j,1}}{(-t)^{g-1-j}}. \] Substituting the formula from Corollary~\ref{cor:Vsolution}, this is the degree $i$ term of \[ \sum_{j=0}^{g-1}\sum_{m=0}^j(-1)^{g+1+j}S^{j+1}(1+t^3)t^{j+m+1-g}\left(\binom{2j}{m}-\binom{2j}{m-2}\right). \] For a given choice of degree $i$, the contribution from the $(j,m)$ summand is then $(-1)^{g+1+j}\left(\binom{2j}{m}-\binom{2j}{m-2}\right)$ times the degree $i+g-j-m-1$ coefficient of $(1+t^3)S^{j+1}$. The terms of the Poincar\'e series $S^{j+1}$ are familiar from Section~\ref{sec: non-orientable and open surfaces}. To wit, we have \[ S^{j+1}=\sum_{i\geq0\,\,\mathrm{ even}} \binom{2j+\frac{i}{2}+1}{2j+1}t^i \] so that \begin{align*} (1+t^3)S^{j+1} &=\sum_{i\geq0\,\,\mathrm{even}}\binom{2j+\frac{i}{2}+1}{2j+1}t^i+\sum_{i\geq0\,\,\mathrm{odd}}\binom{2j+\frac{i-3}{2}+1}{2j+1}t^i \\ &=\sum_{i\geq0}\binom{2j+\frac{i}{2}+\frac{1}{4} + \frac{3(-1)^{i}}{4}}{2j+1}t^i. \end{align*} We then have the following equation, valid for $g>0$ and $i\ge 3$: \begin{align*} \dim_i(\mathcal{K}_g)=& \sum_{j=0}^{g-1}\sum_{m=0}^j (-1)^{g+j+1}\left(\binom{2j}{m}-\binom{2j}{m-2}\right) \\ &\qquad\qquad\left(\binom{\frac{1}{2}\left(3j+i+g-m-\frac{1}{2}-\frac{3(-1)^{i+j+g+m}}{2}\right)}{2j+1}\right), \end{align*} which yields (\ref{item:Kg dimensions}) by a direct computation of the individual summands. The special cases for $i\le 2$ can be seen by the formula further above or by inspection. 
For the special case $g=0$, the space $\mathcal{K}_g$ is one dimensional, concentrated in degree $0$, which agrees with the formulas of the theorem since the sum for $i\ge 3$ is empty for $g=0$. For (\ref{item:high g Kg dimensions}), we proceed by induction on $i$, using the recurrence relation of Lemma~\ref{lemma: good recurrence} in the following form: \begin{align*} K_g = 1+t^3 + tSV_{g,1} - tK_{g+1} \end{align*} For the base case, according to Theorem~\ref{thm: Kg calculation}(\ref{item:Kg dimensions}), the degree $2$ coefficient of $K_{g+1}$ is $2g+2$. Then to determine the degree $3$ coefficient of $K_g$ we need only determine the degree $2$ coefficients of $SV_{g,1}$. By Corollary~\ref{cor:Vsolution}, modulo $t^3$, $SV_{g,1}$ is \[ (1+2t^2)^{g+1}\left( \binom{2g}{0} + \binom{2g}{1}t + \left(\binom{2g}{2}-\binom{2g}{0}\right)t^2 \right) \] and so the degree $2$ coefficient of $SV_{g,1}$ is \[ 2(g+1)+ \binom{2g}{2}-1. \] so the overall $\dim_3(\mathcal{K}_g)$ is \[ \dim_3(\mathcal{K}_g)=1+2(g+1)+\binom{2g}{2}-1 - 2g-2 = \binom{2g}{2}, \] as desired. We move on to the inductive step. We will prove the statement for $i\le g+2$, assuming it for lower values of $i$ and all appropriate $g$. By Corollary~\ref{cor: stable value of Vg1}, \[ SV_{g,1}\equiv\frac{(1+t^3)(1-t^2)}{(1-t)^{2g}(1-t^2)^2}\pmod{t^i} \] so \begin{align*} tSV_{g,1}&\equiv\frac{t(1+t^3)(1-t^2)}{(1-t)^{2g}(1-t^2)^2}\pmod{t^{i+1}} \\ &\equiv \frac{t}{(1-t)^{2g-1}}+\frac{t^2}{(1-t)^{2g+1}}\pmod{t^{i+1}} \\&\equiv \left(\sum_{j\geq0}\bigg[\binom{2g+j-3}{2g-2}+\binom{2g+j-2}{2g}\bigg]t^j\right) \pmod{t^{i+1}} \end{align*} where the last step uses the expansion of $\frac{1}{(1-t)^n}$ from Section~\ref{sec: non-orientable and open surfaces}. Thus, the coefficient of $t^i$ in this series is $\binom{2g+i-3}{2g-2}-\binom{2g+i-2}{2g}$, and combining this with the inductive premise (for $i-1$ and $g+1$) and the recurrence relation yields the result directly. \end{proof} Our last act will be to supply the missing proofs of Lemmas \ref{lemma:exact_sequence} and \ref{lemma:Vrecurrence}, both of which follow the same basic syntax, which we now pause to elucidate. We fix $g\geq0$ and use the letters $q$ and $r$ to denote generic homogeneous elements of $\mathcal{X}_{g+1}$, which we polarize according to the decomposition $\mathcal{X}_{g+1}\cong \Lambda[a,b]\otimes\mathbb{Q}[\tilde a,\tilde b]\otimes\mathcal{X}_g$ by writing \[ \def1.5{1.5} \begin{array}{rrrrl} q&=&&\tilde{a}^i\tilde{b}^j &q_{1,ij}\\ &+&a&\tilde{a}^{i-1}\tilde{b}^j &(q_{2,ij} - q_{3,ij})\\ &+&b&\tilde{a}^i\tilde{b}^{j-1} &(q_{2,ij } + q_{3,ij})\\ &+&ab&\tilde{a}^{i-1}\tilde{b}^{j-1} &q_{4,ij}, \end{array}\] where the $q_{kij}$ contain no factors of $a$, $b$, $\tilde a$, or $\tilde b$ and we implicitly sum over $(i,j)$. This equation defines $q_{1,ij}$ for nonnegative $i$ and $j$, $q_{2,ij}$ and $q_{3,ij}$ when at least one of $i$ and $j$ is positive, and $q_{4,ij}$ when both $i$ and $j$ are positive, and we extend to arbitrary $(i,j)$ by declaring all other cases to be zero. Note that $q_{2,i,0}=-q_{3,i,0}$ and that $q_{2,0,j}=q_{3,0,j}$. The index $i$ counts the ``total power of $a$'' in $q$, disregarding the distinction between $a$ and $\tilde a$---likewise for $j$ and $b$---and it is evident that $\delta$ preserves both $i$ and $j$ while $\Delta$ does not. 
In what follows, we will apply operators $\phi_{m,n}\coloneqq\delta^m\Delta^n$ to $q$ and $r$ and impose conditions like $\phi_{m,n} q=0$ or $\phi_{m,n}q=\phi_{m',n'}r$ to obtain an infinite sequence of relations among the $q_{kij}$ and $r_{kij}$. Because $\Delta$ does not preserve $i$ and $j$, these relations will mix different values of $i$ and $j$. In order to obtain homogeneous relations, we work in the bigrading $|q_{1,ij}|=|q_{2,ij}|=(i,j)$ and $|q_{3,ij}|=|q_{4,ij}|=(i-1,j-1)$. With this bigrading in place, we will obtain a system of equations valid for each pair $(i,j)$, and so we will suppress the indices. As an example, we consider imposing the condition $\Delta(q)=0$, which implies that $\Delta(q_{1,ij})+q_{4,i+1,j+1}=0$ for each $(i,j)$. With our bigrading, this equation is homogeneous of degree $(i,j)$, and we abbreviate it as $$\Delta(q_1)+q_4=0.$$ There is an important special case in bidegree $(0,0)$, where $q_2$ is undefined, which we handle separately. Other potential special cases occur in bigrading $(-1,*)$ and $(*,-1)$, with $*\ge 0$, where only $q_3$ is defined. In these cases, since $q_{3,ij}=\pm q_{2,ij}$ if $i$ or $j$ is zero, the value of $q_3$ in this lowest degree is determined by $q_2$ in the next higher bidegree, and one might worry about interaction between the constraints on these two variables. Fortunately, our equations will contain pairs of the form \begin{align*} \phi_{m,n}q_2 &=0 \\ \phi_{m,n}q_3 &= \alpha q_4 \end{align*} for some constant $\alpha$, so that, since $q_4=0$ in this lowest bidegree, the two constraints are identical and there is no special case. Without further ado, we move to the proofs. \begin{lemma}\label{lemma: K and V relationship} If $q$ lies in $\mathcal{K}_{g+1}$, then $(q_{1,ij},-q_{3,i+1,j+1})\in \mathcal{V}(g,1)$ for each $(i,j)$, and $\delta q_{1,00}=0$. Conversely, every collection $\{q_{1,ij}, q_{3,i+1,j+1}\}$ satisfying these conditions specifies a unique element of $\mathcal{K}_{g+1}$. \end{lemma} \begin{proof} The equations $\Delta q = \delta q = 0$ yield: \begin{align*} \Delta q_1 + q_4 &= 0\\ \Delta q_2&=0 \\ \Delta q_3&=0 \\ \Delta q_4&=0 \\ \delta q_1 + 2q_2&= 0 \\\delta q_2 & = 0 \\-\delta q_3 + q_4&=0 \\\delta q_4 & = 0. \end{align*} The fifth and seventh equations each have a free variables for which we will substitute, reducing the number of degrees of freedom and the number of variables. The sixth and eight equation are redundant, implied by the other equations. Then reducing, we get \begin{align*} \Delta q_1 + \delta q_3 &= 0\\ \Delta q_3&=0 \\\delta \Delta q_1&=0\\ \delta\Delta q_3&=0. \end{align*} The third and fourth equations are now redundant. Thus, in the general case, the equations are equivalent to $(q_1,-q_3)\in \mathcal{V}(g,1)$. In the special case $(i,j)=(0,0)$, we have the additional equation $\delta q_1=0$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:exact_sequence}] Lemma \ref{lemma: K and V relationship} grants that the correspondence \[q\mapsto \sum_{i,j}\tilde a^i\tilde b^j\otimes (q_{1,ij},-q_{3,i+1,j+1})\] defines a map $\mathcal{K}_{g+1}\to \mathcal{S}\otimes\mathcal{V}(g,1)$, and that this map is injective. For the righthand map, we take the composite \[\mathcal{S}\otimes \mathcal{V}(g,1)[1]\longrightarrow \mathcal{V}(g,1)[1]\xrightarrow{(q,r)\mapsto \delta q}\mathcal{K}_g,\] where the first map is induced by the augmentation of $\mathcal{S}$. Lemma \ref{lemma: K and V relationship} implies that the kernel of this map is precisely the image of $\mathcal{K}_{g+1}$. 
\end{proof} \begin{proof}[Proof of Lemma \ref{lemma:Vrecurrence}] By definition, $(q,r)\in \mathcal{V}(g+1,n)$ if and only if $\Delta^n q = \delta\Delta^{n-1}r$ and $\Delta^n r = 0$. For $n>1$, these requirements are equivalent to the following system: \begin{align*} \Delta^n q_1 + n\Delta^{n-1} q_4 &= \delta\Delta^{n-1} r_1 + 2 \Delta^{n-1} r_2 + (n-1)\delta \Delta^{n-2}r_4 \\ \Delta^n q_2&= -\delta\Delta^{n-1} r_2\\ \Delta^n q_3&= -\delta\Delta^{n-1} r_3 + \Delta^{n-1} r_4 \\ \Delta^{n} q_4 &= \delta\Delta^{n-1} r_4 \\ \Delta^n r_1 + n\Delta^{n-1}r_4 &= 0\\ \Delta^n r_2&=0\\ \Delta^n r_3&=0\\ \Delta^{n} r_4&=0 \end{align*} None of these equations are redundant. Rearranging, this system becomes the following: \begin{align*} (q_2,r_2) &\in \mathcal{V}(g,n)\\ (q_3+\frac{1}{n}r_1,-r_3) &\in \mathcal{V}(g,n)\\ \Delta^{n-1}(\Delta r_1 + nr_4) &= 0\\ \Delta^{n+1} r_1&=0 \\ \Delta^n q_1 + n\Delta^{n-1} q_4 &= \frac{1}{n}\delta\Delta^{n-1} r_1 + 2 \Delta^{n-1} r_2 + \frac{n-1}{n}\delta \Delta^{n-2}(\Delta r_1 + nr_4) \\ \Delta^{n} q_4 &= -\frac{1}{n}\delta\Delta^{n} r_1 \end{align*} We rewrite the last four of these equations as \begin{align*} \left(nq_4 + \Delta q_1 -\frac{\delta}{n} r_1 - 2r_2,\frac{n-1}{n}(\Delta r_1+nr_4)\right) &\in \mathcal{V}(g,n-1)\\ \Delta^{n+1}r_1 &= 0 \\ \Delta^n\left(\Delta q_1 - \frac{\delta}{n} r_1 -2r_2\right) &= \delta\Delta^n r_1 \end{align*} In summary, the original system is equivalent to the membership relations \begin{align*} (q_2,r_2) &\in \mathcal{V}(g,n)\\ \left(q_3+\frac{1}{n}r_1,-r_3\right) &\in \mathcal{V}(g,n) \\ \left(nq_4 + \Delta q_1 -\frac{\delta}{n} r_1 - 2r_2, \frac{n-1}{n}(\Delta r_1+nr_4)\right) &\in \mathcal{V}(g,n-1)\\ \left(q_1,\frac{n+1}{n}r_1\right) &\in \mathcal{V}(g,n+1) \end{align*} The degeneration when $(i,j)=(0,0)$ is identical except that the $(q_2,r_2)$ term does not exist. When $n=1$, the equations impose the relation that both sides of the third pair above, the pair that is supposed to be in $\mathcal{V}(g,n-1)$, are identically zero and the relations instead eliminate the variables $q_4$ and $r_4$. This matches our convention that $\mathcal{V}(g,0)=0$. Examining the degrees of the pairs above yields the functional relation \[ V_{g+1,n} = S{}( V_{g,n+1}+t^2V_{g,n-1}) + S{}t^3 V_{g,n} + (S{}-1)t^{-1}V_{g,n}, \] and applying the identity $\frac{S{}-1}{S{}}=2t^2-t^4$ yields \[ V_{g+1,n} = S{}( V_{g,n+1}+ 2t V_{g,n} + t^2V_{g,n-1}) \] \end{proof} \bibliographystyle{amsalpha}
\section{Introduction \& Context}\label{sec1} Set covering is a well-known problem in combinatorial optimization. The objective is to cover a set of elements, called the universe, using a minimum number of available covers. The theoretical problem is known to be NP-hard in general \cite{Vazirani2001}, and is often encountered in industrial processes and real-life problems. In particular, the mathematical formulation of the set cover problem is well-suited for radar search pattern optimization of modern radar systems. Electronic scanning and numerical processing allow modern radars to dynamically use bi-dimensional beam-forming, giving them great control over the radar search pattern. While traditional rotating radars search the space sequentially along the azimuth axis and reproduce at each azimuth the same pattern along the elevation axis, modern radars can optimize the search pattern along both axes simultaneously (Fig. \ref{modernradars}). These new possibilities require a more sophisticated formulation of the radar search pattern optimization problem. Approximating radar search pattern optimization as a set cover problem offers a flexible yet powerful formulation \cite{Briheche2016}, capable of accounting for radar-specific constraints (localized clutter, adaptive scan-rate updates, multiple missions) without changing the underlying mathematical structure of the optimization problem. \begin{figure} \centering \includegraphics[width=0.70\linewidth]{figures/01-modernradars.pdf} \caption{Radar search pattern for a rotating radar (left) and a modern fixed-panel radar (right)} \label{modernradars} \end{figure} Various optimization algorithms have been proposed for solving the set cover problem: exact methods such as branch-and-bound combined with relaxation methods, and approximation algorithms such as the greedy algorithm, simulated annealing and genetic algorithms (see \cite{Yelbay2015} for a recent survey of these methods). In practice, problems of reasonable size can be efficiently solved by branch-and-bound exploration using linear relaxation for lower-bound estimation. More importantly, branch-and-bound has characteristics making it particularly well-suited to producing just-in-time solutions, for example for radars in operational situations. \section{Problem Statement} \subsection{Definition} Let $\ensuremath{G}=\{g_{m,n}\}$ be the set representation of a finite bi-dimensional $M$-by-$N$ regular grid with \begin{itemize} \item each element $g_{m,n}$ representing a cell indexed by $(m,n)\in [0,M[ \times [0,N[ \ \subset \mathbb{N}^2$. The grid contains $MN$ cells. \item each pair $(m,n)$ representing the node at the intersection of the $m$-th horizontal line and the $n$-th vertical line with $(m,n)\in [0,M] \times [0,N] \ \subset \mathbb{N}^2$. The grid has $(M+1)(N+1)$ nodes. \end{itemize} On the grid, a rectangular cover is a subset of elements included in a rectangle, uniquely defined by its upper left corner node $(m_0,n_0)$ and its lower right corner node $(m_1,n_1)$, such that $0\leq m_0 < m_1 \leq M$ and $0 \leq n_0 < n_1 \leq N$. The set representation of a cover defined by corners $(m_0,n_0)$ and $(m_1,n_1)$ is: $$\ensuremath{C}=\{g_{m,n},\ (m,n) \in [m_0, m_1[ \times [n_0, n_1[\}$$ For example, in cover $\ensuremath{C}_7$ (Fig.
\ref{setcoverexample}), the corners are $(m_0,n_0)=(0,1)$ and $(m_1,n_1)=(1,2)$.\\ \noindent Let $\ensuremath{\mathcal{C}}=\{\ensuremath{C}_1,\dots,\ensuremath{C}_D\}$ be a collection of $D$ rectangular covers on $\ensuremath{G}$.\\ Let $T_\ensuremath{C} \in \mathbb{R}^+$ be the associated cost of cover $\ensuremath{C} \in \ensuremath{\mathcal{C}}$ (also denoted $T_i$ for cover $\ensuremath{C}_i$).\\ Find a minimum cost sub-collection $\ensuremath{\mathcal{S}} \subset \mathcal{C}$ covering all cells of grid $\ensuremath{G}$.\\ \subsection{Example} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/02-setcoverproblem.pdf} \caption{Grid $\ensuremath{G}$ to cover (left), a cover (center) and the collection $\ensuremath{\mathcal{C}}$ of available covers (right)} \label{setcoverexample} \end{figure} There are eight available covers, shown in Figure \ref{setcoverexample}, which can be combined to cover $\ensuremath{G}$: \begin{itemize} \item $\ensuremath{\mathcal{S}}_1=\{\ensuremath{C}_1,\ensuremath{C}_4,\ensuremath{C}_5,\ensuremath{C}_6,\ensuremath{C}_7\}$ is a valid sub-optimal covering collection with total cost $T_1+T_4+T_5+T_6+T_7=8$. \item $\ensuremath{\mathcal{S}}_2=\{\ensuremath{C}_2,\ensuremath{C}_3,\ensuremath{C}_6,\ensuremath{C}_7\}$ is a valid optimal covering collection with total cost $T_2+T_3+T_6+T_7=6$, as there is no solution with total cost $5$ or less. \item $\ensuremath{\mathcal{S}}_3=\{\ensuremath{C}_2,\ensuremath{C}_5,\ensuremath{C}_6,\ensuremath{C}_8\}$ is another optimal covering collection, showing that an optimal solution is not necessarily unique. \end{itemize} The optimization formulation of this set cover problem can be written as: \begin{equation} \begin{array}{rl} \min & \sum_{\ensuremath{C} \in \ensuremath{\mathcal{S}}} T_\ensuremath{C} \\ s.t. & \forall g_{m,n} \in \ensuremath{G}, \exists \ensuremath{C} \in \ensuremath{\mathcal{S}},\ g_{m,n} \in \ensuremath{C} \\ & \ensuremath{\mathcal{S}} \subset \ensuremath{\mathcal{C}} \end{array}\label{setcoverequation} \end{equation} In the radar search pattern application, the grid $\ensuremath{G}$ represents the surveillance area (Fig. \ref{surveillancearea}), while each cover $\ensuremath{C} \in \ensuremath{\mathcal{C}}$ represents a radar beam and its detection area on the grid $\ensuremath{G}$ (Fig. \ref{beam}). The associated cost $T_\ensuremath{C}$ is the duration required to emit the radar signal, then receive and process the echo. The total cost of a collection of radar beams is the time required to emit all beams in sequential order, as the radar cannot emit several beams simultaneously. \begin{figure} \centering \includegraphics[width=0.80\linewidth]{figures/03-surveillancearea.pdf} \caption{Surveillance area $\ensuremath{\mathcal{A}}_S$ (left), its projection in direction cosines (center) and the surveillance grid (right)} \label{surveillancearea} \end{figure} \begin{figure} \centering \includegraphics[width=0.80\linewidth]{figures/04-widenedbeamcover.pdf} \caption{Radar detection beam (left), its radiation pattern (center) and the associated cover (right)} \label{beam} \end{figure} \subsection{Combinatorial complexity} A rectangular cover is uniquely defined by its upper left and lower right corners. Those corners are mathematically defined by choosing two values $m_0$ and $m_1$ among the $M+1$ horizontal lines, and two values $n_0$ and $n_1$ among the $N+1$ vertical lines on the grid.
Thus there are at most $$\binom{M+1}{2}\binom{N+1}{2}=\frac{MN(M+1)(N+1)}{4} = O(M^2N^2) $$ possible distinct rectangles on an $M$-by-$N$ grid, and so the maximum number of possible sub-collections of rectangular covers on the grid is $2^{MN(M+1)(N+1)/4}$. Even for a $10$-by-$10$ grid, which is relatively small, the number of possible sub-collections is approximately $10^{900}$, which is far too large to allow brute-force exploration. \section{Integer Programming} \subsection{Problem Formulation} The set cover problem can be written as an integer program by using matrix formulations. We represent each cover $\ensuremath{C} \in \ensuremath{\mathcal{C}}$ as a binary $M$-by-$N$ matrix denoted $\bold{C}$, or as a binary vector of length $MN$ denoted $\bold{c}$: $$ \bold{C}(m,n)= \bold{c}(m+Mn)= \begin{cases} 1 \text{ if } g_{m,n} \in \ensuremath{C}\\ 0 \text{ otherwise} \end{cases} $$ \begin{figure} \centering \includegraphics[width=0.90\linewidth]{figures/05-discretecover.pdf} \caption{Radar detection beam (left), its binary matrix representation (center) and its binary vector representation (right)} \label{covervector} \end{figure} For each cover $\ensuremath{C}_i \in \ensuremath{\mathcal{C}}$, let $x_i \in \{0,1\}$ be the binary selection variable of cover $\ensuremath{C}_i$, such that the vector $\bold{x}=(x_1,\dots,x_D)\in \{0,1\}^D$ represents the sub-collection $\ensuremath{\mathcal{S}}=\{\ensuremath{C}_i \in \ensuremath{\mathcal{C}} \ s.t.\ x_i=1\}$, containing the chosen covers. Let $\bold{T}=(T_1\cdots T_D)^T$ be the cost vector and let $$\bold{A}= \begin{pmatrix} \bold{c}_1 & \cdots & \bold{c}_D \end{pmatrix} = \begin{pmatrix} \bold{C}_1(0,0) & \cdots & \bold{C}_D(0,0) \\ \bold{C}_1(1,0) & \cdots & \bold{C}_D(1,0) \\ \vdots & \ddots & \vdots\\ \bold{C}_1(m,n) & \cdots & \bold{C}_D(m,n) \\ \vdots & \vdots & \vdots\\ \end{pmatrix}$$ be the cover matrix. Then the set cover problem (\ref{setcoverequation}) can be written as the following integer program: \begin{equation} \begin{array}{rl} \min & \bold{T}^T.\bold{x} \\ s.t. & \bold{A}\cdot \bold{x} \geq \bold{1} \\ & \bold{x} \in \{0,1\}^D \end{array}\label{integerprogram} \end{equation} where $\bold{1}$ is the vector $(1 \cdots 1)$ of length $MN$. As an example, the set cover problem represented in Figure \ref{setcoverexample} can be described by the following equation: \begin{equation} \bold{A}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \end{pmatrix} \text{, } \bold{T}= \begin{pmatrix} 2 & 2 & 2 & 2 & 2 & 1 & 1 & 1 \end{pmatrix}^T \text{ and } \ensuremath{\bold{x}}= \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \end{pmatrix} \label{setcoverexampleequation} \end{equation} In their general form, integer programs are NP-hard to solve. Intuitively, this means that solving those problems is difficult, and requires some form of exhaustive enumeration of all possible solutions, whose number is often exponential with respect to the problem size.
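The toy instance (\ref{setcoverexampleequation}) is, however, small enough for such an exhaustive enumeration: the following minimal Python sketch, with $\bold{A}$ and $\bold{T}$ copied from (\ref{setcoverexampleequation}), enumerates all $2^8=256$ sub-collections and confirms that the minimum total cost is $6$, attained in particular by the collections $\ensuremath{\mathcal{S}}_2$ and $\ensuremath{\mathcal{S}}_3$ of the example above.
\begin{verbatim}
# Sketch: brute-force check of the toy instance above.  With D = 8 covers
# there are only 2^8 = 256 sub-collections, so exhaustive enumeration is
# still feasible here; it confirms the optimal cost 6 of the example.
from itertools import product

A = [[0,0,0,0,0,1,0,1],   # one row per cell of the grid G
     [0,0,0,1,0,1,0,0],
     [0,1,0,1,0,0,0,0],
     [0,0,0,0,0,1,1,1],
     [0,0,0,0,0,1,1,0],
     [1,1,1,0,0,0,0,0],
     [0,0,0,0,0,0,1,1],
     [0,0,0,0,1,0,1,0],
     [0,0,1,0,1,0,0,0]]
T = [2,2,2,2,2,1,1,1]     # costs of covers C_1, ..., C_8

def covers_all(x):
    return all(sum(a*xi for a, xi in zip(row, x)) >= 1 for row in A)

valid = [x for x in product([0,1], repeat=len(T)) if covers_all(x)]
best = min(sum(t*xi for t, xi in zip(T, x)) for x in valid)
optima = [[i+1 for i, xi in enumerate(x) if xi]
          for x in valid if sum(t*xi for t, xi in zip(T, x)) == best]

print(best)     # 6, as claimed for S_2 and S_3 in the example
print(optima)   # includes [2, 3, 6, 7] and [2, 5, 6, 8]
\end{verbatim}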
\subsection{Linear Relaxation} The linear relaxation of an integer program can be obtained by relaxing the integrality constraint of (\ref{integerprogram}) into a positivity constraint, allowing the variables $(x_i)_{1\leq i \leq D}$ to take continuous values in $[0,1]$: \begin{equation} \begin{array}{rl} \min & \bold{T}^T.\bold{x} \\ s.t. & \bold{A}\cdot \bold{x} \geq \bold{1} \\ & \bold{0} \leq \bold{x} \leq \bold{1} \end{array}\label{linearprogram} \end{equation} Any valid solution of the integer program is also a valid solution of its linear relaxation. Consequently the optimal value of the linear relaxation is inferior to the optimal value of the integer program, since an optimal solution of the integer program is a valid solution of the linear relaxation. Note that the constraint $\bold{x}\leq \bold{1}$ is in fact unnecessary, since the problem \begin{equation} \begin{array}{rl} \min & \bold{T}^T.\bold{x} \\ s.t. & \bold{A}\cdot \bold{x} \geq \bold{1} \\ & \bold{0} \leq \bold{x} \end{array}\label{linearprogramreduced} \end{equation} has the same optimal solutions as (\ref{linearprogram}). Intuitively, in the linear relaxation, a cell is going to be covered by a sum of ``fractional" covers (with $x_i<1$), or as at least one integer cover (with $x_i=1$) and thus has no need for covers with $x_i>1$. We formalize mathematically this idea in the following proof: \begin{itemize} \item Let $\bold{x}^b=(x_1^b \cdots x_D^b)$ be an optimal solution of (\ref{linearprogramreduced}). Let $\bold{y}=(y_1 \cdots y_D) \text{ with }y_i=\min\{x_i^b,1\}$.\\ Immediately we have $\ensuremath{\bold{T}}^T \bold{y} \leq \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^b$. Since $\bold{x}^b$ is a valid solution of (\ref{linearprogramreduced}): $$\forall (m,n),\sum_i x_i^b \ensuremath{C}_i(m,n) = \sum_{x_i^b\leq 1} x_i^b \ensuremath{C}_i(m,n) + \sum_{x_i^b> 1} x_i^b \ensuremath{C}_i(m,n) \geq 1$$ \begin{itemize} \item[-] Case 1: $\sum_{x_i^b>1} x_i^b \ensuremath{C}_i(m,n)=0$ $$\sum_i y_i \ensuremath{C}_i(m,n) \geq \sum_{x_i^b\leq 1} x_i^b \ensuremath{C}_i(m,n) = \sum_{x_i^b\leq 1} x_i^b \ensuremath{C}_i(m,n) + \sum_{x_i^b> 1} x_i^b \ensuremath{C}_i(m,n) \geq 1$$ \item[-] Case 2: $\sum_{x_i^b>1} x_i^b \ensuremath{C}_i(m,n)>0$\\ $\exists j \text{ s.t. }\ x_j^b>1 \text{ and } \ensuremath{C}_j(m,n)>0$, and since $\ensuremath{C}_j(m,n) \in \{0,1\}$, $\ensuremath{C}_j(m,n)=1$, thus: $$ \sum_i y_i \ensuremath{C}_i(m,n) \geq y_j \ensuremath{C}_j(m,n) \geq \ensuremath{C}_j(m,n) \geq 1 $$ \end{itemize} So $\bold{y}$ is a valid solution of (\ref{linearprogramreduced}), and since $\ensuremath{\bold{T}}^T \bold{y} \leq \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^b $, $\bold{y}$ is also an optimal solution of (\ref{linearprogramreduced}).\\ From this, we deduce $\ensuremath{\bold{T}}^T \bold{y} = \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^b $ and with $\ensuremath{\bold{T}} > \bold{0}$, we deduce $\ensuremath{\bold{x}}^b=\bold{y}$, so $\ensuremath{\bold{x}}^b$ is a valid solution for (\ref{linearprogram}). \item Let $\ensuremath{\bold{x}}^a$ be an optimal solution of (\ref{linearprogram}), $\ensuremath{\bold{x}}^a$ is also a valid solution of (\ref{linearprogramreduced}), so $\ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^b \leq \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^a$. 
We just showed that if $\ensuremath{\bold{x}}^b$ is an optimal solution of (\ref{linearprogramreduced}) then it is also a valid solution of (\ref{linearprogram}), so we also have $\ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^a \leq \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^b$, and thus we can conclude $\ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^b = \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}^a$. So any optimal solution of (\ref{linearprogram}) is an optimal solution of (\ref{linearprogramreduced}) and reciprocally, thus both problems have the same set of optimal solutions. \end{itemize} Furthermore, the positivity constraints $\bold{0} \leq \bold{x}$ can be integrated in the matrix formulation with $$ \bold{R} = \begin{pmatrix} \ensuremath{\bold{A}} \\ \bold{I} \end{pmatrix} \text{ and } \bold{d}= \begin{pmatrix} \bold{1} \\ \bold{0} \end{pmatrix} $$ by rewriting the linear program as \begin{equation} \begin{array}{rl} \min & \bold{T}^T.\bold{x} \\ s.t. & \bold{R}\cdot \bold{x} \geq \bold{d} \end{array}\label{linearprogramcomplete} \end{equation} And the three formulations of the linear relaxation (\ref{linearprogram}), (\ref{linearprogramreduced}) and (\ref{linearprogramcomplete}) are equivalent. The integer program representing our set cover problem and its linear relaxation have two more interesting properties: \begin{itemize} \item Easily-checked feasibility: an integer program is feasible if there is at least one solution validating all constraints. It is possible that no valid solution exists if some constraints are conflicting, or if one constraint is impossible. In our case, feasibility is easy to check: the integer program as well as its linear relaxation are feasible if and only if $\ensuremath{\bold{x}}_F=(1\cdots 1)$ is a feasible solution, i.e. $\ensuremath{\bold{A}} \cdot \ensuremath{\bold{x}}_F = \sum_{i=1}^{D}\ensuremath{\bold{c}}_i \geq \bold{1}$: \begin{itemize} \item if $\ensuremath{\bold{x}}_F$ is a valid solution, then the problem is feasible by definition. \item if $\ensuremath{\bold{x}}_F$ is an invalid solution, then there is an invalidated constraint for $\ensuremath{\bold{x}}_F$, i.e.:\\ $\exists (m,n)$ s.t. $\sum_{i=1}^{D}\ensuremath{\bold{C}}_i(m,n)<1$, and since $\forall(i,m,n)$, $\ensuremath{\bold{C}}_i(m,n) \in \{0,1\}$,\\ $\exists (m,n)$ s.t. $\forall i ,\ensuremath{\bold{C}}_i(m,n)=0 \Rightarrow \exists (m,n)$ s.t. $\forall (x_i)_{1\leq i \leq D}, \sum_{i=1}^{D}x_i\ensuremath{\bold{C}}_i(m,n)=0<1$\\ In other words, $A$ has its $(m+Mn)$-th row filled with zeros, corresponding to a constraint which can be satisfied by no solution. Intuitively, $\ensuremath{\bold{x}}_F$ represents $\ensuremath{\mathcal{C}}$, the collection of all available covers itself, and if it is an invalid solution, then there is a cell which cannot be covered. This can happen in a real system if there is a cell which cannot be scanned, because of an obstacle or because the radar has not enough power to achieve the desired detection range. \end{itemize} \item Boundedness: a recurring question for linear programs is whether they are bounded, that is whether the cost function is bounded (below for minimization) for valid solutions. In our case the cost function is positive and thus bounded below by $0$. \end{itemize} \subsection{Linear Programming} There are three important geometrical aspects describing the decision space of the integer and linear programs (Fig. \ref{lp_space}): \begin{itemize} \item $\ensuremath{\bold{T}}$ is the cost function gradient. 
The cost function is linear and its gradient is constant. $-\ensuremath{\bold{T}}$ is the direction of maximum decrease of the cost function. \item $\ensuremath{\bold{A}}$ is the cover matrix. Each row of $\ensuremath{\bold{A}}$ corresponds to a detection constraint on a cell of $\ensuremath{G}$. In the decision space, each constraint corresponds to a hyperplane, the boundary between the halfspace of solutions satisfying the constraint and the halfspace of solutions violating it. The intersection of those halfspaces forms a convex polyhedron. \item The positivity constraint of the linear relaxation $ \bold{0} \leq \bold{x} \leq \bold{1}$ bounds the values of the valid solutions to the hypercube $[0,1]^D$. For the integer program, the integrality constraint $\ensuremath{\bold{x}} \in \{0,1\}^D$ further reduces the set of valid solutions to the vertices of the hypercube $[0,1]^D$. \end{itemize} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figures/06-lp_space_rect.pdf} \caption{Decision space for 2D linear and integer programs} \label{lp_space} \end{figure} So the set of valid solutions for the linear relaxation is the intersection of the valid halfspaces of all constraints and of the hypercube $[0,1]^D$. Geometrically, it is a bounded convex polyhedron in $\mathbb{R}^D$, and can be described by its vertices (``corners"). Each vertex of this polyhedron is a point where at least $D$ hyperfaces of the polyhedron intersect, in other words, a point where $D$ constraints are tight. Such a point is called a basic solution (or basic vertex) of the linear program. It has been proved that if a linear program is bounded and feasible, then it has a basic optimal solution \cite{Matouek2006}. For a linear program, the convexity of the polyhedron allows the use of descent methods, such as Dantzig's simplex method, represented in Figure \ref{lp_solv}, which moves from vertex to vertex on the feasible polyhedron until it reaches a basic optimal solution, i.e. a vertex with no decreasing neighbor. However, this type of method generally cannot be used to solve integer programs, for which the solutions are isolated points. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figures/06-lp_solv_rect.pdf} \caption{Illustration of Dantzig's simplex method for solving linear programs} \label{lp_solv} \end{figure} So for a given basic optimal solution, we have $D$ tight constraints. Let $B \leq MN$ be the number of tight detection constraints. If $D > B$, then we have $Z = D-B$ tight bound constraints. By considering formulation (\ref{linearprogramreduced}), we know that those $Z$ tight bound constraints are of the form $0 \leq x_i$, thus $x_i = 0$. The corresponding $Z$ variables are called \textit{non-basic} variables and are zero. The other $D-Z=B$ variables are called \textit{basic} variables and can take non-zero values. We reorganize the variables as $\ensuremath{\bold{x}}^T=(\ensuremath{\bold{x}}_B^T\ \ensuremath{\bold{x}}_Z^T)$, where $\ensuremath{\bold{x}}_B$ are the basic variables and $\ensuremath{\bold{x}}_Z$ are the non-basic variables. Thus we have $B$ tight detection constraints in $\ensuremath{\bold{A}}$, such that $$\ensuremath{\bold{A}}_B \ensuremath{\bold{x}}_B = \bold{1}$$ where $\ensuremath{\bold{A}}_B$ is the square $B$-by-$B$ submatrix of $\ensuremath{\bold{A}}$ linking the basic variables $\ensuremath{\bold{x}}_B$ to the tight detection constraints. Furthermore, $\ensuremath{\bold{A}}_B$ is necessarily non-singular: the $D$ tight constraint hyperplanes intersect at a single point (the vertex), so the corresponding constraints are linearly independent.
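As a concrete illustration (a minimal sketch on a hypothetical three-cell, three-cover instance, not the study case treated later), the linear relaxation can be solved with \texttt{scipy.optimize.linprog} and the basic/non-basic structure of the optimal vertex read off directly; the instance is chosen so that the unique basic optimal solution is fractional.
\begin{verbatim}
# Minimal sketch (hypothetical 3-cell / 3-cover instance): solve the linear
# relaxation  min T.x  s.t.  A x >= 1,  0 <= x <= 1  with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 0, 1],   # cell 0 is covered by covers 0 and 2
              [1, 1, 0],   # cell 1 is covered by covers 0 and 1
              [0, 1, 1]])  # cell 2 is covered by covers 1 and 2
T = np.ones(3)             # unit cost for every cover

# linprog only handles "<=" constraints, so A x >= 1 becomes -A x <= -1
res = linprog(c=T, A_ub=-A, b_ub=-np.ones(3), bounds=[(0, 1)] * 3)

print(res.x, res.fun)      # unique LP optimum (0.5, 0.5, 0.5) with cost 1.5
# All three variables are basic (non-zero) and all three detection
# constraints are tight; the integer optimum needs two covers (cost 2).
\end{verbatim}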
\subsection{Integral Program and Total Unimodularity} So for any basic optimal solution of the linear program, there is a (possibly non-unique) square non-singular submatrix $\ensuremath{\bold{A}}_B$ of $\ensuremath{\bold{A}}$ such that $\ensuremath{\bold{A}}_B \ensuremath{\bold{x}}_B = \bold{1}$ (note that the reverse is not true: the condition is necessary, but not sufficient). Thus, $\ensuremath{\bold{x}}_B=\ensuremath{\bold{A}}_B^{-1} \bold{1}$ and $\ensuremath{\bold{x}}^T=(\ensuremath{\bold{x}}_B^T\ \ensuremath{\bold{x}}_Z^T)=(\ensuremath{\bold{x}}_B^T\ \bold{0})$. So if $\ensuremath{\bold{A}}_B^{-1}$ is an integral matrix (i.e. contains only integral values), then $\ensuremath{\bold{x}}$ is also integral. This means that the linear program and the integer program share an optimal solution. The integrality of the inverse matrix is determined by the determinant of the original matrix: $$\det(\ensuremath{\bold{A}}_B) \in \{-1,+1\} \Rightarrow \ensuremath{\bold{A}}_B^{-1} = \frac{\com(\ensuremath{\bold{A}}_B)^T}{\det(\ensuremath{\bold{A}}_B)}\text{ is integral}$$ A matrix $\ensuremath{\bold{A}}$ is said to be \textit{unimodular} if $\det(\ensuremath{\bold{A}}) \in \{-1,0,+1\}$. A matrix is said to be \textit{totally unimodular} if all its square submatrices are unimodular. An integer program whose constraint matrix is totally unimodular can be directly solved by linear relaxation, because the integer program and its linear relaxation have the same basic optimal solutions. In this case, the problem and its associated convex polyhedron are said to be integral. Geometrically, this means that all vertices of the polyhedron are integral points. Total unimodularity is an important concept in combinatorial optimization, because it reduces integer programming to linear programming, which is theoretically an ``easier" problem. Linear programming is solvable in polynomial time, and very efficient practical algorithms exist. \subsection{One-dimensional cover problem} For example, let us consider the one-dimensional case of our problem, with $M=1$: \begin{itemize} \item $\ensuremath{G}=\{g_n,\, n\in [0,N[\}$ is a vector with $N$ cells to cover. \item a cover $C=\{g_n,\, n\in [n_0,n_1[\}$ is a contiguous subset of $\ensuremath{G}$, uniquely defined by a starting element $n_0$ and an ending element $n_1$ such that $0 \leq n_0 < n_1 \leq N$. Each cover can be represented by a binary vector $$\ensuremath{\bold{c}}(n) = \begin{cases} 1 \text{ if } n_0 \leq n < n_1\\ 0 \text{ otherwise} \end{cases}$$ which contains contiguous ``ones", i.e. there is no ``zero" between two ``ones". \item $\ensuremath{\mathcal{C}}=\{\ensuremath{C}_1,\dots,\ensuremath{C}_D\}$ is a collection of covers on $\ensuremath{G}$. \end{itemize} An example of this problem is presented in Figure \ref{1dcoverproblem}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/1dsetcoverproblem.pdf} \caption{Instance of the one-dimensional cover problem} \label{1dcoverproblem} \end{figure} For the one-dimensional cover problem, the cover matrix $\ensuremath{\bold{A}}=\begin{pmatrix}\ensuremath{\bold{c}}_1 & \cdots & \ensuremath{\bold{c}}_D\end{pmatrix}$ is an \textit{interval matrix}, i.e. each column $\ensuremath{\bold{c}}_i$ of $\ensuremath{\bold{A}}$ has its ones placed consecutively.
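Before invoking the general result, the property can be checked numerically on a small instance (a brute-force sketch for intuition only; enumerating all square submatrices is exponential and is not a practical test of total unimodularity):
\begin{verbatim}
# Brute-force check of total unimodularity for a small interval matrix
# (hypothetical instance: three covers on four cells).
from itertools import combinations
import numpy as np

A = np.array([[1, 0, 0],   # each column has its ones placed consecutively
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])

def is_totally_unimodular(A):
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = int(round(np.linalg.det(A[np.ix_(rows, cols)])))
                if d not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(A))   # True, as expected for an interval matrix
\end{verbatim}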
Interval matrices are known to be unimodular \cite{Nemhauser1999}, and thus totally unimodular, since every submatrix of an interval matrix is also an interval matrix. Every basic optimal solution of the linear relaxation is therefore an integral solution, and a valid solution for the integer program. Solving the linear relaxation of the one-dimensional cover is sufficient to solve the problem itself, making it an ``easy" problem. However, this is not the case for the two-dimensional cover problem. For the problem represented in Figure \ref{setcoverexample} and described by Equation (\ref{setcoverexampleequation}), the basic optimal solution of the linear relaxation is $\ensuremath{\bold{x}}_L=(0\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2})^T$. A possible $\ensuremath{\bold{A}}_B$ submatrix for this solution is \begin{figure*}[!h] \centering \includegraphics[width=\widthof{$\ensuremath{\bold{A}}_B=\begin{pmatrix}0&0&0&0&0&0&0&0\end{pmatrix}$}]{figures/submatrix.pdf} \end{figure*} which has a determinant of $-2$, thus explaining the appearance of $\frac{1}{2}$ values in the basic solution of the linear relaxation. This counterexample disproves the total unimodularity of the cover matrix in the two-dimensional case. \subsection{Integrality Gap} For the two-dimensional cover problem, the optimal solution of the linear relaxation might not be a valid solution for the integer problem and may have fractional values. The optimal cost of the linear relaxation described by Equation (\ref{setcoverexampleequation}) is $\ensuremath{\bold{T}}^T\ensuremath{\bold{x}}_L = \frac{11}{2}$. The optimal cost of the integer program cannot be fractional and is necessarily an integer (in this case, it is $6$). The difference between the optimal cost of the integer program and that of its linear relaxation is called the \textit{integrality gap}. \subsection{Dynamic programming} Interestingly, the difficulty gap between the one-dimensional and two-dimensional cover problems can also be found by a completely different, algorithmic approach using dynamic programming. Dynamic programming is a method for solving an optimization problem by recursively solving smaller sub-problems. Problems solved by dynamic programming usually possess an \textit{optimal substructure}, which means that an optimal solution can be constructed by combining optimal solutions of its sub-problems. The method is particularly efficient if this substructure can be broken down recursively into a polynomial number of sub-problems. This is the case for the one-dimensional cover (Fig. \ref{1doptimalsubstructure}). An optimal subcollection $\ensuremath{\mathcal{S}}$ covering $\ensuremath{G}$ for this problem is the combination of: \begin{itemize} \item a cover $\ensuremath{\bold{c}}$ including the last cell, i.e. such that $n_1 = N$. Thus $\ensuremath{\bold{c}}$ covers the last cell but might also cover some previous cells. \item a subcollection covering optimally the first $n_0$ cells (which are not covered by $\ensuremath{\bold{c}}$) \end{itemize} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/1doptimalsubstructure.pdf} \caption{Optimal substructure of the one-dimensional cover problem} \label{1doptimalsubstructure} \end{figure} Then it is possible to recursively define the solution covering optimally the first $K$ cells as the union of one cover including the $K$-th cell and a solution covering optimally the first $k$ cells with $k<K$.
This is formalized by the following equation, called the \textit{Bellman recursion}: $$ \ensuremath{\bold{x}}_{[1,K]} = \left\{ \begin{array}{cl} \bold{0} &\text{ if } K=0 \\ \ensuremath{\bold{x}}_{[1,K-1]} &\text{ if } \ensuremath{\bold{A}} \ensuremath{\bold{x}}_{[1,K-1]} \geq \bold{1}_{[1,K]} \\ \argmin \{ \ensuremath{\bold{T}}^T \ensuremath{\bold{x}}\ \text{ s.t. }\ensuremath{\bold{x}}=(\ensuremath{\bold{x}}_{[1,k]}+\ensuremath{\bold{x}}_i),\ k<K,\ \ensuremath{\bold{A}} \ensuremath{\bold{x}} \geq \bold{1}_{[1,K]}\}&\text{otherwise} \end{array}\right. $$ where \begin{itemize} \item[-] $\ensuremath{\bold{x}}_{[1,K]}$ is the solution covering optimally at least the first $K$ cells \item[-] $\bold{1}_{[1,K]}$ is the vector of length $N$ starting with $K$ ones and ending with $N-K$ zeroes \item[-] $\ensuremath{\bold{x}}_i$ is the vector of length $D$ with a one at position $i$ and zeroes elsewhere, representing the addition of the cover $\ensuremath{\bold{c}}_i$. \item[-] $k$ is the position of the first non-zero element of the given cover $\ensuremath{\bold{c}}_i$, i.e. the number of cells preceding the first cell covered by $\ensuremath{\bold{c}}_i$ \end{itemize} The recursion can be described by the following steps: \begin{itemize} \item[-] $\ensuremath{\bold{x}}_{[1,0]}$ is the solution covering zero cells, initialized at $\bold{0}$ \item[-] If the solution covering the first $K-1$ cells also covers the $K$-th cell, we keep it. \item[-] Otherwise, search for the optimal solution covering the first $K$ cells as a combination of: \begin{itemize} \item[$\cdot$] a cover $\ensuremath{\bold{c}}_i$ containing the $K$-th cell \item[$\cdot$] the sub-solution $\ensuremath{\bold{x}}_{[1,k]}$ optimally covering the first $k<K$ cells not covered by $\ensuremath{\bold{c}}_i$ \end{itemize} \end{itemize} The dynamic programming method thus computes optimal sub-solutions for all sub-problems of covering the first $K$ cells, for $K \leq N$, so we must solve $N$ sub-problems. For each sub-problem, the solution is computed as a combination of a cover and the optimal solution of a smaller substructure, and thus requires $O(D)$ steps to search through all covers. The algorithmic complexity of the dynamic programming algorithm is therefore $O(ND)$, which is polynomial.
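A compact variant of this recursion is sketched below (a minimal illustration on a hypothetical instance; covers are given as $(n_0,n_1)$ pairs over cells $0,\dots,N-1$ rather than in the matrix notation above, and the shortcut case of the recursion is omitted since it only saves work).
\begin{verbatim}
# Minimal sketch of the one-dimensional cover dynamic program.
import math

def min_cost_cover_1d(N, covers, costs):
    # best[K] = (optimal cost, chosen cover indices) for the first K cells
    best = [(0.0, [])] + [(math.inf, None)] * N
    for K in range(1, N + 1):
        for i, (n0, n1) in enumerate(covers):
            if n0 < K <= n1:                    # cover i contains cell K-1
                sub_cost, sub_sol = best[n0]    # optimum for the first n0 cells
                if sub_sol is not None and sub_cost + costs[i] < best[K][0]:
                    best[K] = (sub_cost + costs[i], sub_sol + [i])
    return best[N]

covers = [(0, 3), (2, 5), (0, 5), (3, 5)]       # hypothetical instance, N = 5
costs  = [2.0,    2.0,    5.0,    1.0]
print(min_cost_cover_1d(5, covers, costs))      # (3.0, [0, 3])
\end{verbatim}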
A natural question is whether this approach can be generalized to the two-dimensional cover problem. Let us consider an optimal solution for the two-dimensional cover problem. It can be viewed as a combination of a rectangular cover $\ensuremath{C}$ including the bottom-right cell and an optimal cover for the substructure of cells not covered by $\ensuremath{C}$. So the optimal substructure of the two-dimensional cover problem is the cover sub-problem of a ``top-left" part of the $M$-by-$N$ grid $\ensuremath{G}$. The number of sub-problems is equal to the number of ways of cutting $\ensuremath{G}$ into two substructures, a top-left part and a bottom-right part, as presented in Figure \ref{1dvs2doptimalsubstructure}. Equivalently, this is equal to the number of monotone paths between the top-right corner and the bottom-left corner of $\ensuremath{G}$. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/1dvs2doptimalsubstructure.pdf} \caption{Substructure decomposition of the one-dimensional cover problem (left) and the two-dimensional cover problem (middle)} \label{1dvs2doptimalsubstructure} \end{figure} A cut is constituted by $N+M$ edges on the grid, with $M$ vertical edges and $N$ horizontal edges. Any cut can be defined uniquely by choosing the $M$ vertical edges (or equivalently the $N$ horizontal edges) among the $N+M$ edges. So the number of possible paths between two opposite corners of $\ensuremath{G}$, and thus the number of cover sub-problems on $\ensuremath{G}$, is $\binom{N+M}{N}=\binom{N+M}{M}$. Let $K=\min\{N,M\}$; then the number of possible cuts can be bounded below by the following approximation using Stirling's formula $$\binom{N+M}{N} \geq \binom{2K}{K} \simeq \frac{\sqrt{2\pi2K}(2K)^{2K}}{e^{2K}} \left(\frac{e^{K}}{\sqrt{2\pi K}K^K}\right)^2 = \frac{2^{2K}}{\sqrt{\pi K}} $$ Thus, the number of sub-problems to solve grows exponentially with the grid size: an increase of the grid size by $10$ increases the number of sub-problems by a factor of approximately $2^{2\cdot 10} \approx 10^6$. Even for small values, the number of sub-problems explodes: \begin{table}[!h] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $N=M$ & 10 & 20 & 30 & 40 & 50 \\ \hline $\binom{2N}{N}$ & $\simeq 10^{5}$ & $\simeq 10^{11}$ & $\simeq 10^{17}$ & $\simeq 10^{23}$ & $\simeq 10^{29}$ \\ \hline \end{tabular} \caption{Number of sub-problems}\label{tab} \end{table} So while theoretically usable for the two-dimensional cover problem, dynamic programming has an exponential complexity for this problem, making the approach impractical. This hints that the two-dimensional cover problem is computationally harder than the one-dimensional cover problem. The two-dimensional cover problem is in fact NP-hard \cite{Briheche2018}. To efficiently solve the two-dimensional cover problem, we need to use a more general optimization method. \section{Branch\&Bound} Integer programs are generally NP-hard optimization problems: there is currently no known algorithm capable of quickly finding an optimal solution. Informally, exact algorithms ``have to" search through the solution space. The space of all possible solutions can be represented as a finite binary tree whose depth is the number of integer variables, each node representing the value choice of one variable (Fig. \ref{branchbound}). Each end leaf represents a solution of the integer program. The number of possible solutions is finite, but grows exponentially and is usually huge: in our case, $2^D$ possible solutions. Exploring the entire tree is computationally unfeasible in reasonable time. However, it is possible at each node to estimate a lower bound of the best solution in the node's sub-tree, by solving its linear relaxation. Knowing these lower bounds, it is possible to avoid exploring certain sub-trees. This method is known as the branch-and-bound method \cite{Conforti2014}: \begin{itemize} \item Branching: Each branch at the current node (with depth $i-1$) corresponds to a chosen value, $0$ or $1$, for the next variable $x_i$. In each branch, $x_i$ is no longer a variable but a parameter. The current problem is thus divided into $2$ smaller sub-problems, each considering a different value for $x_i$ and each having one variable less. \item Bounding: The current problem is relaxed into a linear program, whose solution is a lower bound of the best solution of the current problem. Depending on the lower bound value, the node sub-tree will be explored next (if it is the most promising branch), later (if there is a more promising branch), or never (if a better solution has already been found in another branch). \end{itemize} Defining what a promising branch is remains a difficult question: a lower bound is not necessarily a better indicator, since deeper nodes may have higher bounds while being closer to optimal solutions. Integer programming solvers usually rely on various heuristics to define the exploration strategy and improve bound estimations.
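As a minimal illustration of the bounding step (a sketch only, reusing the hypothetical three-cover instance introduced earlier, not the solver used in the application section), the linear relaxation of a node can be obtained by fixing the already-branched variables through their bounds:
\begin{verbatim}
# Sketch of the bounding step: fix branched variables via bounds and solve
# the node's linear relaxation (hypothetical 3-cover instance).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
T = np.ones(3)

def node_lower_bound(fixed):     # fixed = {variable index: 0 or 1}
    bounds = [(fixed[i], fixed[i]) if i in fixed else (0, 1) for i in range(3)]
    res = linprog(c=T, A_ub=-A, b_ub=-np.ones(3), bounds=bounds)
    return res.fun if res.success else np.inf   # inf: infeasible node, pruned

print(node_lower_bound({}))      # root relaxation: 1.5
print(node_lower_bound({0: 1}))  # branch x_0 = 1: lower bound 2.0
print(node_lower_bound({0: 0}))  # branch x_0 = 0: lower bound 2.0
\end{verbatim}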
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/07-branchandbound.pdf} \caption{Finite tree of solutions (left) and branch-and-bound method (right)} \label{branchbound} \end{figure} \subsection{Description} We present in this section a pseudo-code describing a basic implementation of the branch-and-bound method in Algorithm \ref{branchandbound}. Each node in the tree can be described by the sequence of choices leading to it: $$\ensuremath{N}=(x_1,x_2,\dots,x_d)$$ From a given node, we can compute its children $\ensuremath{N}_0 = (x_1,\dots,x_d,0)$ and $\ensuremath{N}_1 = (x_1,\dots,x_d,1)$. At each node $\ensuremath{N}$ explored, $(x_1,x_2,\dots,x_d)$ are set, and we solve a linear relaxation of the problem with respect to the remaining variables $(x_{d+1},\dots,x_D)$ using the simplex method, then add $\ensuremath{N}$ to the list of nodes to explore. The algorithm can be summarized by the following steps: \begin{itemize} \item[\textbf{0.}] \textbf{Initialization}:\\ Initialize the list of nodes to explore with the root node. \item[\textbf{1.}] \textbf{Exploration}:\\ Pop the next node to explore from the list of nodes and compute its linear relaxation. \item[\textbf{2.}] \textbf{Bounding}:\\ If the current node relaxation value is less than the value of the current best solution found, proceed to Step 3; otherwise, drop the current node and go back to Step 1. \item[\textbf{3.}] \textbf{Update}:\\ If the current node relaxation is an integral solution, then it is an improving solution (note that an end leaf always yields an integral solution). Update the best current solution and proceed to Step 1.\\ Otherwise: \item[\textbf{4}.] \textbf{Branching}:\\ Compute the current node's children. For each child, check whether its descendants contain a valid solution (this can be done by summing the covers already used by the parent, the cover of the child node if used, and the covers available to the descendants). If the child node is valid, add it to the list of nodes to explore. Proceed to Step 1. \end{itemize} \begin{algorithm}[!h] \caption{Branch-and-bound}\label{branchandbound} \begin{algorithmic} \State \textcolor{gray}{\% \Call{lp\_solve}{} is the relaxation subroutine called when a node is explored} \Function{lp\_solve}{$\ensuremath{N}$} \State $(x_1,\dots,x_{d-1}) := \ensuremath{N}$ \Comment{At node $\ensuremath{N}$, the first $d-1$ variables are set} \State $(x_d,\dots,x_D) := \argmin\{\sum_{j=d}^{D}T_j x_j \text{, s. t.
} \ensuremath{\bold{A}} \cdot \ensuremath{\bold{x}} \geq \bold{1}\}$ \Comment{Optimization of non-set variables} \State \Return $\ensuremath{\bold{x}}_L := (x_1,\dots,x_d,x_{d+1},\dots,x_D)$ \EndFunction \\ \State \textcolor{gray}{\% Initialization} \State $\ensuremath{N}_{root}=()$ \State $\ensuremath{\mathcal{N}} := \{\ensuremath{N}_{root}\}$ \Comment{Start with root node} \State $\ensuremath{\bold{x}}_{best} := \ensuremath{\bold{x}}_F = (1\cdots1)$ \Comment{Best solution found so far (by default, $\ensuremath{\bold{x}}_F$ is a valid solution)} \\ \State \textcolor{gray}{\% Exploration} \While{$\ensuremath{\mathcal{N}}$ is not empty} \State $\ensuremath{N}:=\operatorname{pop}(\ensuremath{\mathcal{N}})$ \Comment{Take next node in $\ensuremath{\mathcal{N}}$} \State $\ensuremath{\bold{x}}_L := \Call{lp\_solve}{\ensuremath{N}}$ \Comment{Solve node relaxation} \\ \State \textcolor{gray}{\% Bounding} \If{ $\ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_L < \ensuremath{\bold{T}}^T \cdot \ensuremath{\bold{x}}_{best}$ } \Comment{Explore node $\ensuremath{N}$ only if it can improve best solution} \\ \State \textcolor{gray}{\% Update} \If { $\ensuremath{\bold{x}}_L \in \{0,1\}^D$ } \Comment{Check if $\ensuremath{\bold{x}}_L$ is an integral solution} \State $\ensuremath{\bold{x}}_{best} := \ensuremath{\bold{x}}_L$ \Else \State $(x_1,\dots,x_d) := \ensuremath{N}$\\ \State \textcolor{gray}{\% Branching} \For {$x \in \{0,1\}$} \Comment{Compute children of node $\ensuremath{N}$} \State $\ensuremath{N}_c := (x_1,\dots,x_d,x)$ \If {$\sum_{j=1}^{d}x_j\ensuremath{\bold{c}}_j+x\ensuremath{\bold{c}}_{d+1}+\sum_{j=d+2}^{D}\ensuremath{\bold{c}}_j\ \geq\ \bold{1}$} \Comment{Check child feasibility} \State $\ensuremath{\mathcal{N}} := \ensuremath{\mathcal{N}} \cup \{\ensuremath{N}_c\}$ \Comment{Add child to the candidate list} \EndIf \EndFor \EndIf \EndIf \EndWhile \State \textbf{return} $\ensuremath{\bold{x}}_{best}$ \end{algorithmic} \end{algorithm} This very generic description only presents the general idea of the method. Efficient implementations of the branch-and-bound method usually combine several techniques, such as cutting planes, diving heuristics and local branching, to improve bound estimation and speed. \subsection{Application example} In this section, we apply and describe the behavior of the branch-and-bound method on the example represented in Figure \ref{setcoverexample} and described by Equation (\ref{setcoverexampleequation}). \begin{itemize} \item $\ensuremath{\mathcal{N}}=\{\}$, $\ensuremath{\bold{x}}_{best}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=13$ :\\ Solving the root relaxation yields the linear solution $(0\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2})$ with cost $\tfrac{11}{2}\leq 13$. Root node children $(0)$ and $(1)$ are feasible, and thus added to the exploration list $\ensuremath{\mathcal{N}}:=\{(0),(1)\}$ \item $\ensuremath{\mathcal{N}}=\{(0),(1)\}$, $\ensuremath{\bold{x}}_{best}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=13$ :\\ Relaxation of $(0)$ yields the same linear solution $(0\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2})$ with cost $\tfrac{11}{2}$.
We add the children $(0,0)$ and $(0,1)$ to the exploration list $\ensuremath{\mathcal{N}}:=\{(1),(0,0),(0,1)\}$ \item $\ensuremath{\mathcal{N}}=\{(1),(0,0),(0,1)\}$, $\ensuremath{\bold{x}}_{best}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=13$ :\\ Relaxation of $(1)$ yields the linear optimal solution $\ensuremath{\bold{x}}_L=(1\ 0\ 1\ 1\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2})$ with cost $\tfrac{15}{2}<13$. We add the children $(1,0)$ and $(1,1)$ to the exploration list $\ensuremath{\mathcal{N}}:=\{(0,0),(0,1),(1,0),(1,1)\}$ \item $\ensuremath{\mathcal{N}}=\{(0,0),(0,1),(1,0),(1,1)\}$, $\ensuremath{\bold{x}}_{best}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=13$ :\\ Relaxation of $(0,0)$ yields the linear optimal solution $\ensuremath{\bold{x}}_L=(0\ 0\ 1\ 1\ 0\ 0\ 1\ 1)$ with cost $6<13$. $\ensuremath{\bold{x}}_L$ is an integral solution, thus we update the best current solution $\ensuremath{\bold{x}}_{best}:=\ensuremath{\bold{x}}_L;\ f_{best}:=6$. \end{itemize} At this point, we can already deduce that we have found an integer optimal solution. The root relaxation has linear optimal cost $\tfrac{11}{2}$. Since any integer solution is a valid linear solution, it has an integer cost greater than or equal to the linear optimal cost $\tfrac{11}{2}$, hence at least $6$. This suffices to prove the optimality of $\ensuremath{\bold{x}}_{best}=(0\ 0\ 1\ 1\ 0\ 0\ 1\ 1)$ for the integer program described by Equations (\ref{integerprogram},\ref{setcoverexampleequation}). \subsection{Multiple solutions enumeration} While we could terminate the exploration once we have found an optimal solution, we also have the possibility to pursue the exploration in order to find alternative optimal solutions. In engineering applications, multiple solutions are a desirable feature for engineers and operators, who can select a solution among multiple candidates based on their expertise. This choice in turn can be analyzed to define preferences, to add a secondary selection criterion to the method, or even to refine the model into a multi-objective optimization problem. Multiple solutions enumeration can be done by slightly modifying steps \textbf{2.} and \textbf{3.} of the branch-and-bound method: \begin{itemize} \item[\textbf{2.}] \textbf{Bounding}:\\ If the current node relaxation value is less than or equal to the value of the current best solution found, proceed to Step 3; otherwise, drop the current node and go back to Step 1. \item[\textbf{3.}] \textbf{Update and Enumerate}:\\ If the current node relaxation is an integral solution, then it is at least as good as the current best solution. If it is strictly better than the current best solution, empty the set of best solutions and update the best current solution; otherwise, add it to the set of best solutions. Proceed to Step 4 (as there could be other optimal solutions among the children of the current node).\\ \end{itemize} This results in modifications to the pseudo-code of Algorithm \ref{branchandbound}, as described in Algorithm \ref{enumerationbranchandbound}. \begin{algorithm}[!h] \caption{Multiple solutions enumeration branch-and-bound}\label{enumerationbranchandbound} \begin{algorithmic} \State \textcolor{gray}{\% Initialization} \State ...
\State $\ensuremath{\bold{x}}_{best} := \ensuremath{\bold{x}}_F = (1\cdots1)$ \Comment{Best solution found so far (by default, $\ensuremath{\bold{x}}_F$ is a valid solution)} \State $\mathcal{X}_{best} := \{\ensuremath{\bold{x}}_F\}$ \Comment{Set of best solutions found so far} \\ \State \textcolor{gray}{\% Exploration} \While{$\ensuremath{\mathcal{N}}$ is not empty} \State ... \\ \State \textcolor{gray}{\% Bounding} \If{ $\ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_L \leq \ensuremath{\bold{T}}^T \cdot \ensuremath{\bold{x}}_{best}$ } \Comment{Explore $\ensuremath{N}$ if its relaxation is at least as good as $\ensuremath{\bold{x}}_{best}$} \\ \State \textcolor{gray}{\% Update and Enumerate} \If { $\ensuremath{\bold{x}}_L \in \{0,1\}^D$ } \Comment{Check if $\ensuremath{\bold{x}}_L$ is an integral solution} \If{ $\ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_L < \ensuremath{\bold{T}}^T \cdot \ensuremath{\bold{x}}_{best}$ } \State $\ensuremath{\bold{x}}_{best} := \ensuremath{\bold{x}}_L$ \State $\mathcal{X}_{best} := \{\ensuremath{\bold{x}}_L\}$ \Else \State $\mathcal{X}_{best} := \mathcal{X}_{best} \cup \{\ensuremath{\bold{x}}_L\}$ \EndIf \EndIf \\ \State \textcolor{gray}{\% Branching} \For {$x \in \{0,1\}$} \State ... \EndFor \EndIf \EndWhile \State \textbf{return} $\mathcal{X}_{best}$ \end{algorithmic} \end{algorithm} If we pursue the method application to the numerical example previously described: \begin{itemize} \item $\ensuremath{\mathcal{N}}=\{(0,0),(0,1),(1,0),(1,1)\}$, $\ensuremath{\bold{x}}_{best}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=13$ :\\ Relaxation of $(0,0)$ yields the linear optimal solution $\ensuremath{\bold{x}}_L=(0\ 0\ 1\ 1\ 0\ 0\ 1\ 1)$ with cost $6\leq13$. $\ensuremath{\bold{x}}_L$ is an integral solution, thus we update the best current solution $\ensuremath{\bold{x}}_{best}:=\ensuremath{\bold{x}}_L;\ f_{best}:=6$.\\ We add the children $(0,0,0)$ and $(0,0,1)$ to the exploration list $\ensuremath{\mathcal{N}}$. \item $\ensuremath{\mathcal{N}}=\{(0,1),(1,0),(1,1),(0,0,0),(0,0,1)\}$, $\ensuremath{\bold{x}}_{best}=(0\ 0\ 1\ 1\ 0\ 0\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=6$ :\\ Relaxation of $(0,1)$ yields the linear optimal solution $\ensuremath{\bold{x}}_1=(0\ 1\ 1\ 0\ 0\ 1\ 1\ 0)$ with cost $6\leq6$. $\ensuremath{\bold{x}}_1$ is an integral solution, thus added to $\mathcal{X}_{best}:=\{\ensuremath{\bold{x}}_{best},\ensuremath{\bold{x}}_1\}$. We add the children $(0,1,0)$ and $(0,1,1)$ to the exploration list $\ensuremath{\mathcal{N}}$. \item $\ensuremath{\mathcal{N}}=\{(1,0),(1,1),(0,0,0),\dots\}$, $\ensuremath{\bold{x}}_{best}=(0\ 0\ 1\ 1\ 0\ 0\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=6$ :\\ Relaxation of $(1,0)$ yields the linear optimal solution $\ensuremath{\bold{x}}_L=(1\ 0\ 1\ 1\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2}\ \tfrac{1}{2})$ with cost $\tfrac{15}{2}>6$. We drop node $(1,0)$ and proceed with the next node. \item $\ensuremath{\mathcal{N}}=\{(1,1),(0,0,0),\dots\}$, $\ensuremath{\bold{x}}_{best}=(0\ 0\ 1\ 1\ 0\ 0\ 1\ 1)$, $f_{best}=\ensuremath{\bold{T}}^T\cdot\ensuremath{\bold{x}}_{best}=6$ :\\ Relaxation of $(1,1)$ yields the linear optimal solution $\ensuremath{\bold{x}}_L=(1\ 1\ 1\ 0\ 0\ 0\ 1\ 1)$ with cost $8>6$. We drop node $(1,1)$ and proceed with the next node. 
\end{itemize} This numerical example is graphically represented in Figure \ref{branchandboundapplication}, with the optimization phase, the enumeration phase and some node rejections. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{figures/branchandboundapplication.pdf} \caption{Graphical representation of the branch-and-bound application example} \label{branchandboundapplication} \end{figure} \subsection{Just-in-time criteria} One of the most interesting features of the branch-and-bound method from an operational point of view is the possibility to use a ``just-in-time" criterion. For example, a radar system with an embedded computer must optimize its cover just before a mission starts, but only has five minutes to perform the optimization. A ``just-in-time" criterion is a time-limit condition ensuring that, even if the optimum has not been reached, the algorithm returns the best solution found within the available lapse of time. Another strength of the method is the fact that linear relaxation provides a lower bound of the optimal solution value: $$B_{\ensuremath{\mathcal{N}}}=\min\{\ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_L\ :\ \ensuremath{\bold{x}}_L=\text{LP\_SOLVE}(\ensuremath{N}), \ensuremath{N} \in \ensuremath{\mathcal{N}} \}$$ thus, during the computation, we always have an interval bracketing the optimal solution value, above the lower bound but below the current best value: $$B_\ensuremath{\mathcal{N}} \leq \ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_{opt} \leq \ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_{best}$$ Knowing the lower bound, we can compute the \textit{(worst-case) relative optimality gap} as: $$\Delta_{opt} = \frac{\ensuremath{\bold{T}}^T\cdot \ensuremath{\bold{x}}_{best} - B_\ensuremath{\mathcal{N}}}{B_\ensuremath{\mathcal{N}}}$$ which gives, as a percentage, the best gain we can hope for from the optimal solution relative to the current best solution. The pseudo-code modifications required to account for a time limit and to provide the current lower bound are described in Algorithm \ref{jitbranchandbound}. \begin{algorithm}[!h] \caption{Just-in-time branch-and-bound}\label{jitbranchandbound} \begin{algorithmic} \\ \State \textcolor{gray}{\% Exploration} \State current\_time := time() \Comment{Get current time} \While{$\ensuremath{\mathcal{N}}$ is not empty AND current\_time $\leq$ time\_limit } \State ... \EndWhile \State \textbf{return} $\mathcal{X}_{best}$, $B_\ensuremath{\mathcal{N}}$ \end{algorithmic} \end{algorithm} In practice, if the algorithm has a broad choice of available covers, it will very quickly find a good quality solution, typically within a $10\%$ relative optimality gap. However, closing those last few percent to reach the optimal solution can be difficult. Because the decision space is often huge, the algorithm spends a long time crossing out possibilities. In some cases, the algorithm even finds the optimal solution quickly, and then spends a long time proving its optimality. \section{Application to Radar Engineering} In this section, we give a case study of radar search pattern optimization and its simulation results. We first present a quick informal description giving intuitive and quantitative insight into our mathematical model of the radar system. \subsection{Radar model} An active radar is a system capable of detecting distant metallic objects by sending electromagnetic waves and listening to the reflected echoes.
To perform detection in a given azimuth-elevation direction $(az,el)$, the radar antenna is electronically controlled to focus power in direction $(az,el)$, maximizing the radiation pattern in that direction. A signal containing a series of impulses is then sent through the radar. Upon reception, the reflected signal is filtered to detect echoes. A longer signal is more energetic and easier to detect after filtering. The energy received by the radar from a target at distance $R$ is \begin{equation} E_r = K\ \frac{g^2\ T}{R^4} \label{radarequation} \end{equation} where $K$ is a constant accounting for the radar emitting power, internal losses, target reflectivity, etc., $g$ is the antenna radiation pattern in direction $(az,el)$, and $T$ is the signal duration. Equation (\ref{radarequation}) is a simpler version of the \textit{radar equation} \cite{Skolnik2008}, a fundamental concept in radar theory. It formalizes the intuitive idea that the reflected energy increases with antenna directivity and signal duration, but decreases with the target distance. The radar has a certain detection threshold $E_t$, and detects a target only if its reflected energy is above this threshold. For a given radar system, we have a set of feasible rectangular radiation patterns for the antenna and a set of available signals. The combination of a radiation pattern and a time signal is called a \textit{dwell}. \subsection{Simulation parameters} The desired detection range is defined by a minimum distance $D_{\min}$ and a minimum altitude $H_{\min}$. We want to detect targets within that range, closer than $D_{\min}$ and below altitude $H_{\min}$, so the desired detection range is defined as: $$\ensuremath{R}_c(az,el)= \left\{\begin{array}{cl} D_{\min} & \text{ if } el \leq \operatorname{asin}\left(\frac{H_{\min}}{D_{\min}}\right)\\ \frac{H_{\min}}{\sin(el)} & \text{ otherwise } \end{array}\right.$$ Informally, the volume defined by the detection range resembles a sliced cylinder (Fig. \ref{desiredrange}). The radar can use two different types of signals: a short signal and a long signal. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/requiredrange.pdf} \caption{Desired detection range: elevation cut (left), azimuth cut (center) and 3D view (right)} \label{desiredrange} \end{figure} In our simulation, the detection grid $G$ is a 20$\times$20 lattice with 326 valid cells. We computed 866 feasible dwells in our study case, with 815 dwells using a short time signal with duration $T_s$ and 51 dwells using a long time signal with duration $T_l$. We compute the cost vector $\ensuremath{\bold{T}}$ of size 866, associating each dwell with its signal time duration. We want to find an optimal radar search pattern, i.e. a sub-collection of dwells among the 866 available dwells covering all 326 valid detection cells with minimal total time-budget. For each of the 866 dwells, we use Equation (\ref{radarequation}) to compute the dwell detection cover on the 326 cells. From the detection covers, we can compute the cover matrix $\ensuremath{\bold{A}}$ with shape 326 $\times$ 866. Having computed $\ensuremath{\bold{T}}$ and $\ensuremath{\bold{A}}$, we can use the branch-and-bound method described previously to search for an optimal radar search pattern. The corresponding integer program has 866 variables and 326 detection constraints. The optimization is done through the CPLEX solver \cite{CPLEX}, which implements an improved version of the branch-and-bound method.
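For readers who wish to reproduce the pipeline with open-source tools, the following sketch shows how a dwell cover could be derived from Equation (\ref{radarequation}) and how the resulting set cover integer program can be handed to a generic MILP solver (\texttt{scipy.optimize.milp}, available in SciPy $\geq$ 1.9). All numerical values (grid, threshold, dwell gains and durations) are placeholders for illustration, not the parameters of the study case.
\begin{verbatim}
# Illustrative pipeline with placeholder values (not the study-case data):
# 1) a dwell covers a cell if its achievable range exceeds the desired range,
# 2) the resulting set cover ILP is solved with an open-source MILP solver.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def achievable_range(K, g, T, E_t):
    # Invert Eq. (radarequation): detection when E_r = K g^2 T / R^4 >= E_t
    return (K * g**2 * T / E_t) ** 0.25

def desired_range(el, D_min, H_min):
    return D_min if el <= np.arcsin(H_min / D_min) else H_min / np.sin(el)

els = np.array([0.1, 0.4, 0.8, 1.2])              # tiny 1 x 4 grid (rad)
dwells = [(np.array([1.0, 0.8, 0.1, 0.0]), 2.0),  # (gain per cell, duration)
          (np.array([0.0, 0.2, 0.9, 1.0]), 1.0),
          (np.array([0.6, 0.6, 0.6, 0.6]), 3.0)]
K, E_t, D_min, H_min = 1.0e2, 1.0, 2.0, 1.0

R_c = np.array([desired_range(el, D_min, H_min) for el in els])
A = np.array([[1 if achievable_range(K, g[m], T, E_t) >= R_c[m] else 0
               for g, T in dwells] for m in range(len(els))])
costs = np.array([T for _, T in dwells])          # time budget of each dwell

res = milp(c=costs,
           constraints=LinearConstraint(A, lb=np.ones(len(els)), ub=np.inf),
           integrality=np.ones(len(dwells)), bounds=Bounds(0, 1))
print(res.x, res.fun)   # optimal time budget 3.0 for this toy instance
\end{verbatim}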
The total time required to find the solution is 5 seconds on an [email protected] processor. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/solution.pdf} \caption{From left to right: computed radar search pattern, emission pattern and detection range} \label{solution} \end{figure} \subsection{Optimal solution} The returned optimal solution is shown in Figure \ref{solution}: the left sub-figure shows the discrete covers of the 20 dwells used in the pattern. 10 dwells use a short signal (represented in blue) and cover high elevations, and 10 dwells use a long signal (represented in red) and cover low elevations. This result is explained by the fact that a radar must usually achieve a high detection range near the horizon (where targets are located) and a low detection range at high elevation (since most aircraft have a limited flying altitude). It makes sense to use longer, and thus ``more energetic", dwells at low elevations than at high elevations. \subsection{Enumeration} As we have seen before, there may be multiple optimal solutions. In this simulation we managed to find 3500 different optimal solutions in 5 minutes. However, this search is unlikely to be exhaustive: due to the wide choice of possible dwells, there is often an extremely high number of possible alternative optimal solutions. Finding all solutions is infeasible in practice. However, optimal solutions share certain characteristics: all solutions have 10 short-signal dwells at high elevations and 10 long-signal dwells at low elevations. The long-signal dwells (in red) are mostly the same for all optimal solutions found, and form an \textit{optimality invariant}. The short-signal dwells (in blue) are however different for each solution. Intuitively, the low-elevation area is more ``energetically demanding"; thus low-elevation detection constraints are the ``hardest constraints" of the problem, and do not leave a lot of choice for covering the low-elevation area. High-elevation detection constraints are in comparison ``easier" and can be satisfied by different covers. \begin{figure}[b] \centering \includegraphics[width=0.5\linewidth]{figures/alternatesolutions.pdf} \caption{Two other possible optimal solutions} \label{alternatesolutions} \end{figure} \section{Conclusion} The branch-and-bound method is a practical and powerful technique. It can be used as an exact algorithm if the exploration is pushed to its completion, when there is no branch left with a potentially better solution. It can also be used as a heuristic, with a stopping criterion based on a time limit or an optimality gap threshold. This is especially useful in operational situations with broad choices, when finding a good solution is easy, but proving optimality is difficult. The method is very generic, and can be used to solve many different combinatorial problems. Many of those problems have evident practical value and important applications in various industries, such as the set cover problem in radar applications. The versatility of the method and its various ``flavors" can be used for different purposes: enumeration permits analysis of the radar ``possibilities" during the design phase, while the just-in-time criterion improves resource management in operational situations. Branch-and-bound is an extremely efficient tool for a broad variety of engineering applications. \section*{Acknowledgments} This work is partly supported by a DGA-MRIS scholarship. \bibliographystyle{ietConference}
\section{Introduction} The complexity of the orbital problems that aerospace engineers commonly confront means that analytical solutions to them generally remain unknown. Because of that, simplified models that may capture the bulk of the dynamics of concern for a particular mission are customarily used in the preliminary steps of mission design \cite{Kozai1963,CuttingFrautnickBorn1978,ScheeresGumanVillac2001,LaraSanJuan2005,ColomboLuckingMcInnes2012}, but also in the search for efficient maneuvers for the end-of-life disposal of the spacecraft \cite{ArmellinSanJuanLara2015,Colomboetal2015}. Some of these simplified models have only two degrees of freedom (DOF) and, therefore, the phase space description can be approached with the usual tools of nonlinear dynamics, such as the computation of Poincar\'e surfaces of section \cite{Broucke1994,JorbaMasdemont1999}, the numerical continuation of periodic orbits and other invariant manifolds \cite{Lara1999,GomezMondelo2001,Lara2003,LaraRussellVillac2007,RussellLara2007}, or the computation of different stability indicators which may disclose the existence of stable regions in real models, as well as the dynamical channels connecting distant ones \cite{LaraRussellVillac2006,LaraRussellVillac2007Meccanica,Villac2008}. Alternatively, an analytical process in which the dimension of the system is reduced to a 1-DOF integrable approximation of the original problem is customarily carried out by filtering the higher frequencies of the motion using perturbation theory. The integrable approximation is valid in some region of the phase space and reflects the main characteristics of the long-term dynamics, which can be explored without the need for numerical integration through the (almost) instant representation of eccentricity-vector diagrams and inclination-eccentricity curves of frozen orbits \cite{CoffeyDepritDeprit1994,SanJuanLaraFerrer2006}. \par There are, however, some cases in which approximate descriptions of the dynamics, even qualitative ones, cannot be characterized by a simple model and, on the contrary, require dealing with full models. The paradigm of these problems is the design of low lunar orbits, a procedure in which, due to the irregular distribution of the mass of the moon, it is commonly accepted that at least a $50\times0$ truncation of the Selenopotential is required for the correct description of the long-term dynamics \cite{Konoplivetal1994,Roncoli2005,LaraSaedeleerFerrer2009,Lara2011}. This is not the only case in which full models are required, and the use of higher order truncations of the gravity potential has also been encouraged in the preliminary design of low-eccentricity, frozen earth orbits \cite{Cook1966,RosboroughOcampo1991,Cook1992}. \par The reduction to a 1-DOF system is still possible when dealing with full models, but the literal expressions to handle in the perturbation approach are enormous in this case, and, therefore, the assistance of a computer algebra system becomes essential \cite{Lara2010ICATT}. With current computational power, memory allocation is no longer a main concern, and literal expressions are customarily handled in expanded form up to high degrees. However, rendering eccentricity-inclination diagrams and inclination-eccentricity curves of frozen orbits can no longer be considered an instant operation, because the evaluation process is jeopardized by the unwieldy expressions obtained as the raw output of the symbolic processor.
Indeed, series of tens of thousands, or even hundreds of thousands, of literal terms are reported in the literature in relation to full perturbation models \cite{CoffeyNealSegermanTravisano1995}. A partial solution to this problem comes from post-processing the crude symbolic output to turn it into a smartly rearranged sequence. However, although broad simplification rules are part of the arsenal with which general symbolic algebra systems are equipped, designing efficient heuristic simplification procedures, which are required when dealing with implicit integration to avoid the limitations of series expansions in the eccentricity, is quite a challenging task, because these kinds of simplifications strongly depend on the algebraic structure of particular problems. Because of that, mathematical simplification is still regarded as an art \cite{Knuth1997}, and the implementation of useful simplification rules definitely requires strong skills in symbolic algebra manipulation together with an uncommon gift for identifying useful intermediate expressions. Hence, the development of efficient simplification and evaluation rules for coping with perturbed Keplerian motion remains only within the reach of a handful of selected experts \cite{CoffeyDeprit1980,Deprit1982,HealyTravisano1998}. \par Handling huge literal expressions cannot be avoided when higher orders of the perturbation approach are essential in the description of the dynamics \cite{WnukBreiter1991,Metrisetal1993,WnukJopek1994,LaraPalacianRussell2010,LaraPerezLopez2017}. On the contrary, in those cases in which the coupling of different perturbations is not of relevance to the analysis, the averaging procedure may preserve the main structural features of the potential model, thus avoiding the need for the typical blind, computer-based brute-force approach. This is, in particular, the case of the design of low lunar orbits, whose dynamics is mainly driven by the Selenopotential, and which provides the principal motivation for the current research. Indeed, analytical models based on low degree truncations of the Selenopotential not only fail to describe the quantitative behavior of high-inclination, low lunar orbits, but may also provide completely wrong qualitative descriptions of the phase space. This fact is clearly illustrated in Fig.~3 of \cite{LaraSaedeleerFerrer2009}, where it is shown that the actual argument of the periapsis of almost polar frozen orbits lies exactly in the direction opposite to the one predicted by a 7th-degree truncation of the potential. Hence the convenience of a preliminary exploration of the sensitivity of the dynamics, based on a gradual increase of the complexity of the gravitational potential on a harmonic coefficient-by-coefficient basis \cite{LaraFerrerSaedeleer2009,Lara2018Stardust}. Therefore, having efficient procedures available for evaluating algebraic expressions with a structure similar to the usual gravitational potential expansion is more than welcome. \par While the evaluation of the gravitational potential is efficiently done in Cartesian coordinates \cite{Cunningham1970,Pines1973,LundbergSchutz1988,FantinoCasotto2009}, the analytical investigation of the evolution of the orbital elements is better performed using the Lagrange planetary equations, which require the reformulation of the gravitational potential in orbital elements.
The formulation of the gravity potential in orbital elements shows the contribution of secular, long-, and short-period corrections to the pure Keplerian motion \cite{Kozai1959}. Relevant aspects of the dynamics are disclosed when focusing on the effects of secular and long-period terms, which can be studied after averaging the contribution of short-period effects, namely, those related to the period in which the mean anomaly advances by $2\pi$. From the mathematical point of view, this averaging is the result of a transformation from osculating to ``mean'' elements, whose explicit computation is of the utmost importance when dealing with unstable dynamics, as is the common case of mapping orbits about planetary satellites \cite{Lara2008,LaraPalacianYanguasCorral2010}. \par The direct conversion of the gravitational potential into orbital elements is quite feasible for low degree and order truncations. On the other hand, the computational burden grows enormously when converting higher orders, yet the use of computer algebra systems makes the task possible. Alternatively, efficient recursion formulas for the conversion of the gravitational potential into orbital elements up to arbitrary degree and order have been provided by Kaula \cite{Kaula1961,Kaula1966}, who organizes the potential as a multivariate Fourier series. In these series, the arguments of the trigonometric functions involve linear combinations of the mean anomaly, the argument of the periapsis, and the right ascension of the ascending node, as well as the Greenwich angle, whereas the coefficients of the trigonometric terms are obtained from Kaula's inclination and eccentricity functions ---the latter being special cases of Hansen coefficients \cite{Hansen1855}--- as well as the different powers of the inverse of the orbit semi-major axis. Equinoctial elements \cite{ArsenaultFordKoskela1970} are commonly used to avoid the singularities of circular and equatorial orbits that are inherent to the classical Keplerian elements formulation. Formulas for the conversion of the gravitational potential into equinoctial elements are provided in \cite{BrouckeCefola1972,CefolaBroucke1975}. \par Existing recursions are not limited to the construction of the gravitational potential alone, and are also available for the secular and long-period terms of the gravitational potential that define the mean elements dynamics. Indeed, when tesseral resonances are not of concern, the mean elements potential is reduced to the zonal harmonics contribution, a case in which Kaula's eccentricity functions are exact, contrary to the expansions in the eccentricity required in other instances. For this case, equivalent recursions to those of Kaula, but taking the point of view of perturbation theory using Delaunay canonical variables, have been later made available and complemented with analogous recursions for the construction of the generating function of the canonical transformation that performs the short-period averaging of the zonal potential, yet limited to a first order perturbation approach \cite{Saedeleer2005}. Analogous, but more involved recursions, exist also for third-body perturbations in different simplified models \cite{Kaula1962,LaskarBoue2010,PalacianVanegasYanguas2017}. 
\par The use of recursions notably speeds up the evaluation of full gravitational fields, and slight modifications of the recursions in \cite{Saedeleer2005}, together with additional recursions for the frozen orbits equation, ease the rendering of phase space diagrams in real time \cite{LaraSaedeleerFerrer2009}, even in the case in which a 100th-degree truncation of the zonal potential is taken into account \cite{Lara2011}. Even so, we will show that the performance of the recursions in \cite{Saedeleer2005} is clearly exceeded by that of Kaula's original recursion formulas for the averaged potential. Note, however, that these kinds of recursions are constrained to first order effects in the perturbation approach. That is, they are useful only in those cases in which there is not a clear prevalence of any of the potential terms, which, therefore, are all taken to be of the same order in the perturbation arrangement. The moon and Venus are relevant instances of this kind of gravitational potential. On the contrary, it is well known that the dynamics about earth-like bodies is dominated by the zonal harmonic coefficient of the second degree ($C_{2,0}$), whereas the rest of the coefficients are $\mathcal{O}(C_{2,0}^2)$. In consequence, second order effects of $C_{2,0}$ are as important as first order effects of the other coefficients, and, because of that, they must be taken into account in studies of the long-term dynamics about earth-like bodies \cite{Brouwer1959,Garfinkel1959,Kozai1959,CoffeyDepritDeprit1994}. Hence, to broaden the applicability of Kaula's recursions to this common case, they are complemented with the terms that model the long-term second order effects due to the zonal harmonic of the second degree. Since recursions for a second order averaging are not yet available, we simply add the expanded expressions of this second order effect to the (first order) recursion formulas. \par The paper is organized as follows. We present basic facts on the formulation of the gravitational potential in orbital elements in Section \ref{s:zonal}, where we also recall fundamental formulas that are pertinent to approaching the zonal potential Hamiltonian by perturbations, taking advantage of Kaula's recursions. To properly deal with second order effects of the zonal harmonic of the second degree in those cases in which they are relevant, the elimination of the parallax \cite{Deprit1981,LaraSanJuanLopezOchoa2013b,LaraSanJuanLopezOchoa2013c} is carried out first, in Section \ref{s:parallax}, where it is shown that this simplification does not affect the basic structure of Kaula's recursions. Kaula's eccentricity functions, which can be expressed in closed form because we only deal with the zonal part of the gravitational potential, are recovered at this stage. The short-period effects remaining after the elimination of the parallax are removed in Section \ref{s:delaunay} by performing a Delaunay normalization \cite{Deprit1982}, leading to the compact formulation of a mean elements Hamiltonian by means of Kaula-type recursions, which are supplemented with the expanded terms corresponding to the second order effects of the zonal harmonic coefficient of the second degree. Performance comparisons between Kaula's and other recursions in the literature are presented in Section \ref{s:performance}, showing the superiority of Kaula's formulation for higher degree truncations of the mean elements zonal potential.
Finally, the feasibility of displaying eccentricity-vector diagrams and inclination-eccentricity curves in real time, without restricting to the case of low eccentricities or linearized equations \cite{Cook1966,RosboroughOcampo1991,Cook1992}, is illustrated in Section \ref{s:averagedflow}, where a coefficient-by-coefficient approach is used to ascertain the correct truncation of the Selenopotential for increasing altitudes of a lunar orbiter over the surface of the moon. The different transformations and simplifications carried out to obtain the long-term dynamics are based on Deprit's implementation of the Lie transforms method \cite{Deprit1969}. For completeness, the basics of this perturbation method, which is standard these days \cite{BoccalettiPucacco1998v2,MeyerHall1992}, are summarized in \ref{s:LieTransforms}. \par \section{The Hamiltonian of the zonal potential} \label{s:zonal} Solution of Laplace's equation in spherical coordinates leads to the usual expansion of the gravity potential \cite[see][for instance]{MacMillan1958} \begin{equation} \label{geopot} U=-\frac{\mu}{r}\sum_{n\ge{0}}\frac{R_\oplus^n}{r^n} \sum_{m=0}^n\left(C_{n,m}\cos{m\lambda}+S_{n,m}\sin{m\lambda}\right)P_{n,m}(\sin\varphi), \end{equation} where $r$, $\varphi$, and $\lambda$ stand for radius, geocentric latitude and longitude, respectively; $P_{n,m}$ are associated Legendre polynomials, and $C_{n,m}$ and $S_{n,m}$ are harmonic coefficients. In addition to the harmonic coefficients, the values of the gravitational parameter $\mu$ and the equatorial radius of the central body $R_\oplus$ are what define a gravitational model \cite{WGS84,KonoplivBanerdtSjogren1999,KonoplivParkFolkner2016}. Due to the rotation of the central body, the gravitational potential depends on time when referred to an inertial frame. A customary simplification is to assume that the central body rotates with constant rate $n_\oplus$ about the polar axis, so that the time dependence is avoided when the problem is formulated in a rotating frame. \par Reformulation of Eq.~(\ref{geopot}) in orbital elements is easily done by recalling the conversion from spherical to rectangular coordinates in the rotating frame \begin{equation} \label{spherical} \sin\varphi=\frac{z}{r}, \qquad \sin\lambda=\frac{y}{\sqrt{x^2+y^2}}, \quad \cos\lambda=\frac{x}{\sqrt{x^2+y^2}}, \end{equation} where $x$, $y$, and $z$ need to be expressed in the usual Keplerian elements: $a$, $e$, $I$, $\Omega$, $\omega$, $M$, for orbit semi-major axis, eccentricity, inclination, right ascension of the ascending node, argument of the periapsis, and mean anomaly, respectively. This is easily done by carrying out the rotations that relate the orbital frame with the Cartesian frame \begin{equation} \label{rectangular} \left(\begin{array}{c} x\\ y\\ z\end{array}\right) =R_3(-h)\,R_1(-I)\,R_3(-\theta)\left(\begin{array}{c} r\\ 0\\ 0\end{array}\right), \end{equation} where $R_i$, $i=1,2,3$, denote standard rotation matrices, $h=\Omega-n_\oplus{t}$ is the argument of the ascending node of the orbit measured in the rotating frame, $\theta=f+\omega$ is the argument of the latitude, and $f$ is the true anomaly, which is an implicit function of $M$. Besides, the radius must be replaced in Eq.~(\ref{rectangular}) by the conic equation \begin{equation} \label{conic} r=\frac{p}{1+e\cos{f}}, \end{equation} in which $p=a\eta^2$ is the parameter of the conic and $\eta=\sqrt{1-e^2}$ is customarily known as the eccentricity function.
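As a simple numerical illustration of Eqs.~(\ref{spherical})--(\ref{conic}) (a sketch with made-up zonal coefficients roughly of the Earth's order of magnitude, not any particular gravity model), the position can be built from the orbital elements and the zonal part of Eq.~(\ref{geopot}) evaluated directly; the true anomaly is used as input to avoid solving the Kepler equation in this illustration.
\begin{verbatim}
# Sketch: evaluate the zonal part of Eq. (geopot) from orbital elements.
import numpy as np
from scipy.special import eval_legendre

def R1(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R3(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def zonal_potential(a_sma, e, I, h, w, f, mu, Re, C):   # C[n] = C_{n,0}
    p = a_sma * (1.0 - e**2)
    r = p / (1.0 + e * np.cos(f))                       # Eq. (conic)
    xyz = R3(-h) @ R1(-I) @ R3(-(f + w)) @ np.array([r, 0.0, 0.0])
    sphi = xyz[2] / r                                   # Eq. (spherical)
    return -mu / r * sum(C[n] * (Re / r)**n * eval_legendre(n, sphi)
                         for n in range(len(C)))

mu, Re = 398600.4418, 6378.137        # km^3/s^2, km (Earth-like values)
C = [1.0, 0.0, -1.08e-3, 2.5e-6]      # C_{0,0} = 1, then illustrative zonals
print(zonal_potential(7000.0, 0.01, np.radians(98.0), 0.3, 1.0, 2.0,
                      mu, Re, C))
\end{verbatim}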
\par Direct substitution of Eqs.~(\ref{spherical})--(\ref{conic}) into Eq.~(\ref{geopot}) provides the required conversion of the gravitational potential into orbital elements, and is the usual procedure when dealing with just a few terms of the gravitational potential. However, when the degree and order of the gravitational potential expansion go beyond the first few terms of Eq.~(\ref{geopot}), this transformation is notably expedited by using Kaula's developments \cite{Kaula1966}. \par On the other hand, the effects of tesseral perturbations are known to average out except for particular resonances of the satellite's mean motion with the rotation rate of the system. Then, we restrict ourselves to the zonal harmonics part of the gravity potential, which is obtained by neglecting in Eq.~(\ref{geopot}) those harmonic coefficients $S_{n,m}$, $C_{n,m}$ with $m>0$, viz. \begin{equation} \label{zonalpot} U=-\frac{\mu}{r}\sum_{n\ge{0}}\frac{R_\oplus^n}{r^n}C_{n,0}P_{n,0}(\sin\varphi). \end{equation} The zonal potential enjoys axial symmetry, and, therefore, is a 2-DOF problem that does not depend on the geocentric longitude. This reduction of the dimension releases the problem from the time dependency, and, therefore, the body-fixed frame is no longer needed and can be replaced by the usual inertial frame. \par Following Kaula's approach \cite{Kaula1966}, Eq.~(\ref{zonalpot}) is rewritten in orbital elements as \begin{equation} \label{zonalKaula} U=-\frac{\mu}{r}-\frac{\mu}{a}\left(\frac{a}{r}\right)^2\eta\sum_{i\ge2}V_i \end{equation} with \begin{equation} \label{Vi} V_i= \frac{R_\oplus^i}{a^i} \frac{{C}_{i,0}}{\eta^{2i-1}} \sum_{j=0}^i\mathcal{F}_{i,j}(I)\sum_{k=0}^{i-1}\binom{i-1}{k}e^k\cos^k\!f \cos[(i-2 j)(f+\omega)-i_\pi], \end{equation} in which $\mathcal{F}_{i,j}$ are Kaula's inclination functions particularized for the case of the zonal problem, namely, \begin{equation} \label{KaulaF} \mathcal{F}_{i,j}=\sum_{l=0}^{\min(j,i_0)}\frac{(-1)^{j-l-i_0}}{2^{2i-2l}}\frac{(2i-2l)!}{l!(i-l)!(i-2l)!}\binom{i-2l}{j-l}\sin^{i-2l}I, \quad i\ge2l, \end{equation} and the parity correction $i_\pi$ and the symbol $i_0$ adhere to the index notation in \cite{Lara2018Stardust}. That is, for a generic summation index $i$ \begin{equation} \label{indexconvention} i^\star=i\bmod2, \qquad i_\pi=\frac{\pi}{2}i^\star, \qquad i_m=\left\lfloor\frac{i-m}{2}\right\rfloor, \qquad i_m^\star=i_m+i^\star, \end{equation} where $m$ denotes an integer, and $\lfloor{p}/q\rfloor$ denotes the integer division of the integers $p$ and $q$. Besides, in what follows we shorten expressions by using the standard abbreviations $s\equiv\sin{I}$ and $c\equiv\cos{I}$. \par Note that Eqs.~(\ref{zonalKaula})--(\ref{Vi}) differ from the analogous equations in \cite{Kaula1966} (see also \cite{RosboroughOcampo1991}) in that we keep a divisor $r^2$ factoring the summation that comprises the non-centralities of the gravitational potential. The reason for that will be apparent later, in reference to the elimination of the parallax simplification. \par The flow derived from Eq.~(\ref{zonalKaula}) is obtained from the integration of the Lagrange planetary equations \cite[see sect.~10.2 of][for instance]{Battin1999}, but analytical solutions to these equations are not generally known even for the lower degree truncations of Eq.~(\ref{zonalKaula}). However, under certain conditions, useful analytical approximations can be computed by perturbation methods \cite{Nayfeh2004,BrouwerClemence1961}. 
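\par As a concrete illustration of Eqs.~(\ref{KaulaF})--(\ref{indexconvention}), the short Python sketch below (an illustrative implementation, not code distributed with this paper) evaluates the zonal inclination functions symbolically; for $i=2$ it reproduces the familiar values $\mathcal{F}_{2,0}=-\frac{3}{8}s^2$, $\mathcal{F}_{2,1}=\frac{3}{4}s^2-\frac{1}{2}$, and $\mathcal{F}_{2,2}=-\frac{3}{8}s^2$.
\begin{verbatim}
import sympy as sp
from math import comb, factorial

s = sp.symbols('s')                       # s = sin(I)

def F_zonal(i, j):
    """Zonal inclination function of Eq. (KaulaF); i0 = floor(i/2)."""
    i0 = i // 2
    total = sp.Integer(0)
    for l in range(min(j, i0) + 1):
        if not 0 <= j - l <= i - 2*l:     # the binomial coefficient vanishes
            continue
        sign = -1 if (j - l - i0) % 2 else 1
        total += sp.Rational(sign*factorial(2*i - 2*l),
                             2**(2*i - 2*l)*factorial(l)
                             *factorial(i - l)*factorial(i - 2*l)) \
                 * comb(i - 2*l, j - l) * s**(i - 2*l)
    return sp.expand(total)

for j in range(3):
    print(j, F_zonal(2, j))   # -3*s**2/8,  3*s**2/4 - 1/2,  -3*s**2/8
\end{verbatim}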
\par In particular, because the zonal problem is derived from a potential function, it admits Hamiltonian formulation and we can take advantage of Hamiltonian perturbation methods to approach the problem \cite{FerrazMello2007}. More specifically, we resort to the Lie transforms method as devised by Deprit \cite{Deprit1969}, which is summarized in \ref{s:LieTransforms} for the convenience of interested readers. In Deprit's approach, the Hamiltonian of the zonal problem is written \begin{equation} \label{zonalHam} \mathcal{H}=\sum_{m\ge0}\frac{\epsilon^m}{m!}\mathcal{H}_{m,0}, \end{equation} in which $\epsilon$ is a formal small parameter used to indicate the strength of each different perturbation, and we write the Hamiltonian terms in the form \begin{eqnarray} \label{H00} \mathcal{H}_{0,0} &=& -\frac{\mu}{2a}, \\ \label{H10} \mathcal{H}_{1,0} &=& \left(-\frac{\mu}{a}\right)\frac{a^2\eta}{r^2}V_2, \\ \label{H20} \mathcal{H}_{2,0} &=& 2\left(-\frac{\mu}{a}\right)\frac{a^2\eta}{r^2}\sum_{i\ge3}V_i, \\ \label{H30} \mathcal{H}_{m,0} &=& 0, \qquad m\ge3. \end{eqnarray} \par We note that orbital elements are not canonical variables, which are certainly required in Hamiltonian mechanics. However, because Keplerian elements provide an immediate insight into orbital problems, we regularly use the orbital elements notation in the following expressions with the understanding that the symbols used are not variables, but known explicit or implicit functions of some set of canonical variables. In particular, the construction of perturbation solutions to perturbed Keplerian problems is conveniently approached in Delaunay canonical variables $(\ell,g,h,L,G,H)$ corresponding to the mean anomaly $\ell=M$, the argument of the periapsis $g=\omega$, the argument of the node $h=\Omega$, the Delaunay action $L=\sqrt{\mu{a}}$, the total angular momentum $G=L\eta$, and the projection of the angular momentum on the polar axis $H=G\cos{I}$ \cite{Delaunay1860}. \par Note that the right ascension of the ascending node is a cyclic variable in the zonal Hamiltonian defined by the set of Eqs.~(\ref{Vi})--(\ref{H30}). For this reason, its conjugate momentum $H$ is an integral of the zonal problem. Common perturbation solutions to the zonal problem are based on finding a canonical transformation that completely removes the mean anomaly $\ell$ from the Hamiltonian. After truncation to the order of the transformation, the reduced Hamiltonian is of 1-DOF, and, in consequence, it is integrable in the prime variables. However, one must note that, because of the peculiarities of the elliptic motion, the mean anomaly does not appear explicitly in the zonal Hamiltonian, but implicitly through the true anomaly $f\equiv{f}(\ell,G,L)$. This fact makes the equation of the center $\phi=f-\ell$ appear in the computation of the generating function already at the first order of a perturbation approach, thus making the computation of higher orders of the solution in closed form notably difficult \cite{Kozai1962,OsacarPalacian1994}. The traditional alternative relies on expansions of the elliptic motion using trigonometric functions \cite{BrouwerClemence1961}, but this approach requires high order expansions for effectively dealing with moderate eccentricities, with the consequent long series to evaluate \cite{DepritRom1970,Wnuk1997}, and does not handle the case of high eccentricities efficiently. 
Even though alternative expansions of the elliptic motion have been suggested using elliptic function theory \cite{BrumbergFukushima1994}, leading to extensions of Kaula's recursions \cite{Brumbergetal1995}, they are rarely used due to the technicalities involved in the evaluation of the required special functions \cite{Fukushima2014ch}. Therefore, the standard way in the reduction of the zonal Hamiltonian starts by making a preparatory simplification, which is traditionally dubbed \emph{the elimination of the parallax} \cite{Deprit1981,LaraSanJuanLopezOchoa2013b,LaraSanJuanLopezOchoa2013c}. \section{Simplification: elimination of the parallax} \label{s:parallax} The aim of this simplification is to remove non-essential short-period terms from the zonal Hamiltonian in Eq.~(\ref{zonalHam}). That is, with this simplification, all the short-period terms are removed except those which are related to the fundamental Kepler equation; hence the flow derived from the Hamiltonian simplified in this way is termed quasi-Keplerian motion \cite{Deprit1981}. \par The elimination of the parallax simplification applies to full zonal models \cite{AlfriendCoffey1984}, but we do not need to do so here due to the low order of the perturbation theory we are interested in. Hence, we limit the application of this simplification just to the case of the zonal harmonic of the second degree. That is, short-period terms related to the explicit appearance of $f$ in Eq.~(\ref{H10}), which occur in the $V_2$ coefficient, as shown in Eq.~(\ref{Vi}), will be removed by the transformation up to the second order of $C_{2,0}$, whereas the explicit appearance of $r$ in Eq.~(\ref{H10}), which is also affected by short-period effects, cf.~Eq.~(\ref{conic}), will remain untouched by the transformation, as will the remaining terms of the Hamiltonian, which are given in Eq.~(\ref{H20}). \subsection{1st order} Then, the first step in solving the homological equation of Deprit's perturbation approach in \ref{s:LieTransforms} is to choose the new Hamiltonian term $\mathcal{H}_{0,1}$ to be comprised of those terms of $\mathcal{H}_{1,0}$ in Eq.~(\ref{H10}) that are free from the explicit appearance of $f$. In view of \begin{equation} V_2= -\frac{R_\oplus^2}{a^2}\frac{C_{2,0}}{\eta^3}\Big\{\frac{1}{4}\left(2-3s^2\right) (1+e\cos{f}) +\frac{3}{8}s^2\left[e\cos(f+2\omega)+2\cos(2f+2\omega)+e\cos(3f+2\omega)\right] \Big\}, \end{equation} we choose \begin{equation} \label{H01parallax} \mathcal{H}_{0,1}=-\frac{\mu}{a}\frac{a^2\eta}{r^2}\langle{V}_2\rangle_f, \end{equation} with \begin{equation} \label{V2averaged} \langle{V}_2\rangle_f = -C_{2,0}\frac{R_\oplus^2}{p^2}\frac{1}{4}\eta\left(2-3s^2\right). \end{equation} \par Then, $W_1$ is computed from Eq.~(\ref{homoDelo}) by simple quadrature as \[ W_1=\frac{1}{n}\int(\mathcal{H}_{1,0}-\mathcal{H}_{0,1})\,\text{d}\ell, \] a quadrature that can be solved in closed form using the differential relation between the true and mean anomalies \begin{equation} \label{dldf} a^2\eta\,\text{d}\ell=r^2\,\mathrm{d}f, \end{equation} which is derived from the preservation of the angular momentum in the Keplerian motion. 
Thus, \[ W_1=\frac{1}{n}\int(\mathcal{H}_{1,0}-\mathcal{H}_{0,1})\,\frac{r^2}{a^2\eta}\,\text{d}f =-L\int\left(V_2-\langle{V}_2\rangle_f\right)\,\text{d}f, \] and hence \begin{equation} \label{W1parallax} W_1=G\frac{R_\oplus^2}{p^2}\frac{C_{2,0}}{8}\left[e\left(4-6 s^2\right)\sin{f}+s^2\sum_{i=0}^3Q_i(e)\sin (if+2g)\right], \end{equation} with \begin{equation} \label{Q0123} Q_0=\frac{1+2\eta}{(1+\eta)^2}e^2, \qquad Q_1=3e, \qquad Q_2=3, \qquad Q_3=e. \end{equation} \par Note that the summand of $W_1$ related to the term $Q_0$ does not depend on $\ell$. It is an integration ``constant'' that has been added to the integral to guarantee that $W_1$ is free from hidden long-period terms in the argument of the periapsis \cite{Kozai1962,ExertierThesis1988}.\footnote{These kinds of functions that are free from the mean anomaly are sometimes called ``Kozai-like constants'' in artificial satellite theory \cite{LaraSanJuanLopezCefola2012}.} Indeed, it is simple to check that $\int_{0}^{2\pi}W_1\,\mathrm{d}\ell=0$, so that $W_1$ is indeed free from long-period terms. With the addition of this integration constant to the first order of the generating function we guarantee that the second order Hamiltonian, still to be computed, will not be deprived of any long-period effect \cite{LaraSanJuanLopezOchoa2013b,LaraSanJuanLopezOchoa2013c}. \par We hasten to add that dealing with integration constants of the generating function is not necessary from the point of view of constructing the perturbation solution, which involves the transformation from original to new variables and vice versa, yet it is highly recommended in those cases in which realistic information is expected to be obtained directly from the simplified Hamiltonian. \subsection{2nd order} At the second order, Eq.~(\ref{homoDelo}) leads to \begin{equation} \label{homo2parallax} W_2=\frac{1}{n}\int(\widetilde{\mathcal{H}}_{0,2}-\mathcal{H}_{0,2})\,\mathrm{d}\ell, \end{equation} where, from Deprit's recursion in Eq.~(\ref{deprittriangle}), \begin{equation} \label{H02tilde} \widetilde{\mathcal{H}}_{0,2}=\{\mathcal{H}_{0,1},W_1\}+\{\mathcal{H}_{1,0},W_1\}+\mathcal{H}_{2,0}. \end{equation} After evaluating the Poisson brackets and in view of Eq.~(\ref{H20}), one obtains \begin{equation} \label{tildeH02parallax} \widetilde{\mathcal{H}}_{0,2} = \frac{\mu}{a}\frac{a^2\eta}{r^2}\left\{ C_{2,0}^2\frac{R_\oplus^4}{p^4}\eta \sum_{j=0}^6\sum_{l=-2}^2\left[q_{j,l}+\frac{es^2}{(1+\eta)^2}\tilde{q}_{j,l}\right]\cos(jf+2l\omega) -2\sum_{i\ge3}V_i \right\} \end{equation} in which the non-null coefficients $q_{j,l}\equiv{q}_{j,l}(s,e,\eta)$ and $\tilde{q}_{j,l}\equiv\tilde{q}_{j,l}(s,e,\eta)$ are presented in Tables \ref{t:qjlparallax} and \ref{t:tildeqjlparallax}, respectively.\footnote{For computational purposes, one may note that, following the index convention in Eq.~(\ref{indexconvention}), the lower limit of the summation index $l$ for the $q_{j,l}$ coefficients is $l=j_1$, whereas in the case of $\tilde{q}_{j,l}$ it is $l=j_4$.} Recall that the term $\tilde{q}_{0,\pm1}$ is a consequence of including the term $Q_0$ of Eq.~(\ref{Q0123}) in the first order generating function, and can be neglected when having ``centered'' mean elements is not a requirement of the perturbation theory. 
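\par As a sanity check of the closed-form quadrature of the first order, one can verify with a computer algebra system that Eq.~(\ref{W1parallax}) satisfies $\mathrm{d}W_1/\mathrm{d}f=-L\,(V_2-\langle{V}_2\rangle_f)$, which is the homological equation rewritten with the help of Eq.~(\ref{dldf}). The following sympy sketch (illustrative only) does exactly that, treating $s=\sin I$ as an independent symbol.
\begin{verbatim}
import sympy as sp

a, e, f, g, L, C20, R, s = sp.symbols('a e f g L C20 R s', positive=True)
eta = sp.sqrt(1 - e**2)
p = a*eta**2                        # parameter of the conic
G = L*eta                           # total angular momentum

# V_2 as displayed before Eq. (H01parallax) and its f-free part, Eq. (V2averaged)
V2 = -R**2/a**2*C20/eta**3*(sp.Rational(1, 4)*(2 - 3*s**2)*(1 + e*sp.cos(f))
     + sp.Rational(3, 8)*s**2*(e*sp.cos(f + 2*g) + 2*sp.cos(2*f + 2*g)
                               + e*sp.cos(3*f + 2*g)))
V2f = -C20*R**2/p**2*sp.Rational(1, 4)*eta*(2 - 3*s**2)

# W_1 of Eq. (W1parallax) with the Q_i of Eq. (Q0123)
Q = [(1 + 2*eta)/(1 + eta)**2*e**2, 3*e, sp.Integer(3), e]
W1 = G*R**2/p**2*C20/sp.Integer(8)*(e*(4 - 6*s**2)*sp.sin(f)
     + s**2*sum(Q[i]*sp.sin(i*f + 2*g) for i in range(4)))

print(sp.simplify(sp.diff(W1, f) + L*(V2 - V2f)))    # expected output: 0
\end{verbatim}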
\par \begin{table*}[htb] % \centering \begin{tabular}{@{}llll@{}} $j$ & \multicolumn{1}{l}{$l=0$} & \multicolumn{1}{l}{$l=1$} & \multicolumn{1}{l}{$l=2$} \\[0.33ex] \hline $0$ & $q_{2,0}-\frac{1}{16}(21 s^4-42 s^2+20)$ & $\frac{3}{64} e^2 s^2 \left(14-15 s^2\right)$ & \vphantom{$\frac{M^9}{M^9}$} \\ [0.5ex] $1$ & $-\frac{1}{32}e\left(27s^4-108s^2+64\right)$ & $\frac{7}{16}es^2\left(11-12s^2\right)$ & \\ [0.5ex] $2$ & $\frac{3}{64}e^2\left(5s^4+8s^2-8\right)$ & $\frac{3}{16}e^2s^2\left(2-s^2\right)+\frac{1}{8}\left(20-21s^2\right)s^2$ & $-\frac{15}{128}e^2s^4$ \\ [0.5ex] $3$ & & $\frac{3}{16}es^2\left(8s^2-5\right)$ & $-\frac{9}{64}es^4$ \\ [0.5ex] $4$ & & $\frac{3}{32}e^2s^2\left(13s^2-10\right)$ & $\frac{3}{64}\left(4-e^2\right)s^4$ \\ [0.5ex] $5$ & & & $\phantom{-}\frac{15}{64}es^4$ \\ [0.5ex] $6$ & & & $\phantom{-}\frac{9}{128}e^2s^4$ \\ [0.33ex] \hline \end{tabular} \normalsize \caption{Coefficients $q_{j,l}$ in Eqs.~(\protect\ref{tildeH02parallax}). } \label{t:qjlparallax} \end{table*} \begin{table*}[htb] % \centering \begin{tabular}{@{}rlll@{}} $l$ & \multicolumn{1}{l}{$j=1$} & \multicolumn{1}{l}{$j=2$} \\[0.33ex] \hline $-2$ & $-\frac{3}{256}e^2s^2\chi^{-}$ & & \vphantom{$\frac{M^9}{M^9}$} \\ [0.75ex] $-1$ & $\frac{3}{128}\left[(9-42\eta-31\eta^2)s^2-\frac{2}{3}(5-54\eta-39\eta^2)\right]\chi^{-}$ & $\frac{1}{8}e(3s^2-2)\chi^{-}$ \\ [0.75ex] $0$ & $\frac{3}{128}(2+23\eta-31\eta^2)s^2\chi^{+}-\frac{3}{16}e^2(1+2\eta)$ & $\frac{3}{32} e[(15s^2-8)\eta-4]$ \\ [0.75ex] $+1$ & $\frac{3}{128}[(37\eta^2-14\eta-83)s^2+(58+12\eta-30\eta^2)]\chi^{+}$ & $-\frac{3}{8}e(3s^2-2)\chi^{+}$ \\ [0.75ex] $+2$ & $\frac{3}{256}(21+18\eta+\eta^2)s^2\chi^{-}$ & $\frac{15}{32} e (\eta +2) s^2$ \\ [0.33ex] \hline \\[-1ex] $l$ & \multicolumn{1}{l}{$j=3$} & \multicolumn{1}{l}{$j=4$} & \multicolumn{1}{l}{$j=5$} \\[0.33ex] \hline $-1$ & $\frac{3}{16}e\tilde{q}_{2,-1}$ & \vphantom{$\frac{M^9}{M^9}$} \\ [0.75ex] $0$ & $\frac{3}{256}(61\eta^2+66\eta-23)s^2\chi^{-}-\frac{3}{16}e^2(1+2\eta)$ & $-\frac{9}{32}es^2\chi^{-}$ & $\frac{5}{24}e\tilde{q}_{4,0}$ \\ [0.75ex] $+1$ & $\frac{3}{16}e\tilde{q}_{2,1}$ \\ [0.75ex] $+2$ & $\frac{9}{256}(39-6\eta-5\eta^2)s^2\chi^{+}$ & $\frac{27}{32}es^2\chi^{+}$ & $\frac{5}{24}e\tilde{q}_{4,2}$ \\ [0.33ex] \hline \end{tabular} \normalsize \caption{Coefficients $\tilde{q}_{j,l}$ in Eqs.~(\protect\ref{tildeH02parallax}); $\tilde{q}_{0,\pm1}=\frac{3}{16}e(1+2\eta)(4-5s^2)$ and $\chi^{\pm}\equiv1\pm\eta$.} \label{t:tildeqjlparallax} \end{table*} In the same way as in the first order, the new Hamiltonian term $\mathcal{H}_{0,2}$ is chosen to be made of those terms of $\widetilde{\mathcal{H}}_{0,2}$ that are free from the explicit appearance of $f$ but keep the explicit appearance of $r$ in Eq.~(\ref{tildeH02parallax}) untouched. Hence \begin{equation} \label{H02parallax} \mathcal{H}_{0,2} = 2\frac{\mu}{a}\frac{a^2\eta}{r^2}\left\{ C_{2,0}^2\frac{R_\oplus^4}{p^4}\eta \left[\frac{q_{0,0}}{2}+\left(q_{0,1}+\frac{es^2\tilde{q}_{0,1}}{(1+\eta)^2}\right)\!\cos2\omega\right] -\sum_{i\ge3}\langle{V}_i\rangle_f\right\}, \end{equation} where $\langle{V}_i\rangle_f$ comprises the terms of Eq.~(\ref{Vi}) that are free from short-period effects depending on $f$. It is computed as follows. \par The dependency of $V_i$ on the true anomaly is made explicit by expanding Eq.~(\ref{Vi}) as a Fourier series in $f$. This is done by expanding first \begin{equation*} \cos^kf\cos[m(f+\omega)-i_\pi]= \cos^kf\cos{mf}\cos(m\omega-i_\pi)-\cos^kf\sin{mf}\sin(m\omega-i_\pi), \qquad m=(i-2j). 
\end{equation*} Then, using the standard trigonometric reductions \begin{equation*} \cos\beta\cos^k\alpha=\frac{\cos (\alpha +\beta )+\cos (\alpha -\beta )}{2}\cos^{k-1}\alpha, \qquad \sin\beta\cos^k\alpha=\frac{\sin (\alpha +\beta )-\sin (\alpha -\beta )}{2}\cos^{k-1}\alpha, \end{equation*} one easily arrives to the recursion \begin{equation} \label{recurre} \cos^k\!f\left\{\begin{array}{l} \cos(i-2 j)f \\ \sin(i-2 j)f \end{array}\right\} = \frac{1}{2^k}\sum_{l=0}^k\binom{k}{k-l}\left\{\begin{array}{l} \cos(i-2j-k+2l)f \\ \sin(i-2j-k+2l)f \end{array}\right\}, \end{equation} which shows that the only terms in Eq.~(\ref{Vi}) that are free from $f$ come from those terms of Eq.~(\ref{recurre}) such that $l=\frac{1}{2}[k-(i-2j)]$ or $k-l=\frac{1}{2}(k+i)-j$. Hence, \begin{equation} \label{VinofG} \langle{V}_i\rangle_f= \frac{R_\oplus^i}{a^i}C_{i,0}\sum_{j=0}^i\mathcal{F}_{i,j}(s)\mathcal{G}_{i,j}(e,\eta)\cos[(i-2j)\omega-i_\pi], \end{equation} where \begin{equation} \label{miG} \mathcal{G}_{i,j}=\frac{1}{\eta^{2i-1}} \sum_{k=0}^{i-1}\binom{i-1}{k}\binom{k}{\frac{k+i}{2}-j}\frac{e^k}{2^k}. \end{equation} \par As expected, the functions $\mathcal{G}_{i,j}$ in Eq.~(\ref{miG}) are no more than Kaula's eccentricity functions for the particular case of the zonal problem. Indeed, because the summation index $l$ in Eq.~(\ref{recurre}) is integer, $k+i$ must be even in Eq.~(\ref{miG}), which, therefore, can be rearranged in the efficient form \begin{equation} \label{KaulaG} \mathcal{G}_{i,j}=\frac{1}{\eta^{2i-1} }\sum_{l=0}^{\tilde\jmath-1}\binom{i-1}{q}\,\binom{q}{l}\,\frac{e^q}{2^q}, \qquad q=2l+i-2\tilde\jmath, \qquad \left\{ \begin{array}{rcl} i\ge2j & \Rightarrow & \tilde\jmath=j \\[0.5ex] i<2j & \Rightarrow & \tilde\jmath=i-j \end{array}, \right. \end{equation} proposed by Kaula, cf.~Eq. 3.66 of \cite{Kaula1966}. \par Taking into account the peculiarities of the summations for the cases of odd and even harmonics, Eq.~(\ref{VinofG}) is more efficiently organized in the form \begin{equation} \label{Vistar} \langle{V}_i\rangle_f= \frac{R_\oplus^i}{a^i}C_{i,0}\sum_{j=0}^{i_2} (2-\delta_{j+i^\star,0})\mathcal{F}_{i,i_0^\star+j}(s)\mathcal{G}_{i,i_0^\star+j}(e,\eta)\cos\left[(2j+i^\star)\omega+i_\pi\right], \end{equation} in which $\delta_{i,j}$ is the usual Kronecker delta function. Note that Eq.~(\ref{Vistar}) also applies to the term $\langle{V}_2\rangle_f$, which was displayed expanded in Eq.~(\ref{V2averaged}). \par \section{Delaunay normalization} \label{s:delaunay} The transformed, simplified Hamiltonian after the elimination of the parallax is obtained by replacing the original Delaunay variables by new, prime ones, viz. $(\ell',g',h',L',G',H')$. 
Hence, as follows from Eqs.~(\ref{H01parallax}) and (\ref{H02parallax}), up to the second order, the new Hamiltonian is \[ \mathcal{H}'=\mathcal{H}'_{0,0}+\epsilon\mathcal{H}'_{1,0}+\frac{\epsilon^2}{2}\mathcal{H}'_{2,0}+\mathcal{O}(C_{2,0}^3) \] with \begin{eqnarray} \label{H00p} \mathcal{H}'_{0,0} &=& -\frac{\mu}{2a}, \\ \label{H10p} \mathcal{H}'_{1,0} &=& -\frac{\mu}{a}\frac{a^2\eta}{r^2}\langle{V}_2\rangle_f, \\ \label{H20p} \mathcal{H}'_{2,0} &=& 2\frac{\mu}{a}\frac{a^2\eta}{r^2}\left\{C_{2,0}^2\frac{R_\oplus^4}{p^4}\eta \left[\frac{q_{0,0}}{2}+\left(q_{0,1}+\frac{es^2}{(1+\eta)^2}\tilde{q}_{0,1}\right)\cos2\omega\right] -\sum_{i\ge3}\langle{V}_i\rangle_f\right\}, \end{eqnarray} where $\langle{V}_2\rangle_f$ and $\langle{V}_i\rangle_f$ are given in Eqs.~(\ref{V2averaged}) and (\ref{Vistar}), respectively, and all symbols are now assumed to be functions of the prime Delaunay variables. \par The elimination of the mean anomaly now becomes trivial. Indeed, at the first order we choose \begin{equation} \label{H01Delaunay} \mathcal{H}_{0,1}'=\frac{1}{2\pi}\int_0^{2\pi}\mathcal{H}'_{1,0}\,\mathrm{d}\ell' =\frac{1}{2\pi}\int_0^{2\pi}\mathcal{H}'_{1,0}\frac{r^2}{a^2\eta}\,\mathrm{d}f =-\frac{\mu}{a}\langle{V}_2\rangle_f. \end{equation} The first order term of the generating function is computed from Eq.~(\ref{homoDelo}) \[ W'_1=\frac{1}{n}\int(\mathcal{H}'_{1,0}-\mathcal{H}'_{0,1})\,\text{d}\ell' =\frac{1}{n}\left[-\mathcal{H}'_{0,1}\ell'+\int\mathcal{H}'_{1,0}\frac{r^2}{a^2\eta}\,\text{d}f\right] =-L\langle{V}_2\rangle_f\phi \] where $\phi=f-\ell'$ is the equation of the center. \par The term $\widetilde{\mathcal{H}}'_{0,2}$ is computed again from Deprit's recursion in Eq.~(\ref{deprittriangle}), which gives formally the same expression as Eq.~(\ref{H02tilde}), but now it must be formulated in the tilde functions and variables. The new Hamiltonian term $\mathcal{H}'_{0,2}$ is selected to be free from short-period terms, viz. \[ \mathcal{H}'_{0,2}=\frac{1}{2\pi}\int_0^{2\pi}\widetilde{\mathcal{H}}'_{0,2}\,\mathrm{d}\ell' =\mathcal{H}'_{0,2,1}+\mathcal{H}'_{0,2,2}+\mathcal{H}'_{0,2,3}, \] where \begin{eqnarray*} \mathcal{H}'_{0,2,1} &=& \frac{1}{2\pi}\int_0^{2\pi}\left(\{\mathcal{H}'_{0,1},W'_1\}+\{\mathcal{H}'_{1,0},W'_1\}\right)\text{d}\ell' \\ \mathcal{H}'_{0,2,2} &=& \frac{1}{2\pi}\int_0^{2\pi} 2\frac{\mu}{a}\frac{a^2\eta}{r^2}C_{2,0}^2\frac{R_\oplus^4}{p^4}\eta \left\{\frac{q_{0,0}}2+\left[q_{0,1}+\frac{es^2}{(1+\eta)^2}\tilde{q}_{0,1}\right]\!\cos2\omega\right\}\,\text{d}\ell', \\ \mathcal{H}'_{0,2,3} &=& \frac{1}{2\pi}\int_0^{2\pi} \Bigg(-2\frac{\mu}{a}\frac{a^2\eta}{r^2}\sum_{i\ge3}\langle{V}_i\rangle_f \Bigg)\,\text{d}\ell', \end{eqnarray*} After evaluating the Poisson brackets, and using, when required, the differential relation in Eq.~(\ref{dldf}), the quadratures are solved in closed form of the eccentricity to get \begin{eqnarray*} \mathcal{H}'_{0,2,1} &=& -\frac{\mu}{2a}C_{2,0}^2\frac{R_\oplus^4}{p^4}\frac{1}{8}\eta(1+3\eta)(2-3s^2)^2, \\ \mathcal{H}'_{0,2,2} &=& 2\frac{\mu}{a}C_{2,0}^2\frac{R_\oplus^4}{p^4}\eta\left\{ \frac{q_{0,0}}{2}+\left[q_{0,1}+\frac{es^2}{(1+\eta)^2}\tilde{q}_{0,1}\right]\!\cos2\omega\right\}, \\ \mathcal{H}'_{0,2,3} &=& -2\frac{\mu}{a}\sum_{i\ge3}\langle{V}_i\rangle_f. 
\end{eqnarray*} Hence \begin{eqnarray} \label{H02Delaunay} \mathcal{H}'_{0,2} &=& -\frac{\mu}{a}2\sum_{i\ge3}\langle{V}_i\rangle_f -\frac{\mu}{a}\frac{R_\oplus^4}{p^4}C_{2,0}^2\eta \left\{\frac{1+3\eta}{16}(2-3s^2)^2-q_{0,0}-2\left[q_{0,1}+\frac{es^2\tilde{q}_{0,1}}{(1+\eta)^2}\right]\cos2\omega\right\}. \end{eqnarray} \par Up to $\mathcal{O}(\epsilon^3)=\mathcal{O}(C_{2,0}^3)$, the new Hamiltonian is \begin{equation} \label{Hampp} \mathcal{H}''=\mathcal{H}''_{0,0}+\epsilon\mathcal{H}''_{1,0}+\frac{\epsilon^2}{2}\mathcal{H}''_{2,0}, \end{equation} where the terms $\mathcal{H}''_{m,0}$ $(m=0,1,2)$ are obtained from the corresponding ones $\mathcal{H}'_{0,m}$ given by Eqs.~(\ref{H00p}), (\ref{H01Delaunay}), and (\ref{H02Delaunay}), respectively, by replacing prime by double prime variables $(\ell'',g'',h'',L'',G'',H'')$. \par Hence, if we neglect the term $\tilde{q}_{0,1}$ from Eq.~(\ref{H02Delaunay}) ---which is a consequence of the integration constant introduced in Eq.~(\ref{W1parallax}) at the 1st order of the elimination of the parallax simplification--- the expanded Hamiltonian in Appendix A of \cite{CoffeyDepritDeprit1994} can be replaced by the compact expression \[ \mathcal{H}= \frac{G^2}{p^2}\left\{C_{2,0}^2\eta^3\frac{R_\oplus^4}{p^4} \left[-\frac{1+3\eta}{32}(2-3s^2)^2+\frac{1}{2}q_{0,0}+q_{0,1}\cos2\omega\right] -\eta^2\sum_{i\ge2}\langle{V}_i\rangle_f\right\}, \] together with the use of Kaula's recursion in Eq.~(\ref{Vistar}). \section{Kaula-type recursion's performance} \label{s:performance} The Hamiltonian scaling used in Eqs.~(\ref{H00})--(\ref{H30}) is valid only in the case of earth-like bodies \cite{CoffeyDepritDeprit1994}. Interesting as this case may be, there are cases, as happens with the gravitational potentials of Venus or the moon, in which the prevalence of the $C_{2,0}$ coefficient is not enough to define a clear scaling of the harmonic coefficients, which, therefore, are all taken to be of the same order in the perturbation arrangement. In these cases, 2nd order effects of the second zonal harmonic are not needed and the elimination of the parallax simplification can be avoided. Then, the mean elements Hamiltonian (\ref{Hampp}) is obtained after a single canonical transformation that results in \begin{equation} \label{Hmean} \mathcal{H}'=- \frac{\mu}{2a}-\frac{\mu}{a}\sum_{i\ge2}\langle{V}_i\rangle_f, \end{equation} where $\langle{V}_i\rangle_f$ is given in Eq.~(\ref{Vistar}). The derivation and arrangement of Eq.~(\ref{Hmean}) are definitely simpler than those of the corresponding expressions in \cite{Saedeleer2005}. \par We compare the efficiency of Eq.~(\ref{Hmean}) in the construction of the mean elements Hamiltonian with the corresponding expressions proposed in \cite{Saedeleer2005}, but also with the improved version of these equations developed in \cite{LaraSaedeleerFerrer2009}. The results of the comparisons are summarized in Fig.~\ref{f:Ham}, where the ratio of the time needed in the construction of a given term $n$ of the averaged potential is displayed for the three following cases: \begin{itemize} \item Eqs.~(48)--(50) of Ref.~\cite[][p.~250]{Saedeleer2005} compared to Eq.~(\ref{Hmean}), represented by blue dots, \item improved equations in Ref.~\cite{LaraSaedeleerFerrer2009} compared to Eq.~(\ref{Hmean}), represented by magenta dots, \item Eqs.~(48)--(50) of Ref.~\cite{Saedeleer2005} compared to improved equations in Ref.~\cite{LaraSaedeleerFerrer2009}, represented by yellow dots. 
\end{itemize} As shown in the figure, all the approaches spend similar times for the lower degrees of the potential. Besides, the recursions in \cite{LaraSaedeleerFerrer2009} reduce the time needed by the original recursions in \cite{Saedeleer2005} to about 80\%. But even in that case, De Saedeleer's recursions take a clearly increasing time with respect to Kaula's recursions, at a rate that is approximately proportional to one tenth of the zonal harmonic degree $n$. Thus, in De Saedeleer's (improved) approach, the time spent in constructing the zonal term of 50th degree is $\sim5$ times longer than the time needed when using Kaula's recursions, $\sim10$ times longer for the zonal term of 100th degree, or $\sim20$ times longer for the zonal term of 200th degree. \par \begin{figure}[htb] \centering \includegraphics[scale=0.7]{ComparaH.pdf} \caption{Performance of Eq.~(\ref{Hmean}) in the construction of the mean elements Hamiltonian compared to Eqs.~(49)--(50) of \cite{Saedeleer2005} and corresponding improved recursions in \cite{LaraSaedeleerFerrer2009}. } \label{f:Ham} \end{figure} In spite of the clear advantage of using Kaula's recursions in the construction of the mean elements Hamiltonian, all the approaches are quite feasible with the current computational power. Indeed, as shown in Fig.~\ref{f:Htimes}, even for the higher degrees, the analytical expression of the mean elements Hamiltonian term is obtained with Wolfram \textit{Mathematica}\raisebox{.9ex}{\tiny\textregistered} 9 software in just a few seconds when using a 2.8 GHz Intel Core i7 with 16 GB of RAM running under macOS High Sierra 10.13.5. However, while the computing time grows with the degree at a cubic rate when using De Saedeleer's approach, it grows only slightly faster than quadratically when using Kaula's recursions. \begin{figure}[htb] \centering \includegraphics[scale=0.7]{ComparaHtime.pdf} \caption{Time spent in the computation of each term of the mean elements Hamiltonian with the different recursions. } \label{f:Htimes} \end{figure} \section{The brute force approach} When the Hamiltonian normalization process is approached directly by perturbations with the help of a computer algebra system, the highest efficiency is obtained when all the analytical expressions to be handled in the computation of the Poisson brackets, as well as in other operators that feed the Lie transform procedure, are fully expanded. Because of that, the number of terms required by particular perturbation theories (Hamiltonian, generating function, periodic corrections) is repeatedly reported in the literature \cite[see][among others]{DepritRom1970,CoffeyAlfriend1981,CoffeyDeprit1982,CoffeyNealSegermanTravisano1995,LaraSanJuanLopezOchoa2013c}. Once the perturbation problem has been solved, there is no doubt that the perturbation solution must be exactly the same as the one obtained with the recursions. However, the latter provides the solution in a more organized way, in which the necessary expressions take the structure of Poisson series \cite{DanbyDepritRom1965,DepritRom1968} where the coefficients of the trigonometric functions are arranged in the form of eccentricity and inclination polynomials, rather than expanded monomials, as can be checked in Eq.~(\ref{Vistar}). 
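\par To make that structure explicit, the following Python sketch (an illustration only, not the code used for the timings reported above) builds the averaged terms $\langle{V}_i\rangle_f$ directly from Eqs.~(\ref{KaulaF}), (\ref{miG}), and (\ref{VinofG}); for $i=2$ it reproduces Eq.~(\ref{V2averaged}) up to the factor $(R_\oplus/a)^2C_{2,0}$, and for $i=3$ it returns the familiar long-period term proportional to $e\sin\omega$.
\begin{verbatim}
import sympy as sp
from math import comb, factorial

e, eta, s, w = sp.symbols('e eta s omega')   # eta = sqrt(1-e^2), s = sin(I), w = omega

def F(i, j):
    """Zonal inclination function, Eq. (KaulaF)."""
    i0 = i // 2
    tot = sp.Integer(0)
    for l in range(min(j, i0) + 1):
        if not 0 <= j - l <= i - 2*l:
            continue
        sign = -1 if (j - l - i0) % 2 else 1
        tot += sp.Rational(sign*factorial(2*i - 2*l),
                           2**(2*i - 2*l)*factorial(l)*factorial(i - l)
                           *factorial(i - 2*l))*comb(i - 2*l, j - l)*s**(i - 2*l)
    return tot

def G(i, j):
    """Zonal eccentricity function, Eq. (miG); half-integer binomials vanish."""
    tot = sp.Integer(0)
    for k in range(i):
        if (k + i) % 2:
            continue
        m = (k + i)//2 - j
        if 0 <= m <= k:
            tot += comb(i - 1, k)*comb(k, m)*sp.Rational(1, 2**k)*e**k
    return tot/eta**(2*i - 1)

def Vbar(i):
    """<V_i>_f of Eq. (VinofG), up to the overall factor (R/a)^i C_{i,0}."""
    ipi = sp.pi/2*(i % 2)
    return sum(F(i, j)*G(i, j)*sp.cos((i - 2*j)*w - ipi) for j in range(i + 1))

print(sp.simplify(Vbar(2)))   # (3*s**2 - 2)/(4*eta**3), cf. Eq. (V2averaged)
print(sp.simplify(Vbar(3)))   # long-period C_{3,0}-type term, proportional to e*sin(omega)
\end{verbatim}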
\par Although the Lie transforms procedure is straightforward in the case of a simple first order approach, which was precisely the case tested in Section \ref{s:performance}, the fact that it starts from reformulating the zonal gravitational potential in orbital elements, which is given in Eq.~(\ref{zonalpot}) in spherical coordinates, hampers the brute force approach from the beginning. The whole procedure is relieved from unnecessary computations when starting from Eqs.~(\ref{zonalKaula})--(\ref{Vi}), which notably ease the computation of the mean elements Hamiltonian in closed form of the eccentricity using the differential relation in Eq.~(\ref{dldf}). But even when this shortcut is taken, the computations involved in the customary reduction of the powers of trigonometric functions that appear in Eq.~(\ref{Vi}) to trigonometric functions with combined arguments very soon need to handle notable amounts of memory, thus making the computation time grow roughly quintically from degree to degree. \par The close to exponential growth of the Lie transforms approach is illustrated in Fig.~\ref{f:LieHamt}, on a logarithmic scale, where the computations with the Lie transforms were extended only to the 50th degree term of the zonal gravitational potential. The Lie transforms performance relative to both types of recursions is depicted in Fig.~\ref{f:HamLie}, where the performance of Kaula's recursions over those in \cite{Saedeleer2005} is also shown as reference. It can be observed in Fig.~\ref{f:HamLie} that, from the beginning, the performance is at least 10 times better on the side of the recursions; it immediately grows to 100 times better even for the lower degrees, and very soon reaches 1000 times better when the degree is only moderate. \begin{figure}[htb] \centering \includegraphics[scale=0.7]{ComparaLietime.pdf} \caption{Time spent in the computation of each term of the mean elements Hamiltonian with the brute force approach (Lie transforms), and De Saedeleer's and Kaula's recursions. } \label{f:LieHamt} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.7]{ComparaLie.pdf} \caption{Performance of the recursions (Eq.~(\ref{Hmean}) and those of \cite{Saedeleer2005,LaraSaedeleerFerrer2009}) relative to the brute force approach (Lie transforms) in the construction of the mean elements Hamiltonian; the performance of Kaula's recursions relative to those of \cite{Saedeleer2005} is also shown as reference. } \label{f:HamLie} \end{figure} The numbers obtained in our comparisons with the direct implementation of the perturbation method by Lie transforms are just provided as an illustration of the performance, because the perturbation approach may admit different implementations. We used our long experience with the Lie transforms method to avoid making the comparisons unbalanced, yet in no way do we claim that the procedure cannot be sped up. However, due to the results obtained, it is not expected that more efficient implementations of the perturbation approach, if possible, will notably change our reported results. \par Also, it may be argued against our comparisons that we relied on a general purpose algebra system whereas the computations can be done much more efficiently if this task is programmed in a specific symbolic manipulator. This is obviously true, but analogous improvements would be found in the recursions if they are likewise programmed with a specific tool. \par \section{Evaluation of the perturbation solution} \label{s:averagedflow} Once the perturbation solution has been computed by any means, it remains to be evaluated for given initial conditions. 
Recursions can obviously be efficiently programmed in platforms with limited memory. On the contrary, literal expressions may require a much larger amount of memory to be stored. Since memory limitation is not a major issue these days, one might be tempted to forgo the compact elegance of recursions in favor of the alacrity of explicit expressions. However, the use of recursions also provides great versatility in the selection of the perturbation model, which is a non-negligible advantage in different applications. One important case in which this versatility is of great help is in the process of assessing the reliability of different simplified models to be used for mission design purposes. \par Thus, the question commonly arises of how many zonal harmonics of the gravitational potential must be kept in the perturbation model to capture the major features of the long-term dynamics while having a model as simple as possible to ease computations \cite{Lara2018Stardust}. The answer is normally ascertained after a process that includes the phase space representation for increasing complexity of the dynamical model. We will illustrate this procedure by exploring the mean elements phase space of high-inclination low-altitude lunar orbits for an increasing number of zonal harmonics. \par The reduced flow stemming from Eq.~(\ref{Hmean}) is of 1-DOF and, for that reason, its orbits can be depicted by simple contour plots of the mean elements Hamiltonian without the need of carrying out any numerical integration. Indeed, due to the axial symmetry of the zonal potential, Eq.~(\ref{zonalKaula}), the right ascension of the ascending node $\Omega=h$ is a cyclic variable. Therefore, its conjugate momentum $H$, the polar component of the angular momentum vector, is a dynamical constant. Besides, because the mean anomaly has been removed in the averaging procedure leading to Eq.~(\ref{Hmean}), its conjugate momentum $L$, the Delaunay action, is a formal dynamical constant of the mean elements Hamiltonian. In consequence, Eq.~(\ref{Hmean}) only depends on the remaining canonical variables, viz.~$g$ and $G$, and for given initial conditions remains constant because the averaged Hamiltonian does not depend explicitly on time. \par Alternatively, since $e=\sqrt{1-G^2/L^2}$ and $L$ is constant, on average, the mean elements Hamiltonian can be viewed as depending on the eccentricity and the argument of the periapsis $\omega=g$, in which the dynamical constants $a=L^2/\mu$ and $\sigma=H/L$ define a parameters plane. Furthermore, because $G=L$ for a circular orbit, $\sigma$ represents the cosine of the inclination of a circular orbit, and, hence, it is common to replace the dynamical parameter represented by $\sigma=\cos{I}_\mathrm{circular}$ by the corresponding inclination of circular orbits $I_\mathrm{circular}$ \cite{Lara2010,LaraSanJuanLopezOchoa2013a}. The visualization of the long-term dynamics by means of eccentricity vector diagrams, which are polar plots in which the eccentricity plays the role of the radius and the argument of the periapsis is the polar angle, is a useful tool for mission designers \cite{Cook1966,CuttingFrautnickBorn1978,Shapiro1996,FoltaQuinn2006}. 
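\par By way of illustration, the following sketch draws such an eccentricity-vector diagram by contouring the first-order mean elements Hamiltonian of Eq.~(\ref{Hmean}) truncated to $C_{2,0}$ and $C_{3,0}$, with the averaged terms written out explicitly from Eq.~(\ref{Vistar}). The zonal coefficients and lunar constants used below are rough placeholder values chosen only to produce a qualitatively sensible picture; they are not the \texttt{lp150q} values used for the figures of this section.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

mu, Rb = 4902.8, 1738.0            # km^3/s^2, km (rough lunar values, placeholders)
C20, C30 = -2.0e-4, 8.5e-6         # placeholder zonal coefficients
a = Rb + 600.0                     # semi-major axis, km
sigma = np.cos(np.radians(63.45))  # sigma = H/L, cosine of the circular-orbit inclination

def Hmean(e, w):
    """First-order mean elements Hamiltonian, Eq. (Hmean), truncated to C20 and C30."""
    eta = np.sqrt(np.clip(1.0 - e**2, 1e-9, None))
    c = np.clip(sigma/eta, -1.0, 1.0)          # cos(I) on the (a, sigma) manifold
    s2 = 1.0 - c**2
    V2 = (Rb/a)**2*C20*(3*s2 - 2)/(4*eta**3)
    V3 = (Rb/a)**3*C30*e*np.sqrt(s2)*(15*s2 - 12)*np.sin(w)/(8*eta**5)
    return -mu/(2*a) - mu/a*(V2 + V3)

emax = np.sqrt(1.0 - sigma**2)                 # e must keep eta >= sigma
grid = np.linspace(-emax, emax, 400)
ex, ey = np.meshgrid(grid, grid)
e, w = np.hypot(ex, ey), np.arctan2(ey, ex)
K = np.where(e < emax, Hmean(e, w), np.nan)
plt.contour(ex, ey, K, 40)
plt.gca().set_aspect('equal')
plt.xlabel(r'$e\cos\omega$'); plt.ylabel(r'$e\sin\omega$')
plt.show()
\end{verbatim}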
\par The differences in evaluating different truncations of the mean elements Hamiltonian for an orbit with desired characteristics are illustrated in Figs.~\ref{f:ew600a6345i}--\ref{f:ew125km53deg} for a lunar orbiter, in which the physical parameters of the Selenopotential have been taken from the lunar \texttt{lp150q} model \cite{Konoplivetal2001}. The dotted circumferences in the different plots of these figures mark the eccentricity at which the orbits would impact the surface of the moon. \par Thus, Fig.~\ref{f:ew600a6345i} shows the case of a low lunar orbit in which the semi-major axis is about 1.3 times the lunar radius and the inclination of the circular orbits is fixed to 63.45 deg. As shown in the figure, the eccentricity vector diagrams that visualize the long-term dynamics for this point of the parametric plane notably change depending on the number of harmonics kept in the truncated model. Indeed, when only the Moon's $C_{2,0}$ and $C_{3,0}$ are taken into account, the perigee of non-impact orbits commonly circulates except in the case of almost circular orbits, where three stable frozen orbits exist, two of them with the argument of the perigee at $\omega=-\pi/2$, whereas the perigee of the other points in the opposite direction. Besides, two unstable almost circular frozen orbits exist with the arguments of the perigee pointing approximately to 0 and $\pi$. The simple addition of the moon's $C_{4,0}$ magnifies the eccentricity of four of the five previous frozen orbits, and only one of the stable orbits remains almost circular. The argument of the perigee of the unstable solutions clearly departs from the $0$--$\pi$ direction, and the eccentric stable frozen orbit with the argument of the perigee $\omega=-\pi/2$ becomes an impact orbit. The inclusion of more zonal harmonics in the mean elements Hamiltonian further modifies the figure, and when the moon's $C_{2,0}$--$C_{6,0}$ are taken into account, only the almost circular orbit survives as a non-impact orbit. Quite notably, no frozen non-impact orbits exist in the $C_{2,0}$--$C_{7,0}$ model, in whose corresponding plot a dashed curve passing through the origin illustrates the evolution of the eccentricity of the circular orbits. Addition of more harmonics to the truncated Hamiltonian shows that the evolution of the circular orbits avoids impact for a $C_{2,0}$--$C_{12,0}$ model, a case in which a frozen non-impact orbit exists with $e\approx0.09$ and $\omega=-\pi/2$. Finally, the situation seems to stabilize when $C_{2,0}$--$C_{30,0}$ are taken into account, and the addition of more zonal harmonics to the long-term Hamiltonian does not introduce relevant quantitative modifications of the long-term dynamics. \par The case of a low-altitude, high-inclination orbit is shown in Fig.~\ref{f:ew125km88deg}. Now, the parameters plane is set to the values $a=1.07$ lunar radii and $I_\mathrm{circular}=88$ deg, which would fit the case of typical mapping orbits. Only one non-impact frozen orbit exists in this case, but the effect of the different zonal harmonics may change the argument of the perigee from $-\pi/2$ to $\pi/2$, or even prevent the existence of non-impact frozen orbits. In this particular case, due to the lower altitude, the model needs a higher degree truncation for the long-term behavior to be stabilized, which happens when $C_{2,0}$--$C_{33,0}$ are taken into account, a case in which a non-impact frozen orbit exists with $\omega=-\pi/2$ and $e\approx0.04$. 
\par Finally, the example in Fig.~\ref{f:ew125km53deg} shows that the long-term dynamics of lower-inclination orbits can be approached with lower degree truncations of the mean elements Hamiltonian. Thus, for an inclination of the circular orbits $I_\mathrm{circular}=53$ deg the $C_{2,0}$--$C_{9,0}$ truncation suffices to capture the main characteristics of the dynamics, yet important quantitative variations are noted for higher degree truncations until the stabilization happens for the $C_{2,0}$--$C_{20,0}$ model. \begin{figure}[htbp]\vspace{-1cm} \centering \includegraphics[scale=0.6]{C20toC30.pdf} \includegraphics[scale=0.6]{C20toC40.pdf} \includegraphics[scale=0.6]{C20toC50.pdf} \\ \includegraphics[scale=0.6]{C20toC60.pdf} \includegraphics[scale=0.6]{C20toC70.pdf} \includegraphics[scale=0.6]{C20toC80.pdf} \\ \includegraphics[scale=0.6]{C20toC90.pdf} \includegraphics[scale=0.6]{C20toC10.pdf} \includegraphics[scale=0.6]{C20toC11.pdf} \\ \includegraphics[scale=0.6]{C20toC12.pdf} \includegraphics[scale=0.6]{C20toJ20.pdf} \includegraphics[scale=0.6]{C20toJ30.pdf} \caption{Influence of the number of zonal harmonics on the long-term dynamics of a lunar orbit with $a=R_\oplus+600$ km over the surface of the moon, $I_\mathrm{circular}=63.45\deg$. } \label{f:ew600a6345i} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.6]{J2toJ3.pdf} \includegraphics[scale=0.6]{J2toJ4.pdf} \includegraphics[scale=0.6]{J2toJ6.pdf} \\ \includegraphics[scale=0.6]{J2toJ7.pdf} \includegraphics[scale=0.6]{J2toJ8.pdf} \includegraphics[scale=0.6]{J2toJ9.pdf} \\ \includegraphics[scale=0.6]{J2toJ20.pdf} \includegraphics[scale=0.6]{J2toJ30.pdf} \includegraphics[scale=0.6]{J2toJ33.pdf} \caption{Long-term dynamics of a lunar orbit with $a=R_\oplus+125$ km over the surface of the moon, $I_\mathrm{circular}=88\deg$. } \label{f:ew125km88deg} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.6]{J2toJ3i53.pdf} \includegraphics[scale=0.6]{J2toJ7i53.pdf} \includegraphics[scale=0.6]{J2toJ9i53.pdf} \\ \includegraphics[scale=0.6]{J2toJ16i53.pdf} \includegraphics[scale=0.6]{J2toJ19i53.pdf} \includegraphics[scale=0.6]{J2toJ20i53.pdf} \caption{Long-term dynamics of a lunar orbit with $a=R_\oplus+125$ km over the surface of the moon, $I_\mathrm{circular}=53\deg$. } \label{f:ew125km53deg} \end{figure} \section{Conclusions} It is well known that Kaula's seminal recursions provide an efficient way for the direct construction of the gravitational potential in orbital elements. Similar recursions derived from them can be used in the construction of the long-term gravitational potential, a case in which short-period effects are removed by averaging. Simulations in this paper show that Kaula's approach clearly remains the benchmark to which the performance of other recursions in the literature, as well as of the brute force approach, must be compared in the construction of a high degree long-term potential. On the other hand, second order effects of the second zonal harmonic, which fall outside the scope of Kaula's approach, are needed in the investigation of the long-term propagation of orbits about the Earth or Earth-like bodies. However, these effects can be explicitly incorporated into the long-term disturbing function from available expressions in the literature. \par The use of the averaged potential in the construction of eccentricity vector diagrams proved to be an efficient way of ascertaining the complexity of the dynamical model to be used in the description of a particular region of phase space. 
It discloses the simplest model to be used in a realistic description of the long-term dynamics due to the non-centralities of the gravitational potential, yet additional disturbing effects, such as third-body perturbations, may be needed in a complete description of the dynamics. \subsection*{Acknowledgements} Support by the Spanish State Research Agency and the European Regional Development Fund under Project ESP2016-76585-R (MINECO/ AEI/ERDF, EU) is acknowledged. ML also acknowledges support from Project ESP2017-87271-P of the same funding agencies.
\section{Introduction} \label{sec:intro} There is considerable progress in calculations of various cross sections for deep inelastic lepton scattering in the Lipatov limit ($\nu\to\infty$ and $Q^2$ fixed at a large value \cite{FKL75}). In particular a picture has been proposed \cite{NZ91,M94} which reproduces the basic features of the BFKL pomeron, e.g. its Regge behavior, which was obtained earlier \cite{FKL75} from a direct resummation of the Feynman diagrams of Quantum Chromodynamics. Subsequently, the model of \cite{NZ91,M94} was applied to calculate the total cross section, the diffractive dissociation cross section, and the corresponding structure functions for the proton target \cite{NPR95,NPRW96,BP96a,BP96b,B96}. \medskip In view of the successful experimental program for the so-called ``small-x physics'' --- which is being pursued at HERA for proton targets and will be, possibly, extended to nuclear targets --- it makes sense to adapt the results of \cite{NPR95,NPRW96,BP96a,BP96b,B96} also to nuclear targets. So, in this paper, we apply the results of \cite{NPR95,NPRW96,BP96a,BP96b,B96} to calculate the shadowing in deep inelastic lepton-nucleus interactions. \medskip The problem of nuclear shadowing in deep inelastic lepton-nucleus interactions has been recently discussed in many papers. Refs. \cite{HERA,NZZ95,PRW95,AB96,ABKSS96,S96} give an up-to-date picture of this field. \medskip For a description and discussion of the basic physical ideas of the model used in the present calculations we refer the reader to the article \cite{B96}. Here a brief summary will suffice. The model was initially designed for ``onium'' - ``onium'' scattering, i.e. for scattering of two bound $q\bar q$ states with a small transverse separation. At high energy --- in perturbative QCD --- such an ``onium'' state evolves (cascades) into a bunch of colorless dipoles whose number increases as a power of the incident energy. The interaction occurs {\it via} two gluon exchange between the pairs of dipoles (one from each ``onium''). Navelet, Peschanski, Royon and Wallon \cite{NPR95,NPRW96} applied this model to deep inelastic lepton-proton scattering. They assumed --- naturally --- that the virtual photon at high $Q^2$ can be described by an ``onium''. For the target proton --- which certainly does not resemble an ``onium'' --- they made the rather bold assumption that it can nevertheless be approximated by a collection of ``onia'' with an average ``onium'' radius to be determined from the data. It was thus rather surprising that such a --- seemingly crude --- model described very reasonably the existing data on $F_2$ and on the gluon structure function in a fairly large range of $Q^2$ and of Bjorken $x$. To obtain good fits to the experimental data the authors of Refs. \cite{NPR95,NPRW96} had only to normalize $F_2$. We show below that this normalization corresponds to $n_{eff}\approx 6.7$ ``onia'' in a nucleon. \medskip In the present paper we apply these ideas to the calculations of shadowing in deep inelastic scattering from nuclear targets. We hope that this can provide an independent check of the hypothesis of Ref. \cite{NPR95}, and give more information about the nucleon structure at very small Bjorken $x$. 
\medskip The total virtual transverse photon - ``onium'' cross section is one of the basic inputs in computations of nuclear shadowing and we will use the following integral representation of $\sigma^T_{tot}=(\Sigma_nP_n\sigma_n)^T$ ($n$ is the index of the Good and Walker eigenmodes of absorption, see below) \begin{equation} (\Sigma_n P_n\sigma_n)^T = \sigma^T_{tot}(x,Q^2) = {4N_c\alpha^2n_{eff}\alpha_{em}e^2_f\over\pi}\,r_0^2 \int\limits^{c+i\infty}_{c-i\infty}{d\gamma\over 2\pi i}\,x^{{-\alpha N_c\over\pi}\chi(\gamma)} \,2\pi\, \left({Qr_0\over2}\right)^{\gamma-2}h^T(\gamma). \label{eq1} \end{equation} This integral representation is our version of $\sigma^T_{tot}$ discussed in \cite{NPR95}. We have checked through numerical evaluation of (\ref{eq1}) that it agrees well with the results of Refs. \cite{NPR95,NPRW96} provided $n_{eff}\approx 6.7$. In (\ref{eq1}) $N_c$ is the number of colors involved in the process, and $n_{eff}$ is an effective number of ``onia''in the nucleon. $\alpha_{em}={1\over137}$, $e^2_f$ is the sum of the squares of quark charges, and $r_0\approx .8$ fm is approximately the ``onium'' diameter. \begin{equation} h^T(\gamma) = {4\over\gamma(2-\gamma)} {\Gamma(3-{1 \over 2}\gamma)\Gamma^3(2-{1 \over 2}\gamma)\Gamma(2+{1 \over 2}\gamma) \Gamma(1+{1 \over 2}\gamma)\over\Gamma(4-\gamma)\Gamma(2+\gamma)}\, , \label{eq2} \end{equation} and $\chi(\gamma)$ is the eigenvalue of the BFKL kernel defined in terms of $\psi(\gamma)={d\over d\gamma}[\ln\Gamma(\gamma)]$ as \begin{equation} \chi(\gamma)=2\psi(1)-\psi(1-{\textstyle{1 \over 2}}\gamma)- \psi({\textstyle{1 \over 2}}\gamma)\,\,. \label{eq3} \end{equation} $\chi(1)$ determines the slope of the BFKL pomeron \begin{equation} \Delta_p=\alpha_p-1={\alpha N_c\over\pi}\chi(1)\,\, , \label{eq4} \end{equation} where $\alpha$ is a strong coupling constant. \medskip Since we are interested in the structure function $F_2\sim (\sigma_{\rm transv}+\sigma_{\rm longitud})$ one should add to (\ref{eq1}), which gives the transverse virtual photon - ``onium'' total cross section, the contribution of the longitudinal virtual photon in order to have a complete virtual photon - ``onium'' total cross section. As it was shown in Ref. \cite{NPRW96} one gets the longitudinal photon contribution by replacing the function $h^T(\gamma)$ of (\ref{eq2}) by $h^L(\gamma)$: \begin{equation} h^L(\gamma) = h^T(\gamma){(2-\gamma)\gamma\over 2(2-{\textstyle{1\over 2}}\gamma) (1+{\textstyle{1\over 2}} \gamma)}\,. \label{eq5} \end{equation} Since the integrand in (\ref{eq1}) is strongly peaked at $\gamma=1$ we can approximate the above replacement by $h^L(\gamma)\approx {\textstyle{2\over 9}}h^T(\gamma)$. Hence the total cross section is \begin{equation} \sigma_{tot}=\sigma^T_{tot}+ \sigma^L_{tot}\approx {\textstyle{11\over9}}\sigma^T_{tot}\,\,. \label{eq6} \end{equation} The success of (\ref{eq1}) (compare \cite{NPR95,NPRW96}) encourages us to apply the results of \cite{NPR95,NPRW96,BP96a,BP96b,B96} to evaluation of nuclear shadowing. \medskip In the next Section we present the general formula for the shadow in the deep inelastic virtual photon-nucleus interactions. In Section \ref{sec:triple} we apply it to calculate the ``triple pomeron'' \cite{BP96a} contribution to shadowing. In Section \ref{sec:quasi} we do the same for the ``quasielastic'' contributions \cite{BP96b}. Section \ref{sec:disc} presents results and conclusions. Two Appendices give details of derivations of some formulae. 
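\par Before turning to nuclei, it is convenient to have the ingredients of Eqs.~(\ref{eq2})--(\ref{eq5}) in numerical form. The short Python sketch below (an illustration only; the value of the coupling is just a typical choice) evaluates $h^T$, $h^L$ and $\chi$, checks the ratio $h^L(1)/h^T(1)=2/9$ behind the approximation (\ref{eq6}), and gives the exponent $\Delta_p=\alpha_p-1$ of Eq.~(\ref{eq4}).
\begin{verbatim}
import numpy as np
from scipy.special import gamma as Gam, digamma as psi

def hT(g):
    """Eq. (2)."""
    return 4/(g*(2 - g))*(Gam(3 - g/2)*Gam(2 - g/2)**3*Gam(2 + g/2)
                          *Gam(1 + g/2))/(Gam(4 - g)*Gam(2 + g))

def hL(g):
    """Eq. (5)."""
    return hT(g)*(2 - g)*g/(2*(2 - g/2)*(1 + g/2))

def chi(g):
    """BFKL eigenvalue, Eq. (3)."""
    return 2*psi(1.0) - psi(1 - g/2) - psi(g/2)

print(hL(1.0)/hT(1.0))               # 0.2222... = 2/9, cf. Eq. (6)
print(chi(1.0), 4*np.log(2))         # chi(1) = 4 ln 2 = 2.7725...
alpha, Nc = 0.2, 3                   # illustrative coupling and number of colors
print('Delta_p =', alpha*Nc/np.pi*chi(1.0))     # Eq. (4)
\end{verbatim}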
\section{Basic formulae for virtual photon-nucleus interactions} \label{sec:basic} Many theoretical papers on this and related subjects have been published in the past and our task is merely to adapt the existing material for our purposes. Our main formula, which gives the deviation from unity of the ratio, $R_A$, of the total virtual photon-nucleus cross section to the incoherent sum of the cross sections on individual nucleons, is a slight generalization of the formula proposed more than 25 years ago by Gottfried and Yennie \cite{GY69} and then developed by Yennie and Bauer et al \cite{Y76}. It goes as follows: \begin{eqnarray} \Delta_A(x,Q^2) &=& R_A(x,Q^2)-1 \nonumber \\ &=& -{2(A-1)\over\sigma_{tot}} \, \hbox{Re}\, \sum_n P_n \biggl\{\int d^2b \int\limits_{-\infty}^{+\infty} dz_1\int\limits_{-\infty}^{z_1} dz_2 \left[\, \int d^2b' f_n({\bf b'})\right]^2 \nonumber \\ & & \times \, \rho({\bf b},z_1) \, \rho({\bf b},z_2) \, e^{i(z_1-z_2)mx_p} \left[1-\int\limits_{z_1}^{z_2} dz_3 \, \rho({\bf b},z_3) \int d^2b' f_n({\bf b'})\right]^{A-2} \biggr\} \,. \label{eq7} \end{eqnarray} Negative $\Delta_A$ means there is shadowing, positive $\Delta_A$ - antishadowing. Here $x={Q^2\over 2m\nu}$ is the Bjorken variable with $Q^2=-q^2$ (the negative of the four momentum squared of the virtual photon), $m$ - the nucleon mass, $\nu$ - the energy loss of the incident lepton (in the rest frame of the target nucleus). \begin{equation} x_p={x\over\beta}\,\, ,\,\,\beta={Q^2\over Q^2+M^2}\, , \label{eq8} \end{equation} where $M$ is the mass of the virtual photon excitation. In this formula we neglected the correlations between the nucleons inside the target. The index $n$ stands for the Good and Walker eigenmodes of absorption \cite{GW60}, which propagate through nuclear matter without diffractive excitations. ${\bf b}$ is the impact parameter of the virtual photon, and $P_n$ is the probability of realization of the $n$-th eigenmode. $\Sigma_n P_n\sigma_n=\sigma_{tot}$ is the total transverse virtual photon-nucleon cross section. $f_n$ is the forward scattering amplitude of the $n$-th mode. \medskip In \cite{BP96a,BP96b,B96} only second order interactions in dipole-dipole cross section of the diffractively excited systems were considered. Then, only two classes of processes seem to be dominant \cite{B96}: the ``triple pomeron'' contribution where the ``onium'' scatters inelastically, and the ``quasielastic''contribution where the ``onium'' scatters quasielastically from the target proton. \medskip Note that the results of Refs. \cite{BP96a,BP96b,B96} we are going to use to calculate (\ref{eq7}) do not give us $P_n$ and $f_n({\bf b})$ but only $\Sigma_n P_n f_n({\bf b})f_n({\bf b})$. However, $\Sigma_n P_n f_n({\bf b})f_n({\bf b})f_n({\bf b})$ and higher order expressions have, so far, not been calculated (e.g. in the case of an exchange of three pomerons one would have to calculate a three dipoles distribution function), hence the last factor in (\ref{eq7}), $[...]^{A-2}$, can, at present, be only approximated (see below). \medskip Note also that when the transverse spatial extension of $f_n$ is much smaller than the nuclear radius (hence the transverse extension of $\rho({\bf b},z)$) we can approximate \begin{equation} f_n({\bf b})=\delta^{(2)}({\bf b})\int d^2b^{\prime} f_n({\bf b^{\prime}})\,. \label{eq9} \end{equation} This approximation was employed in (\ref{eq7}). Physically, this is the question of the relative transverse sizes of the cascade of dipoles \cite{BP96a,BP96b,B96} and of the nucleus. 
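\par In numerical evaluations of Eq.~(\ref{eq7}) one also needs the single-nucleon density $\rho({\bf b},z)$ (normalized here to unity). A minimal sketch of such a density, assuming a Woods-Saxon (Saxon-Woods) profile with commonly quoted placeholder parameters, is the following.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

A = 40                            # illustrative mass number
RA, aWS = 1.1*A**(1/3), 0.54      # placeholder Woods-Saxon radius and diffuseness, fm

def ws(r):
    return 1.0/(1.0 + np.exp((r - RA)/aWS))

# normalization constant so that the density integrates to one nucleon
norm = 4*np.pi*quad(lambda r: ws(r)*r**2, 0.0, 50.0)[0]

def rho(b, z):
    """Single-nucleon density rho(b, z) entering Eq. (7), in fm^-3."""
    return ws(np.hypot(b, z))/norm

# check: the integral of rho over d^2b dz is ~1
print(quad(lambda z: 2*np.pi*quad(lambda b: b*rho(b, z), 0.0, 30.0)[0], -30.0, 30.0)[0])
\end{verbatim}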
We have checked accuracy of the assumption (\ref{eq9}) for $\rho({\bf b},z)$ suitable for large nuclei (see Section \ref{sec:disc}) and found it acceptable; $\rho({\bf b},z)$'s are taken in form of Saxon-Woods distributions. \medskip Now we will discuss the other input, $\Sigma_nP_nf_nf_n$ , from \cite{BP96a,BP96b,B96} and to make it concise we shall proceed through numerous references to many formulae of the papers of Refs. \cite{BP96a,BP96b,B96}. \section{``Triple pomeron'' contribution to the shadowing} \label{sec:triple} We assume $A\gg1$, hence a large nucleus where (\ref{eq9}) is applicable. We first work out only the double scattering factor in (\ref{eq7}). We modify the formula (21) of \cite{BP96a} introducing two impact parameters ${\bf\tilde b}$,${\bf\tilde b'}$ (to describe collisions with two different nucleons) and taking the inverse Mellin transformation of both sides of (21): \begin{eqnarray} & &\int d^2\tilde b d^2\tilde b'(\Sigma'_nP_nf_n({\bf\tilde b}) f_n({\bf\tilde b'})) \nonumber \\ &=& 2\pi \, \alpha^5 n^2_{eff} N_c \, x^{-2\Delta_p}_p \left({2a_p\over \pi}\right)^3 \int\limits^{c+i\infty}_{c-i\infty} {d\gamma\over2\pi i} \, \rho^{2-\gamma} \beta^{-\alpha N_c\chi(\gamma)/\pi} \nonumber \\ & & \times \!\! \int {d^2\tilde b \, d^2\tilde b' \over \tilde b^2\tilde b'^2} \int dx_{12}dx_{02} W(x_{12},x_{02})D({\bf \tilde b},x_{02}) D({\bf \tilde b'},x_{12})\,, \label{eq10} \end{eqnarray} where \begin{equation} a_p=[7\alpha N_c\zeta(3)\ln(1/x_p)/\pi]^{-1}=a(x_p)\, , \label{eq11} \end{equation} and $\Sigma'_n$ denotes a partial summation over the parameters specifying diffractive eigenstates (for details see \cite{BP96a}). The prime is added because the summation over the masses, $M$, of the diffractively excited states is left till the very end. $W(x_{12},x_{02})$ is a symmetric function of $x_{12}$ and $x_{02}$, and e.g. for $x_{02}<x_{12}$ \begin{equation} W(x_{12},x_{02})=x_{12}^{\gamma-2}F\left(1-{\textstyle{1\over 2}}\gamma, 1-{\textstyle{1\over 2}}\gamma;1;\left({x_{02}\over x_{12}}\right)^2\right) \,\, , \label{eq12} \end{equation} $F$ being the hypergeometric function, and \begin{equation} D(\tilde b,x)\!=\!\!\int\!\! d^2rdz\Phi(r,z)r\ln \left({\tilde b^2 \over rx}\right) \exp\left[-{a_p\over 2} \ln^2\left({\tilde b^2 \over rx}\right)\right], \label{eq13} \end{equation} where $\Phi(r,z)$ is the square of the proton wave function. For small $a_p$ one can evaluate (\ref{eq10}) analytically (the relevant integrals are given in Appendix A) and obtain for the transverse virtual photon \begin{eqnarray} & & \int d^2\tilde b d^2\tilde b'\Sigma'_nP_nf_n({\bf\tilde b}) f_n({\bf\tilde b'}) \nonumber \\ & & \equiv \langle f^2_n \rangle (\rho,\beta,x_p) = 32\pi \, \alpha^5 n^2_{eff} N_c \, x_p^{-2\Delta_p} \left({a_p\over\pi}\right)^{1\over 2} \int\limits_{c-i\infty}^{c+i\infty} {d\gamma\over2\pi i}\rho^{2-\gamma} \beta^{-\alpha N_c\chi(\gamma)/\pi}V(\gamma)r_0^{2+\gamma} e^{\gamma^2\over4a_p} \,\, . \label{eq14} \end{eqnarray} where \begin{equation} V(\gamma)=\int\limits^1_0 F(1-{\textstyle{1\over 2}}\gamma, 1-{\textstyle{1\over 2}}\gamma;1;y^2)dy\,\,, \label{eq15} \end{equation} and $r_0=\int d^2rdz\Phi(r,z)r $ is a non-perturbative parameter determined from the data. A fit to the proton structure function gives $r_0\approx .8$ fm \cite{NPR95,NPRW96}. \medskip At this point we introduce the nuclear algorithm with $A$ collisions inside the nucleus. 
We write \begin{equation} t_A = \langle f_n^2 \rangle \biggl\{1-f_{eff} \int\limits^{z_2}_{z_1} dz_3 \, \rho({\bf b},z_3)\biggr\}^{A-2}\,\, , \label{eq16} \end{equation} and \begin{equation} T_A = A(A-1)\int d^2b\int\limits_{-\infty}^{+\infty}dz_1 \int\limits_{-\infty}^{z_1}dz_2 e^{i(z_1-z_2)mx_p} \, \rho({\bf b},z_1) \, \rho({\bf b},z_2) \, t_A\,. \label{eq17} \end{equation} The question now is what to take for $f_{eff}$ ? As we have already remarked the results given in \cite{BP96a,BP96b,B96} do not go beyond the double scattering. Also, \cite{BP96a,BP96b,B96} do not provide us with amplitudes for the eigenstates of absorption but with amplitudes averaged over many eigenstates. In order to have such amplitudes one would have to do new and --- possibly --- rather complicated calculations. Therefore, in order to {\it estimate} the multiple scattering corrections we shall use the full amplitude for dipole-proton scattering at fixed $\rho$ (this is the diameter of the ``onium'' representing the virtual photon) \begin{equation} f_{eff}(r_0,\rho,x_p)=\int d^2\tilde b \, T(r_0,\rho,\tilde b,x_p)\,\, , \label{eq18} \end{equation} where \cite{BP96b} \begin{equation} T(r_0,\rho,\tilde b,x_p) = \pi\alpha^2n_{eff}{r_0\rho\over\tilde b^2} \ln\Bigl({\tilde b^2\over r_0\rho}\Bigr)x_p^{-\Delta_p} \Bigl({2a_p\over\pi}\Bigr)^{3\over 2} e^{-{\textstyle{1\over 2}} a_p\ln^2({\tilde b^2\over r_0\rho})}\,\,. \label{eq19} \end{equation} The integration over $d^2\tilde b$ can be done (see Appendix A) and we obtain \begin{equation} f_{eff}(r_0,\rho,x_p) = 2^{3/2}\pi\alpha^2n_{eff} r_0\rho x_p^{-\Delta_p} \Bigl({a_p\over\pi}\Bigr)^{{\textstyle{1\over 2}}} e^{-{\textstyle{1\over 2}} a_p\ln^2({r_0\over\rho})}\,\,. \label{eq20} \end{equation} A comment is here in order. To estimate the multiple scattering corrections one can use some other than $f_{eff}$ effective amplitude, e.g. the one proposed in Ref. \cite{S96} Eq. (5). In our notation it amounts to taking the ratio $\langle f_n^2 \rangle / f_{eff}$ instead of $f_{eff}$. As expected, $\langle f_n^2 \rangle / f_{eff} > f_{eff}$ but both are of the same order of magnitude. Since the complete expressions for multiple scattering corrections need direct calculations of the contributions of interactions with 3 and more nucleons which are not done, we choose $f_{eff}$ as a somewhat simpler expression. The last point is to average $T_A$ over the {\it probability}, $\tilde\Phi$, to find a dipole in the virtual transverse or longitudinal photon \cite{NZ91}: \begin{mathletters} \label{eq21} \begin{equation} \tilde\Phi^T(\hat z,\rho,Q) = 2{N_c\alpha_{em}\over(2\pi)^2}e_f^2 (\hat z^2+(1-\hat z)^2)\hat Q^2 K_1^2(\hat Q\rho)\,\,, \label{eq21a} \end{equation} \begin{equation} \tilde\Phi^L(\hat z,\rho,Q) = 8 {N_c\alpha_{em}\over(2\pi)^2}e^2_f \hat z(1-\hat z)\hat Q^2 K^2_0(\hat Q\rho)\,\, . \label{eq21b} \end{equation} \end{mathletters} \noindent Here \begin{equation} \hat Q=\sqrt{\hat z(1-\hat z)}Q\;\;\;. 
\label{eq22} \end{equation} Thus the ``triple pomeron'' (TP) transverse (T) and longitudinal (L) contributions to the shadow, within the approximation (\ref{eq9}), are as follows \begin{eqnarray} & & \Delta^{TP(T,L)}_A(x,Q^2) \nonumber \\ & & = -{2(A-1)\over\sigma_{tot}}\int_{10x}^1 d\beta \,\hbox{Re}\, \biggl\{\int_0^1 d\hat z\int_0^{r_0} d^2\rho \,\tilde\Phi^{T,L}(\hat z,\rho) \int d^2b\int^{+\infty}_{-\infty} dz_1\int^{z_1}_{-\infty}dz_2 \nonumber \\ & & \times \,\, e^{i(z_1-z_2)mx_p} \rho({\bf b},z_1) \rho({\bf b},z_2) \langle f^2_n \rangle (\rho,\beta,x_p) \left[1-f_{eff}(r_0,\rho,x_p) \int^{z_2}_{z_1}dz_3\rho({\bf b},z_3)\right]^{A-2}\biggr\} \,\,\, . \label{eq23} \end{eqnarray} For comments on the limits of the integration over $\beta$ and $\rho$ see Section \ref{sec:disc}. \medskip It is convenient to extract from (\ref{eq23}) a correction factor, $C_{ms}(x,Q^2)$, for multiple interactions higher than double. This is the factor which multiplies the double scattering contribution to give $\Delta^{TP}_A(x,Q^2)$. Thus we have \begin{eqnarray} & & \Delta^{TP(T,L)}_A(x,Q^2) \nonumber \\ & & = -C_{ms}(x,Q^2){2(A-1)\over\sigma_{tot}} \int_{10x}^1 d\beta \,\hbox{Re}\, \biggl\{\int_0^1 d\hat z\int_0^{r_0} d^2\rho \tilde\Phi^{T,L}(\hat z,\rho) \int d^2b\int^{+\infty}_{-\infty} dz_1\int^{z_1}_{-\infty}dz_2 \nonumber \\ & & \times \,\, e^{i(z_1-z_2)mx_p} \rho({\bf b},z_1)\rho({\bf b},z_2) \langle f^2_n \rangle (\rho,\beta,x_p)\biggr\}\,\,. \label{eq24} \end{eqnarray} $C_{ms}(x,Q^2)$ will be used in the next section to compute the shadow of the whole multiple scattering process from the shadow of the double scattering contribution. We believe that the level of accuracy of our calculations and the size of $C_{ms}(x,Q^2)$ justify this approximation. \section{The quasielastic contributions to the shadowing} \label{sec:quasi} This time Ref. \cite{BP96b} provides us with the basic formulae, and we will again assume $A\gg 1$; hence we accept the approximation (\ref{eq9}). The method used in \cite{BP96b} to obtain the double scattering contribution is different from that used in the ``triple pomeron'' case \cite{BP96a}, and we now have at our disposal the probability {\it amplitude} for the intermediate state which, in the present case, is the one-dipole state. Therefore, the procedure of integration over $\tilde b$ is done at the level of the amplitude which, subsequently, is contracted with the probability amplitude for finding a dipole in a photon of a given polarization, and the result is squared. So, starting from the formulae (19) and (20) of Ref.
\cite{BP96b}, we first replace ${d\sigma_{T,L}\over dM^2}$ by ${d\sigma_{T,L}\over d\beta}={Q^2\over\beta^2}{d\sigma_{T,L}\over dM^2}$, do the integration of the amplitude $T$ over $\tilde b$, and obtain the following double scattering terms in the approximation (\ref{eq9}): \begin{mathletters} \label{eq25} \begin{eqnarray} & & {d\tilde\sigma_T\over d\beta} = {N_c\alpha_{em}e^2_f\over2\pi} {Q^4\over\beta^2}\int^1_0 d\hat z\hat z^2(1-\hat z)^2 [\hat z^2+(1-\hat z)^2] \left[ \int^{r_0}_0 d\rho\rho f_{eff}(r_0,\rho,x_p) K_1(\hat Q\rho)J_1(\hat M\rho)\right]^2 \nonumber \\ \label{eq25a} \end{eqnarray} and \begin{equation} {d\tilde\sigma_L\over d\beta} = 4{N_c\alpha_{em}e^2_f\over2\pi} {Q^4\over\beta^2}\int^1_0 d\hat z\hat z^3(1-\hat z)^3 \left[ \int^{r_0}_0 d\rho\rho f_{eff}(r_0,\rho,x_p) K_0(\hat Q\rho)J_0(\hat M\rho)\right]^2\,\,, \label{eq25b} \end{equation} \end{mathletters} where $f_{eff}(r_0,\rho,x_p)$ is given by (\ref{eq18}), and $\hat M=\sqrt{\hat z(1-\hat z)}M$, with $M$ being the mass of the virtual photon excitation. \medskip In the double scattering cross sections (\ref{eq25}) $f_{eff}$ appears quadratically. Therefore, since the basic ingredient of $C_{ms}(x,Q^2)$ defined in Section \ref{sec:triple} is the factor \mbox{$[1-f_{eff}\int^{z_2}_{z_1}dz_3 \rho({\bf b},z_3)]^{A-2}$} (defined through the same amplitude $f_{eff}$ as in (\ref{eq25})), we accept that by multiplying $\Delta^{QE}_A$ for just the double interaction inside the target nucleus by $C_{ms}$ we obtain a reasonable approximation for $\Delta^{QE}_A$ with all possible (i.e. $A$) interactions included. The error made in this approximate procedure is not significant because $C_{ms}$ is close to unity throughout the relevant region of $x$ and $Q^2$ (see Fig. 3). \medskip Following steps similar to those which led us to the formula (\ref{eq23}) we obtain for the quasielastic contributions to the shadow of the transverse and longitudinal virtual photons the following expressions: \begin{mathletters} \label{eq26} \begin{eqnarray} & & \Delta^{QE(T)}_A(x,Q^2) \nonumber \\ & & = -{2(A-1)\over\sigma_{tot}} \,\, C_{ms}(x,Q^2) \int^1_{10x} \! d\beta \,\hbox{Re} \biggl\{\int d^2b\int^{+\infty}_{-\infty} \!\!\! dz_1\int^{z_1}_{-\infty} \!\! dz_2 e^{i(z_1-z_2)mx_p} \rho({\bf b},z_1) \, \rho({\bf b},z_2) \nonumber \\ & & \times \, {N_c\alpha_{em}e^2_f\over2\pi} {Q^4\over\beta^2} \int^1_0 d\hat z\hat z^2(1-\hat z)^2 [\hat z^2+(1-\hat z)^2] \left[\int^{r_0}_0d\rho\rho f_{eff}(r_0,\rho,x_p) K_1(\hat Q\rho)J_1(\hat M\rho)\right]^2\biggr\} \nonumber \\ \label{eq26a} \end{eqnarray} and \begin{eqnarray} & & \Delta^{QE(L)}_A(x,Q^2) \nonumber \\ & & = -{2(A-1)\over\sigma_{tot}} \,\, C_{ms}(x,Q^2) \int^1_{10x} \! d\beta \,\hbox{Re}\, \biggl\{\int d^2b\int^{+\infty}_{-\infty} \!\!\! dz_1\int^{z_1}_{-\infty} \!\! dz_2 e^{i(z_1-z_2)mx_p} \rho({\bf b},z_1) \rho({\bf b},z_2) \nonumber \\ & & \times {4N_c\alpha_{em}e^2_f\over2\pi} {Q^4\over\beta^2} \int^1_0 d\hat z\hat z^3(1-\hat z)^3 \left[\int^{r_0}_0d\rho\rho f_{eff}(r_0,\rho,x_p)K_0(\hat Q\rho) J_0(\hat M\rho)\right]^2\biggr\}\,, \nonumber \\ \label{eq26b} \end{eqnarray} \end{mathletters} where $f_{eff}(r_0,\rho,x_p)$ is given by (\ref{eq20}). \medskip Clearly, setting $C_{ms}=1$ reduces both (\ref{eq24}) and (\ref{eq26}) to the correct expressions for the double scattering contributions to the shadow. 
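\medskip For orientation, the innermost $\rho$-integrals in (\ref{eq25}) involve only the effective amplitude (\ref{eq20}), with $a_p$ from (\ref{eq11}), and standard Bessel functions, so they can be evaluated with off-the-shelf quadrature. A minimal sketch (Python; the parameter values $\alpha=.11$, $N_c=3$, $n_{eff}=6.7$, $\Delta_p=.3$ and $r_0=.8$ fm are those adopted in Section \ref{sec:disc}, and the kinematic point is chosen only for illustration) reads:
\begin{verbatim}
# Sketch: the rho-integral of Eq. (25a),
#   I_T = int_0^{r0} drho  rho f_eff(r0,rho,x_p) K1(Qhat rho) J1(Mhat rho),
# with f_eff from Eq. (20) and a_p from Eq. (11).
import numpy as np
from scipy.special import k1, j1, zeta
from scipy.integrate import quad

HBARC   = 0.1973          # GeV fm (converts GeV to fm^-1)
alpha_s = 0.11
N_c     = 3
n_eff   = 6.7
Delta_p = 0.3
r0      = 0.8             # fm

def a_p(x_p):
    # Eq. (11)
    return 1.0 / (7.0 * alpha_s * N_c * zeta(3) * np.log(1.0 / x_p) / np.pi)

def f_eff(rho, x_p):
    # Eq. (20); with rho and r0 in fm the result is in fm^2
    a = a_p(x_p)
    return (2.0**1.5 * np.pi * alpha_s**2 * n_eff * r0 * rho
            * x_p**(-Delta_p) * np.sqrt(a / np.pi)
            * np.exp(-0.5 * a * np.log(r0 / rho)**2))

def inner_integral_T(x_p, zhat, Q2, M2):
    Qhat = np.sqrt(zhat * (1.0 - zhat) * Q2) / HBARC    # fm^-1
    Mhat = np.sqrt(zhat * (1.0 - zhat) * M2) / HBARC    # fm^-1
    f = lambda rho: rho * f_eff(rho, x_p) * k1(Qhat * rho) * j1(Mhat * rho)
    val, _ = quad(f, 0.0, r0)
    return val                                          # fm^4

print(inner_integral_T(x_p=0.01, zhat=0.5, Q2=4.0, M2=4.0))
\end{verbatim}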
In fact --- as we have already stressed --- the double scattering term is more reliable than the multiple scattering correction because its input from \cite{BP96a,BP96b,B96} comes exclusively from the ``first principles''. That is why it is of interest to discuss deuterium target. This will be discussed in a forthcoming paper. \section{Discussion of the results and the conclusions} \label{sec:disc} Before presenting any results a comment on the real parts of the amplitudes $f$ is in order. First, the dependence of $\sigma_{tot}$ on $x$ , hence on $2m\nu\approx s$ implies that $\hbox{Re}[f]$ must be different from zero. This follows from the derivative analyticity relations for the forward elastic hadronic amplitudes worked out in e.g. Ref. \cite{SS75}. We can introduce, approximately, this effect into all our expressions derived so far performing the following replacements: \begin{eqnarray} \langle f^2_n \rangle &\longrightarrow& \langle f^2_n \rangle (1-i\hat\alpha)^2, \nonumber \\ f_{eff} &\longrightarrow& f_{eff} (1-i\hat\alpha), \label{eq27} \end{eqnarray} where $\hat\alpha$ is the ratio of the real to imaginary parts of the forward amplitude. We use the leading term of the expansion of the forward amplitudes $f$ given in \cite{SS75}: \begin{equation} \hat\alpha={\hbox{Re} f\over\sigma_{tot}}\approx \tan\left({\textstyle{1 \over 2}} \pi\Delta_p\right)\,\,. \label{eq28} \end{equation} We accept $\Delta_p\approx .3$, hence $\hat\alpha\approx .5$ . \medskip Now the results. We have calculated all contributions to the shadow $\Delta_A$ from the triple pomeron and the quasielastic diffractions for two target nuclei: Lead (Pb,A=208) and Calcium (Ca,A=40). All integrations in (\ref{eq23}) and (\ref{eq26}) were done numerically with an extensive help of {\sl Mathematica} (more details are given in Appendix B). \medskip The single nucleon densities $\rho({\bf s},z)$ were taken in the standard form \begin{equation} \rho({\bf s},z)={1\over V(1+e^{(r-r_A)/a})} \,\,\, , \label{eq29} \end{equation} with $r=\sqrt{{\bf s}^2+z^2}$, $r_A=1.2 \, \hbox{fm} \, A^{1/3}$, and $a= .54 \, \hbox{fm}$. \begin{equation} V= \int d^2sdz(1+e^{(r-r_A)/a})^{-1}\,\, , \label{eq30} \end{equation} so that $\int d^2sdz \rho({\bf s},z)=1$. \medskip We assumed $\Delta_p= .3$ which , through relation (\ref{eq4}) and using $N_c=3$ gives the strong coupling constant $\alpha= .11.$ This is an acceptable set of parameters characterizing the BFKL pomeron. We set $n_{eff} = 6.7$ to recover the fit to the experimental data of the $F_2$ of the nucleon \cite{BP96a,BP96b,B96}. \medskip In all integrations we restricted the values of $x_p$ to \begin{mathletters} \label{31} \begin{equation} x_p={x\over\beta}\leq .1 \,\,\,, \label{eq31a} \end{equation} which, as seen from (\ref{eq7}), amounts to excluding distances $(z_1 - z_2)\leq 2 \, {\rm fm}$ from the double- and higher multiplicity interactions; physically a reasonable condition to impose. This was done because in construction of the triple pomeron contribution to $\Delta_A$, Eq. (\ref{eq23}), see also Appendix A, we explicitly used the approximation of small $a_p$, hence small $x_p$. We have checked that changing the upper limit for $x_p$ (\ref{eq31a}) by a factor $1.5$ or $2$ does not change significantly $\Delta^{TP}_A$, at the values of $x$ for which we have done the calculations, namely $x \leq .01$. This gives the lower limit for integration over $\beta$: $\beta>10x$. 
\medskip The upper limit for $\beta$, on the other hand, should be taken close to unity because small masses give important contributions to shadowing. So, the integration range for $\beta$ is taken as \begin{equation} 10x < \beta < 1\,\,\,. \label{eq31b} \end{equation} \end{mathletters} \medskip All this makes almost all {\it asymptotic} formulae for the ``triple pomeron'' given in Refs. \cite{BP96a,BP96b,B96} unacceptable for our purposes, because they lead to infinities for integrations over $\beta$ close to unity. \medskip We wish to make it clear: by extending our calculation to $\beta$ close to $1$, hence to very small masses $M$, we go beyond the region of $\beta$ for which the original formulation of Refs. \cite{NPR95,NPRW96,BP96a,BP96b,B96} was devised. We make this extrapolation because {\it the complete integral representations of various} $\Sigma_nP_nf_nf_n$, {\it given also in} \cite{BP96a,BP96b,B96}, {\it lead to well-defined finite results, even for} $\beta\to 1$, and only they are employed in our calculations. The price to pay is the loss of simple analytic expressions and the necessity of numerical integrations. Although the asymptotic form of $\sigma_{tot}=\Sigma_nP_n\sigma_n$ does not lead to any such problems, to be consistent with our treatment of $\Sigma_nP_nf_nf_n$, we also used its integral representation (\ref{eq1}). \medskip We set the cutoff parameter, $r^*$, of the integration over $\rho$, following the discussion of Ref. \cite{BP96b}, at $r^* = r_0$. Unfortunately, in the case of nuclear targets the saturation of the values of the integrals over $\rho$ does not look as good as in the case of the nucleon target (see Ref. \cite{BP96b}, Fig. 2). Therefore, we should treat $r^*$ as a parameter with a lower bound equal to $r_0$. This implies that by taking $r^*= r_0$ we tend to underestimate the shadow. \medskip One more general comment: we assume that the contributions to $\Delta_A$ of the triple pomeron, the quasielastic transverse, and the quasielastic longitudinal processes are additive. In other words, we do not introduce any interference between them. \bigskip In Fig. 1 a,b the total shadow $\Delta_A=\Delta^{TP(T)}_A+\Delta^{TP(L)}_A+\Delta^{QE(T)}_A+\Delta^{QE(L)}_A$ is presented for Pb and Ca targets. As we can see, the pattern of all curves for these two targets is very similar; only the size of the shadow for Pb is a factor of two larger than for Ca. We have also checked through explicit calculations that all components of $\Delta_A$ for Pb are larger than those for Ca by the same factor $(\approx \,2)$. Therefore, we will, henceforth, show only the results for Pb. \medskip In Fig. 2 a,b we show $\Delta^{TP}_A$ with (a) and without (b) the presence of the real part of the forward scattering amplitudes. Comparing this figure with Fig. 1 we see that $\Delta^{TP}_A$ dominates $\Delta_A$, and that the presence of $\hat \alpha\not=0$ makes shadowing smaller. In fact, a nonzero real part makes $|\Delta_A|$ smaller: by 16\% at very small $x$ and by 60\% (!) at $x= .01$. (The role of the real parts of the forward amplitudes at various $x$ has been recently discussed in \cite{S96,BC94}.) \medskip In Fig. 3 a,b we show the correction factor for multiple scattering higher than double, $C_{ms}(x,Q^2)$, with (a) and without (b) the presence of $\hat\alpha$. We can see that in either case $C_{ms}$ is close enough to unity to justify our procedures in the construction of $\Delta_A$.
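\medskip As a simple consistency check of the nuclear input, the normalization (\ref{eq30}) of the Saxon-Woods profile (\ref{eq29}) can be verified with a few lines; the sketch below (Python) reduces the three-dimensional integral to a radial one and confirms that $\int d^2s\,dz\,\rho({\bf s},z)=1$ for the two nuclei considered here.
\begin{verbatim}
# Sketch: normalization V of Eqs. (29)-(30),
#   V = 4*pi * int_0^infty r^2 dr / (1 + exp((r - r_A)/a)),
# with r_A = 1.2 fm A^(1/3) and a = 0.54 fm.
import numpy as np
from scipy.integrate import quad

def saxon_woods_V(A, a=0.54):
    r_A = 1.2 * A**(1.0 / 3.0)                              # fm
    f = lambda r: 4.0 * np.pi * r**2 / (1.0 + np.exp((r - r_A) / a))
    V, _ = quad(f, 0.0, r_A + 20.0 * a)
    return V                                                # fm^3

for A in (40, 208):                                         # Ca and Pb
    print(A, saxon_woods_V(A))
# rho(s, z) = 1/(V*(1 + exp((r - r_A)/a))) then integrates to unity.
\end{verbatim}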
\begin{figure}[htbp] \bigskip \centerline{\epsfxsize=13.0cm \epsfbox{fig1.eps}} \bigskip \caption{ The total shadow, $\Delta_A$, for (a) Pb and (b) Ca nuclei. $-\Delta_A$ is plotted {\it vs} $\ln(Q^2/\hbox{GeV}^2)$ for four values of the Bjorken scaling variable $x$. The ratio of the real to imaginary parts of the forward amplitudes is set at $\hat\alpha=\tan({\textstyle{1\over 2}}\pi\Delta_p)\approx .5$ .} \end{figure} \medskip The following conclusions result from inspection of the Figs. 1 --- 3: (i) In the region $x <10^{-3}$ our calculation shows an appreciable shadowing, increasing with decreasing $x$. The $x$ dependence of $\Delta_A$ follows approximately the power law \begin{equation} \Delta_A \,\, \sim \,\, x^{-\lambda}, \label{eq32} \end{equation} with $\lambda \approx 0.25$, somewhat smaller than the assumed value of $\Delta_p = 0.3$. An indication of saturation is seen at $Q^2 = 4 \, \hbox{GeV }^2 \,$. (ii) $\Delta_A$ decreases slowly with increasing $Q^2$. The effect is somewhat stronger at small $x$ and large $Q^2$. (iii) Multiple scattering corrections account for less than 20 percent of the value of $\Delta_A$. (iv) The existence of the real parts of the forward amplitudes makes shadowing appreciably smaller, especially at $x$ close to $x= 10^{-2}$. \bigskip \begin{figure}[htbp] \centerline{\epsfxsize=9.5cm \epsfbox{fig2.eps}} \bigskip \caption{The ``triple pomeron'' contribution to $\Delta_A$ with $\hat\alpha\not=0$ (a), and with $\hat\alpha=0$ (b), for Pb target.} \end{figure} \medskip Several comments are in order: (a) The rather weak $Q^2$ dependence of $\Delta_A$ seems puzzling if one takes into account that in the dipole model the total (virtual) photon-hadron cross section decreases as $Q^{-1}$ at large $Q$ \cite{M94,NPR95,B96}. One would thus naively expect $\Delta_A \, \sim \, \langle f^2 \rangle / \langle f \rangle$ to decrease also as $Q^{-1}$ which is another version of the Bjorken-Gribov \cite{G70,B76} paradox. In the dipole model the paradox is resolved (and approximate scaling of $\Delta_A$ recovered) due to the cascade nature of the intermediate state which interacts inside the nucleus. This cascade nature implies very strong correlations between the dipoles in the onium representing the virtual photon so that the double dipole density does not resemble at all the product of single densities and both $\langle f^2 \rangle$ and $\langle f \rangle$ behave as $Q^{-1}$ (apart from logarithmic corrections). The approximate scaling of $\Delta_A$ follows. It is interesting to observe that this explanation is radically different from that proposed in the parton model (first paper of \cite{NZ91}, \cite{B76,FS89}). (b) It should be pointed out that, although our calculation is based on perturbative QCD (as is the whole dipole approach), it nevertheless requires two ``non-perturbative'' parameters ($r_0$ and $n_{eff}$) describing structure of the nucleon. These parameters were estimated from the data on the total (virtual) photon-proton cross section as they cannot be calculated in the dipole model. Particularly important for the absolute value of the shadowing turns out to be $n_{eff}$, the average number of ``onia'' forming the proton. This confirms the point of view that the perturbative QCD alone cannot describe fully deep inelastic scattering even at very small $x$ \cite{JDB96}. It is rather remarkable, however, that the required ``non-perturbative'' input is so restricted. 
(c) As was already pointed out in the main text, our calculation relies heavily on the calculation of the deep inelastic diffraction reported in \cite{BP96a,BP96b}. This is a consequence of the close relation between nuclear shadowing and diffraction (see e.g. \cite{ABKSS96}). Nevertheless, it is worth mentioning that --- due to the power-law tail in the impact parameter dependence of the quasielastic and triple pomeron amplitudes --- nuclear shadowing is much more sensitive to the details of the effective cut-off in the photon-nucleon impact parameter plane than is the total diffractive cross section. At this point the two calculations are substantially different. (d) Our treatment of multiple scattering is only approximate. To improve on this point, it is necessary to calculate directly at least the contribution of interactions with 3 nucleons, which would give an estimate of the accuracy of our approximation. This seems a feasible, although probably complicated, calculation. \begin{figure}[htbp] \centerline{\epsfxsize=9.0cm \epsfbox{fig3.eps}} \bigskip \caption{The correction factor $C_{ms}(x,Q^2)$ for higher than double scatterings with $\hat\alpha\not=0$ (a), and with $\hat\alpha=0$ (b), for Pb target.} \end{figure} \acknowledgments This work was supported in part by the KBN Grant No 2 P03B 083 08 (A.B. and W.C.), by the Stiftung fur Deutsch-Polnische Zusammenarbeit project 1522/94/LN (W.F.), and by a PECO grant from the EEC Programme ``Human Capital Mobility'', Network ``Physics at High Energy Colliders'' (Contract Nr: ERBICIPDCT 940613). One of the authors (W.C.) thanks the Institute of Nuclear Physics in Krak\'ow for financial support. \vspace{2.0cm}
\section{DFT Embedding} Density Functional Theory (DFT) is one of the most powerful tools for the \emph{ab initio} calculation of the physical and chemical properties of materials, being both efficient and accurate. Many implementations of DFT exist, which differ in the approach taken to approximating the unknown density functional that describes the contribution to the energy of the electrons that is \emph{not} due to the external potential. The basic problem of DFT is to minimise the functional \begin{equation} E[\rho]=T_s[\rho]+ J[\rho]+E_{xc}[\rho]+\int V_{ext}({\mathbf{r}}) \rho({\mathbf{r}}) d^3{\mathbf{r}} \end{equation} where $T_s[\rho]$ is the non-interacting kinetic energy functional, $J[\rho]$ is the Hartree energy, $E_{xc}[\rho]$ is the `exchange-correlation' energy which takes into account all non-classical electron-electron interactions, and $V_{ext}({\mathbf{r}})$ is the external potential of the system of interest. To obtain the ground state energy this must be minimised with respect to variations in $\rho({\mathbf r})$, subject to the constraint of a fixed number of electrons. The Hartree and external potential energies can be simply evaluated for a trial charge density, and accurate approximations are available for $E_{xc}[\rho]$, but no explicit form is known for the non-interacting kinetic energy. The charge density for the interacting electrons can be identified with the charge density of a `reference' non-interacting electron gas, and this reference system solved self-consistently to yield the energy and charge density for the interacting system; this is the Kohn-Sham method. Due to orthogonalisation requirements, solving for this reference system scales as $O(N^3)$, where $N$ is the number of electrons, which limits the size of system that can be considered\cite{3}. There are $O(N)$ methods available that take advantage of the `nearsightedness' of the density matrix, but at present these are only applicable where the number of basis functions per atom is small, such as tight-binding or atomic-orbital calculations. Another approach is to employ approximate kinetic energy functionals, and minimise the functional directly. Calculations of this sort are cheap and quick, but the approximations available in the literature are generally not sufficient for structural optimisation, let alone chemical accuracy. A further approach is embedding. In many cases (e.g.\ an adsorbate on a surface) we can divide the system into two regions of space, region $I$ (e.g.\ the adsorbate and a few surface atoms) and region $II$ (e.g.\ the rest of the surface). Region $II$ is largely the same as that for a simpler system that may easily be solved for, whereas region $I$ is where all of the interesting physics occurs. It would obviously be computationally advantageous to solve for region $II$ first, and then solve for region $I$ taking into account the influence of region $II$ in some way. This `embedding' approach has received a great deal of attention, and a large number of methods have been presented in the literature\cite{4,5,6}. Many methods approach this as a boundary value problem, at the wavefunction level, but we examine a different approach which starts at the more flexible DFT level, and was first presented by Cortona\cite{1} and Wesolowski\cite{2}. We start with Eq.
(1) written as \begin{eqnarray} E[\rho] & = & T_s[\rho_1]+T^{nadd}_s[\rho_1,\rho_2]+T_s[\rho_2] \nonumber \\ & + & J[\rho]+E_{xc}[\rho]+\int V_{ext}({\mathbf{r}}) \rho({\mathbf{r}}) d^3{\mathbf{r}} \end{eqnarray} where $\rho({\mathbf r})=\rho_1({\mathbf r})+\rho_2({\mathbf r})$, $\rho_1({\mathbf r})$ is the charge density of the embedded system that we shall be varying, and $\rho_2({\mathbf r})$ is the substrate charge density that is kept constant. This defines the non-additive kinetic energy, $T^{nadd}_s[\rho_1,\rho_2]$, as \begin{equation} T^{nadd}_s[\rho_1,\rho_2]=T_s[\rho_1+\rho_2]-T_s[\rho_1]-T_s[\rho_2]. \end{equation} Minimising Eq. (2) with respect to variations in $\rho_1({\mathbf r})$ leads to the Euler-Lagrange equation \begin{equation} \frac{\delta T_s[\rho_1]}{\delta \rho_1} + \frac{\delta T^{nadd}_s[\rho_1,\rho_2]}{\delta \rho_1} +V_{KS}[\rho;{\mathbf{r}}]=\mu \end{equation} where $\mu$ is the chemical potential, $V_{KS}[\rho;{\mathbf{r}}]$ is the usual Kohn-Sham potential associated with density $\rho({\mathbf{r}})$, and a new `embedding potential' term is present. In the same manner as for the Kohn-Sham case, this leads to $\rho_1({\mathbf{r}})$ being the solution of the `Kohn-Sham' equations associated with Eq. (4) at self-consistency, but with an effective potential given by \begin{eqnarray} \lefteqn{ V^{eff}[\rho;{\mathbf{r}}] = V_{KS}[\rho;{\mathbf{r}}] + \frac{\delta T^{nadd}_s[\rho_1,\rho_2]}{\delta \rho_1} } \nonumber \\ & = V_{KS}[\rho;{\mathbf{r}}] + \left( \frac{\delta T_s[\rho_1+\rho_2]}{\delta \rho_1} - \frac{\delta T_s[\rho_1]}{\delta \rho_1} \right). \end{eqnarray} As applied previously, this method has employed basis functions localised to their host atoms, which essentially constrains the charge density to a localised region and to a particular functional form. These previous applications have been to weakly interacting and insulating systems, whereas here we explore the possibility of examining strongly interacting metallic systems using an unbiased plane wave basis. \section{Approximate Kinetic Energy Functionals} As given above, we have only re-expressed the original problem in a slightly different form. In order for this approach to be useful, it must be possible to express the non-additive kinetic energy accurately. Of course we do not know an analytic form for this interaction energy, but it is expected to be small, and to vanish when there is no overlap between the embedded and substrate systems. This suggests the use of the approximate kinetic energy functionals available in the literature to construct $T_s^{nadd}$ and its functional derivative. Two forms of approximate kinetic energy functional are considered. The first is a semi-local form, incorporating an `enhancement factor' analogous to the Generalised Gradient Approximation (GGA) of exchange-correlation functionals. This takes the form \begin{equation} T^{enh}_s[\rho]=\frac{3}{5}(3\pi^2)^\frac{2}{3} \int \rho^\frac{5}{3} F(t) d^3{\mathbf{r}} \end{equation} where \begin{equation} t=\frac{|\nabla \rho|^2}{\rho^\frac{8}{3}}, \end{equation} and $F(t)$ is the enhancement factor. The Thomas-Fermi functional is a special case of this, where $F(t)=1$, the 1$^{st}$ order gradient expansion corresponds to $F(t)=1+at$ with $a$ an appropriate constant, and the von Weizacker approximation corresponds to $F(t)=bt$ with $b$ again an appropriate constant. The function $F(t)$ is generally chosen to provide appropriate limiting behaviour for the functional, and parameters are often chosen to fit data, theoretical or experimental.
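As an illustration of how a functional of the form of Eq. (6) is evaluated in practice, the following minimal sketch (Python/NumPy) computes it on a uniform real-space grid with finite-difference gradients. The Gaussian model density, the grid parameters and the numerical value of the gradient-expansion constant are placeholders chosen purely for illustration, and the energy unit simply follows whatever convention is implied by the prefactor in Eq. (6); this is not the plane-wave machinery used in the calculations reported below.
\begin{verbatim}
# Sketch: evaluating the semi-local functional of Eq. (6) on a uniform
# real-space grid.  The Gaussian model density, the grid and the constant
# a_GEA of the F(t) = 1 + a t form are placeholders for illustration only;
# the prefactor is taken verbatim from Eq. (6).
import numpy as np

C_TF = 0.6 * (3.0 * np.pi**2)**(2.0 / 3.0)

def T_enh(rho, h, F):
    """Eq. (6): C_TF * int rho^(5/3) F(t) d^3r, with t from Eq. (7)."""
    gx, gy, gz = np.gradient(rho, h)
    t = (gx**2 + gy**2 + gz**2) / rho**(8.0 / 3.0)    # Eq. (7)
    return C_TF * np.sum(rho**(5.0 / 3.0) * F(t)) * h**3

# normalised Gaussian model density (one electron) on a 64^3 grid
n, L = 64, 12.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.pi**(-1.5) * np.exp(-(X**2 + Y**2 + Z**2))

a_GEA = 0.01   # placeholder constant for the F(t) = 1 + a t form
print("Thomas-Fermi (F = 1):", T_enh(rho, L / n, lambda t: np.ones_like(t)))
print("Gradient expansion  :", T_enh(rho, L / n, lambda t: 1.0 + a_GEA * t))
\end{verbatim}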
The functional derivative of this can be expressed most concisely as \begin{eqnarray} \lefteqn{ \frac{\delta T^{enh}_s[\rho]}{\delta \rho} = \frac{3}{5}(3\pi^2)^\frac{2}{3} \times} \nonumber \\ & \left( \frac{5}{3} \rho^\frac{2}{3} F(t) -\frac{8}{3} \frac{|\nabla \rho|^2}{\rho^2} F'(t) -2 \nabla \cdot \left(\frac{\nabla \rho}{\rho} F'(t) \right) \right) \end{eqnarray} which differs from the expression obtained by a direct application of the usual formulae\cite{7}. Severe aliasing problems arise if the standard form is applied, as occurs for the exchange-correlation potential\cite{8}, but this alternative analytic form greatly reduces these numerical difficulties. The above functional has many deficiencies, the primary one being its limited non-locality. Explicitly non-local functionals have been investigated in the literature, with the introduction of an analytic form that integrates over the contributions from the charge density at each pair of points in real space\cite{9}. This can be further generalised by adding a third order term that integrates the contributions from triplets of points in space, and higher order terms. Unfortunately these are generally extremely computationally expensive to evaluate. One exception to this is a form proposed by Wang and Teter\cite{10}, Perrot\cite{11} and Smargiassi and Madden\cite{12}, where the non-local term is expressed as a convolution integral which can be evaluated efficiently in reciprocal space. This approximate functional is given by \begin{eqnarray} T^{nloc}_{\alpha}[\rho]= \frac{3}{5}(3\pi^2)^{\frac{2}{3}} \int \rho^{\frac{5}{3}} d^3 {\mathbf{r}} -\frac{1}{2} \int \rho^{\frac{1}{2}} \nabla^2 \rho^{\frac{1}{2}} d^3 {\mathbf{r}} \nonumber \\ + \int \rho^{\alpha}({\mathbf{r}}) w_{\alpha}( {\mathbf{r}} - {\mathbf{r}}') \rho^{\alpha}({\mathbf{r}}') d^3 {\mathbf{r}} d^3 {\mathbf{r}}' \end{eqnarray} where the first term is the Thomas-Fermi contribution, the second the von Weizacker contribution, and the final term a non-local contribution. The parameter $\alpha$ is arbitrary, and $w_{\alpha}( {\mathbf{r}} - {\mathbf{r}}')$ is chosen such that the functional has the correct linear response for a homogeneous non-interacting electron gas with the same average charge density as the charge density of interest. The functional derivative of this approximation is given by \begin{eqnarray} \lefteqn{ \frac{ \delta T^{nloc}_{\alpha} }{ \delta \rho } = (3\pi^2)^{\frac{2}{3}} \rho^{\frac{2}{3}} -\rho^{-\frac{1}{2}} \nabla^2 \rho^{\frac{1}{2}} } \nonumber \\ & & +2 \alpha \rho^{\alpha-1}( {\mathbf r} ) \int \rho^{\alpha}({\mathbf r}') w_{\alpha}({\mathbf r}-{\mathbf r}') d^3 {\mathbf r}'. \end{eqnarray} Our aim is to assess the usefulness of these approximations for carrying out the embedding procedure described in the previous section. \section{Results} To investigate this partially frozen density approach we examine our first test case: fcc aluminium with a conventional 4-atom cubic unit cell. The 3 face-centred atoms are taken to be the substrate system, and this structure is solved for first to provide $\rho_2({\mathbf r})$ and $T_s[\rho_2]$. A standard plane-wave method\cite{13} is used, with a lattice constant of $a_0=4.05$~\AA, a plane-wave cut-off of $200$ eV, 35 ${\mathbf k}$ points in the irreducible wedge, and the Goodwin-Needs-Heine local pseudopotential\cite{14}. Exchange-correlation is described by the LDA.
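The reason the non-local term in Eq. (9) is affordable is that the double integral is a convolution, so it can be evaluated with fast Fourier transforms on the same grid that carries the charge density; this term also enters the effective potential of Eq. (5) at every self-consistency step. The following is a minimal sketch of that bookkeeping only (Python/NumPy, periodic box, $\alpha=\frac{1}{2}$). The Gaussian kernel $\tilde w(G)$ and the test density are placeholders, since the true $w_\alpha$ is fixed by the linear-response condition described above and is not reproduced here.
\begin{verbatim}
# Sketch: evaluating int rho^a(r) w(r-r') rho^a(r') d^3r d^3r'  via FFTs.
# The kernel is supplied in reciprocal space; here a placeholder Gaussian
# is used -- the real w_alpha must be tabulated separately.
import numpy as np

def nonlocal_term(rho, box, w_of_G, alpha=0.5):
    n = rho.shape[0]
    dV = (box / n)**3
    f = rho**alpha
    G = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)   # angular wavenumbers
    GX, GY, GZ = np.meshgrid(G, G, G, indexing="ij")
    Gnorm = np.sqrt(GX**2 + GY**2 + GZ**2)
    # (w*f)(r) = IFFT[ w~(G) FFT[f] ]; w~ is the continuous Fourier
    # transform, so the grid-volume factors of the two steps cancel.
    conv = np.fft.ifftn(w_of_G(Gnorm) * np.fft.fftn(f)).real
    return np.sum(f * conv) * dV

w_of_G = lambda G: np.exp(-0.5 * G**2)     # placeholder kernel, NOT w_alpha
n, box = 32, 10.0
x = np.linspace(0.0, box, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = 0.03 * (1.0 + 0.2 * np.cos(2.0 * np.pi * X / box))
print(nonlocal_term(rho, box, w_of_G))
\end{verbatim}
The same convolution appears in the final term of Eq. (10), which is one reason this class of functional remains affordable in a plane-wave setting.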
Once this substrate is constructed, the embedded Kohn-Sham calculation is carried out as a standard plane-wave calculation, but with the trial potential given by Eq. (5) and the total energy given by Eq. (2). These additional terms describe the substrate, and the calculation involves only the three electrons introduced by the embedded atom at the corner of the unit cell; the 9 substrate electrons are taken into account entirely through their charge density. Parameters of the calculation are chosen to be the same as for the substrate calculation. It should be made clear that for the substrate calculation we are solving for the lattice of face-centred atoms and their accompanying electrons, whereas for the embedded calculation we are solving for the \emph{entire} fcc system, although only the electrons associated with the embedded (corner) atom are provided with a Kohn-Sham representation. \begin{table} \begin{tabular}{lccc} \hline \hline Functional & $E[\rho]$/eV & $\Delta E$/eV & $R$/\% \\ \hline $T_{PW86}$ &-59.68 & -1.35 & 7.2 \\ $T_{\frac{1}{2}}^{nloc}$&-58.41 & -0.08 & 1.2 \\ Kohn-Sham &-58.33 & - & - \\ \hline \end{tabular} \caption{Total energies per atom, errors in the total energy relative to the Kohn-Sham result, and the percentage error $R$ in the charge density.} \end{table} We discuss results for an enhancement factor approximation (Eq. (6)) with the Perdew and Wang '86 enhancement factor\cite{15} ($T_{PW86}$), \begin{eqnarray} F(s)&=& (1+1.296s^2+14s^4+0.2s^6)^{\frac{1}{15}} \nonumber \\ s&=& \frac{1}{2 (3 \pi^2)^{ \frac{1}{3} } } t^{ \frac{1}{2}}, \end{eqnarray} and the non-local linear-response corrected functional, Eq. (9) with $\alpha=\frac{1}{2}$ ($T_{\frac{1}{2}}^{nloc}$). Although calculations have been carried out for a large number of different enhancement factor functionals, there was no significant difference in the results, and any small differences do not affect our conclusions. Table 1 shows the errors in the total energies, and the error in the charge density expressed as the mean absolute deviation as a percentage of the mean density. It is immediately apparent that the non-local functional provides the superior approximation. From this we conclude that by applying the non-local functional an accurate total energy and charge density can be obtained by this DFT embedding procedure for an essentially metallic and strongly interacting system. Semi-local enhancement factor functionals do not result in a useful accuracy. Next we move on to the second test case, a model surface/adsorbate system, the type of system we hope to apply the method to in the future. This system also places far more demand on the method since the charge densities are considerably more inhomogeneous. We consider a $\sqrt{2} \times \sqrt{2}$ super-cell with one `adsorbate' per cell, a 3-layer $(100)$ slab and 5 equivalent vacuum layers. The adsorbate is centred on a four-fold hollow site. The substrate ($\rho_2({\mathbf r})$) is chosen to be the lower two layers, and embedding calculations were performed with three embedded atoms making up the upper surface layer and the adsorbate. All calculations were performed with 10 ${\mathbf k}$ points in the irreducible wedge. We chose to examine the potential energy curves produced by varying the adsorbate/surface distance, and compare these with the equivalent full Kohn-Sham results. We also compare the embedding results with Kohn-Sham results for a 1-layer $(100)$ slab to discover whether the embedding method can reproduce the interaction between the adsorbate + top layer and the lower two layers.
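For reference, the Perdew and Wang '86 enhancement factor quoted above is elementary to evaluate; the minimal sketch below (Python; the sample values of $s$ are arbitrary) simply tabulates $F(s)$ and shows that $F\to1$, i.e.\ the Thomas-Fermi limit, is recovered as $s\to0$.
\begin{verbatim}
# Sketch: the Perdew-Wang '86 enhancement factor used for the T_PW86
# results, F(s) = (1 + 1.296 s^2 + 14 s^4 + 0.2 s^6)^(1/15), with
# s = t^(1/2) / (2 (3 pi^2)^(1/3)).
import numpy as np

def s_from_t(t):
    return np.sqrt(t) / (2.0 * (3.0 * np.pi**2)**(1.0 / 3.0))

def F_pw86(s):
    return (1.0 + 1.296 * s**2 + 14.0 * s**4 + 0.2 * s**6)**(1.0 / 15.0)

for s in (0.0, 0.5, 1.0, 2.0):
    print(s, F_pw86(s))      # F(0) = 1 recovers the Thomas-Fermi limit
\end{verbatim}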
\begin{figure}[h] \epsfbox{fig.eps} \caption{Adsorption energy as a function of surface/adsorbate distance vertically above a four-fold hollow. The dashed line is the 3-layer + adsorbate Kohn-Sham result, the dotted line is the 1-layer + adsorbate Kohn-Sham result, and the solid line is the result of the embedding calculation. Zero energy is the sum of the surface and isolated adsorbate total energies. } \end{figure} Figure 1 shows the potential energy curves for the embedded calculation with the non-local functional, for a full Kohn-Sham calculation and for a Kohn-Sham calculation incorporating only one layer of the surface. Results are promising, with the embedding results agreeing well with the full 3-layer Kohn-Sham calculation. Equilibrium results for the embedded system are a surface/adsorbate distance of $1.82$ \AA\ and an adsorption energy of $-2.07$ eV, compared to Kohn-Sham results of $1.77$ \AA\ and $-2.04$ eV, i.e.\ errors of $0.05$ \AA\ and $0.03$ eV respectively. This differs greatly from the 1 layer/adsorbate system ($1.48$ \AA\ and $-2.26$ eV). No results are presented for semi-local enhancement factor functionals, since these differ negligibly from the Kohn-Sham 1 layer/adsorbate layer results, indicating that these functionals cannot take into account the coupling between the substrate and the adsorbate + top layer for this system. We conclude that the DFT embedding approach, with a non-local functional, can provide accurate results for two systems with very different charge densities. Both of these systems are metallic and strongly interacting, so this approach may be useful for investigating a wide range of large systems. Future work will consist of the application of this method to further systems, both for further validation of its accuracy and for the study of large adsorbate/surface systems.
\section{Introduction}\label{sec:intro} The investigation of super-Poissonian and sub-Poissonian light sources plays a significant role in the history of quantum optics~\cite{hbt1956interferometer,short1983observation}. The photon-number distributions of these sources display variances that are, respectively, wider or narrower than the benchmark Poissonian statistics of a coherent state with the same average photon number~\cite{agarwal2012quantumoptics}. Experimentally, these properties can be conveniently characterized by Mandel's $Q$ parameter, with $Q > 0$ for super-Poissonian sources and ${-1 \leq Q < 0}$ for sub-Poissonian sources~\cite{mandel1979subpoissonian}. Given either type of source, it is also interesting to consider the physical processes that can be used to actively transform the emitted super-Poissonian light into sub-Poissonian light, and vice versa. Quintessential examples include the use of optical nonlinearities such as two-photon absorption~\cite{gilles1993tpa} or photon blockades~\cite{birnbaum2005photonblockade} to transform super- into sub-Poissonian light, and amplification~\cite{hong1985conditions,boyd2008propagation} or phase randomization~\cite{li2020groundglass} to transform sub- into super-Poissonian light. More recently, it has been shown that conditional measurement processes such as photon addition and photon subtraction can also be used to implement these statistical transformations~\cite{agarwal1991nonclassical,ourjoumtsev2006kittens,zavatta2008subtracting,barnett2018statistics}. Here we show that the relatively new conditional measurement process of zero-photon subtraction (ZPS) can also transform certain sub-Poissonian states into super-Poissonian states (and vice versa), despite no photons being added to or subtracted from the system. Other related work includes performing statistical transformations by manipulating the initial parameters in atomic fluorescence~\cite{arnoldus1983conditions}, sideband squeezing~\cite{grosse2007antibunching}, photon catalysis~\cite{bartley2012multiphoton}, and displacement-operation~\cite{deoliveira1990properties} experiments. \begin{figure}[t] \includegraphics[scale=0.45,trim={270 180 270 155},clip]{fig1.pdf} \caption{Overview of a typical zero-photon subtraction (ZPS) setup. An input state $\hat{\rho}_{in}$ enters one input port of a beamsplitter with variable reflectance $R$, and the detection of zero reflected photons at $D_{1}$ heralds a noiselessly attenuated output state $\hat{\rho}_{out}$. Despite no photons being removed from the system, ZPS can dramatically transform the photon statistics of certain input states. These effects can be observed and quantified by measuring trends in a relative attenuation parameter $K(R)$ using an auxiliary detector $D_{2}$.} \label{fig:overview} \end{figure} Figure~\ref{fig:overview} shows an overview of a typical ZPS setup. ZPS was originally proposed in the context of noiseless attenuation~\cite{micuda2012noiseless}, and its counterintuitive ability to reduce the mean photon number of input states has now been demonstrated in several experiments~\cite{allevi2010subtraction,zhai2013pnr,bogdanov2017multiphoton,maganaloaiza2019multiphoton,katamadze2020multimode,nunn2022modifying}. As illustrated in the figure, an input state $\hat{\rho}_{in}$ with average photon number $\langle \hat{n} \rangle_{in}$ passes through a beamsplitter with variable reflectance $R$, and the output state is only accepted when no photons are reflected.
This is accomplished by actively ``heralding on zero'' photons (HoZ) using detector $D_{1}$~\cite{nunn2021heralding}, which occurs with a probability of success $P_{s}$ that decreases as $R$ is increased. Upon successful operation, the properties of the heralded output state $\hat{\rho}_{out}$ can then be measured with an auxiliary detection system represented by $D_{2}$. It can be shown that this conditional measurement process implements the basic transformation $\ket{n} \rightarrow t^{n} \ket{n}$ in the Fock state basis (here, transmittivity $|t|^{2} \equiv T = 1- R$), which results in an overall attenuation (i.e., $\langle \hat{n} \rangle_{out} < \langle \hat{n} \rangle_{in}$) for all but Fock state inputs~\cite{gagatsos2014heralded}. Roughly speaking, this attenuation effect can be understood by considering that larger-$n$ terms in the Fock state expansion of a general input state are less likely to ``survive'' the HoZ process, resulting in a conditional re-weighting of the Fock state coefficients towards smaller-$n$ terms in the heralded output state. Here we extend this idea to show how ZPS can also transform the statistical properties of input states, and then use these transformations to define a set of nonclassicality criteria based on ZPS measurements. As a simple example of the ability of ZPS to perform statistical transformations, consider an input superposition state $\ket{\psi}_{in} = \frac{1}{\sqrt{2}}(\ket{1} + \ket{5})$, which has a mean photon number $\langle \hat{n} \rangle_{in} = 3$ and a {\em positive} Mandel $Q$ parameter $Q_{in} \approx 0.33$, denoting super-Poissonian photon statistics. After ZPS with a standard 50:50 beamsplitter $(R = \frac{1}{2})$, the output state becomes $\ket{\psi}_{out} = \frac{4}{\sqrt{17}}(\ket{1} + \frac{1}{4}\ket{5})$, which has an attenuated mean photon number $\langle \hat{n} \rangle_{out} \approx 1.2$ and a {\em negative} Mandel $Q$ parameter $Q_{out} \approx -0.28$. This negative value denotes sub-Poissonian photon statistics and thus non-classical properties of the heralded transmitted light~\cite{agarwal2012quantumoptics}. Combined with a well-known ``no-go'' theorem which states that conditional photon-counting measurements at a beamsplitter cannot produce nonclassical output states unless the input is also nonclassical~\cite{ban1996statistics,kim2002entanglement}, this simple example shows how even a super-Poissonian input state may possess other ``hidden'' non-classical properties that could be revealed by ZPS measurements. In this paper, we formalize this argument by characterizing the ZPS measurement process through a relative attenuation parameter $K(R)$ and deriving nonclassicality criteria based on its behavior. While $K(R)$ typically changes monotonically as beamsplitter reflectance $R$ is increased, we find that non-monotonic behavior arises for ``transformable'' states that change their sub- or super-Poissonian character, and that this ``transformability'' is always nonclassical. The remainder of the paper is structured as follows: In Section~\ref{sec:atten} we define the ZPS relative attenuation parameter $K(R)$ and explore its relationship with $Q$. We illustrate the key features of $K(R)$ by comparing its behavior for three basic (non-transformable) states with two simple examples of transformable states. 
In Section~\ref{sec:noncl}, we extend the concept of ``transformability'' to define ZPS-based nonclassicality criteria, and in Section~\ref{sec:predict} we describe a method to predict which input states will statistically transform under ZPS. In Section~\ref{sec:exam} we consider two more rich examples of ZPS input states-- the displaced squeezed state \cite{dodonov1994oscillations} and the catalyzed coherent state \cite{lvovsky2002catalysis}-- which illustrate the concept of transformability over a limited range of input state parameter space. Finally, in Section~\ref{sec:det} we consider the effects of realistic detectors on experimentally observing these ZPS-based statistical transformations. \section{Relative attenuation parameter}\label{sec:atten} When exactly zero photons are reflected by the beamsplitter in the setup of Fig. \ref{fig:overview}, ZPS is successful and {\em noiselessly attenuates} the input state $\hat{\rho}_{in}$~\cite{micuda2012noiseless}. This process is described by the nonunitary operator $t^{\hat{n}}$, producing a modified state $\hat{\rho}_{out}$ with resulting mean photon number: \begin{equation}\label{eqn:nout} \langle \hat{n} \rangle_{out} = \frac{\sum_n p_n n T^n}{\sum_n p_n T^n}, \end{equation} where $\{ p_{n} \}$ are the diagonal elements of $\hat{\rho}_{in}$ in the Fock basis. With the exception of pure Fock state inputs, this always results in $\langle \hat{n} \rangle_{out} < \langle \hat{n} \rangle_{in}$~\cite{gagatsos2014heralded}. This can be seen by differentiating Eq.~\ref{eqn:nout} with respect to $T$: \begin{equation}\label{eqn:dndt} \frac{d\langle \hat{n} \rangle_{out}}{dT} = \frac{1}{T}\left[\frac{\sum_n (n - \langle \hat{n} \rangle_{out})^2 p_nT^n}{\sum_n p_nT^n}\right] = \frac{\langle (\Delta n)^2\rangle_{out}}{T} \geq 0, \end{equation} and noting that as $T$ decreases ($R$ increases), $ \langle \hat{n} \rangle_{out}$ is nonincreasing and only remains constant in the case of zero photon number variance $ \langle ( \Delta n)^2 \rangle_{out} = 0$. We quantify this photon number reduction through the relative attenuation parameter $K(R)$, which is defined as the ratio of the mean output photon number {\em with} HoZ (ZPS attenuation) to the mean output photon number {\em without} HoZ (``ordinary'' beamsplitter attenuation)~\cite{nunn2022modifying}: \begin{equation} K(R) \equiv \frac{\langle \hat{n} \rangle_{out} }{(1-R) \langle \hat{n} \rangle_{in}} \label{eqn:kdef} \end{equation} This particular definition facilitates the use of coherent states (Poissonian statistics) as a benchmark in experimental measurements. Because a coherent state impinging on the beamsplitter will produce two completely uncorrelated coherent states in the output ports~\cite{kim2002entanglement}, HoZ in the reflected port makes no difference on $\langle \hat{n} \rangle_{out}$ in the transmitted port. Consequently, $K(R) = 1$ for all $R$ for any coherent state input $\ket{\alpha}$. In contrast, for any pure Fock state input $\ket{n}$ (sub-Poissonian statistics), ZPS will not reduce $\langle \hat{n} \rangle_{out}$ while ``ordinary'' beamsplitter attenuation grows with $R$, causing $K(R)$ to increase monotonically. Conversely, for any thermal state input $\hat{\rho}_{th}$ (super-Poissonian statistics), ZPS will reduce $\langle \hat{n} \rangle_{out}$ {\em more} than ``ordinary'' beamsplitter attenuation~\cite{zhai2013pnr,nunn2022modifying}, causing $K(R)$ to decrease monotonically with $R$. 
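These three baseline behaviors are easy to reproduce directly from Eqs.~\ref{eqn:nout} and \ref{eqn:kdef}. The following minimal sketch (Python; the common mean photon number of 3, the Fock state $\ket{3}$, and the truncation of the distributions are choices made only for illustration) prints $K(R)$ for a coherent, a Fock, and a thermal input:
\begin{verbatim}
# Sketch: K(R) from Eqs. (1) and (3) for truncated photon-number
# distributions (the cutoff N_MAX is a numerical convenience only).
import numpy as np
from math import factorial

N_MAX = 80
n = np.arange(N_MAX + 1)

def K_of_R(p, R):
    T = 1.0 - R
    nbar_in = np.sum(p * n)
    nbar_out = np.sum(p * n * T**n) / np.sum(p * T**n)
    return nbar_out / (T * nbar_in)

nbar = 3.0
p_coh = np.array([np.exp(-nbar) * nbar**k / factorial(k) for k in n])  # coherent
p_fock = np.zeros(N_MAX + 1); p_fock[3] = 1.0                          # |3>
p_th = (nbar / (1.0 + nbar))**n / (1.0 + nbar)                         # thermal

for R in (0.2, 0.5, 0.8):
    print(R, K_of_R(p_coh, R), K_of_R(p_fock, R), K_of_R(p_th, R))
# coherent: K = 1; Fock: K = 1/(1-R) rises; thermal: K = 1/(1+nbar*R) falls
\end{verbatim}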
These trends in the behavior of $K(R)$ for different types of input state statistics can be formally seen by noting from Eq.~\ref{eqn:dndt} that the effects of ZPS are highly dependent on photon number variance, and by extension Mandel's $Q$ parameter, defined $Q = \frac{\langle ( \Delta n)^2 \rangle}{\langle \hat{n} \rangle} - 1$~\cite{mandel1979subpoissonian}. Combining the results of Eqs.~\ref{eqn:nout}-\ref{eqn:kdef}, one can obtain expressions for $Q$ in terms of the measurable quantity $K$: \begin{equation}\label{eqn:qout} Q_{out}(R')=-\frac{(1-R')}{K(R')}\frac{dK}{dR}\Big|_{R=R'} \end{equation} for the output state, and for the input state $(R=0)$: \begin{equation}\label{eqn:qin} Q_{in}=-\frac{dK}{dR}\Big|_{R=0} \end{equation} In both cases, the slope of $K(R)$ is determined by the sign of $Q$, with negative slopes denoting super-Poissonian statistics ($Q > 0$) and positive slopes denoting sub-Poissonian statistics ($Q < 0$). Consequently, the initial slope of $K(R)$ near $R = 0$ determines whether the input state was super- or sub-Poissonian~\cite{nunn2022modifying}, while any {\em changes} in the sign of the slope as $R$ is increased denote the statistical transformations of interest here. \begin{figure}[t] \includegraphics[scale=0.50,trim={210 60 220 40},clip]{fig2.pdf} \caption{Plots of the ZPS relative attenuation parameter $K(R)$ for five different input states. Non-monotonic behavior is a signature of a nonclassical input state, with local extrema denoting transformations between super- and sub-Poissonian photon statistics in the output state. Plots (a) - (c) represent standard baseline examples of ``non-transformable'' input states, while plots (d) and (e) represent simple examples of states that are ``transformable'' by the ZPS process. For comparative purposes, $\langle \hat{n} \rangle_{in} = 3 $ in examples (a) - (c).} \label{fig:fig2} \end{figure} Figure~\ref{fig:fig2} illustrates these ideas by showing plots of relative attenuation $K(R)$ for five different input states. By definition, each state begins with $K=1$ at $R=0$ ($\langle \hat{n} \rangle_{out}=\langle \hat{n} \rangle_{in}$), and the plots diverge from this common point as reflectance increases. Plots (a) - (c) show the baseline examples of the three non-transformable states mentioned above ($|\alpha\rangle$, $|n\rangle$, and $\hat{\rho}_{th}$) which show monotonic behavior with initial slopes determined by Eq.~\ref{eqn:qin}. Plot (d) corresponds to the transformable state described in Section~\ref{sec:intro}, $|\psi\rangle_{in} = \frac{1}{\sqrt{2}}(|1\rangle + |5\rangle)$, which shows non-monotonic behavior with a local minimum near $R \sim 0.4$. Past this point, we see the state becomes sub-Poissonian according to Eq.~\ref{eqn:qout}. The super- to sub-Poissonian statistical transformation revealed by this non-monotonic behavior in $K(R)$ can be loosely understood in the following way: when $R = 0$, the equal weighting of $|1\rangle$ and $|5\rangle$ in $|\psi\rangle_{in}$ yields a photon number variance larger than that of the coherent state benchmark (i.e., super-Poissonian), resulting in a negative initial slope for $K(R)$. As $R$ is increased, ZPS increasingly drives the equally weighted superposition towards the Fock state $|1\rangle$, causing a reduction in the photon number variance below that of the coherent state benchmark (i.e., sub-Poissonian) and thus a transition to a positive slope for $K(R)$. 
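The crossover just described for plot (d) can be located numerically with the same ingredients, using Eq.~\ref{eqn:qout} with a finite-difference slope; a minimal sketch (Python; the finite-difference step and the sampled reflectances are arbitrary choices) is:
\begin{verbatim}
# Sketch: K(R) and Q_out(R) for the input (|1> + |5>)/sqrt(2) of plot (d),
# locating the sign change of the slope of K(R) near R ~ 0.4.
import numpy as np

p = np.zeros(6); p[1] = 0.5; p[5] = 0.5      # photon-number distribution
n = np.arange(6)

def K(R):
    T = 1.0 - R
    nbar_out = np.sum(p * n * T**n) / np.sum(p * T**n)
    return nbar_out / (T * np.sum(p * n))

def Q_out(R, h=1e-5):
    dKdR = (K(R + h) - K(R - h)) / (2.0 * h)  # finite-difference slope
    return -(1.0 - R) * dKdR / K(R)           # Eq. (4)

for R in (0.1, 0.3, 0.4, 0.5, 0.7):
    print(R, round(K(R), 4), round(Q_out(R), 4))
# Q_out changes sign from positive to negative where K(R) has its local
# minimum, i.e. where the output becomes sub-Poissonian.
\end{verbatim}
For this state the sign change occurs between $R=0.4$ and $R=0.5$, consistent with the minimum visible in Fig.~\ref{fig:fig2}(d) and with the $R=\frac{1}{2}$ example of Section~\ref{sec:intro}.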
Finally, plot (e) in Figure~\ref{fig:fig2} displays a reverse transformation from sub- to super-Poissonian statistics. Here we use the input state $|\psi ' \rangle_{in} = \frac{1}{\sqrt{10}} |1\rangle + \frac{3}{\sqrt{10}} |5\rangle$, which is heavily weighted towards $|5\rangle$ and thus starts with a small (sub-Poissonian) photon-number variance and a positive initial slope for $K(R)$. As $R$ is increased, ZPS drives the state towards a more equally weighted superposition of $|1\rangle$ and $|5\rangle$, which has a larger (super-Poissonian) photon-number variance, and causes the transition to a negative slope for $K(R)$. As $R$ is further increased (past $R \sim 0.7$), we replicate the behavior seen in plot (d): ZPS drives the superposition closer to the Fock state $|1\rangle$ and $K(R)$ once again has a positive slope. Overall, this state undergoes two statistical transformations-- from sub- to super-Poissonian statistics and back again-- over the full range of $R$. \section{Nonclassicality criteria}\label{sec:noncl} It is well known that conditional measurements at a beamsplitter (such as ZPS) cannot produce a nonclassical output unless the input is also nonclassical~\cite{ban1996statistics,kim2002entanglement}. Here, {\em nonclassical} states are defined in the usual way as those which cannot be expressed as a mixture of coherent states~\cite{dodonov2002nonclassical}. The Glauber-Sudarshan $P$-representation for such states cannot be interpreted as a valid probability distribution, becoming highly singular or taking on negative values \cite{glauber1963coherence,sudarshan1963equivalence}. It is generally accepted that sub-Poissonian statistics are nonclassical by this definition \cite{agarwal2012quantumoptics,dodonov2002nonclassical}. It follows, therefore, that any input state which {\em becomes} sub-Poissonian after ZPS must have been nonclassical to begin with. This useful restriction can be formulated in terms of the relative attenuation parameter $K(R)$. Any classical (i.e., not nonclassical) input state $\hat{\rho}_{in}$ is bound by the following inequality: \begin{equation} \frac{dK}{dR} \leq 0 \quad\text{for all $R$} \label{eqn:dkdrlim} \end{equation} Any state which is sub-Poissonian before or after ZPS will violate the above inequality and certify $\hat{\rho}_{in}$ as nonclassical. Given that $K = 1$ at $R = 0$ by definition, it immediately follows that for classical states: \begin{equation} K(R) \leq 1 \quad\text{for all $R$} \label{eqn:klim} \end{equation} As seen in Figure~\ref{fig:fig2}(a), the coherent state input $|\alpha\rangle$ saturates these bounds with $K(R) = 1$ and $dK/dR = 0$ for all $R$. To summarize, violations of Eqs.~\ref{eqn:dkdrlim} and/or \ref{eqn:klim} represent nonclassicality criteria that can be easily measured with the ZPS setup of Fig.~\ref{fig:overview}, and are based on the ability of ZPS to generate sub-Poissonian statistics in the output. This transformation occurs \textit{if and only if} the slope of $K(R)$ is positive for some value of $R$. Alternatively, a single measurement of $K(R)>1$ for some $R$ is sufficient (though not necessary) to identify such a nonclassical state. Of course, sub-Poissonian input states are guaranteed to trigger both of these nonclassicality criteria. \section{Predicting transformability}\label{sec:predict} From the simple examples in Section~\ref{sec:atten}, it is clear that not all super-Poissonian states can transform into sub-Poissonian states under ZPS, or vice versa. 
We have shown that ``transformable'' states must be nonclassical in Sec.~\ref{sec:noncl}, but the nonclassicality criteria in Eqs.~\ref{eqn:dkdrlim} and \ref{eqn:klim} provide little {\em a priori} insight into which states will actually transform. In this section, we derive sufficient conditions for predicting transformability based on the first few terms of the photon number distribution. The key insight is that if a state only transforms \textit{once} over the full range of $R$, we need only to consider the behavior of $K(R)$ at maximum attenuation to verify if the statistics have changed. In the limit $R\rightarrow 1$, $K(R)$ and its derivative are given by: \begin{align} &\lim_{R\to 1}K(R) = \frac{1}{\langle \hat{n}\rangle_{in}}\left(\frac{p_1}{ p_0}\right) \label{eqn:kend}\\ &\lim_{R\to 1}\frac{dK}{dR} = \frac{1}{\langle \hat{n}\rangle_{in}}\left(\frac{p_1^2 - 2p_0p_2}{p_0^2}\right) \label{eqn:dkdrend} \end{align} Remarkably, these values are entirely determined by the mean photon number $\langle \hat{n}\rangle_{in}$ and first three photon number probabilities $(p_0, p_1, p_2)$ of the input state. For a super-Poissonian input state, transformability amounts to violating the classical bounds in Eqs.~\ref{eqn:dkdrlim} and \ref{eqn:klim}, meaning $dK/dR>0$ or $K>1$ for some value of $R$. Combining this with Eqs.~\ref{eqn:kend} and \ref{eqn:dkdrend}, we obtain the following transformability criteria: \begin{align} p_0 &= 0 \label{eqn:p0lee}\\ p_0\langle\hat{n}\rangle_{in} &< p_1 \label{eqn:p0rule}\\ 2p_0p_2 &< p_1^2\label{eqn:klyshko} \end{align} Satisfying any of the above is sufficient to show that a super-Poissonian input state will transform for sufficiently large $R$. We note that Eq.~\ref{eqn:p0lee} is equivalent to Lee's theorem, which states that any input state with zero vacuum probability $p_{0} = 0$ is nonclassical~\cite{lee1995theorem}, and is often invoked to understand the nonclassical nature of photon-added states \cite{agarwal1991nonclassical,barnett2018statistics}. In the case of ZPS, it is clear that if $p_{0}= 0$, $K(R)$ and its derivative will diverge to infinity as $R \rightarrow 1$, triggering both nonclassicality criteria in Eqs.~\ref{eqn:dkdrlim} and \ref{eqn:klim}. In addition, Eq.~\ref{eqn:klyshko} is equivalent to Klyshko's well-known nonclassicality criterion~\cite{klyshko1996observable}. Thus, all super-Poissonian states which trigger Lee's or Klyshko's nonclassicality criteria are transformable. Conversely, a sub-Poissonian input state will transform if $dK/dR<0$ or $K<1$ for some value of $R$. It follows that satisfying either of Eqs.~\ref{eqn:p0rule} or \ref{eqn:klyshko}, with the inequalities reversed, is sufficient to show that a sub-Poissonian state is transformable. Lee's criteria (Eq.~\ref{eqn:p0lee}) cannot be used to predict transformability for sub-Poissonian states. The above methods for predicting transformability rely only on comparing statistics at $R=0$ and $R\rightarrow1$. Thus, states which transform an \textit{even} number of times over the full range of $R$ may not trigger these criteria, as in the example of Figure~\ref{fig:fig2}(e) which is sub-Poissonian at both extremes. This also holds for super-Poissonian input states, so satisfying Klyshko's inequality (Eq.~\ref{eqn:klyshko}) is sufficient but not necessary to be transformable. 
For example, any mixture/superposition of 0, 2 and 6 photons does not satisfy Klyshko's inequality, yet it can be shown that the specific case of $p_0=0.04$, $p_2=0.48$, $p_6=0.48$ corresponds to a transformable super-Poissonian state. \section{Input state parameters}\label{sec:exam} For many types of quantum optical states, the initial statistical character is determined by a set of input parameters. A trivial example is the two-term Fock state superposition considered in Figs. 2(d) and (e), $|\psi \rangle_{in} = \gamma |1\rangle + \sqrt{(1 - \gamma^{2})} |5 \rangle$, which was initially super- or sub-Poissonian based on the value of $\gamma$. More complex examples include squeezed and/or displaced states, in which the magnitude and angle of squeezing and displacement can result in dramatically different photon statistics~\cite{deoliveira1990properties,grosse2007antibunching}, and conditional state preparation techniques like photon catalysis~\cite{bartley2012multiphoton}. It is interesting to extend this idea to address the following question: for a given type of input state, what regions of its input parameter space will enable it to transform under ZPS? Here we consider two rich examples within this context-- displaced squeezed states, as considered by Dodonov {\em et al.}~\cite{dodonov1994oscillations}, and catalyzed coherent states~\cite{lvovsky2002catalysis}. \subsection{Displaced squeezed state} Noiseless attenuation of general gaussian states was previously considered in Ref.~\cite{gagatsos2014heralded}, and it was shown that ZPS preserves gaussianity. However, we show here that ZPS may not preserve the sub- or super-Poissonian character of these states for a significant portion of their parameter space. We consider a displaced squeezed state with real squeezing parameter $r$ and displacement magnitude $z$. To restrict our space to two variables, the angles of squeezing and displacement are chosen so the two quadrature variances are equal $\sigma_x=\sigma_p$, and their means opposite $\langle \hat{x} \rangle=-\langle \hat{p} \rangle$, following Dodonov {\em et al.}~\cite{dodonov1994oscillations}. Starting with the photon statistics derived in~\cite{dodonov1994oscillations}, the relative attenuation $K(R)$ can be shown to be: \begin{equation} \begin{split} K(R)= \frac{1}{\sinh^2{r} + z^2} \left[\frac{1}{2\left(\coth{r} - (1 - R)\right)}\right. \\- \left.\frac{1}{\coth{r} +\left( 1 - R\right)}\left( \frac{1}{2} - 2 z^2 \frac{\coth{2r} + 1}{1 + (1 - R)\tanh{r}} \right)\right] \end{split} \end{equation} Figure~\ref{fig:SPEC} shows plots of $K(R)$ for multiple combinations of displacement and squeezing parameters $z$ and $r$. For $z=1$ and $r=1$ in Fig.~\ref{fig:SPEC}(c), a local minimum appears. This indicates that after sufficient noiseless attenuation, the super-Poissonian input state undergoes a transition to sub-Poissonian statistics in the output. \begin{figure}[b] \centering \includegraphics[scale=0.36,trim={130 30 140 50},clip]{fig3.pdf} \caption{Relative attenuation $K(R)$ for a displaced squeezed state with parameters (a) $z=1$, $r=0.05$, (b) $z=1$, $r=0.35$, (c) $z=1$, $r=1$, and (d) $z=0.1$, $r=0.1$. Cases (a), (b) and (d) do not exhibit statistical transformations, while case (c) illustrates a transformation from super- to sub-Poissonian statistics under ZPS.} \label{fig:SPEC} \end{figure} There is a wide portion of parameter space for which this transition is possible, as shown by the blue shaded region in Fig.~\ref{fig:regions}(a). 
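Because the expression above is closed form, this behavior is easy to verify numerically. The following minimal sketch (plain NumPy; the grid size and the choice $z=1$, $r=1$ are purely illustrative) transcribes $K(R)$, locates the local minimum, and shows that $K$ also exceeds unity as $R\rightarrow1$, violating Eq.~\ref{eqn:klim} as well:
\begin{verbatim}
import numpy as np

def K_dsq(R, z, r):
    """K(R) for the displaced squeezed state, transcribed from the
    closed-form expression above (coth is written as 1/tanh)."""
    t = 1.0 - R
    coth_r, coth_2r = 1.0 / np.tanh(r), 1.0 / np.tanh(2.0 * r)
    term1 = 1.0 / (2.0 * (coth_r - t))
    term2 = (1.0 / (coth_r + t)) * (
        0.5 - 2.0 * z**2 * (coth_2r + 1.0) / (1.0 + t * np.tanh(r)))
    return (term1 - term2) / (np.sinh(r)**2 + z**2)

R = np.linspace(0.0, 0.999, 2000)
K = K_dsq(R, z=1.0, r=1.0)       # the parameters with a local minimum
i = np.argmin(K)
print(K[0])                       # ~1, as required at R = 0
print(R[i], K[i])                 # minimum K ~ 0.79 near R ~ 0.3
print(K.max())                    # ~1.3 as R -> 1, so Eq. (klim) is violated too
\end{verbatim}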
The bounding curves are calculated analytically in Mathematica. The bottom curve corresponds to the bound of Klyshko's inequality (Eq.~\ref{eqn:klyshko}), while the top curve divides sub- from super-Poissonian input states. This leaves three total regions: (i) the blank bottom-most region, which contains super-Poissonian states with low squeezing, including squeezed vacuum states; (ii) the transformable super-Poissonian states, with moderate displacement and squeezing; and (iii) the upper-most region of sub-Poissonian states, including those with the highest squeezing. There are no cases in which ZPS transforms a sub-Poissonian state of this kind into a super-Poissonian one. \begin{figure} \centering \includegraphics[scale=0.41,trim={165 120 150 120},clip]{fig4.pdf} \caption{Regions shaded blue and red correspond to the parameter spaces for which the input state, (a) a displaced squeezed state and (b) a catalyzed coherent state, has a local minimum and maximum, respectively, in $K(R)$. For (b), these regions overlap. An example from within the overlapping region is shown in Fig.~\ref{fig:CCS}(c). } \label{fig:regions} \end{figure} As discussed in Sec.~\ref{sec:noncl}, the transition in Fig.~\ref{fig:SPEC}(c) is clearly nonclassical according to Eq.~\ref{eqn:dkdrlim}, and likewise satisfies Klyshko's inequality (Eq.~\ref{eqn:klyshko}). However, it is interesting to note that the input has no ``hidden'' higher-order sub-Poissonian statistics as defined in Ref.~\cite{erenso2002higherorder}. In other words, it can be shown that the normalized correlation functions for this state satisfy $g^{(n)}(0)>1$ for all $n$. In this sense, ZPS does not ``reveal'' sub-Poissonian statistics, but rather generates them from other nonclassical aspects of the initial number distribution. \subsection{Catalyzed coherent state} Catalyzed coherent states (CCS) can be generated by mixing a coherent state $\ket{\alpha}$ with a single-photon Fock state at a beamsplitter with reflectance $\Lambda$, and conditioning the output on the detection of exactly one photon in the auxiliary output mode~\cite{lvovsky2002catalysis}. As first proposed by Xu and Yuan, a similar catalysis procedure can be implemented with the signal and idler modes of an optical parametric amplifier (OPA) with an equivalent catalysis parameter $\Lambda=1-1/g^2$, where $g$ is the gain of the OPA~\cite{xu2016catalysis,shringarpure2019generating,barnett1999equivalence}. Here, we consider an input CCS with coherent state amplitude $\alpha$ and catalysis parameter $\Lambda$. It can be shown that the photon number distribution for this state is: \begin{figure}% \centering % \includegraphics[scale=0.36,trim={130 30 140 50},clip]{fig5.pdf} \caption{Relative attenuation $K(R)$ for a catalyzed coherent state with parameters (a) $\Lambda=0.3$, $\alpha=1$, (b) $\Lambda=0.75$, $\alpha=1$, (c) $\Lambda=0.2$, $\alpha=4$, and (d) $\Lambda=0.9$, $\alpha=1$. Cases (a) and (b) exhibit a super-to-sub- and sub-to-super-Poissonian transformation, respectively, while case (c) shows both and (d) shows neither.\label{fig:CCS}} \end{figure} \begin{equation} p_n=\frac{e^{-|\alpha| ^2 (1-\Lambda)} |\alpha| ^{2 n} (1-\Lambda )^{n-1} (\Lambda +\Lambda n-1)^2}{n!
\left[1-\Lambda-|\alpha |^2 \Lambda (\Lambda^2 -4 \Lambda +2) \right]} \end{equation} Using Eqs.~\ref{eqn:nout} and \ref{eqn:kdef}, $K(R)$ is then given by: \begin{equation} K(R)\propto\frac{\splitdfrac{1-4\Lambda(1-\Lambda)-\alpha^2\Lambda(2-5\Lambda)(1-R)}{+\alpha^4\Lambda^2(1-\Lambda)^2(1-R)^2}}{\splitdfrac{1-\Lambda-\alpha^2\Lambda(2-3\Lambda)(1-R)}{+\alpha^4\Lambda^2(1-\Lambda)(1-R)^2}} \end{equation} with the necessary normalization such that $K=1$ at $R=0$. Like the displaced squeezed state, relative attenuation of the CCS can exhibit a local minimum for particular values of the input parameters. In Fig.~\ref{fig:CCS}(a), a minimum is found for $\Lambda=0.3$ and $\alpha=1$. Just like the example in Fig.~\ref{fig:SPEC}(c), this state is super-Poissonian to arbitrarily high order ($g^{(n)}(0)>1$) and satisfies Klyshko's nonclassicality criteria. Unlike the previous example, however, $K(R)$ can also have a local \textit{maximum} for different parameters, as in Fig.~\ref{fig:CCS}(b). In this example, all higher-order statistics of the input state are sub-Poissonian ($g^{(n)}(0)<1$), and yet attenuation by ZPS produces super-Poissonian output statistics for $R\gtrsim 0.52$. The parameter space for local minima and maxima is shown in Fig.~\ref{fig:regions}(b), again calculated analytically with Mathematica. Interestingly, the two regions overlap, allowing for input cases with both a minimum and maximum, as in Fig.~\ref{fig:CCS}(c). Additionally, Klyshko's inequality is satisfied when $\Lambda\lesssim0.365$ and when $\Lambda\gtrsim0.775$. The former region contains all super-to-sub transformations, while the latter contains no transforming states. \section{Realistic detectors}\label{sec:det} In a real experiment, the ability to observe the statistical transformations of interest will critically depend on the performance of detectors $D_{1}$ and $D_{2}$ in Fig.~\ref{fig:overview}. Here we consider the effects of dark counts and inefficiency in both detectors, as well as the lack of photon number resolution (PNR) for $D_2$ in the transmitted output port of the beamsplitter. If heralding detector $D_1$ has a reduced effective efficiency $\eta_1$ (including losses in the reflected mode or before the beamsplitter) then it can be shown that~\cite{nunn2022modifying}: \begin{align} K_{\text{exp}}(R)= K_{\text{ideal}}(R \eta_1)\label{eqn:keff} \end{align} If $\eta_1$ is known, then $K_{\text{ideal}}(R)$ can be recovered over the smaller domain $R\in[0,\eta_1)$. However, if any statistical transformations or other interesting behavior only exist beyond $R=\eta_1$, they can no longer be generated in the transmitted output port. In the limit of $\eta_1\rightarrow0$, ZPS is equivalent to ordinary attenuation, which can never change the sign of Mandel's $Q$ parameter~\cite{alleaume2004singlephoton}. Meanwhile, $D_2$ efficiency $\eta_2$ or loss in the output mode has no effect on $K(R)$ as defined in Eq.~\ref{eqn:kdef}. The numerator and denominator of this ratio would be reduced by the same efficiency factor $\eta_2$, which then cancels. However, the efficiency $\eta_2$ must be carefully considered if using a non-PNR detector $D_2$ (see Fig.~\ref{fig:detectors}). 
In this case, the mean intensity or expected photon number at $D_2$ is replaced with the probability of a click event: \begin{align}\label{eqn:kclick} K_{\text{click}}(R)&\equiv \frac{P(C_2|NC_1)}{P(C_2)} \nonumber \\ &= \frac{ \text{tr}\left\{\hat{B}\hat\rho_{in}\hat{B}^\dagger\hat\Pi^{(NC)}_1\hat\Pi^{(C)}_2\right\}} {\text{tr}\left\{\hat{B}\hat\rho_{in}\hat{B}^\dagger\hat\Pi^{(C)}_2\right\}\text{tr}\left\{\hat{B}\hat\rho_{in}\hat{B}^\dagger\hat\Pi^{(NC)}_1\right\}} \nonumber \\ &= \frac{\sum_np_n\left[(1-R\eta_1)^n-(1-R\eta_1-T\eta_2)^n\right]}{\left[\sum_np_n(1-R\eta_1)^n\right]\left[1-\sum_np_n(1-T\eta_2)^n\right]} \end{align} where the subscripts $1$ and $2$ indicate detection channels $D_1$ and $D_2$, $\hat{B}$ is the unitary beam-splitter operator~\cite{agarwal2012quantumoptics}, and we have used the standard POVMs for non-PNR detectors for ``click'' $(C)$ and ``no-click'' $(NC)$ events~\cite{stevens2013photon}: \begin{align} \hat\Pi^{(C)}_i &= \mathbbm{1} - \hat\Pi^{(NC)}_i \nonumber \\ \hat\Pi^{(NC)}_i &= \sum_n (1-\eta_i)^n\ket{n}\bra{n} \end{align} for $i=1,2$. We find that $\lim_{\eta_2 \to 0}K_{\text{click}}(\eta_1,\eta_2,R)=K(\eta_1R)$ after applying L'H\^{o}pital's rule. Alternatively, the binomial approximation can be applied so long as $|\eta_2 n|\ll1$ and higher-number terms are insignificant. This is similar to the requirements for measuring normalized correlation functions $g^{(n)}$. In these multiphoton coincidence measurements, non-PNR detectors are effective for low efficiency and low counting rates, when the detector response to photon number is approximately linear~\cite{avenhaus2010higherorder}. These measurements are considered loss-tolerant in the same way as the transmitted mode in ZPS. \begin{figure} \centering \includegraphics[scale=0.46,trim={310 150 310 150},clip]{fig6.pdf} \caption{ZPS measurement with two non-PNR detectors. Detector $D_2$ measures the rate of clicks with and without conditioning on a no-click at $D_1$. Detector inefficiencies $\eta_1$ and $\eta_2$ are modeled with beamsplitters that reflect photons into the environment. To approximate $K(R)$ without PNR, the effective efficiency of $D_2$ must satisfy $|\eta_2 n|\ll1$.} \label{fig:detectors} \end{figure} Dark counts at the heralding detector $D_1$ have no effect on $K(R)$, only reducing the probability of success for HoZ~\cite{nunn2021heralding}. These false click events can be safely ignored so long as $D_1$ dark counts are completely uncorrelated with the counts at the other detector $D_2$. In contrast, dark counts at $D_2$ do have an effect on the measured relative attenuation. For some dark count probability $d$ at $D_2$, Eq.~\ref{eqn:kclick} is modified as follows: \begin{equation} K_{\text{dark}}(R)=\frac{P(C_2|NC_1)+d\cdot P(NC_2|NC_1)}{P(C_2)+d\cdot P(NC_2)} \end{equation} If we assume the probability of a simultaneous dark and ``true'' count is negligible (i.e., counting rates are sufficiently low with and without HoZ), then this becomes: \begin{equation}\label{eqn:kdark} K_{\text{dark}}(R)\approx\frac{P(C_2|NC_1)+d}{P(C_2)+d} \end{equation} If the dark count rate $d$ is simply subtracted from both the numerator and denominator, we regain the original ratio $K_{\text{click}}$. In any case, lower $D_2$ dark counts are desirable to improve the signal-to-noise ratio. In summary, the function $K(R)$ can be reliably measured with currently available single-photon detectors, even without PNR capability.
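As a concrete numerical check of Eq.~\ref{eqn:kclick}, the click-based ratio can be evaluated directly for a given photon number distribution and compared with the PNR result at reduced heralding efficiency. The sketch below is a minimal illustration (plain NumPy; the thermal-like test state, the value $\eta_2=10^{-3}$, and the function names are arbitrary choices, and the ideal $K$ uses the same $p_n(1-R)^n$ reweighting assumed in the earlier sketch, with $T=1-R$):
\begin{verbatim}
import numpy as np

def K_click(p, R, eta1, eta2):
    """Click-based relative attenuation of Eq. (kclick) for a photon-number
    distribution p and non-PNR detectors with efficiencies eta1 and eta2."""
    n = np.arange(len(p))
    T = 1.0 - R
    num = np.sum(p * ((1 - R*eta1)**n - (1 - R*eta1 - T*eta2)**n))
    den = np.sum(p * (1 - R*eta1)**n) * (1 - np.sum(p * (1 - T*eta2)**n))
    return num / den

def K_ideal(p, R):
    """PNR result K(R) = <n>_ZPS / ((1-R)<n>_in), as in the earlier sketch."""
    n = np.arange(len(p))
    w = p * (1.0 - R)**n
    return (np.sum(n * w) / np.sum(w)) / ((1.0 - R) * np.sum(n * p))

# Arbitrary test state: geometric (thermal-like) distribution truncated at n = 30
n = np.arange(31)
p = 0.5**(n + 1)
p /= p.sum()

eta1 = 0.9
for R in (0.2, 0.5, 0.8):
    # K_click approaches K(eta1*R) when eta2*n << 1, as stated above
    print(R, K_click(p, R, eta1, eta2=1e-3), K_ideal(p, eta1 * R))
\end{verbatim}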
The heralding detector $D_1$ requires high efficiency to generate the full range of possible output statistics, but is robust to dark counts and other uncorrelated background noise. The output detector $D_2$ has essentially opposite requirements, requiring low effective efficiency (if non-PNR) and relatively low dark counts. \section{Summary \& Conclusions}\label{sec:con} We have shown how the conditional measurement process of zero-photon subtraction (ZPS) can transform certain super-Poissonian states into sub-Poissonian states, and vice versa. These effects can be experimentally observed by measuring the relative attenuation parameter $K(R)$, which is simply defined as the ratio of ZPS-based attenuation to ordinary beamsplitter attenuation. Because the slope of $K(R)$ is proportional to Mandel’s $Q$-parameter, an observed local minimum or maximum as the beamsplitter reflectivity is tuned from $R=0\rightarrow1$ is the signature of a statistical transformation of interest. We described how input states which transform in this way are necessarily nonclassical, which allowed us to establish nonclassicality criteria based on ZPS measurements. The connection between these criteria and Klyshko and Lee’s theorems~\cite{klyshko1996observable,lee1995theorem} showed that certain restrictions on photon number probabilities provide sufficient, but not necessary, conditions to predict \textit{a priori} which input states will transform through the ZPS process. We considered several simple examples of input states that illustrated the basic physics of these ZPS-based statistical transformations, as well as two more complex example states that showed interesting parameter-dependent behavior. In all cases, the effects of realistic detector parameters (low efficiency and dark counts) were found to only moderately reduce the ability to observe the statistical transformations of interest. Consequently, an experimental demonstration could be feasible with currently available detector technology~\cite{eisaman2011detectors}, although high-fidelity preparation of the various quantum optical input states considered here would remain a challenge. \section*{Acknowledgements} We would like to thank J. D. Franson for many valuable discussions pertaining to this research. This work was supported by the National Science Foundation under Grant No. 2013464.
1,116,691,501,198
arxiv
\section{Introduction} The data-driven modeling of complex systems is currently undergoing a revolution. There is unprecedented availability of high-fidelity measurements from historical records, numerical simulations, and experimental data, and recent developments in machine learning and compressed sensing make it possible to extract more from this data. Systems of interest, such as a turbulent fluid, an epidemiological system, a network of neurons, financial markets, or the climate, are high-dimensional, nonlinear, and exhibit multi-scale phenomena in both space and time. However, many systems evolve on a low-dimensional attractor that may be characterized by large-scale coherent structures~\citep{HLBR_turb,guckenheimer_holmes}. System identification comprises a large collection of methods to characterize a dynamical system from data. Many techniques in system identification~\citep{ljung:book}, including dynamic mode decomposition (DMD)~\citep{Schmid2008aps,Rowley2009jfm,schmid:2010,Tu2014jcd} and DMD with control (DMDc)~\citep{Proctor2014arxiv}, are designed to handle high-dimensional data with the assumption of linear dynamics; historically, there have been relatively few techniques to identify nonlinear dynamical systems from data. However, DMD has strong connections to nonlinear dynamics through Koopman operator theory~\citep{Koopman1931pnas,Mezic2004physicad,Mezic2005nd}, which spurred significant interest and developments~\citep{Rowley2009jfm,Tu2014jcd,Budivsic2012chaos,Mezic2013arfm}. A recent breakthrough in nonlinear system identification~\citep{Schmidt2009science} uses genetic programming~\citep{koza1999genetic} to construct families of candidate nonlinear functions for the rate of change of state variables in time. A parsimonious model is chosen from this family by finding a Pareto optimal solution that balances model complexity with predictive accuracy. In a related modeling framework, we have developed an algorithm for the sparse identification of nonlinear dynamics (SINDY) from data~\citep{Brunton2015arxiv}, relying on the fact that most dynamical systems of interest have relatively few nonlinear terms in the dynamics out of the family of possible terms (i.e., polynomial nonlinearities, etc.). This method uses sparsity promoting techniques to find models that automatically balance sparsity in the number of terms with model accuracy. An earlier related algorithm~\citep{Wang2011prl} uses compressed sensing~\citep{Donoho2006ieeetit,Candes2006picm,Baraniuk2007ieeespm}, while our algorithm uses sparse regression~\citep{Tibshirani1996lasso} to handle measurement noise and overdetermined cases when we have more time snapshots than state measurements. There are many other interesting methods recently developed to incorporate sparsity and nonlinear dynamics~\citep{Schaeffer2013pnas,Ozolicnvs2013pnas,mackey2014compressive}. In addition, there are numerous exciting directions in equation-free modeling~\citep{Kevrekidis2003cms}, including the Perron-Frobenius operator~\citep{Froyland2009pd}, cluster reduced-order models based on probabilistic transition between various system behaviors~\citep{Kaiser2014jfm}, and methods for uncertainty quantification and subspace analysis in turbulent flows and the climate~\citep{Majda2007bpnas,Majda2009pnas,Sapsis2013pnas}. Beyond modeling, a goal for many complex systems is active feedback control, as in many fluid dynamic applications~\citep{Brunton2015amr}. 
Extending the data-driven methods above to disambiguate between the effects of dynamics and actuation is a critical step in developing nonlinear input--output models that are suitable for control design. Similar to the extension of dynamic mode decomposition to include the effects of control~\citep{Proctor2014arxiv}, here we extend the SINDY algorithm~\citep{Brunton2015arxiv} to include external inputs and control. We also demonstrate the relationship of SINDY, with and without control, to DMD and Koopman methods, concluding that each of these are variations of model identification from data using advanced regression techniques. \section{Model identification via regression} Here we review various techniques in system identification, including dynamic mode decomposition (DMD), Koopman analysis, and the sparse identification of nonlinear dynamics (SINDY). Each of these methods is cast as a regression problem of data onto models, and the schematic overview of these methods is shown in Fig.~\ref{fig:overview}. \begin{figure*} \begin{center} \begin{overpic}[width=.875\textwidth]{fig01.pdf} \put(22,84){$\mathbf{x}_{k+1} = \mathbf{A}\mathbf{x}_k$} \put(66,84){$\mathbf{x}_{k+1} = \mathbf{A}\mathbf{x}_k+\mathbf{B}\mathbf{u}_k$} \put(15,59.5){$\mathcal{K}y(\mathbf{x}_k) = y(\mathbf{f}(\mathbf{x}_k)) = y(\mathbf{x}_{k+1})$} \put(56,59.5){$\mathcal{K}_*y(\mathbf{x}_k,\mathbf{u}_k) = y(\mathbf{f}(\mathbf{x}_k,\mathbf{u}_k),*) = y(\mathbf{x}_{k+1},*)$} \put(23,29){$\frac{d}{dt}\mathbf{x} = \mathbf{f}(\mathbf{x})$} \put(69,29){$\frac{d}{dt}\mathbf{x} = \mathbf{f}(\mathbf{x},\mathbf{u})$} \put(8.5,74){$\mathbf{X}'$} \put(12.2,74){$=$} \put(17,74){$\mathbf{A}$} \put(23.5,74){$\mathbf{X}$} \put(55.5,74){$\mathbf{X}'$} \put(59.2,74){$=$} \put(63.6,74){$\mathbf{A}$} \put(68.2,74){$\mathbf{B}$} \put(72.2,74){$\mathbf{X}$} \put(72.2,68.75){$\mathbf{\Upsilon}$} \put(39.8,77){\scriptsize$=$} \put(87.1,77){\scriptsize$=$} \put(8.5,47.5){$\mathbf{Y}'$} \put(12.2,47.5){$=$} \put(18.5,47.5){$\mathbf{K}$} \put(27,47.5){$\mathbf{Y}$} \put(55.5,47.5){$\mathbf{Y}'$} \put(59.,47.5){$=$} \put(65,47.5){$\mathbf{K}_*$} \put(76.5,47.5){$\mathbf{Y}$} \put(76.5,40.){$\mathbf{\Upsilon}$} \put(39.2,52){\scriptsize$=$} \put(86.9,51.75){\scriptsize$=$} \put(8.5,18.5){$\dot{\mathbf{X}}$} \put(12.2,18.5){$=$} \put(18.5,18.5){$\mathbf{\Xi}$} \put(26.2,18.5){\small $\mathbf{\Theta}(\mathbf{X})$} \put(54.7,18.5){$\dot{\mathbf{X}}$} \put(58.4,18.5){$=$} \put(66,18.5){$\mathbf{\Xi}$} \put(76.7,13.5){\begin{sideways}$\mathbf{\Theta}(\mathbf{X},\mathbf{\Upsilon})$\end{sideways}} \put(38.9,21){\scriptsize$=$} \put(85.5,21.25){\scriptsize$=$} \end{overpic} \vspace{-.1in} \caption{\small Overview of various methods that use regression to identify dynamics from data.}\label{fig:overview} \end{center} \end{figure*} \subsection{Dynamic mode decomposition} The dynamic mode decomposition (DMD) originated in the fluids community to extract spatial-temporal coherent structures from fluid data sets~\citep{Schmid2008aps,Rowley2009jfm,schmid:2010,Tu2014jcd}. DMD modes are spatially coherent and oscillate at a fixed frequency and/or growth or decay rate. Since fluids data is typically high-dimensional, DMD is built on the proper orthogonal decomposition (POD)~\citep{HLBR_turb}, effectively recombining POD modes in a linear combination to enforce the temporal coherence. 
First, we collect multiple snapshots of high-dimensional fluid data in time $\mathbf{x}_k = \mathbf{x}(k\Delta t)\in\mathbb{R}^n$, where $n$ represents the number of spatial measurements, which may easily represent millions or billions of degrees of freedom. In DMD, we seek a linear operator $\mathbf{A}$ that approximately relates these snapshots, at least for short periods of time: \begin{eqnarray} \mathbf{x}_{k+1} &\approx& \mathbf{A}\mathbf{x}_k.\label{Eq:LinearDynamics} \end{eqnarray} If we collect $m+1$ snapshots and arrange them in two matrices: \begin{eqnarray} \mathbf{X}= \begin{bmatrix} \vline & \vline & & \vline \\ \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_m\\ \vline & \vline & & \vline \end{bmatrix} ,~~ \mathbf{X}'= \begin{bmatrix} \vline & \vline & & \vline \\ \mathbf{x}_2 & \mathbf{x}_3 & \cdots & \mathbf{x}_{m+1}\\ \vline & \vline & & \vline \end{bmatrix},\label{Eq:DMDStates} \end{eqnarray} it is possible to relate these matrices by: \begin{eqnarray} \mathbf{X}' \approx \mathbf{A}\mathbf{X}.\label{Eq:DMD1} \end{eqnarray} In principle, for low-dimensional data it is possible to solve directly for the best-fit linear operator $\bar{\mathbf{A}}$ that minimizes $\|\mathbf{X}'-\bar{\mathbf{A}}\mathbf{X}\|_{F}$ using a least-squares regression, where $\|\cdot\|_F$ is the Frobenius norm. Numerically, the singular value decomposition (SVD) is used to apply the pseudo-inverse of $\mathbf{X}$ to both sides of Eq.~\eqref{Eq:DMD1}. However, when the state dimension $n$ is large, $\mathbf{A}$ is high-dimensional with $n^2$ elements, and might not be representable computationally. Instead, we apply the proper orthogonal decomposition to the data $\mathbf{X}$ and compute a reduced operator $\tilde{\mathbf{A}}$ that acts on POD coefficients. It is possible to reconstruct the leading eigenvalues and eigenvectors of the high-dimensional $\mathbf{A}$ matrix from the eigendecomposition of $\tilde{\mathbf{A}}$. \begin{enumerate} \item Compute the economy-sized SVD of $\mathbf{X}$: \begin{eqnarray} \mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^*, \end{eqnarray} where $\mathbf{U}\in\mathbb{R}^{n\times m}$, $\mathbf{\Sigma}\in\mathbb{R}^{m\times m}$, and $\mathbf{V}\in\mathbb{R}^{m\times m}$. \item Compute the projection of the least-squares solution $\bar{\mathbf{A}} = \mathbf{X}'\mathbf{X}^{\dagger}$ onto POD modes, given by the columns of $\mathbf{U}$, where $\mathbf{X}^{\dagger}=\mathbf{V}\mathbf{\Sigma}^{-1}\mathbf{U}^*$ is the pseudo-inverse: \begin{eqnarray} \tilde{\mathbf{A}} = \mathbf{U}^*\bar{\mathbf{A}}\mathbf{U} = \mathbf{U}^*\mathbf{X}'\mathbf{V}\mathbf{\Sigma}^{-1}. \end{eqnarray} Note that $\tilde{\mathbf{A}}$ is an $m\times m$ matrix, where $m$ is the number of time snapshots; this matrix advances POD coefficients forward in time. \item Compute the eigendecomposition of $\tilde{\mathbf{A}}$: \begin{eqnarray} \tilde{\mathbf{A}}\mathbf{W} = \mathbf{W}\mathbf{\Lambda}. \end{eqnarray} \item The eigenvalues in $\mathbf{\Lambda}$ are also eigenvalues of the full $\mathbf{A}$ matrix, and these are called \emph{DMD eigenvalues}. The corresponding eigenvectors of $\mathbf{A}$, called \emph{DMD modes}, are constructed as~\citep{Tu2014jcd}: \begin{eqnarray} \mathbf{\Phi} = \mathbf{X}'\mathbf{V}\mathbf{\Sigma}^{-1}\mathbf{W}. \end{eqnarray} \end{enumerate} It is also possible to truncate the SVD at order $r$, retaining only the first $r$ POD modes, and resulting in an $r\times r$ matrix $\tilde{\mathbf{A}}$.
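The four steps above translate directly into a few lines of linear algebra. The following sketch (plain NumPy; the function name, the optional truncation argument, and the toy data are illustrative) carries them out and verifies the result on snapshots generated by a known linear map:
\begin{verbatim}
import numpy as np

def dmd(X, Xp, r=None):
    """Exact DMD of the snapshot matrices X and Xp (= X'), following the four
    steps above.  Returns DMD eigenvalues, DMD modes, and the reduced operator."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)   # step 1: economy SVD
    if r is not None:                                  # optional rank-r truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    V, Sinv = Vh.conj().T, np.diag(1.0 / s)
    A_tilde = U.conj().T @ Xp @ V @ Sinv               # step 2: projected operator
    Lam, W = np.linalg.eig(A_tilde)                    # step 3: eigendecomposition
    Phi = Xp @ V @ Sinv @ W                            # step 4: DMD modes
    return Lam, Phi, A_tilde

# Toy check on snapshots of a known linear system x_{k+1} = A x_k
rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])
X = np.empty((2, 50))
X[:, 0] = rng.standard_normal(2)
for k in range(49):
    X[:, k + 1] = A_true @ X[:, k]
Lam, Phi, _ = dmd(X[:, :-1], X[:, 1:])
print(np.sort(Lam))                        # matches the eigenvalues of A_true
print(np.sort(np.linalg.eigvals(A_true)))
\end{verbatim}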
The DMD modes $\boldsymbol{\phi}$ are spatially coherent and oscillate and/or grow or decay at the fixed frequency $\lambda$. The dynamic mode decomposition has been applied to a wide range of problems including fluid mechanics~\citep{Rowley2009jfm,Tu2014jcd,Mezic2013arfm}, epidemiology~\citep{Proctor2015ih}, neuroscience~\citep{brunton2014extracting}, robotics~\citep{Berger2014ieee}, and video processing~\citep{grosek2014,Erichson2015arxiv}. However, many of these applications have the ultimate goal of closed-loop feedback control. \subsection{Dynamic mode decomposition with control} To disambiguate the effect of internal dynamics from actuation or external inputs, the dynamic mode decomposition with control (DMDc) was developed~\citep{Proctor2014arxiv}. In DMDc, the linear state dynamics in Eq.~\eqref{Eq:LinearDynamics} are augmented to include the effect of actuation inputs $\mathbf{u}$: \begin{eqnarray} \mathbf{x}_{k+1} &\approx& \mathbf{A}\mathbf{x}_k+\mathbf{B}\mathbf{u}_k.\label{Eq:LinearDynamicsWithControl} \end{eqnarray} We still collect the state snapshots from Eq.~\eqref{Eq:DMDStates}, but now we collect an additional matrix for the control history: \begin{eqnarray} \mathbf{\Upsilon} = \begin{bmatrix} \vline & \vline & & \vline \\ \mathbf{u}_1 & \mathbf{u}_2 & \cdots & \mathbf{u}_m\\ \vline & \vline && \vline \end{bmatrix}. \end{eqnarray} In DMDc, $\mathbf{A}$ and $\mathbf{B}$ are approximated from data via: \begin{eqnarray} \mathbf{X}' \approx \begin{bmatrix} \mathbf{A} & \mathbf{B}\end{bmatrix} \begin{bmatrix} \mathbf{X}\\ \mathbf{\Upsilon}\end{bmatrix}. \end{eqnarray} \subsection{Koopman analysis} DMD is connected to nonlinear systems via the Koopman operator~\citep{Mezic2004physicad,Mezic2005nd,Rowley2009jfm,Tu2014jcd}. The Koopman operator~\citep{Koopman1931pnas} is an infinite-dimensional \emph{linear} operator that describes how a measurement function $y(\mathbf{x})$ evolves through \emph{nonlinear} dynamics: \begin{eqnarray} \mathbf{x}_{k+1} &=& {\bf f}(\mathbf{x}_k). \end{eqnarray} The Koopman operator $\mathcal{K}$ acts on the Hilbert space of scalar measurement functions $y(\mathbf{x})$ as: \begin{eqnarray} \mathcal{K} y(\mathbf{x}_k) =y(\mathbf{f}(\mathbf{x}_k)) = y(\mathbf{x}_{k+1}). \end{eqnarray} That is, the Koopman operator acts on $y$ by the composition of $y$ with the dynamic update $\mathbf{f}$. The DMD algorithm approximates the spectrum of the Koopman operator using linear observable functions (i.e., the observable functions are linear functions of the state, as in $y(\mathbf{x}_k) = \mathbf{x}_k$). However, it was recently shown that linear measurements are not sufficiently rich to analyze nonlinear systems~\citep{Williams2014arxivA}, resulting in the extended DMD (eDMD), which performs a similar DMD regression, but on an augmented data matrix including nonlinear state measurements. Since this algorithm is expensive numerically, a kernel trick was implemented to make the eDMD method as computationally efficient as standard DMD~\citep{Williams2015jnls}. \subsection{Koopman with inputs and control} Similar to how DMD was extended to include inputs and control, Koopman analysis has recently been extended to include inputs and control~\citep{Proctor2016arxiv}. In this Koopman with inputs and control (KIC) framework, scalar measurements of the state and control $y(\mathbf{x},\mathbf{u})$ are advanced through nonlinear dynamics with control: \begin{eqnarray} \mathbf{x}_{k+1} = \mathbf{f}(\mathbf{x}_k,\mathbf{u}_k). 
\end{eqnarray} The Koopman with control operator $\mathcal{K}_*$ is given by: \begin{eqnarray} \mathcal{K}_* y(\mathbf{x}_k,\mathbf{u}_k) =y(\mathbf{f}(\mathbf{x}_k,\mathbf{u_k}),*) = y(\mathbf{x}_{k+1},*). \end{eqnarray} It is important to note that there is a parameterized family of Koopman with control operators $\mathcal{K}_*$, as there is a choice of which future control input $*$ to use. It has been shown that Koopman with inputs and control reduces to DMDc for linear dynamical systems, much as Koopman analysis is numerically computed using DMD for linear systems. \subsection{Sparse identification of nonlinear dynamics (SINDY)} The SINDY algorithm identifies fully nonlinear dynamical systems from measurement data. This relies on the fact that many dynamical systems have relatively few terms in the right-hand side of the governing equations: \begin{eqnarray} \frac{d}{dt}\mathbf{x} = \mathbf{f}(\mathbf{x}).\label{Eq:NonlinearDynamics} \end{eqnarray} Given a library of candidate nonlinear functions, \begin{eqnarray} \mathbf{\Theta}^T(\mathbf{X}) = \begin{bmatrix} \rule[2.2pt]{4em}{0.4pt} & \mathbf{1} & \rule[2.2pt]{4em}{0.4pt} \\ \rule[2.2pt]{4em}{0.4pt} & \mathbf{X} & \rule[2.2pt]{4em}{0.4pt}\\ \rule[2.2pt]{4em}{0.4pt} & \mathbf{X}^2 & \rule[2.2pt]{4em}{0.4pt}\\ & \vdots &\\ \rule[2.2pt]{4em}{0.4pt} & \sin(\mathbf{X}) & \rule[2.2pt]{4em}{0.4pt}\\ \rule[2.2pt]{4em}{0.4pt} & \sin(2\mathbf{X}) & \rule[2.2pt]{4em}{0.4pt}\\ & \vdots & \end{bmatrix}, \end{eqnarray} where $\mathbf{X}$ is the same data matrix as in Eq.~\eqref{Eq:DMDStates}, we may write our dynamical system as: \begin{eqnarray} \dot{\mathbf{X}} = \mathbf{\Xi}\mathbf{\Theta}^T(\mathbf{X}).\label{Eq:SINDY} \end{eqnarray} The coefficients $\mathbf{\Xi}$ in this library are \emph{sparse} for most dynamical systems. Therefore, we employ sparse regression to identify a sparse $\mathbf{\Xi}$ corresponding to the fewest nonlinearities in our library that give good model performance. Choosing a library of candidate dynamics is a crucial step in the SINDY algorithm. The algorithm may be extended to include support for more general nonlinearities. It may also be possible to test different libraries (polynomials, trigonometric functions, etc.) and to incorporate partial knowledge of the physics (fluids vs. quantum mechanics, etc.). Notice that if $\mathbf{\Theta}^T(\mathbf{X})=\mathbf{X}$, then Eq.~\eqref{Eq:SINDY} is equivalent to DMD with $\mathbf{\Xi}=\mathbf{A}$. Each row of Eq.~\eqref{Eq:SINDY} represents a row in Eq.~\eqref{Eq:NonlinearDynamics}, and the sparse vector of coefficients $\boldsymbol{\xi}_k$ corresponding to the $k$-th row of $\mathbf{\Xi}$ is found using a sparse regression algorithm, such as LASSO~\citep{Tibshirani1996lasso}: \begin{eqnarray} \boldsymbol{\xi}_k = \text{argmin}_{\boldsymbol{\xi}_k}\|\dot{\mathbf{X}}_k-\boldsymbol{\xi}_k\mathbf{\Theta}^T(\mathbf{X})\|_2 + \alpha \|\boldsymbol{\xi}_k\|_1, \end{eqnarray} where $\dot{\mathbf{X}}_k$ represents the $k$-th row of $\dot{\mathbf{X}}$. The $\|\cdot\|_1$ term promotes sparsity in the coefficient vector $\boldsymbol{\xi}_k$. The parameter $\alpha$ is selected to identify the Pareto optimal model that best balances low model complexity with accuracy. A coarse sweep of $\alpha$ is performed to identify the rough order of magnitude where terms are eliminated and where error begins to increase. Then this parameter sweep may be refined.
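As a concrete illustration of this regression, the sketch below builds a small polynomial library for a two-state system and recovers an unforced predator-prey (Lotka-Volterra) model from simulated data. It is only a sketch: the parameter values and threshold are arbitrary, sequentially thresholded least squares is used as a simple stand-in for the LASSO step described above, and exact derivatives are used in place of the noise-robust differentiation discussed next.
\begin{verbatim}
import numpy as np

def poly_library(X):
    """Candidate library Theta(X) for a two-state system:
    columns [1, x1, x2, x1^2, x1*x2, x2^2], one row per snapshot."""
    x1, x2 = X
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

def sparse_regression(Theta, dXdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares, used here as a simple stand-in
    for the sparsity-promoting (LASSO) regression described above."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]   # initial dense fit
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0               # discard small coefficients
        for k in range(dXdt.shape[1]):                 # refit the surviving terms
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Toy data: an unforced predator-prey model (parameter values chosen for illustration)
a, b, c, d = 1.0, 0.5, 1.0, 0.25
f = lambda x: np.array([a*x[0] - b*x[0]*x[1], -c*x[1] + d*x[0]*x[1]])
dt, nsteps = 0.01, 2000
X = np.empty((2, nsteps))
X[:, 0] = [3.0, 1.0]
for k in range(nsteps - 1):                            # fourth-order Runge-Kutta
    x = X[:, k]
    k1 = f(x)
    k2 = f(x + 0.5*dt*k1)
    k3 = f(x + 0.5*dt*k2)
    k4 = f(x + dt*k3)
    X[:, k + 1] = x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

dXdt = np.array([f(X[:, k]) for k in range(nsteps)])   # exact derivatives for this sketch
Xi = sparse_regression(poly_library(X), dXdt)
print(np.round(Xi.T, 3))   # rows ~ [0, a, 0, 0, -b, 0] and [0, 0, -c, 0, d, 0]
\end{verbatim}
The same regression applies unchanged when the library is augmented with columns that depend on an input $\mathbf{u}$, which is the extension developed in the following section.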
To approximate derivatives from noisy state measurements, the SINDY algorithm uses the total variation regularized derivative~\citep{Rudin1992physd,Chartrand2011isrnam}. \section{Sparse identification of nonlinear dynamics with control (SINDY\MakeLowercase{c})} Here, we generalize the SINDY method to include inputs and control. In particular, we now consider the nonlinear dynamical system with inputs $\mathbf{u}$: \begin{eqnarray} \frac{d}{dt}\mathbf{x} &=& {\bf f}(\mathbf{x},\mathbf{u}).\label{Eq:NonlinearDynamicsWithControl} \end{eqnarray} The SINDY algorithm is readily generalized to include actuation, as this merely requires building a larger library $ \mathbf{\Theta}(\mathbf{x},\mathbf{u})$ of candidate functions that include $\mathbf{u}$; these functions can include nonlinear cross terms in $\mathbf{x}$ and $\mathbf{u}$. This extension requires measurements of the state $\mathbf{x}$ as well as the input signal $\mathbf{u}$. This generalization is shown in Fig.~\ref{fig:overview} in terms of the overarching regression framework. If the signal $\mathbf{u}$ corresponds to an external forcing, then we solve for the sparse coefficients $\mathbf{\Xi}$ in the following: \begin{eqnarray} \dot{\mathbf{X}} = \mathbf{\Xi}\mathbf{\Theta}^T(\mathbf{X},\boldsymbol{\Upsilon}).\label{Eq:SINDYc} \end{eqnarray} However, if the signal $\mathbf{u}$ corresponds to a feedback control signal, so that $\mathbf{u} = \mathbf{k}(\mathbf{x})$, then it is impossible to disambiguate the effect of the feedback control $\mathbf{u}$ from internal feedback terms $\mathbf{k}(\mathbf{x})$ within the dynamical system; namely, the SINDY regression becomes ill-conditioned. In this case, we may identify the actuation $\mathbf{u}$ as a function of the state: \begin{eqnarray} \boldsymbol{\Upsilon} = \mathbf{\Xi}_u\mathbf{\Theta}^T(\mathbf{X}). \end{eqnarray} To identify the coefficients $\mathbf{\Xi}$ in Eq.~\eqref{Eq:SINDYc}, we perturb the signal $\mathbf{u}$ to allow it to be distinguished from $\mathbf{k}(\mathbf{x})$ terms. This may be done by injecting a sufficiently large white noise signal, or occasionally kicking the system with a large impulse or step in $\mathbf{u}$. An interesting future direction would be to design input signals that \emph{aid} in the identification of the dynamical system in Eq.~\eqref{Eq:NonlinearDynamicsWithControl} by perturbing the system in directions that yield high-value information. \section{Example systems} Here, we demonstrate the SINDY with control algorithm on a simple example predator-prey model with forcing and on the Lorenz equations with external forcing and control. \subsection{Predator-prey model} A predator-prey model with forcing is given by: \begin{subequations} \begin{eqnarray} \dot x_1 & = & a x_1 - b x_1 x_2 + u^2,\\ \dot x_2 & = & -c x_2 + d x_1x_2 \end{eqnarray}\label{Eq:LV} \end{subequations} The variable $x_1$ represents the size of the prey population and $x_2$ represents the size of the predator population; the prey species is actuated with $u^2$. The parameters $a,b,c,$ and $d$ represent the various growth/death rates, the effect of predation on the prey population, and the growth of predators based on the size of the prey population. \begin{figure} \begin{center} \vspace{-.1in} \includegraphics[width=.415\textwidth]{lotka_controlValidation} \end{center} \vspace{-.175in} \caption{\small SINDY and SINDYc model predictions for the forced Lotka-Volterra system in Eq.~\eqref{Eq:LV}.
The training data consists of the Lotka-Volterra system with periodic forcing.}\label{fig:LotkaRecon} \end{figure} \begin{figure*} \begin{center} \begin{tabular}{ccc} \begin{overpic}[width=.3\textwidth]{lorenz_nocontrol.jpg} \put(0,62){(a)} \put(19,62){No forcing or control} \end{overpic} & \begin{overpic}[width=.3\textwidth]{lorenz_forcing.jpg} \put(0,62){(b)} \put(19,62){Forcing: $g(u)=u^3$} \put(44.7,54){$u(t) = .5+\sin(40t)$} \end{overpic} & \begin{overpic}[width=.3\textwidth]{lorenz_control.jpg} \put(0,62){(c)} \put(19,62){Control: $g(u)=u$} \put(44.5,54){$u(t) = 26-x(t)+d(t)$} \end{overpic} \end{tabular} \end{center} \vspace{-.1in} \caption{\small Lorenz system without forcing or control (a), with forcing (b), and with feedback control (c). In each case, the system is integrated for $50$ time units with a $\Delta t=.001$ with the parameters $\sigma=10$, $\beta=8/3$, and $\rho=28$.}\label{fig:Lorenz} \end{figure*} In this example, we force the system sinusoidally with $u(t) = 2\sin(t)+2\sin(t/10)$, and the population response is shown in Fig.~\ref{fig:LotkaRecon} (grey and black). The first $100$ time units are used to train the SINDY and SINDYc algorithms, after which they are validated on the next $100$ time units of forced data. The naive application of SINDY without knowledge of the input results in an unstable model (blue), while the SINDYc algorithm correctly identifies the model structure and parameters in Eq.~\eqref{Eq:LV} to within machine precision in the absence of measurement noise; the SINDYc reconstruction is shown in red. \subsection{Lorenz equations} We also test the SINDYc method on the Lorenz equations: \begin{subequations} \begin{eqnarray} \dot x & = & \sigma(y-x) + g(u) \\ \dot y & = & x(\rho-z) -y \\ \dot z & = & xy-\beta z. \end{eqnarray}\label{Eq:Lorenz} \end{subequations} These equations are examined with various forcing and control models, as shown in Fig.~\ref{fig:Lorenz}. In the case of an external forcing, as in Fig.~\ref{fig:Lorenz} (b), the SINDYc algorithm correctly identifies the model and nonlinear input terms. In the case that the Lorenz system is being actively controlled by state feedback, as in Fig.~\ref{fig:Lorenz} (c), we must add a perturbation signal $d(t)$ to the input to disambiguate the effect of state feedback via $u$ from internal dynamics. For this problem, we use an additive white noise process. In this example, we train the models using $20$ time units of controlled data, and validate them on another $20$ time units where we switch the forcing to a periodic signal $u(t) = 50\sin(10t)$. The SINDY algorithm does not capture the effect of actuation, while SINDYc correctly identifies the model and predicts the behavior in response to a new forcing that was not used in the training data. \begin{figure} \begin{center} \vspace{-.15in} \includegraphics[width=.425\textwidth]{lorenz_controlValidation} \end{center} \vspace{-.3in} \caption{\small SINDY and SINDYc predictions for the controlled Lorenz system in Eq.~\eqref{Eq:Lorenz}. Training data consists of the Lorenz system with state feedback as in Fig.~\ref{fig:Lorenz} (c). After the training period, the input $u$ switches to a periodic signal $u(t) = 50\sin(10t)$.}\label{fig:LorenzRecon} \end{figure} \section{Discussion} In this work, we have generalized the sparse identification of nonlinear dynamics (SINDY) algorithm to include inputs and control. 
This involved generalizing the library of candidate nonlinear terms to include functions not only of the state $\mathbf{x}$, but also of the input $\mathbf{u}$, including cross terms between state and input. This new algorithm is cast in an overarching regression framework in Fig.~\ref{fig:overview}, relating it to other algorithms that determine models from data, including dynamic mode decomposition (DMD), DMD with control, extended DMD, and Koopman analysis. The new method has been tested on a predator-prey model and the Lorenz system with various forcing and control models. The proposed algorithm should scale to the same class of problems where SINDY is useful, since both are built on the same computational architecture. There are a number of interesting directions to extend this work. First, it is important to determine optimal strategies to disambiguate the effect of a state-feedback control signal from internal state dynamics; this may be achieved by additive white noise on the input signal or occasional kicks to the system, but understanding the tradeoffs and benefits of these strategies will be useful. More importantly, it is likely possible to design input sequences that optimally probe complex systems to extract high-value information that will be useful to characterize the system. For example, perhaps perturbing some systems off-attractor will provide valuable information about nonlinear terms in the dynamics if the on-attractor data strongly resembles a linear system. If the state and control variables have different levels of sparsity, it may be possible to use a weighted convex optimization to penalize the state and control sparsity separately. It may also be important to improve the model identification if the control law $\mathbf{u} = \mathbf{k}(\mathbf{x})$ is known. These are promising areas of current and future research. \small
1,116,691,501,199
arxiv
\section{Introduction} Globular clusters (GCs) are ubiquitous and apparently obey a universal formation mechanism \cite{1994ApJ...429..117H}, as evidenced by the constancy of the globular cluster luminosity function (GCLF) \cite{1991ARA&A..29..543H}. However, this constancy of formation appears juxtaposed against the environments of the GCs' host galaxies. The nature of this juxtaposition manifests itself in the variation of cluster specific frequency as a function of galaxy type and environment. In order to compare the globular cluster populations of two disparate galaxies (say an E galaxy compared to an S0), the total number of clusters about each galaxy should be normalized to some trait which both objects have in common. The common measure in use is referred to as the globular cluster specific frequency and is defined as \begin{equation} S_N = N_t \times 10^{0.4(M_V^T + 15.0)} \end{equation} \cite{1981AJ.....86.1627H}, where $N_t$ is the total cluster population and $M_V^T$ is the integrated V magnitude of the galaxy. Observations of the globular cluster systems of brightest cluster galaxies (BCGs) have revealed a compelling relationship between GC specific frequency and the X-ray luminosity ($L_X$) of the host galaxy \cite{1997ApJ...481L..59B,1998AJ....115.1801H}. This relation gives an intuitive explanation of the variation in $S_N$ amongst galaxies of the same type and suggests that galaxy mass is the determining factor in cluster formation. Using X-ray temperatures ($T_X$) to determine the radially dependent gas density within M87 and comparing this to the radial variations in cluster numbers has further revealed that the efficiency of cluster formation is constant with respect to the density of gas \cite{1998JRASC...M}. The evidence that the formation efficiency of globulars is constant with radius is reassuring given the previous result for BCGs, as the relation between cluster numbers and $L_X$ was based solely on clusters within a distance $32h^{-1}$~kpc of the BCG center. The final piece of the $S_N$ puzzle is the variation in $S_N$ between spiral and elliptical galaxies. \section{$S_N$ and galaxy type} Use of the $S_N$ relation leads to the common result that spiral galaxies have a lower specific frequency than do ellipticals (Fig.~\ref{fig:SN}). This result is further emphasized when the high $S_N$ galaxies M87 and NGC 1399 are included and their total cluster populations are compared to their total luminosities. The reason for this variation of specific frequency is not yet well understood, although many competing theories have been proposed and not all of these suggestions are obviously wrong. The development of multi-object spectrographic devices has made it possible to determine the masses of remote galaxies using their cluster populations. These investigations use the clusters as test particles in the potential well of the parent galaxy's mass distribution and provide a reasonable measure of the total mass of the galaxy (for example, Bridges et al. 1997 \nocite{1997MNRAS.284..376B}). Although the full decomposition of the mass distribution is not possible \cite{1993AJ....106.2229M}, the use of a model-dependent mass estimator such as the projected mass estimator (PME) \cite{1981ApJ...244..805B} makes possible a comparison of the masses of the parent galaxies using the clusters themselves. Several studies of galaxy halo masses using GCs as probes are now underway, and the number of published results makes clear the link between total cluster population and the mass of the parent galaxy.
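For reference, the scale of $S_N$ implied by the definition above can be seen with purely hypothetical numbers: a galaxy with $M_V^T = -21$ and a total population of $N_t = 1000$ clusters has \begin{displaymath} S_N = 1000 \times 10^{0.4(-21.0 + 15.0)} = 10^{3 - 2.4} \simeq 4, \end{displaymath} while the same cluster population about a galaxy one magnitude more luminous ($M_V^T = -22$) gives $S_N \simeq 1.6$.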
\section{Mass-Specific Frequency} Table~\ref{table:mass} lists those galaxies whose masses have been determined via spectroscopy of the cluster population. For each of these galaxies the mass has been estimated using the PME modified to account for extended mass distributions \cite{1985ApJ...298....8H}. These estimates are not necessarily the most accurate for each individual galaxy (in the cases of M87 \cite{1997ApJ...486..230C} and NGC 1399 \cite{1998AJ....115..105K} the large numbers of velocities available make possible a more complex model analysis); however, by using the same estimator similar biases are introduced for each galaxy and so a fair comparison of the relative masses of these galaxies is possible. Also shown is the radius out to which cluster velocities have been measured. When using the PME to determine the mass of a distribution, the mass determined is that enclosed by the most distant test particles. Thus the mass quoted in the table is in fact the mass enclosed within the radius given for that galaxy. \begin{deluxetable}{lclcccll} \tablecaption{Galaxy Properties\label{table:mass}} \small \tablehead{ \colhead{Galaxy} & \colhead{Type} & \colhead{$M_V^T$} & \colhead{Mass} & \colhead{$N_t$} & \colhead{Radius} & \colhead{$S_N$}& \colhead{$S_M$} \nl & & & \colhead{($10^{11}M_\odot$)} & & \colhead{kpc} & & \colhead{$10^{-4}$} } \startdata NGC 1399 & E1/cD & -21.1 & $50\pm10$\tablenotemark{h} & $2100\pm1000$\tablenotemark{a} & 25 & $19\pm6$ & $1.0\pm0.5$ \nl NGC 4486 & E0/cD & -22.4 & $60\pm20$\tablenotemark{i} & $4800\pm500$\tablenotemark{b} & 18 & $14\pm0.5$ & $1.9\pm0.6$ \nl NGC 4472 & E2 & -22.6 & $25\pm5$\tablenotemark{j} & $2200\pm600$\tablenotemark{c} & 18 & $5.6\pm1.7$ & $2.0\pm0.7$ \nl NGC 4594 & Sa & -22.3 & $5\pm1.5$\tablenotemark{k} & $660\pm200$\tablenotemark{d} & 14 & $2.3\pm0.7$ & $1.9\pm0.8$ \nl NGC 3115 & S0 & -21.45 & $10\pm4$\tablenotemark{e} & $415\pm50$\tablenotemark{e} & 19 &$2.0\pm0.5$ & $1.6\pm0.6$ \nl NGC 3031 & Sab & -21.2 & $3\pm1$\tablenotemark{l} & $210\pm30$\tablenotemark{f} & 20 & $0.7\pm0.1$ & $1.5\pm0.6$ \nl M31 & Sb & -21.7 & $4\pm0.4$\tablenotemark{m} & $250\pm100$\tablenotemark{g} & 20 & $0.7\pm0.2$ & $1.5\pm0.9$ \nl \nocite{1991AJ....101..469B} \nocite{1994ApJ...422..486M} \nocite{1981AJ.....86.1627H} \nocite{1992AJ....103..800B} \nocite{1998thesis.phd.jjk} \nocite{1995AJ....109.1055P} \nocite{1994AJ....107..555R} \nocite{1998AJ....115..105K} \nocite{1987AJ.....93..779H} \nocite{Sharples...Private} \nocite{1995AJ....110..620P} \nocite{1993AA....274...87F} \enddata \tablecomments{$S_N$ \cite{1991ARA&A..29..543H} determined using the entire cluster population and full integrated luminosity of the parent galaxy. The value for the specific mass frequency was determined using only clusters interior to the radius to which the mass of the galaxy has been determined. Keys refer to the reference section.} \end{deluxetable} As shown in Table~\ref{table:mass}, the mass-to-light ratios of these various galaxies are not constant. In fact, the immediate indication is that galaxies with high values of $S_N$ are also those with high values of $M/L$. This raises the question of the mass-specific frequency.
Defining a mass-specific frequency as the ratio of a galaxy's cluster population interior to some radius, $R$, to the mass of the galaxy within that radius provides the relation, \begin{equation} S_M = N_T(R) \frac{23.2\times10^{4} M_\odot}{M_{gal}(R)} \end{equation} where $N_T(R)$ is the total cluster population interior to some radius $R$ and $M_{gal}(R)$ is the mass interior to that radius, as determined via the PME. The normalization factor ($23.2\times10^4 M_\odot$) is the mean mass of the Milky Way GCs assuming $M/L = 2$ \cite{1996AJ....112.1487H}. This definition is different from previous ones which relied on a constant value of $M/L$ for a given galaxy type \cite{1991ARA&A..29..543H,1993MNRAS.264..611Z}. \begin{figure} \plottwo{SN.eps}{S_Mass.eps} \caption{a) The specific luminosity frequency of GCs as a function of parent galaxy luminosity. b) The specific mass frequency of GCs as a function of parent galaxy mass.\label{fig:SN}\label{fig:mass}} \end{figure} The values of $S_M$ are listed in column 6 of Table~\ref{table:mass} and the relation between parent galaxy luminosity and $S_M$ is shown in Fig.~\ref{fig:mass}. Clearly there is now no correlation between galaxy luminosity and cluster population and no obvious difference between the relative populations of spirals, ellipticals and cD galaxies; even those previously seen as super-abundant can now be reconciled with a single value of the cluster formation efficiency. The cause of the anomalous $S_N$ values of some cD galaxies and the difference between the $S_N$ of spirals and that of ellipticals must lie in the stellar population. Already in the literature there are some suggestions of this explanation of the $S_N$ values \cite{1997ApJ...481L..59B,1998AJ....115.1801H}. As the efficiency of cluster formation appears to be universal, it must be the star formation efficiency which varies. If the early burst of star formation results in a galactic wind shortly after the clusters formed but while there was still considerable gas left \cite{1998AJ....115.1801H}, then a high $S_N$ system could result, leaving the expelled X-ray gas in the galaxy's halo. Here the obvious discriminator between high $S_N$ and low $S_N$ systems is the rate and density of cluster formation and not its efficiency. At the time of formation of the GCs, the super giant molecular clouds (postulated to be the progenitor objects of GCs \cite{1994ApJ...429..117H}) would provide shielding against the UV radiation of the first generation of star formation; gas ejected from the progenitor would, however, find itself photo-ionized and unable to cool and form stars. \section{Implication for galaxy formation} Accepting the preceding as true results in an implication regarding the star formation rate of a galaxy. Those galaxies with low values of $S_N$ are either more efficient at forming stars than their high-mass cousins, or the initial mass function of star formation in dense galaxies is tilted towards a preference for very massive stars early in that galaxy's history. Although this first round of massive objects still contributes its mass, it no longer contributes to the luminosity of the galaxy. In addition, the universal efficiency of cluster formation implies that cluster formation occurs very early in the galaxy formation process, before the type of galaxy which will form has any chance to influence the number of clusters which it will possess.
A stellar wind that might drive gas away from giant E galaxies could actually be the factor which eventually determines the type of galaxy that will form. Those galaxies with lower densities of initial cluster formation can have their proto-galactic chunks dissipationally settle into a disk, while those which form in dense regions with large numbers of clusters and a strong galactic wind will not manage to drag enough gas down into a plane in order that a disk might form. \section{The universality of the GCLF} As stated in the introduction, the universality of the GCLF is paradoxical when compared to $S_N$; however, when compared to $S_M$ the paradox is removed. The correlation of galaxy mass with total population implies that the majority of clusters form prior to the determination of the final galaxy type. During this epoch the potential well of the proto-galaxy is likely to be much softer, and the destruction of GCs via tidal stripping and evaporation is likely to dominate other modes of destruction. Further, these modes depend mostly on the mass of the clusters formed, and so the proportion destroyed is likely to be constant among different proto-galaxies. The early epoch of cluster formation, when clouds massive enough to eventually form clusters are present, occurs with uniform efficiency in all the environments studied to date. Further measures of the mass distribution about galaxies of various types, sizes, and stages of evolution should solidify this result. The conclusions of this work are: \begin{itemize} \item Globular clusters are created with an efficiency which is independent of galaxy type and environment. \item Given the correlation between cluster numbers and mass and the run of $S_N$ with galaxy type, elliptical galaxies are less efficient at producing stars than are spirals. \end{itemize}
1,116,691,501,200
arxiv
\section{INTRODUCTION} \label{sec:Introduction} High-velocity clouds (HVCs) are clouds in the Galactic halo traveling at high speed relative to the local standard of rest ($\ga$90~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}; \citealt{wakker97}). HVCs play an important role in the Galaxy, by supplying material to fuel star formation. However, their origins are uncertain, with Galactic fountains, gas stripped from satellite galaxies, and remnants from the formation of the Local Group all having been suggested as possible origins (see reviews by \citealt{bregman04} and \citealt{putman12}). The metallicity of an HVC is key to determining its origin, as it allows us to distinguish between material that has or has not been processed through the Galactic disk. The Smith Cloud \citep[SC;][]{smith63} is arguably the best-characterized of the Galaxy's HVCs. Its distance and mass are both known: $12.4 \pm 1.3$~kpc \citep[hereafter L08]{lockman08} and $\ga10^6~\ensuremath{M_{\odot}}$ each in \ion{H}{1} and \ion{H}{2} (L08; \citealt[hereafter H09]{hill09}; \citealt{lockman15}), respectively. The cloud is located approximately 3~kpc from the Galactic midplane, is approximately 1~kpc $\times$ 3~kpc on the sky, and is thought to be moving in the direction of its major axis. Uniquely among HVCs, its three-dimensional velocity is also constrained. From the variation in the line-of-sight velocity across the cloud, L08 concluded that the cloud is approaching the disk at $\sim$70~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}\ on a prograde orbit $\sim$50~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}\ faster than the Galactic rotation. Based purely on its orbit, L08 were unable to favor a Galactic or extragalactic origin for the SC, and pointed out the need to measure its metallicity. Using measurements of the [\ion{N}{2}] emission from the SC's head, H09 determined that the cloud's nitrogen abundance is likely 0.15--0.44 times solar, and concluded that this low abundance supported the idea that the cloud represents new material that is being accreted onto the Galaxy. On the other hand, using \ion{S}{2} absorption lines observed along three sight lines passing through the cloud's tail, \citet[hereafter F16]{fox16} measured the cloud's sulfur abundance to be $0.53^{+0.21}_{-0.15}$ times solar. Based on this, and the orbit determined by L08, F16 concluded that the SC had a Galactic origin (specifically, in the outer disk $\sim$13~kpc from Galactic Center, where the metallicity is half solar). It is important to note that the cloud should have experienced some mixing with the surrounding gas. Using hydrodynamical simulations, \citet{gritton14} showed that the high-velocity material in the tail of an HVC may contain significant amounts of material that originated in the halo as a result. Therefore, the true metallicity of the original cloud is not simply given by measurements of the metallicity of the high-velocity material, but depends also on the halo metallicity and the degree of mixing between the halo and the cloud. The goal of the current study is to determine how extensively mixing has changed the metallicity of an example HVC, the SC. To this end, we present three-dimensional hydrodynamical simulations of an SC-like HVC traveling through the halo (Section~\ref{sec:HydroSims}). 
We use these simulations to quantify the degree of mixing of cloud and halo material (Section~\ref{sec:Mixing}), and examine the relationship between the resulting metallicities and H09's and F16's metallicity measurements (Section~\ref{sec:Comparison}). We discuss our results in Section~\ref{sec:Discussion}. \begin{deluxetable*}{lcc} \tablecaption{Reference Model (Model A) Parameters\label{tab:ModelParameters}} \tablehead{ \colhead{Quantity} & \colhead{Symbol} & \colhead{Value} } \startdata Domain $x$ range & & $-2.56$ to +2.56~kpc \\ Domain $y$ range & & 0 to +2.56~kpc \\ Domain $z$ range & & $-1.28$ to +14.08~kpc \\ Spatial resolution\tablenotemark{a,b} & & 10 to 160~pc \\ Cooling curve metallicity\tablenotemark{c} & \ensuremath{Z_\mathrm{cool}} & $10^{-0.5} \ensuremath{Z_{\odot}}$ \\ Cloud radius & \ensuremath{r_\mathrm{cl}} & 500~pc \\ Cloud hydrogen density & \ensuremath{n_\mathrm{H,cl}} & 0.4~\ensuremath{\mathrm{cm}^{-3}} \\ Cloud temperature\tablenotemark{d} & \ensuremath{T_\mathrm{cl}} & 1500~K \\ Cloud speed & $|\ensuremath{\boldsymbol{v}_\mathrm{cl}}|$ & 150~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}} \\ Ambient halo temperature & \ensuremath{T_\mathrm{h}} & $2\times10^{6}$~K \\ Ambient halo hydrogen density\tablenotemark{e} & \ensuremath{n_\mathrm{H,h}} & $3\times10^{-4}$~\ensuremath{\mathrm{cm}^{-3}} \\ \enddata \tablenotetext{a}{We used adaptive mesh refinement during the simulation. If the whole domain were modeled at full resolution, the grid would consist of $512\times256\times1536$ cells, each of size (10~pc)$^3$.} \tablenotetext{b}{In models Alr and Ahr, the maximum resolution was 20 and 5~pc, respectively.} \tablenotetext{c}{In Model~B, we used $\ensuremath{Z_\mathrm{cool}}=\ensuremath{Z_{\odot}}$.} \tablenotetext{d}{Set by requiring pressure balance between the cloud and the ambient medium. In Model~C, $\ensuremath{T_\mathrm{cl}}=2500$~K.} \tablenotetext{e}{In Model~C, $\ensuremath{n_\mathrm{H,h}}=5\times10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$.} \end{deluxetable*} \section{HYDRODYNAMICAL SIMULATIONS} \label{sec:HydroSims} \subsection{Model Description} We used FLASH \citep{fryxell00} version 4.2 to simulate the hydrodynamical interaction between our model cloud and the halo. We used a three-dimensional Cartesian domain. Our simulations are similar to Model~B in \citet{gritton14}, in that the cloud started at rest at the origin, and the ambient medium flowed upward through the domain with velocity $-\ensuremath{\boldsymbol{v}_\mathrm{cl}}=|\ensuremath{\boldsymbol{v}_\mathrm{cl}}|\boldsymbol{\hat{z}}$, where \ensuremath{\boldsymbol{v}_\mathrm{cl}}\ is the cloud's velocity in the ISM rest frame. This enabled us to study the cloud's evolution over a long period of time, without the need for a prohibitively large domain. However, whereas \citet{gritton14} modeled a quarter-cloud, assuming symmetry about the $x=0$ and $y=0$ planes, here we modeled a half-cloud, assuming symmetry only about the $y=0$ plane. The size of our model domain is summarized in Table~\ref{tab:ModelParameters}, as are the other model parameters and notation, discussed below. Our reference model is called Model~A. Following \citet{galyardt16}, we initialized the cloud as a sphere of radius $\ensuremath{r_\mathrm{cl}}=500$~pc with hydrogen number density $\ensuremath{n_\mathrm{H,cl}}=0.4~\ensuremath{\mathrm{cm}^{-3}}$ and hydrogen to helium ratio of 10 to 1.
Note that, at maximum resolution, the cloud's diameter is covered by 100 cells (for an example comparison, the corresponding number of cells from \citealt{gritton14} is 33). The cloud's density profile follows a hyperbolic tangent function at the cloud edge \citep{kwak11,gritton14}. This yields a total hydrogen mass of $5\times10^6~\ensuremath{M_{\odot}}$, similar to the observed mass of the SC (L08; H09). Also, the model cloud's initial diameter is equal to the observed length of the SC's minor axis ($\sim$1~kpc; note that the cloud is thought to be moving in the direction of its major axis; L08). The model cloud's initial speed relative to the ISM was 150~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}\ (i.e., the ambient medium flowed in the $+z$ direction at 150~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}). Although this is slightly higher than the SC's current speed relative to the ISM ($130\pm14$~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}; L08), the ram pressure of the ISM causes the cloud to decelerate relative to the ISM during the course of the simulation. We assumed that the temperature of the ambient halo is $\ensuremath{T_\mathrm{h}}=2\times10^6$~K \citep[e.g.,][]{henley13,henley15c}. The density of the halo is uncertain, though various observational constraints imply that it does not exceed a few times $10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$, except in the lower halo (see, e.g., discussion in \citealt{kwak11}, Section~2 and \citealt{henley14b}, Section~4.1). \citet{miller13} used \ion{O}{7} absorption line measurements to model the halo density. Their best-fit model implies that the halo density at the SC's current Galactocentric distance (L08) is $6\times10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$. If we follow L08 and use the cloud's current position and velocity to calculate its past trajectory within the Galactic potential of \citet{wolfire95b} (neglecting drag), we find that the cloud would have experienced an ambient density of $\sim$$(\mbox{3--6})\times10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$ over the past $\sim$60~Myr (cf., on this trajectory, the cloud would have been in the disk $\sim$70~Myr ago; L08). For our reference model, we assumed a constant ambient density of $\ensuremath{n_\mathrm{H,h}}=3\times10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$. The temperature of the cloud (1500~K in Model~A) was set by requiring pressure balance between the cloud and the ambient medium. Radiative cooling depends on the metallicity of the gas, which is uncertain in the SC and in its environment. As the halo's metallicity is likely to be subsolar, for our reference model we used the \citet{sutherland93} cooling curve with metallicity $\log(\ensuremath{Z_\mathrm{cool}}/\ensuremath{Z_{\odot}}) = -0.5$ (relative to the \citealt{anders89} solar abundances). Note that radiative cooling was suppressed below $10^4$~K (i.e., the cooling curve was set to zero below this temperature). This prevents runaway cooling of the coolest, densest gas. The physical justification for such a cut-off is that the cool gas will be heated by the ambient ultraviolet radiation field, by photoemission from dust grains \citep{wolfire95a}. However, we did not attempt to model this heating, and we acknowledge that this heating rate will be lower above the disk (where our model cloud resides) than in the disk \citep{wolfire95b}. The simulation time step was limited by multiple criteria and at no time was greater than half of the cooling time in any cell in the model domain. 
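As a consistency check (not part of the FLASH setup itself), the quoted cloud mass and the Model~A and C cloud temperatures follow from simple arithmetic: the hydrogen mass of a uniform sphere of radius \ensuremath{r_\mathrm{cl}}\ and density \ensuremath{n_\mathrm{H,cl}}, and pressure balance written as $\ensuremath{n_\mathrm{H,cl}}\ensuremath{T_\mathrm{cl}}=\ensuremath{n_\mathrm{H,h}}\ensuremath{T_\mathrm{h}}$ (an assumption on our part, which evidently reproduces the tabulated values). A minimal Python sketch:
\begin{verbatim}
import math

PC_CM  = 3.086e18   # parsec in cm
M_H_G  = 1.674e-24  # mass of a hydrogen atom in g
MSUN_G = 1.989e33   # solar mass in g

def cloud_hydrogen_mass(r_cl_pc, n_H_cl):
    """Hydrogen mass (Msun) of a uniform sphere of radius r_cl_pc (pc)
    and hydrogen number density n_H_cl (cm^-3)."""
    r_cm = r_cl_pc * PC_CM
    volume = 4.0 / 3.0 * math.pi * r_cm**3      # cm^3
    return volume * n_H_cl * M_H_G / MSUN_G

def cloud_temperature(n_H_h, T_h, n_H_cl):
    """Cloud temperature (K) from n_H,cl * T_cl = n_H,h * T_h."""
    return n_H_h * T_h / n_H_cl

print(cloud_hydrogen_mass(500.0, 0.4))    # ~5e6 Msun, as quoted above
print(cloud_temperature(3e-4, 2e6, 0.4))  # 1500 K (Model A)
print(cloud_temperature(5e-4, 2e6, 0.4))  # 2500 K (Model C)
\end{verbatim}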
In order to test the sensitivity of our results to the assumed cooling curve, we tried a model that used a solar-metallicity cooling curve (Model~B). To test the sensitivity to the ambient gas density, we tried a variant with $\log(\ensuremath{Z_\mathrm{cool}}/\ensuremath{Z_{\odot}}) = -0.5$ but a higher ambient density, $\ensuremath{n_\mathrm{H,h}}=5\times10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$ (Model~C).\footnote{We also experimented with a model with $\ensuremath{n_\mathrm{H,h}}=6\times10^{-4}~\ensuremath{\mathrm{cm}^{-3}}$, which is equal to the halo density at the SC's current location (using the \citealt{miller13} halo model). However, we found that the resulting higher ram pressure of the ambient medium braked the high-velocity material too severely, meaning that there was virtually no high-velocity material at angles $\ga$25\degr\ behind the cloud head (cf.\ F16's sight lines lie 14--28\degr\ behind the SC head).\label{fn:HighDensity}} Finally, to test the sensitivity to the simulation resolution, we tried two variants in which the maximum resolution was twice or half that of our reference model (Models~Ahr (high resolution) and Alr (low resolution), respectively). We traced the mixing of cloud and halo material by defining two inert fluids which were advected with the flow, one representing the cloud material and the other the ambient material. The mass fractions of these fluids were initially one and zero, respectively, within the cloud, and vice versa outside the cloud. When postprocessing the simulations, we determined the fraction of the material in each grid cell that originated in the halo, \ensuremath{f_\mathrm{h}}, directly from these fluids' mass fractions. We ran the simulations for 400~Myr. This is several times longer than the time the SC is expected to have spent in the halo, assuming it has stayed on its current trajectory (L08). However, running our simulation for this length of time ensures that the mixing in the model cloud's tail has reached a reasonably steady state (see Section~\ref{sec:Mixing}). It also allows us to quantify the time variability of that mixing, which we use to estimate the uncertainty in the degree of mixing at the current epoch. Our simulations do not include a magnetic field or the possibility that the SC may have a dark matter minihalo, both of which would affect the degree of mixing in the tail. It is not clear how significant these omissions are. Nor do our simulations collide the SC with the Galactic disk \citep{nichols14}, a scenario that \citet{galyardt16} found would destroy the cloud even if the SC has a dark matter minihalo. \subsection{Results} \begin{figure*} \centering \plotone{fig1a.pdf} \caption{Slices through our reference model domain, showing the density in the $xz$ plane at $t=0$, 100, ..., 400~Myr (left to right). The diagonal black lines in the later epochs' plots show model sight lines used to calculate the fraction of the high-velocity material that originated in the halo (Section~\ref{sec:Mixing}). These sight lines are from a vantage point to the right of the model domain, 12.4~kpc from the cloud head, and cross the $z$ axis at angles corresponding to the observational sight lines in Table~\ref{tab:Obs}. \label{fig:SliceA}} \end{figure*} \begin{figure*} \centering \plotone{fig2a.pdf} \caption{Same as Figure~\ref{fig:SliceA}, but for Model~B. 
\label{fig:SliceB}} \end{figure*} \begin{figure*} \centering \plotone{fig3a.pdf} \caption{Comparison of the densities in the $xz$ plane near the origin from Models~A (upper row) and B (lower row), at $t=10$, 20, ..., 90~Myr (left to right). \label{fig:SliceAB}} \end{figure*} \begin{figure*} \centering \plotone{fig4a.pdf} \caption{Same as Figure~\ref{fig:SliceA}, but for Model~C. \label{fig:SliceC}} \end{figure*} Figure~\ref{fig:SliceA} shows slices through the reference model domain, showing the density at 100-Myr intervals. The upward-flowing ambient medium ablates material from the cloud, which mixes with the halo gas and gets stretched out in the $+z$ direction. There is considerable spatial and temporal variation in the structure of the tail. When we compare Model~B (Figure~\ref{fig:SliceB}) with Model~A, we find that the Model~A cloud fragments more than the Model~B cloud. This difference appears to originate in the first $\sim$90~Myr of the clouds' evolution. At $t\sim20$~Myr, a vortex emerges from the back of each cloud, causing the hot ambient medium to be mixed in behind the cloud (Figure~\ref{fig:SliceAB}), forming intermediate-temperature gas. In Model~B, starting from $t\sim60$~Myr, this gas cools and becomes denser than in Model~A, because of Model~B's larger cooling rate. This tends to smooth out the density structure behind the cloud, which impairs the fragmentation of the cloud, compared with Model~A. In Model~C (Figure~\ref{fig:SliceC}), the cloud is pushed further up through the model domain than in Model~A, because of the greater ram pressure of the ambient medium. (At $t=400$~Myr, the cloud head is at $z=6.6$ and 4.1~kpc in Models~C and A, respectively.) The Model~C cloud also fragments less than the Model~A cloud. This result is surprising, as the timescale for the growth of Kelvin-Helmholtz instabilities is expected to be smaller in Model~C, due to the smaller density contrast between the cloud and the ambient gas \citep{chandrasekhar61}. \section{MIXING OF HALO AND CLOUD MATERIAL} \label{sec:Mixing} \begin{deluxetable*}{lcccccccc} \tablecaption{Observation Details and Model Results\label{tab:Obs}} \tablehead{ \colhead{Target} & \colhead{$l$} & \colhead{$b$} & \colhead{Dist. from SC head\tablenotemark{a}} & \colhead{$\log(Z/\ensuremath{Z_{\odot}})$\tablenotemark{b}} & \colhead{Ref.} & \multicolumn{3}{c}{\ensuremath{F_\mathrm{h}}\tablenotemark{c}} \\ \cline{7-9} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(deg)} & & & \colhead{Model A} & \colhead{Model B} & \colhead{Model C} } \startdata Cloud head & 38.6 & $-13.1$ & 0\tablenotemark{d} & $-0.82$ to $-0.36$ & H09 & $0.153 \pm 0.030$ & $0.132 \pm 0.017$ & $0.217 \pm 0.038$ \\ RX J2043.1+0324 & 49.72 & $-22.88$ & 14.13 & $-0.14 \pm 0.20$ & F16 & $0.449 \pm 0.047$ & $0.292 \pm 0.043$ & $0.414 \pm 0.048$ \\ PG 2112+059 & 57.04 & $-28.01$ & 22.49 & $-0.09 \pm 0.36$ & F16 & $0.493 \pm 0.058$ & $0.426 \pm 0.066$ & $0.555 \pm 0.144$ \\ RX J2139.7+0246 & 58.09 & $-35.01$ & 27.82 & $-0.58 \pm 0.25$ & F16 & $0.535 \pm 0.076$ & $0.503 \pm 0.085$ & $0.692 \pm 0.166$ \\ \enddata \tablenotetext{a}{Located at $(l,b)=(38\fdg67,-13\fdg41)$ (L08).} \tablenotetext{b}{SC metallicity, assuming that H09's nitrogen abundance and F16's sulfur abundances correspond directly to the overall metallicity, i.e., $[\mathrm{N}/\mathrm{H}]=[\mathrm{S}/\mathrm{H}]=\log(Z/\ensuremath{Z_{\odot}})$. 
For the F16 measurements we have combined the statistical and systematic uncertainties.} \tablenotetext{c}{Fraction of high-velocity material along the sight line originating in the halo, extracted from our hydrodynamical models.} \tablenotetext{d}{The tabulated coordinates give the pointing direction of the Wisconsin H$\alpha$ Mapper (WHAM), used by H09. Although this direction is not exactly toward the location of the SC head given in L08, this location lies within the WHAM field of view.} \end{deluxetable*} \begin{figure} \centering \plotone{fig5a.pdf} \plotone{fig5b.pdf} \plotone{fig5c.pdf} \caption{The fraction of the high-velocity material originating in the halo, \ensuremath{F_\mathrm{h}}, as a function of angular distance behind the cloud head, extracted from various epochs of Models~A--C (top to bottom; see keys). The vertical dashed lines indicate the location of F16's sight lines relative to the SC head. \label{fig:MixingProfile}} \end{figure} We calculated the proportion of halo material within the high-velocity gas on various sight lines running through our model domain in the following manner. We first defined high-velocity material as that which travels faster than 90~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}\ relative to the ambient medium. We then defined \ensuremath{f_\mathrm{h}}\ for any given cell in the simulational domain as the fraction of the high-velocity material in that cell that originated in the halo. Thus, $1-\ensuremath{f_\mathrm{h}}$ is the fraction of the high-velocity material in the cell that originated in the HVC. For any sight line through the simulational domain, the fraction of the high-velocity material that originated in the halo is \begin{equation} \ensuremath{F_\mathrm{h}} = \frac{\int \ensuremath{f_\mathrm{h}} \rho \, dl}{\int \rho \, dl}, \label{eq:Fhalo} \end{equation} where $\rho$ is the gas density and only high-velocity gas is included in the integrals. We then determined the angle that the simulational domain must be viewed at so as to mimic actual observations of the SC's head and tail. This angle is equal to the angle between the SC's velocity relative to the surrounding co-rotating ISM and the line of sight to the SC's head, which, based on the cloud's position, distance, and velocity from L08, is 130\degr. We determined where the cloud's head is located, when viewed from this angle, by choosing the line of sight that has the greatest high-velocity hydrogen column density, \ensuremath{N_\mathrm{H,HVC}}, from all of the lines of sight that cross the $z$ axis of our domain at 130\degr. This was done at every model epoch in order to determine the location of the cloud's head as a function of time.\footnote{At some epochs, dense regions would form temporarily in the tail of the model HVC, meaning that the maximum of \ensuremath{N_\mathrm{H,HVC}}\ was several kiloparsecs behind the front of the model HVC. Therefore, in order to prevent one of these temporary dense regions in the model cloud's tail being defined as the cloud head, we restricted our search for the maximum of \ensuremath{N_\mathrm{H,HVC}}\ to within 2~kpc of the cloud front.} We examined a series of sight lines that cross through the $z$ axis of the domain behind the cloud's head. The angles between each of these sight lines and the sight line through the cloud's head were determined from the assumption that the observer was on Earth, 12.4~kpc (L08) from the SC (see Figures~\ref{fig:SliceA}, \ref{fig:SliceB}, and \ref{fig:SliceC}). 
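To make the evaluation of Equation~(\ref{eq:Fhalo}) concrete, the following minimal Python sketch computes \ensuremath{F_\mathrm{h}}\ for a discretely sampled sight line (an illustration with hypothetical array names, not the actual post-processing code): only samples faster than the 90~\ensuremath{\mathrm{km}\ \mathrm{s}^{-1}}\ cut-off contribute, and \ensuremath{f_\mathrm{h}}\ is weighted by the gas density.
\begin{verbatim}
import numpy as np

def sightline_F_h(rho, v_rel, f_h, v_cut=90.0):
    """Fraction of the high-velocity material on a sight line that
    originated in the halo (Equation 1), for uniformly spaced samples
    so that the path-length element dl cancels.

    rho   : gas density in each sample (arbitrary units)
    v_rel : speed relative to the ambient medium in each sample (km/s)
    f_h   : halo-tracer mass fraction in each sample
    """
    hv = v_rel > v_cut                 # high-velocity samples only
    if not np.any(hv):
        return np.nan                  # no high-velocity gas here
    return np.sum(f_h[hv] * rho[hv]) / np.sum(rho[hv])

# toy example: three samples, of which only the last two are high-velocity
rho   = np.array([1.0, 0.5, 0.2])
v_rel = np.array([40.0, 120.0, 150.0])
f_h   = np.array([0.9, 0.3, 0.6])
print(sightline_F_h(rho, v_rel, f_h))  # (0.3*0.5 + 0.6*0.2)/0.7 = 0.39
\end{verbatim}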
Figure~\ref{fig:MixingProfile}(a) shows \ensuremath{F_\mathrm{h}}\ as a function of this angle for Model~A at various epochs. For angles $\la$30\degr, \ensuremath{F_\mathrm{h}}\ reaches a reasonably steady state by $t=300$~Myr (i.e., the variation of \ensuremath{F_\mathrm{h}}\ from epoch to epoch is relatively small compared to earlier epochs). However, there is still some time variation of the \ensuremath{F_\mathrm{h}}\ profiles due to the fact that the flow in the model HVC's tail is not steady, and for the last few tens of megayears of the simulation, \ensuremath{F_\mathrm{h}}\ is drifting toward larger values for angles $\la$15\degr. For Models~B and C (Figures~\ref{fig:MixingProfile}(b) and (c), respectively), we find that \ensuremath{F_\mathrm{h}}\ also reaches a reasonably steady state by around $t=300$~Myr, although again there is some time variation of the \ensuremath{F_\mathrm{h}}\ profiles. In addition, the Model~C profiles exhibit gaps at some angles, meaning that there is no material above the high-velocity cut-off along the sight line at those particular angles. These gaps in the high-velocity material distribution are due to the higher-density ambient medium having a stronger braking effect on the cloud material than in the other models (this braking effect was noted in footnote~\ref{fn:HighDensity} for a model with an even higher ambient density, for which the effect was more severe than in Model~C). \begin{figure} \centering \plotone{fig6a.pdf} \caption{\ensuremath{F_\mathrm{h}}\ sampled every 1~Myr and averaged over $t=300$--400~Myr from Models~A, B, and C (red, blue, and green, respectively). The colored bands indicate the standard deviation. The vertical dashed lines indicate the location of F16's sight lines relative to the SC head. \label{fig:AverageProfile}} \end{figure} Figure~\ref{fig:AverageProfile} shows \ensuremath{F_\mathrm{h}}\ averaged over multiple epochs. Within the epoch-to-epoch variations, these averaged profiles are similar for Models~A and B at the cloud head and at angles $\ga$20\degr\ behind the cloud head. The difference between the models at intermediate angles is because the Model~A cloud fragments more than the Model~B cloud (as noted above), leading to more rapid mixing in the first few kiloparsecs behind the cloud head. However, other than in the first few kiloparsecs behind the cloud head, the mixing results are not very sensitive to the assumed cooling rate. In contrast, the averaged profiles from Models~A and C are similar at all angles, though this is in part due to the large time variation of the Model~C profiles. These results suggest that the extent of mixing is not very sensitive to the assumed ambient density. \begin{figure} \centering \plotone{fig7a.pdf} \caption{Similar to Figure~\ref{fig:AverageProfile}, but comparing Models~Alr, A, and Ahr, which have maximum resolutions of 20, 10, and 5~pc, respectively (see Table~\ref{tab:ModelParameters}). \label{fig:CompareResolution}} \end{figure} \begin{figure*} \centering \plottwo{fig8a.pdf}{fig8b.pdf} \plottwo{fig8c.pdf}{fig8d.pdf} \caption{Observed SC metallicities (triangle: H09; circles: F16) plotted against the values of \ensuremath{F_\mathrm{h}}\ that were derived from Models A--C and Ahr ((a)--(d), respectively) for the observed angular separations between the observed directions and the cloud's head. The error bars indicate $1\sigma$ uncertainties. 
The various curves show the relationships between \ensuremath{F_\mathrm{h}}\ and the metallicity of high-velocity material that would be expected to be observed for different values of \ensuremath{Z_\mathrm{h}}\ and \ensuremath{Z_\mathrm{cl}}\ (relative to solar), calculated using Equation~(\ref{eq:Zobs}). \label{fig:MetalvMixing}} \end{figure*} Figure~\ref{fig:CompareResolution} compares the averaged profiles calculated from versions of our reference model with different maximum resolutions. In general, the low- and high-resolution versions of the model (Alr and Ahr, respectively) yield similar results to each other. The standard-resolution version also yields similar results, except for $\sim$7\degr--19\degr\ behind the cloud head. However, when we compare our model predictions with observations in the following section, we find that this discrepancy does not affect our conclusions. We therefore conclude that a maximum resolution of 10~pc (our standard resolution) is adequate for our purposes. \section{COMPARISON WITH OBSERVATIONS} \label{sec:Comparison} The SC metallicity measurements from H09 and F16 are summarized in Table~\ref{tab:Obs}. In addition, we used the values shown in Figure~\ref{fig:AverageProfile} to estimate \ensuremath{F_\mathrm{h}}\ (the fraction of the high-velocity material that originated in the halo) for synthetic sight lines that cross through the cloud as far behind the SC head and at the same angles as each of the observed sight lines do. These values are shown in the final three columns of Table~\ref{tab:Obs}, for each of our three models. The tabulated uncertainties on \ensuremath{F_\mathrm{h}}\ indicate the one standard deviation variation in the model results. Figure~\ref{fig:MetalvMixing} plots the metallicities observed by H09 and F16 versus the values of \ensuremath{F_\mathrm{h}}\ we calculated from our models for equivalent lines of sight through the model SC. Overplotted are the metallicities of high-velocity material that would be observed (\ensuremath{Z_\mathrm{obs}}) as a function of \ensuremath{F_\mathrm{h}}, given various possible values of intrinsic halo and cloud metallicities (\ensuremath{Z_\mathrm{h}}\ and \ensuremath{Z_\mathrm{cl}}, respectively) and their relationship \begin{equation} \ensuremath{Z_\mathrm{obs}} = \ensuremath{F_\mathrm{h}} \ensuremath{Z_\mathrm{h}} + (1 - \ensuremath{F_\mathrm{h}}) \ensuremath{Z_\mathrm{cl}}. \label{eq:Zobs} \end{equation} In general, the predictions that assume a metal-rich halo ($\ensuremath{Z_\mathrm{h}}\ga0.7$) and a metal-poor initial cloud ($\ensuremath{Z_\mathrm{cl}}\la0.3$) agree with the metallicity measurement for the head and two of the measurements for the tail. However, the metallicity measurement from the sight line furthest down the tail (RX~J2139.7+0246, the rightmost data point in Figure~\ref{fig:MetalvMixing}) is overpredicted. It is possible that the metallicity along the RX~J2139.7+0246 sight line is an outlier, as the high-velocity \ion{H}{1} column density on this sight line is 4--5 times those on F16's other two sight lines, despite its being the furthest from the SC head. It is also possible that this sight line passes through a relatively pristine (and therefore low-metallicity, in the scenario considered here) blob of cloud material that was stripped from the head of the cloud without undergoing significant mixing. Such pristine blobs are sometimes seen in the tails of model HVCs (J.~A. Gritton, 2016, private communication).
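As an aside, for an assumed halo metallicity Equation~(\ref{eq:Zobs}) can be inverted to give the initial cloud metallicity implied by each measurement, $\ensuremath{Z_\mathrm{cl}}=(\ensuremath{Z_\mathrm{obs}}-\ensuremath{F_\mathrm{h}}\ensuremath{Z_\mathrm{h}})/(1-\ensuremath{F_\mathrm{h}})$. A minimal Python sketch using the cloud-head measurement and the Model~A mixing fraction from Table~\ref{tab:Obs} (illustrative values only, not a replacement for the full comparison in Figure~\ref{fig:MetalvMixing}):
\begin{verbatim}
def infer_Z_cl(Z_obs, F_h, Z_h):
    """Invert Z_obs = F_h*Z_h + (1 - F_h)*Z_cl for the initial cloud
    metallicity (all metallicities relative to solar)."""
    return (Z_obs - F_h * Z_h) / (1.0 - F_h)

# Cloud head: Z_obs ~ 0.15-0.44 (H09), F_h = 0.153 (Model A),
# and an assumed halo metallicity Z_h = 0.7
for Z_obs in (0.15, 0.30, 0.44):
    print(Z_obs, round(infer_Z_cl(Z_obs, F_h=0.153, Z_h=0.7), 2))
# gives Z_cl ~ 0.05, 0.23, 0.39, i.e. a metal-poor initial cloud
\end{verbatim}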
Alternatively, the nonmonotonic trend in observed metallicities may indicate spatial variation in the halo or cloud metallicities. Excluding the RX~J2139.7+0246 sight line, the remaining three data points imply $0.1\la\ensuremath{Z_\mathrm{cl}}\la0.3$ and $0.7\la\ensuremath{Z_\mathrm{h}}\la1.0$. If the halo metallicity is near solar, then $\ensuremath{Z_\mathrm{cl}}\la0.3$ should be preferred over $\ensuremath{Z_\mathrm{cl}}\sim0.5$ (the average of the observed metallicities on the four sight lines), as the latter would result in an even higher metallicity along the sight line furthest along the tail. The predictions in the scenario that assumes a metal-poor halo ($\ensuremath{Z_\mathrm{h}}\sim0.1$) and a metal-rich initial cloud ($\ensuremath{Z_\mathrm{cl}}\ga0.5$) also agree with some but not all of the data points. In particular, if the cloud metallicity is large enough to match the measured metallicity on the RX~J2043.1+0324 and PG~2112+059 sight lines (the middle two data points in Figure~\ref{fig:MetalvMixing}), then the metallicity toward the cloud head (the leftmost data point) is overpredicted. However, a metal-poor halo is inconsistent with independent measurements ($\ensuremath{Z_\mathrm{h}}\ga0.6$; \citealt{miller16}). Despite the differences between the various models apparent in Figures~\ref{fig:AverageProfile} and \ref{fig:CompareResolution}, the above conclusions are the same for all models (hence our comment in Section~\ref{sec:Mixing} that our maximum model resolution of 10~pc appears to be adequate). \section{DISCUSSION AND CONCLUSIONS} \label{sec:Discussion} Our three-dimensional hydrodynamical simulations of an SC-like cloud passing through halo gas reveal that the tail of the cloud entrains more halo gas than does the head of the cloud. This naturally results in an increase in metallicity from cloud head to tail if the halo has greater metallicity than the cloud (and the reverse if the halo has lesser metallicity than the cloud). The predictions from both scenarios agree with the observed metallicities on some but not all of the observed sight lines; however, the former scenario is preferred as it is in better accord with the findings of \citet{miller16} that $\ensuremath{Z_\mathrm{h}}\ga0.6$. We explored models with different cooling curves, to represent the uncertainty in the halo and cloud metallicities (our model does not calculate the metallicity dependence of the cooling self-consistently). We also explored models with different ambient densities, representing the change in the ambient density as the SC moves through the halo. While the general conclusions derived from the various models are the same, the differences between the models contribute to the uncertainty in the degree of mixing between the halo and the cloud (as does the time variation within each model). As a result, the uncertainties on the inferred halo and initial cloud metallicities are large. Our models do not include magnetic fields. Magnetohydrodynamic (MHD) simulations suggest that magnetic fields suppress mixing between cloud and halo material \citep{mccourt15}. This suppression of mixing would tend to shift the model predictions in Figure~\ref{fig:MetalvMixing} to the left, meaning that the observed metallicity, \ensuremath{Z_\mathrm{obs}}, would vary more rapidly with respect to the degree of mixing, \ensuremath{F_\mathrm{h}}.
Since, from Equation~(\ref{eq:Zobs}), $d\ensuremath{Z_\mathrm{obs}}/d\ensuremath{F_\mathrm{h}}=\ensuremath{Z_\mathrm{h}}-\ensuremath{Z_\mathrm{cl}}$, this in turn means that the difference between the halo and initial cloud metallicities inferred from MHD simulations would be larger than from hydrodynamical simulations such as those presented here. Additional observations could aid in determining the metallicity gradient along the SC. Given the current state of knowledge, we conclude that due to mixing with the halo, the SC's true original metallicity may be less than is implied by H09's and F16's metallicity measurements. Furthermore its uncertainty is likely greater than is implied by the uncertainties on H09's and F16's metallicity measurements. Consequently, the ultimate origin of the SC remains uncertain. \acknowledgements We wish to thank Jason Galyardt for assistance with programming and F.\ Jay Lockman, Andrew Fox, Alex Hill, J.\ Chris Howk, and Nicolas Lehner for interesting conversations about the Smith Cloud and other HVCs. We also thank the anonymous referee whose comments helped improve this paper. The software used in this work was developed in part by the DOE NNSA ASC- and DOE Office of Science ASCR-supported Flash Center for Computational Science at the University of Chicago. The simulations were performed at the Georgia Advanced Computing Resource Center (GACRC) of the University of Georgia. We thank Shan-Ho Tsai for her invaluable technical support. We acknowledge use of the R and yt software packages \citep{R,turk11}. This research was funded by NASA grant NNX13AJ80G, awarded through the Astrophysics Theory Program.
1,116,691,501,201
arxiv
\section{Introduction} \label{intro} Pillars and globules are probes of the dramatic physical and structural changes produced as a result of the erosion of the neutral gas at the interface between \hii-regions and molecular clouds by the ionising radiation from OB-stars. Expanding ionisation fronts encounter pre-existing dense condensations in the molecular medium and overrun them, leaving them either isolated inside the \hii~region (globules) or connected to the outside by a bridge of molecular gas (pillars). Recent simulations by \citet{tre12} show that the curvature of the cloud surface plays a decisive role in this process (destruction or formation of a pillar/globule). Observations of some of these globules and pillars have shown them to be active star-forming sites with bright embedded sources \citep{com99,sug02}. Recent {\it Spitzer} images of the Cygnus~X region (cf. Fig.~\ref{overview}), a vast molecular complex with intense star-forming activity (see \citet{rei08} for an overview), display abundant examples of globules and pillars formed as the molecular cloud is progressively destroyed by the O stars of the nearby Cygnus OB2 association \citep[e.g.][]{kno00,com02,com12,wri14}. Detailed observations of far-infrared lines of \CII , \OI\ and CO J=11$\to$10 observed with SOFIA\footnote{Stratospheric Observatory for Far Infrared Astronomy, https://www.sofia.usra.edu/} \citep{sch12} and {\it Herschel} \citep{sch16} have revealed the peculiar nature of one globule in Cygnus X, \object{IRAS~20319+3958} (hereafter referred to as 'the globule'). The SOFIA study indicates that the globule is highly dynamic, with velocity features corresponding to rotation and outflowing gas. With the {\sl Herschel} data, the physical properties of the globule were determined, giving a mass of $\sim$240 M$_\odot$ and a temperature of $\sim$20 K. Most interesting, however, is the finding that the observed UV-flux is of the order of 550 G$_\circ$ (with G$_{\circ}$ in units of the Habing field 2.7$\times$10$^{-3}$ erg cm$^{-2}$ s$^{-1}$), much higher than that found for all other globules in the Cygnus X region. \begin{figure*}[ht] \begin{center} \hspace{-0.5cm} \includegraphics [width=14cm, angle={0}]{cygnus_overview_ir-paper.eps} \caption []{{\it Spitzer}\protect\footnotemark\ IRAC 3.6 $\mu$m image of the central Cygnus X region including the globule (indicated by a dashed circle and located at $\alpha_{2000}$ = 20$^h$33$^m$49.3$^s$ and $\delta_{2000}$ = 40$^\circ$08$'$52$''$). The most massive stars of the Cyg OB2 association are indicated with grey stars and the approximate center \citep[e.g.][]{wri15} with a white cross. The \hii\ regions DR15, 18, 20, and 22 are indicated.} \label{overview} \end{center} \end{figure*} \footnotetext{Data taken within the Cygnus-X legacy survey, see https://www.cfa.harvard.edu/cygnusX/.} The globule hosts a small, moderately embedded cluster. \citet{kro06} and \citet{kum06} reported nine cluster members within a projected radius of 0.53 pc, with an estimated stellar mass of 17 M$_\odot$. Here we adopt a distance of 1.4 kpc for the globule, which is the one determined for structures associated with the Cygnus~X complex \citep{ryg12} from parallax observations. A previous study by \citet{coh89}, motivated by mid-infrared emission features ascribed to PAHs, led to the identification of two visible stars as the source of excitation, for which those authors estimated mid-B spectral types.
A question not answered with the observations presented in \citet{sch12} is whether the current physical conditions of the gas in the globule are mainly determined by its external illumination by the Cyg OB2 cluster, or whether the brightest stars of the \object{IRAS~20319+3958} cluster produce their own photon dominated region (PDR) and \hii\ region. One of the objectives of this paper is to answer this question and to reveal the stellar content of the globule. Previous studies \citep{sch12,sch16} have proposed that the globule is one of the rare examples of massive/intermediate-mass star formation in globules found in Cygnus~X. Our finding is that the globule hosts a group of more than 30 very young stars, of which at least two are early B-type stars and likely the origin of the ionised gas found in the centre of the stellar aggregate. To this end, we obtained and present here deep near-infrared imaging of the globule complemented with visible imaging, and with spectroscopy in the visible and infrared of selected cluster members. The combination of near-infrared and existing {\it Spitzer} observations allows us to obtain a much more complete census of cluster members than previously available and to study in detail the evolutionary state of the cluster and some of its most outstanding members. At the same time, the detailed views of the respective distributions of the PDR and the ionised gas, together with the diagnostics provided by selected emission lines, yield further insight into the interaction of the cluster members with the various phases of the interstellar gas in the globule. This paper (the second one of a series) is part of a larger study on understanding pillar and globule formation, and the conditions under which stars can form within them (PI: N. Schneider). The project draws together far-infrared imaging and spectroscopy from {\it Herschel}, {\it Spitzer}, and SOFIA, and ground-based millimeter molecular-line mapping\footnote{https://hera.ph1.uni-koeln.de/$\sim$nschneid/}. \section{Observations and data reduction} \label{obs} \subsection{Near-IR imaging} \subsubsection{Omega2000 observations} The $J$, $H$, $K_S$, Br$_{\gamma}$ and H$_2$ imaging was obtained in service mode with the Omega2000 infrared camera on the 3.5m telescope in Calar Alto on 29th and 30th of June, 2012. The Hawaii-2 array (2048 $\times$ 2048 $\times$ 18 $\mu$m) with a pixel resolution of 0$\farcs$449 gives a total field-of-view (FoV) of 15.4' $\times$ 15.4'. The imaging was done with a dither pattern consisting of 15 positions, using an integration time of approximately 1 minute on each position. The total integration time was 15 minutes for the broad-band filters $J$, $H$ and $K_S$ and 1 hour for the Br$_{\gamma}$ and H$_2$ narrowband filters. Details are given in Table~\ref{tab-obs}. The images have been reduced using the IRAF package and a set of our own scripts. Bad pixels were treated in all images by interpolation, using a bad pixel mask provided by the observatory. Flat field correction was made using differential (lamp ON-OFF) dome flats for each filter. For each filter, 15 dithered images, obtained within approximately 20 minutes, were scaled and median filtered to make a sky image for subtraction, after re-scaling. The images were shifted and combined. The plate solution for each final image was found using the 2MASS catalogue \citep{skr06} as a reference, and all images were placed on the same scale. \begin{table}[t] \caption{Omega2000 observations.
Each of the N$_{\rm im}$ dithered images is a co-addition of N$_{\rm coadd}$ individual integrations of T$_{\rm dit}$ single exposure time.} \begin{tabular}{lcrrrrrrr} \hline Band & $\lambda_c$ & T$_{tot}$ & N$_{im}$ & T$_{dit}$ & N$_{coadd}$ & FWHM & Airmass \\ \hline & ($\mu$m) & (sec) & & (sec) & & ('') & \\ \hline $J$ & 1.209 & 900 & 15 & 6 & 10 & 2.0 & 1.155 \\ $H$ & 1.647 & 900 & 15 & 3 & 20 & 1.6 & 1.110 \\ $K_S$ & 2.151 & 900 & 15 & 2 & 30 & 1.7 & 1.076 \\ Br$_{\gamma}$ & 2.166 & 3600 & 60 & 20 & 3 & 1.2 & 1.007 \\ H$_2$ & 2.122 & 3600 & 60 & 20 & 3 & 1.8 & 1.412 \\ \hline \end{tabular} \label{tab-obs} \end{table} Due to the chosen dither pattern, the region of interest for our study has an uneven coverage. The deepest part is a combination of 15 images while the southernmost area is made from only 3 images, and therefore the noise varies spatially across the image. Point source detection was done subregion by subregion using {\it daofind} in the IRAF {\it digiphot} package and with a threshold of 5 sigma and a conservative limit on $\sigma_{sky}$ in each subregion. For this reason our estimate of completeness is uncertain. PSF photometry was made on all sources in each filter, using a penny1 model of the PSF that varies quadratically over the FoV. A total of 12921 stars were measured in the $H$-band which results in the deepest image. Using the plate solution based on 2MASS positions, RA/DEC coordinates were calculated for all sources and the table was cross-correlated with 2MASS, finding a total of 2215 stars in common. The derived positions of the Omega2000 sources are estimated to be accurate to approximately 0$\farcs$1. For the photometric calibration, we use 2MASS stars with quality flag `AAA' (837 stars). The offsets between Omega2000 and 2MASS magnitudes were examined for trends in colours, as well as spatial trends over the detector. There is a clear dependence on Y position for the $J$-band, and we used an empirical correction as a function of Y-pixel by a linear fit to the data ($Jc = J - 4.9 \times 10^{-5} \times$~Y-pixel). In $H$ and $K_S$ no clear spatial dependence was found. Colour trends were then checked by plotting magnitude and colour differences versus the $(J-H)_{\rm 2MASS}$ colour index over a range from 0 to 3.3 magnitudes, giving the following results: {\small \begin{eqnarray} \Delta (H-K_S) = 0.004 \pm 0.007 - 0.001 \pm 0.005 \times (J-H)_{2MASS} \\ \Delta (J-H) = -0.026 \pm 0.006 + 0.020 \pm 0.005 \times(J-H)_{2MASS}, \end{eqnarray} } where $\Delta$ indicates the difference (Omega2000 - 2MASS) for the index in question. We found these sufficiently small to safely neglect any colour corrections, since neither the foreground nor the internal extinction cause strong enough reddening on the sources of interest. The median offsets between 2MASS and Omega2000 magnitudes were then used to calibrate our $JHK_S$ magnitudes. The scatter in the offsets is large, however, as expected due to the different spatial resolution. In order to estimate the accuracy of the calibration we select sources with $K_S < 12$~mag that have no IR excess and are not blended with other sources, limiting the number to approximately 140 stars. On this sample we use a 2-sigma clipping and find standard deviations around the mean offset of 0.043, 0.024 and 0.037~mag in $J$, $H$ and $K_S$, respectively. We take this as an estimate of our calibration error and add it in quadrature to the output error from the PSF photometry.
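The calibration bookkeeping described above amounts to a linear detector-position correction in $J$ followed by a per-band zero-point, with the calibration scatter added in quadrature to the PSF-fitting error. A minimal Python sketch (illustrative only; the zero-point value below is a placeholder, and the actual reduction was done with IRAF scripts):
\begin{verbatim}
import math

def calibrate_J(J_instr, y_pixel, zero_point):
    """Apply the empirical detector-position correction,
    Jc = J - 4.9e-5 * Y-pixel, followed by the 2MASS-based
    zero-point offset (zero_point is a placeholder value)."""
    return J_instr - 4.9e-5 * y_pixel + zero_point

def total_mag_error(psf_err, calib_err):
    """Combine the PSF-fitting and calibration errors in quadrature."""
    return math.sqrt(psf_err**2 + calib_err**2)

# e.g. a star measured at Y = 1000 pixels with a 0.02 mag PSF error in J,
# using the 0.043 mag J-band calibration scatter quoted above
print(calibrate_J(J_instr=15.230, y_pixel=1000.0, zero_point=0.0))
print(total_mag_error(psf_err=0.02, calib_err=0.043))   # ~0.047 mag
\end{verbatim}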
In the source table, we have excluded Omega2000 magnitudes for sources brighter than approximately 10.4, 10.0 and 8.8 mag at $J$, $H$ and $K_S$, respectively, since these bright stars enter the non-linear range of the Omega2000 array (25000 ADU) for our choices of individual exposure times. This means that in our further analysis, we use the 2MASS magnitudes and, where available, NOTCam magnitudes for these sources. In the deepest part of the field, the $5\sigma$ magnitude limit, defined as S/N=5 in the final photometry, is reached at J = 20.5, H = 19.7, and K$_S$ = 18.5 mag. We make an approximate and conservative estimate of the completeness to be 18.5, 17.5 and 16.5 mag in $J$, $H$ and $K_S$, respectively, based on the detection histograms, but note that the observations have a different depth across the field, making such an estimate uncertain. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=9cm]{glob_khj_rgb_o2k.eps} \caption [] {JHK$_S$ RGB combined image of a 15$'\times$15$'$ field centred on the globule at $\alpha_{2000}$ = 20$^h$33$^m$49.3$^s$ and $\delta_{2000}$ = 40$^\circ$08$'$52$''$ obtained with Omega2000. North up and east left.} \label{o2kmap} \end{center} \end{figure} \subsubsection{NOTCam observations} At the Nordic Optical Telescope (NOT), La Palma, the NOT's near-IR camera and spectrograph NOTCam \citep{abb00,dju10} was used in its high-resolution mode, with a pixel scale of 0$\farcs$078/pix and a FoV of 80$\arcsec \times 80\arcsec$, on 6th September, 2012, to image the central part of the globule in photometric and good seeing conditions. To avoid saturation, we used 3.6 seconds individual integration time and dithered the telescope in a 3x3 grid. Discarding a few individual images in the reductions, we obtained total integration times of 21.6, 25.2 and 28.8 seconds in $J$, $H$ and $K_S$, respectively. All raw images were first corrected for non-linearity using the pixel-by-pixel correction coefficients available from the NOTCam web pages\footnote{For details on the calibration of NOTCam see http://www.not.iac.es/instruments/notcam/}. After this the images were bad-pixel corrected, flat-field corrected using differential skyflats, and sky-subtracted using the frames themselves to estimate the sky. The shifted images were combined, resulting in a FWHM of the PSF in the combined images of approximately 0$\farcs$4. PSF fitting photometry was made on the point sources detected, and magnitudes were found for 177 stars in $H$ and $K$ and 64 stars in $J$. Selecting stars with quality flag `AAA' from the 2MASS catalogue, excluding multiples resolved by NOTCam, results in nine stars that are useful to calibrate magnitudes and positions. The positional accuracy with respect to 2MASS positions is 0$\farcs$1, and the standard deviation of the difference magnitudes m$_{\rm NOT}$ - m$_{\rm 2MASS}$ is 0.08, 0.06 and 0.06 mag in $J$, $H$ and $K_S$, respectively. This calibration error is added in quadrature to the individual errors in the PSF magnitudes. The completeness limit is estimated to be 18, 18 and 17 mag for $J$, $H$ and $K_S$, respectively. \subsection{Visible imaging} Images centered on \object{IRAS~20319+3958} were obtained through the $VR_CI_C$ filters using the CAMELOT camera at the IAC80 telescope at the Teide Observatory on the night of 20th June, 2012. The camera covers a FoV of 10$\farcm$1 $\times$ 10$\farcm$1 on the sky. The exposure times were 600s, 300s and 120s through the $V$, $R_C$ and $I_C$ filters, respectively. 
Flux calibration was achieved through observations of the nearby cluster NGC~6910 in all three filters with exposure times of 60s ($V$), 30s ($R_C$) and 10s ($I_C$) when the cluster was near meridian transit, followed by observations with the same exposure times centered on the globule. The deeper images were obtained immediately before those of NGC~6910, thus ensuring that all observations were taken at virtually the same air mass. Standard image data reduction was carried out using IRAF, and PSF instrumental photometry on the resulting images was carried out with dedicated IRAF scripts using the layered DAOPHOT package \citep{ste87}. Astrometry was carried out using stars in common with the UCAC4 catalogue \citep{zac13} and used to cross-match the sources detected in the CAMELOT images with those detected in the infrared observations. Since no dedicated $VR_CI_C$ photometry of NGC~6910 seems to exist in the literature, the CCD $Vr'i'$ photometry of stars in the field of NGC~6910 included in the UCAC4 catalogue was used to determine photometric zero-points in each filter using the transformation equations between the $Vr'i'$ and the $VR_CI_C$ photometric systems given by \citet{fuk96}: \begin{eqnarray} R_C = r'-0.19(V-r')-0.15 ,\\ I_C = R_C - 0.23 - 1.02 (r'-i') .\end{eqnarray} We then transferred the zero-points obtained on the NGC~6910 field to the shallow exposures of the globule field. We preferred this two-step approach rather than the direct calibration using the UCAC4 $Vr'i'$ photometry of stars in the globule field because the former field is more crowded, thus allowing us to obtain a more robust determination of the photometric zero-points. The estimated $5\sigma$ detection limits of the deep exposures of the globule are $V \simeq 20.0$, $R_C \simeq 19.5$, and $I_C \simeq 18.0$. Saturation in the deep images was reached at $V \simeq 15.0$, $R_C \simeq 14.0$, and $I_C \simeq 14.0$. The photometry of the stars brighter than these limits was taken from the shallow images of the field used for photometric calibration. \subsection{Optical spectroscopy} A set of optical spectra of the two brightest stars (referred to as star~$A$ and star~$B$ in Sect.~\ref{results}) that appear projected on the globule's head at visible wavelengths was obtained on the night of 25th August, 2012, using the TWIN spectrograph at the Calar Alto 3.6m telescope. A dichroic mirror was used allowing us the simultaneous use of the blue and red arms of TWIN, resulting in a continuous spectral coverage in the range 4000~\AA \ to 7500~\AA. The gratings used yielded a resolving power of $R = 2400$ in both the blue and red arms, using a common entrance slit of 1$\farcs$2 width. The slit was oriented at a fixed position angle so as to include both stars in a single setting. The slit length, of 6$'$, greatly exceeds the 30$''$ of separation between those stars. Slit losses due to differential refraction are minimal due to the fact that the observations were obtained when the target region was near meridian transit, a few degrees from the zenith, at a maximum air mass never exceeding 1.12. Eight sets of spectra with an exposure time of 1800s each were obtained in each arm. The individual science frames were bias-subtracted and divided by a flat field obtained with the telescope dome illuminated by a continuum lamp the morning after the observations. Wavelength calibration frames using a thorium-argon lamp were obtained immediately before and after the set of observations. 
One-dimensional spectra of each of the two bright stars were extracted from each frame using a fixed-width aperture centered on the spectral trace of each of them. In addition, a spectrum of the brightest portion of the \HII\ region, located approximately 9$''$ East of the bright star near the center of the globule's head, was also extracted. Background subtraction from the stellar spectra was carried out by averaging the spectrum extracted from two apertures adjacent to each star, and the same was done for the spectrum of the nebula. In this way, the spectrum of the all-pervading, structureless \HII\ nebulosity present all along the slit is subtracted from the extracted spectra of both stars and of the nebula. \subsection{$K$-band spectroscopy} $K$-band spectra were obtained with NOTCam for the three brightest young stellar object (YSO) candidates in the globule. Two different spectral resolutions were used, both applying grism~\#1 and the K-band filter \#208. In low-resolution mode the wide-field camera and the 128~$\mu$m slit (a long slit of width 0.6$''$ or 2.6 pixels) cover the range from 1.95 to 2.37 $\mu$m with a dispersion of 4.1 \AA /pix, giving a resolving power $R = \lambda /\Delta \lambda$ of approximately 2100. In medium-resolution mode the high-resolution camera lens together with the 44 $\mu$m slit (0.2$''$ or 2.6 pixels wide) covers the range from 2.07 to 2.20 $\mu$m with a dispersion of 1.4 \AA /pix and a resolving power $R \sim$ 5500. The spectra were obtained by dithering along the slit in an A-B-B-A pattern, reading out the array non-destructively while sampling up the ramp. Argon and halogen lamp spectra were obtained while pointing at the target in order to minimise effects due to instrument flexure and to improve fringe correction. To correct the spectra for atmospheric features we used HD199373 (F5 V) and HIP99346 (F0 V) as telluric standards. \begin{table}[t] \caption{NOTCam $K$-band spectra listed with date, target name, resolving power, exposure time, seeing conditions and spatial pixel resolution.} \begin{tabular}{llrrrl} \hline Date & Star & $\lambda /\Delta \lambda$ & T$_{\rm exp}$ & FWHM & Pixel scale \\ \hline 22 Jul 2013 & $A$ & 2100 & 120s $\times$ 8 & 0.5$''$ & 0.234$''$ \\ 02 Aug 2015 & $A$ & 5500 & 300s $\times$ 4 & 0.5$''$ & 0.078$''$ \\ 05 Oct 2014 & $B$ & 2100 & 60s $\times$ 8 & 0.6$''$ & 0.234$''$ \\ 22 Aug 2014 & $C$ & 2100 & 300s $\times$ 4 & 0.7$''$ & 0.234$''$ \\ \hline \end{tabular} \label{tab-kspec} \end{table} The 0$\farcs$5 separation double star~$A$ was first observed in good seeing in low-resolution mode with the slit aligned to include both components, but it was difficult to separately extract the two spectra. The medium-resolution spectrum, however, which has three-times higher spatial resolution, resolved both components $A_N$ and $A_S$ well. The 1$\farcs$4 separation double star~$C$ (see Sect.~\ref{results}) was easily spatially resolved in the low-resolution mode, and star~$B$ is apparently single. Slit positions are overlaid on a high-resolution NOTCam $K$-band image in Fig.~\ref{slitimage}. \subsection{Spitzer IRAC data} Mid-IR photometry was obtained from the archive {\it Spitzer} post-BCD images in the four IRAC bands at 3.6, 4.5, 5.8 and 8.0 $\mu$m in the 15$'\times$15$'$ field mapped in JHK$_S$ with Omega2000. The point source photometry was done through the PSF fitting procedure as described in \citet{com11}. The flux-to-magnitude conversion was made using the fluxes for a zeroth magnitude star as given in the IRAC and MIPS web pages.
The catalogue of IRAC photometry was cross-correlated with the Omega2000 table using a tolerance of 1$\farcs$3 in the matching of RA and DEC coordinates. We find a median offset of 0$\farcs$22 in RA and 0$\farcs$04 in DEC, and a total of 3476 sources are correlated (compared to 2119 correlated with 2MASS). The minimum magnitudes for point sources before entering saturation (for a frame-time of 12s) were found to be 9.8, 9.3, 6.7 and 6.8 mag for the bands 3.6, 4.5, 5.8 and 8.0 $\mu$m, respectively. Sources brighter than this are flagged as upper magnitude limits. The 5$\sigma$ point source detection limits are found at 15.3 mag, 14.8 mag, 12.7 mag and 11.7 mag for the bands 3.6, 4.5, 5.8 and 8.0 $\mu$m, respectively. \section{Results and analysis} \label{results} \subsection{YSO classification} \label{irex} We analyse the Omega2000 $JHK_S$ photometry and the {\it Spitzer}/IRAC photometry in a 15$'\times$15$'$ field centred on the globule at $\alpha_{2000}$ = 20$^h$33$^m$49.3$^s$ and $\delta_{2000}$ = 40$^\circ$08$'$52$''$ and shown in Fig.~\ref{o2kmap}. The IRAC photometry allows us to use well-established criteria to select Young Stellar Objects (YSOs) and classify them on the basis of their spectral energy distribution (SED) in the 3.6~$\mu$m to 8~$\mu$m region. Additional near-IR $JHK_S$ photometry extends the SEDs towards shorter wavelengths and helps to characterise fainter YSOs that are not detected in all IRAC bands because of lower spatial resolution and sensitivity. The near to mid-IR photometric analysis is based on detecting IR excesses owing to circumstellar dust that absorbs and re-emits in the infrared. The slope of the SED from near to mid-IR, the $\alpha$ index, defined as the slope of $\log (\lambda F_{\lambda})$ vs $\log (\lambda)$, originally defined between 2 and 20 $\mu$m, has been used to define observed YSO classes and link them to the early stellar evolutionary stages \citep{lad84,ada87,and93,gre94}, see \citet{eva09} for a review. Class~IIIs have $\alpha < -1.6$, Class~IIs occupy a region of $ -1.6 < \alpha < -0.3$, flat-spectrum sources have $ -0.3 < \alpha < 0.3$, and Class~Is have $\alpha > 0.3$. Class~0s are often not directly detected at mid-IR wavelengths and are classified by their large submillimeter luminosity. Class~I sources are embedded protostars with infalling dust envelopes, flat-spectrum sources are thought to have cleared out a cavity in their dust envelope, and Class~IIs are pre-main sequence (PMS) stars with dust disks. Class~IIIs have no detectable IR excesses and cannot, therefore, be distinguished from field stars with near- and mid-IR photometry alone. An alternative to the SED slope is to use appropriately defined loci in colour-colour (CC) diagrams to distinguish between the YSO classes and separate IR-excess sources from reddened field stars. For IRAC photometry in particular, a number of slightly different colour criteria have been used by different authors \citep{meg04,all04,gut08,gut09}. \begin{figure} \includegraphics[angle=270,width=9cm]{irac1234.eps} \caption []{The IRAC $[3.6]-[4.5]/[5.8]-[8.0]$ diagram for the sample of 316 sources with photometric errors $<$ 0.2 mag in all bands (black dots) and the selected Class~I (red squares) and Class~II (green circles) candidates using the criteria given in the text. The locations of the three sources A, B and C are marked with blue star symbols. The direction and size of the reddening vector for $A_K$ = 3 mag is according to empirical relations for the IRAC bands by \citet{ind05}.
} \label{irac1234} \end{figure} \begin{figure} \includegraphics[angle=270,width=9cm]{jhk2.eps} \caption []{The $J - H/K-[4.5]$ diagram for the sample of 2116 sources with photometric errors $<$ 0.2 mag in these bands (black dots). The already classified YSO candidates are shown with red squares (Class~I) and green circles (Class~II). The location of the bright sources $A$, $B$ and $C$, of which $A$ and $C$ are resolved doubles in the near-IR, are marked with blue stars. Some sources have upper limits only in the $J$ band. Some have lower limits due to saturation in the 4.5 $\mu$m band. The slope of the reddening band is indicated as a dashed line. } \label{jhk2} \end{figure} Our strategy to define the YSO sample is summarised as follows: 1) find Class~I and Class~II candidates from the IRAC colour-colour (CC) diagram in Fig.~\ref{irac1234}, 2) use the CC diagram $J - H/K_S - [4.5]$ in Fig.~\ref{jhk2} to check validity of the candidates found above as well as extract IR-excess sources not detected in step 1, and 3) use the CC diagram $J - H$/$H - K_S$ in Fig.~\ref{jhk} to search for any additional IR-excess sources not found in any of the above two steps. From the complete source table with Omega2000 $J$, $H$ and $K_S$ magnitudes and IRAC [3.6], [4.5], [5.8] and [8.0] magnitudes, we first consider the sample of 302 sources with photometric errors $<$ 0.2 mag in all IRAC bands, and use the IRAC $[3.6]-[4.5]/[5.8]-[8.0]$ diagram shown in Fig.~\ref{irac1234} to select Class~I and Class~II candidates. The extinction vector is based on the average IRAC extinction relations found by \citet{ind05} for the diffuse ISM. In line with the Class~I and Class~II criteria of \citet{meg04} and \citet{all04} we define Class~I candidates as those that have: ($[5.8]-[8.0] - \sigma_{[5.8]-[8.0]} >$ 1.2) and ($[3.6]-[4.5] - \sigma_{[3.6]-[4.5]} >$ 0.4) or ($[3.6]-[4.5] - \sigma_{[3.6]-[4.5]} >$ 0.8). This reveals 11 Class~I candidates, of which 8 are located in the globule head within a radius of 0.4 pc ($\sim$1$'$ at d=1.4 kpc) from the central bright star~A, see Fig.~\ref{h2image_ysos}. The three remaining sources are faint (13.4 $<$ [3.6] $<$ 14.2 mag) and separated from the globule by more than 6$'$. These could be scattered Class~Is anywhere along the line of sight, or they could be extragalactic sources, such as AGNs or galaxies with strong PAH features contaminating the sample. We did not perform the rigorous `weeding out' of extragalactic contaminants described in \citet{gut08}, since our main target of this study is a small globule over whose area such a contamination must be negligible. Some of the Class~I candidates in the globule are very bright in the IRAC bands and enter the saturation limits. For these, the IRAC colour indices are uncertain, but as shown below, their lower limit on the $K - [4.5]$ index strongly supports a Class~I designation. \begin{figure}[t] \includegraphics[angle=270,width=9cm]{jhk.eps} \caption []{Omega2000 $J-H/H-K_S$ diagram for 3435 sources with errors $<$ 0.06~mag in all bands (black dots). The empirical reddening vector (dashed line) has a slope of 1.97. We have overlaid the positions of known Class~Is (red squares), Class~IIs (green circles), and the stars $A$, $B$ and $C$ (blue stars). 
The nine new YSO candidates (magenta triangles) are sources with $K$-band excess not already extracted using the previous two CC diagrams.} \label{jhk} \end{figure} From the remaining sources we define Class~II candidates as those that satisfy both $[5.8]-[8.0] - \sigma_{[5.8]-[8.0]} >$ 0.4 and $[3.6]-[4.5] - \sigma_{[3.6]-[4.5]} >$ 0. This results in 35 Class~II sources, of which only two are located in the globule head, namely the bright stars A and B. Both of these are saturated in all IRAC bands, thus the Class~II designation is uncertain, but the clear near-IR excesses confirm their YSO status. The remaining 33 Class~IIs have a more extended spatial distribution, as shown in Fig.~\ref{h2image_ysos}. To check that the above Class~I and Class~II candidates are not affected by PAH contamination, which would masquerade as excess emission at 5.8 and 8.0 $\mu$m, or by source confusion in the centre of the globule, we require that the candidates have point source counterparts in at least two of the near-IR bands, and we use the $J - H/K - [4.5]$ diagram shown in Fig.~\ref{jhk2} to verify the IR excess in the $K - [4.5]$ index. Practically all YSO candidates, except three Class~IIs, are well separated from the reddening band with clear IR excesses, and, in addition, Class~Is have in general larger IR excesses than Class~IIs. We note that for two Class~Is we only have lower-limit estimates on the $J$-band magnitudes. For some of the brightest sources, the IRAC magnitudes are upper limits due to saturation/non-linearity. The double sources Stars A and C are not spatially resolved by IRAC, and we simply divided their IRAC fluxes in two; thus, their locations in this diagram are only indicative. The $J - H/K - [4.5]$ diagram is also used to find fainter IR-excess sources. It shows a total sample of 2116 point sources with photometric errors $<$ 0.2 mag in all the bands $J$, $H$, $K$, and $[4.5]$. The vast majority are located along a reddening band with no excess in the $K-[4.5]$ index. The fact that the upper part of the reddening band becomes bluer is believed to be due to giants with strong CO absorption in the [4.5] band \citep{meg04}. The reddening vector is based on $A_{\rm 4.5} = 0.43 A_K$ \citep{ind05}, combined with our near-IR extinction law (see below), and has a slope of 1.95, in good agreement with the value of 1.97 used by \citet{riv15} as the division line for sources without IR excesses in a similar study in Cygnus. We assign as IR-excess sources those that are located to the right of and below this slope by more than 2$\sigma$ in the respective colour indices. This gives 130 YSO candidates, of which 88 are new, while 42 of the 44 already suggested YSO candidates also obey this criterion. The 88 new YSOs have clear IR excess in their $K - [4.5]$ colour, and the majority occupy the same locus as the previously defined Class~II sources. We interpret these sources as fainter Class~II sources that could not be classified with IRAC photometry alone. As many as 27 of the 88 new YSO candidates are located within a projected radius of $\sim$1$'$ in the globule head. Next we explore the sample of 3435 sources with photometric errors $<$ 0.06 mag in $J$, $H$, and $K_S$ to check whether there are fainter YSOs with excess emission in the $K_S$ band not already classified above. The $J-H$/$H-K_S$ diagram is shown in Fig.~\ref{jhk}. 
The reddening slope was found, by a linear fit to all sources that show no IR excesses, to be 1.97 $\pm$ 0.01, and it is drawn (dashed line) from the position of the reddest M dwarfs, that is, shifted by 0.08~mag to the right. This slope corresponds to $\beta = 1.83$ for the near-IR extinction law in the form $A_{\lambda} \propto \lambda^{-\beta}$. The positions of known Class~I and Class~II sources are indicated with red squares and green circles, as well as the locations of the stars $A_N$, $A_S$, $B$, $C_{NE}$ and $C_{SW}$ (blue stars). We identify sources with $K$-band excess as those lying more than $2\sigma$ of the uncertainty in the colour indices to the right of and below the reddening vector. A total of 26 such sources were found, of which 16 also have 4.5 $\mu$m excess and one is a previously found Class~II. Thus, only nine are new YSOs not already classified. \begin{figure} \includegraphics[angle=0,width=9cm]{h2image_ysos.eps} \caption []{Spatial distribution of the YSO candidates overlaid on the Omega2000 15$'\times$15$'$ H$_2$ v=1-0 S(1) image. Class~Is (red squares), Class~IIs (green circles), additional YSOs with 4.5 $\mu$m excess (blue plus signs) and $K$-band excesses (magenta diamonds) not otherwise classified. The black circle has a radius of 1$'$ (0.4 pc) and shows the approximate extent of the cluster in the globule head.} \label{h2image_ysos} \end{figure} \begin{figure}[ht] \includegraphics[angle=0,width=9cm]{h2hj_zoom_ysos2.eps} \caption [] {Close-up view of the globule in RGB-coded colours where red is the $H_2$ image, green is $H$ and blue is $J$ from Omega2000, N up E left. YSOs are marked as in Fig.~\ref{h2image_ysos} and the three bright stars $A$, $B$ and $C$ are labeled above (north of) the stars. The two circles indicate radii of 0.2 and 0.4 pc from star $A$.} \label{h2hj_zoom_ysos} \end{figure} In the 15$\arcmin$ $\times$ 15$\arcmin$ field we thus found 11 Class~I candidates, 36 Class~IIs, 88 YSO candidates with excess emission at 4.5 $\mu$m that are not already in the previous two groups, and a further 9 sources with K-band excess not previously designated with a YSO status, making a total of 144 YSO candidates over the whole field. As shown in Fig.~\ref{h2image_ysos}, the spatial density of these is clearly enhanced at the location of the globule, where we find 40 YSOs within a projected radius of 1 arcminute (0.4~pc). Fig.~\ref{h2hj_zoom_ysos} shows a close-up view of the globule with the YSOs overlaid, including the three bright stars $A$, $B$ and $C$. Thus, the globule clearly hosts a small embedded cluster, supporting the scenario proposed by \citet{sch12}. \subsubsection{The YSO sample\label{yso_sample}} We note that the above Class~I criteria make no distinction between flat-spectrum, Class~I or Class~0 sources. Those in the Class~I group that are located in the globule have a relatively large $[5.8]-[8.0]$ index combined with a relatively small $[3.6]-[4.5]$ index, a region in the CC diagram typically occupied by flat-spectrum sources, as shown in \citet{all07} and \citet{eva09}. A few Class~Is have $K - [4.5]$ indices typical of Class~II sources. The sample of 88 + 9 IR-excess sources that cannot be given a definite Class~I or Class~II designation are most likely pre-main sequence (PMS) stars with disks. One of these sources is 2MASS J20333684+4009387, located approximately 2.5$'$ away from the globule to the WNW, a previously spectroscopically identified T~Tauri star \citep{vin08}. 
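For reference, the colour cuts described above can be condensed into a short sketch. The following Python function is only an illustration of the selection logic; the function name and the simple scalar interface are ours and are not part of the actual reduction pipeline:
\begin{verbatim}
def classify_irac(c1, c2, e1, e2):
    """Illustrative IRAC colour classification.

    c1 = [3.6]-[4.5], c2 = [5.8]-[8.0] (mag); e1, e2 are their 1-sigma errors.
    Returns 'I', 'II' or None (no class assigned from IRAC colours alone).
    """
    # Class I: red in both colours, or very red in [3.6]-[4.5] alone
    if (c2 - e2 > 1.2 and c1 - e1 > 0.4) or (c1 - e1 > 0.8):
        return 'I'
    # Class II: moderate excess in both colours
    if c2 - e2 > 0.4 and c1 - e1 > 0.0:
        return 'II'
    return None

# example: [3.6]-[4.5] = 0.5 +/- 0.05, [5.8]-[8.0] = 1.4 +/- 0.1  ->  'I'
print(classify_irac(0.5, 1.4, 0.05, 0.1))
\end{verbatim}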
In the $J - H$/$K_S - [4.5]$ diagram, the majority of the IR-excess sources are located in the region dominated by Class~IIs, while only a few have significantly larger excesses. Thus, we include these in the Class~II group for statistical considerations. Because our selection of YSO candidates is based on IR excess only, a number of sources without IR excess could, in principle, be young PMS members of Class~III type. The number of sources within 0.4 pc of star $A$ that have detections in all near-IR bands is 140, while 40 of these are IR-excess YSOs. For two reference fields of equal size outside the globule, we find 86 and 122 sources with detections in all near-IR bands. Thus, on average, there is no over-density of stars at the location of the globule apart from the IR-excess YSOs, an indication that the contribution of Class~III sources is insignificant. Keeping in mind the above loose definitions of the Class~I and Class~II groups, we find a number ratio of Class~I/Class~II in the total area of 11/133, while within a projected radius of 0.4~pc of the globule head this ratio is 8/32. If we consider the inner part of the globule within a radius of 0.2~pc, this number ratio is 7/10. The lifetime of the Class~I stage is estimated to be 0.54 Myr \citep{eva09}, so that such a high number of Class~Is suggests that there has been a recent burst of star formation in the central part of the globule. The Class~I sources are all located in the southern part of the inner cluster, while the more evolved sources are mainly in the northern part. Counting up the total number of YSOs and the Class~I fraction, we find that in the northern half of the area inside a radius of 0.4 pc, Class~Is constitute only 8\% (2/25) of the YSO population, while in the southern half they comprise 53\% (9/17). The projected spatial distribution of the Class~Is is highly clustered and clearly to the south of the cluster centre. The remaining YSOs, on the other hand, have a projected distribution that is more dispersed overall, but 23 of the 31 sources are in the northern part. Although the velocity dispersion of the newly formed stars will, with time, smear out any initial spatial pattern of star formation (by up to 1~pc, or 2$\farcm$5 on the plane of the sky at the distance of Cygnus~OB2, over 1~Myr for a velocity of 1~km~s$^{-1}$), the persisting predominance of Class~II sources ahead (north) of the globule could be an indication of star formation progressing from north to south inside the globule, that is, propagating away from the centre of Cygnus~OB2. In Table~\ref{tab-ysos} we list the 40 YSOs within a projected radius of 0.4~pc centred on star A with positions and multi-band photometry. The near-IR photometry is from Omega2000 except for the brightest sources that enter the non-linear regime of the detector, where we instead use the NOTCam magnitudes when available and elsewhere the 2MASS magnitudes. For sources unresolved by Omega2000, we use the NOTCam photometry where available. \begin{figure}[ht] \includegraphics[angle=0,width=8cm]{glob_khj_zoom2.eps} \caption [] {NOTCam JHK$_S$ 9'' zoom-in on star $A$, resolved in two components with separation 0$\farcs$52. 
The southern red component is a NIR excess source with an extinction A$_K \sim$ 2 mag, while the blue northern component has no NIR excess and an extinction of $\sim$ 1 mag.} \label{notcam-zoom} \end{figure} \begin{figure} \includegraphics[angle=270,width=9cm]{sed_ABC.eps} \caption [] { Observed SEDs of the stars A, B and C. The double sources A (0$\farcs$52 separation) and C (1$\farcs$42 separation) are resolved only in the bands $J$, $H$ and $K_S$. The optical counterparts are likely $C_{NE}$ and $A_N$. } \label{sed} \end{figure} \begin{landscape} {\scriptsize \begin{table} \caption{The 40 YSO candidates within a radius of 0.4 pc from star $A$. The columns give: ID, J2000 positions, magnitudes and errors in all bands: $V$, $R$, $I$, $J$, $H$, $K_S$, [3.6], [4.5], [5.8], [8.0], and YSO classification. Near-IR photometry is from Omega2000 except for very bright or unresolved sources, where we use the NOTCam photometry. IRAC magnitudes that reach saturation are flagged as lower limits. The YSOs are listed in groups, first the 8 Class~Is, then the 4 Class~IIs, followed by 28 PMS stars with IR-excess emission. The last column gives the YSO class or the selection criterion, that is, excess in the $K-[4.5]$ index (4) or excess in the $H-K_S$ index (5). The full table of 144 YSO candidates is available in electronic version. } \label{tab-ysos} \input{final_40ysos_v2.tex} \end{table} } \end{landscape} \subsection{The brightest members of the globule population \label{ind_stars}} The three brightest stars projected on the globule at near- and mid-infrared wavelengths are respectively labeled as stars~$A$, $B$ and $C$ in Fig.~\ref{h2hj_zoom_ysos}. The spectroscopic study of these three stars yields additional information useful to assess their nature as well as the evolutionary status of the globule and the stellar population associated with it. With the high spatial resolution of our NOTCam images, Star~$A$ is resolved into two close components separated by 0$\farcs$52, see Fig.~\ref{notcam-zoom}. This is too close to be resolved by {\it Spitzer}, and the combined luminosity of the pair results in the characteristics of a Class~II source. The resolution of the pair in the NOTCam images at $J$, $H$ and $K_S$ shows that only the Southern component, which is also redder, displays $K-$band excess (see Fig.~\ref{jhk}). Star~$B$ also has a Class~II SED, and appears unresolved in all the images. The extended emission of the globule hints at a peak coincident with its position. The brightest Class~I source in the mid-IR is star~$C$, approximately 15$\arcsec$ to the South of star~$A$. Star~$C$ is a resolved binary with a separation of 1$\farcs$42 both in the Omega2000 and NOTCam datasets. Both components have near-IR excess. The NE component is the brightest and the SW component is the reddest. Figure~\ref{sed} shows the observed SEDs of these stars from the $V$ band to the 8 $\mu$m IRAC band. \begin{figure} \includegraphics[angle=0,width=9cm]{slitimage.ps} \caption [] {Slit positions and widths overlaid on NOTCam high-resolution K-band image zoomed in to 50$\arcsec \times$40$\arcsec$. Three near-IR slit positions (red) across the double sources stars $A$ and $C$, and the single source $B$, and the optical slit (blue) across stars~$A$ and $B$ and the Nebula. } \label{slitimage} \end{figure} \begin{figure*}[ht] \includegraphics[angle=0]{starA_label.eps} \caption [] {Optical spectrum of star $A$. 
Note the absorption lines of helium at 5875~\AA \ and 6678~\AA .} \label{starA} \end{figure*} \subsubsection{Stars $A$ and $B$} The spectra of both stars $A$ and $B$ (see Fig.~\ref{slitimage} for the slit positions and Figs.~\ref{starA} and \ref{starB} for the spectra) display very reddened continua, consistent with the high extinction derived in their direction. Besides the telluric absorption bands in the red, both spectra display abundant diffuse interstellar bands \citep{sar06}. However, the He\,{\scriptsize I} 5875~\AA \ and 6678~\AA \ lines are clearly seen in absorption in both spectra, thus unambiguously classifying {\sl both stars as B-type or earlier}. The emission-line spectra of both stars show remarkable differences, though. Superimposed on the spectrum of star $B$ is that of a low-excitation \HII\ region with an electron density $n_e \simeq 800$~cm$^{-3}$, derived from the ratio of the [S\,{\scriptsize II}] lines at 6717~\AA \ and 6731~\AA , representing a very local enhancement of the \HII\ nebula. The expected photospheric H$\alpha$ absorption in the spectrum of star $B$ is completely filled in by the nebular emission at the same wavelength, preventing the determination of its spectral type through the ratio of the He\,{\scriptsize I} to Balmer lines. However, the ability of star $B$ to excite a local \HII\ region, together with its low excitation, suggests an early B spectral type. The possible presence of weak Si\,{\scriptsize III} absorption at 5740~\AA , which is only tentatively detected in our spectrum, would constrain it to the B0.5 - B1.5 interval \citep{wal80}. While the continuum and absorption-line spectrum of star~$A$ is similar to that of star~$B$, including the presence of He\,{\scriptsize I} in absorption, the emission-line spectrum at its position is markedly different and clearly indicates a circumstellar origin. The very intense H$\alpha$ emission displays an obvious P~Cygni profile, with an extended red wing. The same P~Cygni profile is clearly seen in the H$\beta$ line as well. No other prominent emission lines appear in the red, but the blue spectrum displays strong Fe\,{\scriptsize II} lines of multiplets 42 and 49. \begin{figure*}[t] \includegraphics[angle=0]{starB_norm.eps} \caption [] {Optical spectrum of star $B$. Note the absorption lines of helium at 5875~\AA \ and 6678~\AA .} \label{starB} \end{figure*} The concurrent appearance of H$\alpha$ with a P~Cygni profile \citep{bes99,kra08} and of the Fe\,{\scriptsize II} multiplets in emission \citep{ros99,her04}, together with a photospheric spectrum displaying He\,{\scriptsize I} in absorption, classifies {\sl Star~A as a Herbig Be star}. However, the optical spectrum needs to be interpreted with caution in view of its binarity. The orientation of the slit, coupled with its width, the 1$\farcs$5 seeing under which the spectra were taken, and the close separation between both components, implies that the spectrum is a blend of the individual spectra of components North and South. The photometry presented in Table~\ref{tab-ysos} suggests that the Northern component, being the bluer of the two, dominates the continuum in the region covered by our spectra, but the characteristics of component South, a source with strong near-infrared excess, make it a candidate contributor to the emission-line spectrum. $K$-band spectroscopy of star~$A$ under excellent image quality conditions, allowing us to partly resolve the spectra of both components, is presented in Figure~\ref{kspec2}. 
Both spectra have essentially featureless continua, consistent with them being produced by early-type photospheres with some degree of continuum veiling. The spectra also show that Br$\gamma$ is largely produced by the Southern component, with an equivalent width of 8 \AA . The much stronger Br$\gamma$ emission of the Southern component hints that the H$\alpha$ emission in the blended spectrum, and possibly the entire emission-line spectrum of the system, is actually dominated by the redder Southern component. \subsubsection{Star C \label{StarC}} Star~$C$ is one of the brightest stars in the globule at near- and mid-IR wavelengths, with colours corresponding to a Class~I young stellar object based on {\it Spitzer} photometry, which does not resolve the two sources of 1$\farcs$4 separation. Both components $C_{NE}$ and $C_{SW}$ have near-IR excesses (cf. Fig.~\ref{jhk}), with $C_{SW}$ being the fainter and the redder of the two. The K-band spectra of the two components of Star $C$ are shown in Fig.~\ref{kspec}. The brighter northeastern component has a strong Br$\gamma$ emission line with an equivalent width as large as 70~\AA , typically seen only for the most luminous Class\,I type YSOs \citep{ish01}, together with He\,{\scriptsize I} at 2.0583 $\mu$m in emission and a tentative detection of Fe\,{\scriptsize II} in emission at 2.0893 $\mu$m. The lack of photospheric features hampers its classification, but the simultaneous presence of IR excess and of He\,{\scriptsize I} and Fe\,{\scriptsize II} in emission suggests a massive YSO. The fainter and redder southwestern component has a faint Br$\gamma$ emission line (EW $\sim$ 3 \AA ) on an otherwise featureless spectrum. Both stars show faint and slightly extended H$_2$ emission at 2.1218 $\mu$m. We measure the line ratio of He\,{\scriptsize I} (2.058 $\mu$m) to Br$\gamma$ to be $\sim$ 0.1. According to the models of \citet{lum03} in their study of He and H line ratios of compact H~II regions, a line ratio of 0.1 suggests that $C_{NE}$ has a temperature T$_{\rm eff} \ll $ 36000~K, and probably $<$ 30000~K. Thus, $C_{NE}$ could be a late O or early B star, probably more massive than star~$A$, for which we do not detect He\,{\scriptsize I} emission. For further comparison, we obtained a K-band spectrum of star~$B$ and also found a He\,{\scriptsize I} to Br$\gamma$ ratio of approximately 0.1 for this star, but with the difference that the Br$\gamma$ equivalent width of star~$B$ is almost a factor of ten smaller (see Fig.~\ref{kspec}). Interestingly, the He\,{\scriptsize I} emission is seen superimposed on the spectrum of $C_{NE}$ alone and is not seen towards $C_{SW}$, located only 1$\farcs$4 ($<$ 0.01~pc projected distance) away. \begin{figure}[t] \includegraphics[angle=270,width=9cm]{Kspec_HR_A.eps} \caption [] {Medium-resolution (R=5500) normalised $K$-band spectra of the two 0$\farcs$52 separation components of star $A$ resolved spatially in $A_N$ (upper) and $A_S$ (lower). The spectra are offset by unity on the y-axis.} \label{kspec2} \end{figure} \begin{figure}[t] \includegraphics[angle=270,width=9cm]{Kspec_WF_ABC.eps} \caption [] {Low-resolution (R=2100) $K$-band spectra of the three brightest stars. Star~$A$ shows both components unresolved, star~$B$ is apparently single, and star~$C$ is well resolved into the NE and SW components. 
The spectra are offset by unity on the y-axis.} \label{kspec} \end{figure} \subsection{Intrinsic properties of the YSO population\label{phys_ysos}} The combination of near-infrared photometry and the visible photometry available for many of the sources in the globule allows us to probe the photosphere-dominated region of their spectral energy distributions and compare it to evolutionary tracks to infer their intrinsic properties. The number of YSOs identified in the concentration toward the head of the globule also provides some insight into their collective properties, most notably their mass function. We have used the PMS evolutionary tracks of \citet{pal99} for masses below 4~M$_\odot$, with colours from \citet{tes98}, complemented for higher masses with the evolutionary tracks of \citet{lej01} for non-rotating stars of solar metallicity, which include a synthetic computation of the intrinsic colours. The fluxes at $J$ and shorter wavelength bands were fitted to the synthetic fluxes leaving the $V$-band extinction $A_V$ as an adjustable parameter, scaling it to the other bands by means of the Cardelli et al. (1989) extinction law for a total-to-selective extinction ratio $R_V = 3.1$. We did not include infrared bands longward of $J$ in the fit, owing to the possible existence of circumstellar emission contributing substantially to the flux at long wavelengths. We used the 1~Myr isochrone as being approximately representative of young Class~II objects \citep[e.g.][]{eva09}, where the initial circumstellar envelope has been largely dissipated and the central object has become accessible to visible and near-infrared observations. Isochrone fitting, as a way to determine the age of the aggregate, is not applicable to our data. This is due to the degeneracy of isochrones at high and intermediate masses in colour-magnitude diagrams, and to the fact that we do not have optical photometry for the lowest-mass stars, for which this degeneracy breaks and which would allow us to fit independently the spectral energy distribution and the amount of foreground extinction as described below. A least-squares fit to the $V, R_C, I_C, J$ magnitudes was performed for those stars having measurements in at least two of those bands to derive their mass and the extinction in their direction. This is the case for 13 of the 40 YSO candidates listed in Table~\ref{tab-ysos}. Those stars show $A_V$ values mostly within the range $7.0< A_V < 11.9$, with the exception of two sources without strong infrared excess for which we derive a much smaller value of $A_V$, and which are likely foreground stars. The small size and low density of the head of the globule suggest a low level of internal extinction: assuming a maximum extension along the line of sight of 0.8~pc, a typical density $n(H) \sim 200$~cm$^{-3}$ (see Section~\ref{neb_spect}) and a proportionality between visible extinction $A_V$ and hydrogen column density $N_H$ of the form $A_V = 5.6 \times 10^{-22}$~cm$^{2} N_H$ \citep{lis14}, we estimate that the internal extinction does not exceed $A_V \simeq 0.3$~mag. Most of the extinction is thus produced in the foreground, possibly associated with the Cygnus Rift. This feature is an extended area of gas and dust at a distance of approximately 600 pc, not related to the Cygnus X region \citep[e.g.][]{sch07}, which produces a sudden increase in visual extinction \citep{staude1982}. 
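The internal-extinction estimate quoted above is easily reproduced; the following minimal Python sketch simply evaluates the numbers assumed in the text (path length, density, and the $A_V$--$N_H$ conversion) and is given for transparency only:
\begin{verbatim}
# Upper limit on the internal extinction of the globule head,
# using the values assumed in the text.
PC_IN_CM = 3.086e18          # 1 pc in cm
n_H = 200.0                  # assumed H density, cm^-3
path = 0.8 * PC_IN_CM        # assumed maximum extent along the line of sight, cm

N_H = n_H * path             # column density, ~4.9e20 cm^-2
A_V = 5.6e-22 * N_H          # A_V--N_H conversion factor quoted in the text

print(f"N_H = {N_H:.2e} cm^-2, A_V = {A_V:.2f} mag")   # A_V ~ 0.3 mag
\end{verbatim}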
For stars with no photometric measurements in bands bluer than $J$ we have adopted $A_V =9.6$, the average value obtained for the stars for which an individual $A_V$ determination is possible, and have estimated their masses from the corresponding absolute magnitude $M_J$ given by the isochrones. The adopted value has an estimated uncertainty of $\sim \pm 3$~mag, translating into an uncertainty in $M_J$ of $\sim \pm 0.8$~mag. \begin{table}[t] \caption{Estimated properties of the YSOs, assuming 1~Myr age.} \begin{tabular}{lcccl} \hline \hline
Source & Mass (M$_\odot$) & $M_J$ & $A_V$ & Notes \\ \hline
3 & $<0.1$ & - & - & \\
4 & 1.8 & 1.89 & 9.6$^a$ & \\
5 & 0.7 & 3.10 & 9.6$^a$ & \\
$C_{NE}$ & 8.1 & -1.12 & 11.9 & \\
8 & 1.7 & 1.95 & 9.6$^a$ & \\
9 & 0.8 & 2.91 & 9.6$^a$ & \\
10 & 0.4 & 3.70 & 9.6$^a$ & \\
$B$ & 22.9 & -3.32 & 10.7 & \\
$A_N$ & 12.9 & -2.05 & 8.8 & \\
65 & 1.3 & 2.28 & 7.8 & \\
66 & 0.3 & 4.37 & 1.6 & Likely foreground \\
68 & 1.8 & 1.93 & 9.6$^a$ & \\
69 & 1.9 & 1.91 & 8.8 & \\
70 & 0.9 & 2.87 & 9.6$^a$ & \\
71 & 0.5 & 3.50 & 9.6$^a$ & \\
73 & 0.6 & 3.27 & 9.6$^a$ & \\
74 & 1.6 & 2.07 & 9.6$^a$ & \\
75 & 5.2 & -0.13 & 11.0 & \\
76 & 0.1 & 5.23 & 9.6$^a$ & \\
77 & 0.2 & 4.65 & 9.6$^a$ & \\
78 & 0.4 & 3.85 & 9.6$^a$ & \\
80 & 0.1 & 4.95 & 9.6$^a$ & \\
82 & 0.4 & 3.79 & 9.6$^a$ & \\
83 & 0.6 & 3.33 & 9.6$^a$ & \\
85 & 0.8 & 2.93 & 9.6$^a$ & \\
87 & 1.2 & 2.32 & 9.6$^a$ & \\
88 & 8.9 & -1.21 & 9.1 & \\
89 & 2.6 & 0.88 & 10.9 & \\
90 & - & - & $\sim 0$ & Foreground \\
94 & $<0.1$ & - & - & \\
96 & 0.3 & 4.28 & 7.0 & \\
98 & 0.8 & 3.08 & 9.6$^a$ & \\
99 & 0.4 & 3.69 & 9.6$^a$ & \\
100 & 0.2 & 4.86 & 2.7 & Foreground? \\
140 & 0.2 & 4.69 & 9.6$^a$ & \\
142 & 0.9 & 2.82 & 9.6$^a$ & \\
\hline \end{tabular} \label{tab-physprop} $^a$ Properties based on the $J$ magnitude adopting the average value of $A_V$ (see text). \end{table} The derived mass $M$ and absolute $J$ magnitude $M_J$, and the derived or adopted extinction $A_V$, are given in Table~\ref{tab-physprop}. The YSO population of the globule covers over two orders of magnitude in mass, from stars~$A_N$ and $B$, which are the most massive at $M \simeq 13$~M$_\odot$ and $M \simeq 23$~M$_\odot$, respectively (consistent with their early B spectral types), to below 0.1~M$_\odot$, as we derive for stars \#3 and \#94. These latter two objects, which display clear infrared excesses at the longer {\it Spitzer} bands, may be massive brown dwarf YSOs of the aggregate in the globule. We note that the mass of 8.1~M$_\odot$ that we derive for star~$C_{NE}$ makes it significantly less massive than stars $A_N$ and $B$, consistent with its lower brightness in the visible and near-infrared. However, such a relatively moderate mass for a young stellar object seems to contradict the appearance of He\,{\scriptsize I} in emission in the $K$-band spectrum, which we used in Sect.~\ref{StarC} to argue for a spectral type possibly earlier than those of stars $A_N$ and $B$. The apparent contradiction might be accounted for by the possibility that the He\,{\scriptsize I} emission is not produced in a compact \HII\ region around star~$C$ but instead in hot spots caused by accretion onto its surface, and is thus unrelated to the temperature of the central object. 
On the other hand, it is also possible that the main contribution to the flux received at short wavelengths does not actually come directly from the central object, but rather from the scattering of its light by the dusty circumstellar envelope that is also responsible for the strong infrared excess of star~$C$ at long wavelengths. The available observations do not allow us to discern between those two possibilities, and we must consider that the actual mass of star~$C$ remains particularly uncertain. The determination of the mass function of the aggregate is complicated by several factors, which include the uncertainties in PMS evolutionary tracks, the unknown age of the members, the approximate correction for extinction effects, incompleteness at the lowest masses, and incompleteness due to the existence of members still in early evolutionary phases and not yet accessible to direct observation in the visible and near-infrared. Nevertheless, the histogram of masses shown in Fig.~\ref{imf} that we obtain from the estimated physical properties gives some interesting insights into the collective properties of the aggregate. \begin{figure} \includegraphics[angle=0,width=9cm]{histomass.eps} \caption []{Histogram of the distribution of stellar masses in the globule aggregate, according to the derived values listed in Table~\ref{tab-physprop}. An age of 1~Myr has been assumed.} \label{imf} \end{figure} Assuming a power-law slope of the initial mass function of the form $N(M) \propto M^{(\Gamma + 1)} \Delta \log M$, the histogram for masses above solar indicates a slope $\Gamma = -1.6 \pm 0.2$, shallower than the Salpeter exponent ($\Gamma = - 2.35$) characterising the functional forms of the initial mass function above one solar mass. The peak of the mass histogram is, in principle, reminiscent of that observed in the initial mass function in the field and in clusters, reproduced by the log-normal part of the analytical representation of the initial mass function \citep{cha05}, where such a peak appears at 0.1-0.2~M$_\odot$. We find it at a significantly higher mass, $\sim 1$~M$_\odot$, but the mass of the peak is very sensitive to the adopted age and drops to $\sim 0.3$~M$_\odot$ in the extreme case of adopting the 0.1~Myr PMS isochrone of \citet{pal99}. The offset may also be a simple artifact of incompleteness of our sample, but we note that stars near the peak have relatively bright absolute magnitudes ($M_J \simeq 2.5$), well above our completeness limit even at the distance modulus of Cygnus OB2 when obscured by $A_V = 10$ ($A_J = 2.9$), therefore suggesting that the peak is real. If our sample is nearly complete down to where the initial mass function declines with decreasing mass, the sum of the estimated stellar masses given in Table~\ref{tab-physprop} yields a total mass of the stellar aggregate of $\sim 90$~M$_\odot$. An overestimate of the actual age of the aggregate members would also have the effect of making the high-mass slope of the mass function shallower: using the 0.1~Myr isochrone yields $\Gamma = -1.4 \pm 0.2$. All this hints at a departure from the canonical initial mass function for the aggregate, in the sense that the mass function of the population embedded in IRAS~20319+3958 would be top-heavy and have its peak at a higher mass. However, given the many uncertainties involved, deeper imaging and complementary observations allowing a more robust determination of the mass distribution would be required to draw firm conclusions on this point. 
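As a guide to how the quoted slope can be obtained, the sketch below fits a straight line to the logarithmically binned mass histogram. It is an illustration only: the bin edges are an arbitrary choice made here for the example, and the exact result depends on the binning, which is part of the quoted uncertainty.
\begin{verbatim}
import numpy as np

def high_mass_slope(masses, edges):
    """Estimate Gamma from N(M) ~ M**(Gamma+1) dlogM in equal log bins."""
    counts, _ = np.histogram(masses, bins=edges)
    centres = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centres
    keep = counts > 0
    slope, _ = np.polyfit(np.log10(centres[keep]),
                          np.log10(counts[keep]), 1)
    return slope - 1.0                            # log N = (Gamma+1) log M + const

# hypothetical usage with the masses of Table 2 above 1 Msun and three log bins:
# Gamma = high_mass_slope(masses_above_solar, np.logspace(0.0, 1.4, 4))
\end{verbatim}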
\subsection{The extended emission} Deep narrow-band imaging at 2.122~$\mu$m, covering the ro-vibrational 1-0 S(1) line of H$_2$, and at 2.166~$\mu$m, covering Br$\gamma$, shows extended emission in both of these lines at the location of the globule (see Fig.~\ref{nb_rgb_im}). We have made an approximate continuum subtraction by scaling the K$_S$ band image intensity down by factors of 0.075 and 0.055 for H$_2$ and Br$\gamma$, respectively, based on the transmission curves of the filters. By comparison with Fig.~\ref{h2hj_zoom_ysos}, where the positions of the YSOs are overlaid, we see that the Br$\gamma$ emission peaks approximately 5$\arcsec$ to the south of Star $A$. There is also a rim of Br$\gamma$ emission along the northern edge of the globule, which we interpret as being due to the external UV field from the Cygnus OB2 cluster, located to the north of the globule (see Fig.~\ref{overview}). This emission is, however, much fainter than the centrally peaked emission, which we argue is due to the internal feedback from the stars inside the globule. The highest intensity is found around the binary Class~I star $C$ and coincides with the brightest peak of \OI\ emission \citep[see][Fig.~3]{sch12}, thus highlighting the extreme youth of this part of the globule. The comparison between the high-angular-resolution images tracing the PDR and the ionised gas emission highlights the striking differences between their respective distributions, already hinted at in \citet{sch12}. In the H$_2$ images, the PDRs appear mainly as bright fingers of emission highlighting rims where the molecular gas is being photodissociated. Figure~\ref{nb_rgb_im} strongly suggests that the molecular gas traced by the PDR emission is approximately distributed in a shell that delineates the outer contour of the globule. The globule and its tail are initially shaped by the external feedback from the UV field, with the globule head pointing towards the OB2 cluster and the tail away from it. The brightness pattern of the emission rims, however, suggests that they are illuminated from the inside, reinforcing the interpretation of \citet{sch12} in favour of an internal origin for the PDR emission, caused by the embedded cluster, rather than an external origin due to the UV field. On the other hand, the clear central condensation of the Br$\gamma$ emission, together with the spectroscopic evidence that we present in Sect.~\ref{neb_spect}, clearly shows that the ionised gas is also caused by the embedded cluster stars. \begin{figure}[ht] \includegraphics[angle=0,width=9cm]{h2brgj_rgb.eps} \caption [] {The globule colour-coded with H$_2$ emission (red), Br$\gamma$ (green), and J-band (blue).} \label{nb_rgb_im} \end{figure} \subsubsection{Spectroscopy of the nebula \label{neb_spect}} The extracted optical spectrum of the nebula, approximately 9$''$ east of the position of star~$A$, clearly indicates a low-excitation \HII\ region. No hints of [O\,{\scriptsize III}] emission are seen near 5000~\AA, and there is only a marginal detection of He\,{\scriptsize I} at 6678~\AA , with He\,{\scriptsize I} at 5875~\AA\ being undetectable (Fig.~\ref{nebula_spec_vis}). This is what one would expect from a small \HII\ region ionised by the soft radiation of early B-type stars in the globule, rather than by the harder radiation field dominated by the O stars of nearby Cygnus~OB2. This leads us to conclude that the radiation field inside the globule is dominated by its embedded B stars, rather than by harder radiation leaking in from the outside. 
The electron density inferred from the [S\,{\scriptsize II}] lines is $n_e \simeq 200$~cm$^{-3}$. \begin{figure}[ht] \includegraphics[angle=0,width=9cm]{nebula_spec_label.eps} \caption [] {Visible-red spectrum of the \HII\ region. The intensity ratio of [S\,{\scriptsize II}] lines at 6717 and 6731 indicates an electron density $n_e \simeq 200$~cm$^{-3}$. The marginal detection of He\,{\scriptsize I} at 6678~\AA \ (see inset) and the non-detection of He\,{\scriptsize I} at 5875~\AA\ indicate a soft ionising radiation like that expected from the early B stars embedded in the globule.} \label{nebula_spec_vis} \end{figure} \section{Discussion} Our observations reveal a cluster of very young stellar objects associated with the globule IRAS~20319+3958. Youth is clearly highlighted by the ratio of Class~I to Class~II sources, implying that star formation there has been proceeding for a period comparable to the typical span of the Class~I stage. Signposts of this youth are also found among the brightest young stellar objects of the globule: one of them (star~$A$) is a binary in which at least one of the components is a Herbig Be star, and the other two, star~$B$ and the binary Class~I star~$C$, have very compact, low-excitation \HII\ regions around them. An obvious upper limit to the number of diskless (Class~III) young stellar objects is given by the number of stars that appear projected on the area of the globule and that do not display infrared excess, as compared to the number of those same sources in a typical off-cloud area. We find no significant enhancement of non-excess sources at the position of the globule as compared to the background population, indicating that the number of Class~III sources is small and probably negligible in comparison to that of Class~I and II sources. Some extreme Class~I or even Class~0 objects surrounded by thick envelopes may be missed by the near-infrared observations, but the lack of sources detected by {\it Spitzer} and without a near-infrared counterpart in our $JHK_S$ images argues against this being a numerically significant population. We thus conclude that, although having taken place very recently, the star-formation process in IRAS~20319+3958 is essentially finished now. The high Class~I-to-Class~II ratio together with the absence of a significant Class~III population suggests that the age of the cluster is much shorter than typical disk dispersal timescales. The latter are suspected to depend on the central object's mass, the stellar density of the environment \citep{luh10} and binarity \citep{dae16}, but disk lifetimes of 1-3~Myr appear to be typical, both on a theoretical and an observational basis \citep[e.g.,][]{lix16}. The duration of the Class~I phase has been traditionally estimated by comparing the relative number of Class~I to Class~II YSOs in star-forming regions under the assumption of a constant star-formation rate \citep{eva09}, but recent work based on detailed observations and modelling of the accretion on the central object have challenged an evolutionary picture in which the circumstellar envelope steadily disperses until the Class~I/II transition takes place, suggesting instead that Class~I objects may reach ages comparable to those of T~Tauri stars, up to 1~Myr and older \citep[][and references therein]{whi07}. We can thus take 1~Myr as a reasonable estimate of the age of the globule population, as already anticipated in Section~\ref{phys_ysos}. 
However, if the more traditional interpretation of Class~I objects as a population distinctly less evolved and shorter lived than the Class~II population is adopted, then the Class~I/Class~II ratio argues for an even younger age of the aggregate. The youth of the cluster leads us to wonder whether its formation could have been triggered by the passage of the ionisation front that left the globule as an isolated feature. Evidence for triggered star formation near rim nebulae and clumps exposed to eroding ultraviolet radiation, although elusive, has been reported in a number of cases \citep[e.g.][]{sic14,get07,niw09,hay12,com05,com07} under conditions that closely resemble those found in IRAS~20319+3958. {\it Spitzer} and molecular line images of the region (see Fig.~\ref{overview} and \citet{sch16}) show the abrupt interface between the Cygnus X molecular complex and the interstellar medium cleared by the ionisation and winds from the stars in Cygnus OB2. In the region where the globule lies, this interface, which marks the current location of the ionisation front, is located approximately 24$'$ south of the globule, corresponding to a projected distance of 10.5~pc (see Fig.~\ref{overview}). If the globule had started forming stars spontaneously before being run over by the ionisation front, a cluster age of 1~Myr would require an average propagation speed of the ionisation front of approximately 10~km~s$^{-1}$ or more. The possibility that star formation started before the arrival of the ionisation front remains plausible in view of our results, unless our choice of 1~Myr as the age of the aggregate grossly overestimates its actual age, which is unlikely in view of its sizeable Class~II population. Our results, therefore, do not unambiguously argue for the passage of the ionisation front as the factor that triggered star formation in the globule. However, the approximate coincidence of the onset of star formation in the globule with the estimated passage of the ionisation front, if the latter propagated at 10~km~s$^{-1}$ \citep[see][]{sch16}, is circumstantial evidence for a causal connection between both events. We note in support of this connection that the distribution of Class~II sources in the periphery of the globule, preferentially projected on the side facing the densest regions of Cygnus~OB2 as noted in Sect.~\ref{yso_sample}, also suggests that the changes in the environment of the globule caused by its rapid exposure to the ionising radiation \citep{ber89,gri09,tre12} played a determinant role in its star-formation history. The hints of an atypical mass function described in Sect.~\ref{phys_ysos}, if real, might also be related to the particularities of star formation in a globule exposed to intense external ultraviolet radiation. Our results resemble those obtained for another star-forming region suspected to be externally irradiated, \object{IRAS~16362-4845} in \object{RCW108}, where we also found indications of a top-heavy mass function \citep{com07}. However, possible departures from a universal form of the initial mass function are far from established. Examining the excess of young stellar objects around bubbles as proof of triggered star formation, \citet{tho12} find no evidence for an initial mass function different from that in the field. 
Similarly, in their detailed study of the stellar population in the Trumpler~37 cluster and its adjacent globule IC~1396A, \citet{get12} obtain a good fit to their results with the widely used initial mass function of \citet{kro01}, for the populations of both the central cluster and the globule, although the latter does not contain stars more massive than 2~M$_\odot$. On the theoretical side, the problems in relating the effects of temperature, turbulence, and magnetic fields with the peak of the mass function and its slope at different masses have been outlined by \citet{off14}, but no clear connection between peculiarities in the initial mass function and the star-formation conditions expected as the result of the triggering mechanism has been demonstrated. \section{Summary and conclusions} We studied the content of the cluster embedded in the IRAS~20319+3958 globule and the distribution of its excited molecular hydrogen and ionised gas by means of visible and near-infrared imaging and spectroscopy, complemented with mid-infrared {\it Spitzer}/IRAC imaging. The study complements that of \citet{sch12} on the physical and kinematical conditions of the molecular and photodissociating gas, thus providing a comprehensive picture of the globule and its content. Our conclusions can be summarised as follows: \begin{itemize} \item The IRAS~20319+3958 globule contains an embedded aggregate of approximately 30 very young stellar objects, most of them characterised by the strong near and/or mid-infrared excesses expected from a very early evolutionary stage. We estimate the age of the embedded aggregate to be $\sim 1$~Myr, based on the high observed ratio of Class~I to Class II objects. We also estimate a stellar mass of the aggregate of $\sim 90$~M$_\odot$. \item The most massive members of the aggregate are three systems containing early B-type stars. Two of them are able to produce very compact \HII\ regions, one of them being still highly embedded and coinciding with a peak in PDR emission. Two of these three systems are resolved binaries, and one of them contains a visible Herbig Be star. \item The mass function of the aggregate, based on an approximate derivation of the individual stellar masses, suggests a high-mass tail characterised by a slope shallower than that found in most clusters, and a peak of the mass function possibly lying at higher masses. This might be due to the influence of the strong ionising radiation field under which the aggregate was formed. However, large uncertainties affecting our derivation of the individual masses of the members of the aggregate, most notably the age adopted for its faintest members, make this conclusion tentative at best. \item The compared morphologies of the H$_2$ emission that traces the photon-dominated regions and the Br$\gamma$ emission that traces the ionised gas suggest that molecular gas is roughly distributed as a shell around the embedded aggregate, filled with centrally-condensed ionised gas. Both the morphology and the low excitation of the \HII\ region indicate that the sources of ionisation are the B stars of the embedded aggregate, rather than the external UV field caused by the O-stars of Cygnus~OB2. \item Considering the estimated age of the embedded aggregate and the isolation of the globule, we find it likely that the formation of the embedded aggregate started when the globule was overtaken by the large-scale ionisation fronts that the O stars of Cygnus~OB2 drive into the Cygnus~X region. 
\end{itemize} The {\it Spitzer} images of the Cygnus~OB2 / Cygnus~X region offer a dramatic view of the pillars and globules resulting from the erosion of the parental molecular gas by the massive association and, simultaneously, of the recent star formation revealed by the strong infrared excesses caused by hot circumstellar dust. Those images strongly suggest that the globule IRAS~20319+3958 is a typical feature in the region rather than a rarity, and the same is apparent from the images of other massive star-forming regions. Globules and pillars and their embedded populations can thus provide excellent samples for the study of triggered star formation and of the features that it may imprint on the resulting stellar aggregates. \begin{acknowledgements} Based on observations made with Omega2000 at the 3.5m telescope of the Centro Astron\'{o}mico Hispano Alem\'{a}n (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut f\"{u}r Astronomie and the Instituto de Astrof\'{i}sica de Andaluc\'{i}a (CSIC). Partly based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrof\'{i}sica de Canarias, as well as observations with the IAC80 telescope, operated on Tenerife by the Instituto de Astrof\'{i}sica de Canarias in the Spanish Observatorio del Teide. NS acknowledges support by the ANR-11-BS56-010 project STARFICH and support from the Deutsche Forschungsgemeinschaft, DFG, through project numbers Os 177/2-1 and 177/2-2, central funds of the DFG-priority program 1573 (ISM-SPP), and the SFB 956. We thank the Calar Alto staff for the execution of our observations in Service Mode. We thank Stefan Geier, Jussi Harmanen, and Joonas Sario for some of the observations during their studentships at the NOT. \end{acknowledgements}
\section{Introduction} \label{intro} Generally speaking, a complex system can be understood as a large number of interacting elements whose global behaviour cannot be derived from the local laws that characterize each of its components. The global response of the system can be observed at different scales, and the vast number of degrees of freedom makes prediction very difficult. In this context, a probabilistic description of the phenomenon is needed in order to characterize it reasonably in terms of random variables. When the response of these systems exhibits a lack of characteristic scales, it can be described in terms of power-law type probability density functions (PDFs), $f(x) \propto x^{-\gamma}$, where $x$ corresponds to the values that the random variable characterizing the response of the system can take, $\propto$ denotes proportionality, and the power-law exponent $\gamma$ acquires values larger than one. The power law is the only function which is invariant under any scale transformation of the variable $x$ \cite{Christensen_Moloney}. This property of scale invariance provides a description of the response of the system in which there are no characteristic scales. This framework is common to many different disciplines \cite{bak1997nature,Sethna2001,Corral2018} such as condensed matter physics \cite{saljedahmen}, geophysics \cite{Corral2018}, seismology \cite{kagan2013earthquakes,Kaganbeta}, economics \cite{Gabaix2016}, linguistics \cite{moreno2016}, etc. It has been broadly observed \cite{StanleyRMP1999,STANLEY200060} that different complex systems present common values of all their power-law exponents and can be grouped into the same universality class. Therefore, it is important to determine these exponents rigorously, not only to properly characterize phenomena but also to provide a good classification into universality classes. In practice, exponents are difficult to measure empirically. Due to experimental limitations that distort the power-law behaviour, the property of scale invariance can only be measured in a limited range. Therefore, when a power-law distribution is fitted to empirical data, it is more convenient to talk about local or restricted scale invariance. In this context, the wider the range the fitted PDF spans, the stronger and more reliable this property will be. A paradigmatic example of power-law behaviour in complex systems is the well-known Gutenberg-Richter (GR) law for earthquakes \cite{Utsu1999}. This law states that, above a lower cut-off value, earthquake magnitudes follow an exponential distribution; in terms of the magnitude PDF \begin{equation} f(m) = (b \ln 10) 10^{-b(m-m_{min})} \propto 10^{-b m}, \end{equation} defined for $m\ge m_{min}$, with $m$ the magnitude (moment magnitude in our case), $m_{min}$ the lower cut-off in magnitude, and $b$ the so-called $b$-value. The general relationship between seismic moment $x$ and moment magnitude $m$ is given by: \begin{equation} x = 10^{\frac{3}{2}m +9.1}, \label{eq:moments} \end{equation} where $x$ is measured in units of Nm \cite{Kanamori79}. 
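For concreteness, the conversion of Eq.~(\ref{eq:moments}) and its inverse can be written as two small helper functions (an illustrative Python sketch; the function names are ours):
\begin{verbatim}
from math import log10

def magnitude_to_moment(m):
    """Seismic moment x (in N m) from moment magnitude m."""
    return 10.0 ** (1.5 * m + 9.1)

def moment_to_magnitude(x):
    """Moment magnitude m from seismic moment x (in N m)."""
    return (2.0 / 3.0) * (log10(x) - 9.1)

# e.g. m = 6 corresponds to x ~ 1.26e18 N m
print(magnitude_to_moment(6.0))
\end{verbatim}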
The GR law is a power-law distribution when it is written as a function of the seismic moment $x$: \begin{equation} f(x) = \frac{2}{3} \frac{b}{x_{min}} \left( \frac{x}{x_{min}} \right)^{-\left( 1+\frac{2}{3}b \right)} = \frac{\gamma -1}{x_{min}} \left( \frac{x}{x_{min}} \right)^{-\gamma}, \label{eq:notrunc} \end{equation} where we conveniently define $\gamma=1+\frac{2}{3}b$ and $x_{min}$ corresponds to the value of the seismic moment of the cut-off magnitude $m_{min}$ \cite{Kanamori79} introduced in Eq.~(\ref{eq:moments}). Note that this PDF has domain $x \in \left[ x_{min}, +\infty \right)$ and that, since $b>0$, one has $\gamma>1$. An earthquake catalog is an empirical dataset that characterizes each earthquake by an array of observations: time of occurrence, spatial coordinates, magnitude, etc. The magnitude $m_{min}$ is usually regarded as the completeness threshold, such that all earthquakes with $m \geq m_{min}$ are recorded in the catalog \cite{Woessner2005}. For $m<m_{min}$, events are missing from the catalog due to the difficulty of detecting them (e.g. \cite{Gonzalez2017,Schorlemmer2008}), especially in aftershock sequences, when their waveforms tend to overlap each other \cite{Woessner2005,Davidsen2010}. This incompleteness distorts the power-law behaviour below $m_{min}$. One also has to keep in mind that there exists an upper cut-off due to finite-size effects \cite{CorralRosalba2018}, implying that, at a certain value of the magnitude, there are deviations from the power-law behaviour. Consequently, strictly speaking, the range of validity of the GR law cannot be extended up to infinity \cite{Serra2017,Corral2018}. Since the model that conveniently fits the tail of the distribution is not known, the power-law behaviour has to be restricted to an upper cut-off $x_{max}$, and the PDF of the truncated GR law is written: \begin{equation} f(x) = \frac{1-\gamma }{x_{max}^{1-\gamma}-x_{min}^{1-\gamma}} x ^{-\gamma}, \label{eq:trunc} \end{equation} defined for $x \in \left[ x_{min}, x_{max} \right]$. Recent studies regarding the acoustic emission (AE) in compression experiments of porous glasses and minerals \cite{Baro2013,Nataf2014,Navas-Portella2016,Yoshimitsu2014,Main2013} or wood \cite{Makinen2015} have focused attention on the energy distribution of AE events due to its similarities with the GR law for earthquakes \cite{Serra2017}. Following the terminology used in some of these studies, we will refer to the AE events that occur during the compression of materials as labquakes. Earthquake and labquake catalogs, as well as other empirical datasets in complex systems, only report a limited range of events, making it difficult to estimate the parameters of the power-law PDF accurately. In this work we try to solve this problem by combining datasets with rigorous statistical tools, with the goal of finding a broader range of validity. In addition, if different datasets can be combined and characterized by a unique power-law exponent, this means that the particular exponents of each dataset are statistically compatible, thereby providing a possible way of classifying them into the same universality class. The manuscript is structured as follows: in Sec.~\ref{sec:methods} we present the different statistical procedures that are used in order to conveniently merge datasets and to find a global power-law PDF (Sec.~\ref{sec:statistic}). In Sec.~\ref{sec:applications} we apply this methodology to different datasets that are described in Sec.~\ref{sec:datasets} and analysed in Sec.~\ref{sec:earthquake} 
for earthquake catalogs, and in Secs.~\ref{sec:earthquake+charcoal} and \ref{sec:vycor} for AE data obtained during the compression of two different materials. \section{Methods} \label{sec:methods} \subsection{Statistical methodology: Merging datasets } \label{sec:statistic} Consider $n_{\rm ds}$ datasets of $N_{i}$ ($i=1,...,n_{\rm ds}$) observations each, for which one wants to fit a general power-law distribution with a unique global exponent. We assume that, for the $i$-th dataset, the variable $\mathcal{X}$ (seismic moment if one works with the GR law for earthquakes, or AE energy if one works with the GR law for labquakes) follows a power-law PDF $f_{\mathcal{X}}(x; \gamma_{i},x_{min}^{(i)},x_{max}^{(i)})$ from a certain lower cut-off $x_{min}^{(i)}$ to an upper cut-off $x_{max}^{(i)}$, with exponent $\gamma_{i}$ and number of data $n_{i}$ ($n_i \leq N_i$) in the range $\left[ x_{min}^{(i)},x_{max}^{(i)}\right]$. Note that one can also consider the untruncated power-law model for the $i$-th dataset if $x_{max}^{(i)}\rightarrow \infty$. By means of the methods explained in Refs. \cite{Deluca2013,Corral2018} (or, alternatively, Ref.~\cite{Clauset2009}), one can state that the data from the $i$-th dataset do not reject the power-law hypothesis for a certain range. Note that, in the $i$-th dataset, the variable $\mathcal{X}$ can acquire values in a range typically spanning several orders of magnitude. Generally, the procedure of merging datasets is performed by selecting lower and upper cut-offs $x_{min}^{(i)}$ and $x_{max}^{(i)}$ ($x_{min}^{(i)} < x_{max}^{(i)}$) for each dataset. Data outside these ranges are not considered. All the possible combinations of cut-offs $\lbrace x_{min} \rbrace$ and $\lbrace x_{max} \rbrace$ are checked with a fixed resolution (see below). The Residual Coefficient of Variation (CV) test can be used to fix some upper cut-offs, thus reducing the computational effort. For more details about the CV test, see Appendix \ref{sec:appa}. Given a set of cut-offs, datasets can be merged by considering two different models: \begin{itemize} \item \textbf{Model $\alpha$}: All datasets are merged by considering a unique global exponent $\Gamma$ ($\gamma_i=\Gamma$ for all datasets). \item \textbf{Model $\beta$}: All datasets are merged, but each one with its own exponent $\gamma_i$ ($i=1,...,n_{\rm ds}$). \end{itemize} Note that model $\alpha$ is nested in model $\beta$ and that the difference in the number of parameters characterizing these models is $n_{\beta}-n_{\alpha}=n_{\rm ds}-1$. Since we are interested in merging datasets with a unique global exponent (model $\alpha$), we need enough statistical evidence that this simpler model is suitable to fit the data. For a given set of cut-offs, the fit is performed by means of the following protocol: \begin{enumerate} \item \textbf{Maximum Likelihood Estimation (MLE) of model $\alpha$}: The log-likelihood function of model $\alpha$ can be written as: \begin{equation} \log \mathcal{L}_{\alpha}= \sum_{i=1}^{n_{\rm ds}} \sum_{j=1}^{n_{i}} \log f_{\mathcal{X}} \left( x_{ij}; \Gamma, x_{min}^{(i)},x_{max}^{(i)} \right), \label{eq:general} \end{equation} where $x_{ij}$ corresponds to the $n_i$ values of the variable $\mathcal{X}$ that are in the range $x_{min}^{(i)} \leq x_{ij} \leq x_{max}^{(i)}$ in the $i$-th dataset, $\log$ is the natural logarithm, and $\Gamma$ is the global exponent. 
The definition of the likelihood is consistent with the fact that likelihoods from different datasets can be combined in this way \cite[p.~27]{pawitan2013all}. At this step, one has to find the value $\hat{\Gamma}$ of the global exponent $\Gamma$ that maximizes the log-likelihood expression in Eq.~(\ref{eq:general}). For the particular expressions corresponding to the truncated and untruncated power-law PDF, see Eqs. (\ref{eq:likepl}) and (\ref{eq:likepltrunc}) in Appendix \ref{sec:appb}. If all the power-law distributions are untruncated, this exponent can be easily found analytically \cite{Navas2018} as $$\hat{\Gamma} = 1 + \frac{ \sum_{i=1}^{n_{\rm ds}} n_{i}}{\sum_{i=1}^{n_{\rm ds}} \frac{n_{i}}{\hat{\gamma}_{i}-1}}, $$ where the hats denote the values of the exponents that maximize the log-likelihood of the particular power-law distributions (model $\beta$) and of the general one in Eq.~(\ref{eq:general}); a short numerical illustration of this estimator is given in Sec.~\ref{sec:datasets}. If truncated power-law distributions are considered, one has to use a numerical method in order to determine the exponent $\Gamma$ that maximizes this expression \cite{Baro2012}. \item \textbf{MLE of model $\beta$}: The log-likelihood function of model $\beta$ can be written as: \begin{equation} \log \mathcal{L}_{\beta}= \sum_{i=1}^{n_{\rm ds}} \sum_{j=1}^{n_{i}} \log f_{\mathcal{X}} \left( x_{ij}; \gamma_{i}, x_{min}^{(i)},x_{max}^{(i)} \right), \label{eq:generalbeta} \end{equation} using the same notation as in Eq.~(\ref{eq:general}). For the particular expressions corresponding to the truncated and untruncated power-law PDFs, see Eqs. (\ref{eq:likepl}) and (\ref{eq:likepltrunc}) in Appendix \ref{sec:appb}. The values of the exponents that maximize Eq.~(\ref{eq:generalbeta}) are denoted by $\hat{\gamma}_i$. \item \textbf{Likelihood-ratio test}: We perform the Likelihood Ratio Test (LRT) for the models $\alpha$ and $\beta$ in order to check whether model $\alpha$ is good enough to fit the data in comparison with model $\beta$. For more details about the LRT see Appendix \ref{sec:appb}. If model $\alpha$ ``wins'', we go to step (4). Otherwise, this fit is discarded, a different set of cut-offs $\lbrace x_{min} \rbrace$ and $\lbrace x_{max} \rbrace$ is chosen, and we go back to step (1). Note that model $\alpha$ can be a good model for the fit if the particular exponents $\hat{\gamma}_i$ do not exhibit large differences among each other in relation to their uncertainties. \item \textbf{Goodness-of-fit test}: In order to check whether it is reasonable to consider model $\alpha$ as a good candidate to fit the data, we formulate the following null hypothesis $\rm H_{0}$: the variable $\mathcal{X}$ is power-law distributed with the global exponent $\hat{\Gamma}$ for all the datasets. In this work, we use two different statistics in order to carry out the goodness-of-fit tests: the Kolmogorov-Smirnov Distance of the Merged Datasets (KSDMD) and the Composite Kolmogorov-Smirnov Distance (CKSD). The KSDMD statistic can be used as long as the datasets overlap each other, whereas the CKSD statistic does not require this condition. For more details about how these statistics are defined and how the $p$-value of the test is found, see Appendix \ref{sec:appd}. 
If the resulting $p$-value is greater than a threshold value $p_{c}$ (in the present work we are going to use $p_{c}=0.05$ and $p_{c}=0.20$), we consider this a valid fit and it can be stated that the variable $\mathcal{X}$ is power-law distributed with exponent $\hat{\Gamma}$ across all the datasets for the different ranges $\lbrace x_{min} \rbrace$ and $\lbrace x_{max} \rbrace$. Otherwise, the fit is not considered valid. A different set of cut-offs is chosen and we go back to step (1). \end{enumerate} When all the combinations of cut-offs have been checked, one may have a list of valid fits. In order to determine which one of them is to be considered the definitive fit, the following procedure is carried out: \begin{enumerate}[label=(\roman*)] \item The fit that covers the largest sum of orders of magnitude: $ \max \left[ \sum_{i=1}^{n_{ds}} \log_{10}\left(\frac{x_{max}^{(i)}}{x_{min}^{(i)}} \right) \right] $ is chosen. If the power-law fit is untruncated, $x_{max}^{(i)}$ can be substituted by the maximum observed value $x_{top}^{(i)}$. If there is a unique candidate with the maximum number of orders of magnitude, then this is considered the definitive global fit. Otherwise go to the next step (ii). \item The fit with the broadest global range: $\max \left[ \frac{\max \left( x_{max}^{(i)} \right)}{ \min \left( x_{min}^{(i)} \right) } \right]$ for $i=1,...,n_{ds}$ is chosen. If there is a unique candidate, this is considered the definitive global fit. Otherwise go to the next step (iii). \item The fit with the maximum number of data $\mathcal{N}=\sum_{i=1}^{n_{ds}}n_i$ is considered the definitive global fit. \end{enumerate} By means of these three steps, a unique fit has been found for all the datasets analysed in this work. Nevertheless, one could deal with datasets in which more conditions are needed in order to choose a definitive fit unambiguously. At the end of this procedure one is able to state that the datasets that constitute the global fit correspond to phenomena that are candidates to be classified into the same universality class, at least regarding the observable $\mathcal{X}$. If no combination of cut-offs is found to give a good fit, then it can be said that there exists at least one catalog that corresponds to a phenomenon that must be classified in a different universality class. \section{Applications} \label{sec:applications} In this section we apply the methodology of merging datasets to different earthquake and labquake catalogs. Firstly, we try to merge three earthquake catalogs. Secondly, a fourth catalog of charcoal labquakes is added in order to check whether these two phenomena can be classified into the same universality class. Finally, we apply the methodology to four catalogs of Vycor labquakes that cover different observation windows. \subsection{Datasets} \label{sec:datasets} \subsubsection{Earthquake Catalogs} We have selected catalogs that have different completeness magnitudes in order to cover different magnitude ranges. We hope that a convenient combination of these datasets will give us a larger range of validity of the GR law. Let us briefly describe the catalogs that have been used in this work (see also Fig.~\ref{fig:mapzoom}): \begin{itemize} \item Global Centroid Moment Tensor (CMT) Catalog: It comprises earthquakes worldwide from 1977 to 2017 \cite{CMT1,CMT2}. This catalog reports the values of the moment magnitude as well as the seismic moment. The resolution of the magnitude in the catalog is $\Delta m=0.01$.
\item Yang--Hauksson--Shearer (YHS) A: It records the earthquakes in Southern California with $m \geq 0$ in the period 1981-2010 \cite{2012BuSSA.102.1179Y}. This catalog does not report the seismic moment but a preferred magnitude that is converted into seismic moment according to Eq.~(\ref{eq:moments}). The resolution of the catalog is $\Delta m=0.01$. \item Yang--Hauksson--Shearer (YHS) B: It is a subset of YHS A that contains the earthquakes in the region of Los Angeles in the period 2000-2010 \cite{2012BuSSA.102.1179Y}. The LA region is defined by the following four vertices in longitude and latitude: $\left(119^{\circ}W, 34^{\circ}N \right)$, $\left(118^{\circ}W, 35^{\circ}N \right)$, $\left(116^{\circ}W, 34^{\circ}N \right)$, and $\left(117^{\circ}W, 33^{\circ}N \right)$ (see Fig.~\ref{fig:mapzoom}). This region has been selected because it is among the best monitored ones \cite{Hutton,Schorlemmer2008}. Furthermore, we selected this time period due to the better detection of smaller earthquakes than in previous years \cite{Hutton}, which should reduce the completeness magnitude of the catalog \cite{Schorlemmer2008}. The resolution of this catalog is the same as that of YHS A. \end{itemize} In order not to count the same earthquake more than once, the spatio-temporal window corresponding to the YHS B catalog has been excluded from the YHS A catalog, and the spatio-temporal window corresponding to the YHS A catalog has been excluded from the CMT catalog. Figure \ref{fig:mapzoom} shows the epicentral locations of the earthquakes contained in each catalog. \begin{figure} \centering \includegraphics[scale=0.35]{map.jpg} \includegraphics[scale=0.45]{mapCALIFORNIA.jpg} \includegraphics[scale=0.45]{mapCALIFORNIAWINDOW.jpg} \caption{Earthquake epicenters of the different catalogs. Horizontal and vertical axes correspond to longitude and latitude respectively in degrees. Top, CMT catalog for the period 1977-2017 \cite{CMT1,CMT2}. Middle, YHS A catalog \cite{2012BuSSA.102.1179Y} of Southern California for the period 1981-2010. Bottom, the YHS B region of the YHS catalog for the Los Angeles area for the period 2000-2010.} \label{fig:mapzoom} \end{figure} \subsubsection{Labquake Catalogs} We performed uni-axial compression experiments on porous materials in a conventional test machine ZMART.PRO (Zwick/Roell). Samples with no lateral confinement were placed between two plates that approached each other at a certain constant displacement rate $\dot{z}$. Simultaneously with the compression, an AE signal was recorded by using a piezoelectric transducer embedded in one of the compression plates. The electric signal $U(t)$ was pre-amplified, band filtered (between 20 kHz and 2 MHz), and analysed by means of a PCI-2 acquisition system from Euro Physical Acoustics (Mistras Group) with an AD card working at 40 Megasamples per second with 18-bit precision \cite{PCI2}. Signal pre-amplification was necessary to record small AE events. Some values of the pre-amplified signal were so large that they could not be detected correctly by the acquisition system. This fact led to signal saturation and, consequently, to an underestimation of the energy of the AE event \cite{Navas2018}. Data recording stopped when a large failure event occurred and the sample was destroyed.
An AE event (often called AE hit in specialized AE literature) starts at the time $t_{j}$ when the signal $U(t)$ exceeds a fixed detection threshold and finishes at time $t_{j}+\tau_{j}$ when the signal remains below the threshold from $t_{j}+\tau_{j}$ to at least $t_{j}+\tau_{j}+200\,\mu$s. The energy $E_{j}$ of each event is determined as $E_{j}=\frac{1}{R}\int_{t_{j}}^{t_{j}+\tau_{j}} U^{2}(t)\, dt$ where $R$ is a reference resistance of $10$ k$\Omega$. This AE energy corresponds to the radiated energy received by the transducer. At the end of an experiment, a catalog of events is collected, each of them characterized by a time of occurrence $t$, energy $E$, and duration $\tau$. Experiments were performed with two different materials: \begin{itemize} \item \textbf{Charcoal}: One experiment with a charcoal sample was performed at a constant rate $\dot{z} = 0.005$ mm/min with a preamplification of $40$ dB and a value of $43$ dB for the detection threshold. The sample corresponded to commercially available fine art fusains (HB5 mm, NITRAM, Canada). \item \textbf{Vycor}: We use the labquake catalogs of Ref.~\cite{Navas2018}, in which four experiments with cylindrical samples (diameter $\Phi = 4.45$ mm and height $H=8$ mm) of Vycor (a mesoporous silica glass with $40 \%$ porosity) were performed at a constant rate $\dot{z}=0.005$ mm/min. Before compression, samples were cleaned with a $30\%$ solution of $\rm H_{2}O_{2}$ for 24 h and dried at 130$^{\circ}$C. The four experiments were performed with the following pre-amplification values: $60$ dB, $40$ dB, $20$ dB and $0$ dB, and the respective detection thresholds of $23$ dB, $43$ dB, $63$ dB and $83$ dB, referring to the signal $U(t)$ and not to the preamplified signal (so that after preamplification the threshold always corresponds to $83$ dB). The threshold was set as low as possible while still avoiding parasitic noise. \end{itemize} An important difference between these two materials is the degree of heterogeneity. The mesoporous silica structure of Vycor is much more homogeneous than that of charcoal, which may contain voids and macropores. These structural differences may lead to differences in the energy exponents \cite{Vives2019}. \subsection{Merging Earthquake Catalogs} \label{sec:earthquake} \begin{table*}[htbp] \begin{tabular}{|ll|r|r|r|r|r|r|l|l|l|} \hline & & \multicolumn{1}{l|}{\textbf{$N$}} & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$m_{min} $}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (Nm)}} & \multicolumn{1}{l|}{\textbf{$m_{top}$}} & \multicolumn{1}{l|}{\textbf{$x_{top}$ (Nm)}} & \multicolumn{1}{l|}{\textbf{$\hat{b}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{\textbf{$p_{fit}$}} \\ \hline & \textbf{(0) Charcoal} & 101524 & 18625 & - & $4.93\times 10^{-18}$ & - & $1.86 \times 10^{-11}$ & $0.984(8)$ & $1.656(5)$ & $0.088(9)$ \\ \hline & \textbf{(1) YHS B} & 26330 & 3412 & 1.93 & $10^{12}$& 5.39 & $1.53 \times 10^{17}$ & $0.99(3)$ & $1.66(2) $ & $0.072(8)$ \\ \hline & \textbf{(2) YHS A} & 152924 & 4353 & 3.17 & $7.08 \times 10^{13}$ & 7.20 & $7.94 \times 10^{19}$ & $0.98(1)$ & $1.65(1)$ & $0.080(9)$ \\ \hline & \textbf{(3) CMT} & 48637 & 22336 & 5.33 & $1.24 \times 10^{17}$ & 9.08 & $5.25 \times 10^{22}$ & $0.982(7)$ & $1.655(5)$ & $0.36(2)$ \\ \hline \end{tabular} \caption{Results of fitting the GR law for each individual catalog.
The total number of earthquakes in each catalog is given by $N$ whereas the number of data entering into the fit is $n$. The value $m_{top}$ corresponds to the maximum observed value for each catalog. The GR law is valid for each catalog in the range $\left[m_{min}, m_{max} \right]$ with a particular $b$-value. There is no upper limit $m_{max}$ for any of the fits except for the CMT catalog, in which $m_{max}^{(3)}=7.67$ ($x_{max}^{(3)}=4.03 \times 10^{20}$ Nm). For the charcoal catalog, $x$ represents the energy collected by the transducer (instead of the seismic moment). No magnitude scale is provided in this case. Numbers in parentheses correspond to the one-$\sigma$ error bar in the scale given by the last digit. The $p$-value of the fits has been computed with $10^3$ simulations and $p_c=0.05$. For completeness, the values of the cut-offs in seismic moment $x$ and the power-law exponents $\hat{\gamma}$ have been included.} \label{tab:universaltable} \end{table*} In its usual form, the GR law fits a power-law model that involves a unique power-law exponent. Nevertheless, several studies have revealed the existence of a double-power-law behaviour in the GR law for global seismicity \cite{Corral2018,Pacheco1992,YODER2012167}. The authors of Ref.~\cite{Corral2018} claim that a truncated power-law with exponent $\gamma \simeq 1.66$ cannot be rejected up to $m_{max} \simeq 7.4$ and a second power-law tail emerges from $m_{min}'=7.67$ with an exponent $\gamma'= 2.1\pm 0.1$. We have checked these results by using the residual CV-test. Furthermore, we have been able to establish that, by fixing the upper truncation at $m_{max}=m_{min}'=7.67$, the truncated power-law hypothesis cannot be rejected (see Table \ref{tab:universaltable}). Consequently, if one wants to fit a power-law PDF with a unique exponent, one has to exclude all those earthquakes with $m \geq m_{max}= 7.67$. For the CMT catalog, we fix the upper cut-off $x_{max} =10^{\frac{3}{2}m_{max}+9.1}$ and proceed with the same procedure as before. The rest of the catalogs can be safely fitted by untruncated power-law PDFs because the CV-test does not reject the hypothesis of a unique power-law tail and the magnitudes studied are considerably smaller than those in the CMT catalog. Thus, we consider two untruncated power-law distributions and a truncated one for the CMT catalog. For each decade, we sample 5 values of $x_{min}^{(i)}$ equally spaced in logarithmic scale, and all the possible combinations of cut-offs $x_{min}^{(1)}$, $x_{min}^{(2)}$, $x_{min}^{(3)}$ are checked for a fixed upper-truncation $x_{max}^{(3)}$. The labels (1), (2) and (3) correspond to the catalogs YHS B, YHS A and CMT respectively.
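To make this scan concrete, the following minimal sketch implements steps (1)--(3) of the protocol of Sec.~\ref{sec:statistic} for a single combination of cut-offs, under the simplifying assumption that all datasets are fitted with untruncated power laws, so that both the particular and the global exponents have closed-form MLEs; the truncated CMT part would instead require a numerical maximization of Eq.~(\ref{eq:likepltrunc}). The function names and the use of \texttt{numpy}/\texttt{scipy} are our own choices and are not part of the original analysis code.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def loglik_untruncated(x, xmin, gamma):
    # Log-likelihood of an untruncated power law on x >= xmin.
    return x.size * np.log((gamma - 1.0) * xmin**(gamma - 1.0)) \
           - gamma * np.sum(np.log(x))

def merge_untruncated(samples, xmins, significance=0.05):
    # samples: list of 1-D arrays; xmins: one lower cut-off per dataset.
    data = []
    for x, xmin in zip(samples, xmins):
        x = np.asarray(x, dtype=float)
        data.append(x[x >= xmin])
    n = np.array([x.size for x in data])
    # Model beta: one MLE exponent per dataset.
    gammas = np.array([1.0 + x.size / np.sum(np.log(x / xmin))
                       for x, xmin in zip(data, xmins)])
    # Model alpha: global exponent from the closed-form expression above.
    Gamma = 1.0 + n.sum() / np.sum(n / (gammas - 1.0))
    # Likelihood-ratio test: compare 2R with a chi-squared quantile.
    two_R = 2.0 * sum(loglik_untruncated(x, xmin, g)
                      - loglik_untruncated(x, xmin, Gamma)
                      for x, xmin, g in zip(data, xmins, gammas))
    alpha_wins = two_R <= chi2.ppf(1.0 - significance, df=len(data) - 1)
    return Gamma, gammas, two_R, alpha_wins
\end{verbatim}

The full scan simply calls such a routine for every combination of $\lbrace x_{min} \rbrace$ values (five per decade) and keeps the combinations for which model $\alpha$ ``wins'' the LRT and subsequently passes the goodness-of-fit test of step (4).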
\begin{table*}[htbp] \begin{tabular}{|ll|r|r|r|l|l|l|l|} \hline & $p_{c}=0.05$ & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$m_{min} $}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (Nm)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{b}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{\textbf{$p_{fit}$}} \\ \hline & \textbf{(1) YHS B} & 3412 & 1.93 & $10^{12}$ & 5.18 & $0.99(1)$ & $1.633(7)$ & $0.072(8)$ \\ \hline & \textbf{(2) YHS A} & 3500 & 3.27 & $10^{14}$ & 5.90 & $ 0.99(2)$ & $1.66(1) $ & $0.089(9)$ \\ \hline & \textbf{(3) CMT} & 19003 & 5.40 & $1.58 \times 10^{17}$ & 3.40 & $ 0.98(8)$ & $1.655(5) $ & $0.26(1)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{$\sum$\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{b}_{g}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{} & 25915 & 1.93 & $10^{12}$ & 14.48 & $0.991(6)$ & $1.661(4)$ & $0.079(9)$ \\ \hline \hline \hline & $p_{c}=0.20$ & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$m_{min} $}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (Nm)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{b}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{\textbf{$p_{fit}$}} \\ \hline & \textbf{(1) YHS B} & 3412 & 1.93 & $10^{12}$ & 5.18 & $0.99(1)$ & $1.633(7)$ & $0.072(8)$ \\ \hline & \textbf{(2) YHS A} & 3500 & 3.27 & $10^{14}$ & 5.90 & $ 0.99(2)$ & $1.66(1) $ & $0.089(9)$ \\ \hline & \textbf{(3) CMT} & 10422 & 5.67 & $3.98 \times 10^{17}$ & 3 & $1.00(1)$ & $1.663(7)$ & $0.62(2)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{$\sum$\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{b}_{g}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{} & 17334 & 1.93 & $10^{12}$ & 14.08 & $1.000(8)$ & $1.667(5)$ & $0.326(5)$ \\ \hline \end{tabular} \caption{Results of fitting the models $\alpha$ and $\beta$ to the earthquake catalogs when doing the goodness-of-fit test with the CKSD statistis and different values of $p_c$. If the goodness-of-fit test is performed with the KSDMD statistic, the same set of cut-offs as the fit done by using CKSD$_{0.20}$ is found (see Table \ref{tab:statistics}). Same symbols as in Table \ref{tab:universaltable}. The number of orders of magnitude $OM= \log_{10}\left( \frac{x_{max}^{(i)}}{x_{min}^{(i)}} \right)$ covered by each fit as well as for model $\alpha$ is also shown. The value $x_{max}^{(i)}$ is replaced for $x_{top}^{(i)}$ for untruncated fits. Note that $7.67-1.93=5.74$ units in magnitude correspond to $8.6$ orders of magnitude in seismic moment. 
} \label{tab:globalfitting2} \end{table*} \begin{table*}[htbp] \begin{tabular}{|ll|r|r|} \hline & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\textbf{$D_{e}$}} & \multicolumn{1}{l|}{\textbf{$p_{fit}$}} \\ \hline & \textbf{KSDMD} & 0.018487 & $0.20(1)$ \\ \hline & \textbf{CKSD$_{0.05}$} & 3.25082 & $0.079(9)$ \\ \hline & \textbf{CKSD$_{0.20}$} & 2.778202 & $0.326(5)$ \\ \hline \end{tabular} \caption{Results of fitting model $\alpha$ to all the catalogs when doing the goodness-of-fit test with the KSDMD and the CKSD statistics for $p_{c}=0.05$ (same cut-offs for all the catalogs) and for the set of cut-offs for which the $p$-value is larger than or equal to $p_{c}=0.20$ for the CKSD statistic. The KSDMD statistic has a unique fit because its $p$-value already exceeds $p_c=0.20$ and no valid fits are found with a smaller $p$-value.} \label{tab:statistics} \end{table*} \begin{figure*}[htpb] \resizebox{\textwidth}{!}{\includegraphics[scale=0.70]{completenew.jpg}} \caption{Left panel: Estimated Complementary Cumulative Distribution Functions (CCDF) of the Gutenberg-Richter law for each catalog (top) and for the merged catalogs (bottom). Right panel: Estimated PDFs $f(x)$ of the Gutenberg-Richter law for each catalog (top) and for the merged catalogs (bottom). The merged histogram is plotted by following the procedure explained in Ref.~\cite{Navas2018}. Fits are represented by solid black lines. The top axis represents the same scale in moment magnitude.} \label{fig:GRGlobal} \end{figure*} In Table \ref{tab:globalfitting2} we present the results of the global fit for models $\alpha$ and $\beta$. The same global fit is found independently of the choice of the statistic of the goodness-of-fit test. The values of the statistics as well as the resulting $p$-values are shown in Table \ref{tab:statistics}. It can be observed that, for this particular set of cut-offs, the CKSD test has a smaller $p$-value and can be considered a stricter statistic than the KSDMD. No fit with a smaller $p$-value has been found for the KSDMD statistic, and the fit shown in Table \ref{tab:globalfitting2} has a $p$-value that exceeds both $p_c=0.05$ and $p_c=0.20$. A $b$-value very close to one holds for more than 8 orders of magnitude in seismic moment from $m_{min}=1.93$ to $m_{max}=7.67$. The value of the global exponent is approximately in agreement with the harmonic mean of the particular exponents of the GR-law for each catalog \cite{Navas2018,kamer2015}. Due to the upper truncation, the global exponent is not exactly equal to the harmonic mean of the particular exponents. The results do not show remarkable differences if the critical $p$-value is set to $p_c=0.20$ (see Table \ref{tab:globalfitting2}). We consider that the definitive fit is the one whose goodness-of-fit test has been performed with a threshold value of $p_c=0.20$.
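For reference, the three tie-breaking rules of Sec.~\ref{sec:statistic} used to single out such a definitive fit among all valid ones amount to a lexicographic ordering, sketched schematically below (the data structure and function name are our own and purely illustrative; for untruncated fits $x_{max}^{(i)}$ is replaced by the observed maximum $x_{top}^{(i)}$):

\begin{verbatim}
import math

def definitive_fit(valid_fits):
    # valid_fits: list of dicts {'xmin': [...], 'xmax': [...], 'n': [...]},
    # one entry per dataset entering the merged fit.
    def key(fit):
        # (i) largest sum of decades, (ii) broadest global range, (iii) most data.
        decades = sum(math.log10(hi / lo)
                      for lo, hi in zip(fit['xmin'], fit['xmax']))
        global_range = max(fit['xmax']) / min(fit['xmin'])
        return (decades, global_range, sum(fit['n']))
    return max(valid_fits, key=key)
\end{verbatim}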
\subsection{Universality of Earthquakes and Charcoal labquakes} \label{sec:earthquake+charcoal} \begin{table*}[htbp] \begin{tabular}{|ll|r|r|r|l|l|l|l|} \hline & \textbf{$p_c =0.05$} & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$m_{min} $}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (Nm)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{$\hat{b}$-value} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{\textbf{$p_{fit}$}} \\ \hline & \textbf{(0) Charcoal} & 15906 & - & $6.31\times 10^{-18}$ & 6.47 & $0.988(8)$ & $1.658(5)$ & $0.15(1)$ \\ \hline & \textbf{(1) YHS B} & 1353 & 2.33 & $9.99 \times 10^{7}$ & 4.58 & $0.98(3)$ & $ 1.66(2) $ & $0.10(1)$ \\ \hline & \textbf{(2) YHS A} & 234 & 4.47 & $1.59 \times 10^{11}$ & 4.10 & $0.98(6)$ & $1.65(4)$ & $0.62(2)$ \\ \hline & \textbf{(3) CMT} & 7689 & 5.80 & $1.59 \times 10^{13}$ & 2.80 & $1.00(1)$ & $1.667(9)$ & $0.393(5)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{$\sum$\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{b}_{g}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{CKSD $D_{e}=3.589973$} & 25182 & & $6.3 \times 10^{-18}$& 17.95 & $1.003(6)$ & $1.669(4)$ & $0.057(7)$ \\ \hline \hline \\ \hline & \textbf{$p_c = 0.20$} & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$m_{min} $}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (Nm)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{$\hat{b}$-value} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{\textbf{$p_{fit}$}} \\ \hline & \textbf{(0) Charcoal} & 3555 & - & $6.31 \times 10^{-17}$ & 5.47 & $1.04(2)$ & $1.69(1)$ & $0.88(1)$ \\ \hline & \textbf{(1) YHS B} & 1007 & 2.47 & $1.59 \times 10^{8}$ & 4.38 & $1.66(2)$ & $0.99(3)$ & $0.058(7)$ \\ \hline & \textbf{(2) YHS A} & 234 & 4.47 & $1.59 \times 10^{11}$ & 4.10 & $0.98(6)$ & $1.65(4)$ & $0.62(2)$ \\ \hline & \textbf{(3) CMT} & 3014 & 6.20 & $6.33 \times 10^{13}$ & 2.20 & $1.00(2)$ & $1.67(2)$ & $0.59(2)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{}& \multicolumn{1}{l|}{$\sum$\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{b}_{g}$-value}} & \multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{CKSD $D_{e}=2.998867$} & 7810 & & $6.3 \times 10^{-17}$ & 16.15 & $1.03(1)$ & $1.688(8)$ & $0.21(1)$ \\ \hline \end{tabular} \caption{ Results of fitting models $\alpha$ and $\beta$ to the charcoal and earthquake datasets for two different values of $p_c$. Same notation as in previous tables. } \label{tab:universaltable2} \end{table*} \begin{figure*}[htpb] \includegraphics[width=\textwidth]{hist.jpg} \caption{Estimated PDF $f(x)$ of the Gutenberg-Richter law for the merged earthquake and charcoal labquake catalogs. The fit is represented by a solid black line. 
The methodology to merge the three earthquake catalogs is the same as in the rest of the plots, whereas the addition of the charcoal catalog to this fit has been done \textit{ad hoc} by conveniently rescaling both parts (those corresponding to charcoal and earthquakes respectively).} \label{fig:GRUniversal} \end{figure*} Motivated by the fact that the power-law exponents of earthquake catalogs and charcoal labquakes are very similar (see Table \ref{tab:universaltable}), we apply this methodology also including the charcoal catalog. At this step it is important to stress the fact that the seismic moment does not correspond to the energy $E_{r}$ radiated by the earthquake, which would be the reasonable energy to compare with the AE energy. Whether the ratio of seismically radiated energy over the seismic moment is independent of the moment magnitude is still an unsolved question \cite{KB04}. A constant ratio would imply that the static stress drop is constant for all the earthquakes. This ratio may depend on different earthquake parameters such as moment magnitude and the depth of the source \cite{BrodskyKanamori2001,VassiliouKanamori1982,IzutaniKanamori2001,KanamoriHeaton2000}. On the other hand, the seismically radiated energy is in some cases underestimated, so that the ratio may be considered constant \cite{IdeBeroza2001,Wang2014,BilekLayRuff2004,ZolloOreficeConvertito2014}. For our study, we are going to consider this ratio constant, so that the values of the seismic moment just have to be multiplied by a unique factor. The value of this factor is $\frac{E_r}{M}=10^{-4.6}$ \cite{Bormann2015}, where $M$ is the seismic moment (previously called $x$). After this rescaling, $x$ corresponds to the energy $E_{r}$ radiated in seismic waves by earthquakes, or to the AE energy in the case of labquakes. It can be shown that, for both models $\alpha$ and $\beta$, multiplying the variable by a constant factor only introduces a constant term in the log-likelihood (rescaling $x\to cx$ together with its cut-offs shifts each $\log\mathcal{L}$ by $-n_i\log c$), which changes neither the location of the maximum nor the difference of the log-likelihoods. As the CKSD statistic is a weighted sum of the particular KS distances of each dataset, it does not change either. Therefore, the results shown in Table \ref{tab:universaltable} would not change except for the values of the cut-offs. The CV-test does not reject the hypothesis of a unique power-law tail for the charcoal catalog and an untruncated power-law model is considered for this catalog. For each decade, 5 values of $x_{min}^{(i)}$ equally spaced in logarithmic scale are sampled, and all the possible combinations of cut-offs $x_{min}^{(0)}$, $x_{min}^{(1)}$, $x_{min}^{(2)}$, $x_{min}^{(3)}$ are checked for a fixed upper-truncation $x_{max}^{(3)}$. The labels (0), (1), (2) and (3) correspond to the catalogs of the charcoal experiment, YHS B, YHS A and CMT respectively. In Table \ref{tab:universaltable2} we present the results of the global fit for the CKSD statistic. In this case, not all the catalogs overlap each other and the CKSD statistic is the only one that can be used for the goodness-of-fit test. The value of the global exponent is approximately in agreement with the harmonic mean of the particular exponents of the GR-law for each catalog \cite{Navas2018}. The results do not show remarkable differences if the critical $p$-value is set to $p_c=0.20$ (see Table \ref{tab:universaltable2}).
We consider that the definitive fit is the one whose goodness-of-fit test has been performed with a threshold value of $p_c=0.20$. \subsection{Vycor labquakes} \label{sec:vycor} The values of the particular exponents $\gamma_i$ of the Vycor labquake catalogs differ remarkably from those found for earthquakes and charcoal labquakes \cite{Navas2018}. No combination of the cut-offs has led to a good fit when Vycor labquakes are merged with charcoal labquakes or earthquakes. Therefore, Vycor labquakes can be considered to belong to a different universality class. Due to the limitations caused by saturation effects for high-energy AE events, it is convenient to check whether an upper truncation of the power-law regime is necessary. The residual CV-test reveals the upper truncations of the experiments performed at $60$, $40$ and $0$ dB but it does not provide conclusive results for the experiment at $20$ dB. In order to explore fewer combinations of cut-offs, thus reducing the computation time, we fix the upper truncations $x_{max}^{(0)}$, $x_{max}^{(1)}$ and $x_{max}^{(3)}$ whereas $x_{max}^{(2)}$ is considered a free parameter. The labels (0), (1), (2) and (3) correspond to the experiments performed at 60, 40, 20 and 0 dB respectively. Five values per decade equally spaced in logarithmic scale are sampled for each catalog and all the possible combinations of cut-offs $x_{min}^{(0)}$, $x_{min}^{(1)}$, $x_{min}^{(2)}$, $x_{max}^{(2)}$, $x_{min}^{(3)}$ are checked for fixed upper-truncations $x_{max}^{(0)}$, $x_{max}^{(1)}$ and $x_{max}^{(3)}$. In Table \ref{tab:globalvycor} we present the results of the global fits for the Vycor catalogs for the KSDMD and the CKSD statistics. In both cases, the global exponents are very similar, but the fit performed with the KSDMD statistic maximizes the sum of orders of magnitude $\sum_{i=1}^{n_{ds}} \log_{10} \left( \frac{x_{max}^{(i)}}{x_{min}^{(i)}} \right)$ and will constitute the definitive fit. The same combination of cut-offs would not be a good fit if the CKSD statistic were used instead. Although the CKSD statistic is more restrictive in this case, it is preferable to consider the maximum number of orders of magnitude because it cannot be ensured that this statistic will always be more restrictive for different datasets. The definitive fit covers about nine orders of magnitude in energy with a number of data $\mathcal{N}=30005$ entering into the fit. The results do not differ remarkably if the critical $p$-value is set to $p_c=0.20$ (see Table \ref{tab:globalvycor}). \begin{figure*}[htpb] \resizebox{\textwidth}{!}{\includegraphics[scale=0.70]{completevycornew.jpg}} \caption{Left panel: Estimated Complementary Cumulative Distribution Functions (CCDF) of the Gutenberg-Richter law for each Vycor catalog (top) and for the merged Vycor catalogs (bottom). Right panel: Estimated PDFs $f(x)$ of the Gutenberg-Richter law for each Vycor catalog (top) and for the merged catalogs (bottom). The merged histogram is plotted by following the procedure explained in Ref.~\cite{Navas2018}. Fits are represented by solid black lines.
} \label{fig:vycor} \end{figure*} \begin{table*}[htbp] \begin{tabular}{|ll|r|r|r|r|r|r|r|r|r|} \hline & \textbf{$p_{c}=0.20$} & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{$x_{top}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{$x_{max}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{(0) PRE 60} & 24338 & 1.58 & $2.66 \times 10^7$ & $5.37 \times 10^5$ & 5.53 & $1.351(2)$ & $0.080(6)$ \\ \hline & \textbf{(1) PRE 40} & 5083 & 158.489 & $1.82 \times 10^8$ & 15860 & 2.00 & $1.34(1)$ & $0.087(9)$ \\ \hline & \textbf{(2) PRE 20} & 263 & 6309.57 & $3.76 \times 10^9$ & $10^9$ & 5.20 & $1.30(2)$ & $0.67(1)$ \\ \hline & \textbf{(3) PRE 0} & 321 & $6.31 \times 10^5$ & $2.62 \times 10^{10}$ & $3.19 \times 10^{8}$ & 2.70 & $1.31(1)$ & $0.25(1)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{$\sum$ \textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{} \\ \hline & KSDMD $D_{e}=0.005911$& 30005 & 1.58 & $2.62 \times 10^{10}$ & $10^9$ & 15.43 & $1.350(2)$ & $0.20(1)$ \\ \hline \\ \hline\hline & \textbf{$p_{c}=0.05$} & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{$x_{top}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{$x_{max}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{(0) PRE 60} & 24338 & 1.58 & $2.66 \times 10^7$ & $5.37 \times 10^5$ & 5.53 & $1.351(2)$ & $0.080(6)$ \\ \hline & \textbf{(1) PRE 40} & 4179 & 251.189 & $1.82 \times 10^8$ & 15860 & 1.80 & $1.35(1)$ & $0.19(1)$ \\ \hline & \textbf{(2) PRE 20} & 181 & $2.51 \times 10^4$ & $3.76 \times 10^9$ & $2.51 \times 10^9$ & 5 & $1.29(3)$ & $0.18(1)$ \\ \hline & \textbf{(3) PRE 0} & 198 & $2.51 \times 10^6$ & $2.62 \times 10^{10}$ & $3.19 \times 10^{8}$ & 2.10 & $1.36(6)$ & $0.36(2)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{$\sum$ \textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{} \\ \hline & CKSD $D_{e}=3.362608$ & 28896 & 1.58 & $2.62 \times 10^{10}$ & $2.51 \times 10^9$ & 14.43 & $1.351(2)$ & $ 0.052(7)$ \\ \hline \\ \hline\hline & $p_c =0.20$ & \multicolumn{1}{l|}{\textbf{$n$}} & \multicolumn{1}{l|}{\textbf{$x_{min}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{$x_{top}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{$x_{max}$ (aJ)}} & \multicolumn{1}{l|}{\textbf{OM}} & \multicolumn{1}{l|}{\textbf{$\hat{\gamma}$}} & \multicolumn{1}{l|}{$p_{fit}$} \\ \hline & \textbf{(0) PRE 60} & 20778 & 2.51 & $2.66 \times 10^7$ & $5.37 \times 10^5$ & 5.33 & $1.354(3)$ & $0.51(2)$ \\ \hline & \textbf{(1) PRE 40} & 3404 & 398.107 & $1.82 \times 10^8$ & 15860 & 1.60 & $1.36(2)$ & $0.36(2)$ \\ \hline & \textbf{(2) PRE 20} & 155 & $3.98 \times 10^4$ & $3.76 \times 10^9$ & $10^9$ & 4.40 & $1.32(4)$ & $0.20(1)$ \\ \hline & \textbf{(3) PRE 0} & 159 & $3.98 \times 10^6$ & $2.62 \times 10^{10}$ & $3.19 \times 10^{8}$ & 1.90 & $1.34(7)$ & $0.29(1)$ \\ \hline\hline & \textbf{Model} $\alpha$ & \multicolumn{1}{l|}{\textbf{$\mathcal{N}$}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{$\sum$ \textbf{OM}} & 
\multicolumn{1}{l|}{\textbf{$\hat{\Gamma}$}} & \multicolumn{1}{l|}{} \\ \hline & \textbf{CKSD $D_{e}=2.586434$} & 24496 & 2.51 & $2.62 \times 10^{10}$ & $10^9$ & 13.23 & $1.354(2)$ & $ 0.20(1)$ \\ \hline \end{tabular} \caption{Results of fitting models $\alpha$ and $\beta$ to the Vycor labquake catalogs when doing the goodness-of-fit test with the KSDMD and the CKSD statistics and different values of $p_c$. The KSDMD statistic has a unique fit because its $p$-value already exceeds $p_c=0.20$ and no valid fits are found with a smaller $p$-value. Same notation as in previous tables.} \label{tab:globalvycor} \end{table*} \section{Conclusions} \label{sec:conclusion} We have presented a statistical procedure to merge different datasets in order to validate the existence of universal power-law exponents across different scales or phenomena. This methodology can be useful in the study of different complex systems. In this work, the methodology has been applied to the Gutenberg-Richter law for earthquakes and labquakes. By merging earthquake catalogs, we find that a global power law with a global exponent $\Gamma=1.667$ holds for more than 8 orders of magnitude in seismic moment (from $m_{min}=1.93$ to $m_{max}=7.67$ in moment magnitude). To our knowledge, this is the broadest fitting range that has been found for the Gutenberg-Richter law for earthquakes with a unique value of the exponent \cite{Kaganbeta}. There are catalogs of tiny mining-induced earthquakes which exhibit a much smaller completeness magnitude \cite{Jaguars2011} than the ones of natural seismicity used in this work. They were not considered here because they are not currently public and show $b$-values significantly different \cite{Jaguars2010} from the one found here, which would result in non-acceptable fits when merging them with the rest of the catalogs, possibly pointing to a different universality class. Earthquake catalogs have also been merged with a charcoal labquake catalog, with a global power-law exponent $\Gamma=1.688$, suggesting that these different systems might be classified into the same universality class. A previous methodology for merging datasets was not able to find a good fit for the Gutenberg-Richter law for the four Vycor labquake catalogs \cite{Navas2018}. The previous procedure did not take into account the fact that the cut-offs of the merged catalogs could be different from those that limited the power-law regime for the individual catalogs. With the methodology presented in this paper, which makes it possible to consider different cut-offs, these Vycor labquake catalogs have been merged and a GR law with exponent $\Gamma=1.35$ for the energy of AE events has been found to hold for nine orders of magnitude. The labquakes in this material have been found to belong to a universality class different from that of earthquakes and charcoal labquakes. \section{Appendix} \subsection{Appendix A: Likelihood Ratio Test} \label{sec:appb} The likelihood ratio test (LRT) is a statistical procedure used to check whether it is worth using a statistical model $\alpha$ (characterized by a set of $n_{\alpha}$ parameters) which is nested in a more general statistical model $\beta$ (characterized by a set of $n_{\beta}$ parameters). The null hypothesis of this test states that the simpler model $\alpha$ is good enough to describe the data and the more general model $\beta$ does not provide more information.
The statistic used to perform this test is: \begin{equation} \mathcal{R}= \log \left( \frac{\hat{\mathcal{L}}_{\beta}}{\hat{\mathcal{L}}_{\alpha}} \right) =\log \hat{\mathcal{L}}_{\beta} - \log \hat{\mathcal{L}}_{\alpha} \end{equation} where $\log$ is the natural logarithm and the hats denote that the likelihoods $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$ are evaluated at those values of the set of parameters for which they reach their maximum values. Although not all the regularity conditions stated in Ref.~\cite{pawitan2013all} are satisfied when dealing with power-law PDFs, it has been numerically tested that the statistic $2\mathcal{R}$ follows a chi-squared distribution with $n_{\beta}-n_{\alpha} >0$ degrees of freedom, at least for the largest percentiles \cite{Serra2017}. Once the significance level of the test has been fixed, one is able to determine the critical value $2\mathcal{R}_c$. If the empirical value of the statistic is below this threshold, then one can consider that there is not enough statistical evidence to claim that the more complex model $\beta$ is needed in order to describe the data. In this situation, due to its simplicity, model $\alpha$ is preferable for fitting the data. This procedure can be naively understood as a statistical version of Occam's razor. Note that the rejection of the null hypothesis does not imply the rejection of model $\alpha$ as a fit to the data; conversely, the ``acceptance'' of model $\alpha$ does not imply that it is a good fit to the data. The test is just a relative comparison between two models. We now present the expressions that are necessary to compute the statistic of the likelihood ratio test in this work. The log-likelihood for the untruncated PDF in Eq. (\ref{eq:notrunc}) is written: \begin{equation} \log \mathcal{L}_{untrunc} \left( x; x_{min}, \gamma \right) = n \log \left( \frac{\gamma-1}{x_{min}^{1-\gamma} } \right)-\gamma \sum_{j=1}^{n} \log x_{j} \label{eq:likepl} \end{equation} whereas the log-likelihood for the truncated PDF in Eq. (\ref{eq:trunc}) is written: \begin{equation} \begin{split} \log \mathcal{L}_{trunc} \left( x; x_{min}, x_{max}, \gamma \right) = & n \log \left( \frac{1-\gamma}{x_{max}^{1-\gamma} - x_{min}^{1-\gamma} } \right)\\ & -\gamma \sum_{j=1}^{n} \log x_{j} \end{split} \label{eq:likepltrunc} \end{equation} \subsection{Appendix B: Residual Coefficient of Variation Test} \label{sec:appa} A useful statistical tool to check whether an untruncated power-law distribution is a good model to fit data in a certain range is the so-called Residual Coefficient of Variation (CV) test \cite{Malevergne2011}. Let us suppose that we have a variable $\mathcal{X}$ that takes $N$ values $\lbrace x \rbrace$. We define $x_{(j)}$ as the $j$-th value of the variable when it is sorted in ascending order: $x_{(1)} \leq x_{(2)} \leq ... \leq x_{(j)} \leq ... \leq x_{(N)}$. The null hypothesis of the test states that there exists a power-law tail given by $x > x_{(k)}$.
The test is based on the idea that for an untruncated power-law distribution the logarithmic CV is close to 1: \begin{equation} CV_{l} = \frac{s_{l}}{m_{l}} \end{equation} where $m_l$ corresponds to the mean of the logarithm of the rescaled variable $l=\log(x/x_{(k)})$: \begin{equation} m_{l} = \frac{1}{N-k} \sum_{j=k+1}^{N} \log \left( \frac{x_{(j)}}{x_{(k)}} \right) \end{equation} and $s^{2}_l$ is the unbiased variance: \begin{equation} s_{l}^2 = \frac{1}{N-k-1} \sum_{j=k+1}^{N} \left( \log \left( \frac{x_{(j)}}{x_{(k)}} \right)-m_l \right)^2 \end{equation} In order to check whether the value of the residual CV is close to one or not for a particular number of remaining data, one simulates many samples power-law distributed with the same number of data $N-k$ in order to extract a distribution for $CV_{l}$. It is important to remark that the distribution of the statistic does not depend on the power-law exponent and, therefore, it is not necessary to estimate it previously. Once one determines the level of significance for the test (in this case $0.05$), one can obtain the upper and lower critical values of the statistic by checking the percentiles of the distribution of simulated values (percentiles $2.5$ and $97.5$ in our case). If the empirical value of $CV_{l}$ lies between these two critical values, one cannot reject the null hypothesis of power-law tail. Otherwise, if the empirical value is below percentile 2.5, the power-law is rejected in favour of a truncated log-normal; if the empirical value is above percentile 97.5 the power-law is rejected but there is no alternative. One proceeds by computing the residual $CV_{l}$ for increasing values of the index $k$, thus analysing the remaining values in the tail of distribution. \subsection{Appendix C: Goodness-of-fit test} \label{sec:appd} Determining whether the null hypothesis of considering a global exponent $\Gamma$ is compatible with the values of the particular fits presents some differences with the standard Kolmogorov-Smirnov (KS) goodness-of-fit test presented in Ref.~\cite{Deluca2013}. In this appendix we expose the two different statistics for the goodness-of-fit test that have been used for the global fit: the first statistic is essentially an adaptation of the standard KS test for the merged case whereas the second method uses a statistic which is a composition of KS distances. When the value of the global exponent $\Gamma$ has been found through MLE, one can understand that each dataset contributes to the global PDF with a global exponent $\Gamma$ in their particular ranges $\left[ x_{min}^{(i)} ,x_{max}^{(i)} \right]$ ($i=1,...,n_{\rm cat}$). The global distribution can be understood as a global power-law PDF with exponent $\Gamma$ ranging from $X_{min}=\min \lbrace x_{min}^{(i)} \rbrace$ to $X_{max}=\max \lbrace x_{max}^{(i)} \rbrace$. In this situation, we need to redefine the KS distance in this case where we have merged several datasets. \begin{itemize} \item \textbf{Kolmogorov-Smirnov Distance of the Merged Datasets (KSDMD):} This statistic can be used as long as datasets overlap each other. Datasets can be merged by pairs by giving a certain weight to each point from data. Depending on which overlapping or non-overlapping region a point from our data comes from, each point will have a certain weight $\omega_{j}$ ($j=1,...,\mathcal{N}$). For more details about the expression of these weights see Ref.~\cite{Navas2018}. 
By sorting the data in ascending order, it is easy to construct the empirical Cumulative Distribution Function (CDF) for the merged datasets by using the following expression: \begin{equation} CDF_e (x_{(k)}) = \frac{\sum_{j=1}^{k} \omega_j }{\sum_{j=1}^{\mathcal{N}}\omega_{j}}, \end{equation} with $k=1,...,\mathcal{N}$, where the subindex $e$ refers to the empirical CDF. Note that, for a standard situation in which datasets are not merged, all the weights would be one \cite{Deluca2013}. Note also that the Complementary Cumulative Distribution Function (CCDF) is $CCDF(x_{(k)})=1-CDF(x_{(k)})$. Once we have constructed the merged $CDF_{e}(x)$, we can easily compute the KS distance by: \begin{equation} D_{e}^{(KSDMD)} = \max_{X_{min} \leq x \leq X_{max}} \biggr\vert \left( \frac{x^{1-\Gamma}-X_{min}^{1-\Gamma}}{X_{max}^{1-\Gamma}-X_{min}^{1-\Gamma}} \right) - CDF_{e}(x) \biggr\vert \end{equation} \item \textbf{Composite KS Distance (CKSD):} This second statistic can be computed independently of whether the datasets overlap each other or not. This statistic is constructed from the $n_{\rm cat}$ particular KS distances: \begin{equation} D_{e,i}= \max_{x_{min}^{(i)} \leq x \leq x_{max}^{(i)}} \biggr\vert \left( \frac{x^{1-\Gamma}-x_{min}^{(i)1-\Gamma}}{x_{max}^{(i)1-\Gamma}-x_{min}^{(i)1-\Gamma}} \right) - CDF_{e,i}\left( x; x_{min}^{(i)},x_{max}^{(i)} \right) \biggr\vert, \end{equation} where the subindex $i$ refers to the $i$-th catalog and $CDF_{e,i}$ is the corresponding empirical CDF. In order to compute the statistic, we perform the following summation: \begin{equation} D_{e}^{(CKSD)}=\sum_{i=1}^{\rm n_{\rm cat}} \sqrt{ n_{i}} D_{e,i}, \end{equation} where the factors $\sqrt{ n_{i}}$ are due to the scaling of the KS distance with the number of data \cite{Press}. \end{itemize} Once the statistic is computed, one needs to determine whether it is large or small in relation to the values found for data sampled from a PDF with the same parameters $\lbrace x_{min}^{(i)} \rbrace$, $\lbrace x_{max}^{(i)} \rbrace$, $\Gamma$ and $\lbrace n_{i} \rbrace$. Data are sampled from a power-law PDF in the range given by the $i$-th catalog with probability $q_{i}=n_{i}/\mathcal{N}$, where $\mathcal{N}=\sum_{i=1}^{\rm n_{ds}} n_{i}$. Note that the number of data in each simulated dataset is not necessarily the empirical one, $n_{i}$, but the total number of data $\mathcal{N}$ is maintained. Hence, one needs a first random number to choose the dataset $i$ and therefore the range $\left[ x_{min}^{(i)},x_{max}^{(i)} \right]$, and a second one to generate a random truncated power-law number in that range with exponent $\hat{\Gamma}$ \cite{Deluca2013}. When $\mathcal{N}$ events have been generated according to this procedure, one finds the global exponent $\hat{\Gamma}_{\rm sim}$ by maximizing the global log-likelihood and computes either $D_{sim}^{(KSDMD)}$ or $D_{sim}^{(CKSD)}$ for the merged simulated datasets. By performing several realizations of the previous procedure, one can estimate the $p$-value of the fit by computing the fraction of simulated datasets where the simulated statistic is larger than the empirical one. \section*{Acknowledgements} The research leading to these results has received funding from ``La Caixa'' Foundation. V. N. acknowledges financial support from the Spanish Ministry of Economy and Competitiveness (MINECO, Spain), through the ``Mar\'{\i}a de Maeztu'' Programme for Units of Excellence in R \& D (Grant No. MDM-2014-0445). We also acknowledge financial support from MINECO under Grants No.
FIS2015-71851-P, FIS-PGC 2018-099629-B-100 and MAT2016-75823-R, and from Ag\`encia de Gesti\'o d'Ajuts Universitaris i de Recerca (AGAUR) under Grant No. 2014SGR-1307, and the Juan de la Cierva research contract FJCI-2016-29307 held by A.G.
\section{Introduction} Let $C \subset \PP^2$ be a projective curve of degree $d > 2$ such that all its singularities are nodes, and let $\nu\colon \hatC \to C$ be the normalization mapping. For a point $p \notin C$ consider the projection $\pr_p: C \to \PP^1 = p^\perp$ from $p$; here $p^\perp \subset (\mathbb P^2)^*$ is the set of lines passing through $p$, and $\pr _p$ assigns to $x$ the line joining $p$ and $x$. The composition $\pr _p \circ \nu: \hat C \to \mathbb P^1$ will be called a generalized projection; its branch locus coincides with $p^\perp \cap C^*$, where $C^* \subset (\PP^2)^*$ is the curve dual to $C$. The generalized projection has simple ramification if and only if $p^\perp$ is transversal to $C^*$. Suppose that $\pi\colon C'\to p^\perp$, where $C'$ is a smooth projective curve, is a holomorphic mapping with simple ramification and such that the branch locus of $\pi$ coincides with $p^\perp \cap C^*$. We are looking for a criterion for $\pi$ to be isomorphic to the generalized projection $\pr_p\circ\nu$. \subsubsection*{Problem background} Consider first the case when $C$ is smooth. An easy dimension count shows that if $d > 3$ then the branch locus of a projection is not arbitrary. Namely, it follows from the Riemann--Hurwitz formula that a degree $d$ map from a curve of degree $d$ has $d(d-1)$ critical values, provided the ramification is simple. So, its branch locus is a point of $\Sym^{d(d-1)}\PP^1 = \PP^{d(d-1)}$. The space of projective curves $C \subset \PP^2$ of degree $d$ has dimension $\binom{d+2}{2}-1 = d(d+3)/2$. For a point $p \in \PP^2$ there is a $3$-dimensional group of projective automorphisms $\PP^2 \to \PP^2$ preserving $p$ and all the lines containing $p$. So, the space of all projections has dimension $m_d \mathrel{:=} d(d+3)/2-3 = (d^2+3d-6)/2 \ll d(d-1)$ for large $d$. Consider now small $d$. For $d=1$ and $d=2$ the situation is trivial. For $d=3$ one has $m_3 = 6 = 3(3-1)$, and indeed, it is easy to see that any $6$ pairwise distinct points $a_1, \dots, a_6 \in \PP^1$ are branch points of a suitable generic projection of a cubic curve. The case $d=4$ was studied in detail by R.~Vakil in \cite{Vakil}. Here $m_4 = 11$, $4(4-1) = 12$, so branch loci of generic projections lie in a hypersurface $V \subset \PP^{12}$. For a point $a \in V$ there are $255$ pairs $(C',\pi)$ where $C'$ is a smooth curve and $a$ is the branch locus of the mapping $\pi: C' \to \PP^1$ of degree $4$. For $120$ pairs $C'$ is a smooth plane projective curve and $\pi$ is isomorphic to a projection. For the remaining $135$ pairs the curve $C'$ is hyperelliptic of genus $3$, hence not plane. For $d > 4$ it is easy to derive from~\cite[Proposition 1]{EGH} that if $C'$ is a smooth plane projective curve then each degree $d$ mapping $\pi: C' \to \PP^1$ is isomorphic to a projection. We do not, though, assume $C'$ to be a plane curve, so a situation similar to the $d=4$ case may arise; cf.\ Proposition~\ref{prop:example}. For $C$ nodal a degree $d$ map $\hatC \to \PP^1$ need not be equivalent to a projection. \subsubsection*{Main results} To make our problem more tractable we impose some generality hypotheses on the curves and mappings in question. To wit, we will assume that the mappings $\pi\colon\hatC\to\PP^1$ have simple ramification (see Definition~\ref{def:moderate_covering} below) and that the nodal curve $C\subset\PP^2$ is general enough in the sense of Definition~\ref{def:general_curve}.
If $\nu\colon\hatC\to C$ is the normalization, then for the general point $p\in\PP^2\setminus C$ the generalized projection $\pr _p\circ\nu \colon \hatC\to\PP^1$ has simple ramification. We will say that the holomorphic mappings $\ph_1\colon C_1\to\PP^1$ and $\ph_2\colon C_2\to\PP^1$ are \emph{equivalent} if there exists a holomorphic isomorphism $\psi\colon C_1\to C_2$ such that $\ph_2\circ\psi=\ph_1$. \begin{theorem}\label{th.main} Suppose that $C\subset \PP^2$ is a nodal curve of degree~$>2$ that is general enough in the sense of Definition~\ref{def:general_curve}. Suppose that a point $p\in\PP^2\setminus C$ is such that the composition $\pr_p\circ\nu\colon\hatC\to\PP^1$, where $\nu\colon\hatC\to C$ is the normalization and $\pr_p\colon C \to p^\perp = \PP^1$ is the projection from $p$, has simple ramification. Suppose that $C'$ is a smooth projective curve and $\pi\colon C'\to p^\perp$ is a holomorphic mapping with simple ramification such that the branch locus of $\pi$ coincides with $p^\perp \cap C^*$. If $\deg C=3$, assume in addition that $C$ has a node or $\deg\pi\ne4$. Then the following two conditions are equivalent: \textup{(a)} $\pi$ is equivalent to $\pr _p \circ \nu$: there exists an isomorphism $\ph\colon C'\to\hat C$ such that $(\pr_p\circ\nu)\circ \ph=\pi$. \textup{(b)} There exist a smooth projective surface $X$, a finite holomorphic mapping $f\colon X\to(\PP^2)^*$ that is ramified exactly over $C^*$, and an isomorphism $\ph\colon C'\to f^{-1}(p^\perp)$ such that $\left.f\right|_{f^{-1}(p^\perp)}\circ \ph=\pi$. \end{theorem} \begin{note} The implication $(\mathrm a)\Rightarrow(\mathrm b)$ holds without the genericity hypotheses, for arbitrary nodal curves and arbitrary generalized projections. See the proof of Proposition~\ref{gen(a)=>(b)} below. \end{note} \begin{note} If $C$ is a smooth cubic and $\deg f = 4$ then the theorem is wrong; see Remark~\ref{chisini:example}. \end{note} \begin{note} The condition of smoothness of $X$ in the theorem cannot be omitted, at least for $\deg f$ equal to $3$ or $4$: in Section \ref{sec:example} we show that for $d=3$ and $4$ there exists a general enough smooth curve $C\subset\PP^2$ of degree $d$, a line $p^\perp \subset (\PP^2)^*$ (for some $p \in \PP^2$) transversal to~$C^*$, a smooth projective curve $C'$, and a morphism $\pi\colon C'\to p^\perp$ simply ramified over $p^\perp \cap C^*$ such that condition~(b) of Theorem \ref{th.main} with the smoothness omitted holds but $\pi$ is not equivalent to the projection $\pr _p$ (Proposition~\ref{prop:example}). We do not know similar counterexamples with $d > 4$. \end{note} In other words, if we are given a simply ramified covering of $\PP^1$ with the same branch locus as that of a (generalized) projection from a given point~$p$, then this covering is the generalized projection if and only if it can be extended to a finite ramified covering of $(\PP^2)^*$ branched over $C^*$, with a smooth surface upstairs. Theorem~\ref{th.main} may be restated in topological terms: we will show that if $L \subset (\PP^2)^*$ is a projective line then the mapping $\pi\colon C'\to L$ is equivalent to the generalized projection $\pr_p\circ\nu$ if and only if the covering $\pi^{-1}(L\setminus C^*)\to L\setminus C^*$ can be extended to a covering $X_0\to(\PP^2)^*\setminus C^*$ such that its fiber monodromy satisfies some extra conditions ($C^*$ must have no bad nodes or bad cusps in the sense of Definitions~\ref{def:good.nodes} and~\ref{def:bad.cusp}); see Theorem~\ref{th.main2}. 
Note that we make (almost) no assumption about genus of the curve $C'$ or degree of the morphism~$f$. The $(\mathrm a) \Rightarrow (\mathrm b)$ implication in Theorem~\ref{th.main} is easy (see Section~\ref{(a)=>(b)}). If $\deg C\ge 7$, the $(\mathrm b) \Rightarrow (\mathrm a)$ implication follows very easily from Theorem~10 of Victor Kulikov's paper~\cite{Kulikov1999} (see Proposition~\ref{Chisini_argument} below), but for the remaining cases $3\le\deg C\leq 6$, where the Chisini conjecture for coverings ramified over $C^*$ is not completely proved, it requires more work. We give an argument that works uniformly in all cases, including the case $\deg C\ge 7$; our proof is quite different from Kulikov's one. As a by-product, we establish the Chisini conjecture for coverings of $\PP^2$ ramified over curves dual to general enough nodal curves of degrees $4$, $5$, and $6$ (and of degree $3$ provided that either the curve does have a node or degree of the covering is not $4$). This extends Theorem~10 from~\cite{Kulikov1999}. The paper is organized as follows. In Section~\ref{(a)=>(b)} we prove the easy part of Theorem~\ref{th.main}. In Section~\ref{sec:plastering} we study coverings of the projective plane that are ramified over a curve having only nodes and standard cusps as singularities, and in Section~\ref{sec:PL} we prove the difficult part of Theorem~\ref{th.main}. Finally, in Section~\ref{sec:example} we show that, at least if $\deg C = 3$ and $\deg C = 4$, the smoothness condition in Theorem~\ref{th.main} cannot be omitted. The authors would like to thank Maxim Kazarian, Victor Kulikov, Alexander Kuznetsov, Sergey Lando, Sergey Natanzon, Ossip Schwarzman, Andrey Soldatenkov, and Victor Vassiliev for useful discussions. We are grateful to Igor Khavkine, Francesco Polizzi, Damian R\"ossler, Will Sawin, and particularly Rita Pardini for consultations at \url{http://mathoverflow.net}. The first named author wishes to thank the Technion --- Israel Institute of Technology, where the final version of the paper was written, for the warm hospitality. Our special thanks go to Boris Shapiro, at whose instigation this research was started. Without his constant encouragement this paper would never have been written. \section*{Notation, conventions, and definitions} The base field for all the algebraic varieties will be the field \C of complex numbers. Except for the case when we mention explicitly Zariski topology, all topological spaces will be assumed Hausdorff and locally simply connected, and all the topological terms will refer to the classical topology of complex algebraic varieties. We will say that a singular point of a curve is a \emph{node} if it is locally analytically isomorphic to the singularity of the plane curve defined by the equation $xy=0$, and that a singular point is a \emph{standard cusp} (or simply a cusp) if it is locally analytically isomorphic to the singularity of the plane curve defined by the equation $y^2+x^3=0$. If $f\colon X\to Y$ is a finite holomorphic map of smooth varieties of equal dimension, then by \emph{ramification locus} (a.k.a.\ the set of critical points) of $f$ we mean the closed subset \[ R=\{x\in X\mid\text{derivative of $f$ is degenerate at $x$}\}; \] by \emph{branch locus} (a.k.a.\ the set of critical values) of $f$ we mean the subset $f(R)\subset Y$. Let $f\colon W\to V$ be a covering of topological spaces, and take $v_0\in V$. 
Then the action of the group $\pi_1(V,v_0)$ on the set $f^{-1}(v_0)$ via loop lifting will be called the \emph{fiber monodromy} of $f$. If $f\colon Y \to X$ is a finite morphism of algebraic varieties of equal dimension (i.e., a proper holomorphic map with finite fibers) and $B \subset X$ is its branch locus, then the restriction of $f$ to $f^{-1}(X \setminus B) \subset Y$ is a finite-sheeted covering of topological spaces. Take $x_0 \in X \setminus B$. Then by fiber monodromy of $f$ we will mean fiber monodromy of the covering $f^{-1}(X \setminus B) \to X\setminus B$ with respect to the point~$x_0$. Suppose now that $X = \PP^1$, so the branch locus $B\subset \PP^1$ is a finite set. If $p \in B$ and $\gamma_p$ is the conjugacy class in $\pi_1(\PP^1\setminus B)$ corresponding to a small loop around $p$, then the image of $\gamma_p$ under the fiber monodromy homomorphism is a conjugacy class in the symmetric group~$S_d$, where $d$ is the number of points in $f^{-1}(x_0)$. This conjugacy class (or the corresponding partition of $d$) is called the \emph{cyclic type} of the point~$p$. \begin{definition}\label{def:moderate_covering} Suppose that $C$ is a smooth projective curve. We will say that a finite morphism $f\colon C\to\PP^1$ has \emph{simple ramification} if each branch point $\xi\in\PP^1$ has the cyclic type of a transposition. \end{definition} If $C \subset \PP^2$ is a nodal curve and $\nu\colon\hatC\to C$ is the normalization, then the generalized projection of $\hatC$ from a general point of $\PP^2$ has simple ramification. \begin{definition}\label{def:general_curve} Suppose that $C\subset\PP^2$ is an irreducible projective curve such that each singular point of $C$ is a node. Let us say that $C$ is \emph{general enough} if the following conditions are satisfied. -- all the inflexion points of $C$ are simple; -- no line is tangent to $C$ at more than two points; -- if a line is tangent to $C$ at an inflexion point, it is not tangent to $C$ elsewhere. \end{definition} Here, we assume that a line is tangent to $C$ if it is either tangent to $C$ at a smooth point or tangent to a branch of $C$ at a node. By inflexion point we mean either a smooth point $x\in C$ for which the local intersection index of $C$ with its tangent at $x$ is greater than $2$, or a node $x\in C$ for which the local intersection index of at least one of the two (limiting) tangent lines to branches at $x$ with the corresponding branch is greater than~$2$; we say that an inflexion point is \emph{simple} if the intersection index in question equals exactly~$3$. For a projective subspace $\alpha \subset \PP^n$ we denote by $\alpha^\perp \subset (\PP^n)^*$ the set of hyperplanes in $\PP^n$ containing $\alpha$, where $(\PP^n)^*$ is the dual projective space. If $\dim \alpha = k$ then $\alpha^\perp \subset (\PP^n)^*$ is a projective subspace of dimension $n-1-k$; in particular, if $\alpha$ is a point then $\alpha^\perp$ is a hyperplane (denoted by $H_\alpha$ in SGA7, see \cite{SGA7.2}). If $X \subset \PP^n$ is a projective variety, then by $X^*\subset{(\PP^n)}^*$ we will denote its projective dual, i.e., the closure of the set of hyperplanes tangent to $X$ at smooth points. We will also be using the notation $\alpha^\perp$ and $X^*$ when $\alpha, X \subset (\PP^n)^*$, where the canonical isomorphism $((\PP^n)^*)^* = \PP^n$ is assumed.
If $X\subset\PP^2$ is a projective curve, we say that a line $L\subset\PP^2$ is transversal to $X$ if $L$ does not pass through singular points of $X$ and is not tangent to $X$ at any non-singular point. If $D_1$ and $D_2$ are Cartier divisors on a projective surface, then $(D_1,D_2)$ stands for their intersection index. \section{Proof of the $(\mathrm a) \Rightarrow (\mathrm b)$ implication in Theorem \ref{th.main}}\label{(a)=>(b)} This part of Theorem~\ref{th.main} follows immediately from \begin{proposition}\label{gen(a)=>(b)} Suppose that $C\subset\PP^2$ is a nodal curve, $\nu\colon \hat C\to C$ is the normalization, $p\in\PP^2\setminus C$, and $\pr _p\colon C\to p^\perp$ is the projection from $p$. Then there exists a smooth projective surface $X_C$, a finite regular mapping $f_C\colon X_C\to(\PP^2)^*$ having the dual curve $C^* \subset(\PP^2)^*$ as its branch locus, and an isomorphism $\ph\colon \hatC\to f_C^{-1}(p^\perp)$ such that $f_C\circ \ph=\pr_p\circ\nu$. \end{proposition} \begin{proof} Put \begin{equation}\label{def_of_X_C} X_C=\{(x,t)\in \hatC\times(\PP^2)^*\mid \nu(x)\in t^\perp\}. \end{equation} The surface $X_C$, being the projectivization of the vector bundle $\nu^*\T_{\PP^2}(-1)$, is smooth. Define the mapping $f_C\colon X_C\to (\PP^2)^*$ by the formula \begin{equation}\label{def:f_C} f(x,t) = t. \end{equation} For $t\in(\PP^2)^*$, \[ f_C^{-1}(t)=\{(x,t)\mid \nu(x)\in t^\perp\}; \] $f$ is a finite morphism of degree~$d$ since for the general $t\in(\PP^2)^*$ the line $t^\perp$ intersects $C$ at $d$ smooth points. Its branch locus is the set of $t$ such that $t^\perp$ is either tangent to $C$ at a smooth point or tangent to a branch of $C$ at a node, so the branch locus of $f$ coincides with~$C^*$. If $p\in \PP^2\setminus C$ then \[ f_C^{-1}(p^\perp)=\{(x,t)\in \hatC\times(\PP^2)^*\mid \nu(x), p \in t^\perp\}. \] It is clear that the mapping $\ph\colon \hatC\to f^{-1}(p^\perp)$ sending $x$ to the pair $(x,\overline{px})$ is the required isomorphism. \end{proof} \section{Plastering of generic coverings over complements to curves}\label{sec:plastering} In this section we prepare for the proof of the implication $\mathrm{(b)}\Rightarrow\mathrm{(a)}$ in Theorem~\ref{th.main}. We start with a simple general fact. Suppose that $D\subset\PP^2$ is an arbitrary projective curve, $X_0$ is a smooth affine surface and $f_0\colon X_0\to \PP^2\setminus D$ is a finite (topological) covering. \begin{proposition}\label{prop:same_mono} Suppose that, in the above setting, there exists a line $L_0\subset\PP^2$ transversal to $D$ such that the induced covering $f_0^{-1}(L_0)\to L_0\setminus D$ has simple ramification. Then for any line $L\subset\PP^2$ transversal to $D$ the induced covering $f_0^{-1}(L)\to L\setminus D$ also has simple ramification. \end{proposition} Following \cite{Kulikov1999}, we will say that coverings satisfying hypotheses of the proposition are \emph{generic}. \begin{proof}[Proof of the proposition] The set \[ U=\{t\in(\PP^2)^*\mid \text{$t^\perp$ is transversal to~$D$}\} \] is arcwise connected; let $\gamma: [0,1] \to U$ be a path such that $\gamma(0) = L_0$, $\gamma(1) = L$. Let also \[ \X=\{(x,t)\in(\PP^2\setminus D)\times U\mid x\in t^\perp\}; \] the natural projection $p\colon \X\to U$ is a locally trivial fiber bundle with the fiber homeomorphic to $\PP^1$ punctured at $\deg D$ points. The pullback covering $p^* f_0\colon \tilde \X\to \X$ restricted to $p^{-1}(t) \subset \X$ is isomorphic to the covering~$f_0^{-1}(t^\perp)\to t^\perp$. 
The pullback bundle $\gamma^* p: \overline X \to [0,1]$ is trivial, so the proposition follows. \end{proof} If $f_0\colon X_0\to \PP^2\setminus D$ is a covering, then $X_0$ carries a unique structure of complex variety for which the mapping $f$ is a finite unramified morphism. Suppose now that the covering is generic and all the singularities of the curve $D$ are nodes or standard cusps. It follows from a theorem of Grauert and Remmert~\cite[Expos\'e~XII, Th\'eor\`eme 5.4]{SGA1} that in this situation $f_0$ extends uniquely to a finite mapping $f\colon X\to\PP^2$ with normal~$X$. A well-known GAGA-style result~\cite[Expos\'e~12, Corollaire 4.6]{SGA1} implies that the resulting complex space $X$ will be a projective surface. We are going now to describe explicitly the singularities of surface $X$ as well as the structure of the mapping $f$ near ramification locus. In the sequel, $\Delta$ will denote the unit disk $\{z\colon \left|z\right|<1\} \subset \C$, and $\Delta^*$ will stand for the punctured disk $\Delta \setminus \{0\}$. Suppose that $q$ is a node of $D$; choose a neighborhood $U\ni q$ and an isomorphism $U\to \Delta^2$ such that $D\cap U$ is mapped to the set $\{(x,y) \in D^2\mid xy=0\}$. Fix a point $q_0\in U\setminus D$. The covering $f_0^{-1}(U\setminus D)\to U\setminus D$ induces a fiber monodromy action of the group $\pi_1(U\setminus D,q_0) = \Z \oplus \Z$ on the set $f^{-1}(q_0)$. The two generators of $\pi_1(U\setminus D,q_0)$ are the small loops around two branches of $D$ at $q$; since $f_0$ is a generic covering, Proposition~\ref{prop:same_mono} shows that the generators act on $f^{-1}(q_0)$ by transpositions. These transpositions commute, so they can be either disjoint or equal. \begin{definition}\label{def:good.nodes} In the above setting, we will say that a node $q\in D$ is \emph{good} (with respect to the covering $f_0$) if the two generators of $\pi_1(U\setminus D,q_0)$ act by disjoint transpositions. We will say that a node $q\in D$ is \emph{bad} (with respect to $f_0$) if they act by equal transpositions. \end{definition} Suppose now that $q$ is a (standard) cusp of the curve $D$. Choose a neighborhood $U\ni q$ and an isomorphism $U\to \Delta^2$ such that $D\cap U$ is mapped to the set $\{(x,y)\in D^2\mid x^3+y^2=0\}$. Fix a point $q_0\in U\setminus D$. The covering $X\setminus f^{-1}(D)\to\PP^2\setminus D$ induces a fiber monodromy action of the group $\pi_1(U\setminus D,q_0)$ on the set $f^{-1}(q_0)$. It is well known that $\pi_1(U\setminus D,q_0)$ is isomorphic to the Artin braid group on $3$ strings $B_3=\langle u,v\mid uvu=vuv\rangle$, where the generators $u$ and $v$ correspond to the two loops around two intersection points $\ell\cap D$, where $\ell$ is a line close to $q$. Since $f_0$ is a generic covering, the elements $u$ and $v$ of $\pi_1(U\setminus D,q_0)$ act on $f^{-1}(q_0)$ by transpositions. These transpositions (call them $U$ and $V$) satisfy the relations $UVU = VUV$, so they are either non-commuting or equal. \begin{definition}\label{def:bad.cusp} In the above setting, we will say that the cusp $q\in D$ is \emph{good} (with respect to the covering $f_0$) if the transpositions $U$ and $V$ are non-commuting. We will say that the cusp $q\in D$ is \emph{bad} (with respect to $f_0$) if $U = V$. 
\end{definition} \begin{proposition}[cf.\ {\cite[Section 1.3]{Kulikov1999}}]\label{not.smooth.over.bad} The local behavior of the mapping $f$ near points of the preimage $f^{-1}(q)$, $q \in D$, is as follows: \begin{enumerate} \item\label{It:Smooth} If $q$ is a smooth point of $D$ then $f^{-1}(q)$ consists of $d-1$ points, and $X$ smooth at all of them. At exactly one of them $f$ is ramified, and near this point $f$ is locally isomorphic to $f(x,y) = (x,y^2)$. \item\label{It:GoodNode} If $q$ is a good node then $f^{-1}(q)$ consists of $d-2$ points, and $X$ is smooth at all of them. At exactly two of them $f$ is ramified, with the same local behavior as in Case \ref{It:Smooth}. \item\label{It:BadNode} If $q$ is a bad node then $f^{-1}(q)$ consists of $d-1$ points. All of them except one are smooth. The remaining point is singular, locally isomorphic to $\{(x,y,z) \in \Delta^3 \mid z^2 = xy\}$ \textup(Du Val's $A_1$\textup); the mapping $f$ in the same coordinates is $f(x,y,z) = (x,y)$. \item\label{It:GoodCusp} If $q$ is a good cusp then $f^{-1}(q)$ consists of $d-2$ points, and $X$ is smooth at all of them. At exactly one of them $f$ is ramified, and the pair $(X,f)$ near this point is locally isomorphic to $X = \{(x,y,z) \in \Delta^3 \mid z^3 + 3xz + 2y = 0\}$, $f(x,y,z) = (x,y)$. \item\label{It:BadCusp} If $q$ is a bad cusp then $f^{-1}(q)$ consists of $d-1$ points. All of them except one are smooth, the remaining point is singular, locally isomorphic to $\{(x,y,z) \in \Delta^3 \mid z^2=x^3+y^2\}$ \textup(Du Val's $A_2$\textup); the mapping $f$ in the same coordinates is $f(x,y,z) = (x,y)$. \end{enumerate} \end{proposition} \begin{proof} A direct computation shows that the branch loci $B$ of the local models described above are as follows: $B = \{(x,0) \mid x \in \Delta\}$ for case \ref{It:Smooth}, $B = \{(x,y) \in \Delta^2 \mid xy=0\}$ for the cases \ref{It:GoodNode} and \ref{It:BadNode}, and $B = \{(x,y) \in \Delta^2 \mid x^3+y^2=0\}$ for the cases \ref{It:GoodCusp} and \ref{It:BadCusp}. Thus, in all the cases the branch loci are locally biholomorphic to the curve $D$ in a neighbourhood of $q$. By the uniqueness statement of the Grauert--Remmert theorem it is enough to check that the monodromy action of $\pi_1(\PP^2 \setminus D)$ on the fiber $f_0^{-1}(q) \subset X$ is isomorphic to the action of $\pi_1(\Delta^2 \setminus B)$ on the fiber over $0$ in the local model. If $a \in f^{-1}(q)$ is a smooth point such that $f$ has the normal form $f(z) = z^k$ in it, then the monodromy action is a cycle of length $k$; this proves the statement in cases \ref{It:Smooth} and \ref{It:GoodNode}. In case \ref{It:BadNode} the branch locus is a union of two lines; the group $\pi_1(\Delta^2 \setminus B)$ is generated by two loops circling around these lines; apparently, they both act by the same transposition of the two preimages of the origin. To cover cases \ref{It:GoodCusp} and \ref{It:BadCusp} notice that $\pi_1(\Delta^2 \setminus B) = B_3$ in both of them. Any homomorphism $B_3 \to S_3$ maps the standard generators of the group into permutations $U$ and $V$ satisfying $UVU = VUV$; it is easy to see that such $U$ and $V$ must be transpositions, either equal or non-commuting. So, there exist only two actions, up to conjugation, of $B_3$ on a set of three elements; this proves the statement in cases \ref{It:GoodCusp} and \ref{It:BadCusp}. 
\end{proof} Suppose that $C\subset \PP^2$ is a nodal projective curve that is general enough in the sense of Definition~\ref{def:general_curve} and that $L\subset(\PP^2)^*$ is a line transversal to $C^*$. Suppose in addition that $\pi\colon C'\to L$, where $C'$ is a smooth projective curve, is a holomorphic mapping with simple ramification and with branch locus $L\cap C^*$. Choose a point $t_0\in L\setminus C^*$. \begin{corollary}\label{cor:factors=>exists_X} If the fiber monodromy homomorphism $\pi_1(L\setminus C^*,t_0)\to\Perm(\pi^{-1}(t_0))$ factors through the homomorphism $\pi_1(L\setminus C^*,t_0)\to \pi_1(\PP^2\setminus C^*,t_0)$ induced by the embedding $L\hookrightarrow (\PP^2)^*$, then there exists a unique pair $(X,f)$, where $X$ is a normal projective surface and $f\colon X\to (\PP^2)^*$ is a finite holomorphic mapping such that $\left.f\right|_{f^{-1}(L)}=\pi$. Over each node of $C$, there is at most one singular point of $X$, and it must be a Du~Val $A_1$-singularity. Over each cusp of $C^*$, there is at most one singular point of $X$, and it must be a Du~Val $A_2$-singularity. The other points of $X$ are smooth. \end{corollary} \begin{proof} Since the monodromy $\pi_1(L\setminus C^*,t_0)\to\Perm(\pi^{-1}(t_0))$ factors through the homomorphism $\pi_1(L\setminus C^*,t_0)\to \pi_1(\PP^2\setminus C^*,t_0)$, there exists a topological covering $X_0\to (\PP^2)^*\setminus C^*$ of degree $\deg\pi$ extending the covering $\pi^{-1}(L\setminus C^*)\to L\setminus C^*$. The restriction of $f_0$ to the line transversal to $C^*$ has simple ramification, so $f_0$ is a generic covering by Proposition \ref{prop:same_mono}. The rest follows from Proposition~\ref{not.smooth.over.bad} with $D = C^*$. \end{proof} \section{Smooth surfaces simply ramified over duals to general nodal curves}\label{sec:PL} In this section we prove the $(\mathrm b) \Rightarrow (\mathrm a)$ implication in Theorem~\ref{th.main}. Throughout this section we will be working in the following setting. $C\subset\PP^2$ is a nodal curve that is general enough in the sense of Definition~\ref{def:general_curve}, $C^*\subset(\PP^2)^*$ is the dual curve, $X$ is a smooth projective surface, $f\colon X\to(\PP^2)^*$ is a finite holomorphic mapping with simple ramification and branch locus~$C^*$. Recall that the smooth surface $X_C$ and the mapping $f_C\colon X_C\to(\PP^2)^*$ were defined by equations~\eqref{def_of_X_C} and~\eqref{def:f_C}, respectively. For the point $p \in \PP^2$ denote by $\pr_p\colon C \to p^\perp = \PP^1$ the projection of $C$ from $p$. Also denote by $\nu\colon\hatC\to C$ the normalization mapping. Suppose that $\Phi\colon X\to X_C$ is an isomorphism such that $f_C\circ\Phi=f$. \begin{proposition}\label{final_step} Let $p$ be such that the composition $\pr_p\colon C \to p^\perp$ has simple ramification. Then for $C' \mathrel{{:}{=}} f^{-1}(p^\perp)$ there exists an isomorphism $\ph\colon \hatC\to C'$ such that $f \circ \ph=\pr_p\circ\nu$. \end{proposition} \begin{proof} The required isomorphism $\ph$ is just the restriction of $\Phi^{-1}$ to $f_C^{-1}(p^\perp)$. \end{proof} \begin{proposition}\label{Chisini_argument} The $(\mathrm b) \Rightarrow (\mathrm a)$ implication in Theorem~\ref{th.main} holds in each of the following cases. \textup{(1)} $\deg C\ge7$; \textup{(2)} $\deg C=6$ and $C$ has at least one node; \textup{(3)} $\deg C=5$ and $C$ has at least three nodes. 
\end{proposition} \begin{proof} Theorem 10 from~\cite{Kulikov1999} says that if $C$ satisfies the hypothesis of the proposition, then a finite mapping $f\colon X\to(\PP^2)^*$ with simple ramification over $C^*$ is unique. Thus, the mapping $f\colon X\to (\PP^2)^*$ is equivalent to $f_C\colon X_C\to(\PP^2)^*$, and Proposition~\ref{final_step} applies. \end{proof} The rest of this section is devoted to the proof of the $(\mathrm b) \Rightarrow (\mathrm a)$ implication in Theorem~\ref{th.main} in the remaining cases. This proof is independent of the results of \cite{Kulikov1999} and works for all the cases in Theorem~\ref{th.main}, including those covered by Proposition~\ref{Chisini_argument}. It follows from the hypotheses on $C$ that the dual curve $C^*\subset(\PP^2)^*$ has no inflexion points, all the singularities of $C^*$ are nodes and standard cusps, and no line is tangent to $C^*$ at three points (bitangents of $C^*$ correspond to nodes of~$C$). \begin{proposition}\label{smooth_f^-1} Suppose that $L\subset(\PP^2)^*$ is a line such that the point $L^\perp$ does not lie on $C$. Then the curve $f^{-1}(L)\subset X$ is smooth. \end{proposition} \begin{proof} By virtue of projective duality one has $C=(C^*)^*$. Hence, for a line $L \subset (\PP^2)^*$, one has $L^\perp \in C$ if and only if $L$ either is tangent to $C^*$ at a smooth point, or is tangent to a branch of $C^*$ at a node (we call such $L$ a tangent to $C$ at the node), or is the (limiting) tangent to $C$ at a cusp. In the third case $L^\perp$ is an inflexion point of $C$, in the second case it lies on a double tangent to $C$, and in the first case neither takes place. Now the result follows immediately from cases \ref{It:Smooth}, \ref{It:GoodCusp} of Proposition~\ref{not.smooth.over.bad}, where one puts $D=C^*$. \end{proof} \begin{proposition}\label{singular_f^-1} Suppose that a line $L\subset(\PP^2)^*$ is tangent to $C^*$ (at a smooth point, a node, or a cusp). Then the curve $f^{-1}(L)$ has singular points only over the points of tangency; moreover, there is exactly one singular point of $f^{-1}(L)$ over each point of tangency, and this singular point is a node. \end{proposition} \begin{proof} Since the curve $(C^*)^*=C$ has no cusps, the curve $C^*$ has no inflexion points; now everything follows from Proposition~\ref{not.smooth.over.bad}. \end{proof} Let the line $L\subset(\PP^2)^*$, where $L^\perp\notin C$, vary. Proposition~\ref{smooth_f^-1} shows that the curve $f^{-1}(L)$ is smooth for such $L$, so variation of $L$ induces a monodromy action of $\pi_1(\PP^2 \setminus C)$ on $H^1(f^{-1}(L),\Z)$. More formally, consider the incidence variety \begin{equation}\label{inc.var} I=\{(p,x)\in \PP^2\times X\colon f(x)\in p^\perp\} \end{equation} and denote by $\pr_1: I \to \PP^2$ and $\pr_2: I \to X$ the projections. For each $p\in\PP^2\setminus C$ the fiber $\pr_1^{-1}(p)$ is isomorphic to $f^{-1}(p^\perp)$. It follows from Proposition~\ref{smooth_f^-1} that the derivative of $\pr_1$ has maximal rank everywhere on $\pr_1^{-1}(\PP^2\setminus C)$, whence $\pr_1$ restricts to a locally trivial fibration over the preimage of $\PP^2\setminus C$. So, for a point $q \in \PP^2\setminus C$ the fundamental group $\pi_1(\PP^2\setminus C,q)$ acts on $H^1(f^{-1}(q^\perp),\Z)$; this is the monodromy action in question. \begin{proposition}\label{finite_monodromy} The image of the monodromy action of $\pi_1(\PP^2 \setminus C,q)$ on $H^1(f^{-1}(q^\perp),\Z)$ is cyclic of order dividing $d=\deg C$. 
\end{proposition} \begin{proof} Since $C$ is a nodal projective curve, the group $\pi_1(\PP^2\setminus C)$ is abelian (see~\cite{Del}), whence it is isomorphic to $H_1(\PP^2\setminus C,\Z)$; the latter is obviously cyclic of order~$d$. \end{proof} Recall the main result of local Picard--Lefschetz theory for Lefschetz pencils with one-dimensional fiber (\cite[Expos\'e XIV, 3.2.5]{SGA7.2} or \cite[\S\S\,5 and~6]{Lamotke}). \begin{proposition}\label{PL} Suppose that \X is a $2$-dimensional complex manifold and $p\colon \X\to\Delta$ is a proper surjective holomorphic mapping with the following properties. \textup{(i)} Over $\Delta^*$, the mapping $p$ has no critical points. \textup{(ii)} In $p^{-1}(0)$, the mapping $p$ has only one critical point~$w$ and the Hessian of $p$ at $w$ is non-degenerate. \textup{(iii)} All fibers of $p$ are connected. Fix a point $z_0\in \Delta^*$ and put $C=p^{-1}(z_0)$ \textup(in view of~\textup{(i)}, $C$ is a compact Riemann surface\textup). Then \textup{(a)} The curve $C_0=p^{-1}(0)$ is smooth everywhere except for a node at~$w$. \textup{(b)} The curve $C$ contains an embedded circle~$S$ such that $C_0$ is homeomorphic to the quotient space $C/S$. \textup{(c)} The monodromy operator on $H^1(C,\Z)$ corresponding to the generator of $\pi_1(\Delta^*)$ is defined by the formula \begin{equation}\label{eq:PL} x\mapsto x-(x,c)c, \end{equation} where $c\in H^1(C,\Z)$ is the Poincar\'e dual of the fundamental class of $S$. \end{proposition} (The circle $S$, as well as its Poincar\'e dual cohomology class, is called \emph{vanishing cycle}.) Put $C_0=f^{-1}(\ell)$, where $\ell\subset (\PP^2)^*$ is a general line, and denote the genus of $C_0$ by~$g$. \begin{proposition}\label{tangent:generalities} Suppose that $L\subset (\PP^2)^*$ is a line such that $L^\perp \in C$. Then the curve $f^{-1}(L)$ is connected, and $L$ can be tangent to $C^*$ only at one or two points. \end{proposition} \begin{proof} The connectedness of $f^{-1}(L)$ (for arbitrary $L$) follows from~\cite[Proposition~1]{FH}. A line $L$ cannot be tangent to $C^*$ at more than two points since all the singularities of the curve $C=(C^*)^*$ are nodes. \end{proof} \begin{proposition}\label{f(-1)(tangent)} If a line $L\subset (\PP^2)^*$ is tangent to $C^*$ at one point, then the curve $f^{-1}(L)$ is homeomorphic to the quotient space $C_0/S$, where $S\subset C_0$ is homeomorphic to the circle and $S$ is homologous to zero in $C_0$. \end{proposition} \begin{proof} Take a point $t \in L \setminus C^*$, so that $L^\perp \in t^\perp \subset \PP^2$, and $t^\perp$ is transversal to $C$. Then consider the incidence variety \begin{equation}\label{eq:Lefschetz_pencil} \tilde X=\{(x, a)\in X\times t^\perp\colon f(x)\in a^\perp\} \end{equation} (the surface $\tilde X$ is just the blow-up of $X$ at $f^{-1}(t)$). The existence of $S$ follows now from Proposition~\ref{PL} applied to the natural projection $\tilde X \to t^\perp$ restricted on the preimage of a small disk $\Delta \subset t^\perp$ centered at $L^\perp$. We are left only to check that the vanishing cycle is zero-homologous. Choosing $z_0\in \Delta\setminus \{L^\perp\}$ and putting $C_0=f^{-1}(L)$, observe that $\Delta \setminus L^\perp \subset \PP^2 \setminus C$, so the monodromy representation \begin{equation}\label{local_mono} \pi_1(\Delta\setminus \{L^\perp\})\to \Aut(H^1(C_0,\Z)) \end{equation} factors through the monodromy representation \begin{equation}\label{global_mono} \pi_1(\PP^2\setminus C)\to \Aut(H^1(C_0,\Z)). 
\end{equation} The image of the homomorphism~\eqref{global_mono} is finite by Proposition~\ref{finite_monodromy}, whence the image of the generator of $\pi_1(\Delta\setminus\{L^\perp\})$ under the homomorphism~\eqref{local_mono} has finite order~$k$. The $k$th power of the monodromy operator \eqref{eq:PL} is \[ x\mapsto x - k\cdot(x,c)c, \] which is identity if and only if $(c,x)=0$ for all $x\in H^1(C_0,\Z)$. So, $c=0$ and the curve $S$ is homologous to zero. \end{proof} \begin{proposition}\label{unique_section} Suppose that $\deg C>2$ and $L\subset(\PP^2)^*$ is a line tangent to $C^*$ at exactly one point~$q$. Then only the following two cases are possible: \textup{(a)} $f^{-1}(L)=Y_1\cup Y_2$, where $Y_1$ and $Y_2$ are smooth projective curves intersecting transversally at a point lying over~$q$, where the restriction $\left.f\right|_{Y_1}\colon Y_1\to L$ is an isomorphism, the restriction $\left.f\right|_{Y_2}\colon Y_2\to L$ has degree~$>1$, and $(Y_1,Y_1)=0$; \textup{(b)} $C$ is a smooth cubic curve and $f$ is equivalent to a projection of the Veronese surface $v_2(\PP^2)\subset\PP^4$ \textup(in particular, $\deg f=4$\textup). In this case both $Y_1$ and $Y_2$ is a smooth rational curve, and each of the restrictios $\left.f\right|_{Y_1}\colon Y_j\to L$ has degree~$2$. \end{proposition} \begin{proof} Propositions~\ref{f(-1)(tangent)} and~\ref{tangent:generalities} show that $f^{-1}(L)=Y$ is a connected curve homeomorphic to the quotient space $Y'/S$, where $Y'$ is a sphere with handles, and $S\subset Y'$ is a zero-homologous circle. Then $Y'/S$ is homeomorphic to the wedge sum of two spheres with handles, so the curve $Y$ has exactly two components. Since the divisor $Y=Y_1+Y_2$ is the inverse image of the line~$L\subset (\PP^2)^*$ and $f(Y_1)=f(Y_2)=L$ (which follows from the finiteness of the morphism~$f$), one has, for $j=1$ or~$2$, \begin{equation}\label{eq:pos} (Y_j,Y_1+Y_2)=\deg(\left.f\right|_{Y_j})>0. \end{equation} The curves $Y_1$ and $Y_2$ intersect transversally at one point, whence $(Y_1,Y_2)=1$ and therefore \begin{equation}\label{both_non-neg} (Y_1,Y_1)\ge0,\quad (Y_2,Y_2)\ge0. \end{equation} Denote by $V\subset H^2(X,\R)$ the subspace generated by fundamental classes of $Y_1$ and $Y_2$. Consider two cases. \emph{Case 1}: $\dim V=1$. In this case the classes of $Y_1$ and $Y_2$ are proportional: $Y_2 = rY_1$. Observe that the number $r$ must be positive because $(Y_1,f^{-1}L)>0$, $(Y_2,f^{-1}L)>0$ for a general line $L\subset(\PP^2)^*$. Since \[ 1=(Y_1,Y_2)=r(Y_1,Y_1)=r^{-1}(Y_2,Y_2), \] and both $(Y_1,Y_1)$ and $(Y_2,Y_2)$ are integers, one has $r=1$, so that $Y_1 = Y_2$ in cohomology, and $(Y_1,Y_1)=(Y_2,Y_2)=1$. Since homology classes of $C$ and $C'$ are equal and $C+C'=f^{-1}(L)$ is an ample divisor, the divisors $C$ and $C'$ are also ample. Now $\deg f=(Y_1+Y_2,Y_1+Y_2)=4$. Since fundamental classes of the curves $Y_1$ and $Y_2$ coincide, their genera are the same; denote this number by $g$. \begin{lemma}\label{2g=g} The curves $Y_1$ and $f^{-1}(L)$ have the same genus. \end{lemma} \begin{proof} Observe that the surface $\tilde X$ defined by the equation~\eqref{eq:Lefschetz_pencil} with the natural projection $q\colon \tilde X\to t^\perp$ is a Lefschetz pencil. 
Now a standard argument (see, for example, \cite[Expos\'e XVIII, Theorems 5.6.8 and~5.6.2]{SGA7.2}) shows that image of the natural injection $H^1(X,\C)\hookrightarrow H^1(f^{-1}(L),\C)$, where the line $L\subset(\PP^2)^*$ is transversal to $C^*$ and passes through~$t$, coincides with the invariant subspace \[ H^1(f^{-1}(L))^{\pi_1(L\setminus C^*)}. \] Put $f^{-1}(L)=H$; since the monodromy group is generated by the operators~\eqref{local_mono} and all the vanishing cycles~$c$ are zero in view of Proposition~\ref{f(-1)(tangent)}, we conclude that the restriction $H^1(X,\C)\hookrightarrow H^1(H,\C)$ is an isomorphism, whence the homomorphism $H^1(X,\Ooo_X)\to H^1(H,\Ooo_H)$ from the exact sequence \[ 0\to\Ooo_X(-H)\to\Ooo_X\to\Ooo_H\to 0 \] is also an isomorphism; the Kodairra vanishing theorem and Serre's duality show then that this is equivalent to the equation $\dim \lmod K_X\rmod = \dim \lmod K_X+H\rmod$; since $\dim \lmod H\rmod > 0$, this is equivalent to the relation $\lmod K_X+H\rmod=\varnothing$ (cf.~\cite[Proposition 6.1]{Lvovski}). Since $H=Y_1+Y_2$ and $Y_2$ is an effective divisor, it follows now that the linear system $\lmod K_X+Y_1\rmod$ is also empty; since the divisor $Y_1$ is ample, the above argument in the reverse order shows that $H^1(X,\Ooo_X) \cong H^1(Y_1,\Ooo_{Y_1})$. Thus, genera of the curves $f^{-1}(L)=H$ and $Y_1$ are both equal to $\dim H^1(X,\Ooo_X)$. \end{proof} If a line $L'\subset(\PP^2)^*$ is transversal to $C^*$, then, according to Proposition~\ref{PL}b, $Y_1\cup Y_2$ is homeomorphic to $Y'=f^{-1}(L')$ with a circle contracted to a point. Hence, the genus of $Y'$ equals $2g$. Since $2g=g$ by Lemma~\ref{2g=g}, we infer that $g=0$. Thus, $Y_1$ and $Y_2$ are smooth rational ample curves with self-intersection indices~$1$ on the smooth projective surface~$X$. It is well known (see, for example, \cite[Proposition 2.3]{Lvovski2}) that if a smooth rational curve $Y$ on a smooth projective surface $X$ is ample and $(Y,Y)=1$, then there exists an isomorphism from $X$ to $\PP^2$ mapping $Y$ to a line. Thus, $X\cong \PP^2$ and $\Ooo_X(f^{-1}(L))\cong \Ooo_{\PP^2}(2)$, so the mapping $f\colon X\to\PP^2$ is isomorphic to a projection $\pr_\Lambda\colon v_2(\PP^2)\to\PP^2$, where $v_2(\PP^2)\subset \PP^5$ is the quadratic Veronese surface and $\Lambda\subset\PP^5$ is a $2$-plane disjoint from $v_2(\PP^2)$. Since the curve $C^*$ is the branch locus of the projection $\pr_\Lambda$, its dual curve $C\subset\PP^2$ coincides with the intersection $(v_2(\PP^2))^*\cap\Lambda^\perp$ (cf. the proof of Proposition~\ref{prop:example} below), which is clearly a smooth plane cubic since $(v_2(\PP^2))^*$ is the variety of degenerate symmetric $3\times3$-matrices. \emph{Case 2}: $\dim V = 2$. By the Hodge index theorem, \[ \begin{vmatrix} (Y_1,Y_1)&(Y_1,Y_2)\\ (Y_1,Y_2)&(Y_2,Y_2) \end{vmatrix} <0, \] whence $(Y_1,Y_1)(Y_2,Y_2)<1$. So at least one of the factors must be zero; assume without loss of generality that $(Y_1,Y_1)=0$, then \[ \deg(\left.f\right|_{Y_1})=(Y_1,Y_1+Y_2)=1 \] Thus, the restriction of $f$ to $Y_1$ is an isomorphism from $Y_1$ to the line $L$. Let us prove that degree of the restriction $\left.f\right|_{Y_2}\colon Y_2\to f(Y_2)$ is greater than one. Indeed, $L$ is tangent to $C^*$ at only one point, so $Y$ has only two components. 
Thus, if $Y_2$ maps with degree~$1$ onto~$L$, then \[ (Y_2,Y_2)=(Y_2,Y_1+Y_2)-(Y_2,Y_1)=1-1=0 \] and $\deg f=(Y_1+Y_1,Y_2+Y_2)=2(Y_1,Y_2)=2$, so all nodes and/or cusps of $C^*$ must be bad in the sense of Definition~\ref{def:bad.cusp}, which contradicts the smoothness of $X$ in view of Proposition~\ref{not.smooth.over.bad}. Thus, $C^*$ must be smooth. Since its dual curve $(C^*)^*=C$ has no cusps, the smooth curve~$C^*$ has no inflexion points. It is well known (and follows, for example, from the Pl\"ucker formulas~\cite[Section 9.1, Theorem 1]{BK}) that the only smooth plane curve without inflexions is the conic, whence $C$ is a conic, which contradicts the hypothesis. \end{proof} \begin{note} The trick with Hodge index theorem is essentially contained in Van de Ven's paper~\cite{VandeVen} (see the proof of Theorem~I). A similar argument allows one to give a proof of the main result of the paper~\cite{Zak} that is valid in arbitrary characteristic (see~\cite{Lvovski2}). \end{note} \begin{proposition}\label{X_C=X} Suppose that $C\subset\PP^2$ is a smooth or nodal curve of degree~$>2$ that is general enough in the sense of Definition~\ref{def:general_curve}. Let also $X$ be a smooth projective surface and $f\colon X\to(\PP^2)^*$ be a finite morphism with branch locus $C^*$ and such that the ramification is simple. If $\deg C=3$, assume in addition that $C$ has a node or $\deg f\ne 4$. Then there exists an isomorphism $\Phi\colon X\to X_C$ such that $f_C\circ\Phi=f$, where $X_C$ and $f_C$ are defined by equations~\eqref{def_of_X_C} and~\eqref{def:f_C}, respectively. \end{proposition} We know two proofs of this proposition, one ``topological'' and one algebraic geometric. For the expositon in this paper, we have chosen the latter since its rigorous version appears to be shorter. \begin{proof} As before, denote the normalization of $C$ by $\nu\colon \hatC\to C$. Let $\gamma\colon \hatC\to C^*$ be the Gauss mapping, which attaches to a point $x\in C$ the tangent to $C^*$ at $\nu(x)$, where by tangent we mean the limiting tangent if $\nu(x)$ is a cusp of $C^*$ and the limiting tangent at the branch corresponding to $x$ if $\nu(x)$ is a node of~$C^*$. If follows from projective duality that the definition of the surface $X_C$ (see~\eqref{def_of_X_C}) may be rewritten as \[ X_C=\{(x,t)\in \hatC\times(\PP^2)^*\mid t\in \gamma(x)^\perp\}. \] Now put \[ \Zcal=\{(x,y)\in \hatC\times X\mid f(y)\in \gamma(x)^\perp\}. \] \begin{lemma}\label{lemma:component} There exists a component $\Zcal_1\subset\Zcal$ consisting of components of fibers that have intersection index $1$ with $D=f^{-1}(L)$ \textup(or, equivalently, project isomorphically onto tangents to~$C^*$; in the proof of Proposition~\ref{unique_section} these components were called~$Y_1$\textup). \end{lemma} \begin{proof}[Proof of the lemma] We will be using the language of schemes. Denote the natural projection $\Zcal\to\hatC$ by $\pi_1$ and the morphism $(x,y)\mapsto f(y)$ by $\pi_2 \colon\Zcal\to(\PP^2)^*$. Pick a line $L\subset(\PP^2)^*$ transversal to $C^*$ and denote by $D\subset\Zcal$ the closed subscheme $\Pi_2^{-1}L$. Proposition~\ref{unique_section} shows that for almost all (in the sense of Zariski topology) closed points $x\in\hatC$, the fiber $\pi_1^{-1}(x)$ consists of two components $Y_1$ and $Y_2$ and $D\cap \pi_1^{-1}(x)$ is a reduced scheme of length $\deg C^*$; moreover, one of the points of $D\cap \pi_1^{-1}(x)$ lies in $Y_1$ and the remaining $\deg C^*-1$ lie in $Y_2$. 
Let $K$ stand for the function field of \hatC and $\bar K$ for its algebraic closure; put $\eta=\Spec K$, $\bar\eta=\Spec\bar K$, and let $\Zcal_\eta$ and $\Zcal_{\bar\eta}$ be the generic and generic geometric fibers of $\pi_1$ respectively. Then it is apparently evident that $\Zcal_{\bar\eta}$ consists of two components and the pullback of $D$ is also a reduced scheme of length $\deg C^*$ such that one of its points lies on one of the components of $\Zcal_{\bar\eta}$, while the remaining $\deg C^*-1$ points lie on the other component. To prove this assertion rigorously, we invoke EGAIV. To wit, consider the coherent sheaf $\F=\Ooo_\Zcal\oplus\Ooo_D$ on \Zcal. The primary type (``type primaire'') \cite[Remarque 9.8.9]{EGAIV-3} (roughly speaking, it is the collection of lengths of fibers of \F at the generic points of components of its support and at the generic points of various intersections of the closures of these points) of pullbacks of \F to fibers over almost all (in the sense of Zariski topology) closed points is the same; since the set of points with the property ``the pullback of \F to the geometric fiber over the point in question has a given \emph{type primaire}'' is constructible due to~\cite[9.8.9.1]{EGAIV-3}, it follows that \emph{type primaire} of the pullback of \F to $\Zcal_{\bar\eta}$ is the same as that of its pullbacks to almost all fibers over closed points, whence the assertion. Now observe that the Galois group $G=\mathrm{Gal}(\overline K/K)$ acts on the set of the two components of $\Zcal_{\bar\eta}$. Since the pullback of $D$ to $\Zcal_{\bar\eta}$ is $K$-rational, it is $G$-invariant. On the other hand, one of these component contains only one point of $D$, while the other contains $\deg C^*-1>1$ such points (if $\deg C^*-1=1$, then $C^*$ and $C$ are conics, which contradicts the hypothesis). Thus, both these components are $G$-invariant, whence $\Zcal_\eta$ contains a component containing only one point from the pullback of~$D$. This proves the lemma. \end{proof} Returning to the proof of the proposition, observe that self-intersection index of almost all of the fibers of the projection $\Zcal_1\to C^*$ as curves on $X$ is zero due to Proposition~\ref{unique_section}, so they are disjoint. Thus, the natural mapping $\Phi_1\colon\Zcal_1\to X$ (induced by projection to the second factor) is generically one to one; since $X$ is smooth, Zariski's main theorem implies that $\Phi_1$ is an isomorphism. On the other hand, if we define the morphism $\Phi_2\colon \Zcal_1\to X_C$ by the formula~$(x,y)\mapsto (x,f(y))$, then it is easy to see that $\Phi_2$ is also generically bijective, whence isomorphic. Now we may put $\Phi=\Phi_2\circ\Phi_1^{-1}$. \end{proof} Now the $(\mathrm b) \Rightarrow (\mathrm a)$ implication in Theorem~\ref{th.main} follows immediately from Propositions~\ref{X_C=X} and~\ref{final_step}. \begin{note}\label{chisini:example} If $\deg C=3$ and $\deg f=4$, Theorem~\ref{th.main} does not hold. Indeed, if $f\colon v_2(\PP^2)\to\PP^2$ is a general projection of the quadratic Veronese surface, then the branch locus of $f$ is the curve $C^*\subset \PP^2$ that is dual to a smooth cubic~$C$. If $L\subset \PP^2$ is a general line, then the restriction $\pi=\left.f\right|_{f^{-1}(L)}\colon f^{-1}(L)\to L$ is a generic covering of degree $4$ ramified over $L\cap C^*$, that is, over the branch locus of the projection $\pr_{L^\perp}\colon C\to L$. 
By the very construction, the mapping $\pi$ can be extended to the mapping $f\colon v_2(\PP^2)\to\PP^2$ ramified over~$C^*$, but $\deg f=4\ne3$, so $f$ is not equivalent to a projection of the plane cubic. \end{note} The following corollary of Theorem \ref{th.main} is essentially its ``topological'' reformulation: \begin{corollary}\label{th.main2} Suppose that $C\subset \PP^2$ is a nodal curve of degree~$>2$ that is general enough in the sense of Definition~\ref{def:general_curve}. Suppose that a point $p\in\PP^2\setminus C$ is such that the composition $\pr _p\circ\nu\colon\hatC\to\PP^1$, where $\nu\colon\hatC\to C$ is the normalization and $\pr _p$ is the projection from $p$, has simple ramification. Denote by $L\subset(\PP^2)^*$ the line in the dual plane corresponding to the point $p\in\PP^2$, and suppose that $C'$ is a smooth projective curve and $\pi\colon C'\to L$ is a holomorphic mapping with simple ramification and such that the branch locus of $\pi$ coincides with $L\cap C^*$. If $\deg C=3$, assume in addition that $C$ has a node or $\deg f\ne4$. Then the following two conditions are equivalent. \textup{(a)} There exists an isomorphism $\ph\colon C'\to\hat C$ such that $(\pr _p\circ\nu)\circ \ph=\pi$. \textup{(b)} The covering $\pi^{-1}(L\setminus C^*)\to L\setminus C^*$ can be extended to a covering $X_0\to(\PP^2)^*\setminus C^*$ with respect to which all nodes and cusps of the curve~$C^*$ are good in the sense of Definitions~\ref{def:good.nodes} and~\ref{def:bad.cusp}. \end{corollary} \begin{proof} Immediate from Theorem~\ref{th.main} and Proposition~\ref{not.smooth.over.bad}. \end{proof} Our proof of Theorem~\ref{th.main} implies the following corollary: \begin{corollary}\label{chisini_duals} Suppose that $C\subset\PP^2$ is a nodal curve that is general enough in the sense of Definition~\ref{def:general_curve}. If $C$ is not a smooth cubic, then any two generic coverings $X\to(\PP^2)^*$, ramified over $C^*$ and with smooth $X$, are equivalent. \end{corollary} If $\deg C \ge 7$, or $\deg C = 6$ and $C$ has at least one node, or $\deg C = 5$ and $C$ has at least three nodes, then this corollary is implied by \cite[Theorem~10]{Kulikov1999}. It is well known (see, for example,~\cite{Kulikov1999}) that this assertion is wrong if $C$ is a smooth cubic. A~relevant counterexample is essentially contained in our proof of Proposition~\ref{unique_section} (Case~1). \section{The smoothness condition is (sometimes) essential}\label{sec:example} In this section we show that the smoothness condition in Theorem \ref{th.main} is not trivial, at least for the case of smooth curves of degree $d=3$ or $4$ and mappings of degree~$d$. To wit, we prove the following. \begin{proposition}\label{prop:example} Suppose that $d=3$ or $4$. Then there exist a smooth curve $C\subset\PP^2$, $\deg C=d$, a line $p^\perp\subset(\PP^2)^*$ (where $p \in \PP^2$ is a point) transversal to~$C$, a smooth projective curve $C'$, and a holomorphic mapping $\pi\colon C'\to p^\perp$ with simple ramification such that $\deg\pi=d$ and $\pi$ is ramified exactly over $C^*\cap p^\perp$ with the following properties. \textup{(1)} For a point $t_0\in p^\perp \setminus C^*$, the fiber monodromy homomorphism $\pi_1(p^\perp\setminus C^*,t_0)\to \Perm (\pi^{-1}(t_0))$ can be factored through the homomorphism $\pi_1(p^\perp\setminus C^*,t_0)\to \pi_1((\PP^2)^*\setminus C^*,t_0)$ induced by the embedding $p^\perp \setminus C^*\hookrightarrow (\PP^2)^*\setminus C^*$. 
\textup{(2)} The mapping $\pi\colon C'\to p^\perp$ is \emph{not} equivalent to the projection $\pr _p\colon C\to p^\perp$. \end{proposition} We will need the following lemma. \begin{lemma}\label{self-dual:3,4} If $d=3$ or $4$, then there exists a surface with isolated singularities $X\subset\PP^3$, $\deg X=d$, such that its dual $X^*\subset(\PP^3)^*$ is isomorphic to $X$ and the general hyperplane section of $X$ is general enough in the sense of Definition~\ref{def:general_curve}. \end{lemma} \begin{proof}[Proof of the proposition modulo Lemma~\ref{self-dual:3,4}] Let $X \subset \PP^3$ be a surface from Lemma~\ref{self-dual:3,4}. For a general point $r\in\PP^3$, the projection $\pr_r\colon X\to \PP^2$, where $\PP^2$ is a projective plane (incidentally, $\PP=(r^\perp)^*$), is simply ramified. If $B\subset\PP^2$ is the branch locus of $\pr_r$, then a line $\ell\subset\PP^2$ is tangent to $B$ if and only if the plane $\Pi_r$ spanned by $r$ and $\ell$ is tangent to $X$. Thus, the dual curve $B^*\subset(\PP^2)^*$ is projectively isomorphic to $X^*\cap r^\perp =C$; if $p$ is general, Lemma~\ref{self-dual:3,4} asserts that $C$ is a general enough smooth plane curve. Take $p \in \PP^3$ general and let $C'=X\cap\Pi_p$; denote the restriction of $\pr_p$ to~$C'$ by $\pi\colon C'\to L$. We see (taking into account that $B=C^*$) that $\pi$ extends to a finite morphism $\pr_p\colon X\to (\PP^2)^*$ ramified over $C^*$, where the surface $X$ is singular. Thus, $\pi$ satisfies Condition (b) of Theorem \ref{th.main} with the smoothness omitted. On the other hand, $\pi$ cannot be equivalent to the projection $\pr_r$. Indeed, if such an equivalence existed, then by Proposition ~\ref{gen(a)=>(b)} the restriction of $\pi$ to the preimage of the complement of the branch locus could be extended to a finite morphism $X_C\to\PP^2$ ramified over $C^*$ with smooth $X_C$. However, if such an extension to a finite mapping $X\to(\PP^2)^*$ ramified over $C^*$ and with normal (in particular, smooth) $X$ exists, this extension is unique by the theorem of Grauert and Remmert~\cite[Expos\'e~XII, Th\'eor\`eme 5.4]{SGA1} we cited before. Since we already have such an extension with singular $X$, we arrive at the desired contradiction. \end{proof} It remains to prove Lemma~\ref{self-dual:3,4}. If $d=3$, we let $X$ be the projective closure of surface in $\mathbb A^3$ defined by the equation $x_1x_2x_3=1$ (this surface was considered in~\cite{ACT}). A direct computation shows that $X$ contains only three singular points (of the type $A_2$) and that $X$ is projectively isomorphic to~$X^*$. Since any smooth plane cubic is automatically general enough in the sense of Definition~\ref{def:general_curve}, the surface $X$ is as required. To treat the case $d=4$ we need to recall the definition and some properties of Kummer surfaces. Suppose that $J$ is a principally polarized Abelian variety of dimension~$2$ that is not the product of two elliptic curves and $\Theta\subset J$ is a theta divisor of polarization. Then the complete linear system $\lmod 2\Theta\rmod$ is base point free and defines a finite morphism $J\to\PP^3$; the image of this morphism is a surface of degree $4$ with exactly $16$ singular points of the type $A_1$; the surfaces obtained by this construction are called Kummer surfaces (see, for example, \cite[Section 10.3]{Dolgachev}). Any Kummer surface $X\subset\PP^3$ is projectively isomorphic to its dual $X^*\subset(\PP^3)^*$ (see \cite[Theorem 10.3.19]{Dolgachev}). 
Thus, to finish the proof of Lemma~\ref{self-dual:3,4}, it remains to show that there exists a Kummer surface such that its general plane section is general enough in the sense of Definition~\ref{def:general_curve}; since such sections are plane quartics, it suffices to establish that a general plane section has only simple inflexion points. The following lemma and its proof were pointed out to us by Rita Pardini. \begin{lemma}[R. Pardini]\label{Pardini} Each non-hyperelliptic curve of genus $3$ \textup(smooth and projective\textup) is isomorphic to a plane section of an appropriate Kummer surface in~$\PP^3$. \end{lemma} \begin{proof}[Proof of the lemma] Suppose that $C$ is such a curve, and let $\sigma\colon C_1\to C$ be an unramified covering of degree~$2$. Denote by $(J,\Theta)$ the Prym variety corresponding to this covering~\cite[Section 12.2]{BiLa}. According to \cite[Proposition~12.5a]{BiLa}, the Abel--Prym map embeds $C'$ in $J$ as an element of the linear system~$2\Theta$. Multiplication by $-1$ on $J$ restricts on $C_1$ to the involution induced by $\sigma$, hence the image of $C_1$ via the map given by $\lmod 2\Theta\rmod$ is a plane section of the Kummer surface associated to $(J,\Theta)$, and this section is isomorphic to~$C$. \end{proof} Now if we pick a smooth plane curve $C_0\subset\PP^2$ of degree $4$ such that all its Weierstrass points have index one, or, equivalently, all its inflexion points are simple, Lemma~\ref{Pardini} shows that $C_0$ is a plane section of a Kummer surface $X\subset\PP^3$; since the property ``all the Weierstrass points have index one'' is open, a generic plane section of $X$ has only simple inflexion points, which completes the proof of Lemma~\ref{self-dual:3,4}. \bibliographystyle{amsalpha}
1,116,691,501,204
arxiv
\section{Introduction} On November 8, 2017, ``Drone Integration Pilot Program'' \cite{USprogramm_UAV} was launched under a presidential memorandum from the White House, which aimed at further exploring expanded use of unmanned aerial vehicles (UAVs) including beyond-visual-line-of-sight flights, night-time operations, flights over people, etc. \cite{whitehouse_UAV}. In fact, the past several years have witnessed an unprecedented growth on the use of UAVs in a wide range of civilian and defense applications such as search and rescue, aerial filming and inspection, cargo/packet delivery, precise agriculture, etc. \cite{drone_num}. In particular, there has been a fast-growing interest in utilizing UAVs as aerial communication platforms to help enhance the performance and/or extend the coverage of existing wireless networks on the ground \cite{zeng2016wireless,bor2016new}. For example, UAVs such as drones, helikites, and balloons, could be deployed as aerial base stations (BSs) and/or relays to enable/assist the terrestrial communications. UAV-enabled/aided wireless communications possess many appealing advantages such as swift and cost-effective deployment, line-of-sight (LoS) aerial-to-ground link, and controllable mobility in three-dimensional (3D) space, thus highly promising for numerous use cases in wireless communications including ground BS traffic offloading, mobile relaying and edge computing, information/energy broadcasting and data collection for Internet-of-Things (IoT) devices, fast network recovery after natural disasters, etc. \cite{3gpp_UAV,qualcom_UAVreport,zhang2017spectrum,xu2017uav2,yang2017proactive,yang2017energy,zhang2016fundamental}. For example, Facebook has even ambitiously claimed that ``Building drones is more feasible than covering the world with ground signal towers'' \cite{fb_UAV}. By leveraging the aerial BSs along with terrestrial and satellite communications, Europe has established an industry-driven project called ``ABSOLUTE'' with the ultimate goal of enhancing the ground network capacity to many folds, especially for public safety in emergency situations \cite{absolute}. At present, there are two major ways to practically implement aerial BSs/relays by using tethered and untethered UAVs, respectively, which are further explained as follows. A tethered UAV literally means that the UAV is connected by a cable/wire with a ground control platform (e.g., a custom-built trailer). Although it may sound ironic for a UAV to be on a tethering cable, this practice is very common due to many advantages including stable power supply and hence unlimited endurance, more affordable payload (e.g., more antennas), ultra-high speed backhaul with secured data transmission (e.g., real-time high-definition video), robustness to wind, etc. All these evident benefits have triggered a great interest in testing tethered UAV BSs, such as Facebook's ``Tether-Tenna'', AT\&T's ``flying cell-on-wings (COWs)'', and Everything-Everywhere's (UK's largest mobile network operator, EE) ``Air Masts''. However, such a tethering feature also limits the operations of UAVs to taking off, hovering, and landing only, thus rendering the wireless networks employing tethered UAV BSs like ``hovering cells'' over the air. As a result, the research efforts in this paradigm mainly focus on the UAV deployment/placement optimization in a given target area to meet the ground data traffic demand \cite{al2014optimal,lyu2016placement,bor2016efficient, mozaffari2016efficient,yang2017proactive,alzenad20173d}. 
In particular, \cite{al2014optimal} provides an analytical approach to optimize the altitude of a UAV for providing the maximum coverage for ground users (GUs). Alternatively, by fixing the altitude, the horizontal positions of UAVs are optimized in \cite{lyu2016placement} to minimize the number of required UAVs to cover a given set of GUs. In contrast to tethered UAVs, the untethered UAVs generally rely on on-board battery and/or solar energy harvesting for power supply \cite{wu2016overview} and wireless links for data backhaul. Although untethered UAVs in general have smaller payload and limited endurance/backhaul rate as compared to their tethered counterparts, they have fully controllable mobility in 3D space which can be exploited for communication performance enhancement. First, an untethered UAV not only can hover above a fixed ground location like tethered UAVs, but also can fly freely over a wide ground area to significantly extend the communication coverage. Furthermore, the free-flying feature enables the UAV BS to timely adjust its position according to the dynamic distributions of the GUs, and even follow closely some specific GUs, to achieve a new ``user-centric and cell-free'' communication service. In fact, the reduced UAV-GU distance not only decreases the signal attenuation but also increases the probability of the LoS link between them, which is particularly crucial for high-rate communication. As a result, untethered UAV BSs/relays have been envisioned to be a revolutionizing technology for future wireless communication systems \cite{fotouhi2017dronecells} and preliminary industry prototypes have been built and tested including e.g. Facebook's Aquila and Nokia's flying-cell (F-Cell). This has also inspired a proliferation of studies recently on the new research paradigm of jointly optimizing the UAV trajectory design and communication resource allocation, for e.g. mobile relaying channel \cite{zeng2016throughput,li2016energy}, multiple access channel (MAC) and broadcast channel (BC) \cite{wu2017joint,lyu2016cyclical,JR:wu2017_ofdm}, interference channel (IFC)\cite{JR:wu2017joint}, and wiretap channel \cite{guangchi2016_UAV}. In particular, as shown in \cite{lyu2016cyclical} and \cite{JR:wu2017_ofdm}, significant communication throughput gains can be achieved by mobile UAVs over static UAVs/fixed terrestrial BSs by exploiting the new design degree of freedom of UAV trajectory optimization, especially for delay-tolerant applications. In \cite{JR:wu2017joint}, a joint UAV trajectory, user association, and power control scheme is proposed for cooperative multi-UAV enabled wireless networks. A multi-objective path planning (MOPP) framework is proposed in \cite{yin2017offline} to explore a suitable path for a UAV operating in a dynamic urban environment. In \cite{mozaffari2017mobile}, an efficient mobile architecture is proposed for uplink data collection application in IoT networks. To optimize the wireless system performance by exploiting UAV-enabled BSs, assorted UAV trajectory designs have been proposed in the literature \cite{li2016energy,wu2017joint,JR:wu2017joint,lyu2016cyclical,JR:wu2017_ofdm,jeong2016mobile,zeng2016throughput,mozaffari2016unmanned,mozaffari2017mobile,wu2017ofdma}, based on optimization techniques such as successive convex optimization (SCA) and solutions for Travelling Salesman Problem (TSP) as well as its various extensions. 
However, all these works either assume time division multiple access (TDMA) \cite{li2016energy,wu2017joint,JR:wu2017joint,lyu2016cyclical,mozaffari2016unmanned} or frequency division multiple access (FDMA) \cite{JR:wu2017_ofdm,jeong2016mobile,zeng2016throughput,mozaffari2017mobile,wu2017ofdma} to simplify the multiuser communication design, which, however, are in general suboptimal from an information-theoretic perspective. As a result, the fundamental capacity limits of UAV-enabled multiuser communication systems still remain largely unknown, which thus motivates this work. \begin{figure}[!t] \centering \includegraphics[width=0.4\textwidth]{UAV_twouser2_region2.eps} \caption{A UAV-enabled broadcast channel (BC) with two GUs at fixed locations. } \label{system:model}\vspace{-0.5cm} \end{figure} In this paper, we aim to characterize the capacity region of a UAV-enabled BC and reveal the capacity-optimal joint UAV trajectory and communication design. As an initial study, we consider the simplified setup with two GUs, as shown in Fig. \ref{system:model}. Specifically, it is assumed that a UAV with the maximum speed $V$ meter/second (m/s) flies at a constant altitude of $H$ m to serve two GUs at fixed locations with a distance of $D$ m. We consider the communication within a given UAV flight duration of $T$ s. Note that if $VT\ll D$, the considered system simplifies to e.g. a tethered UAV BS, which can be placed above a fixed ground location. On one hand, we assume that $H$ is sufficiently large such that the channels from the UAV to both GUs are dominated by the LoS link, according to the recently conducted experimental results by Qualcomm \cite{qualcom_UAVreport}. On the other hand, we assume that $D$ is sufficiently large and comparable to $H$ so that the UAV's horizontal position can have a non-negligible impact on the UAV-GU channel strength.\footnote{Otherwise, if $D\ll H$, the UAV trajectory design becomes trivial and it should simply stay above any point along the line between the two GUs and their channels can be regarded as constant irrespective of $D$. } As a result, an effective time-varying BC can be generally established between the UAV and the two GUs as the UAV moves horizontally above them. Given the UAV trajectory, the UAV-GU BC resembles the conventional fading BC with a terrestrial BS \cite{tse2005fundamentals}. However, their fundamental difference lies in that the UAV trajectory and hence its induced time-varying channel are controllable and thus can be proactively designed to maximize the capacity of the BC, while this is impossible for conventional fading channels due to the randomness in the propagation environment. As such, the joint UAV trajectory and communication design can exploit this additional degree of freedom to enlarge the capacity region compared to the conventional BC with a static BS on the ground. To this end, we characterize the capacity region of this new UAV-enabled BC over a given UAV flight duration $T$, by jointly optimizing the UAV's trajectory and transmit power/rate allocations over time, subject to the practical UAV's maximum speed and transmit power constraints. Specifically, we adopt the \emph{rate-profile} approach as in \cite{zhang2010cooperative} to maximize the sum rate of the two GUs under their different rate ratios, which leads to a complete characterization of all the achievable rate-pairs for the two GUs on the so-called \emph{Pareto} boundary of the capacity region. 
However, such a joint optimization problem is shown to be non-convex and difficult to solve in general. Nevertheless, we obtain the optimal solution of the considered problem by exploiting its particular structure and applying tools from convex optimization. The main results of this paper are summarized as follows: \begin{itemize} \item First, to draw essential insights, we consider two special cases with asymptotically large UAV flight duration, i.e., $T\rightarrow \infty$ (or equivalently $VT\gg D$) and asymptotically low UAV speed, i.e., $V\rightarrow 0$ (or equivalently $VT \ll D$), respectively. We introduce a simple and practical UAV trajectory called hover-fly-hover (HFH), where the UAV successively hovers at a pair of initial and final locations above the line segment connecting the two GUs each with a certain amount of time, and flies unidirectionally between them at its maximum speed. Then, for the case of $T\to \infty$, we show that the HFH trajectory with hovering locations above the two GUs together with the TDMA based orthogonal multiuser transmission is capacity-achieving. In contrast, for the case of $V\rightarrow 0$, it is shown that the UAV should hover at a fixed location that is nearer to the GU with larger achievable rate and in general superposition coding (SC) based non-orthogonal transmission with interference cancellation at the receiver of the nearer GU is required. Furthermore, it is shown that in general there exists a significant capacity gap between the above two cases, which demonstrates the potential of exploiting the UAV's trajectory design and motivates the study for the general case with finite UAV maximum speed and flight duration. \item Next, for the case of finite UAV speed and flight duration, we prove that the proposed HFH trajectory is also optimal while SC is generally required to achieve the capacity. In addition, the initial and final hovering locations need to be properly selected from the points above the line segment between the two GUs to achieve the capacity region Pareto boundary. It is also observed that by increasing the UAV maximum speed and/or flight duration, the capacity region is effectively enlarged, especially for the low signal-to-noise ratio (SNR) case. To gain more insights, we further analyze the high SNR case and it is shown that the HFH trajectory reduces to a static point above one of the two GUs. This result implies that dynamic UAV movement is less effective for capacity enhancement as SNR increases. \item Last, for the sake of comparison, we further characterize the achievable rate region of the UAV-enabled two-user BC with the TDMA-based (instead of the optimal SC-based) transmission with finite UAV speed and flight duration. It is shown that the optimal UAV trajectory still follows the HFH structure as in the capacity-achieving case with SC-based transmission, while the difference lies in that the hovering locations can only be those above the two GUs in the TDMA case. It is also revealed that the capacity gain of the optimal SC-based transmission over the suboptimal TDMA-based transmission decreases as the UAV maximum speed and/or flight duration increases. \end{itemize} The rest of this paper is organized as follows. Section II introduces the system model and presents the problem formulation for capacity region characterization. In Sections III-V, we study the capacity region for two special cases and the general case, respectively. Section \ref{Section:TDMA} addresses the case with TDMA-based transmission. 
Finally, we conclude the paper in Section VII. \emph{Notations:} In this paper, scalars are denoted by italic letters, while vectors and matrices are denoted by bold-face lower-case and upper-case letters, respectively. $\mathbb{R}^{M\times 1}$ denotes the space of $M$-dimensional real-valued vectors. For a vector $\mathbf{a}$, $\|\mathbf{a}\|$ represents its Euclidean norm and $\mathbf{a}^T$ denotes its transpose. For a time-dependent function $\mathbf{x}(t)$, $\dot{\mathbf{x}}(t)$ denotes the derivative with respect to time $t$. For a set $\Aa$, $|\Aa|$ denotes its cardinality, ${\rm{int}}( \Aa)$ and $\partial \Aa$ represent the interior and the boundary of a set $\Aa$, respectively, and $\rm{Conv}(\Aa)$ represents the convex hull of a set $\Aa$, i.e., the set of all convex combinations of points in $\Aa$, ${\rm{Conv}}(\Aa)=\big\{ \sum^{|\Aa|}_{n=1}\alpha_nc_n: c_n\in \Aa,\ \alpha_n\geq 0, \sum_{n=1}^{|\Aa|}\alpha_n=1\big\}$. For two sets $\Aa$ and $\Bb$, $ \Aa \backslash \Bb$ is the set of all elements in $\Aa$ excluding those in $\Bb$. Notation $\mathbf{a} \preceq \mathbf{b}$ indicates that vector $\mathbf{a}$ is element-wisely less than or equal to vector $\mathbf{b}$. \vspace{-0.1cm} \section{System Model and Problem Formulation} As shown in Fig. \ref{system:model}, we consider a UAV-enabled BC with one UAV transmitting independent information to two GUs at fixed locations. Without loss of generality, we consider a two-dimensional (2D) Cartesian coordinate system. Let the location of each GU $k \in \{1,2\}$ be denoted by $(x_k,0)$, where $x_1 =-D/2$ m and $x_2 =D/2$ m with $D>0$ denoting their distance. The UAV is assumed to fly at a constant altitude of $H$ m. In practice, the value of $H$ is set based on regulations on the minimum UAV height as well as the communication system requirement. We focus on a particular UAV flight duration of $T$ s and denote the UAV's time-varying location at time instant $t \in \mathcal T \triangleq [0,T]$ by $(x(t), H)$. The system bandwidth is denoted by $B$ in Hertz (Hz) and hence the symbol period is $T_s=1/B$ s. We assume that $TB$ is sufficiently large such that the UAV can adopt Gaussian signaling with a sufficiently long symbol block length to achieve the channel capacity. It is also assumed that the UAV's location change within a symbol period is negligible compared to the altitude $H$, i.e., $VT_s\ll H$, where $V \ge 0$ denotes the UAV maximum speed in m/s. Thus, the UAV-GU channel is assumed to be constant within each symbol interval. Mathematically, we express the UAV speed constraint as \cite{JR:wu2017joint,JR:wu2017_ofdm}, \begin{align}\label{UAVspeed} |\dot{x}(t)| &\le V, \forall\, t\in \mathcal T. \end{align} For the purpose of exposition, we consider the free-space path loss model for the air-to-ground wireless communication channels from the UAV to the two GUs, as justified in Section I. It is assumed that the Doppler effect induced by the UAV mobility can be perfectly compensated at the user receivers \cite{zeng2016wireless,zhang2017cellular}. As a result, the channel power gain from the UAV to each GU $k$ at time instant $t$ is modeled as \begin{align}\label{eq:channelgain} \tilde{h}_k(x(t)) = \frac{\gamma_0}{(x(t) - x_k)^2+H^2}, \end{align} where $\gamma_0$ denotes the channel power gain at the reference distance $d_0=1$ m. At time instant $t\in\mathcal T$, let $s_1(t)$ and $s_2(t)$ denote the UAV's transmitted information-bearing symbols for GUs 1 and 2, respectively.
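As a quick numerical illustration of the free-space channel model in \eqref{eq:channelgain} (a sketch of ours rather than part of the system description; the parameter values are borrowed from the numerical examples in later sections), the channel gains and receive SNRs of the two GUs at a few representative UAV locations can be evaluated as follows.
\begin{verbatim}
import numpy as np

# Assumed example parameters (cf. the numerical results sections):
gamma0 = 10 ** (-50 / 10)          # reference channel gain: -50 dB
sigma2 = 10 ** ((-100 - 30) / 10)  # noise power: -100 dBm, in Watts
P_bar  = 10 ** ((10 - 30) / 10)    # transmit power: 10 dBm, in Watts
H, D   = 100.0, 1000.0             # UAV altitude and GU distance (m)
x_gu   = np.array([-D / 2, D / 2]) # GU 1 and GU 2 locations

def channel_gain(x_uav):
    """Free-space channel power gain from the UAV at (x_uav, H) to each GU."""
    return gamma0 / ((x_uav - x_gu) ** 2 + H ** 2)

for x in (-D / 2, 0.0, D / 2):     # three representative UAV positions
    snr_db = 10 * np.log10(P_bar * channel_gain(x) / sigma2)
    print(f"x = {x:7.1f} m, receive SNRs (GU1, GU2) = {snr_db.round(1)} dB")
\end{verbatim}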
Accordingly, the received signal at GU $k$ is expressed as \begin{align}\label{eq:033} y_k(t) = \sqrt{\tilde{h}_k(x(t))} \big(s_1(t) + s_2(t)\big)+ n_k(t), k\in\{1,2\}, \end{align} where $n_k(t)$ denotes the additive white Gaussian noise (AWGN) at the receiver of GU $k$. For simplicity, the noise power is assumed to be equal for the two GUs, denoted by $\sigma^2$. With given $x(t)$, the signal model in \eqref{eq:033} resembles a conventional fading BC consisting of one transmitter (the UAV) and two receivers (GUs) \cite{tse2005fundamentals}. In order to achieve the capacity region of this channel, the UAV transmitter should employ Gaussian signaling by setting $s_k(t)$'s as independent circularly symmetric complex Gaussian (CSCG) random variables with zero mean and variances $p_k(t) = \mathbb{E}(|s_k(t)|^2), k\in\{1,2\}$. Suppose that at each time instant $t$, the UAV is subject to a maximum transmit power constraint $\bar P$, similarly as assumed in \cite{tse1998multiaccess,li2001capacity}, i.e., \begin{align}\label{UAVpower} p_1(t) + p_2(t) \le \bar P, \forall t\in\mathcal T. \end{align} In this paper, we are interested in characterizing the capacity region of the UAV-enabled two-user BC, which consists of all the achievable average rate-pairs for the two GUs over the duration $T$, subject to the UAV's maximum speed constraint in \eqref{UAVspeed} and maximum power constraint in \eqref{UAVpower}. For given UAV trajectory $\Q\triangleq\{ x(t), t\in \T\}$ and power allocation $\pow \triangleq \{p_{k}(t), t\in \T, k\in \{1,2\}\}$, let $\C(\Q, \pow)$ denote the set of all achievable average rate-pairs ($r_1, r_2)$ in bits per second per Hertz (bps/Hz) for the two GUs, respectively, which need to satisfy the following inequalities \cite{tse1998multiaccess,jindal2004duality}: \begin{align} r_1 \le & \frac{1}{T}\int_{0}^T \log_2\big(1+ p_1(t)h_1(x(t))\big) {\rm{d}}t, \label{MAC:ineq1}\\ r_2 \le & \frac{1}{T}\int_{0}^T\log_2\big(1+ p_2(t)h_2(x(t))\big){\rm{d}}t, \label{MAC:ineq2}\\ r_1 + r_2 \le& \frac{1}{T}\int_{0}^T\log_2\big(1+ p_1(t)h_1(x(t)) +p_2(t)h_2(x(t))\big){\rm{d}}t, \label{MAC:ineq3} \end{align} where $h_k(x(t)) = \tilde{h}_k(x(t))/\sigma^2, k\in\{1,2\}$. Denote by $\X_1$ and $\X_2$ the feasible sets of $\Q$ and $\pow$ specified by the UAV's speed constraint \eqref{UAVspeed} and maximum power constraint \eqref{UAVpower}, respectively. Then, the capacity region of the UAV-enabled two-user BC is defined as \begin{align}\label{capacityregion} \C (V,T,\bar P) = \bigcup_{\Q \in \X_1,\pow \in \X_2}\C (\Q, \pow). \end{align} Our objective is to characterize the Pareto boundary (or the upper-right boundary) of the capacity region $\C (V,T,\bar P)$ by jointly optimizing the UAV trajectory $\Q$ and power allocation $\pow$. The Pareto boundary consists of all the achievable average rate-pairs at each of which it is impossible to improve the average rate of one GU without simultaneously decreasing that of the other GU. Since it remains unknown yet whether the capacity region $\C (V,T,\bar P)$ is a convex set or not, we apply the rate-profile technique in \cite{zhang2010cooperative} that ensures a complete characterization of the capacity region, even if it is non-convex.\footnote{Another commonly adopted approach for characterizing the Pareto boundary is to maximize the weighted sum of the average rates of the two GUs. 
Although this approach is effective in characterizing Pareto boundary points for convex capacity regions, it may fail to obtain all the boundary points when the capacity region is a non-convex set \cite{Boyd}.} Specifically, let $\mv\alpha = (\alpha_1,\alpha_2)$ denote a rate-profile vector which specifies the rate allocation between the two GUs with $\alpha_k\geq 0, k\in\{1,2\}$, and $\alpha_1+\alpha_2=1$. Here, a larger value of $\alpha_k $ indicates that GU $k$ has a higher priority in information transmission to achieve a larger average rate. Then, the characterization of each Pareto-boundary point corresponds to solving the following problem, \begin{align} \text{(P1)}: ~~\max_{\{r,r_1,r_2,\Q, \pow\}} & r \\ \mathrm{s.t.}~~~~& r_k \ge \alpha_k r, \forall\, k\in\{1,2\},\label{eq:9}\\ &(r_1,r_2) \in \C(\Q, \pow),\label{P1:11}\\ &p_1(t) + p_2(t) \le \bar P, \forall\,t\in \T,\label{P1:12}\\ &|\dot{x}(t)| \le V, \forall\, t\in \T, \label{P1:speed} \\ &p_k(t) \geq 0, \forall\, t\in \T, k\in\{1,2\},\label{P1:14} \end{align} where $r$ denotes the achievable sum average rate of the two GUs. Problem (P1) is challenging to solve optimally for the following two reasons. First, the constraints in \eqref{P1:11} are non-convex, as the rate functions in $\C(\Q, \pow )$ (i.e., the right-hand-sides (RHSs) of \eqref{MAC:ineq1}, \eqref{MAC:ineq2}, and \eqref{MAC:ineq3}) are non-concave with respect to $\Q$. Second, problem (P1) involves an infinite number of optimization variables (e.g., $x(t)$'s over continuous time $t$). As a result, (P1) is a highly non-convex optimization problem and, in general, there is no standard method to solve such a problem efficiently. For notational convenience, the optimal UAV trajectory for problem (P1) under any given $(\alpha_1,\alpha_2)$ is denoted by $\{x^*(t)\}$, and the optimal power allocation is denoted by $\{p_k^*(t), k\in\{1,2\}\}$. Furthermore, the corresponding rate-pair achieved by the above optimal UAV trajectory and power allocation is denoted by $(r^*_1, r^*_2)$, which is on the (Pareto) boundary of the capacity region $\C (V,T,\bar P)$. \vspace{-0.25cm} \subsection{ Capacity Region Properties and HFH Trajectory} Before explicitly characterizing the capacity region $\C (V,T,\bar P)$, we first provide some interesting properties of this region, which can be used to simplify the optimization of the UAV trajectory in (P1) later. \begin{lemma}\label{lemma0} The capacity region $\C (V,T,\bar P)$ is symmetric with respect to the line $r_1=r_2$. \end{lemma} \begin{proof} Suppose that a rate-pair $(\check{r}_1,\check{r}_2)\in \C (V,T,\bar P)$ is achieved by $\check{\Q}=\{ \check{x}(t), t\in \T\}$ and $\check{\pow}= \{\check{p}_{k}(t), t\in \T, k\in\{1,2\}\}$. Then we can construct another solution $\hat{\Q}=\{\hat x(t),t\in \T\}$ and $\hat{\pow}=\{\hat p_k(t), t\in \T, k\in\{1,2\}\} $ with $\hat x(t) = -\check{x}(t)$, $\hat p_1(t)=\check{p}_2(t)$, and $\hat p_2(t)=\check{p}_1(t)$, $\forall\, t$, which can be easily shown to achieve the symmetric rate-pair $(\check{r}_2,\check{r}_1)$. As the newly constructed solution is also feasible to (P1), this lemma is thus proved. \end{proof} Based on Lemma \ref{lemma0}, it is evident that the boundary of $\C (V,T,\bar P)$ is also symmetric with respect to the line $r_1=r_2$. \begin{lemma}\label{lemma1} For problem (P1), the optimal UAV trajectory satisfies $x^*(t) \in [-D/2, D/2], \forall\, t\in \T$, i.e., the UAV should stay above the line segment between the two GUs.
\end{lemma} \begin{proof} Suppose that the optimal UAV trajectory does not always lie within the interval $[-D/2, D/2]$. Then we can always construct a new trajectory with $x^*(t) \in [-{D}/{2}, {D}/{2}], \forall\, t\in \T$, which simultaneously decreases the distances from the UAV to both GUs, thus resulting in a strictly componentwise larger rate-pair based on \eqref{eq:channelgain} and \eqref{MAC:ineq1}-\eqref{MAC:ineq3}. This thus completes the proof. \end{proof} \begin{lemma}\label{lem:flydirection} For problem (P1), there always exists an optimal UAV trajectory $\{x^*(t)\}$ that is unidirectional, i.e., $x^*(t_1) \le x^*(t_2)$ if $t_1<t_2$, $\forall\, t_1, t_2\in \T$. \end{lemma} \begin{proof} Suppose that $\{x^*(t)\}$ is a non-unidirectional optimal UAV trajectory to problem (P1), which implies that the UAV visits some locations more than once. We denote the total time that the UAV stays at such a location $x_A$ ($ \min\limits_{t\in \T} \{x^*(t)\}\le x_A\le \max\limits_{t\in \T} \{x^*(t)\}$) by $\delta_{t,A}>0$. Then, we show that there always exists an alternative unidirectional UAV trajectory that achieves the same objective value of (P1). Specifically, we construct a unidirectional UAV trajectory $\{\hat{x}(t)\}$ with $\hat{x}(0)= \min\limits_{t\in \T} \{x^*(t)\}$ and $\hat{x}(T)= \max\limits_{t\in \T} \{x^*(t)\}$ as its initial and final locations, respectively, where the UAV stays at location $x_A$ with a duration $\delta_{t,A}$, i.e., $\hat{x}(t)=x_A$, $t\in [t_A, t_A+\delta_{t,A}]$, with $t_A$ being the time instant at which the UAV reaches location $x_A$. It is easy to show that $\hat{x}(t)$ is feasible to (P1) and always achieves the same objective value as $x^*(t)$. This thus completes the proof. \end{proof} With Lemmas \ref{lemma1} and \ref{lem:flydirection}, we only need to consider unidirectional UAV trajectories within $[-{D}/{2}, {D}/{2}]$ in the rest of the paper. In addition, as implied by Lemma \ref{lemma0}, we only need to obtain the boundary point $(r^*_1,r^*_2)$ with $r^*_2\ge r^*_1$ on one side of the line $r_1 = r_2$. This corresponds to the optimal solution to problem (P1) in the case with $\alpha_2 \ge \alpha_1$. When $\alpha_1=0$ or $\alpha_2=0$, it is easy to show that the optimal rate-pair is $(r^*_1, r^*_2)= (0, \log_2(1+ \frac{\bar P\beta_0}{H^2}))$ or $(r^*_1, r^*_2)=(\log_2(1+ \frac{\bar P\beta_0}{H^2}), 0)$, where $\beta_0 \triangleq \frac{\gamma_0}{\sigma^2}$. Next, we introduce a simple and yet practical HFH UAV trajectory, which will be shown optimal for (P1) in the sequel. Specifically, with the HFH trajectory, the UAV successively hovers at a pair of initial and final locations, denoted by $x_{\rm I}$ and $x_{\rm F}$, respectively, with $-D/2\leq x_{\rm I}\leq x_{\rm F}\leq D/2$, each for a certain amount of time, denoted by $t_{\rm I}\geq 0$ and $t_{\rm F}\geq 0$ with $t_{\rm I} +t_{\rm F}\leq T$, and flies at the maximum speed $V$ between them. Mathematically, the HFH trajectory is generally given by \begin{align}\label{optimal:trj} x(t)=\left\{\begin{aligned} & x_{\rm I},\qquad \qquad\qquad ~ t \in \T_1, \\ & x_{\rm I} + (t-t_{\rm I})V, ~~~~ t \in \T_2,\\ & x_{\rm F}, \qquad \quad \quad ~~~~~~ t \in \T_3, \end{aligned} \right. \end{align} where $\T_1= [0,t_{\rm I} ]$, $\T_2 = (t_{\rm I}, T-t_{\rm F})$, $\T_3= [T-t_{\rm F},T]$, and $t_{\rm I} + \frac{x_{\rm F}-x_{\rm I}}{V} + t_{\rm F}=T$.
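For concreteness, the HFH trajectory in \eqref{optimal:trj} can be generated by the following minimal Python sketch (our illustration; the function and variable names are not part of the original design).
\begin{verbatim}
def hfh_position(t, x_I, x_F, t_I, V, T):
    """UAV x-coordinate at time t under the hover-fly-hover trajectory:
    hover at x_I for t_I seconds, fly to x_F at maximum speed V, then
    hover at x_F for the remaining t_F = T - t_I - (x_F - x_I) / V."""
    fly_time = (x_F - x_I) / V if V > 0 else 0.0
    assert t_I >= 0 and t_I + fly_time <= T, "infeasible HFH parameters"
    if t <= t_I:                       # initial hovering phase
        return x_I
    elif t < t_I + fly_time:           # maximum-speed flying phase
        return x_I + (t - t_I) * V
    return x_F                         # final hovering phase

# Example: D = 1000 m, V = 30 m/s, T = 60 s, hover 20 s above GU 1 first.
samples = [hfh_position(t, -500.0, 500.0, 20.0, 30.0, 60.0)
           for t in range(0, 61, 10)]
\end{verbatim}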
It follows from \eqref{optimal:trj} that under the HFH trajectory, the UAV hovers at at most two different locations; in particular, if $x_{\rm I}=x_{\rm F}$, then the UAV trajectory reduces to hovering at a fixed location during the entire flight duration $T$. In the following two sections, we first investigate the solutions to (P1) for the two special cases with $T\to \infty$ and $V\rightarrow0$, respectively. \section{Capacity Characterization with Large Flight Duration}\label{section:infinite} In this section, we study the special case when the UAV flight duration is asymptotically large, i.e., $T\rightarrow \infty$, where the corresponding capacity region is denoted by $\C ( V,\infty,\bar P)$. To this end, we first ignore the UAV maximum speed constraint \eqref{P1:speed} in (P1) and derive its optimal solution for any $T>0$. Then, we show that the resulting capacity region is equal to $\C ( V,\infty,\bar P)$ as $T\rightarrow \infty$. By dropping constraint \eqref{P1:speed} under finite $T$, problem (P1) is reduced to \begin{align} \text{(P2)}: ~~\max_{\{r,r_1,r_2,\Q, \pow\}} & r \\ \mathrm{s.t.}~~~~& \eqref{eq:9},\eqref{P1:11}, \eqref{P1:12}, \eqref{P1:14}, \label{P2:eq:9} \end{align} whose optimal objective value serves as an upper bound of that of problem (P1). Although problem (P2) is a non-convex optimization problem, we obtain its optimal solution as in the following lemma. \begin{lemma}\label{lemm:P2} Under given $(\alpha_1, \alpha_2)$ with $\alpha_1 + \alpha_2 = 1$, the optimal trajectory and power allocation solution to (P2) is given as $x^*(t) = -D/2$, $p^*_1(t) = \bar P$, $p^*_2(t) = 0$, $\forall t \in \hat{\T}_1$, and $x^*(t) = D/2, p^*_2(t) = \bar P, p^*_1(t) = 0, \forall t \in \hat{\T}_2$, where $\hat{\T}_1 = [0, \alpha_1T)$, and $\hat{\T}_2 = [\alpha_1T, T]$. Accordingly, the optimal rate-pair is obtained as $r^*_1 = \alpha_1\log_2(1+ \frac{\bar P\beta_0}{H^2})$ and $r^*_2 = \alpha_2\log_2(1+ \frac{\bar P\beta_0}{H^2})$. \end{lemma} \begin{proof} Please refer to Appendix A. \end{proof} Based on Lemma \ref{lemm:P2} and by changing the values of $\alpha_1$ and $\alpha_2$ for (P2), the capacity region without considering the UAV maximum speed constraint \eqref{P1:speed}, denoted by $\hat{\C}(\bar P)$, is given in the following proposition. \begin{proposition}\label{prop1} In the absence of constraint \eqref{P1:speed}, the capacity region $\hat{\C}(\bar P)$ of the UAV-enabled two-user BC is given by \begin{align}\label{eq:region1} \hat{\C}(\bar P)= \bigg \{ (r_1,r_2) : & ~ r_1+r_2 \leq\log_2\left(1+ \frac{\bar P\beta_0}{H^2}\right), r_1\geq 0, r_2\geq 0\bigg\}, \end{align} which is a right triangle with two equal legs of length $\log_2\big(1+ \frac{\bar P\beta_0}{H^2}\big)$. \end{proposition} From Lemma \ref{lemm:P2} and Proposition \ref{prop1}, the optimal UAV trajectory for achieving the boundary points of $\hat{\C}(\bar P)$ is to let the UAV successively hover above each of the two GUs for communication in a TDMA manner. It is worth pointing out that the UAV maximum speed is finite in practice and thus constraint \eqref{P1:speed} cannot be ignored in general. As a result, the capacity region $\hat{\C}(\bar P)$ in \eqref{eq:region1} generally serves as an ``upper bound'' of the capacity region with finite $V$. However, as shown in the following theorem, $\hat{\C}(\bar P)$ can be asymptotically achieved when $T$ is sufficiently large for any $V>0$.
\begin{theorem}\label{achievingTDMA} As $T\rightarrow \infty$, we have $\C ( V,\infty,\bar P) = \hat{\C}(\bar P)$, $\forall\, V>0$, where the optimal UAV trajectory follows the HFH structure in \eqref{optimal:trj} with $x_{\rm I}=-D/2$ and $x_{\rm F}=D/2$, and the TDMA-based transmission is capacity-achieving. \end{theorem} \begin{proof} First, it is evident that $\C(V,T,\bar P) \subseteq \hat{\C}(\bar P)$ for any $V>0$ and $T>0$. Next, we show that the HFH trajectory in \eqref{optimal:trj} with $x_{\rm I} = -D/2$ and $x_{\rm F} = D/2$ together with TDMA-based transmission achieves the boundary of $\hat{\C}(\bar P)$ as $T \rightarrow \infty$, as follows. For any boundary point $(r^{**}_1, r^{**}_2)$ of $\hat{\C}(\bar P)$ in \eqref{eq:region1} satisfying $r^{**}_1 = \alpha_1\log_2( 1+ \frac{\bar P\beta_0}{H^2})$, $r^{**}_2 = \alpha_2\log_2( 1+ \frac{\bar P\beta_0}{H^2})$, $\alpha_1 +\alpha_2 =1$, we can construct a feasible solution for (P1) where the UAV flies at the maximum speed between the two GUs and hovers above GUs 1 and 2 for $\alpha_1$ and $\alpha_2$ proportions of the remaining time, respectively. In addition, the UAV only transmits information to GU 1 or 2 when hovering above that GU via TDMA. Thus, the corresponding achievable rate-pair of the two GUs, denoted by $(r_1^{\star},r_2^{\star})$, is given by $r_1^{\star} = \alpha_1(1-\frac{D}{VT})\log_2( 1+ \frac{\bar P\beta_0}{H^2})$ and $r_2^{\star} = \alpha_2(1-\frac{D}{VT})\log_2( 1+ \frac{\bar P\beta_0}{H^2})$, where $\frac{D}{VT}$ corresponds to the proportion of time for the UAV's maximum-speed flying from $x_{\rm I} = -D/2$ to $x_{\rm F} = D/2$. As $T\rightarrow \infty$, $(r_1^{\star} , r_2^{\star}) \rightarrow (r^{**}_1, r^{**}_2)$ for any $V>0$ and $D>0$ since $\frac{D}{VT}\rightarrow 0$. Based on the facts that $(r_1^{\star} , r_2^{\star})\preceq (r_1^{*} , r_2^{*})$ and $(r_1^{*} , r_2^{*})\preceq (r_1^{**} , r_2^{**})$ (due to $\C(V,T,\bar P) \subseteq \hat{\C}(\bar P)$), we have $ (r_1^{*} , r_2^{*}) \rightarrow (r^{**}_1, r^{**}_2)$ as $T\rightarrow \infty$. Thus, the unidirectional HFH trajectory with $x_{\rm I}=-D/2$ and $x_{\rm F}=D/2$, together with TDMA, is asymptotically optimal and $\C (V, \infty,\bar P) = \hat{\C}(\bar P)$. \end{proof} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{region_infinity.eps} \caption{The capacity region of a UAV-enabled two-user BC when $T\to \infty$. } \label{region:infty}\vspace{-0.6cm} \end{figure} For the purpose of illustration, Fig. \ref{region:infty} shows the capacity region $\C (V, \infty,\bar P)$ with noise power $\sigma^2=-100$ dBm, $\gamma_0= -50$ dB, $H = 100$ m, $D = 1000$ m, and $\bar P= 10$ dBm. For comparison, we also show the capacity region achieved when the UAV is fixed at a given location $x$, where $x=-D/2$, $0$, or $D/2$. In this case, the system becomes a conventional two-user AWGN BC with constant channel gains and its capacity region is denoted by $\mathcal{C}_{f}(x)$.
Based on the uplink-downlink duality \cite{jindal2004duality}, we have $\mathcal{C}_{f}(x)= \underset{p_1+p_2\le \bar P, p_1, p_2\ge 0}{ \bigcup} \C_{\rm{MAC}} (x, p_1,p_2)$, where $\C_{\rm MAC} (x, p_1,p_2)$ denotes the capacity region of the dual two-user MAC specified by the following inequalities \cite{tse1998multiaccess} (or equivalently $\C(\Q,\pow)$ with $x(t) = x, p_k(t) = p_k, \forall t\in \T, k\in\{1,2\}$): \begin{align} r_1 \le & \log_2\big(1+ p_1h_1(x)\big), \label{eq:fixed1}\\ r_2 \le & \log_2\big(1+ p_2h_2(x)\big), \label{eq:fixed2}\\ r_1 + r_2\le & \log_2\big(1+ p_1h_1(x) +p_2h_2(x)\big) \label{eq:fixed3}, \end{align} where $h_k(x) = \frac{\beta_0}{(x- x_k)^2+H^2}, k\in\{1,2\}$. It is known that the capacity region $\mathcal{C}_{f}(x)$ is convex and its boundary is generally achieved by SC-based non-orthogonal transmission with interference cancellation at the receiver of the GU with higher channel gain (or nearer to the UAV in our context) \cite{tse2005fundamentals}.\footnote{For degraded BC, dirty paper coding (DPC) also achieves the same capacity boundary as SC \cite{tse2005fundamentals}. Without loss of optimality, we consider SC in this paper.} For example, in Fig. \ref{region:infty}, when $x=D/2$ or $x=-D/2$, the corresponding boundary points of $\mathcal{C}_{f}(x)$ can only be achieved by applying SC while TDMA is strictly suboptimal (except for the two extreme points)\cite{tse2005fundamentals}. By contrast, when $x=0$, the two GUs have the same channel gain and thus both SC and TDMA are optimal. Interestingly, based on Theorem \ref{achievingTDMA}, when the UAV mobility can be fully exploited (say, with untethered UAVs) with $T\to \infty$ (or equivalently $D\ll VT$), TDMA-based orthogonal transmission along with the simple HFH UAV trajectory is capacity-achieving whereas SC is not required. Nevertheless, it is also worth pointing out that the significant capacity gain by exploiting the high mobility UAV over the static UAV comes at the cost of transmission delay at one of the two GUs (e.g., GU 2 needs to wait for about $T/2$ to be scheduled for transmission, which can be substantial when $T$ becomes large). Therefore, there is a fundamental throughput-delay trade-off in wireless communications enabled by high-mobility UAVs \cite{lyu2016cyclical, JR:wu2017_ofdm}. \begin{remark} Based on Proposition \ref{prop1}, we next provide a property of capacity region for finite flight duration, which helps reveal the fundamental reason why the UAV mobility can potentially enlarge the capacity region: $\C (V,T,\bar P)$ is non-convex for $0< T< \infty$. The non-convexity of the capacity region essentially suggests that it can be enlarged if the convex combinations of the rate-pairs can be achieved (generally achieved at different locations). This is the reason why the UAV mobility/movement can be helpful, leading to a time-sharing of multiple locations. \end{remark} \vspace{-0.25cm} \section{Capacity Characterization with Limited UAV Mobility} In this section, we study the other special case with limited UAV mobility, i.e., $V\rightarrow 0$ (or equivalently $VT\ll D$). In this case, the UAV's horizontal movement has negligible impact on the UAV-GU channels with $H\gg VT$ (since $H$ is comparable with $D$). As a result, the UAV should hover at a fixed location during the entire $T$ once it is deployed (e.g., a tethered UAV), i.e., $x(t) = x, \forall\, t\in\mathcal T$, which becomes a special case of the proposed HFH trajectory in \eqref{optimal:trj} with $x_{\rm I}=x_{\rm F}$. 
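In this fixed-location regime, the key building block is the region $\C_f(x)$ characterized above. As a side illustration (ours, under the same assumed parameter values as in the numerical examples), the following sketch traces the SC-achievable boundary of $\C_f(x)$ by sweeping the power split between the two GUs, with the stronger GU cancelling the signal intended for the weaker GU before decoding its own.
\begin{verbatim}
import numpy as np

gamma0, sigma2 = 1e-5, 1e-13       # -50 dB gain, -100 dBm noise (assumed)
P_bar, H, D = 1e-2, 100.0, 1000.0  # 10 dBm power, geometry in meters
beta0 = gamma0 / sigma2

def gains(x):
    """Normalized gains h_1(x), h_2(x) seen from UAV position x."""
    return (beta0 / ((x + D / 2) ** 2 + H ** 2),
            beta0 / ((x - D / 2) ** 2 + H ** 2))

def sc_boundary(x, num=200):
    """Rate-pairs on the SC boundary of C_f(x): the weaker GU treats the
    other signal as noise; the stronger GU cancels it before decoding."""
    h1, h2 = gains(x)
    points = []
    for p2 in np.linspace(0.0, P_bar, num):   # power allocated to GU 2
        p1 = P_bar - p2
        if h2 >= h1:   # GU 2 is the stronger user here
            r1 = np.log2(1 + p1 * h1 / (p2 * h1 + 1))
            r2 = np.log2(1 + p2 * h2)
        else:          # GU 1 is the stronger user; roles are reversed
            r1 = np.log2(1 + p1 * h1)
            r2 = np.log2(1 + p2 * h2 / (p1 * h2 + 1))
        points.append((r1, r2))
    return points

boundary = sc_boundary(0.4 * D)   # e.g., the location x = 2D/5
\end{verbatim}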
In this case, solving (P1) is equivalent to finding the optimal hovering location of the UAV, $x$, given the rate-profile parameters $(\alpha_1, \alpha_2)$, as well as the corresponding transmit powers, $p_1(t) = p_1$ and $p_2(t) = p_2$, $\forall\, t\in\mathcal T$, and rates $r_1$ and $r_2$ for GUs 1 and 2, respectively. As such, $\C(\Q, \pow) = \C_{\rm{MAC}} (x, p_1,p_2)$ holds and problem (P1) is reformulated as \begin{align} \text{(P3)}: ~~\max_{\{r,r_1,r_2, x, p_1,p_2\}} & r \\ \mathrm{s.t.}~~~~& r_k \ge \alpha_k r, \forall\, k\in\{1,2\},\label{P3:eq01}\\ &(r_1,r_2) \in \C_{\rm MAC}(x, p_1,p_2),\label{P3:eq02}\\ &p_1 + p_2 \le \bar P, \label{P3:eq03}\\ &p_k \geq 0, k\in\{1,2\}. \label{P3:eq05} \end{align} \begin{proposition}\label{thm2} For problem (P3) with $\alpha_2 \ge \alpha_1$, there always exists an optimal UAV hovering location $x^*$, such that $0 \le x^* \le D/2$. \end{proposition} \begin{proof} Please refer to Appendix B. \end{proof} Proposition \ref{thm2} suggests that when $\alpha_2 \ge \alpha_1$, the UAV should be placed closer to GU 2 such that GU 2 has a larger channel gain than GU 1. As a result, GU 2 needs to decode GU 1's signal first and then decode its own signal after canceling the interference from GU 1's signal. Therefore, we have $r_1 = \log_2(1+\frac{p_1h_1(x) }{p_2h_1(x)+1})$ and $r_2 = \log_2\left(1+p_2h_2(x)\right)$, where $h_1(x)= \frac{\beta_0}{ (x+\frac{D}{2})^2+H^2 }$ and $h_2(x)= \frac{\beta_0}{ (x-\frac{D}{2})^2+H^2 }$. As the inequalities in \eqref{P3:eq01} must be tight at the optimum, (P3) can be transformed into the following problem, \begin{align} \text{(P4)}: ~~ \max_{\{x,p_2\}} ~&\frac{1}{\alpha_2}\log_2\left(1+p_2h_2(x)\right) \\ \mathrm{s.t.}~& \alpha_1 \log_2\left(1+p_2h_2(x)\right) =\alpha_2 \log_2\left(1+\frac{(\bar P-p_2)h_1(x) }{p_2h_1(x)+1}\right),\label{eq:28}\\ &0 \le p_2\le \bar P,~ x\in [0,D/2 ]. \end{align} Note that for $\alpha_k>0$, $k\in \{1,2\}$, the LHS (RHS) of \eqref{eq:28} increases (decreases) with $p_2$. It thus follows that under any given $x$, the optimal solution of $p_2$ is unique and can be directly obtained by solving the equality in \eqref{eq:28} with a bisection search, and thus the objective value of (P4) can be obtained accordingly. Therefore, to solve problem (P4), we only need to apply the one-dimensional search over $x\in [0,D/2]$, together with a bisection search for $p_2$ under each given $x$. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{region_zero3.eps} \caption{The capacity region of a UAV-enabled two-user BC when $V\rightarrow0$. } \label{region:zero}\vspace{-0.5cm} \end{figure} Fig. \ref{region:zero} shows the capacity region $\C (0,T,\bar P)$ of the UAV-enabled two-user BC as $V\rightarrow 0$. The parameters are the same as those for Fig. \ref{region:infty}. It is observed that $\C (0,T,\bar P)$ is a non-convex set that is larger than the two-user AWGN BC capacity region $\C_f(x)$ at any fixed location $x$, thanks to the location optimization for the UAV based on the GU rate requirements (or rate-profile vector). In addition, it is interesting to observe that at some locations, e.g., $x = 2D/5$ and $x = -D/4$, the fixed-location capacity region $\C_f(x)$ touches the boundary of $\C (0,T,\bar P)$, while at other locations, e.g., $x = 0$, $\C_f(x)$ lies strictly inside $\C (0,T,\bar P)$. The latter observation suggests that some locations are inferior to the others in the sense that they achieve componentwise smaller rate-pairs.
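Returning to problem (P4), the nested search described above, i.e., a one-dimensional grid over the hovering location $x$ with an inner bisection for $p_2$, can be sketched as follows (our illustration; the discretization, tolerance, and rate-profile values are assumed for the example).
\begin{verbatim}
import numpy as np

gamma0, sigma2, P_bar = 1e-5, 1e-13, 1e-2   # assumed example parameters
H, D = 100.0, 1000.0
beta0 = gamma0 / sigma2
h1 = lambda x: beta0 / ((x + D / 2) ** 2 + H ** 2)
h2 = lambda x: beta0 / ((x - D / 2) ** 2 + H ** 2)

def solve_P4(alpha1, alpha2, grid=200, tol=1e-9):
    """Grid over x in [0, D/2]; for each x, bisection for the unique p2
    meeting the rate-profile equality; assumes 0 < alpha1 <= alpha2."""
    best = (-np.inf, None, None)             # (common rate r, x, p2)
    for x in np.linspace(0.0, D / 2, grid):
        def g(p2):    # increasing in p2, with g(0) < 0 <= g(P_bar)
            return (alpha1 * np.log2(1 + p2 * h2(x))
                    - alpha2 * np.log2(1 + (P_bar - p2) * h1(x)
                                       / (p2 * h1(x) + 1)))
        lo, hi = 0.0, P_bar
        while hi - lo > tol * P_bar:
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
        p2 = 0.5 * (lo + hi)
        r = np.log2(1 + p2 * h2(x)) / alpha2   # objective of (P4)
        if r > best[0]:
            best = (r, x, p2)
    return best

r_star, x_star, p2_star = solve_P4(alpha1=0.3, alpha2=0.7)
\end{verbatim}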
The above observation further implies that when $V>0$, the UAV should hover at such superior locations rather than at the inferior ones in order to maximize the GU average rates. This is illustrated by observing $\C_f(0) \subset \C_f(2D/5) \bigcup \C_f(-D/4)$ in Fig. \ref{region:zero}. However, since the UAV's speed is finite in practice, it may need to fly over some inferior locations (e.g., $x=0$), in order to travel between and hover over different superior locations that are far apart (e.g., $x=2D/5$ and $x=-D/4$) in a time-sharing manner. This intuitively explains why the UAV should fly at the maximum speed in the optimal HFH trajectory for the general case with finite UAV maximum speed $V$ and flight duration $T$, as will be rigorously proved in the next section. Finally, by comparing $\C (0,T,\bar P)$ with $\C (V, \infty,\bar P)$, it is observed that significant capacity improvement can be achieved by increasing the UAV maximum speed and/or flight duration. \section{Capacity Characterization for Finite UAV Speed and Flight Duration}\label{section:capacity:region} In this section, we characterize the capacity region by solving problem (P1) for the general case with finite UAV maximum speed $V$ and flight duration $T$. \vspace{-0.3cm} \subsection{Capacity Region Characterization} First, we reveal an important property of the optimal UAV trajectory solution to problem (P1) with any given $V>0$ and $T>0$, based on which we show that the HFH UAV trajectory is capacity-achieving. According to Lemmas \ref{lemma1} and \ref{lem:flydirection}, we consider a unidirectional UAV trajectory without loss of generality, in which the initial and final locations are denoted by $x^*(0)= x_{\rm I}$ and $x^*(T) = x_{\rm F}$, respectively, with $-D/2 \leq x_{\rm I}\leq x_{\rm F}\leq D/2$. Intuitively, the fixed-location capacity regions of $x_{\rm I}$ and $x_{\rm F}$, i.e., $\C_f(x_{\rm I})$ and $\C_f(x_{\rm F})$, should dominate those of the other locations on the line segment between them in terms of achievable rate-pairs, as affirmed by the following proposition. \begin{proposition}\label{trj_lemma} At the optimal UAV trajectory solution $\{x^*(t)\}$ to problem (P1), it must hold that \begin{align}\label{eq:V1} \C_f(x_A) \subseteq {\rm{Conv}}\big( \C_f(x_{\rm I}) \bigcup \C_f(x_{\rm F})\big), \end{align} for any location $x_A$ between the initial and final locations $x_{\rm I}$ and $x_{\rm F}$, i.e., $x_{\rm I}\le x_A\le x_{\rm F}$. \end{proposition} \begin{proof} Please refer to Appendix C. \end{proof} \begin{figure}[!t] \centering \includegraphics[width=0.35\textwidth]{region_illustration2.eps} \caption{Illustration of Proposition \ref{trj_lemma} where $\C_f(x_A) \subseteq {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F}) \big)$ for any location $x_A$ between $x_{\rm I}$ and $x_{\rm F}$. } \label{region:infty0}\vspace{-0.7cm} \end{figure} Proposition \ref{trj_lemma} essentially implies that for any given rate-pair in the fixed-location capacity region $\C_f(x_A)$ at any intermediate location $x_A$, there always exists a rate-pair on the boundary of the convex hull ${\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F}) \big)$ that is componentwise no smaller, as illustrated by Fig. \ref{region:infty0}. Based on Proposition \ref{trj_lemma}, the optimal UAV trajectory to problem (P1) is obtained as follows. \begin{theorem}\label{prop4} For problem (P1) with given $T>0$ and $V>0$, the HFH trajectory in \eqref{optimal:trj} is optimal. \end{theorem} \begin{proof} Please refer to Appendix D.
\end{proof} Based on Theorem \ref{prop4}, the optimal UAV trajectory $\{x^*(t)\}$ is determined only by the initial and final hovering locations $x_{\rm I}$ and $x_{\rm F}$ as well as the hovering time $t_{\rm I}$ at location $x_{\rm I}$. Accordingly, the optimal hovering time $t_{\rm F}$ can be obtained as $t_{\rm F}=T-t_{\rm I} - \frac{x_{\rm F}-x_{\rm I}}{V}$. Therefore, we can solve problem (P1) by first optimizing the power allocation under any given UAV trajectory, and then searching over the three variables $x_{\rm I}$, $x_{\rm F}$, and $t_{\rm I}$ to obtain the optimal UAV trajectory for any given $(\alpha_1, \alpha_2)$. Specifically, based on a fixed HFH UAV trajectory $\{x(t)\}$, (P1) is reduced to the following problem, \begin{align} \text{(P5)}: ~~\max_{\{r,r_1,r_2, \pow \}} & r \\ \mathrm{s.t.}~& r_k \ge \alpha_k r, \forall\, k\in\{1,2\},\label{eq:99}\\ &(r_1,r_2) \in \mathcal{C}(\Q, \pow ),\\ &p_{1}(t)+ p_{2}(t)\leq \bar P, \forall\, t\in \T,\\ &p_k(t) \geq 0, \forall\, t\in \T, k\in\{1,2\}. \end{align} Note that (P5) is a convex optimization problem and thus can be solved efficiently by applying the well-established polymatroid structure and the Lagrange duality method \cite{tse1998multiaccess,Boyd}. Therefore, the optimal rate-pair $(r_1^*, r_2^*)$ corresponding to rate-profile $(\alpha_1,\alpha_2)$ can be found by applying a three-dimensional search on $x_{\rm I}$, $x_{\rm F}$, and $t_{\rm I}$, and selecting their values to maximize $r$ in (P5). The details are omitted here due to the space limitation. Notice that in this case, SC-based transmission is generally required. \vspace{-0.5cm} \begin{figure*}[!t] \begin{minipage}[t]{0.50\linewidth} \centering \includegraphics[width=3.2in, height=2.5in]{CDMA_mobile2.eps} \caption{The capacity region of a UAV-enabled two-user BC with $\bar P = 10$ dBm.} \label{region:CDMA:mobile} \end{minipage} ~~ \begin{minipage}[t]{0.50\linewidth} \centering \includegraphics[width=3.2in, height=2.5in]{highSNR.eps} \caption{The capacity region of a UAV-enabled two-user BC with $\bar P = 30$ dBm. } \label{region:highSNR} \end{minipage}\vspace{-0.5cm} \end{figure*} \subsection{Numerical Results} In Fig. \ref{region:CDMA:mobile}, the capacity region $\C (V,T,\bar P)$ for finite UAV maximum speed and flight duration is shown under different setups. The parameters are the same as those for Figs. \ref{region:infty} and \ref{region:zero}. It is interesting to observe that although $\C (V,T,\bar P)$ is generally a non-convex set, its boundary has a general \emph{concave-convex-concave} shape when $V$ and $T$ are finite. This observation helps explain whether the UAV movement is able to enlarge the capacity region or not. Specifically, when $V=0$, the boundary is convex for $r_2 \in [1, 3]$ bps/Hz, which implies that the convex combination of any two boundary points (rate-pairs) in this regime always achieves a componentwise larger rate-pair than any boundary points between them. As such, increasing the UAV maximum speed and/or flight duration enables the UAV to fly closer to each GU and thus achieves higher rate-pairs, leading to an enlarged capacity region. By contrast, the boundary is concave for $r_2 \in [3.5, 6]$ bps/Hz, which means that the convex combination of any two boundary points within this regime will achieve a componentwise smaller rate-pair than any boundary points between them.
This suggests that if $V$ and $T$ are small such that the UAV can only fly locally among these locations, it is not desirable for the UAV to move in terms of achieving a componentwise larger rate-pair. This is in fact the reason why in Fig. \ref{region:CDMA:mobile}, when $r_2 \in [3.5, 6]$ bps/Hz, the boundary for $V=30$ m/s and $T=20$ s remains the same as that for $V=0$. However, when the duration $T$ is further increased from $20$ s to $60$ s, it is observed that the boundary for $r_2 \in [3.5, 4.5]$ bps/Hz shifts towards the upper-right direction, which means that the UAV movement becomes helpful. This is because with sufficiently large $T$, the UAV is able to fly over its nearby locations to reach some superior locations and hence can achieve a componentwise larger rate-pair. In fact, with any given $V>0$, as long as $T$ is sufficiently large, the UAV movement is always beneficial for enlarging the capacity region. \vspace{-0.25cm} \subsection{High SNR Case} To provide more insights, we lastly consider the asymptotically high-SNR case with $\bar{P}\rightarrow \infty$ such that $\frac{\bar P \beta_0}{ D^2 + H^2 }\gg1$ can be assumed. This assumption means that if the UAV is placed above one (near) GU, the SNR of the other (far) GU (with maximum UAV-GU distance $\sqrt{D^2+H^2}$) is still sufficiently large when the full transmit power $\bar P$ is used. \begin{theorem}\label{prop5} Under the assumption of $\frac{\bar P \beta_0}{ D^2 + H^2 }\gg1$, the optimal HFH UAV trajectory to (P1) is simplified to $x^*(t) = D/2,\forall\, t\in \T$ if $\alpha_2 \ge \alpha_1$; and $x^*(t) = -D/2, \forall\, t\in \T$ if $\alpha_2 < \alpha_1$. Accordingly, the capacity region is given by \begin{align}\label{eq:highSNR} \mathcal{C}_{\rm h-SNR}(V, T, \bar P) = \bigg \{ (r_1,r_2) : & ~ r_1+r_2\leq \log_2\left(\frac{\bar P \beta_0}{ H^2 }\right), r_1\geq 0, r_2\geq 0\bigg\}. \end{align} \end{theorem} \begin{proof} Please refer to Appendix E. \end{proof} Similar to Proposition \ref{prop1}, Theorem \ref{prop5} shows the superiority of the UAV hovering locations right above the GUs. However, unlike $\mathcal{C}(V, T, \bar P)$, the capacity region $\mathcal{C}_{\rm h-SNR}(V, T, \bar P)$ is independent of both UAV flight duration $T$ and maximum speed $V$, which suggests that the UAV movement is less effective in enlarging the capacity region as the SNR becomes large. In Fig. \ref{region:highSNR}, we plot the capacity region $\C (V,T,\bar P)$ for the same setup as in Fig. \ref{region:CDMA:mobile} except that $\bar{P}$ is increased from $10$ dBm to $30$ dBm. In this case, we have $\frac{\bar P \beta_0}{ D^2 + H^2 }\approx 100 \gg1$, i.e., the high SNR assumption for Theorem \ref{prop5} approximately holds. It is observed that when $V=0$, the UAV always hovers above the GU that requires the larger rate, e.g., $x=D/2$, for all rate-pairs satisfying $r^*_2\ge r^*_1$, and SC is needed. Furthermore, it is observed that hovering at the middle location, i.e., $x=0$, where the two GUs have equal channel gains, suffers a significant capacity loss in the high SNR regime even for maximizing the equal rate with $r^*_1=r^*_2$. Finally, it is observed that the capacity region improvement is very limited by increasing the UAV maximum speed and/or flight duration in this case since the gap between $\mathcal{C}(0, T, \bar P)$ and $\mathcal{C}(V, \infty, \bar P)$ is already very small, i.e., the gain achieved by exploiting the UAV movement is not appealing.
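As a rough numerical check of the above high-SNR discussion (a sketch of ours, with the parameter values of this subsection assumed), one can compare the fixed-hover sum capacity $\log_2\big(1+\bar P\max\{h_1(x),h_2(x)\}\big)$, obtained from the fixed-location sum-rate bound by allocating all power to the stronger GU, at $x=D/2$ and at $x=0$ against the high-SNR expression in \eqref{eq:highSNR}.
\begin{verbatim}
import numpy as np

gamma0, sigma2 = 1e-5, 1e-13      # -50 dB gain, -100 dBm noise (assumed)
P_bar, H, D = 1.0, 100.0, 1000.0  # 30 dBm = 1 W, geometry in meters
beta0 = gamma0 / sigma2

def sum_capacity_fixed(x):
    """Fixed-hover sum capacity at position x: all power is given to the
    GU with the larger channel gain (fixed-location sum-rate bound)."""
    h1 = beta0 / ((x + D / 2) ** 2 + H ** 2)
    h2 = beta0 / ((x - D / 2) ** 2 + H ** 2)
    return np.log2(1 + P_bar * max(h1, h2))

print("hover above GU 2 (x = D/2):", round(sum_capacity_fixed(D / 2), 2))
print("hover at midpoint (x = 0) :", round(sum_capacity_fixed(0.0), 2))
print("high-SNR value log2(P*b0/H^2):",
      round(float(np.log2(P_bar * beta0 / H ** 2)), 2))
\end{verbatim}
The resulting gap of several bps/Hz between hovering above a GU and hovering at the midpoint is in line with the significant capacity loss of the midpoint location observed above.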
\section{Achievable Rate Region with TDMA }\label{Section:TDMA} In this section, we consider the UAV-enabled two-user BC with TDMA-based communication. \subsection{Achievable Rate Region Characterization } For TDMA, the UAV can communicate with at most one GU at any time instant. Denote by $\pi_k(t)\in \{0,1\}, k\in\{1,2\}$ the binary variable which indicates that GU $k$ is scheduled for communication at time instant $t$ if $\pi_k(t)=1$; otherwise, $\pi_k(t)=0$. Accordingly, the achievable average rate region with TDMA is given by \begin{align} \bigg\{(r_1, r_2):~& r_1 \le \frac{1}{T}\int_{0}^T \pi_1(t) \log_2(1+ \bar Ph_1(x(t))) {\rm{d}}t,\label{TDMA:eq1} \\ & r_2 \le \frac{1}{T}\int_{0}^T\pi_2(t) \log_2(1+ \bar Ph_2(x(t))){\rm{d}}t,\label{TDMA:eq2} \\ &\pi_1(t) +\pi_2(t)\leq 1, \forall\,t \in \T,\label{TDMA:eq3}\\ &\pi_k(t) \in \{0,1\}, \forall\, t\in \T, k\in\{1,2\}\bigg\}. \label{TDMA:eq4} \end{align} We denote the achievable rate region characterized by \eqref{TDMA:eq1} and \eqref{TDMA:eq2} subject to \eqref{TDMA:eq3} and \eqref{TDMA:eq4} as $\C_{\rm TD} (V,T,\bar P)$. Let $ \mathbf{\Pi} \triangleq\{ \pi_k(t), t\in \T, k\in\{1,2\} \}$. Similarly as for problem (P1), we can apply the rate-profile approach to characterize $\C_{\rm TD} (V,T,\bar P)$ with rate-profile parameters $(\alpha_1,\alpha_2)$ and the optimization problem is formulated as {\begin{align} \text{(P6)}: ~~\max_{\{r,\Q, \mathbf{\Pi}\}} &~ r \\ \mathrm{s.t.}~~~~& \frac{1}{T}\int_{0}^T \pi_1(t) \log_2(1+ \bar Ph_1(x(t))) {\rm{d}}t \ge \alpha_1 r, \label{eq:900}\\ &\frac{1}{T}\int_{0}^T\pi_2(t) \log_2(1+ \bar Ph_2(x(t))){\rm{d}}t \ge \alpha_2 r, \label{eq:1000}\\ &|\dot{x}(t)| \le V, \forall\, t \in \T, \label{P4:eqspeed}\\ & \eqref{TDMA:eq3}, \eqref{TDMA:eq4}. \end{align}}Note that problem (P6) is a non-convex optimization problem since it involves binary variables in $ \mathbf{\Pi}$ and the LHSs of \eqref{eq:900} and \eqref{eq:1000} are not concave with respect to $\Q$ even for given $ \mathbf{\Pi}$. Nevertheless, we show in the following proposition that the optimal UAV trajectory still follows the HFH structure as for (P1), except that the UAV only hovers at $x=-D/2$ and/or $x=D/2$. \begin{proposition}\label{thm:TDMA} The optimal UAV trajectory to problem (P6) satisfies the HFH trajectory in \eqref{optimal:trj}. Furthermore, the following properties hold: 1) if $-D/2<x_{\rm I} \le x_{\rm F} < D/2$, then $t_{\rm I}=t_{\rm F}=0$ and $VT=x_{\rm F}-x_{\rm I}$; 2) if $-D/2= x_{\rm I}\le x_{\rm F} <D/2$, then $t_{\rm I}=T- \frac{x_{\rm F}-x_{\rm I}}{V}$ and $t_{\rm F}=0$; 3) if $-D/2<x_{\rm I}\le x_{\rm F} = D/2$, then $t_{\rm I}=0$ and $t_{\rm F}=T- \frac{x_{\rm F}-x_{\rm I}}{V}$; and 4) if $-D/2= x_{\rm I}< x_{\rm F} = D/2$, then $t_{\rm I}+t_{\rm F} +\frac{D}{V}=T$. \end{proposition} \begin{proof} First, similar to the proofs of Lemmas \ref{lemma1} and \ref{lem:flydirection} and Theorem \ref{prop4}, it can be shown that the optimal UAV trajectory to problem (P6) satisfies the HFH trajectory in \eqref{optimal:trj} with $x_{\rm I}$ and $x_{\rm F}\in [-D/2, D/2]$. Therefore, we only need to prove the properties for the above four cases, by contradiction. For brevity, we only consider case 1) at below by showing that the UAV will neither hover at $x_{\rm I}$ nor $x_{\rm F}$, while the other three cases can be verified similarly. 
For case 1), suppose that at the optimal UAV trajectory solution to (P6), the UAV flies from $x_{\rm I}$ to $x_{\rm F}$ and hovers at location $x_B=x_{\rm I}$ or $x_B=x_{\rm F}$ for $t_B>0$ with durations $\tau_1>0$ and $\tau_2>0$ assigned to GUs 1 and 2 for communication, respectively, where $\tau_1+\tau_2= t_B$. Then, we can construct a new trajectory where the difference is that the UAV flies from $x_{\rm I}'$ to $x_{\rm F}'$ at the maximum speed with $x_{\rm I}'=x_{\rm I}-V\Delta\tau_1$ and $x_{\rm F}'= x_{\rm F}+ V\Delta\tau_2$, where $\Delta\tau_1\in [0, \tau_1]$ and $\Delta\tau_2\in [0, \tau_2]$ are chosen such that $x_{\rm I} > x_{\rm I}'\ge -D/2$ and/or $x_{\rm F}< x_{\rm F}'\le D/2$. Since the locations in $[x_{\rm I}', x_{\rm I}]$ satisfy $x\le x_B$ and/or those in $[x_{\rm F}, x_{\rm F}']$ satisfy $x\ge x_B$, it can be shown from \eqref{eq:channelgain} that this newly constructed trajectory can achieve a componentwise larger rate-pair than the assumed one. The proof is thus completed. \end{proof} Proposition \ref{thm:TDMA} suggests that although the UAV shall fly at the maximum speed between the initial and final locations as in the capacity-achieving UAV case with SC-based transmission, the hovering locations can only be $x=-D/2$ and $x=D/2$ in the case of TDMA-based transmission. This is quite different from the capacity-achieving UAV trajectory in \eqref{optimal:trj} where the UAV may hover at $x\in(-D/2, D/2)$. This is because to achieve the capacity boundary, SC is generally required and hence there may exist some hovering locations $x\in(-D/2, D/2)$ that can strike a good balance between the channel gains of the two GUs since they will be affected simultaneously (with one decreasing and the other increasing) if the UAV moves. However, since only one GU will be scheduled at any time for TDMA, the UAV's movement between $(-D/2, D/2)$ can always help increase the channel gain of the GU scheduled for transmission without affecting the rate of the other GU, which is not scheduled for transmission. Under the optimal unidirectional UAV trajectory given in Proposition \ref{thm:TDMA}, it can be shown that the UAV will first schedule GU 1 and then schedule GU 2 for transmission, i.e., $\pi_1(t)=1$ and $\pi_2(t)=0$ for $t\in [0, t_1]$, and $\pi_1(t)=0$ and $\pi_2(t)=1$ for $t\in (t_1, T]$ where $t_1\in \T$. Based on the given HFH trajectory $\{x(t)\}$ and the above scheduling order, problem (P6) is reduced to the following problem, \begin{align}\label{probm:notrj} \text{(P7)}: ~~\max_{\{r,t_1\}} &~ r \\ \mathrm{s.t.}~~~~& \frac{1}{T}\int_{0}^{t_1}\log_2(1+ \bar Ph_1(x(t))) {\rm{d}}t \ge \alpha_1 r, \label{eq:90}\\ &\frac{1}{T}\int_{t_1}^T \log_2(1+ \bar Ph_2(x(t))){\rm{d}}t \ge \alpha_2 r, \label{eq:100}\\ &t_1 \in \T. \end{align} Note that the LHSs of \eqref{eq:90} and \eqref{eq:100} increase and decrease with $t_1$, respectively. In addition, constraints \eqref{eq:90} and \eqref{eq:100} need to be met with equalities at the optimal solution. Thus, the optimal $t_1$ is unique and can be obtained efficiently by applying a bisection search over the interval $[0, T]$. As such, the optimal rate-pair $(r_1^*, r_2^*)$ corresponding to rate-profile $(\alpha_1,\alpha_2)$ can be found by applying a two-dimensional search for $x_{\rm I}$ and $t_1$, and selecting their values to maximize $r$ in (P7).
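To make the above procedure concrete, the following sketch (ours; a uniform time discretization and example parameter values are assumed) implements the inner bisection over the scheduling switch time $t_1$ for a given HFH trajectory.
\begin{verbatim}
import numpy as np

gamma0, sigma2, P_bar = 1e-5, 1e-13, 1e-2   # assumed example parameters
H, D, V, T = 100.0, 1000.0, 30.0, 60.0
beta0 = gamma0 / sigma2

def hfh(t, x_I, x_F, t_I):
    """HFH position: hover at x_I, fly at speed V, then hover at x_F."""
    fly = (x_F - x_I) / V
    if t <= t_I:
        return x_I
    return x_I + (t - t_I) * V if t < t_I + fly else x_F

def tdma_rate_pair(alpha1, alpha2, x_I, x_F, t_I, num=6000, tol=1e-6):
    """GU 1 is served on [0, t1] and GU 2 on (t1, T]; bisection finds the
    switch time t1 at which the rate profile (alpha1, alpha2) is met."""
    ts = np.linspace(0.0, T, num)
    dt = ts[1] - ts[0]
    x = np.array([hfh(t, x_I, x_F, t_I) for t in ts])
    rate1 = np.log2(1 + P_bar * beta0 / ((x + D / 2) ** 2 + H ** 2))
    rate2 = np.log2(1 + P_bar * beta0 / ((x - D / 2) ** 2 + H ** 2))
    def rates(t1):                      # average rates for switch time t1
        mask = ts <= t1
        return (np.sum(rate1[mask]) * dt / T,
                np.sum(rate2[~mask]) * dt / T)
    lo, hi = 0.0, T   # alpha2*r1 - alpha1*r2 is increasing in t1
    while hi - lo > tol * T:
        mid = 0.5 * (lo + hi)
        r1, r2 = rates(mid)
        lo, hi = (lo, mid) if alpha2 * r1 - alpha1 * r2 > 0 else (mid, hi)
    return rates(0.5 * (lo + hi))

r1, r2 = tdma_rate_pair(0.5, 0.5, x_I=-D / 2, x_F=D / 2, t_I=13.0)
\end{verbatim}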
\vspace{-0.5cm} \begin{figure*}[!t] \begin{minipage}[t]{0.50\linewidth} \centering \includegraphics[width=3.2in, height=2.5in]{TDMA_mobile3.eps} \caption{Achievable rate region of a UAV-enabled two-user BC with TDMA, and $\bar P = 10$ dBm.} \label{region:TDMA:mobile} \end{minipage} ~~ \begin{minipage}[t]{0.50\linewidth} \centering \includegraphics[width=3.2in, height=2.5in]{CDTDMA_V3060s4.eps} \caption{Comparison of SC and TDMA with $\bar P = 10$ dBm. } \label{region:CD:TD:compare} \end{minipage}\vspace{-0.5cm} \end{figure*} \subsection{Numerical Results} We consider the same parameters as those for Figs. \ref{region:infty}, \ref{region:zero}, and \ref{region:CDMA:mobile}. In Fig. \ref{region:TDMA:mobile}, the boundary of $\C_{\rm TD} (V,T,\bar P)$ is characterized under different setups. It is observed that $\C_{\rm TD} (V,T,\bar P)$ is in general a non-convex set. Furthermore, unlike the boundary of $\C (V,T,\bar P)$ that generally follows a convex-concave-convex shape as shown in Fig. \ref{region:CDMA:mobile}, the boundary of $\C_{\rm TD} (V,T,\bar P)$ is observed to be always convex. Such a convex boundary essentially suggests that increasing the UAV speed and/or flight duration is always beneficial for enlarging the achievable rate region of the UAV-enabled BC with TDMA. This is expected since with a larger $V$ and/or $T$, the UAV can always fly closer to the GU that is being scheduled for communication and thus have a better UAV-ground channel in the case of TDMA. Fig. \ref{region:CD:TD:compare} compares the capacity region $\C (V,T,\bar P)$ with SC-based transmission and the achievable rate region $\C_{\rm TD} (V,T,\bar P)$ with TDMA. First, it is observed that $\C (V,T,\bar P)$ is generally larger than $\C_{\rm TD} (V,T,\bar P)$ for the same values of $V$, $T$, and $\bar P$. In particular, for $V=0$ and $r_1= 1$ bps/Hz, the achievable rate of GU 2 in $\C (V,T,\bar P)$ is improved by about $80\%$ compared to that in $\C_{\rm TD} (V,T,\bar P)$. Such a rate gain comes not only from using SC but also from the corresponding UAV location optimization. Second, as $V$ and/or $T$ increases, it is observed that the boundary of $\C (V,T,\bar P)$ touches that of $\C_{\rm TD} (V,T,\bar P)$ at more points and the gap between them shrinks, suggesting that TDMA becomes closer to optimal; the two regions become identical as $T\rightarrow \infty$ (see Theorem \ref{achievingTDMA}). For example, when $V=0$, the boundaries of $\C (V,T,\bar P)$ and $\C_{\rm TD} (V,T,\bar P)$ only intersect at two vertices where the UAV schedules and hovers above only one GU. When $V=30$ m/s and $T=60$ s, the boundary of $\C (V,T,\bar P)$ overlaps with that of $\C_{\rm TD} (V,T,\bar P)$ for $r_1\in[1, 4]$ bps/Hz where the achievable rates of the two GUs are relatively comparable. This suggests that in this regime, the optimal trajectory for TDMA is also capacity-achieving. This is because when both GUs' rates are non-negligible, it is worthwhile for the UAV to fly closer to, and even hover above, the two GUs for communication. However, for a rate-pair in which one GU's rate is very small but nonzero, the UAV only needs to perform proper power allocation to attain this rate-pair, rather than spending time moving toward the GU with the smaller rate, even if $VT>D$. In this case, the optimal UAV trajectory for TDMA, in which the UAV greedily flies toward the GU being scheduled, becomes strictly suboptimal compared to that for SC.
\section{Concluding Remarks} This paper characterizes the capacity region of a new two-user BC with a UAV-mounted aerial BS by investigating a joint UAV trajectory and communication design. We show that the optimal rate trade-off of the two GUs or the Pareto boundary of the capacity region is generally achieved by a simple HFH UAV trajectory, under different UAV flight durations and maximum speeds, as well as with SC or TDMA-based transmission. It is shown that the capacity region can be significantly enlarged by exploiting different forms of mobility of the UAV, via either placement optimization for low-mobility UAVs or HFH trajectory optimization for high-mobility UAVs. In addition, it is shown that the TDMA-based design achieves close-to-optimal performance compared to the SC-based design for sufficiently large UAV speed and/or flight duration, and is capacity-achieving when the flight duration goes to infinity. We hope that the results in this paper for the simplified two-user BC would provide useful insights and guidelines for designing more general/complex UAV-enabled multiuser communication systems in the future, especially from an information-theoretic perspective. In the following, we point out some promising directions to motivate future work. \begin{itemize} \item In this paper, to simplify the analysis, it is assumed that the noise power is equal for the two GUs. For the general case with unequal noise power, the capacity region is not symmetric and thus needs further investigation. In addition, a simplified LoS link is assumed for UAV-GU channels in this paper, while more practical channel models such as Rician fading can be considered in future work \cite{zeng20173d}. \item Capacity characterization of the general multiuser BC with more than two GUs is also worth pursuing. Whether our proposed HFH trajectory for the UAV is still capacity-achieving in the more general setup remains an open problem. How to extend the joint UAV trajectory and communication design to characterize the capacity region of other multiuser communication network models such as MAC and IFC with single-/multi-antenna nodes is also worthy of further investigation. \end{itemize} \appendices \vspace{-0.3cm} \section*{Appendix A: Proof of Lemma \ref{lemm:P2}} \label{apdx:1} As the UAV maximum speed constraint \eqref{P1:speed} has been relaxed, one can show that problem (P2) satisfies the so-called time-sharing condition \cite{yu2006dual}, and therefore, strong duality holds between (P2) and its dual problem. Thus, we can solve (P2) by applying the Lagrange dual method. Let $\mu_k, k\in\{1,2\}$, denote the dual variable associated with the $k$th rate-profile constraint in \eqref{eq:9} of (P2). Then the partial Lagrangian of (P2) is given by $\mathcal{L} (r,r_1,r_2,\{x(t), p_1(t), p_2(t)\}, \mu_1,\mu_2)=(1-\mu_1\alpha_1-\mu_2\alpha_2)r + \mu_1r_1 + \mu_2r_2$. To obtain the dual function under given $\mu_1$ and $\mu_2$, we need to maximize the Lagrangian $\mathcal{L} (r,r_1,r_2,\{x(t), p_1(t), p_2(t)\}, \mu_1,\mu_2)$ by optimizing $r,r_1,r_2,\{x(t), p_1(t), p_2(t)\}$, subject to constraints \eqref{P1:11}, \eqref{P1:12}, and \eqref{P1:14}. To ensure that the dual function is bounded from above, it must hold that $1-\mu_1\alpha_1-\mu_2\alpha_2=0$. As a result, we only need to maximize $ \mu_1r_1 + \mu_2r_2$ by optimizing $r_1,r_2,\{x(t), p_1(t), p_2(t)\}$, subject to \eqref{P1:11}, \eqref{P1:12}, and \eqref{P1:14}. Towards this end, we consider the three cases with $\mu_1>\mu_2$, $\mu_1<\mu_2$, and $\mu_1=\mu_2$, respectively.
First, when $\mu_1>\mu_2$, we invoke the polymatroid structure \cite{tse1998multiaccess} for the above problem to remove constraint \eqref{P1:11}. Accordingly, we obtain the following equivalent problem as \begin{align} \max_{\{x(t),p_1(t),p_2(t)\}}&\frac{1}{T}\int_{0}^T\left( \mu_1 \log_2(1+p_1(t) h_1(x(t))) + \mu_2\log_2\left(1+ \frac{p_2(t)h_2(x(t))}{p_1(t)h_1(x(t)) +1}\right) \right)\mathrm{d}t \nonumber\\ \mathrm{s.t.}~~&\eqref{P1:12},~ \eqref{P1:14}.\label{P:50} \end{align} It then follows that problem \eqref{P:50} can be decoupled over any $t$ as the following subproblem, in which the index $t$ is omitted for brevity. \begin{align}\label{vinfinite} \max_{x,p_1,p_2} & (\mu_1-\mu_2)\log_2(1+p_1h_1(x)) + \mu_2\log_2(1+p_1h_1(x)+p_2h_2(x)) \\ \mathrm{s.t.}~&p_1+p_2\le \bar P,\nonumber\\ ~& p_1\geq 0, p_2\geq 0.\nonumber \end{align} Although problem \eqref{vinfinite} is still non-convex, it can be shown that for $\mu_1>\mu_2$, $p_1\geq 0$, $p_2\geq 0$, $h_1(x)\leq h_1(-D/2)= \frac{\beta_0}{H^2}$, and $h_2(x)\leq h_2(D/2)= \frac{\beta_0}{H^2}$, we have \begin{align} (\mu_1-\mu_2)\log_2(1+p_1h_1(x))& \leq (\mu_1-\mu_2)\log_2(1+ \bar P\frac{\beta_0}{H^2}), \label{1}\\ \mu_2\log_2(1+p_1h_1(x)+p_2h_2(x)) &\leq \mu_2\log_2(1+p_1\frac{\beta_0}{H^2}+p_2\frac{\beta_0}{H^2}) = \mu_2\log_2(1+ \bar P\frac{\beta_0}{H^2}). \label{2} \end{align} As the inequalities in \eqref{1} and \eqref{2} are active simultaneously only when $p^*_1=\bar P$, $p^*_2=0$, $x^*=-\frac{D}{2}$, it follows that $p^*_1=\bar P$, $p^*_2=0$, $x^*=-\frac{D}{2}$ are indeed the unique optimal solution to problem \eqref{vinfinite}. As a result, the optimal solution to the above problem is given as $p^*_1(t)=\bar P$, $p^*_2(t)=0$, $x^*(t)=-\frac{D}{2},\forall t$, which achieves zero rate for GU 2 with $r_2 = 0$. Next, when $\mu_2>\mu_1$, it can be similarly shown that the optimal solution to the above problem is $p^*_2(t)=\bar P$, $p^*_1(t)=0$, and $x^*(t)={D}/{2},\forall t$, which achieves zero rate for GU 1 with $r_1 = 0$. Furthermore, when $\mu_1=\mu_2$, the above two trajectory and power allocation solutions are optimal for the above problem. As a result, the optimal solution is non-unique in this case, and the time-sharing between the two optimal solutions is required to achieve different rate pairs. Note that $\frac{r_1}{r_2}=\frac{\alpha_1}{\alpha_2}$ must hold at the optimality of problem (P2). Therefore, it follows that $\mu_1=\mu_2$ must be true at the optimal dual solution of (P2). In this case, the solution in Lemma \ref{lemm:P2} is primal optimal to (P2). This thus completes the proof. \vspace{-0.3cm} \section*{Appendix B: Proof of Proposition \ref{thm2}} \label{apdx:2} Suppose that the optimal UAV location solution to problem (P3) is $x^{*}\in[- {D}/{2}, 0)$, and the correspondingly obtained rate-pair $(r^*_1, r^*_2)$ satisfies $r^*_2\geq r^*_1$. Let $p_1$ and $p_2$ denote the corresponding UAV's transmit powers for GUs 1 and 2 to achieve the rate pair $(r^*_1, r^*_2)$. As $h_1(x^{*}) > h_2(x^{*})$, GU 1 can use interference cancellation before decoding its own signal; thus, $r^*_1$ and $r^*_2$ can be explicitly expressed as $r^*_1= \log_2( 1+p_1h_1(x^{*}))$ and $r^*_2 = \log_2( 1+ \frac{ p_2h_2(x^{*})}{p_1h_2(x^{*})+1})$. In the following, we show that we can always find an alternative UAV location $\hat{x} = -x^*\in [0, {D}/{2}]$ to achieve a rate-pair $(\hat{r}_1, \hat{r}_2)$ with $(\hat{r}_1, \hat{r}_2)\succeq (r^*_1, r^*_2)$, by considering two cases with $r^*_2= r^*_1$ and $r^*_2> r^*_1$, respectively, as follows. 
First, consider the case of $r^*_2= r^*_1$. It has been shown in Lemma \ref{lemma0} that for any rate-pair $(r^*_1, r^*_2)$, we can find a symmetric rate-pair $(r^*_2, r^*_1)$ at the location $\hat{x}=-x^{*}\in (0, {D}/{2}]$. Thus, it follows that $(\hat{r}_1, \hat{r}_2)=(r^*_2, r^*_1)= (r^*_1, r^*_2)$. Therefore, $\hat{x} \in (0, {D}/{2}]$ is also an optimal solution to problem (P3). Next, consider the case of $r^*_2> r^*_1$. To start with, when the UAV is located at $\hat{x} = -x^*$, we denote the maximum rate of GU 2 as $R_2=\log_2(1+{{\bar P}}h_2(\hat{x}))$, and the corresponding rate pair as $(0, R_2)$. Then, we construct an alternative rate-pair $(\hat{r}_1, \hat{r}_2)$ by time-sharing between $(0, R_2)$ and the symmetric rate-pair $(r^*_2, r^*_1)$. Let $\beta_1\ge 0$ and $\beta_2\ge 0$ denote the time-sharing ratios with $\beta_1+\beta_2=1$. Accordingly, the newly constructed rate-pair is expressed as $(\hat{r}_1,\hat{ r}_2) =\beta_1(r^*_2, r^*_1) + \beta_2(0, R_2)$. In particular, we choose $\beta_1 = \frac{R_2-r^*_2}{R_2-r^*_1}$ and $\beta_2 =1- \frac{R_2-r^*_2}{R_2-r^*_1}$ such that $\hat{r}_2=r^*_2$. Accordingly, we have $\hat{r}_1 = \frac{R_2-r^*_2}{R_2-r^*_1}r^*_2$. As a result, in order to show $(\hat{r}_1, \hat{r}_2)\succeq (r^*_1, r^*_2)$ in this case, it remains to show that $\hat{r}_1 \ge r_1^*$, or equivalently, \begin{align}\label{apdx:eq19} \hat{r}_1-r^*_1 = \frac{R_2-r^*_2}{R_2-r^*_1}r^*_2-r^*_1 =\frac{r^*_2-r^*_1}{R_2-r^*_1}(R_2-r^*_1-r^*_2) \ge 0. \end{align} Notice that \begin{align}\label{eq22} r^*_1+r^*_2 &= \log_2( 1+p_1h_1(x^{*})) + \log_2( 1+ \frac{ p_2h_2(x^{*})}{p_1h_2(x^{*})+1}) \nonumber\\ &\overset{(a)}\leq \log_2( 1+p_1h_1(x^{*})) + \log_2( 1+ \frac{ p_2h_1(x^{*})}{p_1h_1(x^{*})+1}) \nonumber\\ &= \log_2( 1+\bar P h_1(x^{*}))\overset{(b)}= \log_2( 1+\bar P h_2(\hat x))=R_2, \end{align} where $(a)$ and $(b)$ hold due to the facts that $h_1(x^{*}) > h_2(x^{*})$ and $h_1(x^{*}) = h_2(\hat x)$, respectively. It thus follows that $R_2-r^*_1-r^*_2 \ge 0$. By using this together with $\frac{r^*_2-r^*_1}{R_2-r^*_1} \ge 0$, we have $\hat{r}_1-r^*_1 \ge 0$. Therefore, we have shown that $(\hat{r}_1, \hat{r}_2)\succeq (r^*_1, r^*_2)$. By combining the above two cases, the proof is thus completed. \vspace{-0.3cm} \section*{Appendix C: Proof of Proposition \ref{trj_lemma}} \label{apdx:B} Proposition \ref{trj_lemma} is proved by contradiction. Suppose that the optimal rate-pair $(r^*_1, r^*_2)$ to problem (P1) is achieved by the UAV trajectory $\{x^*(t)\}$ and power allocation $\{{p}^*_k(t)\}$, in which there exists at least a location $x_A$ with $x_A\in (x_{\rm I}, x_{\rm F})$, such that \eqref{eq:V1} is violated, i.e., $\C_f(x_A) \nsubseteq {\rm{Conv}}( \C_f(x_{\rm I}) \bigcup \C_f(x_{\rm F})\big)$. In other words, there exists at least one rate-pair $( r_1^{\Aaa}, r_2^{\Aaa})\in \C_f(x_A) $ (with the power allocation being denoted by $\{p^A_{k}\}$), such that $( r_1^{\Aaa}, r_2^{\Aaa})\notin {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$. Then we show that we can always construct a new feasible UAV trajectory $\{\hat{x}(t)\}$ that achieves a larger objective value for problem (P1) or equivalently a rate-pair $(\hat{r}_1, \hat{r}_2)$ with $(\hat{r}_1, \hat{r}_2)\succ (r^*_1, r^*_2)$. \begin{figure}[!t] \centering \includegraphics[width=0.4\textwidth]{common_tangent2.eps} \caption{Illustration of the common tangent of $\C_f(x_{\rm I})$ and $\C_f(x_{\rm F})$. 
} \label{tangent}\vspace{-0.6cm} \end{figure} First, we construct new UAV trajectory $\{\hat{x}(t)\}$ and power allocation $\{\hat{p}_k(t)\}$ based on $\{x^*(t)\}$ and $\{{p}^*_k(t)\}$. To facilitate the design, we first partition the UAV flight/communication duration $T$ into three portions $[0,\beta_{\I}\Delta t ]$, $(\beta_{\I}\Delta t,T-\beta_{\F}\Delta t)$, and $[T-\beta_{\F}\Delta t, T ]$, in which $\beta_{\I} > 0$ and $\beta_{\F}> 0$ with $\beta_{\I}+\beta_{\F}=1$. We choose $\Delta t > 0$ to be sufficiently small such that $x^*(t)\approx x_{\rm I}, \forall t\in [0,\beta_{\I}\Delta t ]$, and $x^*(t)\approx x_{\rm F}, \forall t\in [T-\beta_{\F}\Delta t, T ]$. Accordingly, from \eqref{P1:11} in (P1), the average rate-pair of the two GUs can be expressed as \begin{align}\label{eq:adx556} (r^*_1, r^*_2) = \frac{\Delta t}{T}\left( \beta_{\I}(r^{\I}_1,r^{\I}_2) + \beta_{\F}(r^{\F}_1,r^{\F}_2) \right) + (R_1, R_2), \end{align} where $(r^{\I}_1,r^{\I}_2)$ and $(r^{\F}_1,r^{\F}_2)$ are the rate-pairs achieved at locations $x_{\rm I}$ and $x_{\rm F}$, respectively, and $R_k= \frac{1 }{T}\int_{\beta_{\I}\Delta t}^{T-\beta_{\F}\Delta t} \log_2\big(1+ p^*_k(t)h_k(x^*(t))\big) {\rm{d}}t, k\in\{1,2\}$. Here, note that $(r^{\I}_1,r^{\I}_2)$ and $(r^{\F}_1,r^{\F}_2)$ must be the rate-pairs in which $\C_f(x_{\I})$ and $\C_f(x_{\F})$ touch with their common tangent, respectively, as shown in Fig. \ref{tangent}, since otherwise a larger objective value for (P1) can be achieved by using rate-pairs $(r^{\I}_1,r^{\I}_2)$ and $(r^{\F}_1,r^{\F}_2)$ at $x_{\I}$ and $x_{\F}$. Then, the new UAV trajectory $\{\hat{x}(t)\}$ and power allocation $\{\hat{p}_k(t)\}$ are constructed by letting the UAV stay at the three locations $x_{\rm I}$, $x_{\rm F}$, and $x_{A}$ during the duration $\Delta t$, and using the trajectory $\{x^*(t)\}$ and power allocation $\{p^*_k(t)\}, t\in (\beta_{\I}\Delta t, T-\beta_{\F}\Delta t),$ during the remaining duration $T-\Delta t$. Let $\hat{\beta}_{\I}\Delta t$, $\hat{\beta}_{\F}\Delta t$, and $\hat{\beta}_{\Aaa}\Delta t$ denote the durations when the UAV stays at $x_{\rm I}$, $x_{\rm F}$, and $x_{A}$, respectively, where $\hat{\beta}_i\ge 0, i\in \{\I,\F, \Aaa\}$ are the corresponding time proportions with $\hat{\beta}_{\I}+\hat{\beta}_{\F}+\hat{\beta}_{\Aaa}=1$. Accordingly, the average rate-pair achieved by the constructed solution $\{\hat{x}(t)\}$ and $\{\hat{p}_k(t)\}$ can be expressed as \begin{align}\label{eq:adx557} (\hat{r}_1, \hat{r}_2) = \frac{\Delta t}{T}\left(\hat{\beta}_{\I}( r^{\I}_1, r^{\I}_2)+ \hat{\beta}_{\F}(r^{\F}_1,r^{\F}_2)+ \hat{\beta}_{\Aaa}(r^{\Aaa}_1, r^{\Aaa}_2) \right) + (R_1, R_2), \end{align} where $(r^{\Aaa}_1, r^{\Aaa}_2) \in \C_f(x_A)$ and $( r_1^{\Aaa}, r_2^{\Aaa})\notin {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$. By comparing $(r^*_1, r^*_2)$ in \eqref{eq:adx556} and $(\hat{r}_1, \hat{r}_2)$ in \eqref{eq:adx557}, we have \begin{align}\label{eq:apdx58} \kern -0.7mm (\hat{r}_1, \hat{r}_2)- (r^*_1, r^*_2) = \frac{ \Delta t}{T} \left( \hat{\beta}_{\I}( r^{\I}_1, r^{\I}_2)+ \hat{\beta}_{\F}(r^{\F}_1,r^{\F}_2)+ \hat{\beta}_{\Aaa}(r^{\Aaa}_1, r^{\Aaa}_2) - \beta_{\I}(r^{\I}_1,r^{\I}_2) - \beta_{\F}(r^{\F}_1,r^{\F}_2) \right). 
\end{align} Therefore, in order to show $(\hat{r}_1, \hat{r}_2)\succ (r^*_1, r^*_2)$, it remains to show that there exists a set of parameters $\hat{\beta}_{\I}$, $\hat{\beta}_{\F}$, $\hat{\beta}_{\Aaa}$, $\beta_{\I}$, and $\beta_{\F}$, such that \begin{align}\label{eq:59:xj} \hat{\beta}_{\I}( r^{\I}_1, r^{\I}_2)+ \hat{\beta}_{\F}(r^{\F}_1,r^{\F}_2)+ \hat{\beta}_{\Aaa}(r^{\Aaa}_1, r^{\Aaa}_2) \succ \beta_{\I}(r^{\I}_1,r^{\I}_2) + \beta_{\F}(r^{\F}_1,r^{\F}_2). \end{align} Next, note that $(r^{\Aaa}_1, r^{\Aaa}_2)\notin {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$ does not provide an explicit relation between $(r^{\Aaa}_1, r^{\Aaa}_2)$ and $( r^{\I}_1, r^{\I}_2)$ and $(r^{\F}_1,r^{\F}_2)$, which thus makes it difficult to verify the inequality in \eqref{eq:59:xj}. To overcome this difficulty, we introduce an equivalent interpretation of $(r^{\Aaa}_1, r^{\Aaa}_2)\notin {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$ in the following lemma, which is based on the properties of the capacity region $\C_f(x)$. \begin{figure}[!t]\label{example} \centering \subfigure[]{\includegraphics[width=2.1in, height=1.7in]{time_sharing1.eps}} \subfigure[]{\includegraphics[width=2.1in, height=1.7in]{time_sharing2.eps}} \subfigure[]{\includegraphics[width=2.1in, height=1.7in]{time_sharing3.eps}} \caption{The three cases where point $(r_1^{\Aaa}, r_2^{\Aaa})$ does not lie inside the triangle region, $\C_{\rm IF}$, defined in \eqref{equiva_triangle}: a) $0 < r_1^{\Aaa} \leq r_1^{\F}$, b) $r_1^{\F} < r_1^{\Aaa} < r_1^{\I} $, and c) $r_1^{\I} \leq r_1^{\Aaa} < r_{1,\max}^{\I}$.} \label{UAV_location}\label{example}\vspace{-0.8cm} \end{figure} \begin{lemma}\label{lem_convhull} For any location $x\in[x_{\rm I}, x_{\rm F}]$, $\C_f(x) \subseteq {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$ \emph{if} and \emph{only if} $\C_f(x) \subseteq \C_{\rm IF}$ where $\C_{\rm IF}$ is a triangle region given by \begin{align}\label{equiva_triangle} \C_{\rm IF} = \bigg\{( r_1, r_2): r_2 \le k_{\rm IF} (r_1 - r^{\I}_1 ) + r^{\I}_2, r_1\geq 0, r_2\geq 0, k_{\rm IF} = \frac{ r^{\I}_2- r^{\F}_2}{r^{\I}_1- r^{\F}_1} \bigg\}. \end{align} \end{lemma} \begin{proof} Please refer to Appendix F. \end{proof} Based on Lemma \ref{lem_convhull}, $(r^{\Aaa}_1, r^{\Aaa}_2)\notin {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$ implies $( r_1^{\Aaa}, r_2^{\Aaa}) \notin \C_{\rm IF}$, i.e., \begin{align}\label{eq27} r_2^{\Aaa}> k_{\rm IF} (r^{\Aaa}_1 - r^{\I}_1 ) + r^{\I}_2. \end{align} Finally, with \eqref{eq27} at hand, we find parameters $\hat{\beta}_{\I}$, $\hat{\beta}_{\F}$, $\hat{\beta}_{\Aaa}$, $\beta_{\I}$, and $\beta_{\F}$, to ensure \eqref{eq:59:xj}. Note that for $x_A\in (x_{\rm I}, x_{\rm F})$, we have $0 < r_1^{\Aaa}< r_{1, \max}^{\I}$ where $r^{\I}_{1,\max}$ is the maximum achievable rate of GU 1 when the UAV is placed at location $x_{\I}$ and allocates $\bar P$ to GU 1, i.e., $r^{\I}_{1,\max} = \log_2(1+\frac{\bar P\beta_0}{ (x_{\rm I}+\frac{D}{2})^2+H^2 })$. In general, there are overall three possible cases depending on the values of $r_1^{\Aaa}$, i.e., 1) $0 < r_1^{\Aaa} \leq r_1^{\F}$, 2) $r_1^{\F} < r_1^{\Aaa} < r_1^{\I} $, and 3) $r_1^{\I} \leq r_1^{\Aaa} < r_{1,\max}^{\I}$, for which the rate-pair $(r_1^{\Aaa},r_2^{\Aaa})$ is shown in Fig. \ref{example} based on Lemma \ref{lem:inequality} in Appendix F. For simplicity, we only discuss case 1), as cases 2) and 3) can be similarly analyzed. 
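Before turning to these cases, note that the membership test of Lemma~\ref{lem_convhull} is straightforward to evaluate numerically. The following minimal Python sketch (the numerical values of $D$, $H$, $\beta_0$, and $\bar P$ as well as the touch points used in the example call are illustrative placeholders only, not taken from the simulation setup of this paper) generates the boundary of $\C_f(x)$ by sweeping the power split with successive interference cancellation at the stronger GU, and checks the triangle condition in \eqref{equiva_triangle} for given touch points $(r^{\I}_1, r^{\I}_2)$ and $(r^{\F}_1,r^{\F}_2)$:
\begin{verbatim}
import numpy as np

# Illustrative placeholders: D = 1000 m, H = 100 m, beta0 = 1e6 (noise-normalized), Pbar = 1.
D, H, beta0, Pbar = 1000.0, 100.0, 1e6, 1.0

def h(x, x_gu):                       # channel power gain h_k(x) = beta0/((x - x_k)^2 + H^2)
    return beta0 / ((x - x_gu)**2 + H**2)

def boundary(x, n=2001):
    """Boundary of C_f(x): sweep p1 + p2 = Pbar; the GU with the larger gain performs SIC."""
    h1, h2 = h(x, -D/2), h(x, +D/2)
    p1 = np.linspace(0.0, Pbar, n)
    p2 = Pbar - p1
    if h1 >= h2:                      # GU 1 is the stronger user
        r1 = np.log2(1.0 + p1*h1)
        r2 = np.log2(1.0 + p2*h2/(p1*h2 + 1.0))
    else:                             # GU 2 is the stronger user
        r1 = np.log2(1.0 + p1*h1/(p2*h1 + 1.0))
        r2 = np.log2(1.0 + p2*h2)
    return r1, r2

def in_triangle(r1, r2, pI, pF, tol=1e-9):
    """Check r2 <= k_IF*(r1 - r1I) + r2I for all boundary points (triangle-region test)."""
    (r1I, r2I), (r1F, r2F) = pI, pF
    k_IF = (r2I - r2F) / (r1I - r1F)
    return bool(np.all(r2 <= k_IF*(r1 - r1I) + r2I + tol))

# Example use with hypothetical tangent touch points (placeholders, not computed here):
r1A, r2A = boundary(50.0)             # candidate intermediate location x_A
print(in_triangle(r1A, r2A, pI=(6.0, 1.0), pF=(1.0, 6.0)))
\end{verbatim}
Since the triangle condition is linear, it suffices to test the boundary points generated by the power sweep; time-sharing then only produces convex combinations that remain inside the triangle region.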
When $0 < r_1^{\Aaa} \leq r_1^{\F} $, it follows that $r_1^{\Aaa} \leq r_1^{\F}< r_1^{\I}$. From \eqref{eq27}, we have $k_{\rm AI} = \frac{ r_2^{\Aaa}- r_2^{\I}}{ r_1^{\Aaa} - r^{\I}_1}<k_{\rm IF} <0$, where $k_{\rm AI}$ is the slope of the line through the points $( r_1^{\Aaa}, r_2^{\Aaa})$ and $( r_1^{\I}, r_2^{\I})$, and the corresponding line segment is given by \begin{align}\label{eq29} r_2= k_{\rm AI} (r_1 - r^{\I}_1 ) + r^{\I}_2, \end{align} with $r_1 \in [ r_1^{\Aaa}, r^{\I}_1 ]$. Note that performing proper time-sharing (convex combination) between the points $( r_1^{\Aaa}, r_2^{\Aaa})$ and $( r_1^{\I}, r_2^{\I})$, i.e., setting $\hat{\beta}_{\F}=0$, can achieve any point $(r_1, r_2)$ on the line segment in \eqref{eq29} that is above the line segment $r_2 = k_{\rm IF} (r_1 - r^{\I}_1 ) + r^{\I}_2$, $r_1\in (r^{\F}_1, r^{\I}_1)$. In other words, we can always find a convex combination of $( r^{\I}_1, r^{\I}_2)$, $(r^{\F}_1,r^{\F}_2)$, and $(r^{\Aaa}_1, r^{\Aaa}_2)$ which strictly outperforms any given convex combination of $( r^{\I}_1, r^{\I}_2)$ and $(r^{\F}_1,r^{\F}_2)$ (with $\beta_{\I} >0$ and $\beta_{\F} > 0$). Therefore, there exist parameters $\hat{\beta}_{\I}$, $\hat{\beta}_{\F}$, $\hat{\beta}_{\Aaa}$, $\beta_{\I}$, and $\beta_{\F}$ that ensure \eqref{eq:59:xj}, which thus completes the proof. \vspace{-0.3cm} \section*{Appendix D: Proof of Theorem \ref{prop4}} \label{apdx:prop4} Notice that when $x_{\rm I} = x_{\rm F}$, Theorem \ref{prop4} follows directly. Therefore, we only need to prove the case with $x_{\rm I}<x_{\rm F}$, which we do by contradiction. Suppose that there exists an optimal UAV trajectory solution to problem (P1) in which the UAV flies at a speed $V^{\star}$ less than $V$ ($0<V^{\star}<V$)\footnote{Note that if the UAV hovers at some locations within $[x_A-\delta_d/2, x_A+\delta_d/2]$, we can use its average speed $V^{\star}$ over this interval without loss of optimality, which is always larger than zero.} over an infinitesimal interval $[x_A-\delta_d/2, x_A+\delta_d/2]$ containing location $x_A$ ($x_{\rm I}<x_A-\delta_d/2<x_A+ \delta_d/2<x_{\rm F}$) with $\delta_d>0$ being infinitesimal. Then the time needed for flying over this interval is $t=\frac{\delta_d}{{V}^{\star}}$, and the UAV's location can be assumed constant within this interval since the interval is sufficiently small. As a result, we can always construct an alternative feasible trajectory which is the same as the assumed trajectory except that we reallocate the flying and hovering time at the locations $x_A$, $x_{\rm I}$, and $x_{\rm F}$. Specifically, we let the UAV fly at the maximum speed $V$ over $[x_A-\delta_d/2, x_A+\delta_d/2]$, and the time saved due to maximum-speed flying is given by $\triangle t = \delta_d\left(\frac{1}{V^{\star}}-\frac{1}{{V}}\right)>0$. Then, we let the UAV perform proper hovering time-sharing between $x_{\rm I}$ and $x_{\rm F}$ by using the saved time $\triangle t$. From Proposition \ref{trj_lemma}, we know that for the optimal trajectory solution to problem (P1), $\C_f(x_A) \subseteq {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$ for $x_{\rm I}<x_A<x_{\rm F}$. Thus it can be shown that the newly constructed UAV trajectory together with optimized power allocation can always achieve a rate-pair that is componentwise no smaller than the assumed one, which thus completes the proof. \vspace{-0.3cm} \section*{Appendix E: Proof of Theorem \ref{prop5}} \label{apdx:prop5} Based on Lemma \ref{lemma0}, we only need to consider $r_2\geq r_1$ for problem (P1).
To obtain the optimal solution under the high SNR assumption, we first obtain an upper bound of the optimal objective value of (P1) and then we show that this upper bound is tight and can be achieved by $x^*(t)=D/2, t\in \T$. From constraints \eqref{P1:11} or \eqref{MAC:ineq3} in (P1), it follows that $r=r_1+r_2\le \frac{1}{T}\int_{0}^T\log_2\big(1+ p_1(t)h_1(x(t)) +p_2(t)h_2(x(t))\big){\rm{d}}t\overset{(a)}{\le} \frac{1}{T}\int_{0}^T\log_2(1+ \frac{\bar P \beta_0}{H^2}){\rm{d}}t \overset{(b)}{\thickapprox} \log_2( \frac{\bar P \beta_0}{H^2})$, where $(a)$ holds due to $h_1(x(t))\le \frac{\beta_0}{H^2} $, $h_2(x(t))\le \frac{\beta_0}{H^2}$, and $ p_1(t)+p_2(t)\le \bar P$, and $(b)$ holds due to the high SNR assumption $\frac{\bar P \beta_0}{ H^2 }\ge \frac{\bar P \beta_0}{ D^2 + H^2 }\gg1$. Thus, in this case, the optimal objective value of (P1) is upper-bounded by $\log_2( \frac{\bar P \beta_0}{H^2})$. Next, we show that the above upper bound can be achieved by a feasible solution to problem (P1) with $x^*(t)= x^*, \forall\, t\in \T$, in which case problem (P1) reduces to (P3). From Proposition \ref{thm2}, we know that GU 2 performs interference cancellation before decoding its own signal when $r_2\geq r_1$. Furthermore, based on the assumption $\frac{\bar P \beta_0}{ D^2 + H^2 }\gg1$ and $r_2\geq r_1$, it can be shown $\frac{p_2\beta_0}{ (x+\frac{D}{2})^2+H^2 } \gg1$. As a result, the achievable rates of GUs 1 and 2 under the high SNR assumption can be expressed as {\small\begin{align} r_1&=\log_2\left(1+\frac{\frac{p_1\beta_0}{ (x+\frac{D}{2})^2+H^2 }}{\frac{p_2\beta_0}{ (x+\frac{D}{2})^2+H^2 } + 1}\right)\thickapprox \log_2\left( 1+\frac{p_1}{p_2}\right), \label{eq:58}\\ r_2&=\log_2\left(1+\frac{p_2\beta_0}{ (x-\frac{D}{2})^2+H^2 }\right)\thickapprox \log_2\left( \frac{p_2\beta_0}{ (x-\frac{D}{2})^2+H^2 }\right).\label{eq:59} \end{align}}From \eqref{eq:58} and \eqref{eq:59}, it can be observed that $r_1$ is independent of $x$ and $r_2$ increases monotonically with $x$ for $x\in [-D/2, D/2]$. Thus, the objective function of (P3) is $r=r_1+r_2=\log_2( \frac{\bar P \beta_0}{ (x-\frac{D}{2})^2+ H^2 })$ where the optimal UAV location is $x^*=D/2$ with the maximum objective value $\log_2( \frac{\bar P \beta_0}{ H^2 })$. Based on the above results, the capacity region $\C_{\rm h-SNR}(V, T, \bar P)$ can be obtained as in \eqref{eq:highSNR}. This thus completes the proof. \vspace{-0.3cm} \section*{Appendix F: Proof of Lemma \ref{lem_convhull}} \label{apdx:B} To start with, we provide Lemmas \ref{lem5} and \ref{lem:inequality} below to facilitate the proof of Lemma \ref{lem_convhull} in the next. First, we consider two UAV locations $x_{\rm B}$ and $x_{\rm C}$, with $-\frac{D}{2}\leq x_{\rm C}<x_{\rm B}\leq \frac{D}{2}$. For any boundary point $(r_1, r_2)\in \partial \C_f(x_{m}), m \in\{\rm B, C\}$, we use a rate function $r_2 = r_2^m(r_1)$ to express their relation. Also, let ${r}^{m }_{k, \max}\triangleq \log_2\left(1+\bar Ph^m_k\right)$ denote the maximum rate of GU $k$ when the UAV is located at $x_m$, where $h^m_k = \frac{\beta_0}{ (x_{m}-x_k)^2+H^2 }$, $m\in\{\rm B, C\}$, $k\in\{1,2\}$. We then have Lemma \ref{lem5} as follows to reveal an interesting property of $r_2^B(r_1)$ and $r_2^C(r_1)$. \begin{lemma}\label{lem5} $r_2^B(r_1)$ and $r_2^C(r_1)$ have one unique intersecting point, denoted by $(\bar{r}^{BC}_1,\bar{r}^{BC}_2)$, where $0<\bar{r}^{BC}_1<{r}^{B}_{1, \max}$ and $0<\bar{r}^{BC}_2<{r}^{C}_{2, \max}$. 
Furthermore, when $0\leq r_1 <\bar{r}^{BC}_1$, it follows that $r_2^B(r_1)>r_2^C(r_1) $; otherwise when $\bar{r}^{BC}_1< r_1\leq {r}^{B }_{1, \max}$, $r_2^B(r_1)<r_2^C(r_1)$ must hold. \end{lemma} \begin{proof} Since $-\frac{D}{2}\leq x_{\rm C}<x_{\rm B}\leq \frac{D}{2}$, we must have ${r}^{B }_{1, \max}<{r}^{C}_{1, \max}$ and ${r}^{B }_{2, \max}>{r}^{C}_{2, \max}$. Thus, it follows that $0<\bar{r}^{BC}_1<{r}^{B }_{1, \max}$ and $0<\bar{r}^{BC}_2<{r}^{C}_{2, \max}$. Next, we prove that the intersecting point $(\bar{r}^{BC}_1,\bar{r}^{BC}_2)$ is unique, by considering three possible cases, depending on the values of $x_{\rm B}$ and $x_{\rm C}$, i.e., 1) $ 0 \leq x_{\rm C} < x_{\rm B}\leq \frac{D}{2}$; 2) $-\frac{D}{2}\leq x_{\rm C}<0<x_{\rm B}\leq \frac{D}{2}$; and 3) $-\frac{D}{2}\leq x_{\rm C}<x_{\rm B}\leq 0$, respectively. Since case 3) is similar to case 1), we only analyze the first two cases in the following for brevity. In case 1), we have the following inequality regarding the channel gains of the two GUs with the UAV locations being $x_{\rm B}$ and $x_{\rm C}$: $h^B_2> h^C_2> h^C_1>h^B_1>0$. At both locations $x_{\rm B}$ and $x_{\rm C}$, GU 2 will decode GU 1's signal first and then perform interference cancellation. As such, the achievable rates of GUs 1 and 2 can be expressed as $r_1^m = \log_2 \left(1+ \frac{p_1^mh_1^m}{p_2^mh_1^m+1} \right)$ and $r_2^m = \log_2(1+ p_2^mh_2^m)$, respectively, $m\in\{\rm B,C\}$. Eliminating $p_1^m$ and $p_2^m$ with $p_1^m+p_2^m =\bar P$ yields $r_2^m= \log_2 (1+\frac{(\bar P h_1^m-2^{r_1^m}+1)h_2^m}{2^{r_1^m}h_1^m}), m\in\{\rm B,C\}$. Note that the intersecting point $(\bar{r}^{BC}_1,\bar{r}^{BC}_2 )$ should satisfy both equalities above. Therefore, we have { \begin{align}\label{eq:33} \frac{(\bar P h^B_1-2^{\bar{r}^{BC}_1}+1)h^B_2}{2^{\bar{r}^{BC}_1}h^B_1 } =\frac{(\bar P h^C_1-2^{\bar{r}^{BC}_1}+1)h^C_2}{2^{\bar{r}^{BC}_1}h^C_1 }. \end{align}} From \eqref{eq:33}, we obtain $\bar{r}^{BC}_1 = \log_2\left(\bar Ph^B_1h^C_1\left(\frac{h^B_2-h^C_2}{h^B_2h^C_1-h^B_1h^C_2}\right) +1\right)$. Since $\bar Ph^B_1h^C_1\left(\frac{h^B_2-h^C_2}{h^B_2h^C_1-h^B_1h^C_2}\right) >0$, there always exists a unique solution $\bar{r}^{BC}_1>0$. In case 2), we have the following inequalities: $h^B_2> h^B_1>0$, $ h^C_1> h^C_2>0$, $h^B_2> h^C_2>0$, and $ h^C_1> h^B_1>0$. Therefore, when the UAV is located at $x_{\rm B}$, the achievable rates of GUs 1 and 2 are given as those in case 1). When the UAV is located at $x_{\rm C}$, GU 1 needs to decode GU 2's signal first and then perform interference cancellation, in which case the achievable rates of GUs 1 and 2 can be written as $r_1^C = \log_2 \left(1+ p_1^Ch_1^C\right)$ and $r_2^C = \log_2\left(1+ \frac{p_2^Ch_2^C}{p_1^Ch_1^C+1}\right)$, respectively. Then, for the intersecting point $(\bar{r}^{BC}_1,\bar{r}^{BC}_2 )$, we have { \begin{align}\label{eq38} \frac{(\bar P h^C_2+1) h^C_1}{(2^{\bar{r}^{BC}_1}-1) h^C_2+ h^C_1}-\frac{(\bar P h^B_1+1) h^B_2}{2^{\bar{r}^{BC}_1}h^B_1}+\frac{h_2^B}{h_1^B} -1=0. \end{align}} Let $w\triangleq 2^{\bar{r}^{BC}_1}$. Then, \eqref{eq38} can be equivalently transformed to \begin{small} \begin{align}\label{eq39} \frac{ h^C_2}{ h^C_1}\left(\frac{ h^B_2}{ h^B_1} -1\right)w^2 -\left( \bar Ph_2^C\left(1-\frac{h_2^B}{h_1^C}\right) - \frac{2h_2^Bh_2^C}{h_1^Bh_1^C} +\frac{h_2^B}{h_1^B} +\frac{h_2^C}{h_1^C} \right)w + h_2^{B}\left( \frac{h_2^{C}}{h_1^{C}}-1 \right)(\bar P +h_1^{B})=0, \end{align} \end{small} \kern -1.6mm where the LHS of \eqref{eq39} is a quadratic function with respect to $w$.
Based on inequalities of the channel gains, it can be shown that \eqref{eq39} always has a unique root $w>1$, i.e., $\bar{r}^{BC}_1>0$ is unique. Finally, by using the result that $\bar{r}^{BC}_1>0$ is unique, together with the facts that ${r}^{B }_{1, \max} < {r}^{C}_{1, \max}$ and ${r}^{B }_{2, \max} > {r}^{C}_{2, \max}$, it is evident that when $0\leq r_1 <\bar{r}^{BC}_1$, it follows that $r_2^B(r_1)>r_2^C(r_1) $; otherwise when $\bar{r}^{BC}_1< r_1\leq {r}^{B }_{1, \max}$, $r_2^B(r_1)<r_2^C(r_1)$ must hold. Therefore, the second part of this lemma is proved. Hence, Lemma \ref{lem5} follows. \end{proof} Next, we denote the upper right common tangent of convex hull of the union of $ \C_f(x_{\rm B})$ and $\C_f(x_{\rm C})$ by $\mathcal{\ell}$ and it touches two points on the boundaries of $\C_f(x_{\rm B})\bigcup \C_f(x_{\rm C})$, denoted by $(\bar r^B_1, \bar r^B_2)$ and $(\bar r^C_1, \bar r^C_2)$, respectively. Then we have the following lemma. \begin{lemma}\label{lem:inequality} It follows that $0 \leq \bar r^B_1<\bar{r}^{BC}_1<\bar r^C_1$ and $0 \leq \bar r^C_2<\bar{r}^{BC}_2<\bar r^B_2$. \end{lemma} \begin{proof} Since both $r_2^B(r_1)$ and $r_2^C(r_1)$ are monotonically decreasing functions with respect to $r_1$, $0 \leq \bar r^C_2<\bar{r}^{BC}_2<\bar r^B_2$ holds \emph{if} and \emph{only if} $0 \leq \bar r^B_1<\bar{r}^{BC}_1<\bar r^C_1$ holds. Thus, it only remains to show $0 \leq \bar r^B_1<\bar{r}^{BC}_1<\bar r^C_1$. In the following, we focus on proving $\bar{r}^{BC}_1<\bar r^C_1$ by contradiction, while $\bar r^B_1<\bar{r}^{BC}_1$ can be similarly verified and thus is omitted. First, suppose that $\bar r^C_1< \bar{r}^{BC}_1$. From Lemma \ref{lem5}, it follows that $r_2^B(\bar r^C_1)>r_2^C(\bar r^C_1) $ and hence $ (\bar r^C_1, \bar r^C_2) \prec(\bar r^C_1, \bar r^B_2)$. Since $(\bar r^C_1, \bar r^B_2) \in \C_f(x_{\rm B})$, we must have $(\bar r^C_1, \bar r^C_2) \in \C_f(x_{\rm B})$ and $(\bar r^C_1, \bar r^C_2)$ is not on the boundary of $\C_f(x_{\rm B})$, i.e., $(\bar r^C_1, \bar r^C_2) \in {\rm {int}}( \C_f(x_{\rm B}))$. This contradicts that line $\ell$ is the common tangent of $\C_f(x_{\rm B})$ and $\C_f(x_{\rm C})$ because it crosses an interior point of $\C_f(x_{\rm B})$. Next, suppose that $\bar r^C_1= \bar{r}^{BC}_1$. Then we have $(\bar r^C_1, \bar r^C_2) =(\bar{r}^{BC}_1,\bar{r}^{BC}_2 )\in \C_f(x_{\rm B})$ and $(\bar r^C_1, \bar r^C_2)$ is also on the boundary of $\C_f(x_{\rm B})$ as $(\bar r^B_1, \bar r^B_2)$, i.e., $(\bar r^C_1, \bar r^C_2)\in \partial \C_f(x_{\rm B})$. Note that if $x_{\rm B}\neq 0$, then $\C_f(x_{\rm B})$ is a strictly convex set, in which case line $\ell$ cannot be the common tangent of $\C_f(x_{\rm B})$ because it crosses two points $(\bar r^C_1, \bar r^C_2)$ and $(\bar r^B_1, \bar r^B_2)$ of a strictly convex boundary. By contrast, if $x_{\rm B}=0$, then $\C_f(x_{\rm B})$ is a convex but not strictly convex set, and more specifically, it is an equilateral triangle region, with its boundary being a line segment that lies on line $\ell$. Since $x_{\rm B}\neq x_{\rm C}$, it follows from Lemma \ref{lem5} that line $\ell$ intersects with the boundary of $\C_f(x_{\rm C})$, which thus contradicts that line $\ell$ is the tangent of $\C_f(x_{\rm C})$. By combining the two cases, the proof is completed. \end{proof} Now, with Lemmas \ref{lem5} and \ref{lem:inequality} obtained, we are ready to prove Lemma 5. We first prove the ``only if" part. 
By the definition of a common tangent (supporting hyperplane theorem \cite{Boyd}), all the achievable rate-pairs in the convex hull should lie in the same half (lower left) plane separated by the tangent, i.e., ${\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big) \subseteq \C_{\rm IF}$. Thus, $\C_f(x_A) \subseteq {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)\Rightarrow \C_f(x_{A}) \subseteq \C_{\rm IF}$. Next, we prove the ``if" part, i.e., $\C_f(x_A) \subseteq \C_{\rm IF} \Rightarrow \C_f(x_A) \subseteq {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$. This is proved by contradiction. Suppose that $\C_f(x_A) \subseteq \C_{\rm IF}$ but $\C_f(x_A) \nsubseteq {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$. It means that there exists at least a rate-pair that satisfies $(\bar r^A_1, \bar r^A_2)\in \C_f(x_A)$ but $(\bar r^A_1, \bar r^A_2) \in \C_{\rm IF} \backslash {\rm{Conv}}\big(\C_f(x_{\rm F})\bigcup \C_f(x_{\rm I})\big)$. Note that $(\bar r^{\I}_1, \bar r^{\I}_2)$ and $(\bar r^{\F}_1, \bar r^{\F}_2) \in {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$. From Lemma \ref{lem:inequality}, we have $0 \leq \bar r^{\F}_1<\bar{r}^{\rm IF}_1<\bar r^I_1$. Thus, the rate-pairs that lie on and below the line segment between $(\bar r^{\rm F}_1, \bar r^{\rm F}_2)$ and $(\bar r^{\rm I}_1, \bar r^{\rm I}_2)$ also lie within ${\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$, i.e., $(r_1, r_2)\in {\rm{Conv}}\big(\C_f(x_{\rm I})\bigcup \C_f(x_{\rm F})\big)$, where $\bar r^{\rm F}_1 \leq r_1 \leq \bar r^{\rm I}_1$ and $(r_1, r_2)\in \C_{\rm IF}$. Then, the rate-pair $(\bar r^A_1, \bar r^A_2)$ must satisfy one of the following two cases: 1) $0<\bar r^A_1< \bar r^{\rm F}_1$, and 2) $\bar r^{\rm I}_1<\bar r^{A}_1$. If $0<\bar r^A_1< \bar r^{\rm F}_1$, since $(\bar r^A_1, \bar r^A_2)\in \C_f(x_A)$ and $(\bar r^A_1, \bar r^A_2)\notin \C_f(x_{\rm F})$, it follows that $(\bar r^A_1, \bar r^A_2)\succ (\bar r^A_1, r^{\rm F}_2(\bar r^A_1))$. This implies that the intersecting point, denoted by $(r_1^{FA}, r_2^{FA})$, satisfies $r_1^{FA}<\bar r^A_1$. From Lemma \ref{lem5}, we have $(\bar r^{\rm F}_1, \bar r^{\rm F}_2)\prec (\bar r^{\rm F}_1, r^{A}_2(\bar r^{\rm F}_1))$ due to $ r_1^{FA}< \bar r^{A}_1< \bar r^{\rm F}_1$. It thus suggests that there exists a rate-pair $(\bar r^{\rm F}_1, r^{A}_2(\bar r^{\rm F}_1))\in \C_f(x_{\rm F})$ but $(\bar r^{\rm F}_1, r^A_2(\bar r^{\rm F}_1))\notin \C_{\rm IF}$, which contradicts the assumption $\C_f(x_{\rm F}) \subseteq \C_{\rm IF}$. The case of $\bar r^{\rm I}_1<\bar r^A_1$ can be analyzed similarly, where the result also contradicts the assumption $\C_f(x_{\rm F}) \subseteq \C_{\rm IF}$. This thus completes the proof of Lemma \ref{lem_convhull}. \vspace{-0.5cm} \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} The ongoing experiments at the Large Hadron Collider (LHC) allow us to enter new terrain at the ${\rm Te\kern -1pt V}$ scale and to search for new physics in an unprecedented way. In fact, due to the remarkable properties of supersymmetric (SUSY) extensions of the Standard Model (SM)~\cite{Wess:1992cp,Nilles:1983ge,Haber:1984rc,Martin:1997ns,Drees:2004jm,Baer:2006rs}, there are high hopes to discover superpartners of the SM fields in the mass range probed by the LHC experiments. Such a discovery would be a major breakthrough with far reaching consequences also for our understanding of cosmology and the early Universe. With the lightest SUSY particle (LSP) being a promising dark matter (DM) candidate~\cite{Jungman:1995df,Baltz:2006fm,Steffen:2008qp}, collider studies may help us to clarify the origin and identity of DM and to probe the early thermal history even prior to primordial nucleosynthesis. In this work we study the direct pair production of the lighter stau $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ at hadron colliders. The \ensuremath{{\tilde{\tau}}_{1}}\xspace is the lighter one of the two scalar partners of the tau lepton~$\tau$ and often the lightest slepton within the minimal SUSY extension of the SM (MSSM). Among the potential SUSY discovery channels, the production of color-charged SUSY particles, squarks and gluinos, is typically assumed to play a key role since they can be produced via the strong interaction; \cf~\cite{Martin:1997ns,Drees:2004jm,Baer:2006rs} and references therein. However, recent searches for signals with jets and missing energy at the LHC~\cite{Khachatryan:2011tk,daCosta:2011qk} disfavor very light squarks and gluinos. In case these searches keep on in setting new limits in the near future, the viable mass range for squarks and gluinos will soon be pushed towards and above $1~{\rm Te\kern -1pt V}$~\cite{Bechtle:2011dm}, where the associated production cross sections drop sharply, especially during the early run of the LHC with a center-of-mass energy of $\sqrt{S} = 7~{\rm Te\kern -1pt V}$. However, color-singlet SUSY particles such as sleptons, neutralinos, and charginos could still be light enough to be produced in large numbers at colliders. Direct stau production could thereby allow for SUSY discoveries in the near future. On the other hand, non-observation of the direct production channels will allow us to infer exclusion limits on subsets of the SUSY parameter space, independent of the colored sector. The lighter stau can play a crucial role not only for collider phenomenology but also in cosmology. While the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ with its electric charge cannot be DM, it can be the next-to-lightest SUSY particle (NLSP) in an R-parity conserving realization of SUSY. If the lightest neutralino $\ensuremath{\tilde{\chi}^{0}}_1$ is the LSP, a stau NLSP that is almost as light as the $\ensuremath{\tilde{\chi}^{0}}_1$ can participate via coannihilation in the primordial freeze-out of the $\ensuremath{\tilde{\chi}^{0}}_1$. In fact, such a coannihilation scenario might be the key for an agreement of the relic $\ensuremath{\tilde{\chi}^{0}}_1$ density $\Omega_{\neu_1}$ with the DM density $\Omega_{\mathrm{DM}}$; \cf~\cite{Jungman:1995df,Ellis:1999mm,Drees:2004jm,Baer:2006rs,Baltz:2006fm,Steffen:2008qp} and references therein. 
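The quantitative role of the near mass degeneracy in such a coannihilation scenario can be illustrated with the standard Griest--Seckel weighting, in which the stau contribution to the effective annihilation cross section carries a relative Boltzmann factor $\propto(1+\Delta)^{3/2}e^{-x_{\mathrm{f}}\Delta}$ with $\Delta=(\ensuremath{m_{\stau}}\xspace-m_{\ensuremath{\tilde{\chi}^{0}}_1})/m_{\ensuremath{\tilde{\chi}^{0}}_1}$ and $x_{\mathrm{f}}=m_{\ensuremath{\tilde{\chi}^{0}}_1}/T_{\mathrm{f}}\approx 25$ at freeze-out. A minimal numerical sketch of this weighting (the degrees-of-freedom bookkeeping and the numbers are illustrative only):
\begin{verbatim}
import numpy as np

# Relative weight of the stau in the effective (co)annihilation cross section
# (Griest--Seckel), normalized to the neutralino; g_i are internal degrees of freedom.
def stau_weight(delta, x_f=25.0, g_stau=2.0, g_neu=2.0):
    w = g_stau * (1.0 + delta)**1.5 * np.exp(-x_f*delta)
    return w / (g_neu + w)

for delta in (0.02, 0.05, 0.10, 0.20):
    print(f"mass splitting {delta:4.0%}: stau weight {stau_weight(delta):.2f}")
# 2%: 0.38   5%: 0.24   10%: 0.09   20%: 0.01
\end{verbatim}
The rapid decrease of this weight with growing mass splitting shows why coannihilation can lower $\Omega_{\neu_1}$ substantially only if the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ is almost degenerate with the $\ensuremath{\tilde{\chi}^{0}}_1$.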
The direct pair production of such light $\ensuremath{{\tilde{\tau}}_{1}}\xspace$'s can then have a sizeable cross section at hadron colliders even if the colored sparticles are substantially more massive. Since each of those $\ensuremath{{\tilde{\tau}}_{1}}\xspace$'s will decay rapidly into a $\ensuremath{\tilde{\chi}^{0}}_1$, an excess in missing transverse energy is the expected signature of such a scenario. There are other well-motivated scenarios in which the lighter stau $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ is central for both cosmology and phenomenology. In fact, in large parts of the parameter space of many constrained SUSY models, such as the constrained MSSM (CMSSM), the lighter stau $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ is the lightest SUSY particle in the MSSM spectrum, to which we refer as the lightest ordinary SUSY particle (LOSP). While restrictive upper limits exist on the abundance of a stable charged massive particle (CHAMP)~\cite{Nakamura:2010zzi}, the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP becomes a viable possibility in scenarios with broken R-parity~\cite{Akeroyd:1997iq,Allanach:2003eb,Buchmuller:2007ui,Dreiner:2008rv,Desch:2010gi} or in R-parity-conserving scenarios in which the LSP is an extremely weakly interacting particle (EWIP), such as the gravitino~\cite{Ambrosanio:1997rv,Feng:1997zr,Martin:1998vb,Ambrosanio:2000ik,Buchmuller:2004rq,Steffen:2006hw,Ellis:2006vu} or the axino~\cite{Covi:2004rb,Brandenburg:2005he,Freitas:2011fx}. The $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP can then be long-lived and as such appear in the collider detectors as a quasi-stable muon-like particle. Such scenarios will come with distinctive signatures that are very different from those in the $\ensuremath{\tilde{\chi}^{0}}_1$ LSP case~\cite{Hamaguchi:2004df,Ellis:2006vu,Ishiwata:2008tp,Biswas:2009rba,Feng:2009bd,Ito:2009xy,Heckman:2010xz,Kitano:2010tt,Ito:2010xj,Ito:2010un,Asai:2011wy}. In fact, direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace$-pair production events will be easier to identify experimentally if the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ is quasi-stable. It may even be possible to stop initially slow staus within the main detectors~\cite{Martyn:2006as,Asai:2009ka,Pinfold:2010aq,Freitas:2011fx} or in some additional dedicated stopping detectors~\cite{Goity:1993ih,Hamaguchi:2004df,Feng:2004yi,Hamaguchi:2006vu} for analyses of their late decays. The cosmological implications of a long-lived $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ depend on its lifetime~$\ensuremath{\tau_{\stau}}$, its mass~$\ensuremath{m_{\stau}}\xspace$, and its primordial relic abundance prior to decay, $\ensuremath{Y_{\stau}}\equiv\ensuremath{n_{\stau}}/s$, where $\ensuremath{n_{\stau}}$ is its number density and $s$ the entropy density. For example, in the R-parity conserving case with an EWIP LSP, each $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ NLSP decays into one LSP, which contributes to the DM density $\Omega_{\mathrm{DM}}$. Moreover, $\ensuremath{\tau_{\stau}}$ can exceed $1$--$100~\ensuremath{\mathrm{s}}$ in scenarios with the gravitino LSP~\cite{Buchmuller:2004rq} or the axino LSP~\cite{Covi:2004rb,Brandenburg:2005he,Freitas:2011fx}. Long-lived $\ensuremath{{\tilde{\tau}}_{1}}\xspace$'s can then decay during or after big bang nucleosynthesis (BBN) and the emitted SM particles can reprocess the primordial abundances of deuterium, helium, and lithium~\cite{Cyburt:2002uv,Kawasaki:2004qu,Jedamzik:2006xz,Kawasaki:2008qe}.
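The scales involved here can be made explicit with a short numerical sketch. Assuming the widely used two-body width $\Gamma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\to\tau\ensuremath{\widetilde{G}})=\ensuremath{m_{\stau}}\xspace^{5}(1-\ensuremath{m_{\gravitino}}^{2}/\ensuremath{m_{\stau}}\xspace^{2})^{4}/(48\pi\,\ensuremath{m_{\gravitino}}^{2}M_\mathrm{Pl}^{2})$ with $m_\tau$ neglected (\cf~ref.~\cite{Buchmuller:2004rq}) and the standard conversion $\Omega h^{2}\simeq 2.7\times10^{8}\,(m/{\rm Ge\kern -1pt V})\,Y$, the sketch below returns the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ lifetime and the non-thermally produced LSP density from its decays; the input values are illustrative and do not correspond to a benchmark point of this paper:
\begin{verbatim}
import math

M_PL  = 2.435e18          # reduced Planck mass [GeV]
HBAR  = 6.582e-25         # hbar [GeV s]
OMEGA_PER_GEV_Y = 2.7e8   # Omega h^2 per (m/GeV) per unit yield

def stau_lifetime(m_stau, m_gravitino):
    """Lifetime of stau -> tau + gravitino in seconds (m_tau neglected)."""
    width = (m_stau**5 * (1.0 - m_gravitino**2/m_stau**2)**4
             / (48.0*math.pi * m_gravitino**2 * M_PL**2))
    return HBAR / width

def omega_lsp_from_decays(m_lsp, Y_stau):
    """Non-thermal LSP density Omega h^2 from NLSP decays (one LSP per stau)."""
    return OMEGA_PER_GEV_Y * m_lsp * Y_stau

m_stau, m_gravitino, Y = 200.0, 1.0, 2e-13       # illustrative inputs
print(stau_lifetime(m_stau, m_gravitino))        # ~2e3 s
print(omega_lsp_from_decays(m_gravitino, Y))     # ~5e-5, small compared to Omega_DM h^2 ~ 0.1
\end{verbatim}
Heavier staus and lighter gravitinos shorten the lifetime according to $\ensuremath{\tau_{\stau}}\propto\ensuremath{m_{\gravitino}}^{2}/\ensuremath{m_{\stau}}\xspace^{5}$, in line with the ranges quoted above.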
Negatively charged $\ensuremath{{\tilde{\tau}}_{1}}\xspace$'s may even form bound states with the primordial nuclei, leading to catalyzed BBN (CBBN) of lithium-6 and beryllium-9~\cite{Pospelov:2006sc,Cyburt:2006uv,Hamaguchi:2007mp,Pradler:2007is,Pospelov:2007js,Pospelov:2008ta}. The DM abundance and the observationally inferred primordial abundances of the light elements thereby impose $\ensuremath{\tau_{\stau}}$-dependent upper limits on the yield $\ensuremath{Y_{\stau}}$, which translate into cosmological constraints on the parameter space of the respective SUSY models; \cf~\cite{Cyburt:2006uv,Steffen:2006wx,Pradler:2006hh,Pradler:2007is,Kersten:2007ab,Pradler:2007ar,Kawasaki:2008qe,Pospelov:2008ta,Bailly:2008yy,Freitas:2009fb,Freitas:2011fx} and references therein. The cosmological constraints on scenarios with long-lived staus $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ are often quite restrictive with potentially severe implications for collider phenomenology and cosmology. For example, the CBBN constraints on gravitino LSP scenarios can point to heavy colored sparticles, such as a gluino with mass $\ensuremath{m_{\gluino}}\gtrsim 2.5~{\rm Te\kern -1pt V}$~\cite{Cyburt:2006uv,Pradler:2006hh,Pradler:2007is,Pradler:2007ar}, so that direct stau production would be particularly important for SUSY discoveries at the LHC. Moreover, together with $\Omega_{\mathrm{DM}}$ limiting the thermally produced EWIP density~\cite{Bolz:2000fu,Brandenburg:2004du,Pradler:2006qh,Pradler:2006hh,Rychkov:2007uq,Strumia:2010aa,Bae:2011jb} from above, the CBBN constraints can restrict the post-inflationary reheating temperature to $\ensuremath{T_{\mathrm{R}}} \ll 10^9~{\rm Ge\kern -1pt V}$~\cite{Pradler:2006hh,Freitas:2009fb}, which disfavors the viability of thermal leptogenesis with hierarchical heavy Majorana neutrino masses~\cite{Fukugita:1986hr,Davidson:2002qv,Buchmuller:2004nz,Blanchet:2006be,Antusch:2006gy}. In light of those findings, it is remarkable that parameter regions exist in which one finds an exceptionally small relic $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ abundance~\cite{Ratz:2008qh,Pradler:2008qc} which may respect the (C)BBN limits on $\ensuremath{Y_{\stau}}$ so that the above restrictions no longer apply. Such exceptional yields can result from efficient primordial annihilation of staus with $\ensuremath{m_{\stau}}\xspace\lesssim 200~{\rm Ge\kern -1pt V}$ via enhanced stau-Higgs couplings~\cite{Ratz:2008qh,Pradler:2008qc} and/or stau annihilation at the resonance of the CP-even heavy Higgs boson $\ensuremath{H^{0}}\xspace$~\cite{Pradler:2008qc}. In this paper, we address the question of whether such a scenario with efficient primordial stau annihilation can be identified by considering direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace$-pair production at hadron colliders. Our paper provides predictions for direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace$-pair production cross sections and kinematical distributions at hadron colliders in the framework of the R-parity conserving MSSM. Our calculations include Drell--Yan\ processes as well as $b$-quark annihilation and gluon fusion, where diagrams with s-channel \ensuremath{h^{0}}\xspace or \ensuremath{H^{0}}\xspace exchange can become dominant. All third-generation mixing effects are taken into account. While the obtained cross sections are independent of the stau lifetime, we interpret our findings with a special focus on scenarios with long-lived staus.
For such scenarios, we address collider prospects for the SUSY parameter determination, the stopping of slow staus in the detectors, and viability tests of exceptionally small relic stau abundances. Theoretical predictions for slepton-pair production via the Drell--Yan\ channel have been known for many years and include next-to-leading order (NLO) corrections from QCD and SUSY-QCD~\cite{Eichten:1984eu,Baer:1997nh,Beenakker:1999xh} and resummation-improved results at the next-to-leading logarithmic (NLL) level~\cite{Bozzi:2006fw,Bozzi:2007qr,Bozzi:2007tea}. The NLO QCD and SUSY-QCD results are available via the software package \Prospino \cite{Prospino}. Also contributions from \ensuremath{b\bar{b}}\xspace annihilation and from gluon fusion were studied in refs.~\cite{delAguila:1990yw,Bisset:1996qh,Borzumati:2009zx}, which focussed on the limit of no left-right mixing. While this limit is usually a good approximation for first- and second-generation sleptons, the mixing between the (third generation) stau gauge eigenstates can be substantial. Moreover, the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are not included in the stau-pair production cross section predictions provided by Monte Carlo simulation codes such as \texttt{Pythia} \cite{Sjostrand:2000wi} or \texttt{Herwig} \cite{Corcella:2000bw}. This provides additional motivation for our present study, which considers for the first time all three of the mentioned direct stau production mechanisms with particular emphasis on potentially sizeable left-right mixing. The outline of this paper is as follows. In the next section we elaborate on the motivation for exploring direct stau production and the considered parameter regions. Section~\ref{sec:production} presents our calculation of direct stau production at hadron colliders. Here we show our cross section results for the Tevatron and the LHC and compare the Drell--Yan\ contributions to those from $b\bar{b}$-annihilation and gluon fusion. Section~\ref{sec:phenolonglived} concentrates on scenarios with long-lived staus and associated prospects for the SUSY parameter determination and the stopping of staus. In section~\ref{sec:cmssm} we consider the CMSSM for parameters for which an exceptionally small $\ensuremath{Y_{\stau}}$ occurs. Here we also study representative CMSSM benchmark scenarios to explore the relative importance of direct stau production with respect to the stau production in cascade decays. In section~\ref{sec:cmssm_exceptional} we explore ways to probe the viability of an exceptionally small relic stau abundance at colliders. As exemplary models we consider the CMSSM and a model with non-universal Higgs masses (NUHM) to illustrate our main points. We summarize our findings in section~\ref{sec:conclusion}. \section{Motivation} \label{sec:motivation} In this section we continue to review the various implications of a stau NLSP in collider phenomenology and cosmology. Focussing on gravitino and axino dark matter scenarios with long-lived staus, we address the appearance of such staus in the detectors, existing limits, and the conceivable stopping of staus for studies of their late decays. We describe the typical primordial stau LOSP freeze-out and associated cosmological constraints. Considering the CMSSM with the gravitino LSP, the constraints can point to heavy colored sparticles and one may have to rely on direct stau production for a SUSY discovery at the LHC.
We discuss the possibility of particularly efficient primordial stau annihilation leading to an exceptionally small thermal relic stau abundance such that restrictive cosmological constraints can be evaded. We summarize the conditions required for such a behavior and emphasize that the potential occurrence of a (color and) charge breaking (CCB) vacuum, B-physics observables, and Higgs searches can give restrictions in the relevant parameter regions. Before proceeding let us comment briefly on direct stau production events in the neutralino dark matter case. As already mentioned in the introduction, the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ NLSP is an attractive possibility in the $\ensuremath{\tilde{\chi}^{0}}_1$ LSP case and primordial $\ensuremath{\tilde{\chi}^{0}}_1$--$\ensuremath{{\tilde{\tau}}_{1}}\xspace$-coannihilation processes may even turn out to be crucial for $\Omega_{\neu_1}\simeq\Omega_{\mathrm{DM}}$. In an R-parity conserving realization of SUSY, we expect each directly produced $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ pair to decay into a pair of unlike-sign $\tau$ leptons and a $\ensuremath{\tilde{\chi}^{0}}_1$ pair. The emitted $\tau^+\tau^-$ pair would then help to identify direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production events experimentally. This identification however requires sophisticated studies since the $\ensuremath{\tilde{\chi}^{0}}_1$'s and the neutrinos from the rapidly decaying $\tau$'s will lead to missing transverse energy. Leaving such studies for further work, we focus in the following more on scenarios with a long-lived $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP. Nevertheless, our calculations of direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production cross sections (see section~\ref{sec:production}) apply also to $\ensuremath{\tilde{\chi}^{0}}_1$ dark matter scenarios with the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ NLSP. \subsection{Gravitino/axino dark matter scenarios with a long-lived stau NLSP} \label{sec:longlivedstau} Long-lived staus occur naturally in SUSY extensions of the SM in which the LSP is an EWIP such as the gravitino or the axino. Both of those EWIPs are viable DM candidates which can be produced thermally in the hot primordial plasma with the right relic density, $\Omega_{\mathrm{EWIP}}\simeq\Omega_{\mathrm{DM}}$, depending on the reheating temperature $\ensuremath{T_{\mathrm{R}}}$ after inflation~\cite{Bolz:2000fu,Brandenburg:2004du,Pradler:2006qh,Pradler:2006hh,Rychkov:2007uq,Strumia:2010aa,Bae:2011jb}. Both are not part of the MSSM (and thus cannot be the LOSP) but are still very well-motivated:% \footnote{For simplicity, we discuss scenarios in which only the gravitino or only the axino is lighter than the stau. There is also the possibility that the gravitino and axino are both simultaneously lighter than the stau, and we refer to refs.~\cite{Tajuddin:2010dr,Baer:2010gr,Cheung:2011mg,Freitas:2011fx} for studies of the cosmological and phenomenological implications.} \begin{itemize} \item The gravitino $\ensuremath{\widetilde{G}}$ is the gauge field of local SUSY transformations and an unavoidable implication of SUSY theories including gravity~\cite{Wess:1992cp,Nilles:1983ge}. In the course of SUSY breaking, it acquires a mass $\ensuremath{m_{\gravitino}}$ that can naturally be smaller than the one of the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP, for example, in gauge-mediated and gravity-mediated SUSY breaking scenarios~\cite{Martin:1997ns,Drees:2004jm,Baer:2006rs}. 
The stau lifetime $\ensuremath{\tau_{\stau}}$ is then governed by the two-body decay $\ensuremath{{\tilde{\tau}}_{1}}\xspace\to\tau\ensuremath{\widetilde{G}}$ which involves a supergravity vertex and thereby the Planck scale $M_\mathrm{Pl}=2.4\times10^{18}~{\rm Ge\kern -1pt V}$~\cite{Buchmuller:2004rq}. Moreover, $\ensuremath{\tau_{\stau}}$ depends sensitively on $\ensuremath{m_{\gravitino}}$ and $\ensuremath{m_{\stau}}\xspace$ and can easily exceed $100~\ensuremath{\mathrm{s}}$, \eg, for $\ensuremath{m_{\gravitino}}\sim 1~{\rm Ge\kern -1pt V}$ and $\ensuremath{m_{\stau}}\xspace\lesssim 300~{\rm Ge\kern -1pt V}$; \cf~eq.~(4.1) and figure~6 in ref.~\cite{Pospelov:2008ta}. \item The axino $\widetilde{a}$ is the fermionic superpartner of the axion and appears once the MSSM is extended by the Peccei--Quinn (PQ) mechanism in order to solve the strong CP problem. Its mass $m_{\axino}$ is model dependent and can be smaller than $\ensuremath{m_{\stau}}\xspace$. It is then the two-body decay $\ensuremath{{\tilde{\tau}}_{1}}\xspace\to\tau\widetilde{a}$ which governs~$\ensuremath{\tau_{\stau}}$ such that it depends on the axion model, the Peccei--Quinn scale $f_\mathrm{PQ}\gtrsim 6\times 10^{8}~{\rm Ge\kern -1pt V}$~\cite{Raffelt:2006cw,Nakamura:2010zzi}, $m_{\axino}$, and $\ensuremath{m_{\stau}}\xspace$~\cite{Covi:2004rb,Brandenburg:2005he,Freitas:2009fb,Freitas:2011fx}. Again $\ensuremath{\tau_{\stau}}\gtrsim 100~\ensuremath{\mathrm{s}}$ is possible, \eg, for $f_\mathrm{PQ}\sim 10^{12}~{\rm Ge\kern -1pt V}$ and $\ensuremath{m_{\stau}}\xspace\lesssim 300~{\rm Ge\kern -1pt V}$; \cf~eq.~(22) and figure~3 in ref.~\cite{Freitas:2009fb}. \end{itemize} In such settings, a directly produced stau will usually appear as a stable particle in the detector. In fact, already a $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP lifetime as short as $\ensuremath{\tau_{\stau}}\approx 10^{-6}~\ensuremath{\mathrm{s}}$ is associated with a decay length of $c\ensuremath{\tau_{\stau}}\approx 300~\mathrm{m}$ for which typically only a small fraction of the produced staus will decay within the collider detectors. Such quasi-stable staus will look like heavy muons with distinctive signatures in the collider detectors~\cite{Drees:1990yw,Nisati:1997gb,Ambrosanio:1997rv,Feng:1997zr,Martin:1998vb,Fairbairn:2006gg,Feng:2010ij} (see section~\ref{sec:phenolonglived}). Direct searches for long-lived staus at the Large Electron Positron (LEP) collider at CERN set currently the following model-independent limit~\cite{LEP2staulimits,Nakamura:2010zzi}:% \footnote{Note that~(\ref{eq:mstauLEPlimit}) is a conservative limit. Also the LEP limit $\ensuremath{m_{\stau}}\xspace\gtrsim 97.5~{\rm Ge\kern -1pt V}$ can be found~\cite{LEP2staulimits,Nakamura:2010zzi}.} \begin{align} \ensuremath{m_{\stau}}\xspace\gtrsim 82~{\rm Ge\kern -1pt V} \ . \label{eq:mstauLEPlimit} \end{align} Moreover, searches for long-lived staus in proton-antiproton collisions with $\sqrt{S}=1.96~{\rm Te\kern -1pt V}$ at the Tevatron~\cite{Abazov:2008qu,Aaltonen:2009kea} have led to the following upper limit on the stau production cross section~\cite{Aaltonen:2009kea}: \begin{align} \sigma(\sqrt{S}=1.96~{\rm Te\kern -1pt V}) \lesssim 10~\mathrm{fb} \ , \label{eq_tevatronlimit} \end{align} with which we will compare our cross section results in sections~\ref{sec:kinematicalcuts} and~\ref{sec:cmssm}. \subsection*{Stopping of long-lived staus and studies of their late decays} Quasi-stable staus may allow for other intriguing non-standard collider phenomenology. 
Because of ionisation energy loss, staus will be slowed down when traversing the detector material. In this way, staus that are produced with a relatively small initial velocity of \begin{align} p_{\ensuremath{{\tilde{\tau}}_{1}}\xspace}/\ensuremath{m_{\stau}}\xspace=\beta\gamma\lesssim 0.45 \label{eq:betagammacut} \end{align} are expected to get trapped, \eg, in the calorimeters of the ATLAS detector~\cite{Asai:2009ka} or in some additional dedicated stopping detector outside of the CMS detector~\cite{Hamaguchi:2006vu}. This can allow for experimental studies of the stau decays. Measurements of $\ensuremath{\tau_{\stau}}$ could then probe the coupling strength that governs the stau decays and thereby $\ensuremath{m_{\gravitino}}$~\cite{Buchmuller:2004rq} or the Peccei--Quinn scale $f_\mathrm{PQ}$~\cite{Brandenburg:2005he,Freitas:2011fx}. Moreover, one may be able to determine the mass of the EWIP LSP by analyzing the kinematics of the mentioned two-body decays~\cite{Buchmuller:2004rq,Brandenburg:2005he,Martyn:2006as,Hamaguchi:2006vu,Freitas:2011fx}. In the gravitino LSP case, a kinematically determined $\ensuremath{m_{\gravitino}}$ would allow us to test the Planck scale $M_\mathrm{Pl}$ microscopically~\cite{Buchmuller:2004rq} and to probe the reheating temperature $\ensuremath{T_{\mathrm{R}}}$ at colliders and thereby the viability of thermal leptogenesis~\cite{Pradler:2006qh}. Also studies of three-body stau decays are conceivable, which could give further insights into the nature of the EWIP LSP~\cite{Buchmuller:2004rq,Brandenburg:2005he,Hamaguchi:2006vu,Freitas:2011fx}. The success of such studies will depend sensitively on the number of staus that can be stopped for the analysis of their decays and thereby on the initial velocity distribution. This motivates us to consider such distributions in section~\ref{sec:stopping} below. \subsection{Thermal relic abundances of long-lived staus and cosmological constraints} Let us now turn to long-lived staus in the early Universe, the potential cosmological implications and associated constraints. The relic stau abundance $\ensuremath{Y_{\stau}}$ prior to decay depends on $\ensuremath{m_{\stau}}\xspace$, the left-right mixing of the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ and other details of the SUSY model, and on the early thermal history of the Universe. For a standard thermal history with a reheating temperature $\ensuremath{T_{\mathrm{R}}}$ that exceeds $\ensuremath{m_{\stau}}\xspace/20$, there was a period in which the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP was in thermal equilibrium with the primordial plasma. At the freeze-out temperature $\ensuremath{T_{\mathrm{f}}}\lesssim\ensuremath{m_{\stau}}\xspace/20$, at which the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ annihilation rate equals the Hubble rate, the by then non-relativistic staus $\ensuremath{{\tilde{\tau}}_{1}}\xspace$'s decouple from the thermal plasma. Taking into account all possible (co-)annihilation channels, the associated Boltzmann equation can be solved numerically~\cite{Belanger:2001fz,Belanger:2010gh}. 
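The structure of such a computation can be illustrated with a minimal single-species freeze-out sketch, i.e., a strongly simplified stand-in for the full coupled system solved in refs.~\cite{Belanger:2001fz,Belanger:2010gh}: a constant $\langle\sigma v\rangle$, a fixed effective number of relativistic degrees of freedom, and no coannihilation channels are assumed.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Minimal freeze-out sketch:  dY/dx = -<sigma v> s(x)/(H(x) x) (Y^2 - Y_eq(x)^2),  x = m/T.
M_PL_RED = 2.435e18                                   # reduced Planck mass [GeV]

def relic_yield(m, sigma_v, g=2.0, gstar=90.0):
    Yeq = lambda x: 0.145*(g/gstar)*x**1.5*np.exp(-x) # non-relativistic equilibrium yield
    def rhs(x, Y):
        T = m/x
        s = 2.0*np.pi**2/45.0 * gstar * T**3          # entropy density [GeV^3]
        H = np.sqrt(np.pi**2*gstar/90.0) * T**2 / M_PL_RED
        return [-sigma_v * s/(H*x) * (Y[0]**2 - Yeq(x)**2)]
    sol = solve_ivp(rhs, (5.0, 1000.0), [Yeq(5.0)],
                    method="Radau", rtol=1e-8, atol=1e-25)
    return sol.y[0, -1]

# illustrative input: m = 200 GeV and <sigma v> = 3e-8 GeV^-2 (roughly 10 pb)
print(relic_yield(200.0, 3e-8))                       # -> roughly 1e-13
\end{verbatim}
For weak-scale masses and annihilation cross sections of the order of $10~\mathrm{pb}$, such a sketch already reproduces yields of the size quoted below; the full treatment additionally includes the temperature dependence of $\langle\sigma v\rangle$, the evolution of the effective degrees of freedom, and all coannihilation channels.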
For a dominantly right-handed stau, $\ensuremath{{\tilde{\tau}}_{1}}\xspace\simeq\ensuremath{{\tilde{\tau}_{\mathrm{R}}}}\xspace$, the resulting yield is found to be governed mainly by $\ensuremath{m_{\stau}}\xspace$, \begin{align} \ensuremath{Y_{\stau}} \simeq (0.4-2.0)\times 10^{-13} \left(\frac{\ensuremath{m_{\stau}}\xspace}{100~{\rm Ge\kern -1pt V}}\right) \label{eq:YstauR} , \end{align} where larger values of the prefactor account for possible mass degeneracies and associated effects such as stau-slepton or stau-neutralino coannihilation~\cite{Asaka:2000zh,Fujii:2003nr,Pradler:2006hh,Berger:2008ti}. Confronting the yield~(\ref{eq:YstauR}) with the CBBN constraints (shown \eg\ in figure~5 of ref.~\cite{Pospelov:2008ta}), the following upper limit on the stau lifetime emerges~\cite{Pospelov:2006sc,Pospelov:2008ta} \begin{align} \ensuremath{\tau_{\stau}} \lesssim 5\times 10^{3}~\ensuremath{\mathrm{s}} \ . \label{eq:taustauCBBN} \end{align} For the cases with the gravitino LSP and the axino LSP, this implies $\ensuremath{m_{\stau}}\xspace$-dependent upper limits on the gravitino mass $\ensuremath{m_{\gravitino}}$ (\cf\ figure~6 in ref.~\cite{Pospelov:2008ta}) and the PQ scale $f_a$ (\cf\ figure~5 in ref.~\cite{Freitas:2009jb}), respectively. In particular, a PQ scale $f_\mathrm{PQ}$ around the scale of grand unification $M_\mathrm{GUT}\simeq 2\times 10^{16}\,{\rm Ge\kern -1pt V}$ is in conflict with the CBBN constraint for the $\ensuremath{m_{\stau}}\xspace$ range accessible at the LHC~\cite{Freitas:2009jb}. Moreover, the $\ensuremath{m_{\gravitino}}$ limit disfavors the kinematical $\ensuremath{m_{\gravitino}}$ determination~\cite{Steffen:2006wx} and thereby both the mentioned $M_\mathrm{Pl}$ determination~\cite{Buchmuller:2004rq} and the probing of $\ensuremath{T_{\mathrm{R}}}$ at colliders~\cite{Pradler:2006qh}. These CBBN limits on $\ensuremath{m_{\gravitino}}$ and $f_\mathrm{PQ}$ tighten also the upper limits on the reheating temperature $\ensuremath{T_{\mathrm{R}}}$ imposed by $\Omega_{\mathrm{EWIP}}\leq\Omega_{\mathrm{DM}}$~\cite{Pradler:2006hh,Freitas:2009fb}. The resulting $\ensuremath{T_{\mathrm{R}}}$ limits can then be in considerable tension with $\ensuremath{T_{\mathrm{R}}}\gtrsim 10^9~{\rm Ge\kern -1pt V}$ required for viable thermal leptogenesis with hierarchical heavy Majorana neutrino masses~\cite{Fukugita:1986hr,Davidson:2002qv,Buchmuller:2004nz,Blanchet:2006be,Antusch:2006gy}. For CMSSM scenarios with the gravitino LSP, the CBBN constraints have been found to be particularly restrictive~\cite{Cyburt:2006uv,Pradler:2006hh,Pradler:2007is,Kersten:2007ab,Pradler:2007ar}. In the framework of the CMSSM, the gaugino masses, the scalar masses, and the trilinear scalar interactions are assumed to take on the respective universal values $m_{1/2}$, $\ensuremath{m_{0}}\xspace$, and $A_0$ at $M_\mathrm{GUT}$. Specifying those values in addition to the mixing angle in the Higgs sector $\tan\beta$ and the sign of the Higgs-higgsino-mass parameter $\mu$, the low energy mass spectrum is given by the renormalization group running from $M_\mathrm{GUT}$ downwards. Here the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP case occurs in a large part of the parameter space and in particular for $\ensuremath{m_{0}}\xspace^2\ll m_{1/2}^2$.
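To illustrate the size of the resulting $\ensuremath{m_{\gravitino}}$ limits, the lifetime bound~(\ref{eq:taustauCBBN}) can be inverted with the same two-body width $\Gamma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\to\tau\ensuremath{\widetilde{G}})$ as in the sketch given in the introduction; for $\ensuremath{m_{\gravitino}}\ll\ensuremath{m_{\stau}}\xspace$ one finds $\ensuremath{m_{\gravitino}}^{\max}\simeq[\ensuremath{\tau_{\stau}}^{\max}\,\ensuremath{m_{\stau}}\xspace^{5}/(48\pi M_\mathrm{Pl}^{2}\hbar)]^{1/2}$. A minimal numerical evaluation (for orientation only):
\begin{verbatim}
import math

M_PL, HBAR = 2.435e18, 6.582e-25     # reduced Planck mass [GeV], hbar [GeV s]
TAU_MAX = 5.0e3                      # CBBN lifetime limit [s]

def m_gravitino_max(m_stau):
    """Largest gravitino mass [GeV] with tau_stau <= TAU_MAX (valid for m_gravitino << m_stau)."""
    return math.sqrt(TAU_MAX/HBAR * m_stau**5 / (48.0*math.pi*M_PL**2))

for m in (100.0, 200.0, 300.0):
    print(m, m_gravitino_max(m))     # ~0.3, ~1.6, ~4.5 GeV
\end{verbatim}
Gravitino masses above these values require a suppressed yield, which is the main motivation for the exceptionally small $\ensuremath{Y_{\stau}}$ discussed in section~\ref{sec:exceptionalYstau} below.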
The CBBN constraint~(\ref{eq:taustauCBBN})---which emerges for $\ensuremath{Y_{\stau}}$ given by~(\ref{eq:YstauR})---can then be translated into the following $\ensuremath{m_{\gravitino}}$-dependent limits~\cite{Pradler:2007is,Pradler:2007ar}: \begin{eqnarray} & m_{1/2} & \geq \,\, 0.9~{\rm Te\kern -1pt V} \left(\frac{\ensuremath{m_{\gravitino}}}{10~{\rm Ge\kern -1pt V}}\right)^{2/5} \ , \label{eq:monetwolimit} \\ & \ensuremath{T_{\mathrm{R}}} & \leq \,\, 4.9\times 10^7\,{\rm Ge\kern -1pt V} \left(\frac{\ensuremath{m_{\gravitino}}}{10~{\rm Ge\kern -1pt V}}\right)^{1/5} \ , \label{eq:TRlimit} \end{eqnarray} where the latter accounts also for $\Omega_{\ensuremath{\widetilde{G}}}\leq\Omega_{\mathrm{DM}}$. For $\ensuremath{m_{\gravitino}}$ at the ${\rm Ge\kern -1pt V}$ scale, the lower limit~(\ref{eq:monetwolimit}) then implies heavy colored sparticles that will be difficult to probe at the LHC. As already mentioned above, this provides additional motivation for this work since a SUSY discovery could still be possible via direct stau pair production. \subsection{Exceptionally small thermal relic stau abundances} \label{sec:exceptionalYstau} The $\ensuremath{T_{\mathrm{R}}}$ limit~(\ref{eq:TRlimit}) illustrates the mentioned tension with thermal leptogenesis being a viable explanation of the baryon asymmetry in the Universe.% \footnote{In scenarios that are less constrained than the CMSSM, the $\ensuremath{T_{\mathrm{R}}}$ limit will be more relaxed if the ratio of the masses $\ensuremath{m_{\gluino}}$ and $\ensuremath{m_{\stau}}\xspace$ is smaller than $\ensuremath{m_{\gluino}}/\ensuremath{m_{\stau}}\xspace>6$ encountered in the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ LOSP region of the CMSSM~\cite{Steffen:2006wx}.} In fact, this tension has motivated studies of scenarios with non-standard thermal history in which $\ensuremath{Y_{\stau}}$ is diluted by significant entropy production after decoupling and before BBN~\cite{Pradler:2006hh,Hamaguchi:2007mp,Hasenkamp:2010if} and scenarios with R-parity violation~\cite{Buchmuller:2007ui} such that~(\ref{eq:taustauCBBN}) is respected. Nevertheless, with a standard thermal history and R-parity conservation, it has also been found that SUSY models with enhanced stau-Higgs couplings~\cite{Ratz:2008qh,Pradler:2008qc} and/or the pattern $2\ensuremath{m_{\stau}}\xspace\simeq\ensuremath{m_{H^{0}}}\xspace$ between the mass of the stau \ensuremath{{\tilde{\tau}}_{1}}\xspace and the mass of the heavy neutral CP-even Higgs $\ensuremath{H^{0}}\xspace$~\cite{Pradler:2008qc} can lead to particularly efficient primordial stau annihilation and thereby to exceptionally small yields that may respect the CBBN constraint~\cite{Pospelov:2008ta}:% \footnote{There are other cosmological constraints on $\ensuremath{Y_{\stau}}$ in addition to the considered CBBN constraints, as briefly mentioned in section~\ref{sec:intro}. The $\ensuremath{Y_{\stau}}$ limits imposed by these other constraints are typically at most equally restrictive and are usually evaded also for $\ensuremath{Y_{\stau}}$ satisfying~(\ref{eq:exceptionalYstau}); \cf\ section~1 in ref.~\cite{Pradler:2008qc}.} \begin{align} \ensuremath{Y_{\stau}}\lesssim 2\times 10^{-15} \ . 
\label{eq:exceptionalYstau} \end{align} For such a yield, the $\ensuremath{\tau_{\stau}}$ limit~(\ref{eq:taustauCBBN}) is no longer applicable and larger values of $\ensuremath{m_{\gravitino}}$ or $f_\mathrm{PQ}$ are no longer disfavored such that $\ensuremath{T_{\mathrm{R}}}\gtrsim 10^9~{\rm Ge\kern -1pt V}$ and standard thermal leptogenesis may be viable. For the $\ensuremath{\widetilde{G}}$ LSP case, also the region $0.1\ensuremath{m_{\stau}}\xspace\lesssim\ensuremath{m_{\gravitino}}<\ensuremath{m_{\stau}}\xspace$, in which the kinematical $\ensuremath{m_{\gravitino}}$ determination~\cite{Buchmuller:2004rq} is expected to be viable, is no longer disfavored. In light of these appealing features, let us recall some aspects of the conditions that lead to enhanced stau-Higgs couplings; see also refs.~\cite{Ratz:2008qh,Pradler:2008qc}. (For details and notations of couplings we refer to appendix~\ref{app:staus}.) The stau-Higgs couplings are governed by $\tan\beta$, $\mu$, and the trilinear coupling $\ensuremath{A_{\tau}}\xspace$ in the stau sector. These parameters also determine the admixture of the left-handed and right-handed gauge eigenstates, $\ensuremath{{\tilde{\tau}_{\mathrm{L}}}}\xspace$ and $\ensuremath{{\tilde{\tau}_{\mathrm{R}}}}\xspace$, in the lighter stau mass eigenstate \begin{align} \ensuremath{{\tilde{\tau}}_{1}}\xspace=\cos\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\ensuremath{{\tilde{\tau}_{\mathrm{L}}}}\xspace+\sin\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\ensuremath{{\tilde{\tau}_{\mathrm{R}}}}\xspace \ . \label{eq:StauMixing} \end{align} Thereby, there is a relation between the size of the stau-Higgs couplings and the stau mixing angle $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$. This becomes most explicit in the decoupling limit~\cite{Gunion:2002zf}, in which the light CP-even Higgs boson $\ensuremath{h^{0}}\xspace$ is much lighter than $\ensuremath{{H}^0}\xspace$ and the CP-odd Higgs boson $\ensuremath{{A}^0}\xspace$. Here one finds that the $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau\ensuremath{h^{0}}\xspace$ coupling is proportional to $\sin 2\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ and the off-diagonal term in the stau mass-squared matrix, $\ensuremath{X_{\tau}}=\ensuremath{A_{\tau}}\xspace-\mu\tan\beta$, while the $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau\ensuremath{{H}^0}\xspace$ coupling is found to be proportional to $(\ensuremath{A_{\tau}}\xspace\tan\beta-\mu)\sin 2\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$. Thus, the absolute value of these couplings becomes maximal for $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\to\pi/4$, which corresponds to maximal left-right mixing in~(\ref{eq:StauMixing}), and sizeable for large $\tan\beta$ and large absolute values of $\mu$ and/or $\ensuremath{A_{\tau}}\xspace$. In the corresponding parameter regions, on which we focus in this work, one then finds enhanced stau-Higgs couplings and the mentioned efficient stau annihilation that can lead to~(\ref{eq:exceptionalYstau}). Here one has to stress that additional theoretical constraints might become important in regions with large stau-Higgs couplings. These regions can be associated with unwanted CCB minima in the scalar MSSM potential~\cite{Pradler:2008qc,Ratz:2008qh,Endo:2010ya,Hisano:2010re}. Our electroweak vacuum is then only a local minimum and as such metastable. This will still be a viable scenario if the quantum transition rate to the CCB minimum is so small that the lifetime of our electroweak vacuum exceeds the age of the Universe.
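The connection between these parameters, the mixing angle, and the depth of the stau direction in field space can be made explicit by diagonalizing the stau mass-squared matrix at tree level. The following minimal sketch uses the standard form of this matrix (sign conventions may differ from appendix~\ref{app:staus}; the input values are illustrative and not a benchmark point of this paper):
\begin{verbatim}
import numpy as np

# Tree-level 2x2 stau mass-squared matrix in the (stau_L, stau_R) basis; masses in GeV.
m_tau, m_Z, sw2 = 1.777, 91.19, 0.231

def stau_mixing(mL, mE, A_tau, mu, tan_beta):
    c2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)       # cos(2 beta)
    X_tau = A_tau - mu*tan_beta
    M2 = np.array([[mL**2 + m_tau**2 + (-0.5 + sw2)*m_Z**2*c2b, m_tau*X_tau],
                   [m_tau*X_tau, mE**2 + m_tau**2 - sw2*m_Z**2*c2b]])
    (m1sq, m2sq), U = np.linalg.eigh(M2)                  # eigenvalues in ascending order
    theta = np.arctan2(U[1, 0], U[0, 0])                  # stau_1 = cos(th) stau_L + sin(th) stau_R
    return np.sqrt(m1sq), np.sqrt(m2sq), theta

m1, m2, th = stau_mixing(mL=400.0, mE=250.0, A_tau=-500.0, mu=800.0, tan_beta=45.0)
print(m1, m2, np.sin(2.0*th))   # -> m1 ~ 180 GeV, m2 ~ 440 GeV, |sin(2 theta)| ~ 0.8
\end{verbatim}
For such inputs, the large value of $\mu\tan\beta$ in $\ensuremath{X_{\tau}}$ produces a light $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ with sizeable left-right mixing and thus enhanced stau-Higgs couplings, which is precisely the situation in which the metastability of the electroweak vacuum has to be checked.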
By studying the decay of the electroweak vacuum with the usual `bounce method'~\cite{Linde:1981zj} in an effective potential approach~\cite{Espinosa:1996qw}, it has been found from a fit in the relevant parameter space that a viable scenario has to respect the following metastability condition~\cite{Hisano:2010re}: \begin{align} \mu \ensuremath{\tan\beta}\xspace < 76.9\,\sqrt{m_{\tilde L_3}m_{\tilde E_3}} + 38.7\,(m_{\tilde L_3} + m_{\tilde E_3}) - 1.04\times 10^{4}~{\rm Ge\kern -1pt V} \, , \label{eq:CCB} \end{align} where $m_{\tilde L_3}$ and $m_{\tilde E_3}$ are respectively the left-handed and right-handed stau soft-breaking masses. We have checked this condition by explicitly constructing the bounce action for several parameter points and find agreement within the uncertainty given in~\cite{Hisano:2010re}. However, this condition is not as rigid as, for example, bounds from direct SUSY particle searches or flavor-changing decays since only the exponential contribution to the decay of the electroweak vacuum can be evaluated easily, while a calculation of the full width of the decay into the CCB minimum is highly non-trivial. Nevertheless, scenarios in which an exceptional yield~(\ref{eq:exceptionalYstau}) results from enhanced stau-Higgs couplings only~\cite{Ratz:2008qh,Pradler:2008qc} can be disfavored by the CCB constraint~(\ref{eq:CCB}) if taken at face value (\cf\ sections~\ref{sec:cmssm} and~\ref{sec:cmssm_exceptional} and ref.~\cite{Endo:2010ya}).% \footnote{In a recent updated study \cite{Endo:2011uw} the authors of \cite{Endo:2010ya} also included collider implications of the cosmologically motivated $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ resonance region. However, they did not consider the additional $\ensuremath{b\bar{b}}\xspace$ and $gg$ direct production channels discussed in this work.} In scenarios with $2\ensuremath{m_{\stau}}\xspace\simeq\ensuremath{m_{H^{0}}}\xspace$, primordial stau annihilation can proceed efficiently via the $\ensuremath{{H}^0}\xspace$ resonance and thereby lead to an exceptionally small $\ensuremath{Y_{\stau}}$. Here the annihilation channel $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\to b\bar{b}$ turned out to be the most relevant one, which also benefits from enhanced stau-Higgs couplings. However, because of the $\ensuremath{{H}^0}\xspace$ resonance, an exceptional yield~(\ref{eq:exceptionalYstau}) is already possible with more moderate values of $\tan\beta$, $|\mu|$, and $|\ensuremath{A_{\tau}}\xspace|$~\cite{Pradler:2008qc}. Thereby, such scenarios can lead to~(\ref{eq:exceptionalYstau}) and still respect the discussed CCB constraints. There are additional constraints from B-physics observables and Higgs searches, which can become relevant in parameter regions with sizeable $\tan\beta$. In particular, the non-observation of the decay $B_s\to\mu^+\mu^-$ provides an upper limit on the corresponding branching ratio \cite{Asner:2010qj} \begin{align} \text{BR}(B_s \to \mu^+\mu^-) < 4.3\times 10^{-8}~\mbox{@}~95\%~\mathrm{CL} \ , \label{eq:Bmumu} \end{align} which sets stringent limits on the relevant parameter space. Also the measurement of \cite{Asner:2010qj} \begin{align} \text{BR}(b \to s \gamma) = (3.55 \pm 0.33) \times 10^{-4} \label{eq:b2sgamma} \end{align} can give relevant constraints. Furthermore, there are constraints on the Higgs sector of the MSSM in scenarios with large \ensuremath{\tan\beta}\xspace and small $m_{A^0}$ from Higgs searches in the $\tau\bar{\tau}$ and $\ensuremath{b\bar{b}}\xspace$ channels.
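Returning briefly to the metastability condition~(\ref{eq:CCB}), it is straightforward to evaluate for a given parameter point; a minimal Python sketch (with purely illustrative input values, and of course no substitute for the full bounce computation):
\begin{verbatim}
def ccb_metastable(mu, tan_beta, mL3, mE3):
    """Fitted metastability condition of eq. (CCB); all masses in GeV.
    mL3, mE3: left-/right-handed third-generation slepton soft masses."""
    bound = 76.9 * (mL3 * mE3)**0.5 + 38.7 * (mL3 + mE3) - 1.04e4
    return mu * tan_beta < bound

# illustrative numbers only
print(ccb_metastable(mu=500.0, tan_beta=30.0, mL3=300.0, mE3=300.0))  # True
\end{verbatim}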
For these Higgs searches, the most stringent limits have recently been set by the LHC experiments~\cite{Schumacher:2011jq,Chatrchyan:2011nx}. The study in ref.~\cite{Chatrchyan:2011nx}, \eg, excludes $m_{A^0}\lesssim 280~{\rm Ge\kern -1pt V}$ for $\ensuremath{\tan\beta}\xspace\gtrsim 50$ in the $m_h^{\mathrm{max}}$ benchmark scenario (defined \eg\ in~\cite{Chatrchyan:2011nx}) in the $\tau\bar{\tau}$ channel. However, as shown in section~\ref{sec:cmssm_exceptional}, scenarios with resonant primordial stau annihilation leading to~(\ref{eq:exceptionalYstau}) can respect these B-physics and collider constraints as well. In this paper, we investigate whether it can be possible to find manifestations of an exceptionally small yield~(\ref{eq:exceptionalYstau}) when studying the direct production of quasi-stable staus in current collider experiments. As we will see in the next sections, for this purpose it is crucial to consider not only the Drell--Yan\ process but to also include the additional channels from $\ensuremath{b\bar{b}}\xspace$ annihilation and $gg$ fusion in the cross section calculation. \section{Direct production of stau pairs at hadron colliders} \label{sec:production} \label{sec:stauprod} In this section we calculate the cross section for direct stau pair production at hadron colliders. We describe the relevant production channels and the methods used in our calculations. Numerical results are shown to illustrate the dependence on the SUSY parameters and to provide predictions for the Tevatron and the LHC. The obtained cross sections are independent of the stau lifetime. Within the MSSM, stau pairs can be produced directly at hadron colliders, \begin{align} pp(p\bar{p}) \to \ensuremath{{\tilde{\tau}}_{i}}\xspace^{}\ensuremath{{\tilde{\tau}}_{j}}\xspace^*, \end{align} where $\tilde{\tau}_{i,j}$ denotes either of the two stau mass eigenstates. After electroweak symmetry breaking the soft-breaking terms in the MSSM Lagrangian induce a mixing among the particles of identical color and electric charge. In the sfermion sector, left-handed and right-handed gauge eigenstates mix to form mass eigenstates, see appendix~\ref{app:staus}. The mixing is proportional to the mass of the SM partner fermion and can thus be sizeable for sleptons of the third generation. In the following, we concentrate on the production of pairs of the lighter stau, $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$. Results for ${\tilde\tau}_2^{\phantom{*}} {\tilde\tau_2}^*$ and ${\tilde\tau_1}^{\phantom{*}} {\tilde\tau_2}^*$ production can be obtained in close analogy. Their production cross sections, however, are suppressed by the heavier ${\tilde\tau_2}$ mass. \subsection[Direct $\tilde{\tau}_1 \tilde{\tau}_1^*$ production channels] {\boldmath Direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production channels} \label{subsec:overview} At hadron colliders, the leading contribution to direct $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production typically arises from the $q\bar{q}$ induced Drell--Yan\ type process at ${\cal O}(\alpha^2)$, see \figref{fig:feynman_dybb}\,(a). \FIGURE[t]{ \includegraphics{plots/diagrams_DY.eps} \hspace*{2.5cm} \includegraphics{plots/diagrams_b.eps}\\[1ex] \small (a) \hspace*{6cm} (b) \hspace*{2cm} \caption{Feynman diagrams for stau pair production (a)~via the Drell--Yan\ process and (b)~via \ensuremath{b\bar{b}}\xspace annihilation.
Here, $q= u,d,c,s$.} \label{fig:feynman_dybb} } The Drell--Yan\ production cross section depends only on the stau mass $\ensuremath{m_{\stau}}\xspace$ and the stau mixing angle $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$. Stau pairs can also be produced from \ensuremath{b\bar{b}}\xspace annihilation, mediated by the neutral gauge bosons ($\gamma$, $Z$) and by the neutral CP-even Higgs bosons ($h^0$, $H^0$), at the same order of perturbation theory. The corresponding Feynman diagrams are displayed in \figref{fig:feynman_dybb}\,(b). This channel is suppressed by the low bottom-quark density inside protons, but can be enhanced by on-shell Higgs propagators and by the bottom-Higgs and the stau-Higgs couplings in certain regions of the SUSY parameter space. Note that the CP-odd Higgs and Goldstone bosons, $A^0$ and $G^0$, do not couple to a diagonal $\ensuremath{{\tilde{\tau}}_{i}}\xspace^{}\ensuremath{{\tilde{\tau}}_{i}}\xspace^*$ pair at tree-level and, in the absence of CP-violating effects in the MSSM, there is also no induced mixing between the CP-even and the CP-odd Higgs boson states at higher orders of perturbation theory. The $A^0$ and $G^0$ bosons thus do not enter our calculation. Gluon-induced $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production is only possible at the one-loop level, mediated by a quark or squark loop, as shown in \figref{fig:feynman_gluglu}. \FIGURE[t]{ \includegraphics{plots/diagrams_glu} \caption{Feynman diagrams for the gluon fusion contribution to stau pair production. The quarks~$q$ and squarks $\tilde{q}_i$, $i=1,2$, running in the loops can be of any flavor.} \label{fig:feynman_gluglu} } Even though these contributions are formally of higher orders, ${\cal O}(\alpha_s^2 \alpha^2)$, they can give sizeable contributions at the proton-proton machine LHC at high center-of-mass energies where the $gg$ luminosity is significantly higher than the $q\bar{q}$~luminosity. Let us note again that the additional $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are not included in the general-purpose Monte Carlo event generators like \texttt{Pythia} \cite{Sjostrand:2000wi} or \texttt{Herwig} \cite{Corcella:2000bw}. We use the programs \FeynArts~\cite{Hahn:2000kx} and \FormCalc with \LoopTools~\cite{Hahn:1998yk} to generate and calculate the amplitudes corresponding to the Feynman diagrams of \figsref{fig:feynman_dybb}{fig:feynman_gluglu}. The Higgs boson masses and the $H^0$ width are computed with \FeynHiggs~\cite{Frank:2006yh}. We include QCD and SUSY-QCD corrections at NLO for the Drell--Yan\ channel predictions, calculated with \Prospino, by scaling our cross sections with the respective $K$-factors, $K\equiv\sigma_{\mathrm{NLO}}/\sigma_{\mathrm{LO}}$. Furthermore, we use a resummed effective $\ensuremath{b\bar{b}}\xspace h^0/\ensuremath{b\bar{b}}\xspace H^0$ vertex for the gluon fusion and \ensuremath{b\bar{b}}\xspace contributions, as explained below and in appendix~\ref{app:resum_b}. We do not include higher-order QCD and SUSY-QCD corrections to the Higgs-mediated channels. These are expected to be positive and similar to the results for on-shell Higgs production, see~\cite{Dittmaier:2011ti} and references therein. In this way our analysis gives a conservative estimate of the enhancement effects from the \ensuremath{b\bar{b}}\xspace and gluon fusion production channels. Note also that additional contributions to the direct $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production can arise from $W^{+}W^{-}$ fusion.
Those, however, would be smaller by at least one order of magnitude compared to the other channels~\cite{delAguila:1990yw} and are not included in our analysis. As motivated in section~\ref{sec:motivation}, we are particularly interested in parameter regions with enhanced stau--Higgs couplings and thus typically in scenarios with large \ensuremath{\tan\beta}\xspace. It has been known for a long time~\cite{Hall:1993gn,Hempfling:1993kv,Carena:1994bv,Pierce:1996zz,Carena:1999py} that radiative corrections to the $\ensuremath{b\bar{b}}\xspace h^0/\ensuremath{b\bar{b}}\xspace H^0$ vertex can be important, especially for large \ensuremath{\tan\beta}\xspace, and drive down the cross section compared to the tree-level result. As shown in \cite{Carena:1999py,Heinemeyer:2004xw} the leading \ensuremath{\tan\beta}\xspace-enhanced corrections can be resummed to all orders in perturbation theory by using an appropriate effective bottom-quark mass, \ensuremath{m_b^{\text{eff}}}\xspace, and effective $\ensuremath{b\bar{b}}\xspace h^0/\ensuremath{b\bar{b}}\xspace H^0$ couplings. We adopt this approach, as explained in detail in appendix \ref{app:resum_b}. At hadron colliders, the gluon-fusion and \ensuremath{b\bar{b}}\xspace-annihilation processes with an $s$-channel Higgs boson can become resonant in regions of the SUSY parameter space in which the Higgs boson is heavier than the two produced staus. For intermediate $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ masses respecting the robust LEP limit $\ensuremath{m_{\stau}}\xspace\geq 82~{\rm Ge\kern -1pt V}$~\cite{Nakamura:2010zzi}, the lighter CP-even Higgs boson, $h^0$, is expected to be too light to go on-shell ($m_{h^0} < 140~{\rm Ge\kern -1pt V}$, \eg, \cite{Djouadi:2005gj}). This is different for the heavier $H^0$ boson. In parameter regions with $m_{H^0} \ge 2 \ensuremath{m_{\stau}}\xspace$ we therefore include the total decay width of the $\ensuremath{H^{0}}\xspace$ boson, $\Gamma_\ensuremath{H^{0}}\xspace$, in the propagator, \begin{eqnarray*} \frac{1}{p^2-\ensuremath{m_{H^{0}}}\xspace^2} \longrightarrow \frac{1}{p^2-\ensuremath{m_{H^{0}}}\xspace^2+i\ensuremath{m_{H^{0}}}\xspace\Gamma_{H^0}} \, . \label{eq:higgswidth} \end{eqnarray*} \subsection{Numerical results} \label{subsec:numerical} Let us now investigate direct $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production at hadron colliders numerically. Our focus is on the impact of the $\ensuremath{b\bar{b}}\xspace$-annihilation and the gluon-fusion channels in comparison to the Drell--Yan\ process. The cross section for direct stau production depends mainly on $\ensuremath{m_{\stau}}\xspace$, $m_{H^0}$, $\ensuremath{\tan\beta}\xspace$, and on $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ (or equivalently on $\mu$ and $A_{\tau}$). It also depends on the \ensuremath{H^{0}}\xspace~boson width, $\Gamma_{\ensuremath{H^{0}}\xspace}$, and thus indirectly on the SUSY mass spectrum. In addition, squark masses enter indirectly via the loops in the gluon-fusion channel and, as does the trilinear coupling $A_t$ in the stop sector, via the effective bottom couplings. We use the following input parameters in our numerical study.
As a starting point, we choose a $\ensuremath{{\tilde{\tau}}_{1}}\xspace$-LOSP scenario with moderate squark masses and a large stau--Higgs coupling, fixed by the following soft-breaking parameters at the low scale: \begin{align} \begin{split} M_1 &= M_2 = M_3 = 1.2~{\rm Te\kern -1pt V}, \qquad \ensuremath{A_{{t}}}=\ensuremath{A_{{b}}}=\ensuremath{A_{\tau}}\xspace=600~{\rm Ge\kern -1pt V}, \\ m_{\tilde Q_i} &= m_{\tilde U_i} = m_{\tilde D_i} = 1~{\rm Te\kern -1pt V}, \qquad m_{\tilde L_{1/2}} = m_{\tilde E_{1/2}} = 500~{\rm Ge\kern -1pt V}, \label{eq_SUSYinputs} \end{split} \end{align} where $M_i$ denote the gaugino mass parameters associated with the SM gauge groups U$(1)_{\mathrm{Y}}$, SU$(2)_{\mathrm{L}}$, and SU$(3)_c$, $m_{\tilde Q_i}$ ($m_{\tilde U_i}$ and $m_{\tilde D_i}$) the left-handed (right-handed) squark soft-breaking masses, $m_{\tilde L_{1/2}}$ ($m_{\tilde E_{1/2}}$) the left-handed (right-handed) slepton soft-breaking masses for the first two generations, and $\ensuremath{A_{{b}}}$ the trilinear coupling in the sbottom sector. If not stated otherwise, we choose \begin{align} \begin{split} \ensuremath{\theta_{{\tilde{\tau}}}}\xspace &= 45^{\circ}, \qquad \ensuremath{m_{\stau}}\xspace = 200~{\rm Ge\kern -1pt V}, \qquad \\ \ensuremath{\tan\beta}\xspace &= 30, \qquad~ \mu = 500~{\rm Ge\kern -1pt V}, \qquad m_A = 400~{\rm Ge\kern -1pt V}, \end{split} \label{eq_STAUinputs} \end{align} as inputs for the third-generation sleptons, as discussed in appendix~\ref{app:staus}, and for the Higgs sector, where $m_A$ denotes the mass of the CP-odd Higgs boson $A^0$. With these parameters, the considered scenario falls into the decoupling limit of the MSSM~\cite{Gunion:2002zf} where $\ensuremath{m_{H^{0}}}\xspace \approx m_A$. Note that the chosen value of \ensuremath{\tan\beta}\xspace (and also of $\mu$) is rather moderate compared to the cosmologically motivated scenarios considered in ref.~\cite{Pradler:2008qc} and discussed in section~\ref{sec:exceptionalYstau}. From the input parameters~(\ref{eq_SUSYinputs}) and~(\ref{eq_STAUinputs}), we calculate the physical MSSM parameters using tree-level relations for sfermions, neutralinos, and charginos. Physical masses are then passed to \Prospino to calculate the Drell--Yan\ $K$-factors at NLO in QCD and SUSY-QCD. The NLO corrections to the Drell--Yan\ channel typically amount to $20$--$40\%$ in the considered parameter space. SM input parameters are chosen according to~\cite{Nakamura:2010zzi} \begin{align} M_Z & = 91.1876~{{\rm Ge\kern -1pt V}},& M_W & = 80.4248~{\rm Ge\kern -1pt V},& G_F & =1.1664\times10^{-5}~{\rm Ge\kern -1pt V}^{-2}, \nonumber\\ m_b^{\overline{\text{MS}}}(M_Z) & = 2.936~{\rm Ge\kern -1pt V},& m_t & = 173.1~{{\rm Ge\kern -1pt V}},& m_{\tau} & = 1.776~{{\rm Ge\kern -1pt V}}. \label{eq_SMinputs} \end{align} We use the MSTW08LO \cite{Martin:2009iq} set of parton distribution functions (PDFs) and use the running strong coupling constant $\alpha_s(\mu_R)$ they provide. Factorization and renormalization scales are set to the mass of the produced stau $\mu_R=\mu_F=\ensuremath{m_{\stau}}\xspace$. At this point we want to mention again that our calculation of the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels is formally at LO. By using an effective bottom-quark mass, \ensuremath{m_b^{\text{eff}}}\xspace, and effective $\ensuremath{b\bar{b}}\xspace h^0/\ensuremath{b\bar{b}}\xspace H^0$ couplings, however, the dominant \ensuremath{\tan\beta}\xspace-enhanced corrections are included in our results.
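To give an impression of the size of the resummed corrections just mentioned, the following sketch evaluates one commonly quoted approximate form of the leading \ensuremath{\tan\beta}\xspace-enhanced gluino--sbottom contribution $\Delta_b$ (see, \eg, \cite{Carena:1999py}) and the corresponding reduction of the effective bottom mass. The numerical inputs are rough values inspired by~(\ref{eq_SUSYinputs}) and~(\ref{eq_STAUinputs}); the sbottom masses and $\alpha_s$ are assumptions and the electroweak (stop-higgsino) term is neglected, while the precise expressions used in our calculation are those of appendix~\ref{app:resum_b}:
\begin{verbatim}
import math

def I_loop(a, b, c):
    """Standard three-mass loop function; arguments are squared masses (GeV^2)."""
    return (a*b*math.log(a/b) + b*c*math.log(b/c) + c*a*math.log(c/a)) / \
           ((a - b) * (b - c) * (a - c))

def delta_b(alpha_s, m_gluino, mu, tan_beta, msb1, msb2):
    """Leading tan(beta)-enhanced gluino-sbottom contribution to Delta_b."""
    return (2.0 * alpha_s / (3.0 * math.pi)) * m_gluino * mu * tan_beta \
           * I_loop(msb1**2, msb2**2, m_gluino**2)

db = delta_b(alpha_s=0.09, m_gluino=1200.0, mu=500.0, tan_beta=30.0,
             msb1=950.0, msb2=1050.0)        # illustrative inputs
mb_eff = 2.936 / (1.0 + db)                  # schematic: m_b -> m_b/(1+Delta_b)
print(f"Delta_b ~ {db:.2f},  effective bottom mass ~ {mb_eff:.2f} GeV")
\end{verbatim}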
Nevertheless, we do not consider non-\ensuremath{\tan\beta}\xspace-enhanced higher-order corrections, and the remaining renormalization and factorization scale dependence yields a possibly large theoretical uncertainty in our cross section predictions. A more detailed study at NLO would be desirable, taking also uncertainties due to the dependence on the PDF set, and the bottom-quark PDF in particular, into account. In \figref{fig:crosssections1} we show the direct production cross section for $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ pairs at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ as a function of (a)~\ensuremath{m_{\stau}}\xspace, (b)~\ensuremath{\theta_{{\tilde{\tau}}}}\xspace, (c)~\ensuremath{m_{H^{0}}}\xspace, and (d)~\ensuremath{\tan\beta}\xspace. \FIGURE[t]{ \includegraphics[width=.49\textwidth,]{./plots/scan_mstau1_combined.eps} \includegraphics[width=.49\textwidth,]{./plots/scan_ThetaTau_normal_combined.eps}\\[.5ex] {\small \hspace*{2cm} (a) \hspace*{.47\linewidth} (b)} \\[2.5ex] \includegraphics[width=.49\textwidth,]{./plots/scan_mA_combined.eps} \includegraphics[width=.49\textwidth,]{./plots/scan_tanb_normal_combined.eps}\\[.5ex] {\small \hspace*{2cm} (c) \hspace*{.47\linewidth} (d)} \\[-1ex] \caption{Cross section of direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$-pair production (solid lines, blue) and the Drell--Yan\ prediction (dashed lines, red) at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ as a function of (a)~\ensuremath{m_{\stau}}\xspace, (b)~\ensuremath{\theta_{{\tilde{\tau}}}}\xspace, (c)~\ensuremath{m_{H^{0}}}\xspace, and (d)~\ensuremath{\tan\beta}\xspace. No kinematical cuts are applied. SUSY input parameters are as given in~(\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}) if not varied or unless stated otherwise in the legend of the respective panel. Note that $\ensuremath{m_{H^{0}}}\xspace \approx m_A$ (decoupling limit) holds in most of the shown regions.} \label{fig:crosssections1} } The remaining SUSY parameters are fixed according to (\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}). In figures \ref{fig:crosssections1}\,(b) and~(d), we move to $\ensuremath{m_{\stau}}\xspace=190~{\rm Ge\kern -1pt V}$, where stau production is possible via on-shell \ensuremath{H^{0}}\xspace exchange. The dashed (red) lines show the Drell--Yan\ (DY) cross section at NLO, whereas the solid (blue) lines include the additional $\ensuremath{b\bar{b}}\xspace$ and $gg$~contributions. The Drell--Yan\ cross section depends only on \ensuremath{m_{\stau}}\xspace and \ensuremath{\theta_{{\tilde{\tau}}}}\xspace, as already mentioned above. It decreases strongly for increasing $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ masses and varies with $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ by a factor that can be at most slightly larger than $2$. As shown in \figref{fig:crosssections1}\,(b), the cross section takes on its largest value for $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace \approx 0$, \ie, for an almost left-handed \ensuremath{{\tilde{\tau}}_{1}}\xspace. The factor of $\approx2$ difference between the Drell--Yan\ cross sections at $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace \approx 0$ and $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace \approx \pi/2$ is determined mainly by different gauge couplings of the left-handed and right-handed states. It is almost independent of kinematics and hardly affected when going from $\sqrt{S}=14~{\rm Te\kern -1pt V}$ to $7~{\rm Te\kern -1pt V}$.
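The origin of this left-right asymmetry can be made explicit at the coupling level: with the mixing convention~(\ref{eq:StauMixing}), the $Z\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace$ coupling is proportional to $I_3\cos^2\ensuremath{\theta_{{\tilde{\tau}}}}\xspace-Q\sin^2\theta_W$ with $I_3=-1/2$ and $Q=-1$, while the photon coupling is independent of $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$. The following toy sketch evaluates only this coupling factor; it illustrates the trend but does not reproduce the full factor of $\approx 2$, which also involves the photon exchange, the $\gamma$--$Z$ interference, the quark couplings, and the PDF convolution:
\begin{verbatim}
import math

SW2 = 0.231   # sin^2(theta_W), assumed value

def z_stau1_coupling_factor(theta):
    """Coupling factor I_3*cos^2(theta) - Q*sin^2(theta_W) for the stau
    (I_3 = -1/2, Q = -1) in the mixing convention of eq. (StauMixing)."""
    return -0.5 * math.cos(theta)**2 + SW2

print(f"left-handed  (theta=0):    {z_stau1_coupling_factor(0.0):+.3f}")
print(f"right-handed (theta=pi/2): {z_stau1_coupling_factor(math.pi/2):+.3f}")
\end{verbatim}
The larger magnitude of the coupling for the left-handed configuration goes in the same direction as the larger Drell--Yan\ cross section at $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\approx 0$ seen in \figref{fig:crosssections1}\,(b).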
The impact of the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels depends strongly on the mass hierarchy between \ensuremath{{\tilde{\tau}}_{1}}\xspace and \ensuremath{{H}^0}\xspace, as can clearly be seen in figures~\ref{fig:crosssections1}\,(a) and~(c). If $\ensuremath{m_{H^{0}}}\xspace > 2 \ensuremath{m_{\stau}}\xspace$, these additional channels can change the direct production cross section by more than one order of magnitude with respect to the Drell--Yan\ result. At the threshold $\ensuremath{m_{H^{0}}}\xspace=2\ensuremath{m_{\stau}}\xspace$, the $\ensuremath{b\bar{b}}\xspace$ and $gg$ contributions drop steeply and are only marginally important for $\ensuremath{m_{H^{0}}}\xspace\ll2\ensuremath{m_{\stau}}\xspace$. Figures~\ref{fig:crosssections1}\,(b) and (d) illustrate the dependence of the total direct production cross section on the parameters $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ and $\ensuremath{\tan\beta}\xspace$ that govern the stau-Higgs-coupling strength. Here, the total direct production cross section is dominated by on-shell Higgs production. Thus, the dependence on $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ and $\ensuremath{\tan\beta}\xspace$ reflects the $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\ensuremath{H^{0}}\xspace$ coupling, discussed in section~\ref{sec:exceptionalYstau} and given in appendix \ref{app:stauhiggs_couplings}. As shown there and illustrated here, this coupling is basically proportional to $\sin2\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ and also to $\ensuremath{\tan\beta}\xspace$ (or more precisely to $\ensuremath{A_{\tau}}\xspace\ensuremath{\tan\beta}\xspace$). The additional contributions from the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are tiny in cases of very small mixing, $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\to 0, \pi$. The exact position of the minimum depends on the relative importance of the $\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace\ensuremath{H^{0}}\xspace$ and $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau\ensuremath{h^{0}}\xspace$ couplings, and can be slightly above/below $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace = 0, \pi$, see also (\ref{eq:higgses-stau-stau-couplings-PHYS}). In turn, they become most important for maximal mixing, \ie, at $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\approx\pi/4$. There, the additional contributions push up the total direct production cross section by up to two orders of magnitude for very large \ensuremath{\tan\beta}\xspace and are already sizeable for small \ensuremath{\tan\beta}\xspace. Let us now turn to a scenario where the \ensuremath{H^{0}}\xspace is very heavy and thus almost decoupled, $\ensuremath{m_{H^{0}}}\xspace=1~{\rm Te\kern -1pt V}$. We again investigate the dependence of the total cross section on \ensuremath{\theta_{{\tilde{\tau}}}}\xspace and \ensuremath{\tan\beta}\xspace, shown in \figref{fig:crosssections2}. \FIGURE[t]{ \includegraphics[width=.49\textwidth,]{./plots/scan_ThetaTau_combined.eps} \includegraphics[width=.49\textwidth,]{./plots/scan_tanb_combined.eps}\\[.5ex] {\small \hspace*{2cm} (a) \hspace*{.47\linewidth} (b)} \\[-1ex] \caption{Cross section of direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$-pair production (solid lines, blue) and the Drell--Yan\ prediction (dashed lines, red) at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ as a function of (a)~\ensuremath{\theta_{{\tilde{\tau}}}}\xspace and (b)~\ensuremath{\tan\beta}\xspace. No kinematical cuts are applied. 
SUSY input parameters as given in (\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}) if not varied or stated otherwise. Note that here $\ensuremath{m_{H^{0}}}\xspace \approx m_A = 1~{\rm Te\kern -1pt V}$ (decoupling limit).} \label{fig:crosssections2} } We focus on parameters that allow for enhanced $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\ensuremath{h^{0}}\xspace$ couplings, \ie\ large values for $\mu$ and $\ensuremath{\tan\beta}\xspace$. In \figref{fig:crosssections2}\,(a) we consider $\mu=800~{\rm Ge\kern -1pt V}$ and $\ensuremath{\tan\beta}\xspace=50$ while the other parameters are fixed according to (\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}). Again, the contribution from the additional $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels can be sizeable. The enhancement amounts to a factor between two and three when considering very large values of \ensuremath{\tan\beta}\xspace and maximal mixing $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace \approx \pi/4$. Here, the dominant contributions come mainly from off-shell \ensuremath{h^{0}}\xspace exchange together with large stau-Higgs couplings. Thus, the relative importance of the $gg$ and the $\ensuremath{b\bar{b}}\xspace$ channels can be different compared to situations with dominant contributions from on-shell \ensuremath{H^{0}}\xspace exchange since the two Higgses couple differently to the quark and squark loops. Clearly, for a larger contribution of the loop-induced $gg$ channel, a stronger dependence on the squark masses is introduced in our calculation. This concerns not only the overall mass scale but also the mass splitting between the squarks, as is well known for on-shell Higgs production via gluon fusion, see \cite{Djouadi:2005gj} and references therein. For example, contributions from squark loops get small when squarks within one generation are almost degenerate. Additionally, slight enhancements in the $gg$ channel can appear at thresholds where the resonance requirement $2m_{\tilde q} \approx \ensuremath{m_{H^{0}}}\xspace$ is fulfilled between squarks in the loops and the heavy Higgs \ensuremath{H^{0}}\xspace, as shown in ref.~\cite{Borzumati:2009zx}. We want to note that, despite the large couplings, all parameter points considered above are in agreement with the CCB constraint~(\ref{eq:CCB}). We summarize the potential impact of the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels for the SUSY scenario defined in (\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}) again in \figref{fig:inclusive_compare}, \FIGURE[t]{ \includegraphics[width=.75\textwidth,]{./plots/scan_mstau1_compareDY.eps} \caption{Direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$-pair production cross sections as a function of $\ensuremath{m_{\stau}}\xspace$ for the SUSY scenario with (\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}) at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ (top, red) and $7~{\rm Te\kern -1pt V}$ (middle, blue) and the Tevatron with $\sqrt{S}=1.96~{\rm Te\kern -1pt V}$ (bottom, green). The Drell--Yan\ predictions are shown by the dashed lines and the full cross section, including $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels, by the solid lines.} \label{fig:inclusive_compare} } where the Drell--Yan\ predictions (dashed lines) and the full cross sections (solid lines) are shown for $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production at the LHC for $\sqrt{S}=14~{\rm Te\kern -1pt V}$ (top, red) and for $7~{\rm Te\kern -1pt V}$ (middle, blue).
When going down from $14~{\rm Te\kern -1pt V}$ to $7~{\rm Te\kern -1pt V}$, the cross section decreases by up to about a factor of 5. The $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels, however, remain similarly important in the region where on-shell \ensuremath{H^{0}}\xspace exchange contributes. Thus, for both $\sqrt{S}=7~{\rm Te\kern -1pt V}$ and $14~{\rm Te\kern -1pt V}$, the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels should not be neglected in a precise cross section prediction. We also show the direct $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production cross section expected at the Tevatron with $\sqrt{S}=1.96~{\rm Te\kern -1pt V}$ (bottom, green). Due to the higher parton momentum fractions $x$ needed at the Tevatron, the gluon and the $\ensuremath{b\bar{b}}\xspace$ luminosities are reduced compared to the LHC case. Accordingly, the respective $gg$ and $\ensuremath{b\bar{b}}\xspace$ contributions to direct stau production are only small, as illustrated. \section{Collider phenomenology with directly produced long-lived staus} \label{sec:phenolonglived} In this section we focus on scenarios with long-lived staus, \ie, $\tau_{\ensuremath{{\tilde{\tau}}_{1}}\xspace}\gtrsim 10^{-6}~\mathrm{s}$. At collider experiments, pair production of long-lived staus will give a clear CHAMP signal in the detectors if kinematical cuts are applied to discriminate between signal and muon background. Here, we study the impact of these kinematical cuts on the direct production cross section prediction and differential distributions. We show that experimental observation of direct stau production could provide important insights into the SUSY model realized in nature. For particularly well-motivated cosmological scenarios, we find that relatively large numbers of staus are expected to get stopped already in the main detectors at the LHC. Thereby, analyses of stau decays may become a viable tool to identify the LSP and/or to probe high scales such as the Planck scale $M_\mathrm{Pl}$ or the Peccei--Quinn scale $f_\mathrm{PQ}$ in the laboratory. \subsection{Kinematical cuts} \label{sec:kinematicalcuts} For a realistic experimental analysis, we need to include kinematical cuts on the phase space of the staus to reduce possible backgrounds to the CHAMP signal. The signature of a CHAMP traversing a detector is a slowly moving minimum-ionizing particle with high transverse momentum $\ensuremath{p^\mathrm{T}}\xspace$. In the experiments, this results in a long time-of-flight (TOF) and an anomalously large ionization-energy loss rate ($dE/dx$)~\cite{Raklev:2009mg}. Since the CHAMP loses energy primarily through low-momentum-transfer interactions, it will be highly penetrating and will likely be reconstructed as a muon~\cite{Aaltonen:2009kea}. At hadron colliders, experimental CHAMP searches have been performed by the CDF~\cite{Aaltonen:2009kea} and the D0~\cite{Abazov:2008qu} collaborations at the Tevatron and are planned at the LHC in the near future~\cite{CMS-PAS-EXO-08-003}. In accordance with those analyses, we apply the following kinematical cuts on the produced staus: \begin{align} \begin{split} \label{eq:cuts} \ensuremath{p^\mathrm{T}}\xspace > 40~{\rm Ge\kern -1pt V},\qquad &0.4 < \beta < 0.9, \\ \vert \eta \vert < 0.7 ~\text{(Tevatron)},\qquad &\vert \eta \vert < 2.4 ~\text{(LHC)}, \end{split} \end{align} where the cuts have to be fulfilled by at least one of the \ensuremath{{\tilde{\tau}}_{1}}\xspace's.
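For concreteness, a minimal sketch of how these acceptance requirements might be applied to generated stau momenta (a toy illustration, not the experimental selection; the definitions of $\eta$ and $\beta$ are recalled below):
\begin{verbatim}
import math

def passes_champ_cuts(px, py, pz, E, eta_max=2.4):
    """Toy check of the cuts in eq. (cuts) for a single stau.
    Momenta and energy in GeV; eta_max = 0.7 (Tevatron) or 2.4 (LHC)."""
    p = math.sqrt(px**2 + py**2 + pz**2)
    pt = math.sqrt(px**2 + py**2)
    beta = p / E                                           # velocity |p|/E
    eta = -math.log(math.tan(math.acos(pz / p) / 2.0))     # pseudo-rapidity
    return pt > 40.0 and 0.4 < beta < 0.9 and abs(eta) < eta_max

def event_passes(stau1, stau2, eta_max=2.4):
    """The event is accepted if at least one of the two staus passes."""
    return (passes_champ_cuts(*stau1, eta_max=eta_max)
            or passes_champ_cuts(*stau2, eta_max=eta_max))
\end{verbatim}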
In~(\ref{eq:cuts}), $\eta = -\ln[\tan(\theta/2)]$ is the pseudo-rapidity and $\beta= |\mathbf{p}|/E$ the stau velocity. This should reduce the background from very slow moving muons to a negligible level~\cite{DeRoeck:2005bw}. For our theoretical predictions, we use the same inputs as in section~\ref{sec:production}. In particular, we include the inclusive NLO $K$-factors provided by \Prospino for the Drell--Yan\ channel, also when cuts are applied. Since QCD and SUSY-QCD corrections only affect the hadronic part of the considered \ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace production, the cut dependence of the $K$-factors is expected to be small. Note that we furthermore assume here that direct $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ production is the dominant $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ production source and do not include $\ensuremath{{\tilde{\tau}}_{1}}\xspace$'s resulting from cascade decays in our signal definition. Otherwise, an additional jet and/or lepton veto can be used to separate directly produced staus from ones produced at the end of a decay chain. We will briefly investigate the relative importance of these concurrent production mechanisms for representative CMSSM benchmark scenarios in section~\ref{sec:cascade_decays}. In \figref{fig:cuts_compare} \FIGURE[t]{ \includegraphics[width=.75\textwidth]{./plots/scan_mstau1_comparecuts.eps} \caption{Direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$-pair production cross sections before (dashed lines) and after (solid lines) application of the kinematical cuts~(\ref{eq:cuts}) as a function of $\ensuremath{m_{\stau}}\xspace$ for the SUSY scenario with (\ref{eq_SUSYinputs}) and (\ref{eq_STAUinputs}) at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ (top, red) and $7~{\rm Te\kern -1pt V}$ (middle, blue) and the Tevatron with $\sqrt{S}=1.96~{\rm Te\kern -1pt V}$ (bottom, green).} \label{fig:cuts_compare} } we compare the full direct production cross sections with (solid lines) and without (dashed lines) the kinematical cuts~(\ref{eq:cuts}) applied as a function of $\ensuremath{m_{\stau}}\xspace$ for the LHC with $\sqrt{S} =14~{\rm Te\kern -1pt V}$ (top, red) and $7~{\rm Te\kern -1pt V}$ (middle, blue) and the Tevatron with $\sqrt{S}=1.96~{\rm Te\kern -1pt V}$ (bottom, green). At the LHC, the cuts shift the excess slightly away from the $\ensuremath{H^{0}}\xspace$ threshold and towards smaller values of $\ensuremath{m_{\stau}}\xspace$ and reduce the overall cross section by some tens of percent. The reduction is stronger at the Tevatron, where in particular the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channel contributions get cut significantly, so that the Drell--Yan\ channel provides a good approximation for the full cross section. Assuming the produced \ensuremath{{\tilde{\tau}}_{1}}\xspace's to be stable on the scale of the detectors, our results for the Tevatron shown in \figref{fig:cuts_compare} can directly be compared with the CHAMP cross-section limit from the CDF collaboration~\cite{Aaltonen:2009kea} given in~(\ref{eq_tevatronlimit}). This comparison does not allow us to exclude any $\ensuremath{m_{\stau}}\xspace > 100~{\rm Ge\kern -1pt V}$ for the considered parameters. Nevertheless, smaller masses, allowed by the conservative LEP limit~(\ref{eq:mstauLEPlimit}), $82~{\rm Ge\kern -1pt V}<\ensuremath{m_{\stau}}\xspace<100~{\rm Ge\kern -1pt V}$, can be in tension with this limit.
However, any robust exclusion would require a full simulation including detector effects, which we do not perform here. Additional indirect stau-pair production mechanisms are not expected to alter this statement considerably since, as discussed above, they would contribute to a different signal region containing jets/leptons. Dedicated studies of the LHC experiments are announced for the near future. Because of the increased cross section at the LHC, they will probe in detail large parts of the small-\ensuremath{m_{\stau}}\xspace parameter space, where the stau can be produced via on-shell \ensuremath{H^{0}}\xspace exchange with only a relatively small amount of integrated luminosity already for $\sqrt{S}=7~{\rm Te\kern -1pt V}$. In fact, the experiments at the LHC have already performed searches for stable massive particles~\cite{Aad:2011yf, Khachatryan:2011ts}. However, those studies have searched for stable massive particles in the trackers and calorimeters. In those parts of the detectors, the sensitivity to color-singlet particles, such as the \ensuremath{{\tilde{\tau}}_{1}}\xspace, is reduced~\cite{Aad:2011yf}, and findings have only been interpreted for colored massive particles~\cite{Aad:2011yf, Khachatryan:2011ts}. So far we have concentrated on the integrated cross section. \FIGURE[t]{ \includegraphics[width=.49\textwidth]{./plots/MSSM_190_210_14_pt.eps}% \includegraphics[width=.49\textwidth]{./plots/MSSM_190_210_14_beta.eps}\\[.5ex] {\small \hspace*{1cm} (a) \hspace*{.45\linewidth} (b)} \caption{(a)~Transverse momentum $p^T$ and (b)~velocity $\beta=\vert\mathbf{p}\vert/E$ distributions of direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$-pair production at the LHC with $\sqrt{S} = 14~{\rm Te\kern -1pt V}$. The Drell--Yan\ predictions are shown by dashed lines and the full cross sections, including $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels, by solid lines. We consider $\ensuremath{m_{\stau}}\xspace =190~{\rm Ge\kern -1pt V}$ (red line) and $\ensuremath{m_{\stau}}\xspace =210~{\rm Ge\kern -1pt V}$ (blue line), whereas the other SUSY input parameters are as given in~(\ref{eq_SUSYinputs}) and~(\ref{eq_STAUinputs}).} \label{fig:distributions_pt_beta} } To further illustrate the importance of the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channel contributions and to investigate the impact of the cuts~(\ref{eq:cuts}) on the different channels, we show the differential distributions with respect to transverse momentum $\ensuremath{p^\mathrm{T}}\xspace$ and the velocity $\beta$ of the directly produced staus in \figref{fig:distributions_pt_beta}. We give results for the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ only, however, results for the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ are qualitatively identical. We use the basic parameter inputs~(\ref{eq_SUSYinputs}) and~(\ref{eq_STAUinputs}) and consider two distinct scenarios, $\ensuremath{m_{\stau}}\xspace =190~{\rm Ge\kern -1pt V}$ (red line) and $\ensuremath{m_{\stau}}\xspace =210~{\rm Ge\kern -1pt V}$ (blue line). Here, it is $\ensuremath{m_{H^{0}}}\xspace=400~{\rm Ge\kern -1pt V}$, \ie, in the first scenario the intermediate \ensuremath{H^{0}}\xspace~boson can go on-shell while it can only be produced off-shell in the second scenario. We apply the cut $|\eta|<2.4$ on the pseudo-rapidity of one of the staus to ensure that not both of the pair-produced staus leave the detector outside of the sensitive region. 
Cuts on $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ are not applied in order to remain independent of a specific choice of cut values. Also, their potential individual impact can be inferred directly from \figref{fig:distributions_pt_beta}. We show the Drell--Yan-cross-section prediction (dashed lines) now without any $K$-factors and the full cross section including also the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channel contributions (solid lines). Clearly, in both scenarios, staus produced in $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are softer and slower than their counterparts from the Drell--Yan-type production. The $\ensuremath{p^\mathrm{T}}\xspace$~distributions for the Drell--Yan\ channel peak at around $\ensuremath{m_{\stau}}\xspace$ and only a few staus are produced with very low $\ensuremath{p^\mathrm{T}}\xspace$. In contrast, the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channel contributions peak for low $\ensuremath{p^\mathrm{T}}\xspace$ and fall off rapidly. The velocity distributions for pure Drell--Yan\ production rise towards fast moving staus, $\beta\approx 1$, while adding the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels results in a rather flat distribution for intermediate values of $\beta$. This behavior is more pronounced for scenarios in which an on-shell $\ensuremath{H^{0}}\xspace$ exchange is possible (red lines) since here the relative importance of the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels with respect to the Drell--Yan\ channel is higher. In the case with $2\ensuremath{m_{\stau}}\xspace\gtrsim\ensuremath{m_{H^{0}}}\xspace$, with $2\ensuremath{m_{\stau}}\xspace$ still very close to $\ensuremath{m_{H^{0}}}\xspace$ (blue lines), the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels still contribute significantly and generate a large number of events with very slow staus. At this point we want to comment again on scenarios with a heavy, decoupled $\ensuremath{H^{0}}\xspace$~boson such as those considered in \figref{fig:crosssections2}. The contributions from \ensuremath{H^{0}}\xspace-boson-mediated processes are suppressed in this case, and the same is true if the \ensuremath{H^{0}}\xspace~boson is far off-shell ($\ensuremath{m_{H^{0}}}\xspace \ll 2\ensuremath{m_{\stau}}\xspace$). Still, the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels can be sizeable if \ensuremath{h^{0}}\xspace-mediated processes are important. For large $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\ensuremath{h^{0}}\xspace$ couplings (large left-right mixing, large $|\mu|\ensuremath{\tan\beta}\xspace$ and/or large $|\ensuremath{A_{\tau}}\xspace|$), we find that the additional production mechanisms, and predominantly the $gg$-fusion channel, can give cross section contributions of the same order of magnitude as the Drell--Yan\ induced process. The shapes of the corresponding $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ distributions are found to be similar to those of the Drell--Yan\ prediction but have a slightly softer $\ensuremath{p^\mathrm{T}}\xspace$ spectrum. To summarize, cuts on $\ensuremath{p^\mathrm{T}}\xspace$ or $\beta$ affect the three production channels to a very different extent and thus change the relative importance of each of the contributions considerably. If the cuts (\ref{eq:cuts}) are applied, large parts of the additional $\ensuremath{b\bar{b}}\xspace$ and $gg$ channel contributions can be lost.
We therefore recommend relaxing these cuts in future experimental searches to increase the sensitivity especially for cosmologically motivated scenarios with $2\ensuremath{m_{\stau}}\xspace \approx \ensuremath{m_{H^{0}}}\xspace$, even when this implies that increased backgrounds have to be taken into account. A more detailed study of the interplay between different cut values and expected backgrounds would be required but is beyond the scope of this paper.% \footnote{Here we would like to refer to the recent paper on direct stau production at the LHC by Heisig and Kersten~\cite{Heisig:2011dr} which appeared during the final stage of our work. In ref.~\cite{Heisig:2011dr}, the authors focus on the Drell--Yan-production mechanism and study the LHC discovery potential by performing a Monte Carlo analysis of the signal and the main dimuon background.} \subsection{Prospects for SUSY parameter determination} \label{sec:parameter_determination} In this subsection we want to demonstrate how one could use the fact that the Drell--Yan\ production cross section depends only on the stau mass and mixing angle, whereas the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels also depend on other SUSY parameters. In fact, the interplay of the Drell--Yan\ cross section and the additional channels might turn out to be helpful to determine SUSY parameters. If signals of a quasi-stable stau are observed at the LHC, one of the first measurements will be the determination of the stau mass $\ensuremath{m_{\stau}}\xspace$ by using TOF data from the muon chambers. The expected accuracy of this $\ensuremath{m_{\stau}}\xspace$ measurement has been estimated to be $<1\%$~\cite{Ambrosanio:2000ik,Ellis:2006vu}. With such a precise knowledge of $\ensuremath{m_{\stau}}\xspace$, there are high hopes that further SUSY parameters can be extracted from the cross section and differential distributions by comparing experimental results and theoretical predictions. Clearly, these measurements are possible in the case of decay chains with several MSSM particles involved~\cite{Ellis:2006vu,Ishiwata:2008tp,Biswas:2009rba,Feng:2009bd,Ito:2009xy,Heckman:2010xz,Kitano:2010tt,Ito:2010xj,Ito:2010un,Asai:2011wy}. However, in the following we present ideas for how such measurements could also be possible just from direct production. To disentangle those channels, appropriate jet and/or lepton vetoes are assumed. Moreover, the measurements require more integrated luminosity than needed for a potential stau discovery. First of all, once the stau mass is known, the Drell--Yan\ production cross section can be given as a function of the stau mixing angle $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ alone. If the direct stau production cross section can also be determined experimentally, this will allow us to determine $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ in a scenario in which the stau pair production cross section is governed by the Drell--Yan\ channel. As shown in figures~\ref{fig:crosssections1}\,(b) and \ref{fig:crosssections2}\,(a) and as already discussed, the Drell--Yan\ cross section is maximal in case of an almost purely left-handed \ensuremath{{\tilde{\tau}}_{1}}\xspace and minimal for $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace \approx \pi/2$.
An excess of the experimentally obtained cross section over the Drell--Yan\ expectation at $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\approx\pi/2$ would thus point to $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace<\pi/2$ and non-negligible mixing between the left-handed and right-handed eigenstates in Drell--Yan-channel-dominated scenarios. Furthermore, a sizeable deviation from $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace\approx\pi/2$ may also support the hypothesis that the observed CHAMP is indeed a stau and not a quasi-stable dominantly right-handed selectron or smuon. However, in general, a larger experimentally obtained cross section compared to the minimal Drell--Yan\ expectation for a certain mass could imply either $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace<\pi/2$ or sizeable contributions from $\ensuremath{b\bar{b}}\xspace$ annihilation and $gg$ fusion; \cf\ figures~\ref{fig:crosssections1}\,(b) and \ref{fig:crosssections2}\,(a). On the other hand, a significant excess of the measured cross section over the maximal Drell--Yan\ cross section prediction may provide a first hint for the importance of the $\ensuremath{b\bar{b}}\xspace$-annihilation and $gg$-fusion processes calculated in this work; see also section~\ref{sec:cmssm_exceptional} below. These possible ambiguities in the interpretation of experimental findings on the integrated cross section could be resolved by studying also the differential distributions. As we have seen above, the $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ distributions differ strongly from the Drell--Yan\ prediction when the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are important. From the shape of the experimentally measured distributions, one could then determine whether the Drell--Yan\ channel or the other channels give the dominant contribution to the production cross section. Also the distribution with respect to the invariant mass of the produced stau pair, $m_{\stau\stau^*}$, can be helpful for this purpose. In fact, the invariant mass distribution might even allow for the determination of the mass $\ensuremath{m_{H^{0}}}\xspace$ and the width $\Gamma_{\Hhiggs}$ of the \ensuremath{H^{0}}\xspace boson: If $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$, \ie, if the $\ensuremath{H^{0}}\xspace$~boson can go on-shell in the $\ensuremath{b\bar{b}}\xspace$ and $gg$~channels, there is the possibility to see the resonance of the $\ensuremath{H^{0}}\xspace$~boson in the invariant mass distribution of the staus at $m_{\stau\stau^*}\simeq\ensuremath{m_{H^{0}}}\xspace$ with a width given by $\Gamma_{\Hhiggs}$. To illustrate this procedure, we consider four benchmark points, $\alpha$, $\beta$, $\gamma$, and $\epsilon$, within the framework of the CMSSM with parameters listed in \tabref{tab:benchmarks}.
\TABULAR[t]{lcC{1.6cm}C{1.6cm}C{1.6cm}C{1.6cm}}{ \hline\hline \multicolumn{2}{c}{Benchmark point} & $\alpha$ & $\beta$ & $\gamma$ & $\epsilon$ \\ \hline\hline \ensuremath{m_{1/2}}\xspace &[{\rm Ge\kern -1pt V}] & $600$ & $1050$ & $600$ & $440$ \\ \ensuremath{m_{0}}\xspace &[{\rm Ge\kern -1pt V}] & $800$ & $30$ & $600$ & $20$ \\ \ensuremath{\tan\beta}\xspace & & $55$ & $55$ & $55$ & $15$ \\ $A_0$ &[{\rm Ge\kern -1pt V}] & $1600$ & $60$ & $1200$& $-250$ \\ \hline \ensuremath{m_{\stau}}\xspace &[{\rm Ge\kern -1pt V}] & $193$ & $136$ & $148$ & $153$ \\ \ensuremath{\theta_{{\tilde{\tau}}}}\xspace & & $81^{\circ}$ & $73^{\circ}$& $77^{\circ}$ & $76^{\circ}$ \\ \ensuremath{m_{H^{0}}}\xspace &[{\rm Ge\kern -1pt V}] & $402$ & $763$ & $413$ & $613$ \\ $\Gamma_{\ensuremath{H^{0}}\xspace}$ &[{\rm Ge\kern -1pt V}] & $15$ & $26$ & $16$ & $2.2$ \\ $m_{\tilde{g}}$ &[{\rm Ge\kern -1pt V}] & $1397$ & $2276$ & $1385$& $1028$ \\ avg. $m_{\tilde{q}}$ &[{\rm Ge\kern -1pt V}] & $1370$ & $1943$ & $1287$ & $894$ \\ $\mu$ &[{\rm Ge\kern -1pt V}] & $667$ & $1166$ & $648$ & $562$ \\ $A_{\tau}$ &[{\rm Ge\kern -1pt V}] & $515$ & $-143$ & $351$ & $-275$ \\ \hline $\text{BR}(b \to s\gamma)$ & [$10^{-4}$] & $3.08$ & $3.03$& $2.94$ & $3.00$ \\ $\text{BR}(B^0_s \to \mu^+\mu^-)$&[$10^{-8}$] & $1.65$ & $1.04$& $2.44$ & $0.30$ \\ $a_{\mu}$ &[$10^{-10}$] & $13.2$ & $11.5$& $16.8$ & $18.7$ \\ CCB \cite{Hisano:2010re} & & $\checkmark$ & -- & $\checkmark$ & $\checkmark$ \\ \hline \ensuremath{Y_{\stau}}\xspace &[$10^{-15}$] & $3.5$ & $2.5$ & $37.7$ & $164$ \\ \hline\hline }{Benchmark CMSSM scenarios $\alpha$, $\beta$, $\gamma$, and $\epsilon$ defined by the given values of $\ensuremath{m_{1/2}}\xspace$, $\ensuremath{m_{0}}\xspace$, $\ensuremath{\tan\beta}\xspace$, and $A_0$. For all points, $\mu>0$. Low-scale masses and parameters are calculated using \SPheno. We also provide the quantities that are subject to the constraints discussed in section~\ref{sec:motivation}, as obtained with \SuperISO, and the thermal relic stau yield $\ensuremath{Y_{\stau}}$, as obtained with \micromegas. The CCB constraint~(\ref{eq:CCB}), as obtained in~\cite{Hisano:2010re}, is respected by the scenarios $\alpha$, $\gamma$, and $\epsilon$, whereas point $\beta$ is in tension with this constraint. \label{tab:benchmarks} } The low-energy SUSY spectrum is obtained using \SPheno~\cite{Porod:2003um}, while the Higgs sector is recalculated with \FeynHiggs~\cite{Frank:2006yh}. We also refer to the constraints discussed in section~\ref{sec:motivation}, evaluated with \SuperISO~\cite{Arbey:2011zz}, and provide the thermal relic stau yield $\ensuremath{Y_{\stau}}$ as calculated with \micromegas~\cite{Belanger:2010gh}. Benchmark points $\alpha$ and $\beta$ are similar to points $B$ and $C$ of ref.~\cite{Pradler:2008qc}, respectively, where we have adjusted $\ensuremath{m_{0}}\xspace$ for point $\alpha$ and $\ensuremath{m_{1/2}}\xspace$ for point $\beta$ so that \SPheno provides low-energy spectra that are similar to those of points $B$ and $C$. Point $\gamma$ is very similar to point $\alpha$ but has a much larger stau yield. Point $\epsilon$ was already introduced in ref.~\cite{DeRoeck:2005bw}. Here, we are mainly interested in the ratio of \ensuremath{m_{\stau}}\xspace and \ensuremath{m_{H^{0}}}\xspace. In all four benchmark scenarios, stau production via an on-shell \ensuremath{H^{0}}\xspace~exchange is possible.
We have $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ for point $\alpha$, $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$ for point $\gamma$, and $2\ensuremath{m_{\stau}}\xspace \ll \ensuremath{m_{H^{0}}}\xspace$ for points $\beta$ and $\epsilon$. The stau-Higgs couplings are smallest for point $\epsilon$, where \ensuremath{\tan\beta}\xspace is relatively small. In \figref{fig:distributions_minv} we display the invariant mass distributions for the four benchmark points at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$. \FIGURE[t]{ \includegraphics[width=.495\textwidth,]{./plots/Alpha14_minv.eps}% \includegraphics[width=.495\textwidth]{./plots/Beta14_minv.eps}\\[.5ex] {\small \hspace*{2cm} (a) \hspace*{.45\linewidth} (b)} \\[2.5ex] \includegraphics[width=.495\textwidth,]{./plots/Gamma14_minv.eps}% \includegraphics[width=.495\textwidth]{./plots/Epsilon14_minv.eps}\\[.5ex] {\small \hspace*{2cm} (c) \hspace*{.45\linewidth} (d)} \\[-1ex] \caption{Invariant mass distributions of directly produced $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ pairs at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$. Drell--Yan\ predictions are shown by the dashed lines and the full cross sections, including the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels, by the solid lines. The distributions are shown for the benchmark points (a)~$\alpha$, (b)~$\beta$, (c)~$\gamma$, and (d)~$\epsilon$, which are defined in \tabref{tab:benchmarks}. } \label{fig:distributions_minv} } The kinematical cuts~(\ref{eq:cuts}) are applied, with the requirement $\vert\eta\vert < 2.4$ for both staus. The invariant mass distributions show a resonance peak at the mass of the \ensuremath{H^{0}}\xspace~boson on top of the Drell--Yan\ continuum in all four considered scenarios. For point $\alpha$ considered in \figref{fig:distributions_minv}\,(a), the peak is close to the threshold, $m_{\stau\stau^*}\approx2\ensuremath{m_{\stau}}\xspace$, and the distribution falls off steadily at higher invariant masses. Such a Higgs resonance at the beginning of the invariant mass distribution is a strong hint towards efficient stau annihilation in the early Universe, as further discussed in section~\ref{sec:cmssm_exceptional} below. In the invariant mass distribution for point $\gamma$ shown in \figref{fig:distributions_minv}\,(c), the \ensuremath{H^{0}}\xspace resonance lies on top of the maximum of the Drell--Yan\ contribution and the peak is very pronounced. For points $\beta$ and $\epsilon$, the Drell--Yan\ continuum is dominant and the resonance appears as a small bump at the tail of the distribution as can be seen in figures~\ref{fig:distributions_minv}\,(b) and~(d). Note that although the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels do not distort the shape of the Drell--Yan\ curve much in scenario $\beta$, they increase the differential cross section sizeably. This typically happens for large left-right mixing and large \ensuremath{\tan\beta}\xspace even in the case of a heavy, decoupled \ensuremath{H^{0}}\xspace~boson, see figure~\ref{fig:crosssections2}. Both the $\alpha$ and $\gamma$ scenarios would allow for a determination of the mass \ensuremath{m_{H^{0}}}\xspace and also the width $\Gamma_{\Hhiggs}$ (especially in case of point $\gamma$) with a few $\mathrm{fb}^{-1}$ of data at the LHC with $\sqrt{S} = 14~{\rm Te\kern -1pt V}$.
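As a sketch of how such a determination might proceed, one could fit a Breit--Wigner peak on top of a smooth continuum to the binned $m_{\stau\stau^*}$ spectrum. The following toy Python example does this on hypothetical pseudo-data; the parametrization, the continuum shape, and all numbers are assumptions for illustration (chosen roughly of the size of point $\alpha$) and not our actual analysis:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(m, n_res, m_H, gamma_H, n_cont, slope):
    """Toy shape: Breit-Wigner resonance plus an exponentially falling continuum."""
    bw = n_res / ((m**2 - m_H**2)**2 + (m_H * gamma_H)**2)
    return bw + n_cont * np.exp(-slope * m)

# hypothetical binned invariant-mass spectrum (bin centers in GeV)
m_bins = np.linspace(390.0, 700.0, 63)
pseudo_data = np.random.default_rng(1).poisson(
    model(m_bins, 2.0e11, 402.0, 15.0, 600.0, 0.004))

popt, pcov = curve_fit(model, m_bins, pseudo_data,
                       p0=[1.0e11, 410.0, 20.0, 500.0, 0.005], maxfev=10000)
print(f"fitted m_H0 = {popt[1]:.0f} GeV, Gamma_H0 = {popt[2]:.0f} GeV")
\end{verbatim}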
Nevertheless, also in scenarios with a rather heavy \ensuremath{H^{0}}\xspace~boson (such as point~$\beta$) and in scenarios with small \ensuremath{\tan\beta}\xspace (such as point~$\epsilon$), the LHC might eventually be able to determine the mass $\ensuremath{m_{H^{0}}}\xspace$ from the invariant mass distribution of the directly produced long-lived $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ pairs. Such a procedure only requires $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$ and is thus generic in large parts of the MSSM parameter space and within the CMSSM. Here a future study of theoretical and experimental uncertainties including detector effects and the possible contamination with SUSY backgrounds from cascade decays is necessary to allow for more precise predictions. \subsection{Prospects for the stopping of staus} \label{sec:stopping} Very slow moving CHAMPs might lose all their momentum and get stopped within the main detector or in some additional stopping detector. The analysis of their subsequent decays may then help to identify the LSP in an R-parity conserving setting (\cf section~\ref{sec:longlivedstau}) or to probe the size of the R-parity violating coupling. In such a way, \eg, the mass and couplings of the gravitino or axino might be tested in the future. In general, as a rule of thumb, CHAMPs with $\beta\gamma < 0.45$, as stated in~(\ref{eq:betagammacut}), are expected to get stopped in the detectors at the LHC. In the following, we show that staus produced directly via the $b\bar{b}$ and $gg$ channels provide a large additional source of potentially stopped objects, especially in the cosmologically motivated scenarios discussed in section~\ref{sec:exceptionalYstau}. In \figref{fig:distributions_bg} we give the number of directly produced staus for an integrated luminosity of $L=1~\mathrm{fb}^{-1}$ at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ as a function of $\beta\gamma=\vert\mathbf{p}\vert/m$ for the benchmark points $\alpha$ and $\beta$ introduced above and defined in \tabref{tab:benchmarks}. \FIGURE[t]{ \includegraphics[width=.49\textwidth,]{./plots/Alpha14nocuts_betagamma.eps} \includegraphics[width=.49\textwidth,]{./plots/Beta14nocuts_betagamma.eps}\\[.5ex] {\small \hspace*{2.17cm} (a) \hspace*{.455\linewidth} (b)} \\[-1ex] \caption{Directly produced staus as a function of $\beta\gamma=\vert\mathbf{p}\vert/m$ at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ for an integrated luminosity of $L=1~\mathrm{fb}^{-1}$. The full results, including $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels, are shown by the solid lines and the Drell--Yan\ predictions by the dashed lines. Staus with $\beta\gamma \lesssim 0.45$ are expected to get stopped in the LHC detectors. Distributions are shown for the benchmark points (a)~$\alpha$ and (b)~$\beta$, which are defined in \tabref{tab:benchmarks}.} \label{fig:distributions_bg} } We only require the pseudo-rapidity of a stau to fulfill $\vert\eta\vert < 2.4$ for it to be included in the shown histograms. No other kinematical cuts are imposed as we are especially interested in very slow moving objects. We count each produced stau individually. In both scenarios $\alpha$ and $\beta$, the number of potentially stopped staus, \ie, those with $\beta\gamma<0.45$, is enlarged when the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are included in the cross section prediction.
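The corresponding event numbers follow from simple counting; a minimal sketch with purely hypothetical inputs for the cross section within $\vert\eta\vert<2.4$ and for the fraction of staus with $\beta\gamma<0.45$ (the realistic numbers are those read off \figref{fig:distributions_bg}):
\begin{verbatim}
def n_stopped(sigma_fb, lumi_fb, frac_slow):
    """Expected number of stopped staus: cross section within acceptance (fb),
    integrated luminosity (fb^-1), fraction of staus with beta*gamma < 0.45.
    Each of the two staus per event is counted individually."""
    return 2.0 * sigma_fb * lumi_fb * frac_slow

print(n_stopped(sigma_fb=40.0, lumi_fb=1.0, frac_slow=0.2))  # hypothetical: 16.0
\end{verbatim}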
This enhancement is particularly substantial in scenario $\alpha$ where $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ and where the staus at the \ensuremath{H^{0}}\xspace~boson resonance are thus produced almost at rest in the center-of-mass frame. Here, a large sample of stopped staus could be collected already with a rather small integrated luminosity. For an integrated luminosity of $L=1~\mathrm{fb}^{-1}$, about $N\approx25$ staus would be stopped, which is encouraging. Indeed, from the Drell--Yan\ process alone, one would expect only $N\lesssim1$. For an account of the experimental prospects, it is thus crucial to consider also the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels. \section{Direct stau production within the CMSSM} \label{sec:cmssm} In this section we investigate the direct pair production cross section of light staus within the CMSSM. Our aim is to provide a precise prediction for the $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ cross section, taking the Drell--Yan\ process as well as the $\ensuremath{b\bar{b}}\xspace$~annihilation and $gg$~fusion channels into account. We are interested in parameter regions with a quasi-stable \ensuremath{{\tilde{\tau}}_{1}}\xspace LOSP, where the $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ cross section is of particular importance. We consider the \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace plane of the CMSSM with $A_0=2\ensuremath{m_{0}}\xspace$, $\ensuremath{\tan\beta}\xspace=55$, and $\mu > 0$, which is cosmologically motivated by the possible occurrence of exceptionally small stau yields~\cite{Pradler:2008qc}. In this plane, \ensuremath{\tan\beta}\xspace is large and the \ensuremath{{\tilde{\tau}}_{1}}\xspace prefers to be right-handed, \ie, $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace >\pi/4$; this is generic in the CMSSM due to the different running of the left-handed and right-handed soft masses. Thus, the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels can give large contributions to the stau production cross section. For the CMSSM benchmark scenarios introduced in section~\ref{sec:phenolonglived}, we compare our results for direct stau production with indirect stau production mechanisms via the production and decay of other heavier SUSY particles. We find that direct stau production is often the dominant source of staus at colliders, in particular, at lower center-of-mass energies when the production of other SUSY particles is suppressed by their heavier masses. \subsection{CMSSM scans of the direct stau pair production cross section} \FIGURE[t]{% \includegraphics[width=.95\textwidth]{./plots/scanCMSSM7tevlabels.eps}\\% \includegraphics[width=.95\textwidth]{./plots/scanCMSSM14tevlabels.eps}% \caption{Contours of the total direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace$ production cross section (shaded, colored) at the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ (top) and $14~{\rm Te\kern -1pt V}$ (bottom) after the cuts~(\ref{eq:cuts}) in the CMSSM $m_0$-$m_{1/2}$ plane with $\ensuremath{\tan\beta}\xspace=55$, $A_0=2\ensuremath{m_{0}}\xspace$, and $\mu>0$. The white region in the lower left is excluded because of a tachyonic spectrum, impossible EWSB, or $\ensuremath{m_{\stau}}\xspace\le82~{\rm Ge\kern -1pt V}$. In the upper white area, the lightest neutralino $\ensuremath{\tilde{\chi}^{0}}_{1}$ is the LOSP.
Parameter points in the gray area around $\ensuremath{m_{0}}\xspace\sim\ensuremath{m_{1/2}}\xspace\sim500~{\rm Ge\kern -1pt V}$ do not respect the constraint $\text{BR}(B_s^0\to\mu^+\mu^-)<4.3\times 10^{-8}$. The lower hatched area is in tension with the CCB constraint~(\ref{eq:CCB}). The thin solid black lines indicate $\ensuremath{m_{h^{0}}}\xspace$, the dashed black lines $\ensuremath{m_{\stau}}\xspace$, and the white lines $\ensuremath{m_{H^{0}}}\xspace$, where the mass values are given in units of ${\rm Ge\kern -1pt V}$ at the respective contour. For a naive estimate of the discovery potential, thick black lines show regions in which at least one $\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace$-pair-production event is expected for integrated luminosities of $\mathcal{L}=1~\mathrm{fb}^{-1}$ and $5~\mathrm{fb}^{-1}$ at $\sqrt{S}=7~{\rm Te\kern -1pt V}$ and $\mathcal{L}=1~\mathrm{fb}^{-1}$ and $10~\mathrm{fb}^{-1}$ at $\sqrt{S}=14~{\rm Te\kern -1pt V}$.} \label{fig:cmssm_LHC} } \afterpage{\clearpage} \FIGURE[h]{ \includegraphics[width=.95\textwidth]{./plots/scanCMSSM2tevlabels.eps} \caption{Contours of the total direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace$ production cross section (shaded, colored) at the Tevatron with $\sqrt{S}=1.96~{\rm Te\kern -1pt V}$ after the cuts~(\ref{eq:cuts}) in the CMSSM $m_0$-$m_{1/2}$ plane with $\ensuremath{\tan\beta}\xspace=55$, $A_0=2\ensuremath{m_{0}}\xspace$, and $\mu>0$. All shown contours and regions are as in \figref{fig:cmssm_LHC}. A tiny (dark orange) strip with $\sigma>10~\mathrm{fb}$ is in tension with searches for CHAMPs at the Tevatron~\cite{Aaltonen:2009kea}.} \label{fig:cmssm_tevatron} } In \figsref{fig:cmssm_LHC}{fig:cmssm_tevatron} we show in a \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace plane of the CMSSM the direct stau production cross section at the LHC and at the Tevatron, as well as mass contours and excluded/disfavored parameter regions. For $A_0=2\ensuremath{m_{0}}\xspace$, $\ensuremath{\tan\beta}\xspace=55$ and $\mu > 0$, we consider the region in which the \ensuremath{{\tilde{\tau}}_{1}}\xspace is the LOSP. Again, we compute the low-energy SUSY spectrum with \SPheno, while the Higgs sector is reevaluated with \FeynHiggs. Flavor constraints are evaluated using \SuperISO. The cross section prediction includes the Drell--Yan\ channels with NLO $K$-factors, the $\ensuremath{b\bar{b}}\xspace$~annihilation and $gg$~fusion contributions, and the cuts~(\ref{eq:cuts}) adequate for a long-lived stau are applied. The white area in the lower left is excluded by a tachyonic spectrum, impossible electroweak symmetry breaking (EWSB), or a stau mass $\ensuremath{m_{\stau}}\xspace \le 82~{\rm Ge\kern -1pt V}$ below the conservative LEP limit~(\ref{eq:mstauLEPlimit}). In the upper left white area, the LOSP is the lightest neutralino $\ensuremath{\tilde{\chi}^{0}}_{1}$ (as indicated). In the gray area around $\ensuremath{m_{0}}\xspace\sim\ensuremath{m_{1/2}}\xspace\sim500~{\rm Ge\kern -1pt V}$, $\text{BR}(B_s^0\rightarrow\mu^+\mu^-)$ exceeds the upper limit given in~(\ref{eq:Bmumu}). The hatched area is disfavored by the CCB constraint~(\ref{eq:CCB}). Contours for a constant Higgs mass of $\ensuremath{m_{h^{0}}}\xspace=113~{\rm Ge\kern -1pt V}$ and $114~{\rm Ge\kern -1pt V}$ are shown as thin solid black lines. 
The dashed black lines are $\ensuremath{m_{\stau}}\xspace$ contours and the solid white lines $\ensuremath{m_{H^{0}}}\xspace$ contours, where the associated mass values are indicated on the respective contours in units of ${\rm Ge\kern -1pt V}$. The cross section depends mainly on \ensuremath{m_{\stau}}\xspace and \ensuremath{m_{H^{0}}}\xspace and varies over several orders of magnitude in the given parameter ranges. At the LHC with $\sqrt{S} =14~{\rm Te\kern -1pt V}$, it reaches $10^3~\mathrm{fb}$ in the region with $\ensuremath{m_{0}}\xspace\lesssim 700~{\rm Ge\kern -1pt V}$ and $\ensuremath{m_{1/2}}\xspace\lesssim 500~{\rm Ge\kern -1pt V}$ and drops to $\lesssim 2\times10^{-2}~\mathrm{fb}$ for, \eg, $\ensuremath{m_{0}}\xspace\sim\ensuremath{m_{1/2}}\xspace\sim2~{\rm Te\kern -1pt V}$, as can be seen in the lower panel of \figref{fig:cmssm_LHC}. When going down from $\sqrt{S}=14~{\rm Te\kern -1pt V}$ to $7~{\rm Te\kern -1pt V}$, considered in the upper panel of \figref{fig:cmssm_LHC}, we observe a decrease of the cross section by up to about a factor of 5 (see also figures~\ref{fig:inclusive_compare} and~\ref{fig:cuts_compare}). To give a naive estimate of the discovery potential, we also show contours (thick lines) on which one $\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace$-pair-production event is expected for integrated luminosities of $\mathcal{L}=1~\mathrm{fb}^{-1}$ and $5~\mathrm{fb}^{-1}$ at $\sqrt{S}=7~{\rm Te\kern -1pt V}$ and $\mathcal{L}=1~\mathrm{fb}^{-1}$ and $10~\mathrm{fb}^{-1}$ at $\sqrt{S}=14~{\rm Te\kern -1pt V}$. The parameter regions to the left of these lines could thus be accessible by the experiments at the LHC already in the very near future, in particular, since the SM backgrounds are expected to be well under control with the kinematical cuts discussed above. However, a more realistic determination of the discovery reach and/or the exclusion limits should be performed in the context of a detailed study including detector effects. Such a study has been performed recently in the case of direct Drell--Yan\ production \cite{Heisig:2011dr}. At the Tevatron, the overall cross section for the direct production of staus is much smaller than at the LHC. As can be seen from \figref{fig:cmssm_tevatron}, it ranges typically from some tenths to a few femtobarns. (Here we should note that the shading/color scales in \figref{fig:cmssm_tevatron} differ from the ones in \figref{fig:cmssm_LHC}). Larger values in the considered CMSSM plane are only found very close to the region excluded by the LEP mass limit~(\ref{eq:mstauLEPlimit}) in the lower left corner of \figref{fig:cmssm_tevatron}. Here, in a tiny strip of the parameter plane, the cross section exceeds $10~\mathrm{fb}$ (as indicated by the dark orange color coding) and thus challenges the current cross section limit~(\ref{eq_tevatronlimit}) from the CDF experiment~\cite{Aaltonen:2009kea}. The actual shape of the cross section and mass contours in the \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace planes can easily be understood. We focus here on parameter regions with a \ensuremath{{\tilde{\tau}}_{1}}\xspace LOSP, where typically $\ensuremath{m_{1/2}}\xspace>\ensuremath{m_{0}}\xspace$ in the CMSSM. For $\ensuremath{\tan\beta}\xspace>20$, the $A^0$ mass can approximately be written as $m_{A}^2\approx m_0^2+2.5 m_{1/2}^2$ (neglecting Yukawa interactions) \cite{Drees:1995hj} and thus $m_{A}$ is mainly determined by $\ensuremath{m_{1/2}}\xspace$. 
For $\ensuremath{m_{1/2}}\xspace\gg 0$, the Higgs sector is then typically in the decoupling limit, where $\ensuremath{m_{H^{0}}}\xspace \approx m_{A}$, and thus \ensuremath{m_{H^{0}}}\xspace is also mainly determined by $\ensuremath{m_{1/2}}\xspace$. The dependence of $\ensuremath{m_{\stau}}\xspace$ on $\ensuremath{m_{0}}\xspace$ and $\ensuremath{m_{1/2}}\xspace$ is less intuitive in the discussed parameter range (\ensuremath{{\tilde{\tau}}_{1}}\xspace LOSP, large \ensuremath{\tan\beta}\xspace, sizeable Yukawa couplings), but the usual relation $\ensuremath{m_{\stau}}\xspace^2\propto\ensuremath{m_{0}}\xspace^2+0.15 \ensuremath{m_{1/2}}\xspace^2$ \cite{Drees:1995hj} can still be considered. Thus, for $\ensuremath{m_{1/2}}\xspace\gg\ensuremath{m_{0}}\xspace$ one always finds $\ensuremath{m_{H^{0}}}\xspace>2\ensuremath{m_{\stau}}\xspace$. For the stau production cross section, this means that there can be important contributions from the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels in the region $\ensuremath{m_{1/2}}\xspace\gg\ensuremath{m_{0}}\xspace$, where the \ensuremath{H^{0}}\xspace~boson can go on-shell. Towards large \ensuremath{m_{0}}\xspace and \ensuremath{m_{1/2}}\xspace, however, the \ensuremath{{\tilde{\tau}}_{1}}\xspace and (even faster) the \ensuremath{H^{0}}\xspace become heavy and contributions from the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels become less important compared to the Drell--Yan\ channel. If the Drell--Yan\ contributions dominate, the overall production cross section is basically a function of the \ensuremath{{\tilde{\tau}}_{1}}\xspace mass and decreases strongly for higher \ensuremath{m_{\stau}}\xspace. Close to the boundary of the $\ensuremath{\tilde{\chi}^{0}}_{1}$ LOSP region, where $\ensuremath{m_{0}}\xspace\approx\ensuremath{m_{1/2}}\xspace$, the \ensuremath{{\tilde{\tau}}_{1}}\xspace gets heavier relative to the \ensuremath{H^{0}}\xspace~boson so that $2\ensuremath{m_{\stau}}\xspace>\ensuremath{m_{H^{0}}}\xspace$, which means that the direct stau production via an on-shell \ensuremath{H^{0}}\xspace~boson is no longer possible. However, the position of the transition at $2\ensuremath{m_{\stau}}\xspace=\ensuremath{m_{H^{0}}}\xspace$ depends strongly on \ensuremath{\tan\beta}\xspace. For large \ensuremath{\tan\beta}\xspace, bottom and tau Yukawa couplings can be sizeable and drive down the masses of the heavy Higgses. For small \ensuremath{\tan\beta}\xspace, this transition lies mostly within the $\ensuremath{\tilde{\chi}^{0}}_1$ LOSP region and only for very large values of \ensuremath{m_{1/2}}\xspace within the \ensuremath{{\tilde{\tau}}_{1}}\xspace LOSP region. Thus, for smaller values of \ensuremath{\tan\beta}\xspace, contributions from on-shell \ensuremath{H^{0}}\xspace boson exchange are a generic feature of the CMSSM, as one usually finds $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$. Let us emphasize that even if the $\ensuremath{b\bar{b}}\xspace$ and $gg$ contributions are not necessarily large in parameter regions with $\ensuremath{m_{H^{0}}}\xspace>2\ensuremath{m_{\stau}}\xspace$, \ie, when the \ensuremath{H^{0}}\xspace~boson is heavy, this configuration still opens the channels for stau production via on-shell \ensuremath{H^{0}}\xspace~exchange. As discussed in section~\ref{sec:parameter_determination}, this could allow us to determine the \ensuremath{H^{0}}\xspace~boson width and mass by investigating the $\ensuremath{{{\tilde\tau}}_{1}^{\phantom{*}}\staubar}\xspace$ invariant mass distribution.
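To make the resulting mass hierarchy explicit, one can combine the two approximate relations quoted above. Treating the proportionality factor in the $\ensuremath{m_{\stau}}\xspace$ relation as being of order unity (a rough estimate that ignores the Yukawa and mixing effects discussed here), one obtains
\begin{align*}
(2\ensuremath{m_{\stau}}\xspace)^2 \sim 4\,\ensuremath{m_{0}}\xspace^2 + 0.6\,\ensuremath{m_{1/2}}\xspace^2 ,
\qquad
\ensuremath{m_{H^{0}}}\xspace^2 \approx m_{A}^2 \approx \ensuremath{m_{0}}\xspace^2 + 2.5\,\ensuremath{m_{1/2}}\xspace^2 ,
\end{align*}
so that $\ensuremath{m_{H^{0}}}\xspace>2\ensuremath{m_{\stau}}\xspace$ for $\ensuremath{m_{1/2}}\xspace\gg\ensuremath{m_{0}}\xspace$, while for $\ensuremath{m_{0}}\xspace\approx\ensuremath{m_{1/2}}\xspace$ the same naive estimate already gives $2\ensuremath{m_{\stau}}\xspace>\ensuremath{m_{H^{0}}}\xspace$, in line with the behavior described above.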
\subsection{Direct stau production vs.\ staus from cascade decays} \label{sec:cascade_decays} So far we have focused on the analysis of direct stau production. But in a \ensuremath{{\tilde{\tau}}_{1}}\xspace LOSP scenario, staus will also be generated in any SUSY particle production process where heavier sparticles are produced that cascade down to lighter ones and eventually decay into the LOSP, with SM particles emitted along the SUSY decay chain. At hadron colliders, the largest contribution to the overall SUSY cross section is usually expected to originate from the production and subsequent decay of color-charged SUSY particles, \ie, squarks and gluinos. However, also direct production of neutralinos and charginos (\eg, $\ensuremath{\tilde{\chi}^{0}}_i\tilde{\chi}^{\pm}_j$) and associated production of a neutralino or chargino with a gluino or squark can give sizeable contributions in large parts of the allowed SUSY parameter space. In \tabref{tab:compare_cascade} \newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}} \newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}} \TABULAR[t]{L{2cm}cC{1.8cm}C{1.8cm}C{1.9cm}C{2cm}}{ \hline\hline \multicolumn{2}{c}{Benchmark point} & $\alpha$ & $\beta$ & $\gamma$ & $\epsilon$ \\ \hline\hline \multicolumn{2}{l}{LHC 7 TeV} &&&& \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ &[$\mathrm{fb}$] & $3.2 \, (2.3)$ & $12.5 \, (7.3)$ & $9.0 \, (5.6)$ & $7.95 \, (5.00)$ \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{\ensuremath{b\bar{b}}\xspace}}$ &[$\mathrm{fb}$] & $9.8 \, (5.1)$ & $0.03 \, (0.02)$& $19.2 \, (16.5)$ & $0.07 \, (0.06)$ \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{gg}}$ &[$\mathrm{fb}$] & $0.1 \, (0.1)$ & $3.3 \, (2.4)$ & $0.32 \, (0.25)$ & $0.01 \, (0.01)$ \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ &[$\mathrm{fb}$] & $13.1 \, (7.5)$ & $15.8 \, (9.7)$ & $28.5 \, (22.4)$ & $8.03 \, (5.07)$ \\ $\sigma(\tilde{g}\tilde{g})$ &[$\mathrm{fb}$] & $0.05$ & $10^{-6}$ & $0.06$ & $2.57$ \\ $\sigma(\tilde{g}\tilde{q})$ &[$\mathrm{fb}$] & $0.63$ & $4\times 10^{-4}$ & $0.99$ & $37.36$ \\ $\sigma (\tilde{q}\tilde{q})$ &[$\mathrm{fb}$] & $1.18$ & $0.006$ & $2.41$ & $77.25$ \\ $\sigma(\tilde{\chi}\tilde{q})$+$\sigma (\tilde{\chi}\tilde{g})$&[$\mathrm{fb}$] & $0.481$ & $0.007$ & $0.72$ & $12.77$ \\ $\sigma(\tilde{\chi}\tilde{\chi})$ &[$\mathrm{fb}$] & $20.4$ & $0.29$ & $19.8$ & $91.78$ \\ \hline \multicolumn{2}{l}{LHC 14 TeV} &&&& \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ &[$\mathrm{fb}$] & $11.2 \, (5.64)$ & $37.5 \, (15.9)$ & $28.0 \, (12.4)$ & $24.7 \, (11.2)$ \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{\ensuremath{b\bar{b}}\xspace}}$ &[$\mathrm{fb}$] & $58.4 \, (27.0)$ & $0.7 \, (0.2)$ & $113.3 \, (87.1)$& $0.5 \, (0.4)$ \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{gg}}$ &[$\mathrm{fb}$] & $0.7 \, (0.4)$ & $17.4 \, (11.1)$ & $1.8 \, (1.3)$ & $0.07 \, (0.05)$ \\ $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ &[$\mathrm{fb}$] & $70.3 \, (33.1)$ & $55.6 \, (27.2)$ & $143.1 \, (100.8)$ & $25.3 \, (11.6)$ \\ $\sigma (\tilde{g}\tilde{g})$ &[$\mathrm{fb}$] & $20.2$ & $0.12$ & $20.8$ & $232.19$ \\ $\sigma (\tilde{g}\tilde{q})$ &[$\mathrm{fb}$] & $104.4$ & $2.46$ & $133.2$ & $1328.4$ \\ $\sigma (\tilde{q}\tilde{q})$ &[$\mathrm{fb}$] & $92.5$ & $6.46$ & $139.0$ & $1301.1$ \\ $\sigma (\tilde{\chi}\tilde{q})$+$\sigma (\tilde{\chi}\tilde{g})$&[$\mathrm{fb}$] & $16.9$ & $1.08$ & $22.4$ &
$175.12$ \\ $\sigma (\tilde{\chi}\tilde{\chi})$ &[$\mathrm{fb}$] & $134.5$ & $6.40$ & $131.1$ & $422.2$ \\ \hline\hline } { Hadronic cross sections for various SUSY pair production processes at the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ and $14~{\rm Te\kern -1pt V}$. For direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\ensuremath{{{\tilde{\tau}}}_1^{*}}\xspace$-pair production, we list our cross section results before and after applying the kinematical cuts~(\ref{eq:cuts}), where the latter are given in parentheses. The other cross sections are inclusive NLO results obtained with \Prospino, where no kinematical cuts have been considered. \label{tab:compare_cascade} } we compare our predictions for the direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$-pair production cross section, including the contributions from the Drell--Yan, $\ensuremath{b\bar{b}}\xspace$~annihilation, and $gg$~fusion channels, with the inclusive cross sections for other SUSY particle production processes, calculated at NLO with \Prospino, for the benchmark points defined in \tabref{tab:benchmarks}. (Note that an average squark mass $m_{\tilde{q}}$ is listed in \tabref{tab:benchmarks}.) We sum over all possible combinations of squark, neutralino, and chargino eigenstates. For simplicity, we consider inclusive cross sections without any kinematical cuts. Only for the direct stau production channels, cross sections after applying the cuts~(\ref{eq:cuts}) are additionally given in parentheses. Considering the LHC, each cross section is listed for $\sqrt{S}=7~{\rm Te\kern -1pt V}$ and $14~{\rm Te\kern -1pt V}$. From a comparison of the inclusive production cross sections, we can see that direct stau production is an important source of staus for the benchmark points $\alpha$, $\beta$, and $\gamma$ at the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$. Only electroweak neutralino/chargino pair production ($\tilde{\chi}\tilde{\chi}$) can give comparable contributions. We even find that direct stau production can constitute the dominant part of the overall SUSY cross section together with $\tilde{\chi}\tilde{\chi}$ production. The other cross sections are suppressed at $\sqrt{S}=7~{\rm Te\kern -1pt V}$ by the heavier masses of squarks and gluinos. The situation changes for $\sqrt{S} = 14~{\rm Te\kern -1pt V}$, where the center-of-mass energy is high enough so that strongly interacting SUSY particles can be produced copiously. However, for point $\beta$, where $\ensuremath{m_{1/2}}\xspace$ is particularly large, the gluino is so heavy that direct stau production is always the dominant source for staus at colliders. Still, we note that the LHC at $\sqrt{S}=7~{\rm Te\kern -1pt V}$ might in some scenarios provide a more suitable environment for the study of direct stau production than the LHC at $\sqrt{S}=14~{\rm Te\kern -1pt V}$, where staus originating from cascade decays would need to be suppressed by additional cuts. It is also interesting to look more closely at the composition of the total direct stau production cross section in these scenarios. Points $\alpha$ and $\gamma$ have a very similar composition of $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ (but a very different stau yield, see also section~\ref{sec:cmssm_exceptional}) and the \ensuremath{b\bar{b}}\xspace annihilation channel is the dominant stau production mechanism for both points. Benchmark point $\beta$ is considered to illustrate the impact of a large $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*h^0$ coupling.
Here, large values of $\mu$ and \ensuremath{\tan\beta}\xspace together with a relatively large mixing result in this large coupling (\cf\ section~\ref{sec:exceptionalYstau} and appendix~\ref{app:stauhiggs_couplings}). However, such very large couplings are in strong conflict with the CCB constraint~(\ref{eq:CCB}). For this point, the \ensuremath{H^{0}}\xspace~boson is very massive and \ensuremath{b\bar{b}}\xspace annihilation and gluon fusion into an intermediate \ensuremath{H^{0}}\xspace~boson are suppressed by the heavy particle's propagator. Stau production is thereby dominated by the Drell--Yan\ channel and gets sizeable contributions from the gluon fusion channel, where especially processes mediated by the \ensuremath{h^{0}}\xspace~boson are important. Finally, benchmark scenario $\epsilon$ differs from the above scenarios by much smaller values of $\ensuremath{m_{1/2}}\xspace$ and $\ensuremath{\tan\beta}\xspace$. It could be considered a `typical' \ensuremath{{\tilde{\tau}}_{1}}\xspace LOSP scenario, without an exceptional stau yield. In this case, the direct stau production cross section is well described by the Drell--Yan\ process, whereas the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are basically negligible. Moreover, the indirect stau production mechanisms are much more efficient than the direct ones. \section{Collider tests of an exceptionally small relic stau abundance} \label{sec:cmssm_exceptional} In the preceding sections we have considered various aspects of stau production at hadron colliders with emphasis on parameter regions that allow for an exceptionally small yield of a long-lived stau~(\ref{eq:exceptionalYstau}). Because of the appealing features described in section~\ref{sec:exceptionalYstau}, we now discuss the testability of such an exceptional yield in collider experiments. While related prospects for collider phenomenology were already addressed in refs.~\cite{Ratz:2008qh,Pradler:2008qc,Endo:2010ya,Endo:2011uw}, we show here for the first time that contributions from $b\bar{b}$ annihilation and gluon fusion to direct stau production can play a particularly important role for this testability at hadron colliders. Assuming a standard cosmological history with a reheating temperature $\ensuremath{T_{\mathrm{R}}}>\ensuremath{m_{\stau}}\xspace/20$, the key requirements for an exceptionally small thermal relic stau yield~(\ref{eq:exceptionalYstau}) are (i)~a relatively small stau mass of $\ensuremath{m_{\stau}}\xspace\lesssim 200~{\rm Ge\kern -1pt V}$, (ii)~the mass pattern $2\ensuremath{m_{\stau}}\xspace\simeq\ensuremath{m_{H^{0}}}\xspace$, which allows for primordial $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ annihilation via the $\ensuremath{{H}^0}\xspace$ resonance~\cite{Pradler:2008qc}, and/or (iii)~enhanced stau-Higgs couplings, which are often associated with a sizeable stau-left-right mixing~\cite{Ratz:2008qh,Pradler:2008qc}, as described in section~\ref{sec:exceptionalYstau} and appendix~\ref{app:stauhiggs_couplings}. Now, our studies of direct stau production in the previous sections demonstrate clearly that the contributions from $b\bar{b}$ annihilation and gluon fusion are sensitive to all three of these requirements. In contrast, the Drell--Yan\ process is sensitive to $\ensuremath{m_{\stau}}\xspace$ and the stau-mixing angle $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ only.
\subsection*{\boldmath Excess of direct stau production cross sections over Drell--Yan\ predictions} Based on our results in sections~\ref{subsec:numerical} and~\ref{sec:cmssm}, we already know that $b\bar{b}$-annihilation and gluon-fusion processes can lead to direct stau production cross sections that exceed the Drell--Yan\ predictions significantly, in particular, for $2\ensuremath{m_{\stau}}\xspace\lesssim\ensuremath{m_{H^{0}}}\xspace$ and/or enhanced stau-Higgs couplings. This motivates us to explore the ratio \begin{align} R=\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}/\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}} \label{eq:R} \end{align} as a potential indicator for a cosmological scenario with an exceptionally small stau yield. Here the total direct stau production cross section, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$, and the Drell--Yan\ prediction, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$, are considered after applying the cuts~(\ref{eq:cuts}) for scenarios with a long-lived stau. Before presenting and discussing our theoretical results for $R$, let us comment on its experimental determination, which will have to rely on measurements of $\ensuremath{m_{\stau}}\xspace$ and $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$. While an accuracy of $<1\%$ is expected for a $\ensuremath{m_{\stau}}\xspace$ determination at the LHC~\cite{Ambrosanio:2000ik,Ellis:2006vu}, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ measurements may be more difficult, as mentioned in section~\ref{sec:phenolonglived}. If indirect stau production is significant, they will require jet/lepton vetoes and/or additional kinematical cuts. With a precisely known $\ensuremath{m_{\stau}}\xspace$, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ can be calculated theoretically with an uncertainty of about a factor of 2 that is related to its dependence on $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$; see figures~\ref{fig:crosssections1}\,(b) and~\ref{fig:crosssections2}\,(a). To obtain a conservative estimate of $R$, one will then evaluate~(\ref{eq:R}) with the maximum $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ at $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace=0$. In fact, as already addressed in section~\ref{sec:parameter_determination}, a measurement of $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ based on direct stau production is conceivable only if the Drell--Yan\ contribution dominates, \ie, for $R\simeq1$. On the other hand, studies of staus produced in cascade decays may help to indirectly probe $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$~\cite{Kitano:2010tt} and thereby to determine $R$ more precisely. Thus, we present in the following results for $R$ that are not conservative estimates but theoretical predictions taking into account $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ with its full $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$ dependence. For an unknown $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$, $R>2$ will then be required as an indication of sizeable contributions of the $b\bar{b}$ and $gg$ channels.
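To illustrate the arithmetic of such a conservative estimate, the following minimal sketch (with purely hypothetical numbers; in practice $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ would be tabulated from the NLO calculation at the measured $\ensuremath{m_{\stau}}\xspace$) divides a measured total cross section by the largest Drell--Yan\ prediction compatible with an unknown $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$.
\begin{verbatim}
# Schematic sketch: conservative estimate of R = sigma_all / sigma_DY for an
# unknown stau mixing angle.  All numbers are placeholders.
import numpy as np

sigma_all_measured = 33.0                      # fb, hypothetical measurement
theta_grid = np.linspace(0.0, np.pi / 2, 10)   # stau mixing angle grid
sigma_dy_grid = np.array([11.2, 10.8, 10.1, 9.2, 8.3,
                          7.6, 7.0, 6.6, 6.3, 6.2])  # fb, placeholder theory

i_max = int(np.argmax(sigma_dy_grid))          # largest DY prediction
r_conservative = sigma_all_measured / sigma_dy_grid[i_max]
print("max sigma_DY at theta = %.2f rad" % theta_grid[i_max])
print("conservative R estimate: %.1f" % r_conservative)
\end{verbatim}
With the placeholder numbers above, the largest Drell--Yan\ value sits at $\ensuremath{\theta_{{\tilde{\tau}}}}\xspace=0$, reproducing the prescription described in the text.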
In figure~\ref{fig:ratio7tevyield} \FIGURE[h!]{ \includegraphics[width=0.97\textwidth]{./plots/scanCMSSMYield7tevlabels.eps}\\% \includegraphics[width=0.97\textwidth]{./plots/scanCMSSMYield14tevlabels.eps} \caption{The ratio $R=\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}/\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ (shaded contours, colored) for the LHC with $\sqrt{S} = 7~{\rm Te\kern -1pt V}$ (top panel) and $14~{\rm Te\kern -1pt V}$ (bottom panel) and $\ensuremath{Y_{\stau}}=4\times10^{-15}$, $10^{-14}$, $4\times10^{-14}$ (white lines) in the \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace CMSSM plane with $A_0=2\ensuremath{m_{0}}\xspace$, $\tan\beta=55$, and $\mu>0$. The CCB constraint (gray hatched region) and excluded/unconsidered regions are as in figure~\ref{fig:cmssm_LHC}. The labeled (red) stars indicate the location of the benchmark points $\alpha$, $\beta$, and $\gamma$, defined in table~\ref{tab:benchmarks}.} \label{fig:ratio7tevyield} } the shaded (colored) contours indicate our theoretical predictions for $R$ at the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ (top panel) and $14~{\rm Te\kern -1pt V}$ (bottom panel) in the \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace CMSSM plane with $A_0=2\ensuremath{m_{0}}\xspace$, $\tan\beta=55$, and $\mu>0$. The white lines show contours of $\ensuremath{Y_{\stau}}=4\times10^{-15}$, $10^{-14}$, $4\times10^{-14}$ as obtained with \micromegas~\cite{Belanger:2010gh}. The labeled (red) stars indicate the location of the benchmark points $\alpha$, $\beta$, and $\gamma$, defined in table~\ref{tab:benchmarks}. Excluded and unconsidered regions are as in figure~\ref{fig:cmssm_LHC}, and also the region disfavored by the CCB constraint~(\ref{eq:CCB}) is indicated but now by the gray hatched region. The $R$ contours show very explicitly that the Drell--Yan\ prediction can underestimate the direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production cross section by up to a factor of 10 (20) for $\sqrt{S}=7~{\rm Te\kern -1pt V}$ ($14~{\rm Te\kern -1pt V}$). This demonstrates again the potential importance of the $b\bar{b}$ and $gg$ channels included in our calculations. In fact, their effect with respect to the Drell--Yan\ predictions is more relevant at $\sqrt{S} = 14~{\rm Te\kern -1pt V}$ than at $7~{\rm Te\kern -1pt V}$. This results from the $b\bar{b}$ and gluon luminosities in the proton that benefit more strongly from the higher $\sqrt{S}$ than those of the lighter quarks. On the other hand, also indirect stau production can be much more efficient for $\sqrt{S}=14~{\rm Te\kern -1pt V}$; cf.\ table~\ref{tab:compare_cascade}. Thereby, it may even be more difficult to identify direct stau production events at $\sqrt{S} = 14~{\rm Te\kern -1pt V}$ despite potentially larger values of $R$. 
In table~\ref{tab:YstauR} \TABULAR[t]{lcC{1.6cm}C{1.6cm}C{1.6cm}C{1.6cm}}{ \hline\hline \multicolumn{2}{c}{Benchmark point} & $\alpha$& $\beta$ &$\gamma$ & $\epsilon$ \\ \hline\hline $R_\mathrm{LHC7}$ & & $3.3$ & $1.3$ & $4.0$ & $1.01$ \\ $R_\mathrm{LHC14}$ & & $5.8$ & $1.7$ & $8.1$ & $1.04$ \\ \hline \ensuremath{Y_{\stau}}\xspace\,\,[$10^{-15}$] & &$3.5$ & $2.5$ & $37.7$ & $164$ \\ \hline\hline }{ The stau yield $\ensuremath{Y_{\stau}}$ and $R=\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}/\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ at the LHC with $\sqrt{S} = 7~{\rm Te\kern -1pt V}$ ($R_\mathrm{LHC7}$) and $14~{\rm Te\kern -1pt V}$ ($R_\mathrm{LHC14}$) for the benchmark scenarios $\alpha$, $\beta$, $\gamma$, and $\epsilon$, defined in table~\ref{tab:benchmarks} and partially indicated in figure~\ref{fig:ratio7tevyield}. The stau yield is obtained from \micromegas and the $R$ values from the respective cross sections after kinematical cuts~(\ref{eq:cuts}) given in parentheses in table~\ref{tab:compare_cascade}. \label{tab:YstauR} } we list the $R$ values at the LHC with $\sqrt{S} = 7~{\rm Te\kern -1pt V}$ ($R_\mathrm{LHC7}$) and $14~{\rm Te\kern -1pt V}$ ($R_\mathrm{LHC14}$) and the stau yield $\ensuremath{Y_{\stau}}$ for the benchmark points $\alpha$, $\beta$, $\gamma$, and $\epsilon$. We see again that $R$ increases when going from $\sqrt{S}=7~{\rm Te\kern -1pt V}$ to $14~{\rm Te\kern -1pt V}$. This effect is most pronounced for the points~$\alpha$ and~$\gamma$ for which $b\bar{b}$ annihilation dominates the direct stau production cross section. A considerable 30\% increase of $R$ is predicted also for point~$\beta$, for which gluon fusion contributes up to about $40\%$ of $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$; \cf~table~\ref{tab:compare_cascade}. To understand the shape of the $R$ contours in figure~\ref{fig:ratio7tevyield}, it is instructive to look at the lines of constant $\ensuremath{m_{\stau}}\xspace$ and $\ensuremath{m_{H^{0}}}\xspace$, which are shown for the same CMSSM parameter choice in figure~\ref{fig:cmssm_LHC}. By interpolating between the intersections of the $\ensuremath{m_{\stau}}\xspace=200~{\rm Ge\kern -1pt V}$ and $\ensuremath{m_{H^{0}}}\xspace=400~{\rm Ge\kern -1pt V}$ and of the $\ensuremath{m_{\stau}}\xspace=400~{\rm Ge\kern -1pt V}$ and $\ensuremath{m_{H^{0}}}\xspace=800~{\rm Ge\kern -1pt V}$ contours, one can infer the location of the line with $2\ensuremath{m_{\stau}}\xspace=\ensuremath{m_{H^{0}}}\xspace$. Only below this line is on-shell \ensuremath{H^{0}}\xspace exchange possible; it can lead to $R\gg 2$ in the vicinity of this line, where a smaller $\ensuremath{m_{H^{0}}}\xspace$ allows for a larger $R$. By going along a contour on which $\ensuremath{m_{\stau}}\xspace$ does not change (such as the $\ensuremath{m_{\stau}}\xspace=200~{\rm Ge\kern -1pt V}$ contour) from the region with smaller $\ensuremath{m_{H^{0}}}\xspace$ into the direction with larger $\ensuremath{m_{H^{0}}}\xspace$, one encounters the qualitative behavior that is illustrated in figure~\ref{fig:crosssections1}\,(c). Let us now turn to the main aspects of the $\ensuremath{Y_{\stau}}$ contours in figure~\ref{fig:ratio7tevyield}; for additional details we refer to~\cite{Pradler:2008qc} in which those contours were studied in the same CMSSM plane. An exceptionally small yield~(\ref{eq:exceptionalYstau}) can be found in the two separate regions enclosed by the $\ensuremath{Y_{\stau}}=4\times 10^{-15}$ contour.
The region, to which point $\alpha$ belongs, allows for primordial $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ annihilation via the $\ensuremath{{H}^0}\xspace$ resonance, and the region, to which point $\beta$ belongs, for efficient $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ annihilation via enhanced stau-Higgs couplings. While the latter region is in conflict with the CCB constraint~(\ref{eq:CCB}), we still include this point in our discussion of the testability of an exceptionally small $\ensuremath{Y_{\stau}}$ at colliders. Comparing regions with $R\gtrsim2$ to those with $\ensuremath{Y_{\stau}}<4\times10^{-15}$, we find that there is no one-to-one link between a sizeable $R$ value and an exceptionally small yield. For example, while the point $\beta$ is associated with such an exceptional yield, we obtain $R<2$, even at the LHC with $\sqrt{S} = 14~{\rm Te\kern -1pt V}$, as can be seen in table~\ref{tab:YstauR}. Moreover, in figure~\ref{fig:ratio7tevyield}, also moderate values of $R\simeq 2$--$3$ occur in the $2\ensuremath{m_{\stau}}\xspace\simeq\ensuremath{m_{H^{0}}}\xspace$ region enclosed by the $\ensuremath{Y_{\stau}}< 4\times 10^{-15}$ contour. On the other hand, point $\gamma$ is associated with $R\simeq 4$--$8$ while the yield at this point exceeds the limit~(\ref{eq:exceptionalYstau}) by more than one order of magnitude. Only for points such as $\alpha$, one finds both an exceptionally large $R\simeq3$--$6$ and an exceptionally small stau yield. Thus, a sizeable $R$ could very well be a first hint for the possibility of efficient stau annihilation in the early Universe, but additional investigations will be crucial to clarify the situation. \subsection*{\boldmath Exceptional $\ensuremath{Y_{\stau}}$ and $R$ in models with non-universal Higgs masses} Before proceeding, we would like to illustrate more clearly that a region with resonant primordial stau annihilation and an exceptionally small $\ensuremath{Y_{\stau}}$ can be associated with different values of $R$. The resonance condition $2\ensuremath{m_{\stau}}\xspace \approx \ensuremath{m_{H^{0}}}\xspace$ is found in $\ensuremath{m_{0}}\xspace$--$\ensuremath{m_{1/2}}\xspace$ CMSSM planes only in a small horizontal `funnel' region and for specific combinations of parameters. In less constrained models with non-universal Higgs masses (NUHM) this mass pattern occurs in a more generic way and already for smaller values of \ensuremath{\tan\beta}\xspace. In figure~\ref{fig:nuhm} \FIGURE[t]{ \includegraphics[width=.7\textwidth]{./plots/NUHM_14.eps} \caption{The masses $\ensuremath{m_{\stau}}\xspace$, $\ensuremath{m_{h^{0}}}\xspace$, and $\ensuremath{m_{H^{0}}}\xspace$ (top panel), the stau yield $\ensuremath{Y_{\stau}}$ (middle panel), the cross sections $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ and $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$, and $R$ at the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ (bottom panel) as a function of the Higgs-mass parameter $m_{H_u}=m_{H_d}=m_0^H$, defined at the high scale, in the NUHM1 model with $\ensuremath{m_{1/2}}\xspace=400~{\rm Ge\kern -1pt V}$, $\ensuremath{m_{0}}\xspace=50~{\rm Ge\kern -1pt V}$, $\ensuremath{\tan\beta}\xspace=30$, $A_0=0$.} \label{fig:nuhm} } we consider a scenario in which the two Higgs mass parameters are equal (and negative) but different from the other high scale scalar mass parameter, $m_{H_u}=m_{H_d}=m_0^H\ne m_0$, which is a framework denoted as the NUHM1 model.
We vary $m_0^H$ and set the other parameters to $\ensuremath{m_{0}}\xspace=50~{\rm Ge\kern -1pt V}$, $\ensuremath{m_{1/2}}\xspace=400~{\rm Ge\kern -1pt V}$, $\tan\beta=30$, $A_0=0$, and $\mu>0$, which are defined as in the CMSSM. In the top panel, \ensuremath{m_{H^{0}}}\xspace is indicated by the solid (red) line, \ensuremath{m_{h^{0}}}\xspace by the dashed (green) line, and \ensuremath{m_{\stau}}\xspace by the dotted (blue) line. The middle panel shows $\ensuremath{Y_{\stau}}$. In the bottom panel, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{DY}}$ is shown by the solid (red) line, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ by the dashed (green) line, and $R$ by the dotted (blue) line. Note that the negative mass parameter $m_0^H$ has to be understood as the root of the modulus of a negative squared soft mass of the Higgses, \ie, $m_0^H=-\sqrt{\vert m_{H_{u/d}}^2\vert}$. For a more negative $m_0^H$, the \ensuremath{{\tilde{\tau}}_{1}}\xspace is no longer the LOSP and eventually (\ie, for an even more negative $m_0^H$) the stability bound $m_{H_{u/d}}^2 + \vert\mu\vert^2 \geq 0$ can be violated at $M_\mathrm{GUT}$. In such cases, there might be a vacuum instability leading to electroweak symmetry breaking already at $M_\mathrm{GUT}$. Considering the top panel, one sees that $\ensuremath{m_{H^{0}}}\xspace$ gets smaller and $\ensuremath{m_{\stau}}\xspace$ larger towards smaller values of $m_0^H$. For $m_0^H\simeq-815~{\rm Ge\kern -1pt V}$, one finds the resonance condition $2\ensuremath{m_{\stau}}\xspace=\ensuremath{m_{H^{0}}}\xspace$. In a narrow region around this point, a significant depletion of the stau yield $\ensuremath{Y_{\stau}}$ by about one order of magnitude down to an exceptional value of $\lesssim 2\times 10^{-15}$ can be seen in the middle panel. To the right of this resonance point, \ie, for $m_0^H>-815~{\rm Ge\kern -1pt V}$, $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$ so that on-shell $\ensuremath{H^{0}}\xspace$-boson exchange can contribute to direct stau production. This leads to a significant contribution of the $b\bar{b}$ and $gg$ channels to $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$, and $R$ increases significantly, up to about 5. However, the maxima of $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ and of $R$ are shifted away from the resonance point at which $\ensuremath{Y_{\stau}}$ approaches its minimum; towards this resonance point, $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ and $R$ decrease significantly and even show a local minimum. This behavior is qualitatively different from the one shown in figure~\ref{fig:crosssections1}\,(c) where the maximum of $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ is very close to the threshold for on-shell Higgs exchange, $2\ensuremath{m_{\stau}}\xspace=\ensuremath{m_{H^{0}}}\xspace$. The difference results from the kinematical cuts~(\ref{eq:cuts}) applied in figure~\ref{fig:nuhm}. As shown in figure~\ref{fig:distributions_pt_beta}, the $b\bar{b}$ and $gg$ channels lead to a significant excess of staus with low $\ensuremath{p^\mathrm{T}}\xspace$ and low $\beta$ compared to the Drell--Yan\ channel in the vicinity of the resonance. Thus, by imposing the $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ cuts~(\ref{eq:cuts}), one loses a significant amount of direct stau production events near the resonance condition $2\ensuremath{m_{\stau}}\xspace=\ensuremath{m_{H^{0}}}\xspace$.
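The kinematics behind this loss can be made explicit with a simple estimate. Neglecting the boost of the \ensuremath{H^{0}}\xspace~boson itself, each stau from an on-shell \ensuremath{H^{0}}\xspace with $\ensuremath{m_{H^{0}}}\xspace=2\ensuremath{m_{\stau}}\xspace(1+\delta)$ emerges with velocity
\begin{align*}
\beta^* = \sqrt{1-\frac{4\ensuremath{m_{\stau}}\xspace^2}{\ensuremath{m_{H^{0}}}\xspace^2}}
        = \sqrt{1-\frac{1}{(1+\delta)^2}}
        \approx \sqrt{2\delta}
\qquad \text{for } \delta\ll1 ,
\end{align*}
so that, \eg, $\delta=0.02$ corresponds to $\beta^*\approx0.2$. Staus this slow are among those most affected by the $\beta$ and $\ensuremath{p^\mathrm{T}}\xspace$ requirements in~(\ref{eq:cuts}), which is why the visible maxima of $\sigma(\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*)_{\text{all}}$ and $R$ are displaced from the resonance condition.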
For the CMSSM scans in figure~\ref{fig:ratio7tevyield}, the above explains also that maximum values of $R$ are shifted to some extent away from the $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ region and deeper into the $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$ region when moving along contours of constant $\ensuremath{m_{\stau}}\xspace$. In fact, the dip at the resonance is present also in those scans but is not visible due to the limited resolution. Based on this finding and as already addressed in section~\ref{sec:kinematicalcuts}, we would thus recommend shifting the $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ cuts to include events with smaller $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$. If this is feasible such that direct stau production events can still be identified confidently, and if the scenario is close to the resonance, one will find $R$ values that increase substantially when lowering the cuts on $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$. Such a behavior can then provide a hint for resonant stau annihilation in the early Universe and thereby for the possibility of an exceptionally small $\ensuremath{Y_{\stau}}$. For further clarification, we propose studies of differential direct stau production cross sections in the following. As an aside, we remark that the mass scale of the heavy Higgs bosons is, by renormalization group running, fixed by \ensuremath{m_{1/2}}\xspace in NUHM1 models. Although we can always realize $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ in NUHM1 models, these parameter points have a light \ensuremath{{\tilde{\tau}}_{1}}\xspace only for small \ensuremath{m_{1/2}}\xspace. Thus, one does not find scenarios with efficient direct stau production and subdominant indirect stau production from cascade decays. However, in the alternative less constrained NUHM2 models, $m_{H_u}$ and $m_{H_d}$ are chosen independently. These two parameters can be traded for the parameters $m_A$ and \ensuremath{\tan\beta}\xspace at the low scale. In the context of direct stau production, this setup can effectively be described by the low scale parameters $\ensuremath{m_{\stau}}\xspace$, $m_A$, and $\ensuremath{\tan\beta}\xspace$ with implications shown in section~\ref{sec:production}. Now, the different stau production mechanisms might `decouple' and there is the possibility that direct production remains the only relevant source for stau pairs at hadron colliders. \subsection*{\boldmath With a little help from differential direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production cross sections} So far our discussion in this section has focussed on the quantity $R$ (and thereby on the integrated direct stau production cross section) and its possible dependence on the $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ cuts. Now, with long-lived staus, it will be a realistic possibility to measure also differential direct $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production cross sections such as the ones discussed in section~\ref{sec:phenolonglived}. For a sizeable value of $R>3$, as encountered for points $\alpha$ or $\gamma$, the situation is most promising since the differential distributions differ substantially from the Drell--Yan\ predictions and provide valuable additional information.
Our results show that such a large $R$ results from contributions of the $b\bar{b}$ channel that become substantial near $2\ensuremath{m_{\stau}}\xspace=\ensuremath{m_{H^{0}}}\xspace$, \ie, the threshold for on-shell $\ensuremath{H^{0}}\xspace$ exchange. Manifestations of an on-shell $\ensuremath{H^{0}}\xspace$ exchange that leads to the large $R$ value will then show up in the invariant mass distribution of the directly produced stau pair in the form of a resonance peak at $m_{\stau\stau^*}=\ensuremath{m_{H^{0}}}\xspace$. Considering those $m_{\stau\stau^*}$ distributions for the points $\alpha$ and $\gamma$ in figures~\ref{fig:distributions_minv}\,(a) and~(c), respectively, this feature is clearly visible. As already explained in section~\ref{sec:parameter_determination}, the associated $\ensuremath{m_{H^{0}}}\xspace$ can then be extracted and compared to $2\ensuremath{m_{\stau}}\xspace$, which also marks the minimum value of $m_{\stau\stau^*}$ in such a distribution. In fact, the $\ensuremath{H^{0}}\xspace$ resonance at point $\gamma$ shows that $\ensuremath{m_{H^{0}}}\xspace$ is still close to $2\ensuremath{m_{\stau}}\xspace$ but too large to allow for highly efficient resonant primordial stau annihilation. This is different for point $\alpha$ where the $\ensuremath{H^{0}}\xspace$ resonance peak sits much closer to the minimum $m_{\stau\stau^*}$ value. This tells us immediately that this is a scenario with $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ and thus with the possibility of efficient resonant primordial stau annihilation. Moreover, the $m_{\stau\stau^*}$ distribution provides us also with the width $\Gamma_{\ensuremath{H^{0}}\xspace}$, which is an important input in calculations of $\ensuremath{Y_{\stau}}$ in that region. For more moderate values of $R\simeq2$--$3$, it may still be possible to clarify the situation. If such an $R$ value results from on-shell $\ensuremath{H^{0}}\xspace$ exchange, again an $\ensuremath{H^{0}}\xspace$ resonance peak will show up in the corresponding $m_{\stau\stau^*}$ distribution so that the situation can be resolved as explained above. In the possible case that no resonance peak shows up in the $m_{\stau\stau^*}$ distribution (even under excellent experimental conditions), the excess over the Drell--Yan\ prediction may be due to a $2\ensuremath{m_{\stau}}\xspace$ value that is slightly above $\ensuremath{m_{H^{0}}}\xspace$ so that on-shell $\ensuremath{H^{0}}\xspace$ exchange is just not possible. This is the situation encountered, \eg, for $m_0^H\simeq-820~{\rm Ge\kern -1pt V}$ in the NUHM1 model considered in figure~\ref{fig:nuhm}. Here we still expect an excess of events over the Drell--Yan\ prediction towards the minimum $m_{\stau\stau^*}$ value in the invariant mass distribution but---without a resonance peak---it will not be possible to determine $\ensuremath{m_{H^{0}}}\xspace$ or $\Gamma_{\ensuremath{H^{0}}\xspace}$ in experimental studies of direct stau production. Nevertheless, those quantities may still be accessible at the LHC, \eg, via studies of associated $b\bar{b}\ensuremath{h^{0}}\xspace/\ensuremath{H^{0}}\xspace$ production with $\ensuremath{h^{0}}\xspace/\ensuremath{H^{0}}\xspace\to\mu^+\mu^-$, which will remain conceivable also for very heavy colored sparticles. In fact, these reactions are considered to be very promising for $\ensuremath{m_{H^{0}}}\xspace$ measurements at the LHC, despite the relatively small $\text{BR}(\ensuremath{h^{0}}\xspace/\ensuremath{H^{0}}\xspace\to\mu^+\mu^-)$~\cite{Ball:2007zza}.
If this is indeed feasible, the described shape of the $m_{\stau\stau^*}$ distribution together with a finding of $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$ will then point to the possibility of an exceptionally small $\ensuremath{Y_{\stau}}$. In fact, also in case of a more sizeable $R$, a second independent determination of $\ensuremath{m_{H^{0}}}\xspace$ in studies of other processes will provide an important consistency check and test whether the observed resonance is indeed associated with the $\ensuremath{H^{0}}\xspace$ boson. For a scenario with resonant primordial stau annihilation and the associated mass pattern $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$, the differential distributions with respect to $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ will also provide valuable information, which is already evident from our discussion on the $\ensuremath{p^\mathrm{T}}\xspace$ and $\beta$ cut dependence of $R$ and from section~\ref{sec:kinematicalcuts}. However, we would like to stress once more that these distributions are very different from the Drell--Yan\ prediction towards low $\ensuremath{p^\mathrm{T}}\xspace$ and low $\beta$ values for both the $2\ensuremath{m_{\stau}}\xspace<\ensuremath{m_{H^{0}}}\xspace$ case and the $2\ensuremath{m_{\stau}}\xspace>\ensuremath{m_{H^{0}}}\xspace$ case. This can be seen explicitly in figure~\ref{fig:distributions_pt_beta} for two different scenarios with $|2\ensuremath{m_{\stau}}\xspace-\ensuremath{m_{H^{0}}}\xspace|=10~{\rm Ge\kern -1pt V}$. The most challenging situation with respect to the testing of the viability of an exceptionally small $\ensuremath{Y_{\stau}}$ is encountered for scenarios such as point $\beta$. Here the exceptional yield results only from enhanced stau-Higgs couplings. Again the $m_{\stau\stau^*}$ distribution shows differences with respect to the Drell--Yan\ prediction as can be seen in figure~\ref{fig:distributions_minv}\,(b). These are manifestations of the $gg$ channel contribution: While the $\ensuremath{H^{0}}\xspace$-mediated processes lead to the $\ensuremath{H^{0}}\xspace$ resonance peak at $m_{\stau\stau^*}\simeq 760~{\rm Ge\kern -1pt V}$, the excess that shows up over a wide $m_{\stau\stau^*}$ range (and in particular towards lower $m_{\stau\stau^*}$) results mainly from the $\ensuremath{h^{0}}\xspace$-mediated processes that benefit from the significantly enhanced $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\ensuremath{h^{0}}\xspace$ coupling. However, from this information alone, it will be very difficult to infer confidently that an exceptional $\ensuremath{Y_{\stau}}$ is possible. Moreover, in our numerical studies, we find that the $gg$ channel contribution usually does not lead to $R>2$ (\cf\ point $\beta$ in table~\ref{tab:YstauR}). Thus, one will have to rely on other investigations that are sensitive to large values of $\tan\beta$, $|\mu|$, and/or $|A_\tau|$. As outlined in section~9 of ref.~\cite{Pradler:2008qc}, some of those investigations provide additional motivation for a future linear collider at which processes such as $e^+e^-\to\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\ensuremath{h^{0}}\xspace/\ensuremath{H^{0}}\xspace$ and $\gamma\gamma\to\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*\ensuremath{h^{0}}\xspace/\ensuremath{H^{0}}\xspace$ can indeed allow for direct experimental determinations of the stau-Higgs couplings. 
Experimental insights into $\tan\beta$, $|\mu|$, and/or $|A_\tau|$ will be relevant also for the scenarios with larger $R$ discussed above since larger stau-Higgs couplings are associated with more efficient resonant primordial stau annihilation and thereby with smaller $\ensuremath{Y_{\stau}}$. Here analyses of the decays $\ensuremath{H^{0}}\xspace\to\tau\bar{\tau}/b\bar{b}$ and associated limits/findings in the $m_A$--$\tan\beta$ plane will be highly interesting~\cite{Schumacher:2011jq,Chatrchyan:2011nx}. Equally exciting will be the outcome of the ongoing searches for the decay $B_s\to\mu^+\mu^-$, \eg, at the LHCb experiment~\cite{Aaij:2011rj}. As can be seen in table~\ref{tab:benchmarks}, with a sensitivity to a $\text{BR}(B_s \to \mu^+\mu^-)$ as small as $10^{-8}$, one will be able to test points such as $\alpha$ and $\beta$ which allow for an exceptionally small stau yield. \section{Conclusions} \label{sec:conclusion} We have studied the direct hadronic production of a pair of staus $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ within the MSSM. In addition to the well-known Drell--Yan\ process, we have considered $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ production processes initiated by $\ensuremath{b\bar{b}}\xspace$~annihilation and gluon fusion, with all third-generation mixing effects taken into account. This allows us to provide reliable predictions of hadronic slepton production at ${\cal O}(\alpha_s^2 \alpha^2)$. These predictions are independent of the stau lifetime and applicable in $\ensuremath{\tilde{\chi}^{0}}_1$ LSP scenarios with the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ NLSP as well as in settings in which the $\ensuremath{{\tilde{\tau}}_{1}}\xspace$ is long-lived. In considerable parts of the MSSM parameter space, we find that the additional $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels lead to a substantial enhancement of the direct stau production cross section over the NLO-QCD Drell--Yan\ prediction. This enhancement can be even larger than one order of magnitude. Particularly significant corrections are found when direct stau production can proceed via the exchange of an on-shell heavy CP-even Higgs boson \ensuremath{H^{0}}\xspace and when the left-right-stau mixing is sizeable. Moreover, the contributions of the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are enhanced also in the case of large stau-Higgs couplings which are associated with large values of $\ensuremath{\tan\beta}\xspace$, $|\mu|$, and/or $|A_{\tau}|$ and thereby again with a sizeable left-right-stau mixing. In cosmologically motivated scenarios with gravitino or axino dark matter, the stau can be the lightest SUSY particle within the MSSM. In an R-parity conserving setting, the stau will then typically be long-lived since it can only decay into the extremely weakly interacting gravitino or axino. Such long-lived staus can lead to the striking collider signature of a charged massive particle, \ie, a slowly moving charged object with large transverse momentum. SM backgrounds to this signature originate only from slow moving muons and kinematical cuts on the velocity $\beta$ and $p^T$ are required to separate these backgrounds. For such scenarios, we have investigated differential distributions of the directly produced staus and the associated integrated cross sections after application of the kinematical cuts. Our findings show that staus from the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels are often softer and slower than those produced in the Drell--Yan\ channel. 
We thus recommend that experiments should try to soften their cuts to improve sensitivity to these additional channels. Here, a detailed study including detector effects should be performed to investigate the possible discovery reach and exclusion limits. Once long-lived staus are observed at the LHC, one will be able to measure the stau mass $\ensuremath{m_{\stau}}\xspace$ accurately, \eg, in TOF measurements. A measurement of the direct stau production cross section---if governed by Drell--Yan\ processes---will then probe the stau mixing angle~$\ensuremath{\theta_{{\tilde{\tau}}}}\xspace$. However, we have shown that there is also the possibility of an early observation of a significant excess of the direct stau production cross section over the Drell--Yan\ prediction. Indeed, such a finding can be a first hint for Higgs physics at the LHC. Moreover, our results demonstrate that measurements of the distribution of direct stau production events as a function of the invariant stau-antistau mass $m_{\stau\stau^*}$ can give $\ensuremath{m_{\stau}}\xspace$, independently, and, more importantly, the mass of the heavy CP-even Higgs boson $\ensuremath{m_{H^{0}}}\xspace$. Although challenging, with precise measurements of the invariant mass $m_{\stau\stau^*}$, we find that these distributions may provide us also with the Higgs width $\Gamma_{\Hhiggs}$. In fact, for large event samples, both the $\ensuremath{m_{H^{0}}}\xspace$ and the $\Gamma_{\Hhiggs}$ determination might even be possible in parameter regions in which direct stau production is governed by the Drell--Yan\ channels. We have also presented results that are encouraging for the stopping of long-lived staus in the collider detectors~\cite{Martyn:2006as,Asai:2009ka,Pinfold:2010aq,Freitas:2011fx} or in additional surrounding material~\cite{Goity:1993ih,Hamaguchi:2004df,Feng:2004yi,Hamaguchi:2006vu}. With a large number of stopped staus, such experiments could allow for analyses of their late decays. Those analyses may give unique insights into the nature of the LSP into which the stau decays and into the vertex that governs this decay. In this way, it could be possible to use collider experiments to probe physics at scales as high as the Peccei--Quinn scale~\cite{Brandenburg:2005he,Freitas:2011fx} or the Planck scale~\cite{Buchmuller:2004rq}. A crucial criterion for the stopping of large numbers of staus is that a large fraction of them is produced with relatively slow initial velocities. This is exactly what we find if the staus are directly produced via the $b\bar{b}$ and $gg$ channels in the appealing scenarios in which the thermal relic stau abundance can be exceptionally small. Here the number of stopped directly produced staus may exceed expectations based on the Drell--Yan\ channels by more than one order of magnitude. Within the CMSSM, we have provided cross section predictions for direct stau production at the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ and $14~{\rm Te\kern -1pt V}$ and at the Tevatron. Here our focus has been on a particular \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace plane in which the long-lived staus can have an exceptionally small thermal relic stau abundance. For the considered scenarios with $\ensuremath{\tan\beta}\xspace=55$, we predict substantial contributions from the $\ensuremath{b\bar{b}}\xspace$ and $gg$ channels in large areas of the \ensuremath{m_{0}}\xspace-\ensuremath{m_{1/2}}\xspace plane.
By comparing our results for the Tevatron with the associated existing upper limit of $10~\mathrm{fb}$, a small strip along the conservative lower stau mass limit from LEP of $82~{\rm Ge\kern -1pt V}$ is found to be disfavored in that particular plane. On the other hand, our cross section predictions show that it will be difficult to discover directly produced staus with $\ensuremath{m_{\stau}}\xspace\gtrsim 100~{\rm Ge\kern -1pt V}$ at the Tevatron. This is different for the LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ where tests of direct stau production will be possible in the very near future. In particular, the CMSSM parameter region in which an exceptionally small stau yield is possible because of resonant primordial stau annihilation will be tested very soon. To address the relative importance of direct stau production with respect to indirect stau production in cascade decays, we have considered four CMSSM benchmark points. Our calculations show that direct stau production can be one of the dominant contributions especially in the cosmologically motivated scenarios with an exceptionally small stau yield. Moreover, we find that the early LHC with $\sqrt{S}=7~{\rm Te\kern -1pt V}$ may provide a better environment for the study of direct stau production than the LHC with $\sqrt{S}=14~{\rm Te\kern -1pt V}$ at which indirect stau production is often expected to become dominant. Finally, we have explored the testability of the conditions that allow for an exceptionally small stau yield at the LHC. Within the CMSSM and for a NUHM1 scenario, we have studied whether an excess of the direct stau production cross section over the Drell--Yan\ prediction can be used as an indicator for the possibility of an exceptional yield. Although no one-to-one link is found, a large excess over the Drell--Yan\ prediction can very well be a first hint of efficient stau annihilation in the early Universe. Additional investigations---especially in the Higgs sector---will still be crucial to clarify the situation. Important additional insights can be provided by studying the differential distributions of the directly produced $\ensuremath{{\tilde{\tau}}_{1}}\xspace\stau^*$ pairs. In particular, the differential distribution as a function of the invariant mass $m_{\stau\stau^*}$ may clarify the situation in a striking way: If the mentioned large excess is observed and if one is in the region that allows for efficient resonant primordial stau annihilation, in which $2\ensuremath{m_{\stau}}\xspace\approx\ensuremath{m_{H^{0}}}\xspace$, this $m_{\stau\stau^*}$ distribution will show a pronounced \ensuremath{H^{0}}\xspace resonance peak right at the beginning of the spectrum. In summary, direct stau production including \ensuremath{b\bar{b}}\xspace and $gg$ channels can be probed in the very near future or even with data already available. Once discovered, this process might soon shed light on SUSY parameters and important cosmological questions. \section*{Acknowledgments} We are grateful to T.~Hahn, W.~Hollik, J.~Germer, P.~Graf, and A.~Landwehr for valuable discussions. This work was supported in part by the Cluster of Excellence ``Origin and Structure of the Universe'' and by the US DOE under contract No.~DE-FG02-95ER40896.
\section{Introduction} With the progress of computers, microscopic simulation methods have become popular in the heavy-ion reaction studies. The benefit of microscopic simulation method is that one can investigate nuclear reactions without making any specific assumption on the reaction mechanism. There are many kinds of microscopic simulations such as the time-dependent Hartree Fock (TDHF) \cite{refTDHF} which is a mean-field theory, Vlasov equation which is the semiclassical approximation of TDHF, Vlasov-Uehling-Uhlenbeck (VUU) equation or Boltzmann-Uehling-Uhlenbeck (BUU) equation \cite{refVUU} in the other name, which consist of Vlasov and the two-body collision term, the Cascade model \cite{refCascade} which includes only two-body collision term, and so on. Especially, VUU/BUU equation, which includes both of mean-field and two-body collision term, has become a standard framework for the heavy-ion reaction study from the low or intermediate energy to the high energy region. However, VUU/BUU equation, which is basically one-body theory, has a difficulty in dealing with the phenomena of fluctuation such as the fragment formation. Molecular dynamics approaches like quantum molecular dynamics (QMD) \cite{refQMD,refQMDrep} have been developed in order to calculate the fragmentation process. In QMD, we assume a single-particle distribution function of a nucleon as a Gaussian wave packet, and calculate the time evolution of the system according to the classical Newtonian equation of motion plus the two-body collision term. It contains both aspects of the mean-field and the two-body collisions, and is applicable to the wide energy-region. Furthermore, QMD can deal with the fragment formation since it is based on a many-body framework which traces the motion of each nucleon. Due to its applicability and its ability to describe fragment formation, QMD is widely used recently for heavy-ion reactions from intermediate to high energies. For low-energy reactions such as fusion, fission and deep inelastic collision process, however, microscopic simulations by using the molecular dynamics have not been studied except for a few works. One example is the analysis of the fusion reaction using QMD \cite{refQMDfusion}. It was reported, however, that several extra nucleons are emitted during the collision process in the calculation. This is due to the insufficient stability of initial ground state nuclei. We have to settle this problem to study low-energy collisions of heavy-ions using the molecular dynamics. For the simulation of low-energy nuclear reactions, frameworks with anti-symmetrization of the total wave function like FMD \cite{refFMD} and AMD \cite{refAMD} are suitable for describing ground state properties and reaction processes. However, they are not applicable for very heavy systems since they need much CPU time which is approximately proportional to the fourth power of the particle number. For calculations of heavy systems, application of the QMD framework (without anti-symmetrization) is still necessary. In this paper we propose an extended version of QMD method in view of the simulation of low-energy phenomena. Its applicability to treat ground state properties and nuclear reaction is investigated. Several improvements over standard QMD are found in this study. In Sec.~2 we describe the formalism of the extended QMD method, and the ground state properties by our model is discussed in Sec.~3. Its application to the nuclear reaction is reported in Sec.~4. 
Finally, summary and the discussion is given in section 5. \section{Extension of QMD} The insufficient stability of the standard QMD ground state is mainly due to the fact that they are not at their energy-minimum states. In the standard QMD, the kinetic energy term arising from the momentum variance of wave-packets is spurious and we do not take into account this term. Thus the constituent nucleons have finite momenta and are moving around in the ground state with appropriate binding energies. If we take the energy-minimum states, all the nucleons stop their motion and get into the over-bound states where the Pauli principle is broken. To solve this difficulty we make an extension of QMD in two points so as to take energy-minimum state as an initial ground nucleus: First, we include the so-called Pauli potential into effective interactions \cite{refOhnishi} in order to approximate the nature of Fermion many-body system. Second, we take into account the kinetic-energy term of the momentum variance of wave-packets to the Hamiltonian. In accordance with this we treat the width of each wave packet as a dynamical variable \cite{refValta}. We call here this extension of QMD as ``EQMD''. \subsection{Equation of motion of the system} The equation of motion is obtained by the time-dependent variational principle. We assume the total wave function of the system as a direct product of Gaussian wave packets of nucleons \begin{eqnarray} \Psi &=& \prod_i \phi_i({\bf r}_i)\ , \\ \phi_i({\bf r}_i)&=&\left({\nu_i+\nu_i^*\over 2\pi}\right)^{3/4} \exp\left[-{\nu_i\over2}({\bf r}_i-{\bf R}_i)^2 +{i\over \hbar}{\bf P}_i\cdot{\bf r}_i\right]\ . \end{eqnarray} Here ${\bf R}_i$ and ${\bf P}_i$ are the centers of position and momentum of the $i$-th wave packet. We introduce the complex Gaussian width $\nu_i$ as \begin{equation} \nu_i\equiv{1\over\lambda_i}+i\delta_i , \end{equation} where $\lambda_i$ and $\delta_i$ are its real and imaginary parts, respectively. The Hamiltonian is written as \begin{eqnarray} H &=&\langle\Psi|\sum_i-{\hbar^2\over 2m}\nabla_i^2 -\hat T_{\rm CM} +\hat H_{\rm int} |\Psi\rangle \\ &=&\sum_i\left[{{\bf P}_i^2\over 2m} +{3\hbar^2(1+\lambda_i^2\delta_i^2)\over 4m\lambda_i}\right] -T_{\rm CM} +H_{\rm int} \ , \end{eqnarray} where $T_{\rm CM}$ and $H_{\rm int}$ denote the spurious zero-point center-of-mass kinetic energy and the potential energy term, respectively. Here, one should note that the Hamiltonian includes terms originating from momentum variances of wave packets which were neglected as spurious constant terms in the standard QMD. Subtraction of the spurious zero-point CM kinetic energy is necessary since we include the kinetic energy from the momentum variance of wave-packets. The equations of motion of these variables are determined by the time-dependent variational principle \begin{eqnarray} &&\delta \int_{t_1}^{t_2} {\cal L} {\rm d} t=0\ , \\ &&{\cal L} (\{{\bf R}_i,{\bf P}_i,\lambda_i,\delta_i, \dot{\bf R}_i,\dot{\bf P}_i,\dot \lambda_i,\dot \delta_i\}) \equiv \langle\Psi|i\hbar{d \over dt}-\hat H|\Psi\rangle . \end{eqnarray} Then we get 8$A$ dimensional classical equations of motion, i.e. equations of motion of 4$A$ parameters ($A$ means the number of constituent particles) \begin{eqnarray} &&\dot{{\bf R}_i}={\partial H\over\partial{\bf P}_i}\ , \ \ \ \dot{{\bf P}_i}=-{\partial H\over\partial{\bf R}_i}\ , \nonumber\\\\ &&{3\hbar\over4}\dot\lambda_i=-{\partial H\over\partial\delta_i}\ , \ \ \ {3\hbar\over4}\dot\delta_i={\partial H\over\partial\lambda_i}\ . 
\nonumber \end{eqnarray} \subsection{Subtraction of zero-point CM kinetic energy} We subtract the zero-point center-of-mass kinetic energy of the system $T_{\rm CM}$ following the basic idea of Ref.~\cite{refAMD}. In our case, however, all the wave-packets have different contributions to zero-point kinetic energy. Therefore we take $T_{\rm CM}$ as \begin{equation} T_{\rm\scriptscriptstyle CM} =\sum_i {t^{\rm\scriptscriptstyle CM}_i\over M_i}\ , \end{equation} where $t^{\rm\scriptscriptstyle CM}_i$ is the zero-point kinetic energy of the wave-packet $i$ written as \begin{equation} t^{\rm\scriptscriptstyle CM}_i = {\langle\phi_i|\nabla^2|\phi_i\rangle\over2m} -{\langle\phi_i|\nabla|\phi_i\rangle^2\over2m}\ . \end{equation} and $M_i$ is the ``mass number'' of the fragment to which the wave-packet $i$ belongs. The ``mass number'' is calculated as the sum of the ``friendships'' of other nucleons \begin{eqnarray} M_i&=& \sum_j F_{ij} \\ F_{ij}&\equiv& \cases{ \hbox to 4cm{1} (|{\bf R}_i-{\bf R}_j|<a)\cr \hbox to 4cm{$e^{-(|{\bf R}_i-{\bf R}_j|-a)^2/b}$} (|{\bf R}_i-{\bf R}_j|\geq a) }\ , \end{eqnarray} where we use the parameters $a=1.7\ \rm fm$ and $b=4\ {\rm fm}^2$. \subsection{Effective interaction} For the effective interaction, we use Skyrme, Coulomb, Symmetry and the Pauli potential \begin{equation} H_{\rm int}=H_{\rm Skyrme}+H_{\rm Coulomb} +H_{\rm Symmetry}+H_{\rm Pauli}\ . \end{equation} For the form of Skyrme interaction, we use the most simple one \begin{eqnarray} H_{\rm Skyrme} &=&{\alpha\over2\rho_0} \int\rho^2({\bf r}){\rm d}^3r +{\beta\over(\gamma+1)\rho_0^{\gamma}} \int\rho^{\gamma+1}({\bf r}) {\rm d}^3r \\ \rho({\bf r})&=&\sum_i^A\rho_i({\bf r}) \ , \\ \rho_i({\bf r})&=&{1\over(\pi\lambda_i)^{3/2}} \exp[-({\bf r}-{\bf R}_i)^2/\lambda_i] \ . \end{eqnarray} In the treatment of real system, however, we exclude the self interaction \begin{eqnarray} H_{\rm Skyrme} =&&{\alpha\over2\rho_0} \sum_{i,j\neq i} \int \delta({\bf r}_i-{\bf r}_j) \rho_i({\bf r}_i)\rho_j({\bf r}_j){\rm d}^3r_i{\rm d}^3r_j \nonumber\\ &&+ {\beta\over(\gamma+1)\rho_0^{\gamma}} \sum_{i,j\neq i} \int \delta({\bf r}_i-{\bf r}_j)\rho({\bf r}_i)^{\gamma-1} \rho_i({\bf r}_i)\rho_j({\bf r}_j){\rm d}^3r_i{\rm d}^3r_j\ , \\ \equiv&& H_2+H_{\gamma+1} \ . \end{eqnarray} For the numerical calculation of the density-dependent term $H_{\gamma+1}$, we perform a three-fold loop computation since we use the density-dependent term with $\gamma=2$ which is identical to the three-body interaction. Although the approximate treatment of density-dependent term with the two-fold loop computation much reduces the CPU time, it causes much ambiguity. We also employ the symmetry potential as \begin{equation} H_{\rm Symmetry}={c_{\scriptscriptstyle\rm S}\over 2\rho_0}\sum_{i,j\ne i} \int[2\delta(T_i,T_j)-1]\rho_i({\bf r})\rho_j({\bf r}){\rm d}^3r\ , \end{equation} where $T_i$ is the isospin index of nucleon $i$. In the nuclear matter limit, this term goes to \begin{equation} \int{c_{\scriptscriptstyle\rm S}\over 2} {(\rho_{\rm p}-\rho_{\rm n})^2\over\rho_0}{\rm d}^3r\ , \end{equation} where $\rho_{\rm p}$ and $\rho_{\rm n}$ denote proton and neutron densities, respectively. \subsection{Pauli potential} We introduce a phenomenological repulsive potential which inhibits nucleons of the same spin $S$ and isospin $T$ to come close to each other in the phase space. 
We assume a very simple form for this potential, \begin{eqnarray} H_{\rm Pauli}&=&{c_{\rm\scriptscriptstyle P}\over 2}\sum_i(f_i-f_0)^\mu \theta(f_i-f_0)\ , \\ f_i&\equiv&\sum_j \delta(S_i,S_j)\delta(T_i,T_j) \left|\langle\phi_i|\phi_j\rangle\right|^2\ , \end{eqnarray} where $f_i$ is the overlap of a nucleon $i$ with the same kind of nucleons (including itself), and we take the threshold parameter $f_0 \approx 1$. $c_{\rm\scriptscriptstyle P}$ is the strength of the potential. In order to see the statistical behavior of an infinite system with our Pauli potential, we show in Fig.~1 the density and temperature dependence of the energy per nucleon of a free nucleon gas. In the limit of an infinite system, we assume that all the wave packets approach plane waves with a uniform coordinate-space distribution. Then we perform a Metropolis simulation in momentum space as in Refs.~\cite{refPauliA,refPauliB}. As for the parameters of the Pauli potential in the figure, we take $c_{\rm\scriptscriptstyle P}=15$~MeV, $f_0=1.05$ and $\mu=2.0$. No nuclear or Coulomb potential is included. In the low-temperature region our gas is closer to the Fermi gas than to the Boltzmann gas. In the high-temperature region, however, it rapidly approaches the classical limit of the Boltzmann gas. Although this is a general tendency for any set of parameters, the behavior of our gas is clearly different from that of the simple classical gas, and we can describe, to a certain extent, the properties of a Fermi gas. For the ground-state nuclei and for the low- and medium-energy reactions treated in this paper, the inclusion of the Pauli potential greatly improves the standard QMD, as shown below. \begin{figure} \vspace{170mm}\hspace{-5mm} \special{psfile=FIG1.ps vscale=0.6 hscale=0.6} \vspace{0mm}\hspace{90mm} \caption{ Energies per nucleon of the infinite system without nuclear and Coulomb potentials. Open circles and crosses denote the total and the kinetic energy of the gas with the Pauli potential with $c_{\rm\scriptscriptstyle P}=15$~MeV, $f_0=1.05$ and $\mu=2.0$. Dashed and dot-dashed lines denote the Fermi gas and the classical Boltzmann gas, respectively.} \end{figure} There are two possible ways to fix the interaction parameters. One is to keep the saturation condition of nuclear matter with the nuclear and Pauli potentials. In this case we have to adjust the Skyrme interaction parameters as well as the Pauli potential. We calculate the energy of the infinite system at zero temperature as a function of its density. Then the Skyrme interaction parameters are adjusted to give a binding energy of 16 MeV per nucleon at the saturation point. Keeping the saturation property of nuclear matter, we search for good parameters of the Pauli potential for finite nuclear systems. We call the parameters fixed in this way ``parameter set 1''. Another way is to fix the Skyrme parameters at their original values, neglecting the contribution of the Pauli potential to the saturation property of matter. In this case we adopt standard parameters of the Skyrme interaction. The parameters of the Pauli potential are then searched to give good agreement with the systematic trend of the binding energies of finite systems. We call these parameters ``parameter set 2''. The parameter values of both sets are listed in Table I. \begin{table} \caption{ Parameter values used in the Skyrme, Symmetry and Pauli potentials. 
} \begin{tabular}{ccccccccc} & & $\alpha$ & $\beta$ & $\gamma$ & $c_{\scriptscriptstyle\rm S}$ & $c_{\rm\scriptscriptstyle P}$ & $f_0$ & $\mu$ \\ \noalign{\hrule} & parameter set 1 & $-116.6$ MeV & $70.8$ MeV & 2 & 25 MeV & 15 MeV & 1.05 & 2.0 \\ & parameter set 2 & $-124.3$ MeV & $70.5$ MeV & 2 & 25 MeV & 15 MeV & 1.0 & 1.3 \\ \end{tabular} \end{table} \subsection{Two-body collision term} For the treatment of two-body collisions, we follow the prescription of the standard QMD. If a pair of nucleons fulfills the following conditions, i.e., i) their relative distance takes its minimum value within the time-step, and ii) the minimum distance is smaller than a certain value $d_{\rm coll}$, then a stochastic two-body collision is set to occur iii) with the probability $P_{\rm coll}$ given as \begin{equation} P_{\rm coll}={\sigma_{\rm NN}\over \pi {d_{\rm coll}}^2}\ . \end{equation} Here we take 2.0 fm for the value of $d_{\rm coll}$, and $\sigma_{\rm NN}$ is the energy-dependent nucleon-nucleon collision cross section, parameterized as in Ref.~\cite{refAMD}: \begin{eqnarray} \sigma_{\rm NN} &=&{100\over 1+\epsilon/ 200\ {\rm MeV}}\ {\rm mb}\ , \\ \epsilon&\equiv& p_{\rm rel}^2/2m\ . \end{eqnarray} In this paper the angular distribution of the final state of a collision is assumed to be isotropic. The final relative momentum of the colliding nucleons is searched so as to conserve the total energy of the system, since total energy conservation is not guaranteed due to the momentum dependence of the Pauli potential. The Pauli principle is also checked, and collisions with unsatisfactory final states are canceled. \section{The ground state properties} One has to prepare energy-minimum states as initial ground nuclei. They are obtained by starting from a random configuration and by solving the damped equations of motion \begin{eqnarray} &&\dot{{\bf R}_i}= {\partial H\over\partial{\bf P}_i} +\mu_{_{\bf R}}{\partial H\over\partial{\bf R}_i} \ , \ \ \ \dot{{\bf P}_i}= -{\partial H\over\partial{\bf R}_i} +\mu_{_{\bf P}}{\partial H\over\partial{\bf P}_i}\ , \nonumber\\ \label{eqDamp}\\ &&{3\hbar\over4}\dot\lambda_i= -{\partial H\over\partial\delta_i} +\mu_{_{\lambda}}{\partial H\over\partial\lambda_i}\ , \ \ \ {3\hbar\over4}\dot\delta_i= {\partial H\over\partial\lambda_i} +\mu_{_{\delta}}{\partial H\over\partial\delta_i}\ . \nonumber \end{eqnarray} Here $\mu_{_{\bf R}}$, $\mu_{_{\bf P}}$, $\mu_{_{\lambda}}$ and $\mu_{_{\delta}}$ are damping coefficients. With negative values of these coefficients the system goes to its (local) minimum point, \begin{eqnarray} {dH\over dt}&=&\sum_i\left[ {\partial H\over\partial{\bf R}_i}\dot{\bf R}_i+ {\partial H\over\partial{\bf P}_i}\dot{\bf P}_i+ {\partial H\over\partial\lambda_i}\dot\lambda_i+ {\partial H\over\partial\delta_i}\dot\delta_i\right] \nonumber\\ &=&\sum_i\left[ \mu_{_{\bf R}}\left({\partial H\over\partial{\bf R}_i}\right)^2+ \mu_{_{\bf P}}\left({\partial H\over\partial{\bf P}_i}\right)^2+ {4\mu_{_{\lambda}}\over3\hbar}\left({\partial H\over\partial\lambda_i} \right)^2+ {4\mu_{_{\delta}}\over3\hbar}\left({\partial H\over\partial\delta_i} \right)^2\right] \nonumber\\ &\leq& 0\ . \end{eqnarray} All the nucleon wave-packets obeying Eq.~\ref{eqDamp} stop their motions at the energy-minimum state, \begin{eqnarray} \dot{\bf R}_i&=&0\ , \ \ \ \ \ \dot{\bf P}_i=0\ , \nonumber\\ \\ \dot\lambda_i&=&0\ , \ \ \ \ \ \dot\delta_i=0\ , \nonumber \end{eqnarray} while in the standard QMD model nucleons are moving around in the ground states. 
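For illustration, the frictional-cooling procedure of Eq.~(\ref{eqDamp}) can be sketched in a few lines of Python. The sketch below is schematic only: \texttt{H} stands for a user-supplied energy function playing the role of the full EQMD Hamiltonian, the gradients are taken by finite differences, and the step size and damping coefficients are arbitrary illustrative choices rather than the values used in our calculations.
\begin{verbatim}
import numpy as np

hbar = 197.327   # MeV fm/c (time measured in fm/c); enters the width equations

def num_grad(f, x, eps=1.0e-5):
    # central finite differences -- adequate for a schematic cooling run
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def cool(H, R, P, lam, delta, mu=(-0.1, -0.1, -0.01, -0.01),
         dt=0.5, steps=4000):
    """Damped EQMD-like equations of motion: with negative damping
    coefficients the energy decreases monotonically until all time
    derivatives vanish at a (local) energy minimum."""
    A = len(lam)
    def unpack(v):
        return (v[:3*A].reshape(A, 3), v[3*A:6*A].reshape(A, 3),
                v[6*A:7*A], v[7*A:])
    x = np.concatenate([R.ravel(), P.ravel(), lam, delta])
    energy = lambda v: H(*unpack(v))
    muR, muP, mul, mud = mu
    for _ in range(steps):
        gR, gP, gl, gd = unpack(num_grad(energy, x))
        R, P, lam, delta = unpack(x)
        R = R + dt * (gP + muR * gR)            # dR/dt = dH/dP + mu_R dH/dR
        P = P + dt * (-gR + muP * gP)           # dP/dt = -dH/dR + mu_P dH/dP
        lam = lam + dt * (4.0 / (3.0 * hbar)) * (-gd + mul * gl)
        delta = delta + dt * (4.0 / (3.0 * hbar)) * (gl + mud * gd)
        x = np.concatenate([R.ravel(), P.ravel(), lam, delta])
    return unpack(x)
\end{verbatim}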
Fermi motions are shared by the momentum variance of wave-packets and the non-zero values of momenta ${\bf P}_i$ which come from the momentum-dependence of Pauli potential. \subsection{Mass number dependence of the binding energy} The binding energy of a nucleus in the present framework is obtained from the minimum-energy condition for the ground state, while in the standard QMD framework it is only an input to fit the empirical value. Our interest is therefore how well the calculated values reproduce the experimental data. Figure 2 shows the comparison of binding energy per nucleon $E_{\rm bind}$ of our calculation and the experimental value. \begin{figure} \vspace{-15mm} \hspace {10mm} \special{psfile=FIG2.ps vscale=0.5 hscale=0.5 rotation=270} \vspace{100mm} \caption{ Binding energies per nucleon of ground state nuclei. Circles and triangles denote our model with parameter set 1 and 2, crosses denote corresponding experimental values. } \end{figure} With interaction parameter set 1 which guarantees the infinite matter properties, we can reasonably reproduce the binding energies of finite systems shown with circles in Fig.~2. With parameter set 2 (shown with triangles in Fig.~2) which uses the standard value of Skyrme interaction without considering the matter properties, we can reproduce the binding energy of nucleus almost perfectly. \subsection{Some typical features of individual nuclei} The shapes and density distributions of ground states are also well reproduced. In the case of light nuclei, e.g., $^{12}{\rm C}$, we can reproduce the alpha-clustering structures as displayed in Fig.~3. On the left are shown the radial density distribution (upper) and distribution of the real parts of nucleon wave-packet widths (lower) and on the right is shown the contour plot of density. This $^{12}{\rm C}$ nucleus is calculated with parameter set 2 and has the binding energy of 92.7 MeV and the rms radius of 2.31 fm which agree to the experimental values (92.2 MeV and 2.46 fm \cite{refExpCarbonOxigen}). By using interaction parameter set 1, we can also make a very similar ground state of $^{12}{\rm C}$ except for the binding energy. This three-alpha structure of $^{12}{\rm C}$ is almost the same as those obtained in AMD \cite{refAMD} and FMD \cite{refFMD}. This gives a support of introducing the Pauli potential as a phenomenological substitute of anti-symmetrization. For heavy nuclei, the density profiles is shown in the upper parts of Fig.~4. We see that the general feature of the density and the surface thickness are well reproduced in our calculations. The value of density, however, is somewhat higher than normal matter density. This is one of the problems of our model which makes the size of nuclei and therefore the reaction cross section slightly smaller than the data. We guess one reason is the Pauli principle is not perfectly realized in the ground nuclei since the Pauli potential we are using is rather moderate. One can see this fact in the Fig.~1 at low temperature where our system is slightly deviating from the Fermi gas. The lower parts of Fig.~4 shows the distribution of the real parts of nucleon wave-packet widths. The abscissas represent the distance of the wave packet from the center. Wave packets near the center are spatially more spread than those near the surface. Note that this fact does not mean the kinetic-energy density is higher at the surface since the density and the imaginary part of the width also contribute to the kinetic-energy density. 
In fact we obtain the kinetic-energy density distribution higher at the center and lower at the surface of a nucleus. A narrow width for a surface nucleon is required because otherwise the surface diffuseness becomes unrealistically large. A wide width for a central nucleon is also expected because our model predicts naturally a kind of matter limit (large width limit) for the nucleon sitting in the central part of the nucleus of large mass limit. \begin{figure} \vspace{-15mm}\hspace{2mm} \special{psfile=FIG3.ps vscale=0.5 hscale=0.5 rotation=270} \vspace{80mm} \caption{ Radial distribution (upper-left), the contour plot (upper-right) of density and the real part of the Gaussian widths (lower part) of $^{12}{\rm C}$ ground state. Interaction parameter set 2 is used. } \end{figure} \begin{figure} \vspace{-15mm}\hspace{-8mm} \special{psfile=FIG4.ps vscale=0.5 hscale=0.5 rotation=270} \vspace{95mm} \caption { Density distribution (upper parts) and Gaussian widths (lower parts) of the ground states of $^{93}{\rm Nb}$ and $^{197}{\rm Au}$. Abscissas for both denote the distance from the center. Interaction parameter set 2 is used. } \end{figure} \section{Nucleus-nucleus collisions} We apply our model to nucleus-nucleus collisions in this section. In the calculation of nuclear reactions we boost two initial nuclei according to the incident energy and then solve the EQMD equation of motion together with the two-body collision term as in the standard QMD. For all the fragments produced, their statistical decay processes to the final products are calculated \cite{refQMDplusStat}. Typically, within a time-scale of about 100 fm/c the dynamical part of nuclear reaction is completed, and is followed by the statistical decay process for a time-scale of several-order longer \cite{refQMDplusStat,refQMDrel}. Since it is not practical nor reliable to treat this statistical process in a framework of simulation, we adopt the standard statistical model \cite{refQMDplusStat,refCascade} for the latter process. With this hybrid model of QMD plus statistical decay calculation, we can calculate observables such as the mass distribution and the energy spectra of fragments. \subsection{Fragment mass distribution in the medium energy collisions} Figure 5 shows our calculation of fragment production cross sections in the $^{12}{\rm C}+^{12}{\rm C}$ (29 MeV/nucleon) reaction compared with the standard QMD (QMDstd) and AMD. The AMD results \cite{refAMD} reproduce the experimental data well especially for light fragments. Solid lines show the final fragment distribution after the statistical decay calculation while dashed lines show fragments produced in the dynamical process before the statistical decay. Though the final results of three models are quite similar, there are some differences between them before the statistical decay. Especially, AMD and QMDstd apparently differ with each other: In the AMD result, enhancement of $A_{\rm f}$ = 4 and 8 ($N$ alpha fragments) is seen while there is no peak at 4$N$ in the QMDstd result. This is mainly because AMD can describe three-alpha structure of $^{12}{\rm C}$ while QMDstd can not. There exist peaks at $A_{\rm f}$=4 in the present results of EQMD with parameter sets 1 and 2. Dynamical emissions of alpha clusters are enhanced due to the improvement of ground states in our model. To see the effect of dynamical treatment of wave-packet width, we compare in Fig.~6 the full EQMD calculation with that of fixed-width constraint for the same quantity as Fig.~5. 
In the fixed width calculation we solve the equation of motion only for ${\bf R}_i$ and ${\bf P}_i$ keeping the widths $\nu_i \equiv \lambda_i^{-1}+i\delta_i$ as constants (6$A$-dimensional calculation) using exactly the same interactions and the initial conditions as in the full EQMD. Thus this calculation also differs from the QMDstd. With fixed wave-packet widths, the distribution of dynamically produced fragments has strong peaks at $A_{\rm f}=4N$ and productions of other fragments and nucleons are rather hindered. Underestimation of nucleon yield is seen even after the statistical decay. For the description of nucleon emission process, the dynamical treatment of wave-packet widths is essential in the QMD calculation. \begin{figure} \vspace{170mm}\hspace{-3mm} \special{psfile=FIG5.ps vscale=0.7 hscale=0.7 rotation=0} \vspace{-75mm} \caption { Fragment mass distribution in $^{12}{\rm C}+^{12}{\rm C}$ (29 MeV/nucleon) reaction calculated with standard QMD, AMD and EQMD with parameter sets 1 and 2. Dashed lines denote fragments produced in dynamical process before statistical decay calculation, while solid lines denote final fragments after statistical decay. } \end{figure} \begin{figure} \vspace{165mm}\hspace{-2mm} \special{psfile=FIG6.ps vscale=0.7 hscale=0.7 rotation=0} \vspace{-80mm} \caption{ Fragment mass distribution in $^{12}{\rm C}+^{12}{\rm C}$ (29 MeV/nucleon) reaction. Dashed lines denote fragments produced in dynamical process before statistical decay calculation, while solid lines denote final fragments after statistical decay. } \end{figure} \subsection{Fusion cross section of $^{16}{\rm O}$+$^{16}{\rm O}$ reaction} We examine our model in the case of the $^{16}{\rm O}+^{16}{\rm O}$ fusion reaction. The same reaction has been calculated by the standard QMD model in Ref.~\cite{refQMDfusion}. It was found there that if one admits events of up to three nucleons escape into the fusion events, fusion cross section is nicely reproduced by the standard QMD model. If one demands no nucleon emission, however, the calculated fusion cross section is almost zero. In the present calculation, we demand that in the fusion event there is no nucleon/fragment emission within a certain time (in this paper we have chosen 450 fm/c after the contact of two nuclei). Classification of nucleons into fragments is done by the condition that nucleons within the distance of 3.0 fm belong to the same fragment. Since we judge the fusion events at finite time, we don't calculate the statistical decay process in this case. This calculation corresponds to a complete fusion reaction, and such exclusive calculation is possible in EQMD owing to the fact that EQMD ground state is stable enough so that no spurious particle emission happens. \begin{figure} \vspace{-15mm}\hspace{10mm} \special{psfile=FIG7.ps vscale=0.5 hscale=0.5 rotation=270} \vspace{100mm} \caption { Fusion cross section for $^{16}{\rm O}+^{16}{\rm O}$ reaction. Crosses denote experimental data from Refs.~\protect\cite{refFusionOOa,refFusionOOb}. The full Circles and full triangles denote our results of full EQMD calculation with parameter set 1 and 2,respectively. Open circles and open triangles denote the results of EQMD calculation with constraint of fixed wave-packet widths. } \end{figure} The fusion cross section is obtained by calculating the fusion probabilities for impact parameters of 0.0, 3.0, 4.0, 5.0, 5.5 and 6.0 fm by simulating 40 events for each impact parameter. 
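Assuming the standard impact-parameter representation $\sigma_{\rm fus}=2\pi\int b\,P_{\rm fus}(b)\,{\rm d}b$ (not spelled out above), the conversion of such fusion probabilities into a cross section can be sketched as follows; the event counts used in the sketch are hypothetical placeholders and not results of our calculation.
\begin{verbatim}
import numpy as np

# Schematic reconstruction of the fusion cross section from fusion
# probabilities P_fus(b) sampled at the discrete impact parameters quoted
# in the text, via the trapezoidal rule for 2*pi * int b P_fus(b) db.
# The fused-event counts below are hypothetical placeholders.

b_grid   = np.array([0.0, 3.0, 4.0, 5.0, 5.5, 6.0])   # impact parameters [fm]
n_events = 40
n_fused  = np.array([40, 37, 28, 11, 3, 0])            # hypothetical counts
p_fus    = n_fused / n_events

integrand = b_grid * p_fus
sigma_fm2 = 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                 * np.diff(b_grid))
sigma_mb  = 10.0 * sigma_fm2        # 1 fm^2 = 10 mb
print("sigma_fus = %.0f mb" % sigma_mb)
\end{verbatim}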
In Fig.~7 we show the incident-energy dependence of the fusion cross section. Crosses denote experimental data from Refs.~\protect\cite{refFusionOOa,refFusionOOb}, and full circles and triangles denote our calculations with parameter sets 1 and 2. There is no significant difference between the results with the two parameter sets. They reproduce about 85 \% of the experimental values, i.e., they somewhat underestimate the data. The ground state of $^{16}{\rm O}$ we have used has a binding energy of 130.8 MeV and an rms radius of 2.5 fm, which is about 92 \% of the experimental value 2.73 fm \cite{refExpCarbonOxigen}. We suspect that the small size of $^{16}{\rm O}$ causes this underestimation of the fusion cross section. Though we cannot, at present, perfectly reproduce the fusion cross section, one should remember that we get an almost zero fusion cross section with the standard QMD by using the same criterion for fusion events. We discuss again the effect of the dynamical treatment of the wave-packet widths by comparing the full calculation with that of the fixed-width constraint as in the previous section. In the fixed-width calculation, the initial conditions and interactions are exactly the same as in the full EQMD calculation. As shown with open circles and triangles in Fig.~7, the fusion cross sections of the fixed-width calculation are close to those of the full calculation in the low-energy region and are much smaller for higher incident energies. At very low energies, two nuclei can easily fuse if they overcome the fusion barrier. At higher energies, however, the dissipation of the incident energy into internal excitation is necessary. The dynamical change of the wave-packet widths offers more degrees of freedom for this dissipation than the case of fixed widths. Thus the dynamical treatment of the wave-packet widths plays a more important role at higher energies, where dissipation of the incident energy is important for the fusion process. \section{Summary} In this paper we have discussed how to treat low-energy reactions in the framework of quantum molecular dynamics (QMD). We have introduced a phenomenological Pauli potential into the effective interactions and have made the width of each wave packet a dynamical variable. With this extended QMD (EQMD) method, we can describe well the ground-state properties such as binding energies, density profiles and the alpha-clustering structure in light nuclei. In the calculation, we have introduced two sets of interaction parameters. Set 1 is obtained by adjusting the nuclear potential so as to keep the matter saturation properties, which depend both on the Pauli potential and on the nuclear potential. Set 2 is obtained from the standard parameterization of the nuclear potential, and the Pauli potential is searched to give good agreement with the systematic trend of the binding energies of finite systems. With the latter parameter set, we can reproduce the binding energies of nuclei almost completely from very light to very heavy systems. As for nucleus-nucleus collisions, however, we did not see any significant difference between the different interaction parameter sets. The widths of the Gaussian wave-packets, which are taken as constants of arbitrary value in the standard QMD, were determined from the minimum-energy condition for the ground state. We showed that for $^{12}{\rm C}$, the real parts of the widths become identical for all 12 nucleons and a clear alpha structure appears, where the 3 alphas are located at the vertices of an equilateral triangle. 
For medium and heavy nuclei the real parts of the widths show a systematic distribution: wider widths in the central region and narrower ones near the surface, a result first obtained in this work. Remaining problems to be solved are the somewhat higher matter density near the center and the somewhat smaller root-mean-square radius. In the calculation of nucleus-nucleus collisions, we showed that EQMD is able to enhance the dynamical alpha emission process. This enhancement was seen in the AMD calculation but not in the standard QMD. The effect of the dynamical change of the wave-packet widths was clearly seen in two points: One is nucleon emission or the disintegration of clusters in medium-energy collisions. With the dynamical change of the Gaussian widths, nucleons can escape easily from a cluster. If we freeze the widths, the dynamical emission of nucleons is much hindered. The other is the fusion reaction, where dissipation of the incident energy becomes essential. We showed that the change of the widths in the fusion of $^{16}{\rm O}+^{16}{\rm O}$ causes a large enhancement of the cross section at higher incident energies. Without this freedom, the EQMD model gives an unphysically small fusion cross section. The study of EQMD in this paper showed that this is a promising direction for generalizing QMD simulations. Systematic calculations with EQMD, however, are still at an early stage. We need further calculations to clarify the basic features of this model. One of the most interesting items to study with EQMD calculations is low-energy reactions between heavy nuclei, because no other many-body model has been successfully applied to them. The study in this direction is in progress. \acknowledgements The authors would like to thank Dr.~Aldo Bonasera, Prof.~Hisashi Horiuchi and Dr.~Tomoyuki Maruyama for fruitful discussions. Support of the Institute of Physical and Chemical Research (RIKEN) for the use of the VPP-500 supercomputer is also acknowledged.
\section{Introduction} Time-delayed feedback was originally proposed in the context of chaos control \citep{PYR92} as an alternative to the famous OGY method developed earlier by \citet{OTT90}. The idea was to achieve stabilization of unstable periodic orbits (UPOs) by adding, to a chaotic system, a control force in the form of the difference between a system variable at time $t$ and at a delayed time $t-\tau$. This method proved to be very powerful and has been successfully applied to various physical systems since then \citep{SCH07}. The scheme was improved by \citet{SOC94}, and other variants have been elaborated \citep{KIT95,BAB02,BEC02,UNK03,SCH03a}, and applied also to stochastic systems \citep{GOL03,JAN03,HAU06}. Moreover, elegant analytical theories \citep{JUS97} were developed, thus supporting the numerical findings. Apart from the practically relevant applications of time-delayed feedback, e.g. to lasers \citep{SCH06a}, much of the interest lies in the mathematical aspect of the problem. Delay differential equations are difficult to handle. The delay renders the system infinite-dimensional, and the interplay with nonlinearity uncovers complex dynamic behaviour. Delay-induced multistability was already predicted in the first paper by \citet{PYR92}. The idea that time-delayed feedback may not only be used for controlling a system but also for creating new dynamics is not new. Nevertheless, the investigation of delay-induced bifurcations and multistability is still a growing field \citep{XU04,BAL05} with applications to the logistic map as well as to laser equations \citep{MAR03,MAR05a}. \section{The model} Here we study the influence of time-delayed feedback in a generic model representative of excitable dynamics \citep{GAN93,DIT94,RAP94}. The behaviour of the system without delay is governed by a global bifurcation, namely a saddle-node bifurcation on a limit cycle ({\em saddle-node infinite period bifurcation, SNIPER}), that takes place when a certain parameter exceeds a threshold. Such a bifurcation was first observed experimentally in a semiconductor device \citep{PEI89}, and also encountered in various semiconductor models, e.g. for Gunn domains \citep{SHI96}, superlattices \citep{PAT98,HIZ06}, or lasers \citep{DUB99,WIE02,KRA03a}. Excitability naturally renders the system highly sensitive to fluctuations, and therefore this system has served perfectly as an example for {\it coherence resonance} \citep{PIK97}, as shown in the seminal papers by \citet{GAN93} and \citet{DIT94} over ten years ago. In this paper we do not consider the effect of noise but extend the generic model by incorporating time-delayed feedback according to the Pyragas scheme. The equations are the following: \begin{eqnarray} \label{eq:model} \dot{x}&=&x(1-x^2-y^2)+y(x-b)-K(x-x_{\tau})\\ \dot{y}&=&y(1-x^2-y^2)-x(x-b)-K(y-y_{\tau}). \end{eqnarray} Here $x$ and $y$ denote the variables at time $t$, while $x_{\tau}$ and $y_{\tau}$ denote the delayed ones at time $t-\tau$, with $\tau$ and $K$ being the delay and the control strength, respectively. This kind of control is called {\it diagonal} because the control force couples to the system via a multiple of the unity matrix. In the absence of delay, i.\,e.\ $K=0$, $b$ plays the role of the bifurcation parameter. 
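Trajectories of the delayed system, such as the phase portraits discussed below, can be obtained with any standard integrator that keeps a history buffer of length $\tau$. The following minimal fixed-step Euler sketch in Python is given only as an illustration; it is not the integration or continuation machinery used for the results of this paper, and the constant initial history is an arbitrary choice.
\begin{verbatim}
import numpy as np

def rhs(x, y, xd, yd, b, K):
    # right-hand side of the model with delayed variables xd = x(t-tau), etc.
    r2 = x * x + y * y
    fx = x * (1.0 - r2) + y * (x - b) - K * (x - xd)
    fy = y * (1.0 - r2) - x * (x - b) - K * (y - yd)
    return fx, fy

def integrate(b=0.95, K=0.3438, tau=3.0, dt=1e-3, t_end=200.0,
              x0=0.0, y0=0.1):
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    # constant initial history on [-tau, 0]
    hist_x = np.full(n_delay + n_steps + 1, x0)
    hist_y = np.full(n_delay + n_steps + 1, y0)
    for n in range(n_delay, n_delay + n_steps):
        fx, fy = rhs(hist_x[n], hist_y[n],
                     hist_x[n - n_delay], hist_y[n - n_delay], b, K)
        hist_x[n + 1] = hist_x[n] + dt * fx
        hist_y[n + 1] = hist_y[n] + dt * fy
    return hist_x[n_delay:], hist_y[n_delay:]

# Initial data near the saddle or near the focus select different attractors
# in the bistable regime discussed below.
\end{verbatim}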
In polar coordinates $x=r \cos{\varphi}, y=r \sin{\varphi}$ Eq.~(\ref{eq:model}) with $K=0$ reads \begin{eqnarray} \label{eq:polarcoord} \dot{r}&=&r\left( 1-r^2 \right)\\ \dot{\varphi}&=&b-r\cos\varphi \end{eqnarray} When $b<1$ there are three fixed points: an unstable focus at the origin and a pair of a saddle-point and a stable node on the unit circle with coordinates $(b,+\sqrt{1-b^2})$ and $(b,-\sqrt{1-b^2})$, respectively. The latter collide at $b=1$ and a limit cycle $r=1$ is born. Above but close to the bifurcation, the frequency $f$ of this limit cycle obeys a characteristic square-root scaling law $f \sim (b-1)^{1/2}$. \section{Linear stability analysis of the delay equation} We prepare the system slightly below the bifurcation ($b=0.95$) and switch on the control. The first question that arises concerns the stability of the three fixed points and how this changes, or not, due to delay. For this, we perform a standard linear stability analysis and derive the characteristic equation for the roots, $\Lambda$, which determine the stability of the fixed points. For the unstable focus, the characteristic equations is: \begin{equation} \label{eq:char_poly_ufo} (1-K+Ke^{-\Lambda \tau}-\Lambda)^2+b^2=0. \end{equation} Due to the presence of the delay, Eq.~(\ref{eq:char_poly_ufo}) has infinitely many solutions. However, the stability of the fixed points is determined by a finite number of critical roots with largest real parts. Using the Lambert function $W$, which is defined as the inverse function of $g(z)=ze^z$ for complex $z$ \citep{HAL71,COR96}, the solution of Eq. (\ref{eq:char_poly_ufo}) can be expressed as: \begin{equation} \label{eq:Lambert_ufo} \Lambda=\frac{1}{\tau}W[K\tau e^{\tau(K-1\pm b)}]-K+1\mp ib. \end{equation} In the case of the saddle and the node, the characteristic equation can be factorized into two equations: \begin{eqnarray} \label{eq:char_saddle_node} \Lambda+K+2-Ke^{-\Lambda \tau}&=&0\\ \Lambda^{s,n} +K \mp \sqrt{1-b^2} -Ke^{-\Lambda^{s,n} \tau}&=&0, \end{eqnarray} with solutions: \begin{eqnarray} \label{eq:Lambert_saddle} \Lambda_1&=&\frac{1}{\tau}W[K\tau e^{\tau(2+K)}]-2-K\\ \label{eq:Lambert_node} {\Lambda_2}^{s,n}&=&\frac{1}{\tau}W[K\tau e^{\tau(K\mp \sqrt{1-b^2})}]-K\pm \sqrt{1-b^2}. \end{eqnarray} The superscripts '$s$' and '$n$' denote the saddle (upper sign) and the node (lower sign), respectively. Figure~\ref{fig:real_eigen_vs_tau} shows the real parts of the eigenvalues $\Lambda$ as a function of $\tau$ for a fixed value of $K$ for all three fixed points. One may see the eigenvalues of the uncontrolled system at $\tau=0$, and their interaction with the delay-induced modes (blue) with increasing $\tau$. In all three cases, control is unable to change the stability of the fixed point: in the case of the unstable focus, the mode with the largest real part (red) tends to very small values with increasing delay, remaining however positive. The same holds for the unstable mode of the saddle. Symmetric behaviour is observed for the stable modes of both saddle and node: they tend to zero as a function of $\tau$ but remain negative. The picture does not change qualitatively even for other values of $K$ and therefore one might conclude that no delay-induced bifurcations of fixed points take place. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{fig1_rep.eps} \caption{Real parts of the complex eigenvalues $\Lambda$ as a function of $\tau$, for fixed $K=1$ and $b=0.95$. For the (a) unstable focus, (b) the saddle point and (c) the stable node. 
The modes emerging from the uncontrolled system and the delay-induced modes are marked red and blue, respectively.} \label{fig:real_eigen_vs_tau} \end{figure} \section{Global bifurcation analysis} However, the above local analysis gives no information on the global changes in phase space that delay potentially induces. A numerical investigation shows, in fact, that there exists bistability in a certain parameter regime in the $K$-$\tau$ plane: trajectories starting close to the saddle point are attracted by a delay-induced limit cycle, whereas trajectories starting elsewhere end up in the stable node. Keeping $\tau=3$ fixed we find the critical value $K_c$ of $K$ for which this delay-induced limit cycle is born and observe a scaling $T \sim \ln\left\vert K-K_c\right\vert$ in the period $T$ of the corresponding oscillations, typical for the case of a homoclinic bifurcation. In Fig.~\ref{fig:hom_scaling_law} phase portraits of the system below, at and above the bifurcation are shown. Trajectories with different initial conditions are shown: one starting from the vicinity of the unstable focus in the origin (blue) and one from the vicinity of the saddle (red). Bistability is revealed in Fig.~\ref{fig:hom_scaling_law}(c) where two attractors (the stable node and the delay-induced limit cycle) coexist. The 'kink' in the trajectory shortly before the loop closes is due to the control: the control force starts acting at $t=3$ when the system is still moving on the slow part of the unit circle. Therefore, its effect is not so noticeable. As the system moves faster the control force attains higher values and the trajectory starts deviating from its deterministic path at $t=13$. This deviation becomes large at $t=18$ where the trajectory, shortly before settling in the stable node, appears to be "attracted" to the saddle, resulting in this 'kink' in the $x-y$ projection. Also, as $K$ approaches the critical value, the trajectory passes closer and closer to the saddle on its way to the stable node. This ends in a homoclinic orbit at $K=K_c$, (Fig.~\ref{fig:hom_scaling_law}(b)) from which a periodic orbit is generated (Fig.~\ref{fig:hom_scaling_law}(c)). The period of the born limit cycle scales according to $T\approx -{\Lambda_u}^{-1}ln|K-K_c|$, where $\Lambda_u$ is the real part of the least unstable eigenvalue of the saddle point (i.\, e.\,, the one closest to the imaginary axis). One may calculate this from Eq. (\ref{eq:Lambert_saddle}) and find ${\Lambda_u}^{-1}=0.1739^{-1}=5.75$. This is in rather good agreement with the slope of the solid line in Fig.~\ref{fig:hom_scaling_law}(d) which equals $5.35$. \begin{figure}[htbp] \centering \includegraphics[clip,width=\columnwidth]{fig2_rep.eps} \caption{(a) Two dimensional projection of the phase space below the homoclinic bifurcation ($K=0.335$). (b) Homoclinic orbit (red) achieved at $K_c=0.3401$. (c) Delay-induced limit cycle (red) above the homoclinic bifurcation ($K=0.3438$). (d) Scaling of the oscillation period $T$ above but close to the critical point $K_c$ (crosses: simulation data, solid line: linear fit). Full and open circles mark stable and unstable fixed points, respectively. Parameters: $b=0.95$, $\tau=3$. } \label{fig:hom_scaling_law} \end{figure} In the following we use a bifurcation continuation tool \citep{ENG02,ENG01} and follow the homoclinic bifurcation in the $K-\tau$ plane. The produced bifurcation curve can be seen in Fig.~\ref{fig:hom_branch} (left). 
It consists of two main curves: one running through points $A-E$ and a second tongue-like curve. In the white area the system is monostable (stable fixed point) while in the yellow area a delay-induced periodic attractor is born via a homoclinc bifurcation marked by the red curves. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{fig3_rep.eps} \caption{Curve of homoclinic bifurcations (red) in the $K-\tau$ plane (left). A - E labels various points with homoclinic orbits, which are shown in the $x-y$ phase plane in the panel on the right. Delay-induced limit cycles exist, in addition to the stable fixed point, in the yellow area. The blue dashed curve separates the regions $\sigma_0<0$ (left) and $\sigma_0>0$ (right)} \label{fig:hom_branch} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{fig4_rep.eps} \caption{Spectra of the eigenvalues $\Lambda$ of the saddle for two points on the homoclinic bifurcation line: (a) $K=0.3401$, $\tau$=3 (B in Fig.~\ref{fig:hom_branch}) and (b) {\bf $K=0.57$, $\tau=7.28$} (E in Fig.~\ref{fig:hom_branch}). The saddle quantity $\sigma_0=Re(\Lambda_s)+Re (\Lambda_u)$ is negative in (a) whereas in (b) it is positive. ($b=0.95$)} \label{fig:spectra} \end{figure} At this point one should emphasize the role of the saddle-point: due to the delay, the saddle possesses no longer two distinct eigenvalues (one positive, i.\,e.\ unstable, and one negative, i.\,e.\ stable) but infinitely many. Moereover, complex eigenvalues come into play as well. The eigenvalues, however, that determine the behaviour of the colliding homoclinic orbit, are the leading ones, i.\,e.\ those closest to the imaginary axis. In Fig.~\ref{fig:spectra} one can see the eigenvalue spectrum for two parameter values on the homoclinic curve and notice that the leading eigenvalues of the saddle (red) are a positive real eigenvalue, as in the original uncontrolled system, and a complex conjugate pair with negative real parts, generated by the delay. This means that the saddle may turn into a {\it saddle-focus} for certain values of $K$ and $\tau$. Homoclinic orbits attached to a saddle-focus approach the fixed point in an oscillating manner. This explains the phase portraits in Fig. \ref{fig:hom_branch} (right) which become more and more complicated as $K$ increases. \begin{figure}[tbp] \centering \includegraphics[clip,width=\columnwidth]{fig5_rep.eps} \caption{ (a) Period $T$ of limit cycle born in a homoclinic bifurcation at $(K,\tau)=(0.17145,7)$ (point A in Fig.~\ref{fig:hom_branch}, $\sigma_0<0$). (b) Period $T$ of limit cycles in the multistable regime at $(K,\tau)=(0.213,7)$ (point Z in Fig.~\ref{fig:hom_branch}, $\sigma_0>0$), undergoing infinitely many fold (F) and period-doubling (PD) bifurcations, before ending in a homoclinic orbit h for $T\rightarrow\infty$ at $K=0.213$. Solid blue and red dashed lines denote stable and unstable limit cycles, respectively. The insets show the two leading Floquet multipliers of the periodic orbit $\mu_1=1$ (green) and $\mu_2$ (blue) with $K$ as a parameter in (a), and $T$ as a parameter in (b), in the complex plane. $b=0.95$ } \label{fig:tau7} \end{figure} From the above, it is clear that the two basic ingredients responsible for the delay-induced dynamics in our system are the homoclinic orbits and the saddle-foci. The theory of homoclinic bifurcations for ordinary differential equations has been well developed \citep{KUZ95,GUC86,WIG88}. 
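As an aside, the leading saddle eigenvalues displayed in Fig.~\ref{fig:spectra} can be evaluated directly from the Lambert-function representations (\ref{eq:Lambert_saddle}) and (\ref{eq:Lambert_node}) by scanning a few branches $W_k$. The short sketch below (assuming SciPy's implementation of the Lambert function is available) reproduces, for instance, the value $\Lambda_u\approx 0.174$ quoted above for $K=0.3401$, $\tau=3$.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

# Saddle eigenvalues from the Lambert-function solution of the factorized
# characteristic equation  Lambda + K + c - K exp(-Lambda*tau) = 0 ,
# whose roots are  Lambda = W_k[K*tau*exp(tau*(K+c))]/tau - K - c .
# Point on the homoclinic curve: K = 0.3401, tau = 3, b = 0.95.

b, K, tau = 0.95, 0.3401, 3.0
s = np.sqrt(1.0 - b * b)

def roots(K, tau, c, branches=range(-5, 6)):
    z = K * tau * np.exp(tau * (K + c))
    return np.array([lambertw(z, k) / tau - K - c for k in branches])

spectrum = np.concatenate([roots(K, tau, 2.0),   # common radial factor
                           roots(K, tau, -s)])   # saddle factor (upper sign)
leading = spectrum[np.argsort(-spectrum.real)][:5]
print(leading)   # rightmost real root ~0.174 = Lambda_u
\end{verbatim}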
As already mentioned, global bifurcations are strongly related to excitability and therefore one expects to encounter them in excitable systems. Various physical systems such as modulation-doped semiconductor heterostructures \citep{DOE92}, semiconductor lasers \citep{WIE02,KRA03a}, neuron models \citep{FEU00} and chemical systems \citep{BOR06} have been studied in this respect, both theoretically and experimentally. On the other hand, less work has been carried out for systems with delay undergoing such nonlocal bifurcations \citep{SET06}. It is therefore appropriate to analyze a generic system like the one studied here which, despite its simplicity, exhibits rich delay-induced dynamics with a homoclinic bifurcation as a key component. \section{Delay-induced multistability} In what follows we will apply the theorems on homoclinic orbits connecting to saddle-foci as developed by Shilnikov \citep{KUZ95}. According to them, the so-called saddle quantity is crucial for the homoclinic bifurcations occurring in high-dimensional systems. The saddle quantity is defined as $\sigma_0=Re(\Lambda_s)+Re (\Lambda_u)$, where $\Lambda_s$ and $\Lambda_u$ are the leading stable and unstable eigenvalues, respectively. Shilnikov proved, among other things, that negative $\sigma_0$ results in the birth of a unique stable limit cycle from a homoclinic orbit. On the other hand, for $\sigma_0>0$, a wide variety of homoclinic bifurcations may occur, some of which involve infinitely many periodic orbits in the vicinity of the homoclinic orbit. The green dashed curve in Fig.~\ref{fig:hom_branch} shows the condition $\sigma_0=0$. Along the homoclinic bifurcation line in Fig.~\ref{fig:hom_branch} the saddle quantity changes sign, thereby allowing for both scenarios to take place. Figure~\ref{fig:spectra} shows the eigenvalue spectra for two different points on the bifurcation line corresponding to negative and positive saddle quantities, respectively. In the following, we restrict ourselves to a fixed value of $\tau=7$ and reveal multistability beyond the homoclinic bifurcation. For $\tau=7$, a homoclinic bifurcation takes place at $K=0.17145$ (point A in Fig.~\ref{fig:hom_branch}). In this case $\sigma_0=-0.0116$, and the bifurcation creates one stable limit cycle, with Floquet multipliers within the unit circle (inset of Fig.~\ref{fig:tau7}(a)). The period $T$ of this limit cycle increases monotonically as the bifurcation point is approached (Fig.~\ref{fig:tau7}(a)). Moving further along $\tau=7$ other homoclinic bifurcation curves are crossed, e.\,g.\ at $K=0.213$ (Fig.~\ref{fig:tau7}(b), cf. point Z in Fig.~\ref{fig:hom_branch}). There, the saddle quantity is positive ($\sigma_0=0.0023$, calculated analytically from Eq. (\ref{eq:Lambert_node})), and the picture is much more complicated: an infinite number of bifurcations take place, which are related to saddle-node (fold) bifurcations of pairs of stable and unstable limit cycles, and additional period-doubling (flip) bifurcations of the stable limit cycles. The dependence of the period $T$ of the limit cycles upon $K$ is a nonmonotonic multivalued function, whose turning points are associated with saddle-node bifurcations. In between the fold bifurcations of the stable limit cycles, pairs of forward and inverse period-doubling bifurcations occur. In the insets of Fig.~\ref{fig:tau7} the trivial Floquet multiplier $\mu_1=1$ and the leading Floquet multiplier $\mu_2$ of the periodic orbit are plotted. 
It can be seen how $\mu_2$ changes as $T$ is varied along the multivalued function in the main figure, showing how fold and flip bifurcations occur, at $\mu_1=1$ and $\mu_1=-1$, respectively. One should also expect other bifurcations near the critical point due to secondary homoclinic orbits which are beyond the scope of this paper. \section{Conclusions} In conclusion, we have presented a mechanism for delay-induced multistability in a system near a global bifurcation. In addition to the fixed point attractor which the uncontrolled system already possesses, a time-delayed feedback in the form of Pyragas difference control induces one or more coexisting limit cycle attractors. Depending upon the feedback control strength $K$ and the delay time $\tau$, either a single stable limit cycle is born in a homoclinic global bifurcation, or an infinite number of (stable and unstable) periodic orbits is induced undergoing a rich menagerie of bifurcation scenarios including period doubling and fold bifurcations. We have shown that the key ingredient in the observed dynamics is a homoclinic orbit connected to a saddle-focus created by delay. A bifurcation continuation in the $K-\tau$ plane was performed. Moreover, we were able to verify Shilnikov's theory of homoclinic bifurcations in a certain parameter regime. The excitable nature of the system and the infinite-dimensional phase space, due to delay, appear to play a crucial role in the induced homoclinicity. These results are interesting also from the point of view of applications, since our generic model is representative for a wide range of real-world systems. For instance, the transition from stationary to moving field domains in semiconductor superlattices has been shown to be associated with a saddle-node bifurcation on a limit cycle as described by Eq.(\ref{eq:model}) at $K=0$ \citep{HIZ06}, and time-delayed feedback control can also be realized in this system \citep{SCH03a}. Already without delay, this system has been noted for its high multistability of stationary domain states \citep{PRE94,KAS94,KAS96}, and bistability or higher multistability has been found in many other semiconductor nanostructures, see e.g. \citet{MEI00b,SCH01}. \begin{acknowledgments} This work was supported by DFG in the framework of Sfb 555. We are grateful to A. Balanov, G. Bordyugov, and G. Stegemann for fruitful discussions. \end{acknowledgments} \newpage
\section{A-Posteriori Error Estimator} \label{sec:a_posteriori} The model reduction error in the ArbiLoMod has to be controlled. To this end, an a posteriori error estimator is used which should have the following properties: \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item It is robust and efficient. \item It is online-offline decomposable. \item It is parallelizable with little amount of communication. \item After a localized geometry change, the offline computed data in unaffected regions can be reused. \item It can be used to steer adaptive enrichment of the reduced local subspaces. \end{enumerate} All these requirements are fulfilled by the estimator presented in the following. We develop localized bounds for the standard RB error estimator, \begin{equation} \Delta(\widetilde u_\mu) := \frac{1}{\alpha_\mu} \norm{R_\mu(\widetilde u_\mu)}_{V_h'} \end{equation} where $R_\mu(\widetilde u_\mu) \in V_h^\prime$ is the global residual given as $ \dualpair{R_\mu(\widetilde u_\mu)}{\varphi} = \dualpair{f_\mu}{\varphi } - a_\mu(\widetilde{u}_\mu, \varphi) $. This error estimator is known to be robust and efficient (\cite[Proposition 4.4]{HesthavenRozzaEtAl2016}): \begin{equation}\label{eq:global_estimator} \norm{u_\mu - \widetilde u_\mu}_V \leq \Delta(\widetilde u_\mu) \leq \frac{\gamma_\mu}{\alpha_\mu} \norm{u_\mu - \widetilde u_\mu}_V. \end{equation} \subsection{Abstract Estimates} We start by showing two abstract localized estimates for the dual norm of a linear functional. \begin{proposition} \label{thm:a_posteriori} Let $\{O_\xi\}_{\xi \in \Upsilon_E}$ be a collection of linear subspaces of $V_h$ for some finite index set $\Upsilon_E$ and let $\tilde V \subset V_h$ denote an arbitrary subspace. Moreover let $P_{O_\xi}: V_h \longrightarrow O_\xi \subseteq V_h, \xi \in \Upsilon_E$ be continuous linear mappings which satisfy $\sum_{\xi \in \Upsilon_E} P_{O_\xi} = \operatorname{id}_{V_h}$. With the stability constant of this partition modulo $\tilde V$ defined as \begin{equation} c_{\mathrm{pu},\tilde V} := \sup_{\varphi \in V_h \setminus \{0\}} \frac{( \sum_{\xi \in \Upsilon_E} \inf_{\tilde \varphi \in \tilde V \cap O_\xi} \norm{P_{O_\xi}(\varphi) - \tilde \varphi}_V^2)^\frac{1}{2}}{\norm{\varphi}_V}, \nonumber \end{equation} we have for any linear functional $f \in V_h^\prime$ with $ \dualpair{f}{\tilde \varphi} = 0 \ \forall {\tilde \varphi} \in \tilde V$ the estimate \begin{equation} \label{eq:abstract_localized_estimate} \norm{f}_{V_h^\prime} \leq c_{\mathrm{pu},\tilde V} \cdot \Big( \sum_{\xi \in \Upsilon_E} \norm{f}^2_{O_\xi^\prime} \Big)^{\frac{1}{2}}, \nonumber \end{equation} where $\norm{f}_{O_\xi^\prime}$ denotes the norm of the restriction of $f$ to $O_\xi$. 
\end{proposition} \begin{proof} Using the Cauchy-Schwarz inequality and $\dualpair{f}{\tilde \varphi} = 0 \ \forall {\tilde \varphi} \in \tilde V$, we have \begin{align*} \norm{f}_{V_h'} &= \sup_{\varphi \in V_h \setminus \{0\}} \frac{\sum_{\xi \in \Upsilon_E} \dualpair{f}{P_{O_\xi}(\varphi)}}{\norm{\varphi}_V} = \sup_{\varphi \in V_h \setminus \{0\}} \frac{\sum_{\xi \in \Upsilon_E} \inf_{\tilde \varphi \in \tilde V \cap O_\xi} \dualpair{f}{P_{O_\xi}(\varphi)- \tilde \varphi}}{\norm{\varphi}_V }\\ &\leq \sup_{\varphi \in V_h \setminus \{0\}} \frac{\sum_{\xi \in \Upsilon_E} \norm{f}_{O_\xi^\prime} \inf_{\tilde \varphi \in \tilde V \cap O_\xi} \norm{P_{O_\xi}(\varphi)- \tilde \varphi}_V}{\norm{\varphi}_V}\\ &\leq \sup_{\varphi \in V_h \setminus \{0\}} \frac{(\sum_{\xi \in \Upsilon_E} \norm{f}_{O_\xi^\prime}^2)^\frac{1}{2}( \sum_{\xi \in \Upsilon_E} \inf_{\tilde \varphi \in \tilde V \cap O_\xi} \norm{P_{O_\xi}(\varphi)- \tilde \varphi}_V^2)^\frac{1}{2}}{\norm{\varphi}_V}\\ &= c_{\mathrm{pu},\tilde V} \cdot \Big( \sum_{\xi \in \Upsilon_E} \norm{f}^2_{O_\xi^\prime} \Big)^{\frac{1}{2}}. \end{align*} \end{proof} A stability constant very similar to $c_{\mathrm{pu},\tilde V}$ appears in the analysis of overlapping domain decomposition methods (e.g. \cite[Assumption 2.2]{toselli2005domain}, \cite{Spillane2013}) and in localization of error estimators on stars (e.g. \cite{CDN2012}). \begin{proposition} \label{thm:efficiency} With the assumptions in Proposition \ref{thm:a_posteriori}, let $\dot{\bigcup}_{j=1}^J \Upsilon_{E,j} = \Upsilon_E$ be a partition of $\Upsilon_E$ such that \begin{equation*} \forall 1\leq j \leq J\ \forall \xi_1 \neq \xi_2 \in \Upsilon_{E,j}:\ O_{\xi_1} \perp O_{\xi_2}. \end{equation*} Then we have \begin{equation} \label{eq:abstract_efficiency_estimate} \Big( \sum_{\xi \in \Upsilon_E} \norm{f}^2_{O_\xi^\prime} \Big)^{\frac{1}{2}} \leq \sqrt{J} \norm{f}_{V_h^\prime}, \nonumber \end{equation} \end{proposition} \begin{proof} Let $O_{\xi_1} \perp O_{\xi_2}$ be some subspaces of $V_h$, and let $f$ be a continuous linear functional on $O_{\xi_1} \oplus O_{\xi_2}$. If $v_{f,1} \in O_{\xi_1}$ and $v_{f,2} \in O_{\xi_2}$ are the Riesz representatives of the restrictions of $f$ to $O_{\xi_1}$ and $O_{\xi_2}$, then due to the orthogonality of $O_{\xi_1}$ and $O_{\xi_2}$, $v_{f,1} + v_{f,2}$ is the Riesz representative of $f$ on $O_{\xi_1} \oplus O_{\xi_2}$. Thus, \begin{align*} \norm{f}_{(O_{\xi_1} \oplus O_{\xi_2})^\prime}^2 &= \norm{v_{f,1} + v_{f,2}}_{V_h}^2 \\ &= \norm{v_{f,1}}_{V_h}^2 + \norm{v_{f,2}}_{V_h}^2 = \norm{f}_{O_{\xi_1}^\prime}^2 + \norm{f}_{O_{\xi_2}^\prime}^2, \end{align*} where we have used the orthogonality of the spaces again. The same is true for a larger orthogonal sum of spaces. We therefore obtain: \begin{align*} \sum_{\xi \in \Upsilon_E} \norm{f}_{O_\xi^\prime}^2 &= \sum_{j=1}^{J} \sum_{\xi \in \Upsilon_{E,j}} \norm{f}_{O_\xi^\prime}^2 \\ &= \sum_{j=1}^{J} \norm{f}_{(\bigoplus_{\xi \in \Upsilon_{E,j}}O_\xi)^\prime}^2 \\ &\leq J \norm{f}_{V_h^\prime}^2. \end{align*} \end{proof} When grouping the spaces $O_\xi$ so that in each group, all spaces are orthogonal to each other, $J$ is the number of groups needed. 
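To make the abstract estimates above concrete in an implementation: if each subspace $O_\xi$ is spanned by a subset of the finite element basis, the local dual norms $\norm{f}_{O_\xi^\prime}$ are obtained from small local Riesz problems with the $V$-inner-product (Gram) matrix. The following Python/NumPy sketch illustrates this computation; it is only an illustration under the assumption that each $O_\xi$ corresponds to an index set of degrees of freedom, and the function names are ours, not taken from the accompanying pyMOR code.
\begin{verbatim}
import numpy as np

def local_dual_norms(f_vec, gram, local_dofs):
    """Dual norms ||f||_{O_xi'} for subspaces O_xi spanned by subsets
    of the finite element basis.

    f_vec      -- coefficient vector of the functional f w.r.t. the FE basis
    gram       -- V-inner-product (Gram) matrix of V_h (scipy.sparse matrix)
    local_dofs -- list of index arrays, one per subspace O_xi
    """
    norms = []
    for dofs in local_dofs:
        K_loc = gram.tocsr()[dofs, :][:, dofs].toarray()  # local inner product matrix
        f_loc = np.asarray(f_vec)[dofs]
        riesz = np.linalg.solve(K_loc, f_loc)             # local Riesz representative
        norms.append(np.sqrt(f_loc @ riesz))              # ||f||_{O_xi'} = (f^T K^{-1} f)^(1/2)
    return np.array(norms)

def localized_upper_bound(f_vec, gram, local_dofs, c_pu):
    """Right hand side of the localized estimate: c_pu * (sum_xi ||f||_{O_xi'}^2)^(1/2)."""
    eta = local_dual_norms(f_vec, gram, local_dofs)
    return c_pu * np.sqrt(np.sum(eta ** 2))
\end{verbatim}
Provided $f$ vanishes on $\tilde V$, the returned value bounds $\norm{f}_{V_h^\prime}$ from above and, by Proposition \ref{thm:efficiency}, overestimates it by at most the factor $\sqrt{J}\, c_{\mathrm{pu},\tilde V}$.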
Applying both estimates to the residual, we obtain an efficient, localized error estimator: \begin{corollary}\label{thm:abstract_error_estimate} The error estimator $\Delta_{loc}(\widetilde{u}_\mu)$ defined as \begin{equation} \Delta_{loc}(\widetilde{u}_\mu) := \frac{1}{\alpha_\mu} c_{\mathrm{pu},\tilde V} \big(\sum_{\xi \in \Upsilon_E} \norm{R_\mu(\widetilde{u}_\mu)}^2_{O_\xi^\prime} \big)^\frac{1}{2} \end{equation} is robust and efficient: \begin{equation} \label{eq:efficiency_estimate} \norm{ u_\mu - \widetilde{u}_\mu }_V \leq \Delta_{loc}(\widetilde{u}_\mu) \leq \frac{\gamma_\mu \sqrt{J} c_{\mathrm{pu},\tilde V}}{\alpha_\mu} \norm{ u_\mu - \widetilde{u}_\mu }_V \nonumber \end{equation} \end{corollary} \begin{proof} Applying Propositions \ref{thm:a_posteriori} and \ref{thm:efficiency} to the error estimator \\ $\Delta(\widetilde u_\mu) = \frac{1}{\alpha_\mu} \norm{R_\mu(\widetilde{u}_\mu)}_{V_h'}$ yields, together with (\ref{eq:global_estimator}), the proposition. \end{proof} Online-offline decomposition of this error estimator can be done by applying the usual strategy for online-offline decomposition used with the standard RB error estimator (see e.g. \cite[Sec. 4.2.5]{HesthavenRozzaEtAl2016} or the numerically more stable approach \cite{BEOR14a}) to every dual norm in $\Delta_{loc}(\widetilde u_\mu)$. \subsection{Choosing Spaces} The error estimator defined in Corollary \ref{thm:abstract_error_estimate} works for any spaces $O_\xi$ and mappings $P_{O_\xi}$ fulfilling the assumptions in Proposition \ref{thm:a_posteriori}. However, in order to obtain good constants $c_{\mathrm{pu},\tilde V}$ and $\sqrt J$, both have to be chosen carefully. In addition, two more properties are needed for good performance of the implementation. First, the subspaces should be spanned by FE ansatz functions, allowing the residual to be easily evaluated on these spaces. Second, the inner product matrix on the subspaces should be sparse, as the inner product matrix has to be solved in the computation of the dual norms. We use an overlapping decomposition based on the non-overlapping domain decomposition introduced in \eqref{eq:dd}. \begin{definition}[Overlapping space decomposition] Let the index set $\Upsilon_E$ for the overlapping space decomposition be given by the vertices of the domain decomposition, i.e. \begin{equation} \Upsilon_E = \Upsilon_2. \end{equation} We then define the overlapping spaces $O_\xi$ supported on the overlapping domains $\Omega_\xi$ by: \begin{equation} O_\xi := \bigoplus \Big\{U_\zeta \ \Big| \ \zeta \subseteq \xi \Big\}, \qquad \Omega_\xi := \overset{\circ}{\overline{\bigcup_{i \in \xi} \Omega_i}} \qquad \xi \in \Upsilon_E. \label{eq:overlapping_space_def} \end{equation} \end{definition} Note that we have $O_\xi = \{\psi \in \mathcal{B}\ |\ \overset{\circ}{\operatorname{supp}}(\psi) \subseteq \Omega_\xi\} \subseteq H^1_0(\Omega_\xi)$. Contrary to $V_\xi$ or $U_\xi$, these spaces do not form a direct sum decomposition of $V_h$. We next state a first estimate on the partition of unity constant of Corollary (\ref{thm:abstract_error_estimate}) for this choice of partition, which does not take into account that the residual vanishes on the reduced space. The resulting estimate thus depends on $H^{-1}$, where $H$ is the minimum diameter of the subdomains of the macro partition. Typically the size of the macro partition is moderate such that $H^{-1}$ is small. 
However, in the following Proposition \ref{thm:pu_bound} we will show that the constant $c_{\mathrm{pu},\tilde V}$ can be actually bounded independent of $H$, when we choose a partition of unity that is contained in the reduced space $\tilde V$. \begin{proposition} \label{thm:pu_bound_0} Let $c_\mathrm{ovlp} := \max_{x \in \Omega} \#\{\xi \in \Upsilon_E \ | \ x \in \Omega_\xi\}$ be the maximum number of estimator domains $\Omega_\xi$ overlapping in any point $x$ of $\Omega$ and let $H_\xi:= \operatorname{diam} (\Omega_\xi)$, $\xi \in \Upsilon_E$ and $H := \min_{\xi \in \Upsilon_E} H_\xi$. Furthermore, assume that there exist partition of unity functions $p_\xi \in H^{1,\infty}(\Omega)$, $\xi \in \Upsilon_E$ and a linear interpolation operator $\mathcal{I}: V \longrightarrow V_h$ such that \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,label=(\roman*)] \item $\sum_{\xi \in \Upsilon_E} p_\xi(x) = 1$ for all $x \in \Omega$, \item $\max_{\xi \in \Upsilon_E} \norm{p_\xi}_\infty \leq 1$ and $\norm{\nabla p_\xi}_\infty \leq {c_{\mathrm{pu}}^\prime} H_\xi^{-1}$ for all $\xi \in \Upsilon_E$, \item $\mathcal{I}(\varphi) = \varphi$ for all $\varphi \in V_h$, \item $\mathcal{I}(p_\xi V_h) \subseteq O_\xi$ for all $\xi \in \Upsilon_E$, \item $\|\mathcal{I}(p_\xi v_h) - p_\xi v_h\|_V \leq c_I \|v_h\|_{\Omega_\xi,1}$ for all $\xi \in \Upsilon_E, v_h \in V_h$. \end{enumerate} Then we have: \begin{equation*} c_{\mathrm{pu},\tilde V} \leq \sqrt{4 + 2c_I^2 + 4\left({c_{\mathrm{pu}}^\prime} H^{-1}\right)^2}\cdot \sqrt{c_\mathrm{ovlp}}. \end{equation*} \end{proposition} \begin{proof} We compute the bound for $c_\mathrm{pu}$ using the partition of unity and the interpolation operator. To this end, let \begin{equation*} P_{O_\xi}(\varphi) := \mathcal{I}(p_\xi \varphi), \qquad \xi \in \Upsilon_E. \end{equation*} Due to (iv), these are linear mappings $V_h \longrightarrow O_\xi$, and using (i) and (iii) we obtain $\sum_{\xi \in \Upsilon_E} P_{O_\xi}(\varphi) = \mathcal{I}(\sum_{\xi \in \Upsilon_E} p_\xi \varphi) = \mathcal{I}(\varphi) = \varphi$ for all $\varphi \in V_h$. Thus, Corollary \ref{thm:abstract_error_estimate} applies with this specific choice of partition operators $P_{O_\xi}$. Now, using (ii) and (v) we have for any $\varphi \in V_h$ \begin{align*} \sum_{\xi \in \Upsilon_E} \norm{P_{O_\xi}(\varphi)}_V^2 &\leq 2 \sum_{\xi \in \Upsilon_E} \|\mathcal{I}(p_\xi\varphi) - p_\xi\varphi\|_V^2 + \norm{p_\xi \varphi}_V^2 \\ &\leq 2 \sum_{\xi \in \Upsilon_E} c_I^2 \|\varphi\|_{\Omega_\xi,1}^2 + \int_{\Omega_\xi} 2|\nabla p_\xi \varphi|^2(x) + 2|p_\xi \nabla \varphi|^2(x) + |p_\xi \varphi|^2(x) dx \\ &\leq 2 \sum_{\xi \in \Upsilon_E} c_I^2 \|\varphi\|_{\Omega_\xi,1}^2 + (1 + 2\left({c_{\mathrm{pu}}^\prime} H^{-1}\right)^2) |\varphi|_{\Omega_\xi, 0}^2 + 2|\varphi|_{\Omega_\xi, 1}^2 \\ &\leq (4 + 2c_I^2 + 4\left({c_{\mathrm{pu}}^\prime} H^{-1}\right)^2) c_\mathrm{ovlp} \norm{\varphi}_V^2 \end{align*} This gives us the estimate. \end{proof} \begin{remark} When the domain decomposition $\Omega_i$ is sufficiently regular (e.g. see the numerical examples below), partition of unity functions satisfying (i) and (ii) can easily be found. If $V_h \cup \{p_\xi\,|\, \xi \in \Upsilon_E\}$ consists of $p$-th order finite element basis functions for some fine triangulation of $\Omega$, Lagrange interpolation can be chosen as interpolation operator $\mathcal{I}$. 
In fact, using standard interpolation error estimates and inverse inequalities one sees that for each element $T$ of the fine triangulation with diameter $h$ one has: \begin{align*} \|\mathcal{I}(p_\xi v_h) - p_\xi v_h\|_{T,1} & \leq c h^p |p_\xi v_h|_{T, p+1} \\ & \leq c^\prime h^p \sum_{k=1}^p |v_h|_{T, k} \cdot |p_\xi|_{T, p+1-k, \infty} \\ & \leq c^{\prime\prime} h^p \sum_{k=1}^p h^{-(k-1)} |v_h|_{T,1} \cdot h^{-(p+1-k)} |p_\xi|_{T,0,\infty} \\ & \leq c^{\prime\prime} p |v_h|_{T,1}, \end{align*} where $c^{\prime\prime}$ is a constant bounded by the shape regularity of the fine triangulation. \end{remark} For the rectangular domain decomposition used in the numerical example below, the constant $J$ is $J = 2^d = 4$: it is possible to divide the overlapping domains into four classes, so that within each class, no domain overlaps with any other (cf.\ \cite[Sec. 5]{Chung2015}). Furthermore, the coercivity constant $\alpha_\mu$ and the stability constant $\gamma_\mu$, or estimates, are required. For the numerical example presented in Section \ref{sec:numerical_example}, those can be calculated analytically. In general this is not possible and the details of estimating them numerically are subject for further investigations. The numerical computation of a lower bound for the coercivity constant was subject of extensive research in the RB community (see e.g. \cite{HUYNH2007473,CHEN20081295})% , but these methods require the calculation of the coercivity constant at some parameter values and thus require the solution of a global, fine scale problem. To the authors' knowledge, there are no publications on localization of these methods so far. The upper bound on the constant $c_{\mathrm{pu},\tilde V}$ in Proposition \ref{thm:pu_bound_0} depends on the domain size $H$ approximately like $H^{-1}$. As the domain size is considered a constant in the context of ArbiLoMod, the error estimator is already considered efficient with this bound. In the next proposition, we however show that the constant can indeed be bounded independent of $H$, if we exploit that the residual vanishes on the reduced space $\tilde V$. \begin{proposition} \label{thm:pu_bound} Let $p_\xi$, $\xi \in \Upsilon_E$ be a partition of unity and $\mathcal{I}$ an interpolation operator satisfying the prerequisites of Proposition~\ref{thm:pu_bound_0}. Furthermore, assume $V = H^1_0(\Omega)$ and that $p_\xi \in \tilde{V} \cap O_\xi$ for $\xi \in \Upsilon_E^{\rm int} := \{\xi \in \Upsilon_E | \ \bar \Omega_\xi \cap \partial \Omega = \emptyset\}$, e.g. $p_\xi$ is chosen as a basis function of $\tilde{V}_\xi$ (see Subsections \ref{sec:decomposition} and \ref{sec:codim_2_spaces} above). Then the following estimate holds: \begin{equation*} c_{\mathrm{pu},\tilde V} \leq \sqrt{4 + 2c_I^2 + 4({c_{\mathrm{pu}}^\prime} c_{\mathrm pc})^2} \cdot \sqrt{c_\mathrm{ovlp}}, \end{equation*} with a Poincar$\rm \acute{e}$-inequality constant $c_{\mathrm pc}$ (see proof below) that does not depend on the fine or coarse mesh sizes ($h, H$). \end{proposition} \begin{proof} For arbitrary $\varphi \in V_h$ let $\bar \varphi_\xi := \frac{1}{|\Omega_\xi|}\int_{\Omega_\xi} \varphi$. We then have with $\Upsilon_E^{\rm ext}:= \Upsilon_E \setminus \Upsilon_E^{\rm int}$ \begin{eqnarray*} c_{\mathrm{pu},\tilde V} &=& \sup_{\varphi \in V_h \setminus \{0\}} \frac{( \sum_{\xi \in \Upsilon_E} \inf_{\tilde \varphi \in \tilde V \cap O_\xi} \norm{P_{O_\xi}(\varphi) - \tilde \varphi}_V^2)^\frac{1}{2}}{\norm{\varphi}_V} \\ &\leq& \sup_{\varphi \in V_h \setminus \{0\}} \!\!\! 
\frac{( \sum_{\xi \in \Upsilon_E^{\rm int}} \norm{P_{O_\xi}(\varphi) - \bar \varphi_\xi p_\xi}_V^2 + \sum_{\xi \in \Upsilon_E^{\rm ext}} \norm{P_{O_\xi}(\varphi)}_V^2 )^\frac{1}{2}}{\norm{\varphi}_V}, \end{eqnarray*} where we have used that by construction $\bar \varphi_\xi p_\xi \in { \tilde V \cap O_\xi}$ for all $\xi \in \Upsilon_E^{\rm int}$. For any $\varphi \in V_h$ and $\xi \in \Upsilon_E^{\rm int}$ we then have $\norm{P_{O_\xi}(\varphi) - \bar \varphi_\xi p_\xi}_V^2 \leq 2c_I^2 \|\varphi\|_{\Omega_\xi,1}^2 + 2 \norm{(\varphi - \bar \varphi_\xi) p_\xi}_V^2 $, where \begin{align*} \norm{(\varphi - \bar \varphi_\xi) p_\xi}_V^2 &\leq \int_{\Omega_\xi} 2|\nabla (\varphi - \bar \varphi_\xi) p_\xi |^2(x) + 2| (\varphi - \bar \varphi_\xi) \nabla p_\xi |^2(x) dx \\ &\hspace*{5em} + \|(\varphi - \bar \varphi_\xi) p_\xi\|^2_{L^2(\Omega_\xi)}. \end{align*} With a rescaled Poincar$\rm \acute{e}$-type inequality $$ \norm{\varphi - \bar \varphi_\xi}_{L^2(\Omega_\xi)} \leq c_{\mathrm pc} H_\xi \norm{\nabla \varphi}_{L^2(\Omega_\xi)}, $$ and $ \norm{\varphi - \bar \varphi_\xi}_{L^2(\Omega_\xi)} \leq \norm{\varphi}_{L^2(\Omega_\xi)}, $ we get \begin{align*} \int_{\Omega_\xi} 2|\nabla (\varphi - \bar{\varphi}_\xi) p_\xi |^2(x) &+ 2| (\varphi - \bar \varphi_\xi) \nabla p_\xi |^2(x) dx + \|(\varphi - \bar \varphi_\xi) p_\xi\|^2_{L^2(\Omega_\xi)}\\ & \qquad\qquad\leq (2 + 2({c_{\mathrm{pu}}^\prime} c_{\mathrm pc})^2 ) \norm{\nabla \varphi}^2_{L^2(\Omega_\xi)} + \|\varphi\|^2_{L^2(\Omega_\xi)}\\ \end{align*} In analogy we obtain for the boundary terms, i.e. $\xi \in \Upsilon_E^{\rm ext}$, the estimates $\norm{P_{O_\xi}(\varphi)}_V^2 \leq 2c_I^2 \|\varphi\|_{\Omega_\xi,1} + 2 \norm{\varphi p_\xi}_V^2 $, and \begin{align*} \norm{ \varphi p_\xi }_V^2 &\leq \int_{\Omega_\xi} 2|\nabla \varphi p_\xi|^2(x) + 2| \varphi \nabla p_\xi |^2(x) dx + \| \varphi p_\xi \|^2_{L^2(\Omega_\xi)} \\ & \leq (2 + 2({c_{\mathrm{pu}}^\prime} c_{\mathrm pc})^2) \norm{\nabla \varphi}^2_{L^2(\Omega_\xi)} + \|\varphi\|^2_{L^2(\Omega_\xi)}. \end{align*} using a rescaled Poincar$\rm \acute{e}$-type inequality which holds for $\xi \in \Upsilon_2^{\rm ext}$ as $\varphi \in V_h$ has zero boundary values, i.e. $$ \norm{\varphi}_{L^2(\Omega_\xi)} \leq c_{\mathrm pc} H_\xi \norm{\nabla \varphi}_{L^2(\Omega_\xi)}. $$ Summing up all contributions we then have \begin{align*} \sum_{\xi \in \Upsilon_E^{\rm int}} & \norm{P_{O_\xi}(\varphi) - \bar \varphi_\xi p_\xi}_V^2 + \sum_{\xi \in \Upsilon_E^{\rm ext}} \norm{P_{O_\xi}(\varphi)}_V^2 \\ &\leq \sum_{\xi \in \Upsilon} 2c_I^2\|\varphi\|_{\Omega_\xi,1}^2 + 2\left[ (2 + 2({c_{\mathrm{pu}}^\prime} c_{\mathrm pc})^2) \norm{\nabla \varphi}^2_{L^2(\Omega_\xi)} + \|\varphi\|^2_{L^2(\Omega_\xi)} \right]\\ & \leq (4 + 2c_I^2 + 4({c_{\mathrm{pu}}^\prime} c_{\mathrm pc})^2) c_\mathrm{ovlp} \norm{\varphi}^2_{V}. \end{align*} This gives us the estimate. \end{proof} Proposition \ref{thm:pu_bound} gives a bound on $c_{\mathrm{pu},\tilde V}$ that depends on the contrast of the underlying diffusion coefficient if $p_\xi \in \tilde V$, $\xi \in \Upsilon_E^{\rm int}$ is chosen as the MsFEM type hat functions as suggested in Section \ref{sec:decomposition} above. However, it is independent on the mesh sizes $h, H$. A crucial ingredient to obtain this bound is the fact that we included this macroscopic partition of unity in our reduced approximation space $\tilde V$. 
If, alternatively, we chose $p_\xi \in \tilde V$ to be the traditional Lagrange hat functions, the bound on $c_{\mathrm{pu},\tilde V}$ in Proposition \ref{thm:pu_bound} would be independent of the contrast. In fact, we might expect that $c_{\mathrm{pu},\tilde V}$ behaves much better than the upper bound due to the approximation properties of the reduced space. It would actually be possible to compute $c_{\mathrm{pu},\tilde V}$ for given $V_h, \tilde V$, which would, however, be computationally expensive and thus not of any use in practical applications. Proposition \ref{thm:pu_bound}, however, shows that the localized a posteriori error estimator in Corollary \ref{thm:abstract_error_estimate} in the context of ArbiLoMod is indeed robust and efficient, even with respect to $H \to 0$. Comparing with other localized RB and multiscale methods, one observes a difference in the scaling of the efficiency constants. While in our case $c_{\mathrm{pu},\tilde V}$ is independent of both $h$ and $H$, the a posteriori error estimator published for LRBMS has an $H/h$ dependency \cite[Theorem 4.6]{OS15} and in the certification framework for SCRBE, an $h^{-1/2}$ scaling appears \cite[Proposition 4.5]{Smetana2015a}. The error estimators published for GMsFEM in \cite{Chung2014b} also have no dependency on $H$ or $h$. However, they also rely on specific properties of the basis generation. Also in the analysis of the ``Discontinuous Galerkin Reduced Basis Element Method'' (DGRBE), Pacciarini et al.\ have a factor of $h^{-1/2}$ in the a priori analysis \cite{FrancescaAntonietti2015} and in the a posteriori error estimator \cite{PhdPacciarini}.
\subsection{Local Efficiency} So far, we have not used properties of the bilinear form other than coercivity and continuity. Assuming locality of the bilinear form as in \eqref{eq:heat_equation}, we get a local efficiency estimate and an improved global efficiency estimate. \begin{proposition} \label{thm:local_efficiency} Let the bilinear form $a$ be given by \eqref{eq:heat_equation}. Then we have the localized efficiency estimate \begin{equation} \norm{R_\mu(\widetilde{u}_\mu)}_{O_\xi'} \leq \gamma_\mu |u_\mu - \widetilde{u}_\mu|_{\Omega_\xi, 1}. \end{equation} \end{proposition} \begin{proof} Using the error identity \begin{equation} a_\mu(u_\mu - \widetilde{u}_\mu, \varphi) = \dualpair{R_\mu(\widetilde{u}_\mu) }{ \varphi }, \nonumber \end{equation} we obtain for any $\varphi \in O_\xi$ \begin{align*} \dualpair{R_\mu(\widetilde{u}_\mu)}{\varphi} &= \int_\Omega \sigma_\mu(x)\nabla(u_\mu - \widetilde{u}_\mu)(x) \nabla \varphi(x) dx \\ &= \int_{\Omega_\xi} \sigma_\mu(x)\nabla(u_\mu - \widetilde{u}_\mu)(x) \nabla \varphi(x) dx \\ &\leq \gamma_\mu |u_\mu - \widetilde{u}_\mu|_{\Omega_\xi,1} \norm{\varphi}_V, \end{align*} from which the statement follows. \end{proof} \begin{remark} Under the assumptions of Proposition \ref{thm:local_efficiency}, it is easy to see that we have the improved efficiency estimate \begin{equation} \Delta_{loc}(\widetilde{u}_\mu) \leq \frac{\gamma_\mu \sqrt{c_\mathrm{ovlp}} c_{\mathrm{pu},\tilde V}}{\alpha_\mu} \norm{ u_\mu - \widetilde{u}_\mu }_V. \nonumber \end{equation} In many cases, a better constant can be found. Finite Element ansatz functions are usually not orthogonal if they share support. So if $c_\mathrm{ovlp}$ spaces have support in one point in space, they have to be placed in different groups when designing a partition for Proposition \ref{thm:efficiency}, so $c_\mathrm{ovlp} \leq J$ (cf.\ \cite[p. 67]{toselli2005domain}).
\end{remark} \subsection{Relative Error Bounds} From the error estimators for the absolute error, we can construct error estimators for the relative error. Estimates for the relative error are given in \cite[Proposition 4.4]{HesthavenRozzaEtAl2016}, but the estimates used here are slightly sharper. \begin{proposition} Assuming $\norm{\widetilde u_\mu}_V > \Delta(\widetilde u_\mu)$ and $\norm{\widetilde u_\mu}_V > \Delta_{loc}(\widetilde u_\mu)$, the error estimators defined by \begin{align*} \Delta^{rel}(\widetilde u_\mu) &:= \frac{\Delta(\widetilde u_\mu)}{\norm{\widetilde u_\mu}_V - \Delta(\widetilde u_\mu)} \\ \Delta^{rel}_{loc}(\widetilde u_\mu) &:= \frac{\Delta_{loc}(\widetilde u_\mu)}{\norm{\widetilde u_\mu}_V - \Delta_{loc}(\widetilde u_\mu)} \end{align*} are robust and efficient: \begin{eqnarray*} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{u_\mu}_V} &\leq \Delta^{rel}(\widetilde u_\mu) &\leq \left( 1 + 2 \Delta^{rel} (\widetilde u_\mu)\right) \frac{\gamma_\mu}{\alpha_\mu} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{u_\mu}_V} \\ \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{u_\mu}_V} &\leq \Delta^{rel}_{loc}(\widetilde u_\mu) &\leq \left( 1 + 2 \Delta^{rel}_{loc}(\widetilde u_\mu) \right) \frac{\gamma_\mu \sqrt{J} c_{\mathrm{pu},\tilde V}}{\alpha_\mu} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{u_\mu}_V} \end{eqnarray*} \end{proposition} \begin{proof} Realizing that $\left(\norm{\widetilde u_\mu}_V - \Delta(\widetilde u_\mu) \right)\leq \norm{u_\mu}_V$, it is easy to see that \begin{equation} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{u_\mu}_V} \leq \frac{\Delta(\widetilde u_\mu)}{\norm{u_\mu}_V} \leq \frac{\Delta(\widetilde u_\mu)}{\norm{\widetilde u_\mu}_V - \Delta(\widetilde u_\mu)}, \end{equation} which is the first inequality. Using $$ \norm{\widetilde u_\mu}_V + \Delta(\widetilde u_\mu) = \left(\norm{\widetilde u_\mu}_V - \Delta(\widetilde u_\mu) \right) \left( 1 + 2 \Delta^{rel}(\widetilde u_\mu) \right) $$ the second inequality can be shown: \begin{align*} \Delta^{rel}(\widetilde u_\mu) = \frac{\Delta(\widetilde u_\mu)}{\norm{\widetilde u_\mu}_V - \Delta(\widetilde u_\mu)} & \leq \frac{\gamma_\mu}{\alpha_\mu} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{\widetilde u_\mu}_V - \Delta(\widetilde u_\mu)}\\ &= \frac{\gamma_\mu}{\alpha_\mu} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{\widetilde u_\mu}_V + \Delta(\widetilde u_\mu)} \left( 1 + 2 \Delta^{rel}(\widetilde u_\mu) \right)\\ &\leq \frac{\gamma_\mu}{\alpha_\mu} \frac{\norm{u_\mu - \widetilde u_\mu}_V}{\norm{u_\mu}_V} \left( 1 + 2 \Delta^{rel}(\widetilde u_\mu) \right). \end{align*} The inequalities for $\Delta^{rel}_{loc}$ can be shown accordingly. \end{proof} Reviewing the five desired properties of an a posteriori error estimator at the beginning of this section, we see that the presented error estimator is robust and efficient (1) and is online-offline decomposable (2). Parallelization can be done over the spaces $O_\xi$. Only online data has to be transferred, so there is little communication (3). The online-offline decomposition only has to be repeated for a space $O_\xi$, if a new basis function with support in $\Omega_\xi$ was added. So reuse in unchanged regions is possible (4). How the adaptive enrichment is steered (5) will be described in the following section.
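To summarize how these quantities are combined in practice, the following small Python sketch assembles the localized absolute and relative estimators from precomputed local residual dual norms (a hedged illustration with our own function and variable names; the local dual norms are assumed to be available, e.g.\ from the online-offline decomposition described above).
\begin{verbatim}
import numpy as np

def localized_estimators(local_residual_norms, alpha_lb, c_pu, norm_u_rb):
    """Assemble Delta_loc and Delta_loc^rel from the local dual norms
    ||R_mu(u~_mu)||_{O_xi'}.

    local_residual_norms -- one entry per overlapping space O_xi
    alpha_lb             -- lower bound for the coercivity constant alpha_mu
    c_pu                 -- partition of unity stability constant c_{pu,V~}
    norm_u_rb            -- V-norm of the reduced solution u~_mu
    """
    eta = np.asarray(local_residual_norms)
    delta_loc = (c_pu / alpha_lb) * np.sqrt(np.sum(eta ** 2))
    if norm_u_rb <= delta_loc:
        raise ValueError("relative estimator requires ||u~_mu||_V > Delta_loc")
    delta_rel_loc = delta_loc / (norm_u_rb - delta_loc)
    return delta_loc, delta_rel_loc
\end{verbatim}
Note that only scalar quantities enter this final step, which is consistent with requirement (3): in a parallel implementation only the local dual norms have to be communicated.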
\section{Conclusion} We introduced ArbiLoMod, a simulation technique aiming at problems with arbitrary local modifications and highly parallel computing environments. It is based on the Reduced Basis method, inheriting its advantages, but localizing the basis construction and error estimation. It consists of basis generation algorithms, a localized a posteriori error estimator controlling the reduction error, and a localized space enrichment procedure, improving the reduced local spaces if necessary. The initial basis generation algorithms require no communication of unreduced quantities in a parallel implementation. For the basis enrichment procedure, a strong compression of the communicated quantities is possible. We discussed the possibility of using ArbiLoMod to implement a scalable parallel code by reducing communication costs due to the local structure and the local Reduced Basis strategy. ArbiLoMod was demonstrated on a coercive example in two dimensions, featuring high contrast, fine details and channels. Even though small local modifications to this example lead to strong global changes in the solution, ArbiLoMod was able to approximate the new solutions after these geometry changes with a small fraction of the effort needed for the initial geometry. BSD-licensed source code is provided along with this publication, so anyone can easily reproduce all results presented here. \section{Local Basis Generation} \label{sec:training_and_greedy} For each local subspace $V_\xi$, an initial reduced local subspace $\wt V_\xi \subseteq V_\xi$ is generated, using only local information from an environment around the support of the elements in $U_\xi$. The strategy used to construct these reduced local subspaces depends on the type of the space: whether $\xi$ belongs to $\Upsilon_0$, $\Upsilon_1$ or $\Upsilon_2$. The three strategies are given in the following. The local basis generation algorithms can be run in parallel, completely independently of each other. See Section \ref{sec:runtime_and_communication} for further discussion of the potential parallelization. As the algorithms only use local information, their results do not change when the problem definition is changed outside of the area they took into account. So there is no need to rerun the algorithms in this case. Our numerical results indicate that the spaces obtained by local trainings and greedys have good approximation properties (see Section \ref{sec:numerical_experiments}). The quality of the obtained solution will be guaranteed by the a posteriori error estimator presented in Section \ref{sec:a_posteriori} below. \subsection{Basis Construction for Reduced Vertex Spaces} \label{sec:codim_2_spaces} The spaces $V_\xi$ for $\xi \in \Upsilon_2$ are spanned by only one function (see Figure~\ref{fig:extended_base_2} for an example) and are thus one-dimensional. The reduced spaces are therefore chosen to coincide with the original space, i.e. $ \widetilde V_\xi := V_\xi, \forall \xi \in \Upsilon_2.
$ \subsection{Local Training for Basis Construction of Reduced Face Spaces} \label{sec:codim_n_training} To generate an initial reduced local subspace for $V_\xi, \ \xi \in \Upsilon_1$ we use a local training procedure. Its four main steps are to \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item solve the equation on a small domain around the space in question with zero boundary values for all parameters in the training set $\Xi$, \item solve the homogeneous equation repeatedly on a small domain around the space in question with random boundary values for all parameters in $\Xi$, \item apply the space decomposition to all obtained local solutions to obtain the part belonging to the space in question and \item use a greedy procedure to create a space approximating this set. \end{enumerate} The complete algorithm is given in Algorithm \ref{algo:training} and explained below. The training is inspired by the ``Empirical Port Reduction'' introduced in Eftang et al.\ \cite{Eftang2013} but differs in some key points. The main differences are: (1)~Within~\cite{Eftang2013}, the trace of solutions at the interface to be trained is used. This leads to the requirement that interfaces between domains do not intersect. In ArbiLoMod, a space decomposition is used instead. This allows ports to intersect, which in turn allows the decomposition of space into domains. (2)~The ``Empirical Port Reduction'' trains with a pair of domains. We use an environment of the interface in question, which contains six domains in the 2D case. In 3D, it contains 18 domains. (3)~PR-SCRBE aims at providing a library of domains which can be connected at their interfaces. The reduced interface spaces are used in different domain configurations and have to be valid in all of them. Within the context of ArbiLoMod, no database of domains is created and the interface space is constructed only for the configuration at hand, which simplifies the procedure. (4)~The random boundary values used in \cite{Eftang2013} are generalized Legendre polynomials with random coefficients. In ArbiLoMod, the finite element basis functions with random coefficients are used, which simplifies the construction greatly, especially when there is complex structure within the interface. \subsubsection*{The Training Space} \input{figure_training_spaces} To train a basis for $\widetilde V_\xi$, $\xi \in \Upsilon_1$, we start from the subspace $U_\xi$. For each subspace $U_\xi$, we define a corresponding training space $\training{U_\xi}$ on an environment associated with the face $\xi$. The following definition is geometric and tailored to a domain decomposition in rectangular domains. More complex domain decompositions would need a more complex definition here. We define the neighborhood \begin{equation} \mathcal{N}_\xi := \Big\{i \in \{1, \dots, N_D\} \ \Big| \ \overline{\Omega}_i \cap \Bigl( \bigcap_{k \in \xi} \overline{\Omega}_k \Bigr) \ne \emptyset \Big\} \end{equation} and with that the training spaces \begin{equation} \training{U_\xi} := \bigoplus \Big\{U_\zeta \ \Big| \ \zeta \subseteq \mathcal{N}_\xi \Big\}. \end{equation} The training space is coupled to the rest of the system via its coupling space \begin{equation} \coupling{\training{U_\xi}} := \bigoplus \Big\{U_\zeta \ \Big| \ \zeta \cap \mathcal{N}_\xi \ne \emptyset , \zeta \nsubseteq \mathcal{N}_\xi \Big\}. \end{equation} A sketch of the degrees of freedom associated with the respective spaces is given in Figure~\ref{fig:training_space}.
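Since the definitions of $\mathcal{N}_\xi$, $\training{U_\xi}$ and its coupling space are purely combinatorial once the domains are known, they can be illustrated by a few lines of code. The following Python sketch does this for a rectangular decomposition whose domains are unit grid cells; it is an illustration with our own naming, not taken from the accompanying implementation.
\begin{verbatim}
from functools import reduce

def closed_cell(c):
    # closed cell [i, i+1] x [j, j+1] for the domain with grid index c = (i, j)
    i, j = c
    return (i, i + 1, j, j + 1)

def box_intersection(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), min(a[3], b[3]))

def boxes_touch(a, b):
    # closed boxes intersect (touching at an edge or corner counts)
    return a[0] <= b[1] and b[0] <= a[1] and a[2] <= b[3] and b[2] <= a[3]

def neighborhood(xi, all_cells):
    """N_xi: all domains whose closure intersects the closed region shared
    by the domains in xi (xi is a tuple of grid cells)."""
    shared = reduce(box_intersection, (closed_cell(c) for c in xi))
    return {c for c in all_cells if boxes_touch(closed_cell(c), shared)}

def training_and_coupling_indices(xi, all_cells, upsilon):
    """Index sets zeta (tuples of grid cells) of the spaces U_zeta spanning
    the training space (zeta contained in N_xi) and its coupling space
    (zeta meets N_xi but is not contained in it)."""
    n_xi = neighborhood(xi, all_cells)
    training = [z for z in upsilon if set(z) <= n_xi]
    coupling = [z for z in upsilon if set(z) & n_xi and not set(z) <= n_xi]
    return training, coupling

cells = {(i, j) for i in range(8) for j in range(8)}  # 8 x 8 decomposition
xi = ((3, 3), (4, 3))                                 # a face between two domains
assert len(neighborhood(xi, cells)) == 6              # six domains in the 2D case
\end{verbatim}
The final assertion reflects the statement above that the environment of an interface contains six domains in the 2D case.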
These definitions are also suitable in the 3D case. We have fixed the size of the neighborhood to one domain from the interface in question in each direction. This facilitates the setup of local problems and the handling of local changes: After a local change, the affected domains are determined. Afterwards, all trainings have to be redone for those spaces which contain an affected domain in their training domain. While a larger or smaller training domain might be desirable in some cases (see \cite{henning_oversampling}), it is not necessary: As missing global information is added in the enrichment step, ArbiLoMod always converges to the desired accuracy, even if the training domain is not of optimal size. So the advantages of fixing the size of the training domain to one domain outweigh its drawbacks. The reduced basis must be rich enough to handle two types of right hand sides up to a given accuracy $\varepsilon_\mathrm{train}$: \begin{inparaenum}[(a)] \item source terms and boundary conditions, and \item arbitrary values on the coupling interface, \end{inparaenum} both in the whole parameter space $\mathcal{P}$. We define an extended parameter space $\mathcal{P} \times \coupling{\training{U_\xi}}$. For this parameter space we construct a training space $\Xi \times G \subset \mathcal{P} \times \coupling{\training{U_\xi}}$, where $G$ denotes an appropriate sampling of $\coupling{\training{U_\xi}}$. We use the finite element basis ${\mathcal{B}}_{\coupling{\training{U_\xi}}}$ on the coupling space and generate $M$ random coefficient vectors $r_i$ of size $N_{\mathcal{B}} = \dim({\coupling{\training{U_\xi}}})$. With this, an individual coupling space function $\varphi_i \in \coupling{\training{U_\xi}}$ is constructed as \begin{equation} \varphi_i = \sum_{j=1}^{N_{\mathcal{B}}} r_{ij} \phi_j; \qquad\phi_j \in {\mathcal{B}}_{\coupling{\training{U_\xi}}}.\label{eq:3} \end{equation} For our numerical experiments in Section \ref{sec:numerical_experiments}, we use uniformly distributed random coefficients over the interval $[-1,1]$ and Lagrange basis functions. For each $\mu \in \Xi$ and each pair $(\mu,g_c) \in \Xi \times G$ we construct snapshots $u_f$ and $u_c$ as solutions for the right hand sides $f_\mu(\cdot)$ and $-a_{\mu}(g_c,\cdot)$, respectively, i.e. \begin{align} a_{\mu}(u_f,\phi) &= \dualpair{f_\mu}{\phi} \qquad \forall \phi \in \training{U_\xi},\\ a_{\mu}(u_c,\phi) & = - a_{\mu}(g_c,\phi) \quad \forall \phi \in \training{U_{\xi}}\,. \nonumber \end{align} Based on the set of snapshots, which we call $Z$, a reduced basis ${\mathcal{B}}$ is constructed using a greedy algorithm (cf.\ Alg.\ \ref{algo:snapshot_greedy}). In the numerical experiments, the $V$-norm and $V$-inner product are used. The complete generation of the reduced face spaces $\wt V_\xi, \ \xi \in \Upsilon_1$ with basis ${\widetilde{\mathcal{B}}}_{\wt V_\xi}$ is summarized in Alg.\ \ref{algo:training}.
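For illustration, the SnapshotGreedy of Algorithm \ref{algo:snapshot_greedy} can be written down directly in NumPy, operating on snapshot coefficient vectors together with the Gram matrix of the $V$-inner product. This is a plain re-implementation for the reader's convenience, not the pyMOR code used for the experiments below.
\begin{verbatim}
import numpy as np

def snapshot_greedy(Z, gram, eps_train):
    """Greedy orthonormalization of the snapshot set Z with respect to the
    V-inner product (a, b)_V = a @ gram @ b (gram a NumPy array),
    cf. the SnapshotGreedy algorithm above.
    """
    def norm(v):
        return np.sqrt(v @ gram @ v)

    Z = [np.asarray(z, dtype=float) for z in Z]
    basis = []
    while Z and max(norm(z) for z in Z) > eps_train:
        z_hat = max(Z, key=norm)
        z_hat = z_hat / norm(z_hat)
        # remove the component in direction z_hat from all snapshots
        Z = [z - (z @ gram @ z_hat) * z_hat for z in Z]
        basis.append(z_hat)
    return basis
\end{verbatim}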
\begin{algorithm2e} \DontPrintSemicolo \SetAlgoVline \SetKwFunction{SnapshotGreedy}{SnapshotGreedy \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Fn{\SnapshotGreedy{$Z,\varepsilon_\mathrm{train}$}}{ \Input{set of elements to approximate $Z$,\\ training tolerance $\varepsilon_\mathrm{train}$} \Output{basis of approximation space ${\mathcal{B}}$} ${\mathcal{B}} \leftarrow \emptyset$ \; \While{$\max_{z \in Z} \norm{z} > \varepsilon_\mathrm{train}$}{ $\hat z \leftarrow \operatornamewithlimits{arg\,max}_{z \in Z} \norm{z}$\; $\hat z \leftarrow \frac{\hat z}{\norm{\hat z}}$\; $Z \leftarrow \left\{z - (z,\hat z)\hat z \ | \ z \in Z \right\}$\; ${\mathcal{B}} \leftarrow {\mathcal{B}} \cup \{\hat z\}$ \; } \Return ${\mathcal{B}}$\; } \caption{SnapshotGreedy} \label{algo:snapshot_greedy} \end{algorithm2e} \begin{algorithm2e} \DontPrintSemicolo \SetAlgoVline \SetKwFunction{Training}{Training \SetKwFunction{RandomSampling}{RandomSampling \SetKwFunction{SnapshotGreedy}{SnapshotGreedy \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Fn{\Training{$\xi, M, \varepsilon_\mathrm{train}$}}{ \Input{space identifier $\xi$,\\ number of random samples $M$,\\ training tolerance $\varepsilon_\mathrm{train}$} \Output{reduced local subspace $\wt V_\xi$} $G \leftarrow$ \RandomSampling{$\xi$, M}\; $Z \leftarrow \emptyset$\; \ForEach{$\mu \in \Xi$}{ \Find{$u_f \in \training{U_\xi}$ \st}{ $ \qquad a_{\mu}(u_f,\phi) = f_\mu(\phi) \qquad \forall \phi \in \training{U_\xi} $} $Z \leftarrow Z \cup P_{V_\xi}(u_f)$\; \ForEach{$g_c \in G$}{ \Find{$u_c \in \training{U_{\xi}}$ \st}{ $ \qquad a_{\mu}(u_c + g_c,\phi) = 0 \qquad \forall \phi \in \training{U_{\xi}} $} $Z \leftarrow Z \cup P_{V_\xi}(u_c)$\; } } ${\widetilde{\mathcal{B}}}_{V_\xi} \leftarrow$ \SnapshotGreedy{$Z, \varepsilon_\mathrm{train}$}\; \Return $\operatorname{span}({\widetilde{\mathcal{B}}}_{V_{\xi}})$\; } \caption{Training to construct reduced face spaces $\wt V_\xi, \ \xi \in \Upsilon_1$} \label{algo:training} \end{algorithm2e} \subsection{Basis Construction for Reduced Cell Spaces Using Local Greedy} \label{sec:codim_0_greedy} For each cell space $V_\xi, \ \xi \in \Upsilon_0$ we create a reduced space $\wt V _\xi$. These spaces should be able to approximate the solution in the associated part of the space decomposition for any variation of functions from reduced vertex or face spaces that are coupled with it. We define the reduced coupling space $\rcoupling{V_{\xi}}$ and its basis ${\widetilde{\mathcal{B}}}_{\rcoupling{V_\xi}}$. \begin{equation} \Upsilon^C_\xi := \Big\{\zeta \in \Upsilon \ | \ \zeta \cap \xi \ne \emptyset, \zeta \nsubseteq \xi \Big\} \end{equation} \begin{equation} \rcoupling{V_{\xi}} := \bigoplus_{\zeta \in \Upsilon^C_\xi} \wt V_\zeta \qquad \qquad {\widetilde{\mathcal{B}}}_{\rcoupling{V_\xi}} := \bigcup_{\zeta \in \Upsilon^C_\xi} {\widetilde{\mathcal{B}}}_{\wt V_\zeta} \end{equation} We introduce an extended training set: $\Xi \times \{1, \dots, N_{\widetilde{\mathcal{B}}}+1\}$, $N_{{\widetilde{\mathcal{B}}}} := \dim(\rcoupling{V_\xi})$. Given a pair $(\mu, j) \in \Xi \times \{1, \dots, N_{\widetilde{\mathcal{B}}}+1\}$, we define the associated right hand side as \begin{equation} g_{\mu,j}(\phi) := \begin{cases} - a_\mu(\psi_j, \phi) & \qquad \text{if } j \le N_{\widetilde{\mathcal{B}}}\\ \dualpair{f_\mu}{\phi} & \qquad \text{if } j = N_{\widetilde{\mathcal{B}}}+1\,, \end{cases} \end{equation} where $\psi_j$ denotes the $j$-th basis function of ${\widetilde{\mathcal{B}}}_{\rcoupling{V_\xi}}$. 
We then construct the reduced cell space $\widetilde V_\xi$ as the classical reduced basis space (cf.\ Alg.\ \ref{algo:local_greedy}, LocalGreedy) with respect to the following parameterized local problem: Given a pair $(\mu, j) \in \Xi \times \{1, \dots, N_{\widetilde{\mathcal{B}}}+1\}$, find $u_{\mu,j} \in V_\xi$ such that \begin{equation} a_\mu(u_{\mu,j},\phi) = g_{\mu,j}(\phi) \qquad \forall \phi \in V_\xi. \end{equation} The corresponding reduced solutions are hence defined as: Find $\widetilde u_{\mu,j} \in \wt V_\xi$ such that: \begin{equation} a_\mu(\widetilde u_{\mu,j},\phi) = g_{\mu,j}(\phi) \qquad \forall \phi \in \wt V_\xi \end{equation} Both problems have unique solutions due to the coercivity and continuity of $a_\mu$. For the LocalGreedy (Algorithm \ref{algo:local_greedy}) we use the standard Reduced Basis residual error estimator, i.e. \begin{equation} \norm{u_{\mu,j} - \widetilde u_{\mu,j}}_{V_\xi} \leq \Delta_{cell}(\widetilde u_{\mu,j}) := \frac{1}{\alpha_{LB}(\mu)} \norm{ R_{\mu,j}(\widetilde u_{\mu,j}) }_{{V'_\xi}}\,, \end{equation} with the local residual \begin{eqnarray} R_{\mu, j}: V_\xi &\rightarrow& V'_\xi\\ \varphi &\mapsto& g_{\mu, j}(\cdot) - a_\mu(\varphi, \cdot) \nonumber \end{eqnarray} and a lower bound for the coercivity constant $\alpha_{LB}$. The idea of using a local greedy to generate a local space for all possible boundary values can also be found in \cite{phdiapichino,FrancescaAntonietti2015}. \begin{algorithm2e}[t] \DontPrintSemicolo \SetAlgoVline \SetKwFunction{LocalGreedy}{LocalGreedy \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Fn{\LocalGreedy{$\xi, \varepsilon_\mathrm{greedy}$}}{ \Input{space identifier $\xi$,\\ greedy tolerance $\varepsilon_\mathrm{greedy}$} \Output{reduced local subspace $\wt V_\xi$} ${\widetilde{\mathcal{B}}}_{\wt V_\xi} \leftarrow \emptyset$ \; \While{ $\max\limits_{\substack{ \mathllap{\mu} \in \mathrlap{\Xi} \\ \mathllap{j} \in \mathrlap{\{1, \dots, N_{\widetilde{\mathcal{B}}} +1 \}}} }\ \Delta_{cell}(\widetilde{u}_{\mu,j}) > \varepsilon_\mathrm{greedy}$ }% { $\hat \mu,\hat{\jmath} \leftarrow \operatornamewithlimits{arg\,max}\limits_{\substack{ \mathllap{\mu} \in \mathrlap{\Xi} \\ \mathllap{j} \in \mathrlap{\{1, \dots, N_{\widetilde{\mathcal{B}}} +1 \}}}}% \ \Delta_{cell}(\widetilde{u}_{\mu,j})$\; \Find{$u_{\hat \mu, \hat{\jmath}} \in V_\xi$ \st}{ $ \qquad a_{\hat \mu}(u_{\hat \mu, \hat{\jmath}},\phi) = g_{\hat \mu, \hat{\jmath}}(\phi) \qquad \forall \phi \in V_\xi $} $u_{\hat \mu, \hat{\jmath}} \leftarrow u_{\hat \mu, \hat{\jmath}} - \sum\limits_{\mathclap{\phi\in{\widetilde{\mathcal{B}}}_{\wt V_\xi}}} {( \phi, u_{\hat \mu, \hat{\jmath}})_V} \, \phi$\; ${\widetilde{\mathcal{B}}}_{\wt V_\xi} \leftarrow {\widetilde{\mathcal{B}}}_{\wt V_\xi} \cup \left\{\norm{u_{\hat \mu, \hat{\jmath}}}_V^{-1}u_{\hat \mu, \hat{\jmath}}\right\}$\; } \Return $\operatorname{span}({\widetilde{\mathcal{B}}}_{\wt V_\xi})$\; } \caption{LocalGreedy to construct local cell spaces $\wt V_\xi, \ \xi \in \Upsilon_0$} \label{algo:local_greedy} \end{algorithm2e} \section{Enrichment Procedure} \label{sec:enrichment} The first ArbiLoMod solution is obtained using the initial reduced local subspaces generated using the local training and greedy procedures described in Section \ref{sec:training_and_greedy}. If this solution is not good enough according to the a posteriori error estimator, the solution is improved by enriching the reduced local subspaces and then solving the global reduced problem again. 
The full procedure is given in Algorithm \ref{algo:online_enrichment} and described in the following. For the enrichment, we use the overlapping local subspaces introduced in \eqref{eq:overlapping_space_def}, which are also used for the a posteriori error estimator. Local problems are solved in the overlapping spaces. The original bilinear form is used, but as a right hand side the residual of the last reduced solution is employed. The local spaces and the parameter values for which the enrichment is performed are selected in a D\"orfler-like \cite{dorfler1996convergent} algorithm. The thus obtained local solutions $u_l$ do not fit into our space decomposition, as they lie in one of the overlapping spaces, not in one of the local subspaces used for the basis construction. Therefore, the $u_l$ are decomposed using the projection operators $P_{V_\xi}$ defined in Definition~\ref{def:local_projection_operators}. In the setting of our numerical example (Section~\ref{sec:numerical_example}), this decomposition yields at most 9 parts (one codim-2 part, four codim-1 parts and four codim-0 parts). Of these parts, the one worst approximated by the existing reduced local subspace is selected for enrichment. ``Worst approximated'' is here defined as having the largest part orthogonal to the existing reduced local subspace. We denote the part of $P_{V_\xi}(u_l)$ orthogonal to $\wt V_\xi$ w.r.t.\ the inner product of $V$ by $(P_{V_\xi}(u_l))^\perp$. To avoid communication, cell spaces $\wt V_\xi, \xi \in \Upsilon_0$ are not enriched at this point. Such an enrichment would require the communication of the added basis vector, which might be large. Instead, only the other spaces are enriched, and the cell spaces associated with $\Upsilon_0$ are regenerated using the greedy procedure from Section \ref{sec:codim_0_greedy}. For the other spaces, a strong compression of the basis vectors is possible (cf.\ Section \ref{sec:runtime_and_communication}). This selection of the local spaces can lead to one reduced local space being enriched several times in one iteration. Numerical experiments have shown that this leads to poorly conditioned systems, as the enrichment might try to introduce the same feature into a local basis twice. To prevent this, the enrichment algorithm enriches each reduced local subspace at most once per iteration. 
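The D\"orfler-like selection of parameter/domain pairs in Algorithm \ref{algo:online_enrichment} amounts to the following few lines, shown here as a sketch with our own names and under the assumption that all local residual dual norms for the current reduced solutions have already been evaluated.
\begin{verbatim}
def doerfler_selection(local_residuals, fraction):
    """Select (mu, xi) pairs for enrichment until the selected pairs account
    for the fraction d of the summed local residual dual norms.

    local_residuals -- dict mapping (mu, xi) to ||R_mu(u~_mu)||_{O_xi'}
    fraction        -- enrichment fraction d
    """
    total = sum(local_residuals.values())
    if total == 0.0:
        return []
    selected, acc = [], 0.0
    # pick the largest contributions first, as in the argmax loop of the algorithm
    for pair, value in sorted(local_residuals.items(),
                              key=lambda item: item[1], reverse=True):
        if acc / total >= fraction:
            break
        selected.append(pair)
        acc += value
    return selected
\end{verbatim}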
\begin{algorithm2e} \DontPrintSemicolo \SetAlgoVline \SetKwFunction{OnlineEnrichment}{OnlineEnrichment \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Fn{\OnlineEnrichment{$d, \mathrm{tol}$}}{ \Input{enrichment fraction $d$,\\ target error $\mathrm{tol}$} \While{ $\max\limits_{\mu \in \Xi} \ \Delta(\widetilde{u}_\mu) > \mathrm{tol}$ }% { $E \leftarrow \emptyset$\; \While{ $\left(\sum\limits_{(\mu,\xi) \in E} \norm{R_\mu(\widetilde{u}_\mu)}_{(O_\xi)'} \right) / \left(\sum\limits_{(\mu,\xi) \in (\Xi \times \Upsilon_E)} \norm{R_\mu(\widetilde{u}_\mu)}_{(O_\xi)'} \right)< d$ }% { $\hat \mu,\hat \xi \leftarrow \operatornamewithlimits{arg\,max}\limits_ {(\mu, \xi) \in \left(\Xi \times \Upsilon_E\right) \setminus E} \norm{R_\mu(\widetilde{u}_\mu)}_{(O_\xi)'}$\; $E \leftarrow E \cup (\hat \mu, \hat \xi)$\; } \tcc{$S$ is used for double enrichment protection.} $S \leftarrow \emptyset$\; \For{$(\mu,\xi) \in E$} { \Find{$u_l \in O_{\xi}$ \st}{ $a_{\mu}(u_l, \varphi) = \dualpair{R_\mu(\widetilde{u}_{\mu})}{\varphi} \qquad \forall \varphi \in O_{\xi}$} $\check\xi \leftarrow \operatornamewithlimits{arg\,max}\limits_{\xi \in \Upsilon \setminus \Upsilon_0}\norm{ (P_{V_\xi}(u_l))^\perp } _{V_\xi}$\; \If{$\check \xi \notin S$}{ $\wt V_{\check\xi} \leftarrow \wt V_{\check\xi} \oplus \operatorname{span}((P_{V_{\check\xi}}(u_l))^\perp)$\; $S \leftarrow S \cup \check\xi$\; } } run \texttt{LocalGreedys}\; recalculate reduced solutions\; } } \caption{Online Enrichment} \label{algo:online_enrichment} \end{algorithm2e} \section{Introduction} Finite Element based simulation is a standard tool in many CAD/CAE assisted workflows in engineering. Depending on the complexity of the design simulated, on the underlying partial differential equation, and on the desired fidelity of the approximation of the solution, performing a simulation may take hours, days or even weeks. ArbiLoMod aims at the acceleration of a very specific class of problems, namely the repetitive simulation of parameterized problems with fine microstructure without scale separation: Problems which exhibit a microstructure on a scale much smaller than the domain require a very fine mesh to resolve the geometry and thus take very long to compute. They often have much more degrees of freedom than necessary for the description of their physical behavior, thus model order reduction can succeed. For problems with scale separation, there are established methods to reduce the model such as MsFEM \cite{Hou97amultiscale}, HMM \cite{E02theheterogeneous}, VMM \cite{HUGHES19983}, or LOD \cite{Maalqvist2014}. However, when there is no scale separation, homogenization is not possible. ArbiLoMod is based on a localized RB approach which does not require scale separation. In addition, engineers often want to change the problem definition and resimulate several times in an iterative process. Their goal is not to come from a problem definition to an approximation of the solution, but to provide approximations to the solutions of a sequence of problems, where each problem was created by modifying the previous problem. We assume that changes might be arbitrary, i.e. non parametric, but are of local nature, so that the majority of the problem setting (e.g.\ large parts of the geometry) is unchanged between two simulation runs. ArbiLoMod exploits the similarities between subsequent problems by reusing local approximation spaces in regions of the computation domain unaffected by the change. 
Furthermore, for many design applications, in each iteration the structure under consideration has to be simulated for a multitude of model parameters to analyze its behavior. ArbiLoMod includes online-offline decomposition techniques from RB methods \cite{HesthavenRozzaEtAl2016,QuarteroniManzoniEtAl2016,Ha14}, using the regularity structure of the solution manifold \cite{Binev2011} for fast many-query simulation of the model. A particular example that we have in mind is the design of printed circuit boards (PCBs). The design of PCBs has all of the above-mentioned properties: PCBs are nowadays very complex, there is no scale separation, improvements are often obtained by local changes of the electronic components and conductive tracks, and when solved in the frequency domain, it is a parameterized problem with the frequency as a parameter. A possible change is depicted in Figure \ref{fig:boards}. The same applies to integrated circuit (IC) packages, which are similar in structure to PCBs. Other areas of application could be, e.g., resonance analysis in cars or trains, or electromagnetic filter design. \begin{figure} \centering \includegraphics[width=.9\textwidth,viewport=0 220 1476 470, clip=true]{pcb.png} \caption{DDR memory channel on a printed circuit board subject to local modification of conductive tracks.} \label{fig:boards} \end{figure} ArbiLoMod's goal is the reduction of overall simulation times. When measuring simulation times, we assume that the runtime on a single workstation is not the right quantity to look at. Instead, the runtime on any hardware easily available to the user is what matters, which particularly includes massive computing power in cluster and cloud environments, but usually not millions of cores as in supercomputers. As the user still has to pay per compute node in cloud environments, hundreds to thousands of compute nodes are the foreseen environment. Secondary goals during the development of ArbiLoMod were that the method should be easily implementable on top of existing finite element schemes and that the detection of changed regions vs.\ unchanged regions between two simulation runs is completely automatic. After an overview of existing methods in the literature, the definition of the problem setting and a short overview of ArbiLoMod, the structure of the paper follows the structure of ArbiLoMod: In Section \ref{sec:space_decomposition} the space decomposition used in this paper is presented. Training and Greedy algorithms for local basis generation are the subject of Section \ref{sec:training_and_greedy}. The a posteriori error estimator employed is discussed in Section \ref{sec:a_posteriori}. Localized enrichment of the bases is described in Section \ref{sec:enrichment}. The procedure followed on each geometry change is given in Section \ref{sec:onchange}. The potential for parallelization is sketched in Section \ref{sec:runtime_and_communication}. Section \ref{sec:numerical_experiments} contains numerical results. \subsection*{Existing Approaches} The combination of ideas from the fields of Reduced Basis methods, multiscale methods, and domain decomposition methods has gained a lot of attention in recent years. In 2002, Maday and R\o nquist published the ``Reduced Basis Element'' (RBE) method \cite{Maday2002a,Maday2004}, combining the reduced basis approach with a domain decomposition, coupling local basis vectors by polynomial Lagrange multipliers on domain boundaries.
Built on top of the RBE is the ``Reduced Basis Hybrid Method'' (RBHM) \cite{Iapichino2012} and the ``Discontinuous Galerkin Reduced Basis Element'' method (DGRBE) \cite{FrancescaAntonietti2015}. Similar in motivation is the ``Static Condensation Reduced Basis Element'' (SCRBE) method \cite{PhuongHuynh2012}, which also aims at systems composed of components, where the geometry of the components can be mapped to reference geometries. While the connection between the components is simply achieved by polynomial Lagrange multipliers in the RBE, significant research has been conducted on choosing the right coupling spaces in the context of the SCRBE. Choosing the right space at the interfaces is called ``Port Reduction'' and was included in the name, leading to the ``Port Reduced Static Condensation Reduced Basis Element'' (PR-SCRBE) method \cite{Eftang2013,Eftang2014} which employs a so-called ``pairwise training''. A variation of this idea is used in our method. Recently, also an algorithm to obtain optimal interface spaces for the PR-SCRBE was proposed \cite{Smetana2015} and a framework for a posteriori error estimation was introduced \cite{Smetana2015a}. While the PR-SCRBE performs excellent in using the potentials of cloud environments, its goals are different from ours. While being able to handle various changes to the system simulated, it does not aim at arbitrary modifications. And it does not try to hide the localization from the user, but rather exposes it to let the user decide how the domain decomposition should be done. Another combination of domain decomposition ideas with reduced basis methods coupling different physical formulations on the domain boundaries was presented in \cite{Maier2014,Maier2014a}. A completely different approach to local model order reduction is to see it as an extension to multiscale methods. There are two ways to combine RB and multiscale methods. First is to use RB to accelerate the solution of localized problems which occur in multiscale methods (``RB within multiscale''). This has been done by multiple authors, see e.g. \cite{Abdulle20127014,Abdulle2013203,hesthavenRBMS2015}. The second way is to use a subspace projection for the global problem, but use ideas from multiscale methods to construct the basis functions. This second approach is used by ArbiLoMod and is shared with several methods in the literature: The ``Generalized Multiscale Finite Element Method'' GMsFEM \cite{Efendiev2013} uses the idea of ``Multiscale Finite Elements'' MsFEM \cite{Hou97amultiscale} and constructs reduced spaces which are spanned by ansatz functions on local patches, using only local information. It allows for non-fixed number of ansatz functions on each local patch. GMsFEM uses local eigenproblems and a partition of unity for basis generation. Adaptive enrichment for the GMsFEM is presented by Chung et al.\ in \cite{Chung2014,Chung2014b} and online-adaptive enrichment in \cite{Chung2015}. While an application of the GMsFEM to the problem of arbitrary local modifications would be very interesting, GMsFEM was not designed to be communication avoiding. A parallel implementation of the enrichment described in \cite{Chung2015} would require the communication of high dimensional basis representations, which is avoided in ArbiLoMod. There are also developments of the ``Generalized Finite Element Method'' GFEM \cite{strouboulis2000design,strouboulis2001generalized} which could be extended to handle arbitrary local modifications. 
In particular, the recent development of the Multiscale-GFEM \cite{babuvska2014machine} could be promising in this regard. Similar to ArbiLoMod in spirit is the ``Localized Reduced Basis Multiscale Method'' (LRBMS) \cite{LRBMSwithOnlineEnrichment,OS15}. It uses a non-overlapping domain decomposition and discontinuous ansatz spaces, which are coupled using a DG ansatz at the interface. While LRBMS could be extended to handle arbitrary local modifications and can be implemented in a communication-avoiding scheme, it cannot easily be implemented on top of an existing conforming discretization scheme. ArbiLoMod, in contrast, inherits all conformity properties from the underlying discretization. It can be built on top of standard, conforming finite element schemes and is then conforming itself. Some of the basic ideas of ArbiLoMod were already published by the authors \cite{Buhr2009,BO15}. First results for electrodynamics were published in \cite{BEOR16}. The main advantage of ArbiLoMod is its speed in cloud environments. At every design decision during the development of ArbiLoMod, care was taken to keep the required communication in a parallel implementation at a minimum. \section{Handling Local Changes} \label{sec:onchange} When the enrichment iteration has converged to a sufficient accuracy, the obtained solution is handed over to the user. The envisioned implementation should then wait for the user to modify the model under consideration. After a change to the model simulated, the full procedure described above is repeated, but wherever possible, existing data is reused. On the changed domains, a new mesh is generated, if necessary. Basis vectors having support in the changed region are discarded. The domain decomposition is never changed and thus has to be independent of the geometry. The potential savings are not only in the reduced basis generation, but also in the assembly of the system matrices. In an implementation featuring localized meshing and assembly domain by domain, meshing and assembly have to be repeated only in the domains affected by the change. This approach of recomputing everything in the changed region was chosen because it allows a robust implementation without any assumptions about the changes. \section{Numerical Example}\label{sec:numerical_example} The numerical experiments were performed using \mbox{pyMOR} \cite{pymor}. The source code for the reproduction of all results presented in this section is provided as a supplement to this paper. See the README file therein for installation instructions. Note that this code is kept simple to easily explore ArbiLoMod's mathematical properties. It is not tuned for performance. First results for electrodynamics were published in \cite{BEOR16}. \label{sec:numerical_experiments} \begin{figure} \centering \includegraphics[width=0.35\textwidth]{h_withboundaries.png} \caption{First structure in sequence of simulated structures. Unit square with high and low conductivity regions. White: constant conductivity region ($\sigma = 1$), black: parameterized high conductivity region ($\sigma_\mu = 1 + \mu$). Homogeneous Neumann boundaries $\Gamma_N$ at top and bottom marked blue, inhomogeneous Dirichlet boundaries $\Gamma_D$ at left and right marked light red.} \label{fig:geo1} \end{figure} \subsection{Problem Definition} To illustrate the capabilities of ArbiLoMod we apply it to a sequence of locally modified geometries. We consider heat conduction without heat sources on the unit square $\Omega := ]0,1[^2$.
We approximate $u_\mu$ solving \begin{equation} -\nabla \cdot ( \sigma_\mu \nabla u_\mu) = 0 \end{equation} where $\sigma_\mu : \Omega \rightarrow \mathbb{R}$ is the heat conductivity. We apply homogeneous Neumann boundaries at the top and the bottom: $\nabla u_\mu \cdot n = 0 \ \mathrm{on} \ \Gamma_N := ( ]0,1[ \ \times \ 0 ) \cup ( ]0,1[ \ \times \ 1 ) $ and inhomogeneous Dirichlet boundaries at the left and right: $ u_\mu = 1 \ \mathrm{on} \ \Gamma_{D,1} := 0 \ \times \ ]0,1[ $, $ u_\mu = -1 \ \mathrm{on} \ \Gamma_{D,-1} := 1 \ \times \ ]0,1[ $ (see also Figure~\ref{fig:geo1}). \input{structure_table} The unit square is partitioned into two regions: one region with constant heat conductivity $\sigma_\mu = 1$ and one region with constant, but parameterized conductivity $\sigma_\mu = 1 + \mu$ where $\mu \in [10^0, 10^5]$. We call this second region the ``high conductivity region'' $\Omega_{h,i}$. For reproduction or benchmarking, we give precise definitions in the following. In the first geometry, the high conductivity region is \begin{eqnarray} \Omega_{h,1} &:=& \big[(0.0 ,0.1) \times (0, 1) \big] \ \cup \ \big[(0.9, 1.0) \times (0, 1) \big] \ \cup \ \\ && \nonumber \big[(0.11, 0.89) \times (0.475, 0.485) \big] \ \cup \ \big[(0.1, 0.9) \times (0.495, 0.505) \big] \ \cup \ \\ && \nonumber \big[(0.11, 0.89) \times (0.515, 0.525) \big], \end{eqnarray} see also Figure~\ref{fig:geo1}. The inhomogeneous Dirichlet boundary conditions are handled by a shift function $u_s$ and we solve for $u_0$ having homogeneous Dirichlet values, where $u_\mu = u_0 + u_s$. The parametrization of the conductivity in the high conductivity region, $\sigma_\mu$, leads to a term in the affine decomposition of the bilinear form. The affine decompositions of the bilinear form and the linear form are \begin{eqnarray} a_\mu(u_0,\varphi) &=& \mu \int_{\Omega_{h,i}} \nabla u_0 \cdot \nabla \varphi \ \mathrm{d}x + \int_{\Omega} \nabla u_0 \cdot \nabla \varphi \ \mathrm{d}x \\ \nonumber \dualpair{f_\mu}{\varphi} &=& - \mu \int_{\Omega_{h,i}} \nabla u_s \cdot \nabla \varphi \ \mathrm{d}x - \int_{\Omega} \nabla u_s \cdot \nabla \varphi \ \mathrm{d}x . \end{eqnarray} The coercivity constant $\alpha_\mu$ of the corresponding bilinear form with respect to the $H^1$ norm is bounded from below by $\alpha_{LB} := \frac{\sigma_\mathrm{min}}{c_F^2+1}$ where $\sigma_\mathrm{min}$ is the minimal conductivity and $c_F = \frac{1}{\sqrt{2}\pi}$ is the constant in the Friedrichs inequality $ \norm{\varphi}_{L^2(\Omega)} \leq c_F \norm{\nabla \varphi}_{L^2(\Omega)} \ \forall \varphi \in H^1_0(\Omega) $. The problem is discretized using $P^1$ ansatz functions on a structured triangle grid with maximum triangle size $h$. The grid is carefully constructed to resolve the high conductivity regions, i.e. $h$ is chosen to be $1/n$ where $n$ is a multiple of 200. To mimic ``arbitrary local modifications'', the high conductivity region is changed slightly four times, which leads to a sequence of five structures to be simulated in total. 
The high conductivity regions are defined as: \begin{eqnarray} \Omega_{h,2} &:=& \Omega_{h,1} \setminus \big[(0.1, 0.11) \times (0.495, 0.505) \big]\\ \Omega_{h,3} &:=& \Omega_{h,2} \setminus \big[(0.89, 0.9) \times (0.495, 0.505) \big] \nonumber\\ \Omega_{h,4} &:=& \Omega_{h,3} \cup \big[(0.1, 0.11) \times (0.515, 0.525) \big] \nonumber\\ \Omega_{h,5} &:=& \Omega_{h,4} \cup \big[(0.89, 0.9) \times (0.495, 0.505) \big] \nonumber \end{eqnarray} These modifications only affect a very small portion of the domain (actually, only 0.01\%), but for high contrast configurations, they lead to strong global changes in the solution, see Figure~\ref{fig:structures}. An equidistant domain decomposition of $8 \times 8$ domains is used. The mesh resolves the domain boundaries. \subsection*{Configuration} If not specified otherwise, we use a mesh size of $1/h = 200$, a training tolerance of $\varepsilon_\mathrm{train} = 10^{-4}$, a number of random samplings of $M = 60$, a greedy tolerance of $\varepsilon_\mathrm{greedy} = 10^{-3}$, a convergence criterion of $\norm{R_\mu(\widetilde u_\mu)}_{V_h'} < 10^{-2}$, an enrichment fraction of $d = 0.5$, a training set of size $|\Xi| = 6$, and the parameter for extension calculation is $\overline \mu = 10^5$. \subsection{Results} \label{sec:num_results} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{tolerances.pdf} \caption{Maximum relative $H^1$-error on training set $\Xi$ in dependence of tolerances in codim 1 training and codim 0 greedy. Online enrichment disabled. Script: \texttt{experiment\_tolerances.py}.} \label{fig:tolerances} \end{figure} The initial reduced space is created using the local trainings and greedy algorithms. In both the trainings and the greedy algorithms a tolerance parameter steers the quality of the obtained reduced space: In the trainings, $\varepsilon_\mathrm{train}$ is the stopping criterion for the SnapshotGreedy (Algorithm \ref{algo:snapshot_greedy}) and the local greedys stop when the local error estimator stays below the prescribed tolerance $\varepsilon_\mathrm{greedy}$. The resulting reduction errors in dependence on the two tolerances are depicted in Figure~\ref{fig:tolerances}. \begin{table} \footnotesize \centering \begin{tabu}{c|[1pt]c|[lightgray]c|c|[lightgray]c|c|[lightgray]c|[1pt]c|[lightgray]c} \multirow{2}{*}{geometry}& \multicolumn{6}{c|[1pt]}{\em with training} & \multicolumn{2}{c}{\em without training}\\ &\multicolumn{2}{c|}{trainings} & \multicolumn{2}{c|}{greedys} & \multicolumn{2}{c|[1pt]}{iterations} & \multicolumn{2}{c}{iterations} \\ \hline &\multicolumn{2}{c|}{reuse:} & \multicolumn{2}{c|}{reuse:} & \multicolumn{2}{c|[1pt]}{reuse:} & \multicolumn{2}{c}{reuse:} \\ & no & yes & no & yes & no & yes & no & yes\\ \tabucline[lightgray]{2-} 1 & 112 & 112 (-0 \%) & 64 & 64 (-0 \%) & 24 & 24 (-0 \%) & 46 & 46 (-0 \%) \\ 2 & 112 & 5 (-96 \%) & 64 & 8 (-88 \%) & 24 & 13 (-46 \%) & 48 & 28 (-42 \%) \\ 3 & 112 & 5 (-96 \%) & 64 & 8 (-88 \%) & 20 & 14 (-30 \%) & 42 & 27 (-36 \%) \\ 4 & 112 & 3 (-97 \%) & 64 & 6 (-91 \%) & 25 & 10 (-60 \%) & 54 & 23 (-57 \%) \\ 5 & 112 & 5 (-96 \%) & 64 & 8 (-88 \%) & 25 & 12 (-52 \%) & 52 & 27 (-48 \%) \\ \end{tabu} \caption{Number of iterations of online enrichment: (a) With and without codim 1 training. (b) With and without reuse of basis functions of previous simulations. Convergence criterion: $\norm{R_\mu(\widetilde u_\mu)}_{V_h'} < 10^{-4}$, greedy tolerance: $\varepsilon_\mathrm{greedy} = 10^{-5}$. 
See also Figures \ref{fig:basisreusewithtraining}, \ref{fig:basisreuse}.\\ Scripts: \texttt{experiment\_basisreuse\_with\_training.py}, \texttt{experiment\_basisreuse.py}.} \label{tab:basisreusecompare} \end{table} \begin{figure} \centering \input{figure_basisreuse_with_training} \caption{Relative $H^1$ error over iterations with and without basis reuse after geometry change. With codim 1 training. Convergence criterion: $\norm{R_\mu(\widetilde u_\mu)}_{V_h'} < 10^{-4}$, greedy tolerance: $\varepsilon_\mathrm{greedy} = 10^{-5}$. See also Table~\ref{tab:basisreusecompare}.\\ Script: \texttt{experiment\_basisreuse\_with\_training.py}.} \label{fig:basisreusewithtraining} \end{figure} \begin{figure} \centering \input{figure_basisreuse} \caption{Relative $H^1$ error over iterations with and without basis reuse after geometry change. Without codim 1 training. Convergence criterion: $\norm{R_\mu(\widetilde u_\mu)}_{V_h'} < 10^{-4}$, greedy tolerance: $\varepsilon_\mathrm{greedy} = 10^{-5}$. See also Table~\ref{tab:basisreusecompare}.\\ Script: \texttt{experiment\_basisreuse.py}.} \label{fig:basisreuse} \end{figure} \begin{figure} \centering \input{figure_estimatorperformance} \caption{ Error estimators $\Delta^{rel}$ and $\Delta^{rel}_{loc}$ over iterations, compared to the relative error. Plotted for $\alpha_\mu = c_{\mathrm{pu},\tilde V} = 1$. Simulation performed with a convergence criterion of $\norm{R_\mu(\widetilde u_\mu)}_{V_h'} < 10^{-6}$, a training tolerance of $\varepsilon_\mathrm{train} = 10^{-5}$, and a greedy tolerance of $\varepsilon_\mathrm{greedy} = 10^{-7}$. Script: \texttt{experiment\_estimatorperformance.py}.} \label{fig:estimatorperformance} \end{figure} If the resulting error is too large, it can be further reduced using iterations of online enrichment, as depicted in Figure~\ref{fig:basisreusewithtraining}. The results suggest a very rapid decay of the error with online enrichment. The benefits of ArbiLoMod can be seen in Figure~\ref{fig:basisreusewithtraining} and Table~\ref{tab:basisreusecompare}: after the localized geometry changes, most of the work required in the initial basis creation does not need to be repeated and the online enrichments converge faster for subsequent simulations, leading to fewer iterations. The online enrichment presented here converges even when started on empty bases, as depicted in Figure~\ref{fig:basisreuse}. It does not rely on properties of the reduced local subspaces created by trainings and greedys. \input{runtimes_table} \input{h_study_table} The performance of the localized a posteriori error estimator $\Delta^{rel}_{loc}$ can be seen in Figure~\ref{fig:estimatorperformance}. Comparison of the localized estimator $\Delta^{rel}_{loc}$ with the global estimator $\Delta^{rel}$ shows that, for the example considered here, the localization does not add a significant factor beyond the factor $c_{\mathrm{pu},\tilde V}$, which is supposed to be close to one and was thus neglected in Fig.\ \ref{fig:estimatorperformance}. Even though our implementation was not tuned for performance and is not parallel, we present some timing measurements in Table~\ref{tab:runtimes} and Table~\ref{tab:H_study}. Table~\ref{tab:runtimes} shows that, already in our unoptimized implementation, trainings and greedys have a shorter runtime than a single global solve for problems of sufficient size. 
Taking into account that trainings and greedys create a solution space valid in the whole parameter space, this data is a strong hint that ArbiLoMod can realize its potential for acceleration for large problems, in an optimized implementation and a parallel computing environment. Especially when the solution is to be calculated at multiple parameter values. Table~\ref{tab:H_study} shows the effect of choosing the domain size $H$: Large domains lead to large local problems, while small domains lead to a large reduced global problem (see also Section \ref{sec:H_explanation}). \begin{figure} \centering \includegraphics[width=0.35\textwidth]{basis_sizes.png} \caption{Distribution of local basis sizes after initial training. Relative reduction error at this configuration: $1.3 \cdot 10^{-3}$. Script: \texttt{experiment\_draw\_basis\_sizes.py}.} \label{fig:basis_sizes} \end{figure} \subsection*{Structure of ArbiLoMod} The main ingredients of ArbiLoMod are: \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item a localizing space decomposition, \item local training and greedy algorithms, \item a localized a posteriori error estimator, \item a localized adaptive enrichment procedure. \end{enumerate} ArbiLoMod builds upon a space decomposition of the original ansatz space $V$ consisting of a set of subspaces $V_i \subset V$ whose direct sum is the original ansatz space, i.e. \begin{equation} V = \bigoplus_i V_i. \end{equation} The subspaces $V_i$ are meant to have local properties, i.e. all elements of one subspace $V_i$ have support only in a small subset of the domain. In the first step, for each local subspace $V_i$, a reduced local subspace $\widetilde V_i \subseteq V_i$ is constructed using training and greedy algorithms. Thereafter, the global problem is solved in the space formed by the direct sum of all reduced local subspaces: \begin{eqnarray} &&{\mathrm{ with}} \qquad \widetilde V := \bigoplus_i \widetilde V_i\\ \label{eq:red_problem} && {\mathrm{ find }} \qquad \widetilde u_\mu \in \widetilde V \quad {\mathrm{ such\ that }} \qquad a_\mu( \widetilde u_\mu, v) = \dualpair{f_\mu}{v} \qquad \forall \,v \in \widetilde V. \nonumber \end{eqnarray} To assess the quality of the thus obtained solution, a localized a posteriori error estimator is employed. If necessary, the solution is improved by enriching the reduced local subspaces, using a residual based, localized enrichment procedure. Finally, on each localized change, affected bases are discarded and the procedure starts over. An overview is given in Figure~\ref{fig:workflow}. \begin{figure} \centering \includegraphics[width=.7\textwidth]{workflow.png} \caption{Overview of ArbiLoMod. Generation of initial reduced interface spaces by training and of reduced volume spaces by greedy basis generation is subject of Section \ref{sec:training_and_greedy}. Convergence is assessed with the localized a posteriori error estimator presented in Section \ref{sec:a_posteriori}. Enrichment is discussed in Section \ref{sec:enrichment}. On each geometry change, the procedure starts over, where new initial interface spaces are only generated in the region affected by the change. 
} \label{fig:workflow} \end{figure} \section{Preliminaries}% \input{problem_setting} \input{overview} \input{space_decomposition} \input{conditionings} \input{a_posteriori} \input{enrichment_procedure} \input{local_change} \input{runtime_and_communication} \input{numerical_example} \input{conclusion} \section*{Acknowledgment} {\it The authors would like to thank Clemens Pechstein for many fruitful discussions and the referees, whose reviews led to significant improvements of this manuscript. } \bibliographystyle{abbrv} \subsection*{Problem Setting} In this contribution we particularly look at problems that are modeled by partial differential equations with complex local structure, typically on a very fine scale compared to the overall problem setting. In addition, we assume that the problem might depend on a number of parameters $\mu \in \mathcal{P} \subset \ \mathbb{R}^p$. As a model problem to explore and design our new simulation technique ArbiLoMod, we consider elliptic equations with complex micro-structure. To simplify the presentation, we restrict ourselves to the two dimensional case in this publication. Thus, let $\Omega \subset \ \mathbb{R}^d$, $d=2$ denote the polygonal computational domain, and $V$, $H^1_0(\Omega) \subset V \subset H^1(\Omega)$ denote the solution space. We then look at variational problems of the form \begin{equation} {\mathrm{ find }} \ u_\mu \in V \quad {\mathrm{ such\ that }} \qquad a_\mu( u_\mu, v) = \dualpair{f_\mu}{v} \qquad \forall \,v \in V. \end{equation} Here, $a_\mu: V \times V \to \ \mathbb{R}$, $\mu \in \mathcal{P}$ denotes a parameterized bilinear form with inherent micro-structure and $f_\mu \in V'$ a force term. $V$ is equipped with the standard $H^1$ inner product and the thereby induced norm. $a_\mu$ is assumed to be coercive and continuous and by $\alpha_\mu, \gamma_\mu$ we denote the lower (upper) bounds for the coercivity (norm) constants of $a_\mu$, i.e.\ $\alpha_\mu \norm{\varphi}_V^2 \leq a_\mu(\varphi, \varphi)$ for all $\varphi \in V$ and $\norm{a_\mu} \leq \gamma_\mu$. We further assume that the bilinear form $a_\mu(u,v)$ has a decomposition affine in the parameters and can be written as a sum of parameter independent bilinear forms $a^b(u,v)$ with parameter dependent coefficient functions $\theta_b(\mu)$: \begin{equation} \label{eq:affine_decomposition} a_\mu(u,v) = \sum_b \theta_b(\mu) a^b(u,v) \end{equation} As an example $a_\mu$ could be given as \begin{equation} \label{eq:heat_equation} a_\mu( u, v) = \int_\Omega \sigma_\mu(x) \nabla u(x) \cdot \nabla v(x) {\mathrm{dx}} \end{equation} where $\sigma_\mu: \Omega \to \ \mathbb{R}$ denotes a parameterized heat conduction coefficient that varies in space on a much finer scale than the length scale of $\Omega$. \section{Runtime and Communication} \label{sec:runtime_and_communication} A major design goal of ArbiLoMod is communication avoidance and scalability in parallel environments. Although the main topic of this publication is the mathematical properties, we want to highlight the possibilities offered by ArbiLoMod to reduce communication in a parallel setup. \input{figure_webservice_no_communication} Similar to overlapping Domain Decomposition methods, we require that not only the local domain but also an overlap region is available locally. For a subdomain $\Omega_i$ the overlap region is the domain itself and all adjacent domains, as depicted in Figure~\ref{fig:no_communication}, i.e.\ all subdomains in the neighborhood $\mathcal{N}_{\{i\}}$. 
As the overlap region includes the support of all training spaces, one can compute all initial reduced local subspaces with support in $\Omega_i$ without further communication. This work can be distributed over many nodes. Afterwards, only reduced representations of the operator have to be communicated. Using the operator decomposition $a^b(u,v) = \sum_{i=1}^{N_D} a_{\Omega_i}^b(u,v)$, a global, reduced operator is collected using an all-to-one communication of reduced matrices. The global reduced problem is then solved on a single node. It is assumed that the global, reduced system is sufficiently small. If the accuracy is not sufficient, online enrichment is performed. This step requires additional communication: first, for the evaluation of the error estimator and, second, to communicate new basis vectors of the reduced face spaces $\widetilde V_\xi, \ \xi \in \Upsilon_1$. Note that it is sufficient to communicate the local projection $P_{U_\xi}(\psi)$ and reconstruct the actual basis function as its extension, so that we save communication costs proportional to the volume to surface ratio. The evaluation of the localized error estimator requires the dual norms of the residual in the localized spaces $\norm{R_\mu(\widetilde u_\mu)}_{O_\xi'}$. Using a stabilized online-offline splitting \cite{BEOR14a}, the error estimator can be evaluated for the full system using only reduced quantities. The computation of the reduced operators is performed in parallel, similar to the basis construction in the first step. The actual evaluation of the error estimator can be performed on a single node and is independent of the number of degrees of freedom of the high fidelity model. \label{sec:H_explanation} An important parameter for ArbiLoMod's runtime is the domain size $H$. The domain size affects the size of the local problems, the amount of parallelism in the algorithm, and the size of the reduced global problem. An $H$ too large leads to large local problems, while an $H$ too small leads to a large reduced global problem (see also the numerical example in Section \ref{sec:num_results} and especially the results in Table \ref{tab:H_study}). $H$ has to be chosen to balance these two effects. As the focus of this manuscript is ArbiLoMod's mathematical properties, the question of choosing $H$ for optimal performance will be postponed to future research. \section{Space Decomposition} \label{sec:space_decomposition} In this section we introduce definitions for the different kinds of spaces needed in the method. Since ArbiLoMod is designed to work on top of existing discretization schemes, we formulate the method on a discrete level. The following definitions are for two dimensional problems, but can be easily extended to three dimensional problems. \subsection{Basic Subspaces} $V_h$ denotes a discrete ansatz space, spanned by ansatz functions $\{\psi_i\}_{i=1}^N =: {\mathcal{B}}$. We assume that the ansatz functions have a localized support, which is true for many classes of ansatz functions like Lagrange- or N\'ed\'elec-type functions. We first introduce a direct decomposition of the ansatz space $V_h$ into ``basic subspaces''. These are used to construct the space decomposition later. To obtain the subspaces we classify the ansatz functions by their support and define each subspace as the span of all ansatz functions of one class. 
To this end, we introduce a non overlapping domain decomposition of the original domain $\Omega$ into open subdomains $\Omega_i$: \begin{equation} \overline \Omega = \bigcup_{i=1}^{N_D}\overline \Omega_i \qquad \Omega_i \cap \Omega_j = \emptyset\ \mathrm{for}\ i\ne j \label{eq:dd} \end{equation} where $N_D$ is the number of subdomains. For each $\psi$ in ${\mathcal{B}}$, we call $I_\psi$ the set of indices of subdomains that have non-empty intersection with the support of $\psi$, i.e. \begin{equation} I_\psi := \Big\{i \in \{1, \dots, N_D\} \ \Big| \ \operatorname{supp}(\psi) \cap \Omega_i \ne \emptyset \Big\}. \end{equation} We collect all occurring domain sets in $\Upsilon$: \begin{equation} \Upsilon := \Big\{ I_\psi , \psi \in {\mathcal{B}} \Big\}. \end{equation} We define sets for each codimension: \begin{equation}\label{def:codimsets} \Upsilon_0 := \Big\{ \xi \in \Upsilon \ \Big| \ |\xi| = 1\Big\},\ \Upsilon_1 := \Big\{ \xi \in \Upsilon \ \Big| \ |\xi| = 2\Big\},\ \Upsilon_2 := \Big\{ \xi \in \Upsilon \ \Big| \ |\xi| > 2\Big\}. \end{equation} The elements of $\Upsilon_2$,$\Upsilon_1$, and $\Upsilon_0$ can be associated with interior vertices (codimension 2), faces (codimension 1) and cells (codimension 0) of the domain decomposition. The classification is very similar to the classification of mesh nodes in domain decomposition methods, see for example \cite[Def. 3.1]{klawonn2006dual} or \cite[Def. 4.2]{toselli2005domain}. \begin{definition}[Basic Subspaces] \label{def:basic_space_def} For each element $\xi \in \Upsilon$ we define a basic subspace $U_\xi$ of $V$ as: \begin{equation} U_\xi := \operatorname{span}\Big\{\psi \in {\mathcal{B}} \ \Big| \ I_\psi = \xi \Big\}. \nonumber \end{equation} \end{definition} \begin{remark}[Basic Decomposition] The definition of $U_\xi$ induces a direct decomposition of $V_h$: \begin{equation} V_h = \bigoplus_{\xi \in \Upsilon} U_\xi. \nonumber \end{equation} \end{remark} \subsection{Space Decomposition} \label{sec:decomposition} As mentioned in the introduction, ArbiLoMod is based on a space decomposition. It can work on the Basic Decomposition introduced in Definition \ref{def:basic_space_def}. However, faster convergence and smaller basis sizes are achieved using the modified space decomposition introduced in this section. Here we assume that the discrete ansatz space $V_h$ is spanned by finite element ansatz functions defined on a mesh which resolves the subdomains $\Omega_i$. For each of the spaces $U_\xi$ defined in Definition \ref{def:basic_space_def}, we calculate extensions. The extensions are computed on the ``extension space'' $\extension{U_\xi}$ which is defined as \begin{equation} \label{eq:definition_extension_space} \extension{U_\xi} := \bigoplus \Big\{U_\zeta \ \Big| \ \zeta \subseteq \xi \Big\}. \end{equation} \input{figure_extension_spaces} Examples for extension spaces are given in Figure~\ref{fig:extension_space}. For each space $U_\xi$, a linear extension operator $\mathrm{Extend}$ is defined: \begin{equation} \mathrm{Extend} : U_\xi \rightarrow \extension{U_\xi}. \end{equation} For all $\xi$ in $\Upsilon_0$, $\mathrm{Extend}$ is just the identity. For all $\xi$ in $\Upsilon_1$, we extend by solving the homogeneous version of the equation with Dirichlet zero boundary values for one (arbitrary) chosen $\overline{\mu} \in \mathcal{P}$. For example, in the situation depicted in Fig. 
\ref{fig:extended_base_1}, \begin{eqnarray*} \mathrm{Extend} : U_{\{1,2\}} &\rightarrow& U_{\{1\}} \oplus U_{\{1,2\}} \oplus U_{\{2\}}\\ \varphi &\mapsto& \varphi + \psi\\ &&\mbox{where } \psi \in U_{\{1\}} \oplus U_{\{2\}} \mbox{ solves}\\ && a_{\overline{\mu}}(\varphi + \psi, \phi) = 0 \qquad \forall \phi \in U_{\{1\}} \oplus U_{\{2\}} \end{eqnarray*} For all $\xi$ in $\Upsilon_2$, $\mathrm{Extend}$ is defined by first extending linearly to zero on all edges in the extension domain, i.e.~in all spaces in $\extension{U_\xi}$ which belong to $\Upsilon_1$. Then, in a second step, the homogeneous version of the equation with Dirichlet boundary values is solved on the spaces in $\extension{U_\xi}$ which belong to $\Upsilon_0$. The procedure is visualized in Figure~\ref{fig:vertexextension}. The functions constructed by this two step procedure are the same as the MsFEM basis functions used by Hou and Wu \cite{Hou97amultiscale}. Note that the basis functions for $\extension{U_\xi}, \xi \in \Upsilon_2$ form a partition of unity in the interior of the coarse partition of the domain. They can be completed to form a partition of unity on the whole coarse partition of the domain if suitable basis functions for the vertices at the boundary of the domain are added. This will be used in Section \ref{sec:a_posteriori} below for the robust and efficient localization of an a posteriori error estimator. Examples of extended functions for all codimensions are given in Figure~\ref{fig:extended_base}. In the case of the Laplace equation these basis functions coincide with the hat functions on the coarse partition (see Figure~\ref{fig:vertexextension}). As discussed in Section \ref{sec:a_posteriori} below, hat functions are an alternative choice that allows for a better a priori bound of the constants in the localized a posteriori error estimator, as their gradient is controlled by $1/H$ -- where $H$ denotes the mesh size of the macro partition -- independent of the contrast of the data. \input{figure_extension_bases} \input{figure_codim2_extension} For the communication avoiding properties of ArbiLoMod, it is important to note that extensions can be calculated independently on each domain, i.e. $\mathrm{Extend}(\varphi)|_{\Omega_i}$ can be calculated having only information about $\varphi$ and $\Omega_i$, without knowledge about other domains. Using this operator, we define the local subspaces $V_\xi$: \begin{definition}[Extended Subspaces] \label{def:extended_space_def} For each element $\xi \in \Upsilon$ we define an extended subspace $V_\xi$ of $V_h$ as: \begin{equation} \label{eq:space_decomposition} V_\xi := \Big\{ \mathrm{Extend} (\varphi) \ \Big| \ \varphi \in U_\xi \Big\}. \nonumber \end{equation} According to the definition in \eqref{def:codimsets} we call $V_\xi $ a cell, face, or vertex space, if $\xi \in \Upsilon_0, \xi \in \Upsilon_1$, or $\xi \in \Upsilon_2$, respectively (cf.\ Fig. \ref{fig:extended_base}). \end{definition} \begin{remark}[Extended Decomposition] The definition of $V_\xi$ induces a direct decomposition of $V_h$: \begin{equation} V_h = \bigoplus_{\xi \in \Upsilon} V_\xi. \nonumber \end{equation} \end{remark} ~\\[-1ex] \noindent Space decompositions of the same spirit are used in the context of Component Mode Synthesis (CMS), see \cite{Hetmaniuk2010,hetmaniuk2014error}. \subsection{Projections} \input{projections}
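As a purely illustrative aside (not part of the method description above), the face extension operator $\mathrm{Extend}$ defined in the previous subsection has a simple algebraic counterpart on the discrete level. The following sketch assumes that the stiffness matrix for the fixed parameter $\overline{\mu}$ has been assembled on the two cells adjacent to a face and partitioned into face degrees of freedom (the space $U_\xi$, $\xi \in \Upsilon_1$) and interior cell degrees of freedom (the spaces $U_{\{i\}}$); all names are placeholders.
\begin{verbatim}
import numpy as np

def extend_face_function(A_II, A_IF, phi_F):
    """Discrete counterpart of the face extension: find psi in the two
    adjacent cell spaces such that a_mubar(phi + psi, phitilde) = 0 for
    all phitilde in those cell spaces, i.e. solve A_II psi = -A_IF phi_F.

    A_II  -- stiffness block coupling interior (cell) dofs with themselves
    A_IF  -- stiffness block coupling interior dofs with the face dofs
    phi_F -- coefficient vector of the face function to be extended
    """
    psi_I = np.linalg.solve(A_II, -A_IF @ phi_F)
    return psi_I  # together with phi_F this represents Extend(phi)
\end{verbatim}
A dense solve is shown only for brevity; in practice one would reuse a sparse factorization of the interior block for all basis functions of the same face.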
\section{Introduction} Recently, various authors \cite{KLT,KKLR,GKKLR} considered pure $R^2$ theories of gravity coupled to matter. These theories are particularly interesting also in regard to cosmology because they naturally accommodate de Sitter universes. While demanding conformal invariance (Weyl local gauge symmetry) would require spin two ghosts arising from the Weyl square term \cite{stelle,GFvN}, the rigid scale invariant $R^2$ theory propagates only physical massless modes in de Sitter space, in contrast with the $R+R^2$ theory, which has in addition a Minkowski phase with a massive scalar, the inflaton. An Einstein term is then obtained through quantum effects, as substantiated by the analysis of \cite{KLT}. Further restrictions that follow from the supersymmetric extensions of these theories are the aim of the present investigation. In particular, in this note we implement the analysis of \cite{KLT} by requiring that pure $R^2$ supergravity be effectively derived solely in terms of the geometry of curved superspace. This poses severe restrictions on the dual standard supergravity theory which, in fact, cannot be an arbitrary scale invariant theory of supergravity. For example, we find that only certain cases are possible among the ones worked out in \cite{KLT}. Moreover, one conformally coupled chiral superfield (which we call $S$) and another chiral superfield (which we call $T$) are not matter fields but have a purely gravitational origin. In fact, out of the three (unique) suggested forms of superpotential in \cite{KLT}, only a certain linear combination of $TS$ and $S^3$ can arise; $T^{3/2}$ alone is also possible. This result parallels the analogous analysis made in the $R+R^2$ theory \cite{Cecotti, Cecotti-Kallosh}. Also, the $T^{3/2}$ theory, where only the $T$ field is present, is not an $R^2$ completion but a particular (scale invariant) case of the super $F(R)=R^3$ chiral theory originally investigated in \cite{Ketov,KS1,FKP}. The latter has in fact an anti-de Sitter rather than a de Sitter phase \cite{FKP,ENOl,KLT}. All the scale invariant theories discussed above present instabilities and therefore have to be modified both at the classical and at the quantum level. The problem caused by these instabilities is similar to the one found in the context of $R+R^2$ supergravity \cite{ENOl,KL,EON1,FKR,FKLP,FKLP2,FKvP,FKP,KL1,EON,FeKR,ADFS,FKL1, KL00,DZ,FP,Far,Far2,LT}, which was solved in \cite{KL}. Even more interesting is the analysis in the new minimal formulation \cite{Sohnius:1982fw,Ferrara:1988qxa}. Here the dual supergravity $R+R^2$ theory is a gauge theory in the Higgs phase \cite{CFPS,FKR,FKLP,FKLP2}. The de-Higgsed phase corresponds to the pure $R^2$ theory in the limit in which $H M_P=g M_P^2$ is kept fixed ($H$ is the Hubble constant) while the gauge coupling goes to zero. This theory is in fact the extension of the Freedman model \cite{Freedman}, where a massless vector multiplet with a Fayet-Iliopoulos (FI) term gives rise to a positive cosmological constant. Here there is an additional massless chiral field, dual to the antisymmetric tensor auxiliary field that has become dynamical. The paper is organized as follows. In section 2 we present the superconformal rules needed for our analysis and we discuss the pure $R^2$ theory in the old minimal formulation of ${\cal N}=1$ supergravity. We also present the corresponding scale invariant matter couplings. 
In section 3, where chiral multiplets are added, we describe pure $R^2$ supergravity in the new minimal formulation; we conclude in section 4. \section{$R^2$ Supergravity in the Old Minimal Formulation} For convenience we report here some rules of superconformal tensor calculus that will be useful in order to go from the $R^2$ theory to its standard supergravity form. These rules are explained in \cite{FvP}, and also in \cite{FKvP,FKP}. Superconformal fields are denoted by their Weyl weight $w$ and chiral weight $n$. So we will use the notation $X_{w,n}$ and we will only consider scalar superfields. The basic operator is the $\Sigma$ operator, which is the curved superspace analog of $\bar D ^2$. The $\Sigma$ operator has weights $(1,3)$ and it can be applied to a superconformal field $X_{w,w-2}$ so that $\Sigma X_{w,w-2}$ is a chiral superfield of weights $(w+1,w+1)$. $X$ can be (anti)chiral only if $w=1$, in which case $\Sigma\bar X _{(1,-1)}$ is a chiral superfield of weight $(2,2)$. The basic identity between F and D densities of a chiral superfield $f$ of weight $(0,0)$ is \begin{eqnarray} [f {\cal R} S_0^2]_F=[(f+\bar f)S_0\bar S _0]_D, \label{i1} \end{eqnarray} where ${\cal R}=(\Sigma(\bar S _0)/S_0)_{(1,1)}$ is the chiral scalar curvature multiplet. The notation $[O]_{D,F}$ denotes, as usual, the standard D- and F-term density formulae of conformal supergravity, for a real superfield $O$ with scaling weight 2 and vanishing chiral weight or a chiral superfield with Weyl (and chiral) weight 3. In particular, the bosonic components of the curvature chiral scalar multiplet ${\cal R}$ are \begin{eqnarray} {\cal R}=\frac{1}{3}\bar u +\cdots+\theta^2 {\mathscr{F}}_R, \end{eqnarray} where \begin{eqnarray} {\mathscr{F}}_R=-\frac{1}{2} R-3 A_\mu^2+3 i D^\mu A_\mu , \end{eqnarray} and $u, A_\mu$ are the supergravity auxiliary fields \cite{FvN,SW}. \subsection{Scale Invariant Supergravity} It can easily be seen from the chiral curvature superfield ${\cal R}$ that we can write the following scale-invariant supergravity action \begin{eqnarray} {\cal L}_{scal.inv}=\alpha [{\cal R}\bar {\cal R}]_D -\beta[{\cal R}^3]_F, \label{a1} \end{eqnarray} where $\alpha,\beta$ are dimensionless couplings. We may write eq.(\ref{a1}) in a dual form by introducing Lagrange multiplier superfields $T,S$ so that \begin{eqnarray} {\cal L}_D=\alpha\big{[}S_0\bar S_0 S \bar S\big{]}_D-\beta\big{[}S_0^3S^3\big{]}_F-3\bigg[T\left(\frac{{\cal R}}{S_0}-S\right)S_0^3\bigg]_F . \label{a2} \end{eqnarray} It is easy to check that by integrating out the Lagrange multiplier superfield $T$ in (\ref{a2}), we get back the original theory (\ref{a1}). However, by using the identity in eq.(\ref{i1}), we may write (\ref{a2}) as \begin{eqnarray} {\cal L}_D=-[(3T+ 3\bar T-\alpha S\bar S)S_0\bar{S}_0]_D+[ (-\beta S^3+3 T S)S_0^3]_F ,\label{a3} \end{eqnarray} which describes standard supergravity with K\"ahler potential ($\alpha\geq 0$) \begin{eqnarray} K=-3 \log\left(T+\bar T-\frac{\alpha}{3} S\bar S\right), \end{eqnarray} and superpotential \begin{eqnarray} W(T,S)=3 TS-\beta S^3 . \end{eqnarray} \vskip.1in \noindent {\bf The case $\mathbf{\boldsymbol\alpha=0}$}. 
In this particular case, the scale invariant supergravity action turns out to be \begin{eqnarray} {\cal L}_D=-\beta[{\cal R}^3]_F, \label{a30} \end{eqnarray} which can be written in a dual form as \begin{eqnarray} {\cal L}_D=-[3(T+ \bar T)S_0\bar{S}_0]_D+[ (-\beta S^3+3 T S)S_0^3]_F .\label{a33} \end{eqnarray} We see that $S$ appears now as a Lagrange multiplier superfield and it can be integrated out. As a result, we find that $S=(T/\beta)^{1/2}$ and eq.(\ref{a33}) is written as \begin{eqnarray} {\cal L}_D=-[3(T+ \bar T)S_0\bar{S}_0]_D+ [ 2\beta^{-1/2} T^{3/2}S_0^3]_F ,\label{a333} \end{eqnarray} so that the K\"ahler potential and the superpotential are given by \begin{eqnarray} &&K=-3 \log(T+\bar T)\, , ~~~\\ &&W=\frac{2}{\beta^{1/2}}T^{3/2}. \end{eqnarray} This is one of the models used in \cite{KLT} to describe a supergravity dual of pure $R^2$ supergravity. However, its origin is not from the scale invariant ${\cal R}\bar {\cal R}$ term but rather from the other scale invariant ${\cal R}^3$ term. As observed in \cite{KLT}, it has a negative cosmological constant and so it cannot be the dual of an $R^2$ theory \cite{FKP,ENOl}. \subsection{Scale Invariant Matter Couplings} Pure $R^2$ supergravity (in its dual formulation) is invariant under the scale symmetry \begin{eqnarray} T\to T e^{\lambda}, ~~~S\to S e^{\lambda/2}, ~~~S_0\to S_0 e^{-\lambda/2}, \end{eqnarray} which is inherited from the scale symmetry of the gravitational $R^2$ theory \begin{eqnarray} \frac{{\cal R}}{S_0}\to \frac{{\cal R}}{S_0} e^{\lambda/2} , ~~~S_0\to S_0 e^{-\lambda/2}. \label{rs} \end{eqnarray} Let us add $n$ superconformal chiral multiplets $A^i$ with scaling $A^i\to A^i e^{\lambda/2}$ (but $(0,0)$ superfield weights). Then as in \cite{KLT}, we can have conformally coupled matter $C^{\bar i}_jA^j \bar A_{\bar i} S_0 \bar S _0$ but also chiral F terms coupling to the curvature $R$~\footnote{A term $d_i R \bar A ^i$, generating a mixing $d_i S \bar A^i+{\rm h.c.}$ in the K\"ahler potential, is also possible.} \begin{eqnarray} RC_{ij} A^i A^jS_0^2,~~~\frac{R^2}{2}C_i A^i S_0,~~~C_{ijk} A^i A^j A^k S_0^3, \end{eqnarray} with constant coefficients $C^{\bar i}_j,~C_{ij},~C_i, ~C_{ijk}$. In this case, the dual theory takes the form \begin{eqnarray} [(T+\bar T -\alpha S \bar S-C^{\bar j}_i A^i \bar A _{\bar j})S_0 \bar S _0]_D +[W(T,S,A^i)S_0^3]_F \end{eqnarray} with \footnote{Note that the terms containing $S$ in $W$ can be transferred to the K\"ahler potential by a $T$ redefinition.} \begin{eqnarray} W(T,S,A^i)=-T S +\beta S^3+\frac{S^2}{2}C_i A^i+S C_{ij} A^i A^j +C_{ijk} A^i A^j A^k. \end{eqnarray} This is a restricted superpotential, which contains neither a $T^{3/2}$ term nor any other direct coupling of $T$ to the matter fields. Note that the scaling symmetry weight is not the same as the superconformal weight, which in our notation is always $(0,0)$ for all chiral fields with the exception of $S_0$ ${(1,1)}$ and ${\cal R}$ ${(1,1)}$. ${\cal R}$ is actually scale inert, as is obvious from eq.(\ref{rs}). So the scale symmetry in the pure gravitational theory is only dictated by the compensator $S_0$. \section{$R^2$ Supergravity in the New Minimal Formulation} In new-minimal supergravity, the appropriate gauging is implemented by a real linear multiplet $L$ with scaling weight $w=2$ and vanishing chiral weight $n=0$ \cite{Sohnius:1982fw,FGKP}. 
In particular, the pure $R^2$ new minimal supergravity Lagrangian can be written as \begin{eqnarray} {\cal L} = \frac{1}{4g^2} \bigg( [ W^{\alpha}(V_R) W_{\alpha}(V_R)]_F +{\rm h.c.} \bigg), \label{NM1} \end{eqnarray} where \begin{eqnarray} \label{VR} V_R &=& \text{ln}\,\left(\frac{L}{S_0\overline{S}_0}\right), \\ \label{VRW} W_{\alpha}(V_R) &=&-\frac{1}{4} \overline{\nabla}^2 \, \nabla_{\alpha} ( V_R ) . \end{eqnarray} The Lagrangian in eq.(\ref{NM1}) is superconformally invariant and the superconformal symmetry can be fixed by choosing $L = 1$. Then, the superspace geometry is described by the new-minimal formulation \cite{Sohnius:1982fw,Ferrara:1988qxa}, where the graviton multiplet $(e_\mu^a,\psi_\mu, A_\mu,B_{\mu\nu})$ consists of the graviton $e_\mu^a$, the gravitino $\psi_\mu$ and two auxiliary gauge fields $A_\mu$ and $B_{\mu\nu}$, possessing the gauge symmetry \begin{eqnarray} \delta A_\mu=\partial_\mu b\, , ~~~~\delta B_{\mu\nu}=\partial_\mu b_\nu-\partial_\nu b_\mu. \end{eqnarray} In fact, the superconformal gauge fixing respects the $U(1)_R$ $R$-symmetry of the superconformal algebra, which is gauged by the vector $A_\mu$. Then, $V_R$ is the gauge multiplet of the supersymmetry algebra with components (in the Wess-Zumino gauge) \begin{eqnarray} \label{gaugemult} V_R=\left(A_\mu - 3H_\mu , - \gamma_5\gamma^\nu r_\nu, -\frac{1}{2}\hat{\cal{R}} -3 H_\mu H^\mu \right), \label{vp} \end{eqnarray} where $r_\nu$ is the supercovariant gravitino field strength, $\hat{{\cal R}}$ is the (supercovariant) Ricci scalar and $H^\mu$ the Hodge dual of the (supercovariant) field strength for the auxiliary two-form \cite{Ferrara:1988qxa} \begin{eqnarray} H_\mu=-\frac{1}{3!} \epsilon_{\mu\nu\rho\sigma} H^{\nu\rho\sigma}\, , ~~~ H_{\mu\nu\rho}=\partial_\mu B_{\nu\rho}+\mbox{cyclic perm. } \end{eqnarray} Obviously, $H_\mu$ is divergenceless \begin{eqnarray} \nabla^\mu H_\mu=0. \label{dH} \end{eqnarray} In other words, the bosonic content of the gauge multiplet $V_R$ is \begin{eqnarray} V_R=(A_\mu^-,\, 0,\, -\frac{1}{2} R-3 H_\mu H^\mu), ~~~A_\mu^-=A_\mu-3 H_\mu. \label{vrb} \end{eqnarray} Clearly, the F-term in eq.(\ref{NM1}) will produce the usual $D_R^2$ term in the bosonic action, where $D_R$ is the highest component of the gauge multiplet. Since the latter contains the scalar curvature $R$, as can be seen from eq.(\ref{vrb}), it is obvious that eq.(\ref{NM1}) describes an $R^2$ theory \cite{CFPS,FKR,FKLP,FKLP2}. Indeed, by employing eqs.(\ref{VRW},\ref{vp}), we find that the bosonic part of (\ref{NM1}) is written as \begin{eqnarray} \label{star} e^{-1}{\cal L}= \frac{1}{8g^2} \Big{(}R+6H_{\mu}H^{\mu}\Big{)}^2 - \frac{1}{4g^2} F_{\mu\nu}(A_\rho^-)F^{\mu\nu}(A_\rho^-) \,. \end{eqnarray} We can integrate out $H_\mu$ after introducing the Lagrange multipliers $\Lambda, a$ such that \begin{eqnarray} \label{star1} e^{-1} {\cal L}= \frac{1}{8g^2} \Lambda \left(R+6H_\mu H^\mu\right) - \frac{1}{32g^2} \Lambda^2 - \frac{1}{4g^2} F_{\mu\nu}F^{\mu\nu} +\frac{1}{g^2}a\partial_\mu H^\mu . \end{eqnarray} Integrating out $\Lambda$ gives back (\ref{star}), whereas the field $a$ enforces the constraint (\ref{dH}). 
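Explicitly, varying eq.(\ref{star1}) with respect to $H^\mu$ (after an integration by parts in the last term) gives the algebraic field equation \begin{eqnarray} \frac{3}{2g^2}\,\Lambda H_\mu=\frac{1}{g^2}\,\partial_\mu a . \end{eqnarray} 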
The auxiliary field $H_\mu$ appears only quadratically and can easily be integrated out, leading to \begin{eqnarray} H_\mu=\frac{2}{3 \Lambda}\partial_\mu a; \end{eqnarray} therefore, eq.(\ref{star1}) is equivalent to \begin{eqnarray} \label{star2} e^{-1} {\cal L}= \frac{1}{8g^2} \Lambda R - \frac{1}{4g^2} F_{\mu\nu}F^{\mu\nu} - \frac{1}{32g^2} \Lambda ^2 - \frac{1}{3 g^2\Lambda} \partial_\mu a\partial^\mu a. \end{eqnarray} The theory in eq.(\ref{star2}) is in a Jordan frame and it can be expressed in the Einstein frame after the conformal transformation \begin{eqnarray} g_{\mu\nu}\to e^{-\sqrt{\frac{2}{3}}\phi} \, g_{\mu\nu}, \label{conf0} \end{eqnarray} where \begin{eqnarray} \phi = \sqrt{\frac{3}{2}} \, \ln \frac{ \Lambda}{4g^2} . \end{eqnarray} Then, eq.(\ref{star2}), after rescaling $A_\mu^- \rightarrow g A_\mu^-$ and $a\to g^2 \sqrt{6} a$, is written in the Einstein frame as \begin{eqnarray} e^{-1} {\cal L}= \frac{1}{2} R - \frac{1}{4} F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}\partial_\mu \phi \, \partial^\mu \phi-\frac{1}{2}e^{-2\sqrt{\frac{2}{3}}\phi}\partial_\mu a \, \partial^\mu a -\frac{1}{2} g^2. \label{fr} \label{suV} \end{eqnarray} Therefore, $R^2$ in new minimal supergravity is described by standard supergravity coupled to a massless vector field and a massless complex scalar \begin{eqnarray} T= \frac{1}{2}e^{\sqrt{\frac{2}{3}} \phi } + i\frac{ a}{\sqrt{6}}. \end{eqnarray} The field $T$ parametrizes the symmetric space $SU(1,1)/U(1)$ of scalar curvature $R=-2/3$. In fact, in new minimal supergravity, the $R^2$ theory and its dual form can both be described by a unique Lagrangian of the form \cite{FKLP,FKR,FP} \begin{eqnarray} {\cal L}=[B(L-S_0 \bar S_0 e^U)]_D+\frac{1}{4}[W^\alpha(U)W_\alpha(U)]_F+{\rm c.c.}\, , \label{ld} \end{eqnarray} where $U$ is an unconstrained vector superfield, $W_{\alpha}(U) =-\frac{1}{4} \overline{\nabla}^2 \, \nabla_{\alpha} ( U ) $ and $B$ is a real-multiplet Lagrange multiplier. It is easy to see that by integrating out $B$ we find that \begin{eqnarray} U=V_R. \label{UVR} \end{eqnarray} Substituting (\ref{UVR}) into (\ref{ld}), we get back the new minimal supergravity action (\ref{NM1}). On the other hand, integrating out $L$ we get \begin{eqnarray} B=T+\bar T, \end{eqnarray} where $T$ is chiral. Hence, eq.(\ref{ld}) can be written in standard old minimal form as \begin{eqnarray} {\cal L}=-[S_0\bar S _0 e^{U}(T+\bar T)]_D+\frac{1}{4}[W^\alpha(U)W_\alpha(U)]_F+{\rm c.c.} \, . \label{NM3} \end{eqnarray} We see that the K\"ahler potential is \begin{eqnarray} K = -3 \ln \left[ (T+\overline{T} ) \right], \label{kk} \end{eqnarray} whereas the term $e^U$ will give rise to a FI term \cite{SW2,BFN}. Indeed, in component form, Lagrangian (\ref{NM3}) is \begin{align} \label{NM6} e^{-1}{\cal{L}}&=\frac{1}{2} R -\frac{1}{4} F^{\mu\nu} F_{\mu\nu} - \frac{3}{ (T+\overline{T})^2 } \partial_\mu T \,\partial^\mu \bar T -\frac{1}{2}g^2. \end{align} In fact, the pure $R^2$ theory can be seen already from the $R+R^2$ theory in the new minimal supergravity, which is described by the action of \cite{FP}, by taking an appropriate limit. Restoring dimensions in the FI term $\xi=gM_P^2$, the limit is $g\to 0$ with $\xi$ fixed, which corresponds to a de-Higgsed phase. The $R+R^2$ theory is described by the master action \cite{FP} \begin{eqnarray} {\cal L} = -[S_0 \bar S_0 e^U U]_D+[B(S_0\bar S _0 e^U-L)]_D+ \frac{1}{4g^2} ( [ W^{\alpha}(V_R) W_{\alpha}(V_R)]_F +{\rm h.c.} ). 
\label{NM101} \end{eqnarray} The rescaling \begin{eqnarray} S_0\to S_0 e^{-\lambda/2}, ~~~B\to B e^{\lambda} , ~~~L\to L e^{-\lambda} \end{eqnarray} clearly gives eq.(\ref{NM1}) in the $\lambda\to \infty$ limit. Therefore, the dual $R^2$ theory in new minimal supergravity can be described as standard supergravity coupled to a massless chiral superfield and a massless vector superfield with a FI term. This theory is an extension of the Freedman model \cite{Freedman} by a massless chiral multiplet. The latter describes a massless vector coupled to supergravity with a positive cosmological constant. \section{Conclusions} Prompted by the interesting proposal of \cite{KLT,KKLR,GKKLR}, we have discussed here the supersymmetric completion of pure $R^2$ gravity. The latter is rigidly scale invariant and propagates a massless graviton and a massless scalar on a de Sitter background \cite{GKKLR}, contrary to the $R+R^2$ theory which has an additional Minkowski phase with a massive scalar (the inflaton). In ${\cal N}=1$ supergravity, one can write two scale invariant superspace densities, an ${\cal R}\bar {\cal R}$ (D-term) and an ${\cal R}^3$ (F-term). If both terms are present, the dual theory in the old-minimal formulation contains the usual scalaron field $T$ together with a conformally coupled scalar $S$ of gravitational origin. In this case the most general superpotential turns out to be a linear combination of $ST$ and $S^3$. However, when only the ${\cal R}^3$ term is present, it turns out that $S$ is auxiliary and, after integrating it out, a superpotential of the form $T^{3/2}$ arises. This theory has an anti-de Sitter rather than a de Sitter phase \cite{FKP,ENOl,KLT}. When matter fields with definite scaling are introduced, it is possible to couple them to supergravity either by a D-term or an F-term coupling to the chiral curvature multiplet ${\cal R}$. In this case, it turns out that the matter fields mix with $S$ but not with $T$ in the superpotential. A similar analysis can be done in the new minimal formulation of ${\cal N}=1$ supergravity, which reveals that the dual theory of the $R^2$ theory is described by a massless chiral multiplet together with a massless vector multiplet with a FI term. The dual theory is thus an extension of the Freedman model by a chiral multiplet. \vskip.5in \noindent {\bf {Acknowledgment}} \vskip.1in \noindent We would like to thank L. Alvarez-Gaume, F. Farakos, K. Kounnas, D. L\"ust, A. Riotto and A.~Van Proeyen for discussions. This research was implemented under the ARISTEIA Action of the Operational Programme Education and Lifelong Learning and is co-funded by the European Social Fund (ESF) and National Resources. M.P. is supported in part by NSF grant PHY-1316452. \vskip.5in
\section{Introduction} \label{introduction} \noindent Malware attacks pose a massive threat to the security of companies and individuals. The average annual cost of malware attacks has increased to \$2.6 million per mid-sized company worldwide \cite{bissell2019}. Anti-malware engines are essential to proactively prevent these attacks \cite{tounsi2018survey}. Most anti-malware engines rely mainly on signature-based approaches that match manually defined patterns against known malicious files \cite{anderson2018learning}. The success of signature-based methods significantly depends on the quality and recency of the pre-defined rules that are often handcrafted by malware analysts. While useful, signature-based engines suffer from two significant deficiencies: first, they could be ineffective in dealing with newly evolved variants of malware, and thus, vulnerable to ‘unseen’ variants known as zero days \cite{chenB2019adversarial}; second, they rely on manually defined rules that cannot keep up with the rapid evolution of malware variants. Due to the deficiencies of signature-based anti-malware engines, researchers have presented machine learning-based malware detection. However, classic machine learning algorithms often require manual feature engineering. Recently, a new stream of Deep Learning (DL)-based malware detectors has emerged that can consume the whole raw malware binary as input and extract the salient features automatically, without relying on manually defined rules or feature engineering. As a result, successful DL-based anti-malware engines have been developed \cite{raff2018malware,fleshman2019non,krvcal2018deep}. However, DL-based anti-malware engines have been shown to be susceptible to small perturbations in their input, as demonstrated by automated attacks known as Adversarial Example Generation (AEG) \cite{demetrio2019explaining}. These attacks yield slightly perturbed malware variants that can mislead the DL-based engines into misclassifying them as benign. Given the crucial role of anti-malware in preventing cyber-attacks and improving the security posture of many organizations, there is a vital need to devise automatic ways to protect anti-malware engines against AEG attacks. Although AEG can negatively affect the performance of DL-based engines, it can also be utilized to further improve their performance. Anti-Malware Evasion (AME) has emerged as a promising method to automate the AEG process for this purpose \cite{chenY2019training}. Malware variants that successfully evade the DL-based malware detectors can be employed in re-training and improving them. Moreover, verifying DL-based anti-malware engines against AEG is a viable defense mechanism \cite{goodfellow2018making}. In effect, automatic emulation of AEG attacks can help strengthen the ability of DL-based engines to detect malware. AME methods often rely on additive approaches, which inject bytes into the malware binary, known as append attacks \cite{suciu2019exploring}. Append attacks are a natural fit for AME because the injected payload is not executed by the operating system and thus does not interfere with the malware's functionality or execution \cite{castroandbiggio2019poster,suciu2019exploring}. Nevertheless, current approaches for launching these attacks suffer from two major issues that limit their applicability. 
First, many attacks assume full knowledge about the anti-malware architecture, its parameters, or the confidence level of the anti-malware response. These assumptions do not apply to realistic attack scenarios in which the information is hidden from the adversary \cite{hu2018black}. Second, since they often rely on brute-force mechanisms to craft new malware variants, they require a high volume of appended bytes (i.e., payload) to evade the anti-malware engine \cite{suciu2019exploring}. Deep learning methods have shown promise in generating smaller and more effective perturbations \cite{kreuk2018adversarial}. Recently, deep language models in particular have shown promise in malware analysis by treating the malware binary sequence as characters in a written language \cite{awad2018modeling}. The generative Recurrent Neural Network (RNN) is a powerful architecture for learning such language models \cite{mogren2019character}. Motivated by the importance of finding the vulnerabilities of current DL-based anti-malware engines, we propose a new threat model that utilizes a novel RNN-based method to automatically construct adversaries for evading several DL-based anti-malware engines simultaneously. To this end, we focus on how to automatically generate evasive malware samples on a large scale. Our study offers a novel approach to directly learn a language model on binary executables and generate benign-looking content without requiring \textit{any} knowledge of the targeted anti-malware. To our knowledge, the proposed method constitutes the first automated attack against DL-based anti-malware engines without these restrictive assumptions. Furthermore, our approach does not require expensive dynamic malware analysis. To foster reproducibility, we made the code and the dataset available to the AI-enabled security research community on GitHub at {\color{blue}\underline{https://github.com/johnnyzn/MalRNN}}. \section{Background and Related Work} \label{background} \subsection{Adversarial Example Generation (AEG)} Deep learning models have recently been shown to fail when an adversary carefully modifies their input data with subtle perturbations. Adversarial examples are instances with meticulous feature perturbations that can cause a target machine learning model to make wrong decisions. Automatically crafting such instances by an adversary against a specific class of target machine learning models is an emerging task in artificial intelligence, referred to as AEG \cite{goodfellow2018making}. The concept of AEG that we use in this study is not to be confused with Automatic Exploit Generation. Verifying machine learning models against AEG is a crucial defense mechanism that not only helps improve the resistance of these models, but also provides insights for designing better machine learning models \cite{goodfellow2018making}. Depending on the information available to the adversary from the targeted machine learning model, AEG is carried out under four possible scenarios \cite{qiu2019review,anderson2018learning}. In the first scenario, known as a white-box attack, the adversary has full access to the structure and parameters of the attack target. The second AEG scenario is referred to as gray-box AEG and pertains to situations in which the parameters of the attacked neural network model are not available but the adversary has access to the features that are important for decision making by the target classifier. 
The third scenario, called black-box AEG, relates to when the adversary cannot access the model's specification, features, or parameters; however, it can obtain a real-valued feedback, also known as a confidence score, from the attack target. Finally, binary black-box AEG applies to a black-box scenario in which not only no a priori knowledge is assumed about the target, but also the adversary does not have access to a real-valued feedback from the attack target. Instead, in the binary black-box scenario, the adversary can only observe a binary response associated with the success or failure of the crafted instance in evading the attack target; this type of attack is therefore also known as a binary black-box attack \cite{anderson2018learning}. Binary black-box AEG is the most restrictive and the most common scenario in the real world \cite{fleshman2019non}, since oftentimes the specification and confidence score of the attack target are unknown. \subsection{Anti-Malware Evasion} Conducting AEG in the malware detection domain gives rise to anti-malware evasion (AME) attacks, a new stream of research that employs AEG to perturb malware samples and generate variants that evade anti-malware engines while still preserving the functionality of the original malware. AME attacks can be categorized based on the type of threat model they implement (i.e., white, gray, black, and binary black-box). Consistent with our goal of proposing a more realistic AME attack scenario, we examine past AME studies that support black-box and binary black-box attacks. Among these, the studies that offer black-box attacks do not require knowing the specifications of the targeted anti-malware \cite{demetrio2020efficient,castroandbiggio2019poster,castroandschmitt2019armed,chenB2019adversarial,park2019generation,suciu2019exploring,hu2018black}. These studies employ a wide range of methods such as genetic algorithms \cite{demetrio2020efficient}, random perturbations \cite{castroandschmitt2019armed,chenB2019adversarial}, dynamic programming \cite{park2019generation}, and RNNs \cite{hu2018black}. However, these methods heavily rely on the confidence score feedback obtained from anti-malware engines to craft their perturbations. The confidence score is a real value ranging between 0 and 1, which indicates the probability that the input is malware. This value is interpreted as the confidence of the decision made by the anti-malware engine. While black-box attacks are more realistic than white-box attacks, the confidence score is \textit{internal} to the anti-malware engine and thus not visible to the adversary. This issue restricts the usability of these methods \cite{rosenberg2019defense}. Binary black-box attacks, on the other hand, do not require observing the anti-malware's confidence score \cite{dey2019evadepdf,fang2019evading,rosenberg2019defense,anderson2018learning}, and thus, are applicable to real attack scenarios. Nevertheless, most current binary black-box AME studies target signature-based anti-malware engines \cite{dey2019evadepdf,fang2019evading,anderson2018learning}. Although Rosenberg et al. \cite{rosenberg2019defense} propose a binary black-box attack on DL-based anti-malware engines, their approach requires an API call sequence obtained from expensive and time-consuming dynamic analysis of the malware binary in a sandbox. We also note that Hu and Tan \cite{hu2018black} propose a black-box AME attack that is based on training an RNN. 
However, similar to \cite{rosenberg2019defense}, their approach requires a sequence of API calls obtained during the dynamic analysis in a sandbox. Another practical limitation of the current binary black-box AME methods relates to the brute-force operationalization of append attacks, which requires a large amount of binary content to be appended to the original malware file (e.g., three times the size of the original malware) \cite{suciu2019exploring,castroandschmitt2019armed}. This results in abnormally large malware variants that can be detected by anti-malware engines due to their suspicious size. Furthermore, most AME studies on attacking DL-based anti-malware engines are designed to only target a specific anti-malware architecture with certain parameter settings. Focusing on evading one specific architecture limits the generalizability of such methods to other anti-malware models. We expect that learning a universal language model from benign executables can facilitate attaining more generalizable AME methods. \subsection{Generative RNN-based Language Models} \label{background_RNN} Constructing a language model amounts to learning a probability distribution over a sequence of strings or characters. Once learned, a language model can be used to \textit{generate} the next element in a given sequence. Neural language models with recurrent architectures have shown promise in generating high-quality sequences in Natural Language Processing (NLP) tasks \cite{kim2016character}. A generative RNN processes sequential input while preserving temporal patterns in the sequence. At each time step $t$, an RNN takes an input $x_t$ and the current hidden state $h_t$ to emit a continuous value. This value is used to generate/predict future elements in the sequence. This generative nature of RNNs makes them suitable for sequence analysis tasks such as language modeling \cite{belletti2019quantifying}, where the elements of the input are time-dependent. Once trained, RNNs yield effective language models on short natural language text and binary content \cite{zuo2018neural} that are able to predict the next element based on a given input sequence. Two major challenges arise in utilizing RNN-based language models on malware content. First, using RNN language models for learning long sequences of malware content is challenging \cite{raff2018malware} due to the large number of time steps, which leads to the attenuation of the error signal during training, widely known as the vanishing gradient problem \cite{goldberg_neural_2017}. Adding a gating mechanism to the input and output of RNN units can address this issue and yields an effective RNN variant, the Gated Recurrent Unit (GRU) \cite{goldberg_neural_2017}. Second, generative RNNs require the input and output sequences to have the same dimensions. While this is useful in machine learning tasks such as part-of-speech tagging, it limits their applicability in the malware domain. Among RNN-based architectures for language modeling, \emph{sequence-to-sequence} models address this issue by adding an additional encoding step before feeding the data to the generative RNN. Sequence-to-sequence models have recently yielded breakthrough results in many sequence analysis tasks such as machine translation \cite{ono-etal-2019-hybrid} and speech recognition \cite{irie2019choice}. They can map an input sequence of a fixed length to a generated output sequence of a different length. 
Given their recent success in other machine learning fields, we expect that sequence-to-sequence RNN-based language models can provide an effective tool to automatically generate benign-looking adversarial examples for AME applications. Accordingly, we propose to construct an RNN language model directly on the binary content (as opposed to the sequence of API calls in a sandbox) to accomplish adversarial malware generation in a binary black-box scenario without requiring dynamic analysis. \section{Proposed Method (M\lowercase{a}lRNN)} \label{method} As noted in Section 2, most black-box AME methods rely on a brute-force approach in which they inject bytes into a malware sample until the generated variant evades the anti-malware. The brute-force nature of these methods leads to crafting variants with large payload sizes, which renders AME less effective. This issue motivates a threat model that limits the volume of injected bytes, as opposed to one that allows adding an indefinite length of perturbations. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{Model_Illustration.png} \caption{Abstract view of M\lowercase{a}lRNN malware evasion architecture} \label{architecture} \end{figure*} \subsection{Threat Model} Consistent with \cite{anderson2018learning}, we define the threat model for launching binary black-box AME attacks against static anti-malware models. Nevertheless, unlike the threat model proposed in \cite{anderson2018learning}, which targets feature-based anti-malware engines, our threat model focuses on launching attacks against DL-based anti-malware engines. The three major components of our threat model are: \begin{itemize} \item{\verb|Adversary's Goal:|} Automatically crafting malware variants that are capable of evading DL-based anti-malware. \item{\verb|Adversary's Knowledge:|} The structure and parameters of the anti-malware model are unknown to the adversary. Furthermore, the adversary does not have access to the confidence score produced by the anti-malware. The only information available to the adversary is whether the generated malware variant can evade the anti-malware or not. \item{\verb|Adversary's Capability:|} Applying functionality-preserving append modifications to the malware binary, while the maximum modification size is limited. We focus on append modifications, since they very often do not interfere with the functionality of the malware. \end{itemize} To realize this threat model, we propose MalRNN, a byte-level sequence-to-sequence generative model that learns a language model on benign samples and injects benign-looking byte sequences into the original malware binary in order to obtain evasive malware variants. \subsection{M\lowercase{a}lRNN Design} \label{designing_malrnn} In accordance with the above threat model, it can be expected that mimicking the patterns of benign executables could be a viable attack approach. We incorporate this insight into our design of MalRNN. Specifically, this is achieved through learning a byte-level language model that can generate benign-looking samples. Such a language model significantly reduces the brute-force trial and error otherwise required to generate evasive variants. Figure \ref{architecture} illustrates the major components of our MalRNN malware evasion architecture. We describe each component in the remainder of this section.
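Before describing the individual components, the overall attack loop implied by this threat model can be summarized by the following schematic sketch. The function names (\verb|generate_bytes| for the MalRNN generator and \verb|detector_verdict| for a single binary black-box query) and the default limits are illustrative placeholders, not part of our released implementation:

\begin{verbatim}
# Schematic binary black-box append-attack loop (illustrative only).
def append_attack(malware_bytes, generate_bytes, detector_verdict,
                  max_attempts=10, max_append_ratio=0.4):
    """Append generated bytes until evasion or the attempt budget is spent."""
    budget = int(len(malware_bytes) * max_append_ratio)   # cap on injected bytes
    for _ in range(max_attempts):
        payload = generate_bytes(malware_bytes, budget)[:budget]
        candidate = malware_bytes + payload           # append-only modification
        if detector_verdict(candidate) == 0:          # 0 = evasion, 1 = detection
            return candidate                          # functionality checked later
    return None                                       # move on to the next sample
\end{verbatim}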
\subsection{Data Acquisition} Developing M\lowercase{a}lRNN requires two datasets of binary executables: 1) a malware executable dataset that serves as the initial seed to generate evasive malware variants, and 2) a benign executable dataset to train the language model. To obtain the former dataset, we compiled an up-to-date collection of recent real malware samples from the last three years. The dataset includes over 6,000 malware binaries from eight common malware categories. The distribution of the dataset is described later. To obtain the benign executable dataset, following \cite{raff2018malware}, we collected 4,329 benign executables from a clean installation folder of Microsoft Windows. In both datasets, we converted the binary input to hexadecimal characters suitable for processing by a character-level language model. Furthermore, to avoid inefficient training with long input byte sequences in malicious and benign executables, we employed systematic sampling. This process samples the input binary sequence at fixed intervals to reduce the input size for the generative sequence-to-sequence RNN language model. \subsection{Generative Sequence-to-Sequence RNN Language Model} Our model employs a character-level sequence-to-sequence RNN to learn a language model from \emph{benign} binaries. We adopt Gated Recurrent Units (GRU) \cite{goldberg_neural_2017} as the building block of our sequence-to-sequence model to alleviate the gradient vanishing problem in processing long sequences \cite{dey2017gate}. M\lowercase{a}lRNN aims to maximize the adversarial loss \cite{madry2017towards} of the anti-malware model, which is formulated in the following equation: \begin{equation} \underset{\delta \in \Delta}{\operatorname{maximize}}\ \mathcal{L}(\mathcal{H}_\theta(x+\delta),y) \end{equation} where $x$ is the input malware sample, $\mathcal{H}_\theta$ is the attacked DL-based anti-malware model parameterized by $\theta$, and $\delta \in \Delta$ denotes the allowable perturbations that preserve functionality (appending byte sequences in our case). The loss function $\mathcal{L}$ represents binary cross-entropy loss in most DL-based anti-malware engines. However, in a (binary) black-box setting the exact loss function of the anti-malware is not accessible and thus cannot be incorporated into the model's loss. Accordingly, direct maximization of Eq. 1 is impractical. The key idea behind our model is that maximizing the loss in Eq. 1 translates to minimizing the loss of an adversarial model in generating benign-looking samples that can bypass the anti-malware. The middle box in Figure \ref{architecture} shows our character-level generative RNN for learning such an adversarial model, serving as a binary content generator. Inspired by recent sequence-to-sequence RNNs in language modeling, M\lowercase{a}lRNN 's generator consists of two main RNN components: an encoder RNN and a decoder RNN. The encoder aims to encapsulate the salient features of the input byte sequence into a feature vector. This vector is obtained from the final hidden state of the last RNN unit in the encoder architecture and is fed to the decoder RNN (shown in the vertical inner box in Figure \ref{architecture}). The encoder's current hidden state $h_t$ is obtained as a function of both its previous state $h_{t-1}$ and the current input element $x_t$.
More formally, $h_t$ is given by Equation \ref{encoder_hidden}: \begin{equation} h_t = f(W^{h}h_{t-1} + W^{x}x_t) \label{encoder_hidden} \end{equation} where $W^{h}$ denotes the network weights between the hidden units and $W^{x}$ represents the network weights between the hidden units and the input elements. Function $f$ is a nonlinear activation such as $\tanh(\cdot)$. The decoder receives the feature vector from the encoder and reconstructs the byte sequence that minimizes a cross-entropy loss between the generated bytes and benign samples ($\mathcal{L}_{RNN}$) at each time step. Unlike the encoder, the decoder's hidden state at each time step is only a function of the previous hidden state and is given by Equation \ref{decoder_hidden}: \begin{equation} h_t = f(W^{h}h_{t-1}) \label{decoder_hidden} \end{equation} Through this training, the generator learns to append benign-looking binary content to the malware binary in order to maximize the adversarial loss and construct an evasive malware variant. To craft a candidate malware variant, after completion of each training iteration, the generated byte sequence from MalRNN's generator is attached to the original malware. The candidate malware variant is checked against one or more black-box anti-malware models to assess if it can evade them. The output of the anti-malware is binary, with 1 and 0 denoting detection and evasion, respectively. If the generated malware variant successfully evades the detectors, the candidate sample is saved as an evasive variant and will be further processed to ensure its functionality. Such a model is suitable for launching the binary black-box attacks described in our threat model since it depends neither on the gradients obtained from a differentiable anti-malware model, as in \cite{castroandbiggio2019poster,kolosnjaji2018adversarial}, nor on the confidence score received from the anti-malware engine, as in \cite{chen2017zoo}. This amounts to achieving an adversary that is agnostic to the targeted anti-malware's deep learning architecture. It is worth noting that, following \cite{anderson2018learning}, in order to comply with the binary black-box attack scenario, the confidence score provided by the anti-malware architectures was masked to mimic a binary output from the anti-malware. In each iteration, M\lowercase{a}lRNN is trained on a sample of benign executables and generates a byte sequence, based on a given malware sequence, that is appended to the end of the original malware to form a new variant, which is subsequently tested against the targeted black-box anti-malware models. This process repeats until the new variant evades the anti-malware or the maximum number of attempts is reached. If the maximum number of attempts for a specific input sample is reached, the model proceeds to the next malware sample. We implemented M\lowercase{a}lRNN using PyTorch. M\lowercase{a}lRNN was run on a single Nvidia RTX 2080 GPU with 4,352 CUDA cores and 8 GB internal memory. The code is designed to run in both GPU and CPU environments. The data comprises the full testbed, including the benign executables for training the language model and the malware binary dataset. M\lowercase{a}lRNN 's specifications, including the architecture and (hyper)parameter settings, are given in Appendix A.
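To make this architecture concrete, the following is a minimal PyTorch sketch of a byte-level GRU encoder--decoder of the kind described above. The layer sizes and other settings here are illustrative placeholders rather than the values used in our experiments (those are reported in Appendix A), and for brevity the decoder is conditioned on teacher-forced target bytes, a common simplification of the hidden-state-driven decoder of Equation \ref{decoder_hidden}:

\begin{verbatim}
import torch
import torch.nn as nn

class ByteSeq2Seq(nn.Module):
    """Schematic byte-level GRU encoder-decoder (illustrative sizes only)."""
    def __init__(self, vocab_size=257, emb_dim=64, hidden_dim=128):
        super().__init__()                # 256 byte values plus one padding symbol
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encoder: summarize the input byte sequence in its final hidden state.
        _, h = self.encoder(self.embed(src))
        # Decoder: generate the output sequence conditioned on that state.
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)          # logits over the byte vocabulary

model = ByteSeq2Seq()
criterion = nn.CrossEntropyLoss()         # L_RNN against benign byte targets
src = torch.randint(0, 257, (8, 256))     # a toy batch of encoded byte sequences
tgt = torch.randint(0, 257, (8, 128))
loss = criterion(model(src, tgt).reshape(-1, 257), tgt.reshape(-1))
loss.backward()
\end{verbatim}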
\subsection{Ensuring the Functionality of Generated Malware Variants} \label{ensure_func} We used VirusTotal’s API, which supports large-scale malware analysis, for assessing the functionality of malware samples after modification. VirusTotal provides a malware behavior report that includes static and dynamic analysis of the malware sample. These reports describe network behavior, file access behavior, etc. Using the VirusTotal API, we compare the behavior reports for the modified evasive variants and the original (i.e., unmodified) malware samples. Through this process, we ensure that the key parts of the VirusTotal report stay the same after modification, showing that the modified malware samples can still be executed on the operating system and are fully functional. In total, 6,037 malware samples in our dataset (more than 90\% of the collected binaries) were verified to remain functional after appending bytes to their overlay; the remaining non-functional samples were excluded from the evaluation. \section{Implementation and Evaluation} \label{evaluation} \subsection{Testbed and Evaluation Criterion} \label{evaluation_criteria} We obtained an academic license of VirusTotal and extracted 6,307 recent malware binaries from the past three years (2017-2019) in eight categories, including botnet, ransomware, spyware, adware, virus, dropper, backdoor, and rootkit. Table \ref{testbed} shows the distribution of the dataset by malware category. To gain insight into each specific malware category, we evaluate M\lowercase{a}lRNN 's performance on each category separately. Utilizing the functionality assessment process described earlier, we checked all modified malware binary samples to ensure they retain their functionality after modification. \begin{table}[ht] \small \caption{Breakdown of testbed based on different malware categories} \begin{tabularx}{0.45\textwidth} { | >{\centering\arraybackslash}X | >{\centering\arraybackslash}p{3cm} | >{\centering\arraybackslash}X | } \hline \textbf{Malware Category} & \textbf{Examples} & \textbf{\# of Malware Samples} \\ [0.5ex] \hline Adware & eldorado, razy, gator & 1,947\\ \hline Backdoor & lunam, rahack, symmi & 678\\ \hline Botnet & virut, salicode, sality & 526 \\ \hline Dropper & dunwod, gepys, doboc & 904\\ \hline Ransomware & vtflooder, msil, bitman & 900\\ \hline Rootkit & onjar, dqqd, shipup & 53\\ \hline Spyware & mikey, qqpass, scar & 640 \\ \hline Virus & nimda, shodi, hematite & 659\\ \hline Total & All subtypes & 6,307\\ \hline \end{tabularx} \label{testbed} \end{table} As our attack target, we selected three renowned DL-based static malware detectors. All three are cited frequently by security researchers and are made available by their authors through GitHub repositories. \begin{itemize} \item{\verb|MalConv|} \cite{raff2018malware} is among the most successful DL-based malware detectors, developed through a collaboration between the Laboratory for Physical Sciences (LPS) and NVIDIA. The model incorporates a deep convolutional neural network architecture that is trained on approximately half a million malware binaries and achieves an area under the ROC curve (AUC) of 98.5\% on an unseen test set. \item{\verb|NonNeg|} \cite{fleshman2019non} is a successor of MalConv developed by LPS, which modifies MalConv's architecture with non-negative weight constraints. The model was trained on 2 million malware binaries and obtained an AUC of 95.3\% on a holdout sample.
\item{\verb|ConvNet|} \cite{krvcal2018deep} was developed by Avast research group and features a deeper neural network than MalConv and NonNeg, with a total of eight layers. It was trained on 20 million proprietary malware samples from Avast and achieved 70.4\% AUC. \end{itemize} Both MalConv and NonNeg were featured as recent malware detector architectures in an AME competition hosted by Endgame in 2019 \cite{anderson_machine_2019}. It is important to note that all malware samples in our dataset were recognized as malware by all three anti-malware models. Following \cite{fleshman2019non,anderson2018learning}, we adopt evasion rate as our evaluation criterion. The evasion rate of an AME method against a given anti-malware is defined as follows: \begin{equation} Evasion\ Rate = \frac{|E\cap F|}{N} \end{equation} where $E$ and $F$ denote the sets of evasive and functional modified malware obtained from the AME method, respectively. $N$ denotes the total number of malware samples given as input to the AME method. This statistic yields the efficacy of a given AME method in evading a malware detector. We use this metric to evaluate M\lowercase{a}lRNN against other benchmark methods later in this section. \subsection{Experiment Setup} \label{experiment_setup} We conduct three different experiments. In the first experiment, we examine the number of attempts M\lowercase{a}lRNN requires to generate evasive variants. In the second experiment, we measure the changes of MalRNN's performance by varying the append size for each malware category. Finally, in the third experiment, we compare M\lowercase{a}lRNN 's performance on all three malware detectors to that of other AME benchmarks for a fixed append volume (determined in our second experiment). For comparison, we identified two state-of-the-art binary black-box and one black-box AME benchmarks: \begin{itemize} \item{\verb|Random Append (RA)|} \cite{suciu2019exploring,castroandschmitt2019armed}: Appends sequences of random bytes to the end of a malware sample until the evasion occurs. \item{\verb|Benign Append (BA)|} \cite{castroandbiggio2019poster}: Appends random sections from benign files to the end of a malware sample until evasion occurs. \item{\verb|Enhanced Benign Append (EBA)|} \cite{chenB2019adversarial}: Appends specific byte sequences that lower the confidence score of the anti-malware in a brute-force manner. \end{itemize} It is worth noting that since EBA requires access to the confidence score, it is qualified as black-box and has an unfair advantage compared to the other two benchmarks and our proposed method. The following subsections describe each experiment and its corresponding results in detail. \subsection{Can M\lowercase{a}lRNN Learn to Generate Evasive Variants?} It is often desirable to verify if a machine learning model learns during training by monitoring the training loss or number of iterations required to solve the problem at hand. In order to assess whether M\lowercase{a}lRNN learns to generate evasive bytes, we monitor the number of attempts (i.e, iterations) required for evasion during the training of M\lowercase{a}lRNN (Figure \ref{training_iterations}). 
\begin{figure}[h] \includegraphics[width=0.45\textwidth]{Training_Iteration_Illustration.png} \caption{Running average of the number of iterations required to bypass the anti-malware engine for each sample.} \label{training_iterations} \end{figure} As seen in Figure \ref{training_iterations}, when training starts M\lowercase{a}lRNN needs around 20 attempts to modify a given malware sample such that it can evade the anti-malware. However, as the training proceeds, this number significantly decreases. As a result, at the latest stages of training (after processing almost 300 malware samples) the number of required attempts reduces to around eight. This behavior is consistent among all eight categories and suggests that M\lowercase{a}lRNN improves during the training process and learns to generate evasive content. \subsection{How Does the Append Size Affect the Evasion Rate?} As noted, very large append sizes can defeat the purpose of developing an effective AME method that is able to accomplish an evasion attack through \textit{minimal} modification of the original malware. As such, in practice, it is crucial to limit the maximum append size of AME methods. To empirically observe the effect of append size on the evasion rate, we track the changes in evasion rate for various append sizes in the virus category as it is one of the most damaging malware types. Table \ref{append_volume} summarizes the results. \begin{table}[h] \caption{Evasion rates and number of required training iterations obtained at different append sizes} \small \begin{tabularx}{0.5\textwidth} { | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | } \hline \textbf{AVG Append Size (\%)} & \textbf{AVG Append Size (KB)} & \textbf{\# of Evaded Samples} & \textbf{Evasion Rate} & \textbf{\# of Training Iterations} \\ \hline 5 & 7.5 & 763 & 82.4\% & 14,357\\ \hline 10 & 15 & 862 & 93.09\% & 8,588\\ \hline 20 & 30 & 882 & 95.25\% & 5,133\\ \hline 40 & 66 & 906 & 97.84\% & 3,169\\ \hline 80 & 132.8 & 910 & 98.27\% & 3,065\\ \hline 100 & 166 & 912 & 98.49\% & 3,205\\ \hline 120 & 199.2 & 919 & 99.24\% & 2,840\\ \hline 180 & 298.8 & 921 & 99.46\% & 2,406\\ \hline \end{tabularx} \label{append_volume} \end{table} \begin{table*}[t] \small \caption{Comparing M\lowercase{a}lRNN 's performance on three renowned DL-based anti-malware detectors with black-box AME benchmark methods across eight malware categories} \begin{center} \begin{tabularx}{1\textwidth} { | >{\centering\arraybackslash}p{1.1cm} | >{\centering\arraybackslash}p{1.1cm} | >{\centering\arraybackslash}X | >{\centering\arraybackslash}p{1.3cm} | >{\centering\arraybackslash}p{0.9cm} | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}p{0.8cm} | >{\centering\arraybackslash}p{0.85cm} |} \hline \textbf{Detector} & \textbf{Method} & \textbf{Adware} & \textbf{Backdoor} & \textbf{Botnet} & \textbf{Dropper} & \textbf{Ransomware} & \textbf{Rootkit} & \textbf{Spyware} & \textbf{Virus} & \textbf{Average} \\ \hline \multirow{3}{*}{\textbf{Malconv}} & RA & 14.34\% & 9.88\% & 8.56\% & 14.16\% & 11.78\% & 13.21\% & 10.16\% & 11.53\% & 12.29\% \\ \cline{2-11} & BA & 49.15\% & 41.30\% & 20.34\% & 41.92\% & 38.44\% & 11.32\% & 35.31\% & 28.22\% & 39.43\% \\ \cline{2-11} & EBA & \textbf{75.55\%} & 68.29\% & 46.58\% & 69.69\% & \textbf{80.22\%} & 56.60\% & 65.31\% & 61.76\% & 69.54\% \\ 
\cline{2-11} & \textbf{MalRNN} & 68.75\% & \textbf{72.72}\% & \textbf{53.66}\% & \textbf{90\%} & 64.28\% & \textbf{69.23\%} & \textbf{80\%} & \textbf{85.71\%} & \textbf{73.24\%} \\ \cline{2-11} \hline \multirow{3}{*}{\textbf{NonNeg}} & RA & 0.67\% & 0.44\% & 0.19\% & 1.00\% & 0.44\% & 5.66\% & 0.63\% & 0.76\% & 0.67\% \\ \cline{2-11} & BA & 96.61\% & 99.41\% & 99.05\% & 94.91\% & 99.00\% & 90.57\% & 93.91\% & 88.47\% & 96.04\% \\ \cline{2-11} & EBA & 96.10\% & 94.40\% & 95.25\% & 98.78\% & 96.56\% & \textbf{100\%} & 94.38\% & 89.38\% & 95.45\% \\ \cline{2-11} & \textbf{MalRNN} & \textbf{99.87\%} & \textbf{100\%} & \textbf{100\%} & \textbf{100\%} & \textbf{99.87\%} & \textbf{100\%} & \textbf{100\%} & \textbf{100\%} & \textbf{99.97\%} \\ \cline{2-11} \hline \multirow{3}{*}{\textbf{ConvNet}} & RA & 30.71\% & 25.96\% & 69.01\% & 26.77\% & 10.67\% & 16.98\% & 46.88\% & 54.17\% & 33.95\% \\ \cline{2-11} & BA & 33.23\% & 27.43\% & 66.16\% & 35.62\% & 17.67\% & 35.85\% & 47.03\% & 49.92\% & 36.64\% \\ \cline{2-11} & EBA & 38.46\% & 35.29\% & 43.75\% & 47.83\% & 24.00\% & 45.28\% & 46.3\% & 51.22\% & 40.03\% \\ \cline{2-11} & \textbf{MalRNN} & \textbf{76.49\%} & \textbf{100\%} & \textbf{87.1\%} & \textbf{69.23\%} & \textbf{35.56\%} & \textbf{64.15\%} & \textbf{73.8\%} & \textbf{70.59\%} & \textbf{72.03\%} \\ \cline{2-11} \hline \multirow{3}{*}{\textbf{All Three}} & RA & 0.00\% & 0.00\% & 0.00\% & 0.55\% & 0.00\% & 1.89\% & 0.00\% & 0.00\% & 1.49\% \\ \cline{2-11} & BA & 15.56\% & 14.31\% & 5.30\% & 5.63\% & 9.33\% & 5.66\% & 3.75\% & 0.00\% & 8.51\% \\ \cline{2-11} & EBA & 23.52\% & 23.15\% & 20.53\% & 19.58\% & 23.44\% & 15.09\% & 34.69\% & 22.91\% & 22.86\% \\ \cline{2-11} & \textbf{MalRNN} & \textbf{34.77\%} & \textbf{54.28\%} & \textbf{23.57\%} & \textbf{46.57\%} & \textbf{29.33\%} & \textbf{41.51\%} & \textbf{45.47\%} & \textbf{34.75\%} & \textbf{38.78\%} \\ \hline \end{tabularx} \end{center} \label{experiment_results} \end{table*} Two major observations are made from Table \ref{append_volume}. First, it is seen that by appending only 7.5 KB on average to an original malware binary, MalRNN is able to achieve the evasion rate of 82.4\%. This speaks to the effectiveness of the bytes generated by the proposed method, as will be thoroughly investigated in our third experiment. Second, and more importantly, as the append size increases from 5\% to 40\% of the original malware size, the evasion rate rapidly increases to 97.84\%. After this point, the rate of increase almost stabilizes. Also, the total number of training iterations required to evade the anti-malware decreases and exhibits the same behavior at 40\% append size. Figure \ref{evasion_v_append} visualizes this behavior by plotting the evasion rate against the changes in append size. \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{Append_Volume_Illustration.png} \caption{Evasion rate vs. append volume} \label{evasion_v_append} \end{figure} The 40\% append size has been shown as an elbow point, which denotes where the evasion rate stops to increase significantly. We thus fix the append size for all methods in the benchmark evaluations to 40\%. Although our proposed model yields satisfactory results at much lower append sizes (i.e., 5\% and 10\%), we selected 40\% append size in favor of the benchmark methods involved in our third experiment. 
Moreover, even though our model implements the black-box threat model, the amount of bytes it appends is comparable to that of the white-box gradient-based attacks in \cite{kolosnjaji2018adversarial}, which append around 1\% to achieve a 60-70\% evasion rate. \subsection{How Does MalRNN Compare to the State-of-the-Art Black-Box AME Benchmarks?} We conduct four benchmark evaluations, each focusing on specific malware detectors. The first three evaluations compare M\lowercase{a}lRNN 's ability to conduct targeted AME attacks on a specific DL-based malware detector (i.e., MalConv, NonNeg, or ConvNet) individually. The last benchmark evaluation targets M\lowercase{a}lRNN 's capability to evade \textit{all three} anti-malware engines simultaneously. That is, evasion occurs only if the variant can successfully evade all three malware detectors. Such a benchmark evaluation allows us to verify M\lowercase{a}lRNN 's generalizability to different DL-based models. To provide evasion rates specific to each category, we conducted the benchmark evaluations separately on each malware category. Table \ref{experiment_results} summarizes the results of all four benchmark evaluations. From Table \ref{experiment_results}, it is observed that M\lowercase{a}lRNN outperforms all other AME benchmarks in almost all of the categories for all three malware detectors. Interestingly, not only does MalRNN outperform its binary black-box AME counterparts, but it also outperforms EBA, which has access to the confidence scores, with the exception of adware and ransomware for evading MalConv. In addition to the comparison with AME benchmarks, it is helpful to measure the performance of M\lowercase{a}lRNN on evading all three DL-based malware detectors across all eight malware types. Figure \ref{experiment_graph} illustrates M\lowercase{a}lRNN 's evasion rate for collectively evading all three malware detectors for each malware type. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{Final_Evasion_Rate_Illustration.png} \caption{M\lowercase{a}lRNN 's averaged evasion rate against three DL-based malware detectors across eight malware types} \label{experiment_graph} \end{figure} As shown in Figure \ref{experiment_graph}, ransomware and botnet have the lowest overall evasion rates, with 29.33\% and 23.57\%, respectively, which may suggest that these categories are less sensitive to AME append attacks. This could be attributed to the fact that ransomware binaries have significant sections dedicated to data encryption routines, which could be uniquely distinguished by DL-based classifiers. Similarly, botnet binaries are often unique in the sense that they incorporate a considerable amount of code devoted to establishing and maintaining the network of malicious devices on the internet. Such unique characteristics can render adversarial modifications less effective in causing these types of malware to evade. In contrast, backdoor and dropper have the highest evasion rates, with 54.28\% and 46.57\%, respectively. This suggests that, overall, DL-based anti-malware models may be more susceptible to modifications of backdoor and dropper samples. This aligns with the fact that backdoor samples often contain malicious binary code that is embedded into a variety of benign programs to bypass regular authentication and provide remote unauthorized access to a system. As a result, their content may be similar to non-detrimental content that is more likely to evade the DL-based anti-malware models.
Similarly, droppers are also benign-looking malicious tools that are designed to embed other hidden malicious code (e.g., a virus) to bypass anti-malware engines. Consequently, both backdoor and dropper malware types are difficult to identify for DL-based anti-malware models that operate on the entire binary content with large portions of benign code. This finding aligns with the intuition that crafting adversarial examples for malware executables that are already embedded in benign executables could be less difficult than doing so for other malware categories with larger portions of conspicuously malicious content (e.g., botnet and ransomware). \section{Conclusion and Future Work} Recently, static DL-based malware detectors have shown promise in detecting unseen malware without manual rule definition and feature engineering. However, they can themselves be vulnerable to AME attacks. We can strengthen these anti-malware engines by emulating AME attacks, and automating this process is crucial for improving anti-malware engines at a higher pace. Current approaches to this end unrealistically assume full or partial knowledge about the targeted anti-malware. In this study, by treating adversarial malware generation as language modeling, we developed a novel method, MalRNN, to craft adversarial examples without requiring any knowledge of the targeted anti-malware. MalRNN directly learns a language model on binary executables and generates effective benign-looking byte sequences that can evade several DL-based anti-malware models simultaneously. MalRNN depends neither on the gradients of a differentiable anti-malware model, nor on the confidence score received from the anti-malware engine. The results signify the vulnerability of DL-based anti-malware models to adversarial append attacks and reveal that significant future research in this area is needed. One promising direction for devising more sophisticated AME methods is extending the perturbations from append attacks to editing modifications. However, it should be noted that it is often harder to ensure the functionality of malware variants obtained from editing modifications as opposed to additive ones. Due to the nature of our study, it is crucial to attend to its potential for dual use. MalRNN contributes to emulating adversarial attacks as a viable defense mechanism to gain insight into the adversary’s capabilities. Though the ultimate goal of our study is reinforcing the robustness of anti-malware engines, precautionary measures should be taken to monitor and prevent large-scale misuse of such AI techniques during the deployment of the technology. Software-as-a-service deployment is one way to provide such monitoring, so that the benevolent usage of the technology outweighs its malicious usage. \section{Acknowledgments} We would like to thank Hyrum Anderson from Microsoft for valuable discussions and feedback. We also thank VirusTotal for providing the malware dataset and granting access to the APIs for functionality assessment. This material is based upon work supported by the National Science Foundation (NSF) under the grants SaTC-1936370, CICI-1917117, and SFS-1921485.
\section{Introduction} The discovery that the rate of expansion of the Universe is apparently accelerating was one of the key advances in physical cosmology in the 1990s (\citealt{Riess1998, Perlmutter1999}). Understanding the nature of the dynamically dominant dark energy, which is believed to be responsible for this behaviour, is one of the biggest challenges now facing cosmologists. Over the past decade our knowledge of the basic cosmological parameters, which describe the content of the Universe, its expansion history and ultimate fate has improved tremendously. This progress is the result of advances on two fronts: the advent of datasets which have provided fresh views of the Universe with unprecedented detail and the development of the theoretical machinery required to interpret these new measurements. Currently, the values of many cosmological parameters are known to an accuracy of around $10\%$ (albeit with caveats regarding degeneracies between certain combinations of parameters and also regarding the precise number of parameters that are allowed to vary in the cosmological model; see, for example, \citealt{Sanchez2006}). The cold dark matter (CDM) model has emerged as the most plausible description of our Universe. In the most successful version of this model, more than $70\%$ of the density required to close the Universe is in the form of dark energy. Currently, there is no model which can reconcile the magnitude of the dark energy component with the value expected from particle physics arguments. A simple phenomenological description of the dark energy is provided by the equation of state that relates its pressure, $P$, and density, $\rho$, which is encapsulated in the parameter $ w = P/\rho c^{2}$. If the dark energy has the form of the cosmological constant, $w=-1$. The indications are that the dark energy now has a form close to that expected for a cosmological constant (\citealt{Riess2004, Sanchez2006}). However, in the absence of a theoretical model for the dark energy, it is possible that the equation of state could depend on space and/or time. A whole range of experiments and surveys is being planned which number amongst their goals determining the equation of state of the dark energy as a function of redshift (for a discussion, see \citealt{Albrecht2006} and \citealt{Peacock2006}). Several techniques are being considered, which are sensitive to the influence of the dark energy on various features of the cosmological world model. These include the Hubble diagram of Type IA supernovae, counts of clusters of galaxies, the weak gravitational lensing pattern of faint galaxies and the measurement of the baryonic acoustic oscillation scale in the matter distribution as a function of redshift. The measurements and data analysis required to obtain useful constraints on the equation of state parameter are so demanding, and so open to potential systematic errors, that it is necessary to pursue as many different avenues as possible. In this paper, we focus on the test using the baryonic acoustic oscillations (BAO). The BAO is the name given to a series of peaks and troughs on scales on the order of $100 \,h^{-1}\,$Mpc, imprinted on the power spectrum of matter fluctuations prior to the epoch of last scattering, when the matter and radiation components of the Universe were coupled (\citealt{PY1970}). 
The BAO are the counterpart of the acoustic peaks seen in the power spectrum of the temperature of the cosmic microwave background radiation, though they have a different phase and a much smaller amplitude (\citealt{SZ1970, Press1980, Hu1996, EH1998, Meiksin1999}). The wavelength of the BAO is related to the size of the sound horizon at recombination. This does not depend on the amount or nature of the dark energy, but on the physical density of matter ($\Omega_m h^2$) and baryons ($\Omega_b h^2$). Given the values of these parameters, for example, from the cosmic microwave background or large scale structure data, the sound horizon scale is known and can be treated as a standard ruler. The {\it apparent} size of this feature in the power spectrum of galaxies or galaxy clusters does depend on the dark energy and its equation of state through the angular diameter distance-redshift relation (e.g. \citealt{BG2003, HuHaiman2003}). BAO in the galaxy distribution were first glimpsed in the early stages of the ``2-degree-field galaxy redshift survey'' (\citealt{Percival2001}) and finally detected in the power spectrum of the completed 2dFGRS (\citealt{Cole2005}). The equivalent feature, a spike, was also found in the correlation function measured from the luminous red galaxy (LRG) sample of the Sloan Digital Sky Survey (SDSS) (\citealt{Eisenstein2005}). Cole et~al. used the BAO to constrain the parameter combination ($\Omega_{\rm M}/\Omega_{\rm b}$, $\Omega_{\rm M}$) (where $\Omega_{\rm M}$ and $\Omega_{\rm b}$ denote the matter and baryon density parameters respectively). Eisenstein et~al. used the location of the spike in the correlation function to constrain the absolute distance to the median redshift of the SDSS LRG sample and hence constrained the value of $\Omega_{\rm M}$. H\"{u}tsi (2006a,b) carried out a power spectrum analysis of a similar LRG sample, and combined this measurement with other datasets to constrain the values of cosmological parameters. More recently, the BAO have been extracted from the power spectrum measured from a much larger sample of SDSS LRGs to constrain $\Omega_{\rm M}$ and $\Omega_{\rm b}/\Omega_{\rm M}$ (\citealt{Tegmark2006, Blake2007, Padmanabhan2007, Percival2007}). To date, measurements of the BAO have only yielded constraints on the dark energy equation of state when combined with other datasets, such as the spectrum of temperature fluctuations in the microwave background, or when restrictive priors have been adopted on certain parameters, such as the Hubble constant. The bulk of the work in the literature on the usefulness of the BAO has relied upon linear perturbation theory to assess the detectability of the features and to forecast the errors on the recovered value of $w$ (\citealt{BG2003, HuHaiman2003, GB2005, BlakeBridle2005, Blake2006, Parkinson2007}). There are, however, a range of dynamical and statistical effects which can alter the appearance of the power spectrum relative to the linear theory prediction, even on the scale of the BAO, which we review in this paper (\citealt{Seo2003, Angulo2005, Millennium, Seo2005, Eisenstein2006}). Some simulation work has been done to study these effects, mostly using computational cubes of side $500 \,h^{-1}\,$Mpc (\citealt{Seo2003, Seo2005, Millennium, Eisenstein2006}). These are only a small factor (2-3) bigger than the scale of the fluctuations of interest.
Calculations with small boxes are subject to large sampling fluctuations and may even miss some features of the nonlinear growth of large scale fluctuations through the absence of long wavelength density fluctuations (\citealt{Crocce2006b}). Very recently, larger simulation volumes have been used, of around a cubic gigaparsec and larger (\citealt{Schulz2006, Huff2006, Angulo2005, Koehler2006}). However, such studies have tended to have relatively poor mass resolution, making it difficult to model galaxies without resorting to simplified biasing prescriptions (e.g. \citealt{Cole1998}). Given the significant commitment of resources required by the proposed galaxy surveys and the level of precision demanded by the BAO approach, it is imperative to ensure that accurate theoretical predictions are available both to help in the design of the survey strategy and to extract the maximum amount of information from the observations. This is a tough challenge computationally, because it requires ultra-large volume N-body simulations with sufficient mass resolution to identify the haloes likely to host the galaxies to be seen in the surveys, and a realistic model to populate these haloes with galaxies. In this paper, we use a combination of suitable N-body simulations and a semi-analytical model of galaxy formation to assess the visibility of the BAO. In Section 2, we describe the suite of N-body simulations used and outline the semi-analytical model. Section 3 gives a blow-by-blow account of how the power spectrum changes relative to the simple prediction of linear perturbation theory, as additional layers of realism are added to the modelling, starting with dark matter and ending with galaxies. We set out our approach for constraining the dark energy equation of state in Section 4, and present our results in Section 5. We give our conclusions in Section 6. \section{Method} In this section, we introduce the theoretical tools used to produce synthetic galaxy catalogues. First, we describe the N-body simulations (\S 2.1) which consist of a high resolution run (\S 2.1.1) and an ensemble of lower resolution runs (\S 2.1.2). Next, we discuss the measurement of power spectra from discrete distributions of objects and use the ensemble of low resolution simulations to estimate the errors on the power spectrum measurement (\S 2.1.3). In the second part of this section, we explain how a galaxy formation model is used to populate the high resolution N-body simulation with galaxies (\S 2.2). \begin{table} \begin{center} \begin{tabular}{rrrrrrrrr} \hline \hline & $N_{\rm p}$ & $m_{\rm dm}$ & $\epsilon$ \\ & & [$h^{-1}~$M$_\odot$] & [$h^{-1}$~kpc] \\ \hline \textsc{BASICC} & $3.03\times10^{9}$ & $5.49\times10^{10}$ & $50$ \\ \textsc{L-BASICC} & $8.99\times10^{7}$ & $1.85\times10^{12}$ & $200$ \\ \hline \end{tabular} \end{center} \caption{The values of some of the basic parameters used in the simulations. The columns are as follows: (1) The name of the simulation. (2) The number of particles. (3) The mass of a dark matter particle. (4) The softening parameter used in the gravitational force. In both cases, the length of the computational box is $1340 \,h^{-1}\, {\rm Mpc}$, and the same cosmological parameters are used, as given in Section 2.1. } \label{tab:params} \end{table} \subsection{N-Body Simulations} \begin{figure} \includegraphics[width=8.5cm]{eps/f1.eps} \caption{ A test of the choice of starting redshift used in the N-body simulations. 
The upper panel compares the power spectrum measured at $z=15$ in the {\tt BASICC} when the simulation is started at $z=63$ (dashed red curve) and at $z=127$ (solid blue curve). The power spectra plotted in the upper panel have been divided by the linear perturbation theory prediction for the dark matter power spectrum at $z=15$. The lower panel shows the ratio of the power spectrum measured from the simulation started at redshift 63 to that measured from the run which started at redshift 127. \label{fig:trans} } \end{figure} The N-body method is a long-established computational technique which is used to follow the growth of cosmological structures through gravitational instability (see, for example, the reviews by Bertschinger 1998 and Springel, Frenk \& White 2006). Our goal in this paper is to simulate the formation of structure within a sufficiently large volume to follow the growth of fluctuations accurately on the scale of the BAO, and with similar statistics for power spectrum measurements to those expected in forthcoming surveys. At the same time, we require a mass resolution which is adequate to identify the dark matter haloes likely to host the galaxies which will be seen in these surveys. To achieve these aims, we use a memory-efficient version of the {\tt GADGET-2} code of \cite{Springel2005}, which was kindly provided to us by Volker Springel and the Virgo Consortium. We use two types of calculation: a high resolution simulation, labelled the ``Baryon Acoustic Simulation at the ICC'' or {\tt BASICC}, which is able to track galactic haloes, and an ensemble of lower resolution simulations, labelled {\tt L-BASICC}, which we use to study the statistics of power spectrum measurements on large scales. Here, we describe some of the common features of the simulations, before moving on to outline specific details in \S 2.1.1 and \S 2.1.2. We adopt a $\Lambda$CDM cosmology with the same parameters used in the Millennium Simulation (\citealt{Millennium}), which are broadly consistent with the latest constraints from the cosmic microwave background data and large scale structure measurements (\citealt{Sanchez2006, Spergel2007}). The values of the parameters are: the matter density parameter, $\Omega_{\rm M}=0.25$, the energy density parameter for the cosmological constant, $\Omega_{\Lambda}=0.75$, the normalization of density fluctuations, $\sigma_{8} = 0.9$, and the Hubble constant, $h=H_{0}/(100\,{\rm km\,s}^{-1}\,{\rm Mpc}^{-1})=0.73$. Due to memory restrictions, the Fourier mesh used to set up the initial particle displacements has a dimension of $1580^3$ grid points, which is not commensurate with the cube root of the particle number. We therefore avoided using a regular particle grid to set up the initial conditions, as this would have led to a spurious feature in the power spectrum of the initial conditions at the beat frequency between the particle grid and the Fourier mesh. Instead, we used a glass-like distribution (\citealt{White1994, Baugh1995}). The input power spectrum of density fluctuations in linear perturbation theory is calculated using the {\tt CAMB} package of \cite{Lewis2000}. The amplitude of the Fourier modes is drawn from a Rayleigh distribution with mean equal to the linear theory power spectrum and the phase is drawn at random from the interval $0$ to $2\pi$. The initial density field is generated by perturbing particles from the glass-like distribution, using the approximation of Zel'dovich (1970). The simulations were started at a redshift of $z=63$.
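The essential steps of this initial-conditions procedure can be illustrated with the following simplified sketch, which sets up a Gaussian density field on a coarse grid and applies Zel'dovich displacements to particles on a regular lattice. It is a schematic toy version only: normalization and dimensional factors are omitted, the placeholder power spectrum stands in for the {\tt CAMB} output, and the production run perturbs a glass-like load rather than a lattice.

\begin{verbatim}
import numpy as np

# Toy illustration of the IC procedure described in the text (not the
# production code): Rayleigh amplitudes, random phases, Zel'dovich shifts.
ngrid, boxsize = 64, 1340.0                       # cells per side, h^-1 Mpc
kf = 2.0 * np.pi / boxsize                        # fundamental frequency

kx = np.fft.fftfreq(ngrid, d=1.0 / ngrid) * kf
kz = np.fft.rfftfreq(ngrid, d=1.0 / ngrid) * kf
KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing='ij')
k = np.sqrt(KX**2 + KY**2 + KZ**2)
k[0, 0, 0] = kf                                   # avoid division by zero

def plin(k):                                      # placeholder linear spectrum;
    return 1e4 * k / (1.0 + (k / 0.02)**3)        # a CAMB P(k) would be used here

rng = np.random.default_rng(1)
amplitude = rng.rayleigh(np.sqrt(plin(k) / 2.0))  # Rayleigh mode amplitudes
phase = rng.uniform(0.0, 2.0 * np.pi, size=k.shape)
delta_k = amplitude * np.exp(1j * phase)
delta_k[0, 0, 0] = 0.0

# Zel'dovich displacement field: psi_k = i k delta_k / k^2 per component.
growth = 1.0 / 64.0                               # ~ linear growth factor at z = 63
disp = [np.fft.irfftn(1j * K * delta_k / k**2, s=(ngrid,) * 3) * growth
        for K in (KX, KY, KZ)]

# Displace particles from an unperturbed lattice (a glass would be used instead).
cell = boxsize / ngrid
grid = (np.indices((ngrid,) * 3) + 0.5) * cell
positions = [(g + d) % boxsize for g, d in zip(grid, disp)]
\end{verbatim}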
The Zel'dovich (1970) approximation used to set up the initial pattern of density fluctuations produces transients which can be seen in the clustering signal measured for the dark matter at expansion factors close to the starting redshift (\citealt{Efstathiou1985,Baugh1995,Crocce2006}). Later on, we will use the power spectrum from a high redshift output of the simulation, $z=15$, as a proxy for linear perturbation theory, so it is important to check that this power spectrum in particular, and also the power spectra measured at all subsequent outputs, are insensitive to the choice of starting redshift. We test this by comparing the power spectrum of the dark matter at $z=15$ in our standard run with the spectrum measured in a test run which started at $z=127$, but which did not run all the way through to $z=0$. The top panel of Fig.~\ref{fig:trans} shows the power spectra measured for the dark matter in these two cases, divided by the power spectrum predicted by linear perturbation theory at $z=15$. The fluctuations in the measured power at low wavenumbers around the linear theory prediction reflect the sample variance noise, which is not negligible even in a simulation of the volume of the {\tt BASICC}. The lower panel in Fig.~\ref{fig:trans} shows the $z=15$ power spectrum measured from the run started at $z=63$ divided by that measured from the run started at $z=127$. At large wavenumbers, the effect of transients is visible, although quite small, $\sim 1\%$. The focus of this paper, however, is the form of the power spectrum over wavenumbers smaller than $k=0.4\,h {\rm Mpc}^{-1}$, for which the spectra measured at $z=15$ for the two different choices of starting redshift agree to better than $0.3\%$. Our results are therefore unaffected by any transients resulting from the use of the Zel'dovich approximation. \subsubsection{The high resolution simulation: the {\tt BASICC}} The {\tt BASICC} simulation covers a comoving cubical region of side $1340\,h^{-1}\,$Mpc, in which the dark matter is represented by more than 3 billion ($1448^3$) particles. The equivalent Plummer softening length in the gravitational force is $\epsilon = 50\,h^{-1}\,\rm{kpc}$, giving a dynamic range in length of almost 27,000. The volume of the computational box, $2.41\,h^{-3}\,{\rm Gpc}^{3}$, is almost twenty times the volume of the Millennium Simulation (\citealt{Millennium}), and more than three times the volume of the catalogue of luminous red galaxies from the SDSS used to detect the acoustic peak by \cite{Eisenstein2005}. The {\tt BASICC} volume is within a factor of two of that proposed for a survey with WFMOS at $z \sim 1$ (\citealt{GB2005}). The simulation occupied the full 0.5 Terabytes of RAM of the second upgrade of the Cosmology Machine at Durham. The run took 11 days on 506 processors, the equivalent of 130,000 CPU-hours. The particle mass in the {\tt BASICC} simulation is $m_{\rm p} = 5.49\times 10^{10}\,h^{-1}\,M_{\odot}$. This is approximately 64 times larger than the particle mass used in the Millennium Simulation. The mass resolution limits the usefulness of dark matter halo merger trees from the {\tt BASICC}, so we have chosen to output at a modest selection of redshifts: $z=0$, 0.3, 0.5, 1, 2, 3, 4, 6, 8, 10, 15 and 63. Each of these outputs occupies $\sim100\,${\tt Gb} of disk space. In each snapshot we have identified groups of dark matter particles using a friends-of-friends algorithm (\citealt{Davis1985}) with a linking length of $0.2$ times the mean inter-particle separation.
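For illustration, the essence of the friends-of-friends algorithm can be sketched as follows (a schematic union-find implementation built on a periodic k-d tree; it is not the group finder used in our analysis):

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(pos, boxsize, mean_separation, b=0.2):
    """Schematic FoF group finder: link all pairs closer than b times the
    mean inter-particle separation and return a group label per particle.
    pos is an (N, 3) array with coordinates in [0, boxsize)."""
    link = b * mean_separation
    tree = cKDTree(pos, boxsize=boxsize)           # periodic neighbour search
    parent = np.arange(len(pos))

    def find(i):                                   # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in tree.query_pairs(link):            # all pairs closer than `link`
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri                        # merge the two groups
    return np.array([find(i) for i in range(len(pos))])
\end{verbatim}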
We have stored groups with 10 or more particles, i.e. haloes more massive than $5.49 \times 10^{11}\,h^{-1}\,\rm{M_{\sun}}$. There are 17\,258\,579 haloes in the $z=0$ output of the simulation with ten or more particles. The most massive halo has a mass of $6.74 \times 10^{15}\,h^{-1}\,M_{\odot}$ and $860$ haloes have a mass in excess of the Coma cluster ($\approx 10^{15}\,h^{-1}\,M_{\odot}$). The {\tt BASICC} simulation sits between the Millennium and Hubble Volume (\citealt{Evrard2002}) simulations. Its unique combination of mass resolution and volume makes it ideal for studying the large scale distribution of galaxies and clusters alike. \subsubsection{The ensemble of low resolution simulations: {\tt L-BASICC}} We also generated an ensemble of 50 ``low-resolution'' simulations to study the sample variance in the {\tt BASICC} and to test an analytic model for the errors expected on measurements of the power spectrum, which we discuss in the next subsection. These low resolution runs ({\tt L-BASICC}) have exactly the same cosmological parameters as the {\tt BASICC} and the same box size (see Table 1), but they have fewer particles ($448^3$). For each realization, a different random seed is used to set up the initial density field. The starting redshift of these simulations is $z=63$. The particle mass is comparable to that employed in the Hubble Volume simulation (\citealt{Evrard2002}). Each {\tt L-BASICC} simulation took $0.8$ days to run on $16$ processors of the third upgrade of the Cosmology Machine. The total volume of the ensemble is $120\, h^{-3}\,{\rm Gpc}^{3}$, more than four times that of the Hubble Volume, making this a unique resource for studying the frequency of rare objects in a $\Lambda$CDM universe. For {\tt L-BASICC}, the position and velocity are stored for every particle at 4 output times (z = 0.0, 0.5, 0.9, 3.8); we also produce a halo catalogue at each redshift retaining objects with ten or more particles (corresponding to a mass of $1.8 \times 10^{13}\,h^{-1}\,\rm{M_{\sun}}$). As we shall see in later sections, the ensemble allows us to assess whether or not a particular result is robust or simply due to sampling fluctuations. Due to their limited mass resolution, it is not feasible to populate these simulations with galaxies using the method outlined below (\S 2.2). \subsubsection{Power spectrum estimation and errors} \begin{figure} \includegraphics[width=8.5cm]{eps/f2.eps} \caption{ The power spectrum of the dark matter in real-space measured at the starting redshift of the {\tt BASICC}, $z=63$ (red points). The corresponding prediction of linear perturbation theory is shown by the green (solid) line. The blue (dot-dashed) curve shows the power spectrum of the {\it unperturbed} glass-like distribution of particle positions. The dashed line shows the Poisson noise expected for the number density of dark matter particles used in the {\tt BASICC}. The noise of the initial particle distribution is much less than Poisson. The arrow marks the position of the Nyquist frequency of the FFT grid. \label{fig:glass} } \end{figure} \begin{figure} \includegraphics[width=8.5cm]{eps/f3.eps} \caption{ The fractional error in the power spectrum of the dark matter (top panel) and in the power spectrum of haloes more massive than $1.8 \times 10^{13}\, h^{-1}\,M_{\odot}$ (bottom panel), estimated using the low resolution simulations from the dispersion of $P(k)$ around the ensemble mean. 
The smooth black curves show the error predicted by the analytical expression given in Eq.~\ref{eq:err}. The red points show the scatter from the ensemble of low resolution simulations. The arrow in the bottom panel shows the wavenumber for which $\bar{n}P(k=0.2 h {\rm Mpc}^{-1})=1$. \label{fig:err}} \end{figure} The two point statistics of clustering, the correlation function and its Fourier transform, the power spectrum, $P(k)$, are the most commonly employed measurements of clustering. In this paper we focus on the power spectrum; in Sanchez et~al. (2007, in preparation), we address the visibility of the acoustic oscillations in the correlation function. The standard way to quantify the amplitude of a density fluctuation is by means of the density contrast, $\delta (x,t) = (\rho(x,t)-{\bar{\rho}})/\bar{\rho}$. If we consider the Fourier transform of the density contrast, $\delta_{\rm k}$, then the power spectrum is defined as the modulus squared of the mode amplitude, $P(k) = \langle | \delta_{\rm k} |^{2} \rangle$. There are two steps in the computation of the power spectrum from a distribution of discrete objects, such as dark matter particles, dark haloes or galaxies. Firstly, a density field is constructed by assigning the objects to mesh points on a cubic grid. In the simplest mass assignment scheme, the nearest grid point, the contribution of each object to the density field is confined to the cell in which it is located. In higher-order assignment schemes, the mass of the particle is shared with adjacent cells. Here, we use the cloud-in-cell assignment scheme (see \citealt{Hockney1981}). Secondly, we perform a Fast Fourier Transform of the density field. The power spectrum is obtained by spherically averaging the resulting Fourier mode amplitudes in shells of width $\delta k = 2 \pi / L = 0.0047 \,h\,{\rm Mpc}^{-1}$. The mesh we use to store the density field has $N^{3}_{\rm FFT} = 512^3$ grid points. Estimating the density on a grid alters the form of the power spectrum at wavenumbers approaching the Nyquist frequency of the grid ($k_{\rm Nyquist} = (2 \pi/ L)\,(N_{\rm FFT}/ 2) = 1.2\, h\, {\rm Mpc}^{-1}$ in our case). The degree of modification and the precise wavenumber above which the power spectrum is distorted depend upon the choice of assignment scheme (Hatton 1999; \citealt{Jing2005}). In practice, for the size of FFT mesh we use, this has little impact on the recovered power spectrum for the wavenumbers of interest; the measured amplitude differs by less than 1\% from the true value at a wavenumber $k \sim 0.8 \,h\, {\rm Mpc}^{-1}$; in most cases we focus on the form of the power spectrum on large scales, $k < 0.4 \,h\,{\rm Mpc}^{-1}$. Nevertheless, we correct for the effects of the cloud-in-cell assignment scheme by dividing each mode by the Fourier transform of a cubical top hat: \begin{equation} \delta(k_{x},k_{y},k_{z}) \Rightarrow \frac{ \delta(k_{x},k_{y},k_{z}) }{ \textrm{{sinc}}( \frac{k_x L}{2 N_{\rm FFT}} ) \, \textrm{{sinc}}( \frac{k_y L}{2 N_{\rm FFT}} ) \, \textrm{{sinc}}( \frac{k_z L}{2 N_{\rm FFT}} ) }, \end{equation} \noindent where \begin{equation} \textrm{{sinc}}( x ) = \frac{\sin(x)}{x}. \end{equation} Note this is different from the approach taken by \cite{Jing2005}, who applied a correction to the spherically averaged power spectrum. A further possible distortion to the form of the measured power spectrum is discreteness noise and the associated Poisson or shot noise.
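The two steps described above, together with the shot noise correction discussed below, can be summarized in the following simplified sketch. For brevity it uses nearest-grid-point rather than cloud-in-cell assignment and neglects normalization subtleties; it is an illustration, not the code used for our measurements.

\begin{verbatim}
import numpy as np

def power_spectrum(pos, boxsize, nfft=512, subtract_shot_noise=False):
    """Schematic P(k) estimator: grid assignment, FFT, window deconvolution,
    spherical averaging.  (nfft=512 mirrors the text; reduce it for quick tests.)"""
    # Step 1: density contrast on a grid (nearest-grid-point for brevity).
    cell = boxsize / nfft
    idx = np.floor(pos / cell).astype(int) % nfft
    counts = np.zeros((nfft,) * 3)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = counts / counts.mean() - 1.0

    # Step 2: FFT, deconvolve the assignment window, spherically average.
    delta_k = np.fft.rfftn(delta) / delta.size
    kf = 2.0 * np.pi / boxsize
    kx = np.fft.fftfreq(nfft, d=1.0 / nfft) * kf
    kz = np.fft.rfftfreq(nfft, d=1.0 / nfft) * kf
    KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing='ij')
    window = (np.sinc(KX * cell / (2 * np.pi)) * np.sinc(KY * cell / (2 * np.pi))
              * np.sinc(KZ * cell / (2 * np.pi)))
    # NGP divides by the window once; CIC assignment would divide by window**2.
    pk3d = np.abs(delta_k / np.maximum(window, 1e-10))**2 * boxsize**3

    k = np.sqrt(KX**2 + KY**2 + KZ**2)
    bins = np.arange(kf, k.max(), kf)              # shells of width delta k = kf
    which = np.digitize(k.ravel(), bins)
    pk = np.bincount(which, weights=pk3d.ravel()) / np.maximum(np.bincount(which), 1)
    if subtract_shot_noise:                        # used for haloes and galaxies,
        pk -= boxsize**3 / len(pos)                # not for the dark matter
    return bins, pk[1:len(bins) + 1]
\end{verbatim}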
Poisson-sampling a continuous density field with point objects of space density, $\bar{n}$, introduces a spurious contribution that should be subtracted from the measured power spectrum: $P_{\rm corr}(k) = P_{\rm meas}(k) - 1/\bar{n}$. In the case of dark matter halo centres or galaxies, the need for such a correction is justified. However, in the case of dark matter particles in our simulations, one should {\it not} subtract Poisson shot noise from the power spectrum because the particles were initially laid down by perturbing a glass-like configuration which is sub-Poissonian in nature. This is clear from Fig.~\ref{fig:glass}, which shows the power spectrum measured for the dark matter in the initial conditions of the {\tt BASICC}. The red curve shows the spectrum measured in the simulation and the smooth green curve shows the input spectrum predicted by linear perturbation theory. The two agree remarkably well over a wide range of wavenumbers. The power spectrum of the {\it unperturbed} glass-like particle distribution is shown by the blue curve. For the wavenumbers of interest, the power spectrum of the glass is many orders of magnitude below the discreteness noise expected for a Poisson distribution of objects with the same space density as the dark matter particles, as shown by the dashed line. In this paper, we do not apply any shot noise correction to power spectra measured for the dark matter, but we do make such a correction for spectra estimated for samples of haloes and galaxies. To close this subsection, we turn our attention to the error on the measurement of the power spectrum. A commonly used expression for the fractional error in the measured power spectrum was derived by Feldman, Kaiser \& Peacock (1994) (see also Efstathiou 1988 for a similar argument applied to the two point correlation function): \begin{equation}\label{eq:err} \frac{\sigma}{P} = \sqrt{ \frac{2}{n_{\rm modes}}} \left( 1 + \frac{1}{P \bar{n}} \right), \end{equation} where $n_{\rm modes}$ is the number of Fourier modes present in a spherical shell of width $\delta k$, which depends upon the survey volume $V$: for $k \gg 2 \pi /V^{1/3}$, this is given by $n_{\rm modes} = 4 \pi V k^{2}\,\delta k/\left(2 \pi\right)^{3}$. The first term on the right hand side of Eq.~\ref{eq:err} quantifies the sample variance in the measurement, which decreases as the square root of the number of modes or, equivalently, as the square root of the volume probed. The second term arises from the discreteness of the objects under consideration. The combination ${P \bar{n}}$ quantifies the amplitude of the power spectrum in units of the Poisson shot noise, effectively giving the contrast of the power spectrum signal relative to the shot noise level. In the case where $P \bar{n} \gg 1$, $\sigma/P \propto 1/k$. On the other hand, when the amplitude of the power spectrum is comparable to the shot noise, and if $P(k) \propto k^{-1}$, then the fractional error in the power is approximately independent of wavenumber. We have tested this prescription in both regimes against the diagonal elements of the covariance between power spectrum measurements extracted from the ensemble of low resolution simulations, as shown in Fig.~\ref{fig:err}. Over the wavenumber range of interest, the agreement is reasonably good for samples in which the shot noise is negligible compared to the clustering signal.
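For reference, the mode-counting estimate of Eq.~\ref{eq:err} used in this comparison can be evaluated with a few lines of code (a schematic helper; the commented values are merely indicative of a single simulation box):

\begin{verbatim}
import numpy as np

def fractional_power_error(k, pk, nbar, volume, dk):
    """Fractional error on P(k) from the mode-counting argument of Eq. (3)."""
    n_modes = volume * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi)**3
    return np.sqrt(2.0 / n_modes) * (1.0 + 1.0 / (pk * nbar))

# Indicative inputs for one box: volume = 1340.0**3, dk = 2 * np.pi / 1340.0
\end{verbatim}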
For samples with low contrast power measurements, such as is the case for the dark matter haloes used in the bottom panel of Fig.~\ref{fig:err}, the analytic expression works well up to $k \sim 0.1 h {\rm Mpc}^{-1}$ and then overpredicts the errors by up to 50\%. We note that nonlinearities and the impact of the window function of a realistic survey could introduce off-diagonal terms in the power spectrum covariance matrix. In Section 5.3, we compare the constraints on the recovered oscillation scale using the scatter from the ensemble and using the simple mode-counting argument outlined above. We find good agreement, which suggests that mode-coupling does not make a significant contribution to the errors on the scales relevant to the BAO. \subsection{Modelling the formation and evolution of galaxies} \label{ssec:galform} \begin{figure} \includegraphics[width=14.5cm]{eps/ccomp.ps} \caption{ Upper panel: the fraction of `resolved galaxies' in the high resolution N-body simulation as a function of magnitude, at different output redshifts (as given by the key in the lower panel). The magnitude is in the observer-frame $R$-band; to obtain an apparent $R$-band magnitude, the distance modulus corresponding to the redshift should be added to the plotted magnitude. The vertical lines mark the magnitude at which the galaxy sample is 100\% complete at each redshift. Lower panel: the cumulative luminosity function of galaxies brighter than a given $R$-band magnitude, for different redshifts as given in the key. The vertical lines show the 100\% completeness limits at each redshift and the horizontal lines indicate the associated space density of galaxies. \label{fig:comp} } \end{figure} The N-body simulations described in the previous section follow the growth of fluctuations in the mass, which is dominated by collisionless matter. To connect the predictions of the cold dark matter theory to forthcoming galaxy surveys, we need to predict which structures host galaxies and how galaxy properties depend on halo mass. Some authors have chosen to incorporate galaxies into an N-body simulation empirically by using a parametric model called a halo occupation distribution function (HOD) to describe the probability distribution of galaxies expected in haloes of a given mass (\citealt{Benson2000}). The form of the HOD is constrained to reproduce a particular clustering measurement, such as the galaxy correlation function (e.g. \citealt{Peacock2000, Seljak2000, Scoccimarro2001, Cooray2002}). This approach has been applied to the study of the detectability of acoustic oscillations by several authors (\citealt{Seo2005, Schulz2006, Huff2006}). Two assumptions are made when using the HOD to populate an N-body simulation with galaxies. Firstly, the parameterization used for the HOD is assumed to provide an accurate description of the manner in which galaxies populate haloes across a wide range of halo mass. Detailed comparisons between the clustering predictions made using HODs and those obtained directly from simulations of galaxy formation show that, in practice, the HODs do a reasonable job (\citealt{Berlind2003, Zheng2005}). Recently, one of the fundamental assumptions which underpins the HOD approach has been called into question. Using the Millennium simulations, \cite{Gao2005} demonstrated that the clustering of dark matter haloes depends on a second parameter, such as the formation time of the halo, in addition to halo mass (see also \citealt{Harker2006}, \citealt{Wechsler2006} and \citealt{Wetzel2007}).
In practice, for typical galaxy samples, this effect is largely washed out due to the mix of halo properties sampled (\citealt{Croton2007}). The second implicit assumption in the HOD method when applied to an N-body simulation is that all of the haloes in which galaxies are expected to be found can be resolved in the simulation; if the mass resolution of the simulation turns out to be inadequate, then the HOD realized will be distorted to compensate, compared with the true, underlying HOD in the Universe. In this paper, we take a more physical approach and make an {\it ab initio} prediction of which dark matter haloes should contain galaxies by modelling the physics of the baryonic component of the universe. We do this using a semi-analytic model of galaxy formation (for a review of this technique see \citealt{Baugh2006}). The semi-analytic model describes the key physical processes which are thought to determine the formation and evolution of galaxies. We use the {\tt GALFORM} code introduced by \cite{Cole2000} and developed in a series of papers (\citealt{Benson2002, Benson2003, Baugh2005, Bower2006}). The specific model we use is the one proposed by \cite{Baugh2005}, which reproduces the abundance of Lyman-break galaxies at $z=3$ and $z=4$ and the number counts of sub-mm detected galaxies (with a median redshift $z \sim 2$), and gives a rough match to the abundance of luminous red galaxies (Almeida et~al. 2007, in preparation), whilst at the same time giving a reasonable match to the observed properties of local galaxies (e.g. \citealt{Nagashima2005b, Nagashima2005a, Almeida2007}). A key advantage of using a semi-analytic model is that we can investigate how the manner in which galaxies are selected affects the accuracy with which the acoustic oscillations can be measured. The model predicts the star formation history of each galaxy and uses this to compute a spectrum, broadband magnitudes and emission line strengths (for examples of the latter, see \citealt{leDelliou2005, leDelliou2006}). We can therefore select samples of model galaxies by applying precisely the same criteria which will be applied in the proposed surveys. Our methodology mirrors the hybrid schemes introduced by \cite{Kauffmann1997} and \cite{Benson2000}. We use a Monte Carlo technique to generate merger trees for dark matter haloes since our simulation outputs do not have the resolution in time or mass necessary to allow the construction of such trees directly. (See \citealt{Baugh2006} for a discussion of the relative merits of these two approaches.) We first construct a grid of halo masses at the redshift of interest, which extends to lower mass haloes than can be resolved in the simulation. We then generate a number of Monte Carlo realizations of mass assembly histories for each mass on the grid, using the algorithm introduced by \cite{Cole2000}. The number of realizations is chosen to allow robust predictions to be made for observables such as the galaxy luminosity function. The halo merger history is input into the semi-analytic code and the properties of the galaxy population are output at the redshift for which the galaxy catalogue is to be constructed. In the calculations in this paper, we output the broadband magnitudes in the $R$, $I$ and $K$ bands and the equivalent widths of $H_{\alpha}$ and OII[3727] for each galaxy. Finally, haloes from the grid are matched with haloes of similar mass identified in the N-body simulation.
The central galaxy in each halo is assigned to the centre of mass of the matched halo in the simulation. The satellite galaxies are assigned randomly to dark matter particles in the halo. Galaxies placed in the simulation box in this way are called `resolved galaxies'. The Monte Carlo merger trees will not, of course, correspond in detail to those of the matched haloes in the N-body simulation. However, to the extent that the halo assembly bias discussed by \cite{Gao2005} can be neglected, the properties of the trees are statistically similar for haloes in the same mass range. Because of the finite mass resolution of the N-body simulation, galaxy samples generated by populating resolved haloes will be incomplete fainter than some magnitude limit. In principle, since we are using Monte Carlo merger trees, we can follow galaxies down to arbitrarily faint magnitudes {\it within} a resolved dark matter halo. However, as we consider progressively fainter objects, some fraction of these galaxies should also appear in haloes which the simulation cannot resolve, causing the sample to become incomplete. Thus, in some instances we need to consider galaxies which we would expect to find in haloes below the mass resolution of the simulation. These galaxies are called `unresolved galaxies' and are placed in the box in the following way. A volume-limited sample of galaxies is generated using the semi-analytic model, with a volume equal to that of the simulation cube. Only galaxies which reside in haloes from the grid which are less massive than the resolution limit of the N-body simulation are considered. (Recall that the grid of halo masses used in the semi-analytic calculation extends to lower mass than those resolved in the simulation). These galaxies are assigned to randomly selected dark matter particles which have {\it not} been identified as members of friends-of-friends haloes. This approach was adopted for one of the mock catalogues used in \cite{Cole2005}. As we will see below, the unresolved galaxies are a minority within any of the samples we consider. They have little effect on the measured power spectrum, producing only a modest change in the amplitude of the clustering signal. We can use the semi-analytic calculation carried out on the grid of halo masses to find the completeness limit of the galaxy catalogue in the N-body simulation. To do this, we use the galaxy formation calculation carried out using the grid of halo masses to compute the cumulative luminosity function of galaxies, starting with the brightest galaxy, for two cases: 1) without any restriction on the mass of the halo which hosts the galaxy and 2) considering only those galaxies which reside in haloes above the resolution limit of the simulation. We then divide the second estimate of the cumulative luminosity function by the estimate made without any restriction on halo mass. The completeness ratios calculated in this way are shown for $z=0, 1$ and $2$ in Fig.~\ref{fig:comp}. The vertical lines show the magnitude limit down to which the `resolved galaxy' catalogues are $100\%$ complete. The lower panel shows the cumulative luminosity function in the model at the same redshifts, with horizontal lines marking the space density of galaxies at the sample completeness limit. (The magnitudes plotted are observer-frame absolute magnitudes in the $R$-band. The apparent magnitude is obtained by adding the appropriate distance modulus for each redshift. All magnitudes are on the AB scale.)
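The completeness calculation just described reduces to the ratio of two cumulative luminosity functions. A schematic version, in which the arrays of galaxy magnitudes and host halo masses from the grid calculation are assumed as inputs (the variable names are purely illustrative), might read:

\begin{verbatim}
import numpy as np

def completeness(mag, host_mass, mass_resolution, mag_limits):
    """Fraction of galaxies brighter than each magnitude limit that
    reside in haloes above the simulation's mass resolution, i.e. the
    ratio of the restricted to the unrestricted cumulative luminosity
    function. All inputs are numpy arrays."""
    frac = np.ones_like(mag_limits, dtype=float)
    for j, mlim in enumerate(mag_limits):
        brighter = mag <= mlim                 # brighter than the limit
        n_all = brighter.sum()
        if n_all > 0:
            n_resolved = (brighter & (host_mass >= mass_resolution)).sum()
            frac[j] = n_resolved / n_all
    return frac

# the 100 per cent completeness limit is the faintest magnitude for
# which the returned fraction is still unity
\end{verbatim}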
The $z=2$ sample is complete down to $M_{R}-5\log h=-23$, or, equivalently, to a space density of $3.2\times 10^{-5}\,h^{3}\,{\rm Mpc}^{-3}$. Faintwards of this magnitude, the completeness drops sharply to around $30-40\%$. The situation is much more encouraging at $z=1$. Here, the galaxy catalogue is complete to $M_{R}-5\log h=-22.3$ (corresponding to a space density of just under $10^{-4}\,h^{3}\,{\rm Mpc}^{-3}$) and faintwards of this there is a much more modest drop in the fraction of galaxies resolved in the simulation. The simulation resolves around two thirds of the space density of galaxies expected in the proposed WFMOS survey. At $z=0$, the galaxy samples are complete to a much higher space density, in excess of $10^{-3}\,h^{3}\,{\rm Mpc}^{-3}$. \section{The power spectrum of galaxy clustering} \begin{figure} \includegraphics[width=8.5cm]{eps/fig0.eps} \caption{ The growth of the power spectrum of density fluctuations in the dark matter, as measured in real-space. The smooth curves show the predictions of linear perturbation theory at the redshifts indicated by the key. The power spectra measured in the low resolution ensemble at $z=0$ are plotted to show the sampling variance for a simulation box of side $1340\,h^{-1}\,$Mpc. The smallest wavenumber plotted corresponds to the fundamental mode in the simulation, $2 \pi /L = 0.0047\,h\,{\rm Mpc}^{-1}$. The maximum wavenumber shown is $0.67$ times the Nyquist frequency of the FFT grid, chosen to avoid any aliasing effects. \label{pk:dm}} \end{figure} In this section we examine the various phenomena which are responsible for changing the form of the power spectrum of galaxy clustering from that expected in linear perturbation theory. We systematically add in new effects and elements of sample selection, considering first the power spectrum of the dark matter, looking at nonlinear evolution (\S \ref{ssec:dm}) and the impact of peculiar velocities (\S \ref{ssec:pec}), before moving on to dark matter haloes (\S \ref{ssec:halos}) and finally to synthetic galaxy samples (\S \ref{ssec:galaxies}). For completeness, we first explain some of the terminology we use in this section. There are three types of phenomena responsible for distorting the linear theory power spectrum: i) non-linear growth of fluctuations, ii) redshift-space distortions and iii) bias. Non-linear growth refers to the coupled evolution of density fluctuations on different scales. Redshift-space distortions describe the impact of gravitationally induced peculiar motions on the clustering pattern. We will refer to clustering measurements as being made in ``real-space'' or ``redshift-space''; in the latter case peculiar motions are taken into account, as we describe in \S 3.2. The term ``bias'' has a range of meanings in the literature. Bias is used to describe the boost in the clustering of a particular tracer (e.g. galaxies or clusters) relative to a reference point, which could be the clustering of the dark matter in either linear perturbation theory or taking into account nonlinear evolution. One of the earliest uses of the concept of bias was in the application of the high peaks model to explain the enhanced clustering of Abell clusters (\citealt{Kaiser1984}). In this model, clusters are associated with rare peaks in the initial, Gaussian density field. The bias is defined as the square root of the ratio of the two-point correlation function of peaks of a certain minimum height to the clustering of the mass expected in linear perturbation theory.
When considering galaxies, it is perhaps more natural to think in terms of a modulation of clustering relative to that displayed by the underlying mass at the same epoch, since galaxies populate dark matter haloes. In this case, the galaxy clustering will be measured relative to that of the evolved matter distribution. On large scales, these two reference points, the clustering of the matter expected in linear perturbation theory or the evolved clustering, should be essentially the same. We shall see later that this is approximately the case for the scales over which we compare clustering signals to measure bias factors. \subsection{The nonlinear growth of matter fluctuations} \label{ssec:dm} \begin{figure} \includegraphics[width=8.5cm]{eps/f6.eps} \caption{ The nonlinear growth of the power spectrum. Here we divide the power spectrum in real-space measured at the redshift indicated by the key by the power spectrum at $z=15$, after taking into account the change in the growth factor. Any deviation of the resulting ratio from unity indicates a departure from linear perturbation theory. The dashed lines show the same ratio as predicted using the ansatz of Smith et~al. (2003). \label{fig:dm1} } \end{figure} The early stages of the growth of a density fluctuation are particularly simple to describe analytically. The fluid equations can be written in terms of the perturbation to the density and Fourier transformed. In the simplest case, when the density contrast $\delta \ll 1$, the Fourier modes evolve independently of one another. This is called linear growth. In this regime, the power spectrum changes in amplitude with time, but not in shape. The change in amplitude is described by the growth factor $D$, which is a function of the densities of matter and dark energy (as quantified by the present day density parameters, $\Omega_{\rm M}$ and $\Omega_{\Lambda}$, for matter and dark energy respectively) and redshift (see \citealt{Heath1977,Peebles1980}): \begin{equation} P(k,z) = D^{2}(z,\Omega_{\rm M},\Omega_{\Lambda}) P(k, z=0), \end{equation} where $D(z=0)=1$. We plot the power spectrum of the dark matter in real-space measured from the {\tt BASICC} at different output redshifts in Fig.~\ref{pk:dm}. The approximately linear growth of the power spectrum is readily apparent on large scales (low $k$). In an Einstein--de Sitter universe ($\Omega_{\rm M}=1$), the growth factor is equal to the expansion factor. If dark energy plays a role in setting the rate at which the universe expands, the growth of fluctuations is suppressed relative to the Einstein--de Sitter case at late times. The {\tt BASICC} started at $z_{\rm s}=63$, so if $\Omega_{\rm M}=1$, we would expect to see the power spectrum grow in amplitude by a factor of $(1+z_{\rm s})^{2}=4096$ by $z=0$. Using the approximate formula provided by \cite{Carroll1992}, we expect a suppression in the growth of the power by a factor of 0.5537 for the cosmological parameters used in the simulation. This gives an overall growth in power from the initial conditions to the present of a factor of 2268. This agrees to within 0.6\% with the factor expected from a direct numerical integration of the equation giving the growth factor (eqns. 28 and 9 from \citealt{Carroll1992}), which gives 2281.01. In the simulation, we find that the power in the fundamental mode grows by a factor of 2285.21 from the initial conditions at $z=63$ to $z=0$, which agrees with the growth predicted by linear perturbation theory to $0.2\%$.
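The quoted growth factors can be checked by integrating the standard expression for the linear growing mode, $D(a) \propto H(a)\int_0^a {\rm d}a'/[a'H(a')]^3$; the short sketch below assumes a flat universe with a cosmological constant and uses placeholder density parameters, not necessarily those adopted for the simulation.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# present-day density parameters: placeholders, not necessarily those
# used for the simulation
OMEGA_M, OMEGA_L = 0.25, 0.75

def hubble(a):
    """H(a)/H0 for a flat universe with a cosmological constant."""
    return np.sqrt(OMEGA_M / a**3 + OMEGA_L)

def growth(z):
    """Unnormalised growing mode D(a) = H(a) int_0^a da'/[a' H(a')]^3;
    the integrand is rewritten so that it is regular at a' = 0."""
    a = 1.0 / (1.0 + z)
    integrand = lambda ap: ap**1.5 / (OMEGA_M + OMEGA_L * ap**3)**1.5
    integral, _ = quad(integrand, 0.0, a)
    return hubble(a) * integral

# growth in power between the starting redshift and z=0; with the
# simulation's parameters the text quotes a factor of about 2281
print((growth(0.0) / growth(63.0))**2)
\end{verbatim}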
Fig.~\ref{pk:dm} shows that the growth of the power spectrum is clearly not linear at high wavenumbers. The shape of the spectrum at high $k$ at late times is different from that at high redshift, because the growth of modes of different $k$ becomes coupled. This behaviour can be followed to some extent using second- and higher-order perturbation theory (\citealt{Peebles1980, Baugh1994, Jain1994, Crocce2006a}). However, as the density contrast approaches unity, second-order perturbation theory breaks down (\citealt{Baugh1994}). The coupled evolution of the Fourier modes starts on surprisingly large scales, which demonstrates the necessity of a large volume simulation to accurately follow the development of the power spectrum (Smith, Scoccimarro \& Sheth 2006). This can be seen more clearly if we divide the measured spectrum by the growth expected according to linear perturbation theory, as is done approximately in Fig.~\ref{fig:dm1}. In this plot, we have divided the power spectra measured from the simulation by the spectrum measured at $z=15$, scaled by the square of the appropriate growth factor. This reduces the noise in the ratio arising from the finite number of modes realized at small wavenumbers in the simulation volume (\citealt{Baugh1994, Millennium}). Any deviation away from unity signifies a departure from linear perturbation theory due to coupling between modes. The ratio shows a characteristic dip at low $k$, i.e. less power than expected in linear theory, before showing a strong enhancement at higher wavenumbers (\citealt{Baugh1994}). It is remarkable that the transition between a deficit and excess of power happens at the same wavenumber, $k \sim 0.1 h {\rm Mpc}^{-1}$, at different epochs. The suppression in power at low $k$, of the order of 3\%, is not as strong as that seen in an Einstein--de Sitter universe (see figure 4 of \citealt{Baugh1994}). Nevertheless, this drives the spectacular boost in power seen at higher wavenumbers. The dip in power is largest around $k \sim 0.05 h {\rm Mpc}^{-1}$, which corresponds to a length scale of $2 \pi /k \sim 125\,h^{-1}\,{\rm Mpc}$, close to the wavelength of the acoustic oscillations. Several authors have proposed ansatzes which transform the linear perturbation theory power spectrum into the non-linear power spectrum (e.g. \citealt{Hamilton1991, PD1994, PD1996, Smith2003}). We plot the predictions of the model proposed by \cite{Smith2003} in Fig.~\ref{fig:dm1} using dashed lines. The ratio is computed by dividing the power spectrum at the epoch of interest by the suitably scaled prediction of the model for $z=15$. The agreement is excellent at high redshift. At $z=0$ and higher wavenumbers, the \cite{Smith2003} formula recovers the simulation results to within 5\% over the range plotted. \subsection{The impact of redshift-space distortions on the power spectrum} \label{ssec:pec} \begin{figure} \includegraphics[width=8.5cm]{eps/f7.eps} \caption{ The ratio of the power spectrum measured for the dark matter in redshift-space, i.e. including the impact of peculiar motions in the distance determination, to the power spectrum measured in real-space. The deviation from unity shows the redshift-space distortion to the nonlinear power spectrum. The results are shown for selected output redshifts, as indicated by the key. The horizontal dotted lines indicate the boost in the redshift-space power expected due to coherent flows, as predicted by Eq.~\ref{boost}. The dashed lines show a simple fit to the distortions (see Eq.~\ref{damp}).
\label{fig:dmzs} } \end{figure} In a spectroscopic galaxy survey, the radial distance to an object is inferred from its measured redshift. The shift in the spectral features of the galaxy is produced by two contributions to its apparent velocity: the expansion of the universe, which is responsible for the Hubble flow at the true distance to the galaxy, and local inhomogeneities in the gravitational field around the object, which generate an additional, ``peculiar'' velocity. Since we cannot correct a priori for the effects of the local gravitational field when inferring the radial distance from the Hubble law and the measured redshift, an error is made in the distance determination. The impact of such errors on the form of the measured power spectrum of clustering is called the redshift-space distortion. Peculiar motions display two extremes which produce different types of distortion to the power spectrum: i) On large scales, coherent bulk flows out of voids and into overdense regions lead to an enhancement in the density inferred in redshift-space, and hence to a boost in the recovered power. \cite{Kaiser1987} derived a formula for the enhancement of the spherically averaged power, under the assumption of linear perturbation theory for an observer situated at infinity (the plane parallel approximation): \begin{equation}\label{boost} f = \frac{P_{\rm s}(k)}{P_{\rm r}(k)} = (1 + \frac{2}{3}\beta + \frac{1}{5}\beta^2 ) , \end{equation} where $P_{\rm s}(k)$ is the power spectrum in redshift-space, $P_{\rm r}(k)$ is the spectrum in real-space and $\beta = \left({\rm d} \log \delta / {\rm d} \log a \right) /b \simeq \Omega_{\rm M}^{0.6}(z)/b$, where $b$ is the bias factor ($b=1$ for the dark matter; for a discussion of the dependence of the growth factor on $\Omega_{\rm M}$, see Linder 2005; Linder \& Cahn 2007). ii) On small scales, the random motions of objects inside virialized dark matter haloes cause structures to appear elongated when viewed in redshift-space, leading to a damping of the power. \cite{PD1994} discussed a model for the redshift-space power spectrum, which takes into account both limits of peculiar motions (see also \citealt{Scoccimarro2004}). Fig.~\ref{fig:dmzs} shows the ratio of the power spectrum measured for the dark matter in redshift-space to that measured in real-space, at redshifts $z=3$, $1$ and $0$. The dotted lines indicate the boost expected in the redshift-space power, computed using the expression in Eq.~\ref{boost} (\citealt{Kaiser1987}). This factor changes with redshift because the matter density parameter is changing. Fig.~\ref{fig:dmzs} shows that this behaviour is only approached asymptotically, on scales in excess of $100\,h^{-1}\,$Mpc. At higher wavenumbers, the power measured in redshift-space is suppressed by random motions. The dashed lines in this plot show a simple fit to this ratio: \begin{equation}\label{damp} f = \frac{P_{\rm s}(k)}{P_{\rm r}(k)} = (1 + \frac{2}{3}\beta + \frac{1}{5}\beta^2 ) (1 + k^2 \sigma^2)^{-1}, \end{equation} where $\sigma$ is a free parameter, which is loosely connected to the pairwise velocity dispersion. The degree of damping grows between $z=3$ and $z=1$, but changes relatively little by $z=0$. We shall see in later sections that the form of the redshift-space distortion to the power spectrum depends on the type of object under consideration.
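Eqs.~\ref{boost} and \ref{damp} are simple to evaluate; the fragment below, with $\beta \simeq \Omega_{\rm M}^{0.6}(z)/b$ as quoted above and with placeholder values for the present-day density parameters and the damping scale $\sigma$, sketches the model ratio.

\begin{verbatim}
import numpy as np

def omega_m_z(z, omega_m0=0.25, omega_l0=0.75):
    """Matter density parameter at redshift z for a flat universe with
    a cosmological constant; the present-day values are placeholders."""
    ez2 = omega_m0 * (1 + z)**3 + omega_l0
    return omega_m0 * (1 + z)**3 / ez2

def redshift_space_ratio(k, z, bias=1.0, sigma=4.0):
    """P_s(k)/P_r(k): the Kaiser boost of Eq. (boost) damped by the
    Lorentzian factor of Eq. (damp); sigma is the free parameter
    (the value here is illustrative only)."""
    beta = omega_m_z(z)**0.6 / bias
    kaiser = 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
    return kaiser / (1.0 + k**2 * sigma**2)

k = np.linspace(0.01, 0.4, 40)          # wavenumbers in h Mpc^-1
print(redshift_space_ratio(k, z=1.0, bias=1.0, sigma=4.0)[:5])
\end{verbatim}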
\subsection{The power spectrum of dark matter halos in real and redshift-space} \label{ssec:halos} \begin{figure*} \includegraphics[width=\textwidth]{eps/f8.eps} \caption{ The power spectrum of dark matter haloes measured in real-space compared to a scaled version of the prediction of linear perturbation theory, which takes into account the growth factor and an effective bias computed on large scales $k < 0.1 h {\rm Mpc}^{-1}$. Each panel corresponds to a different output redshift. Different mass samples are considered, as indicated by the key, which correspond to low, average and high masses, defined in terms of the average halo mass present at each output time. The black dashed line shows the real-space power spectrum of the mass divided by the appropriate linear perturbation theory prediction. \label{fig:halosr}} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{eps/f9.eps} \caption{ The power spectrum of dark matter haloes measured in redshift-space divided by the power spectrum measured in real-space for the same sample. Each panel corresponds to a different output redshift. Different mass samples are considered, as indicated by the key, which correspond to low, average and high masses, defined in terms of the average halo mass present at each output time. The horizontal dotted lines show the expected ratio for the boost in the amplitude of the redshift-space power spectrum due to coherent flows, computed using an effective bias factor estimated on large scales. The dashed lines show the best fit model of Eq.~\ref{damp}, which turns out to be a poor description of the redshift-space distortions. No suitable fits were obtained at $z=3$. \label{fig:halosz}} \end{figure*} In modern theories of galaxy formation, dark matter haloes play host to galaxies. It is therefore instructive to compare the power spectra measured for different samples of haloes to that of the dark matter as a step towards understanding the power spectrum of galaxies. A common conception is that the clustering of haloes is a scaled version of the clustering of the underlying mass, with the shift in clustering amplitude quantified in terms of a bias factor, $b$, where $b^2 = P_{\rm{halos}}/P_{\rm{dm}}$ (\citealt{Cole1989, MoWhite1996}). As we commented earlier, since we use the dark matter power spectrum on large scales to define a bias, this is approximately the same as using the linear perturbation theory spectrum. Many authors have tested analytical prescriptions for computing the bias parameter using extensions of the theory of \cite{PressSchechter1974} (e.g. Mo, Jing \& White 1997; Sheth, Mo \& Tormen 2000; \citealt{Jing1998, Governato1999, Colberg2000, SeljakWarren2004}). In the extended Press-Schechter theory, the bias is only a function of halo mass and redshift. However, recent analyses of high resolution, large volume simulations have revealed some dependence of halo clustering on a second parameter besides mass, such as the halo's formation redshift or concentration parameter (\citealt{Gao2005, Harker2006, Wechsler2006}). In Fig.~\ref{fig:halosr}, we show that this simple picture, in which the clustering of haloes is a shifted version of that of the dark matter, is actually a poor approximation to what we find in the simulation. We show the ratio of the power spectrum of a sample of dark matter haloes measured in real-space to a scaled version of the linear perturbation theory power spectrum. 
The amplitude of the linear theory spectrum used in the ratio takes into account the growth factor appropriate to the output redshift and an effective bias, which is set by matching the linear theory prediction for the mass spectrum to the measured halo spectrum on large scales, i.e. for wavenumbers in the range $0.0046 < (k/h {\rm Mpc}^{-1}) < 0.1$. Each panel in Fig.~\ref{fig:halosr} corresponds to a different output redshift from the simulation. For each redshift, we have defined three samples of dark matter haloes, which contain the same number of objects. The mass intervals are set relative to the average halo mass present in the respective outputs, with ``low'', ``mean'' and ``high'' mass samples considered. Each of these contains 20\% of the total number of haloes present at each epoch, with the mass ranges used at each redshift indicated on the keys. The effective bias factors of the halo samples are also written in the key. For comparison, the dashed line in each panel shows the corresponding ratio for the dark matter. Fig.~\ref{fig:halosr} shows that at $z=3$, all of the haloes considered have effective biases much greater than unity, indicating they are more strongly clustered than the mass. This situation is reversed at $z=0$. At this epoch, the halo mass resolution of the {\tt BASICC} is smaller than the corresponding value of $M_{*}$\footnote{$M_{*}$ is a characteristic mass scale defined as the mass within a sphere for which the {\it rms} variance in linear perturbation theory is $\sigma (M) = \delta_{\rm crit}(z)$, where $\delta_{\rm crit}$ is the extrapolated critical linear overdensity given by the spherical collapse model at redshift $z$.} ($=5.78 \times 10^{12}\,h^{-1} \, M_{\odot}$ at z=0). The $z=0$ samples have a bias of unity or smaller. In addition to the difference in the effective bias parameters, the shape of the spectrum of the haloes in these extremes is also different (see also Smith et~al. 2006). The plot shows the shape of the power spectrum, after accounting for the effective bias on large scales. Any difference between the curves plotted for the haloes and that for the dark matter (dashed line) shows a difference in the clustering signal over and above that quantified by a constant effective bias. Similar behaviour was found for samples of cluster mass haloes in the Hubble Volume simulation by \cite{Angulo2005}. We now consider the clustering of haloes as viewed in redshift-space, taking the centre of mass velocity of the halo as its peculiar velocity. In Fig.~\ref{fig:halosz}, we plot the ratio of the redshift-space power spectrum for the halo samples used in Fig.~\ref{fig:halosr} to the power spectrum measured in real-space. As we did before for the case of the dark matter (Fig.~\ref{fig:dmzs}), we indicate the boost in power expected on large scales (small $k$) due to coherent bulk flows of haloes. The boost is calculated from Eq.~\ref{boost} using the effective bias of the halo sample. The plot shows that the redshift-space power spectrum at low wavenumbers is in reasonable agreement with this simple model. However, a range of behaviour is seen at higher wavenumbers. For haloes comparable to $M_{*}$, the boost in power in redshift-space is less than predicted by Eq.~\ref{boost}. For the more extreme, massive haloes, there is actually more power in redshift-space than is suggested by Kaiser's formula. This ``excess'' power was previously noted by \cite{Padilla2002} and \cite{Angulo2005}. 
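To summarize how the halo samples in Figs.~\ref{fig:halosr} and \ref{fig:halosz} are normalized, the effective bias amounts to a band-averaged ratio of power spectra on large scales. A schematic estimate is given below; the choice of a simple mean of the ratio is one of several reasonable weightings rather than a description of the exact procedure used, and the wavenumber limits follow the range quoted above.

\begin{verbatim}
import numpy as np

def effective_bias(k, p_tracer, p_matter, kmin=0.0046, kmax=0.1):
    """Effective bias from the ratio of a halo (or galaxy) power
    spectrum to the matter spectrum on large scales,
    b_eff^2 = <P_tracer / P_matter> for kmin < k/(h/Mpc) < kmax."""
    in_range = (k > kmin) & (k < kmax)
    return np.sqrt(np.mean(p_tracer[in_range] / p_matter[in_range]))
\end{verbatim}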
The Kaiser formula assumes linear perturbation theory and breaks down in the case of objects with strongly nonlinear clustering. In the case of the less extreme haloes, the reduction in power is {\it not} due to virialized motions of haloes within larger structures. The halo finder we have used is designed to return an overdensity corresponding to virialized structures and not substructures. If the haloes were really part of a larger structure and were executing random motions, the group finder would simply have lumped them together as one larger structure. We are perhaps seeing instead haloes that have started to merge with one another, and whose motions have broken away from a coherent large scale flow. We know of no analytical description of the redshift-space clustering of dark matter haloes which explains this behaviour. \subsection{The power spectrum of galaxies} \label{ssec:galaxies} \begin{figure*} \includegraphics[width=14.5cm]{eps/f10.eps} \caption{ The power spectrum of different galaxy samples measured in real-space, divided by the square of an effective bias parameter and the appropriately scaled linear perturbation theory power spectrum. The sample definition and the value of the effective bias used are given by the key. The power spectrum of the dark matter in real-space, also divided by the linear perturbation theory spectrum, is shown by the black dashed line. The left hand panel shows the ratios at $z=0$ and the right hand panel at $z=1$. \label{fig:galr}} \end{figure*} \begin{figure*} \includegraphics[width=14.5cm]{eps/f11.eps} \caption{ The ratio of the power spectrum of galaxies measured in redshift-space to that in real-space, at $z=0$ (left) and $z=1$ (right). The samples are defined by the key in each panel. The dotted horizontal lines show the predictions of Eq.~\ref{boost} for the various samples. \label{fig:galz}} \end{figure*} The galaxy power spectrum can be very different from the power spectrum of a sample of dark matter haloes. The way in which the galaxies are distributed among haloes changes the form of the power spectrum. In a mass-limited sample of haloes, the contribution of each halo to the power spectrum can be determined through its space density, which acts as a weighting factor in the computation of the clustering signal. The number of galaxies per halo acts to modify this weight, e.g. more massive haloes could contain more galaxies than less massive haloes. Furthermore, the presence of satellite galaxies within a halo means that one expects to see a damping in power on small scales in redshift-space, due to the random motions of the satellites within the virialized dark halo. The precise modification of the power spectrum depends in detail on how galaxies populate dark matter haloes. As we discussed in \S 2.2, we have carried out an {\it ab initio} calculation of the number of galaxies per halo, using a semi-analytic model of galaxy formation. We are able to predict observable properties of galaxies, such as broadband magnitudes and the strength of emission lines. We consider a range of galaxy samples, defined either by a magnitude limit alone (set in the $R$-band) or by combining an $R$-band magnitude limit with a colour selection (in $R-I$) or a cut on the strength of the OII[3727] emission line: \begin{itemize} \item Sample A: magnitude-limited to reach a space density of $5 \times 10^{-4}\,h^{3}\,{\rm Mpc}^{-3}$. \item Sample B: magnitude-limited to reach half the space density of sample A, i.e.
$2.5 \times 10^{-4}\,h^{3}\,{\rm Mpc}^{-3}$. \item Sample C: the reddest 50\% of galaxies from sample A, using the $R-I$ colour. \item Sample D: the 50\% of galaxies from sample A with the strongest emission lines, using the equivalent width of OII[3727]. \item Sample E: the bluest 50\% of galaxies from sample A, using the $R-I$ colour. \item Sample F: the 50\% of galaxies from sample A with the weakest emission lines, using the equivalent width of OII[3727]. \end{itemize} The power spectra measured in real-space from the various galaxy samples are plotted in Fig.~\ref{fig:galr}. The spectra have been divided by the linear perturbation theory power spectrum multiplied by the square of an effective bias factor, which was estimated by comparing the galaxy spectra to the power spectrum measured for the dark matter for wavenumbers $k < 0.1 h {\rm Mpc}^{-1}$. In all cases, for the space densities we have chosen, the effective bias factors estimated for the samples are modest. For comparison, the ratio of the power spectrum of the dark matter in real-space to the linear theory prediction is also plotted, using a dashed line. The deviation of the dashed line from unity shows where nonlinear effects are important for the dark matter. Any differences between the plotted ratios for galaxies and mass indicate a scale-dependent bias. The comparison between the dashed and solid curves in Fig.~\ref{fig:galr} shows that a constant bias is only a good approximation on large scales, $k < 0.15 h {\rm Mpc}^{-1}$. The redshift-space distortion in the galaxy power spectrum is shown in Fig.~\ref{fig:galz}, where we plot the ratio of the redshift-space spectrum to the real-space spectrum for the galaxy samples shown in Fig.~\ref{fig:galr}. The horizontal lines show the Kaiser boost (Eq.~\ref{boost}) expected for the effective bias of the galaxy sample. This ratio is only attained on the very largest scales and seems to be an overestimate of the size of the effect at $z=1$. The damping of the power on intermediate and small scales is readily apparent and, unlike the case with dark matter haloes, is well described by the form given in Eq.~\ref{damp}. \section{Constraining the Dark Energy Equation of State} \begin{figure} \includegraphics[width=8.5cm]{eps/f12.eps} \includegraphics[width=8.5cm]{eps/f12_new.eps} \caption{ The relation between the dark energy equation of state parameter, $w$, and the stretch parameter, $\alpha$, defined by Eq.~\ref{eq:alpha}, for perturbations in the equation of state around $w_{\rm true}=-1$. Two cases are shown. In the upper panel, the values of the other cosmological parameters are kept fixed. In the lower panel, the ratio of the sound horizon scale to the angular diameter distance to the last scattering surface is held fixed. The relation between $\alpha$ and $w$ is shown for $z=1$ (solid lines) and $z=3$ (dashed lines). The horizontal and vertical lines guide the eye to show how a $1\%$ error in $\alpha$ translates into an error in $w$. \label{fig:walpha} } \end{figure} In this section we outline the procedures we follow to place constraints on the dark energy equation of state parameter, $w$, by measuring the length scale imprinted by baryonic acoustic oscillations on the power spectrum of the various tracers of the density field. The transformation of a measurement of a distance scale into a constraint on $w$ requires various approximations to be made, and depends upon the survey in question and upon the time variation assumed for the dark energy.
Nevertheless, it is instructive to go through this exercise, bearing these caveats in mind, to get a feel for how well future experiments will be able to measure $w$ for the case of a constant equation of state. The form of the power spectrum of density fluctuations contains information about basic cosmological parameters, and measurements of the galaxy power spectrum on large scales have been exploited to extract the values of these parameters (e.g. \citealt{Cole2005, Sanchez2006, Tegmark2006, Padmanabhan2007, Percival2007}). The apparent scale of features in the power spectrum offers another route to constrain selected cosmological parameters through the dependence of the distances parallel and perpendicular to the line of sight on the matter density parameter, $\Omega_{\rm M}$, the dark energy density parameter, $\Omega_{\rm DE}$, the dark energy equation of state parameter, $w$, and the Hubble constant. For such an approach to work, we either need to know the true physical scale of a particular feature in the power spectrum beforehand or to compare the relative size of a feature when measured parallel and perpendicular to the line of sight (Alcock \& Paczynski 1979). The baryonic oscillations present a promising candidate for such a feature. If we assume for the sake of argument that the cosmological parameters, apart from the equation of state of the dark energy, are well constrained, then the scale of the acoustic oscillations becomes a standard ruler. These features are expected on smaller scales than the turnover and have already been seen in current surveys at low redshift, although at too low a signal-to-noise ratio to use in isolation to extract a competitive constraint on the dark energy equation of state (Cole et~al. 2005; Eisenstein et~al. 2005). We can see how the value of the equation of state parameter of the dark energy influences the form of the BAO with the following simple argument. To measure the power spectrum of galaxy clustering, we need to convert the angular positions and redshifts of the galaxies into comoving spatial separations. This requires a choice to be made for values of the cosmological parameters, including $w$. In our case, we set the parameters equal to the values used in the N-body simulations, with $w=w_{\rm true} = -1$ for the particular case we have run. The effect of a change in the value of $w$, $w_{\rm assumed} = w_{\rm true} + \delta w$, is to change the separations between pairs of galaxies, which leads to a change in the appearance of the power spectrum. For small perturbations away from the true equation of state, we assume that the alteration in the measured power spectrum can be represented by a rescaling of the wavenumber from $k_{\rm true}$ to $k_{\rm app}$. The ratio of these wavenumbers gives a ``stretch'' parameter, $\alpha$, which describes the change in the recovered oscillation scale: \begin{equation} \alpha = \frac{k_{\rm app}}{k_{\rm true}}. \end{equation} If $w_{\rm assumed} = w_{\rm true}$, then there is no shift in the BAO in the estimated power spectrum and $\alpha=1$.
In the case of a wide-angle, deep galaxy survey with spectroscopic redshifts, the stretch parameter can be approximated by: \begin{equation} \alpha \approx \left( \frac{ D_A(z,w_{\rm assumed}) }{ D_A(z,w_{\rm true}) } \right)^{-2/3} \left( \frac{ H(z,w_{\rm true}) }{ H(z,w_{\rm assumed}) } \right)^{1/3}, \label{eq:alpha} \end{equation} where \begin{eqnarray} H(z,w) &=& H_0 \left[ \Omega_{\rm M} (1+z)^3+\Omega_{\rm DE} (1+z)^{3(1+w)} \right]^{1/2} \\ D_A(z,w) &=& \frac{c}{1+z}\int_{0}^{z}\frac{{\rm d}z'}{H(z',w)}. \label{eq:hz} \end{eqnarray} The values of the exponents in Eq.~\ref{eq:alpha}, $2/3$ for the distance transverse to the line of sight and $1/3$ for the distance parallel to the line of sight, are motivated by the number of Cartesian components in these directions (e.g. \citealt{Eisenstein2005}). The precise value of these exponents will depend upon the geometry and construction of the galaxy survey. For example, in a survey which relies upon photometric redshifts, the exponent parallel to the line of sight would be greatly reduced and it would be beneficial to compute the power spectrum transverse to the line of sight. Note that in Eqs.~9 and 10 we assume that $w$ is independent of redshift. There are many models in which $w$ is a function of redshift. In this case, the exponent of $\Omega_{\rm DE}$ in the expression for the Hubble parameter (Eq.~9) would be replaced by an integral over $w(z)$. It is instructive to see how the constraints on $\alpha$ translate into limits on the value of $w$. We can do this approximately using Eq.~\ref{eq:alpha}, for the case of a redshift-independent equation of state, considering perturbations around $w_{\rm true}=-1$. We consider two illustrative cases: a ``pessimistic'' case, in which we consider the constraints from BAO in isolation from any other data which constrain the cosmological parameters, and an ``optimistic'' case, in which we perturb $w$ and only consider cosmological models that give similar predictions for the CMB.\footnote{We acknowledge the referee for suggesting this second case to us and for encouraging us to perform the calculation.} The translation in the pessimistic case is shown in the upper panel of Fig.~\ref{fig:walpha} for two different redshifts. Here we have assumed fixed values for $\Omega_{\rm M}$ and $\Omega_\Lambda$ and we have not marginalized over these parameters. This is the case discussed most commonly in the literature. Under these conditions, at $z=1$, a $1\%$ error in $\alpha$ corresponds approximately to a $4\%$ error in the value of $w$. At $z=3$, the boost is about 50\% larger, with $\delta w \approx 6 \delta \alpha$. In the ``optimistic'' case, we only consider models which give the same angular location for the first peak in the CMB spectrum. Hence, when the value of $w$ is perturbed, we restrict our attention to those models which give the same ratio of the sound horizon scale to the angular diameter distance to the last scattering surface as our default cosmology. Given the parametric forms quoted for these distances by Eisenstein \& Hu (1998), this is equivalent to keeping $\Omega_{\rm b}/\Omega_{\rm M}$ and $h$ fixed, and varying $\Omega_{\rm M}$. We have called this case ``optimistic'' because it does not include any error on the fixed parameters. In this scenario, shown in the lower panel of Fig.~\ref{fig:walpha}, the error on $w$ is now only around 50\% larger than the corresponding error on $\alpha$.
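Eq.~\ref{eq:alpha}, together with Eqs.~9 and 10, can be evaluated directly to translate a perturbation in $w$ into a shift in $\alpha$. The sketch below assumes a constant equation of state and treats the density parameters as placeholders held fixed while $w$ is varied, corresponding to the ``pessimistic'' case above.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# fiducial density parameters: placeholders, held fixed as w is varied
OMEGA_M, OMEGA_DE = 0.25, 0.75
C_KMS, H0 = 299792.458, 100.0      # c in km/s, H0 in h km/s/Mpc

def hubble(z, w):
    """H(z) in h km/s/Mpc for a constant equation of state w (Eq. 9)."""
    return H0 * np.sqrt(OMEGA_M * (1 + z)**3 +
                        OMEGA_DE * (1 + z)**(3 * (1 + w)))

def d_angular(z, w):
    """Angular diameter distance in Mpc/h (Eq. 10)."""
    chi, _ = quad(lambda zp: C_KMS / hubble(zp, w), 0.0, z)
    return chi / (1 + z)

def alpha(z, w_assumed, w_true=-1.0):
    """Stretch parameter of Eq. (alpha) for a spectroscopic survey."""
    return ((d_angular(z, w_assumed) / d_angular(z, w_true))**(-2.0 / 3.0) *
            (hubble(z, w_true) / hubble(z, w_assumed))**(1.0 / 3.0))

# e.g. a 5 per cent perturbation in w at z=1
print(alpha(1.0, w_assumed=-0.95))
\end{verbatim}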
We now explore two of the approaches which have been advocated in the literature to measure the value of $w$. Both methods involve making fits to the ratio of a measured power spectrum to a smooth reference spectrum. In the first approach, a parametric form is assumed for the ratio (\citealt{BG2003}). The second approach is more general as it does not assume a specific form for the ratio, but instead uses the linear perturbation theory power spectrum without any further approximations (\citealt{Percival2007}; see also \citealt{Eisenstein2005}). We shall henceforth refer to these methods as the parametric and general schemes, respectively. In their original forms, there are also differences in the way in which a ``featureless'' reference spectrum is constructed, as we will briefly discuss when describing these approaches below. Blake \& Glazebrook (2003; see also Glazebrook \& Blake 2005) studied the feasibility of extracting measurements of the acoustic oscillations from forthcoming galaxy surveys using linear perturbation theory. Their starting point is to divide the power spectrum, including the imprint of baryons, by a smooth reference spectrum which is chosen to be free from any signature of acoustic oscillations. This method therefore does not use any of the information contained in the overall shape of the power spectrum, which Blake \& Glazebrook argue could be susceptible to large scale gradients arising from the effects we discussed in Section 3, such as galaxy bias or redshift-space distortions. Instead, they focused on the location and amplitude of the acoustic oscillations. The smooth reference spectrum is obtained using the zero-baryon transfer function written down by \cite{EH1998}. The parametric form suggested by Blake \& Glazebrook as a fit to the resulting ratio is a Taylor expansion of the ratio of a power spectrum for cold dark matter plus a small baryonic component, divided by a pure cold dark matter power spectrum. The sound horizon, which is a free parameter in their method, is treated as the oscillation wavelength in this parametric form. This is an approximation, as the wavelength of the acoustic oscillations actually changes with wavenumber, albeit slowly, and is therefore not a constant (see eqn.~22 of \citealt{EH1998}). Some authors have criticized this approach due to the sensitivity of the ratio to the choice of the reference power spectrum. \cite{Angulo2005} describe how realistic power spectra, which include nonlinear growth, bias effects and redshift-space distortions, require a ``linearization'' process before they become adequately described by the parametric form put forward by Blake \& Glazebrook. Due to the sensitivity of the ratio to the choice of reference spectrum at low wavenumbers, \cite{Koehler2006} proposed ignoring power spectrum measurements below $k \sim 0.05 h{\rm Mpc}^{-1}$ to avoid this problem (although we note that they also discuss a different approach to measuring the equation of state parameter). \cite{Percival2007} proposed a new technique which has a number of appealing features compared with that of Blake \& Glazebrook. Firstly, the shortcut of fitting an approximate parametric form to the ratio of the measured power spectrum to a reference is dropped in favour of using a full linear perturbation theory power spectrum (with a modification; see later) to model the ratio.
This is completely general, and permits one to use the most accurate description available of the linear perturbation theory power spectrum, such as the tabulated output of {\tt CAMB}. Secondly, the reference power spectrum is defined separately in the case of the data and the linear theory model, by using a coarse rebinning of the relevant power spectrum. The reference is constructed using a spline fit to a reduced number of wavenumber bins over the range in which the spectrum in question is defined. Thus, any deviations in the general form of the measured spectrum away from linear theory are naturally accounted for in the reference spectrum. Thirdly, Percival et~al. allow for a damping of the amplitude of the oscillations in the theoretical ratio beyond some wavenumber, which is treated as a free parameter in their fit. The quality of the fits is dramatically improved when damping of the higher harmonics is allowed. Percival et~al. applied their method to extract the matter density parameter from the power spectrum of luminous red galaxies in the SDSS. The majority of the results we present are obtained using the general method suggested by Percival et~al. For completeness, and because Percival et~al. did not actually apply their method to the extraction of the equation of state parameter, we set out the general approach step-by-step below: \begin{enumerate} \item[1.] A smooth reference spectrum (i.e. without any oscillatory features), $P_{\rm ref}$, is constructed from the measured power spectrum using a cubic spline fit over the wavenumber range $0.0046 < (k/h {\rm Mpc}^{-1}) < 1.2$, using the measured spectrum smoothed over 25 bins in wavenumber. The spline is constrained to pass through the data points in this coarse rebinning of the measured power spectrum. \item[2.] We compute the ratio, $R$(k), of the measured power spectrum, $P(k)$, to the reference spectrum, $P_{\rm ref}(k)$, obtained in step 1: \begin{equation} R(k) = \frac{P(k)}{P_{ \rm ref}(k)}. \end{equation} \item[3.] A linear perturbation theory power spectrum is generated with {\tt CAMB} for the cosmological parameters used in the {\tt BASICC} simulation. A smooth reference spectrum, $P^{\rm L}_{\rm ref}$, is defined for this spectrum in the same manner as described for the measured spectrum in Step 1, using the same wavenumber bins. A ratio, $R_{\rm L}$, is derived for the linear perturbation theory spectrum by dividing by this reference spectrum. \item[4.] The linear theory ratio, $R_{\rm L}$, is compared with the measured ratio, $R$. Two modifications are considered to the linear theory ratio. The first is a stretch or scaling of the wavenumber used in the linear theory ratio, as described above, to mimic the act of changing the dark energy equation of state parameter, $w$. The goal here is to see what variation in $w$ can be tolerated before $R_{\rm L}$ is no longer a good fit to the measured ratio $R$. The second change is to allow for a damping of the oscillations beyond some characteristic wavenumber by multiplying the theoretical power spectrum by a Gaussian filter: \begin{equation} W(k) = \exp \left(- \frac{k^2}{2 k_{\rm nl}^2} \right), \end{equation} where $k_{\rm nl}$ is a free parameter. Hence, the linear theory ratio is modified to: \begin{equation}\label{eq:nl} R_{\rm L}(k) = \left( \frac{P^{L}}{P^{L}_{ \rm ref}} (\alpha k) - 1 \right)\times W(k,k_{\rm nl})+ 1 \end{equation} \item[5.] 
A likelihood is computed for each combination of the parameters $k_{\rm nl}$ and $\alpha$, assuming Gaussian errors: \begin{equation}\label{eq:lik} - 2 \ln L = \chi^2 = \sum_{i} \left( \frac{ R^{i} - R_{\rm L}^{i} }{\sigma^{i}/P^{i}} \right)^2, \end{equation} where the summation is over wavenumber and $\sigma^{i}$ is the error on the power spectrum estimated in the \textit{i}$^{\rm th}$ bin (as given by Eq.~\ref{eq:err}). We generate a grid of models using $200^2$ different combinations of $\alpha$ and $k_{\rm nl}$ in the ranges $[0.9,1.1]$ and $[0,0.4]$, respectively. \item[6.] Finally, the best fit values for $\alpha$ and $k_{\rm nl}$ correspond to those for the model with the maximum likelihood. We obtain confidence limits on the parameter estimation by considering the models within $\Delta \chi^2$ equal to $2.3$ and $6.0$; in the case of a Gaussian likelihood, these would correspond to the $68\%$ ($1$-$\sigma$ error) and $95\%$ ($2$-$\sigma$ error) confidence levels on the best fit. We note that in some cases presented later (see Fig. 15), the distribution of the likelihood is not Gaussian. \end{enumerate} In some cases, we also present constraints on $w$ derived using a slightly modified version of the approach of Blake \& Glazebrook. The main difference is that we follow step 1 to construct a ratio from the measured power spectrum, rather than using a zero-baryon transfer function. One issue to be resolved is the range of wavenumbers which should be used in the fitting process. To address this, we used the power spectrum of the dark matter measured at $z=6$. We systematically varied the minimum and maximum wavenumbers used in our fit and compared the values of the scaling parameter, $\alpha$, recovered. Our results are fairly insensitive to the choice of the maximum wavenumber, particularly when damping of the oscillations is included in the fitting algorithm. However, the recovered $\alpha$ shows a systematic shift once the minimum wavenumber exceeds $k \sim 0.1 h {\rm Mpc}^{-1}$. For minimum wavenumbers smaller than this, there is little difference in the recovered value of $\alpha$ or in the size of the errors on $\alpha$, as these modes have relatively large errors in our simulation. This is encouraging news for realistic survey geometries, for which the power spectrum measured at low wavenumbers will be distorted due to the window function of the survey. In the rest of the paper, we use the power spectrum in the wavenumber interval $k/(h{\rm Mpc}^{-1})=[0, 0.4]$ to constrain the value of $\alpha$. \section{Results} \begin{figure} \includegraphics[width=8.5cm]{eps/f13.eps} \caption{ The power spectra of dark matter particles, dark matter haloes and galaxies at $z=0$ (error bars). The real-space power spectra are plotted in the left hand column and the redshift-space power spectra appear in the right hand column. The red curves show the reference spectra derived from the measured spectra using a cubic spline fit, as described in Section 4. The blue curve is the same in each panel, showing the linear perturbation theory prediction for the $z=0$ matter power spectrum (plotted using a high redshift output obtained from the {\tt BASICC} simulation, which has been scaled in amplitude according to the difference in growth factors between the two epochs expected in linear perturbation theory). The errors on the power spectrum are estimated using Eq.~\ref{eq:err}.
\label{fig:fit1}} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{eps/f14.eps} \caption{ The ratio of the measured power spectrum to a smooth reference spectrum. The symbols correspond to the measurements plotted in Fig.~\ref{fig:fit1} divided by the red curve in each panel of that figure. The red lines here show the best-fitting model in each case using the general method and the blue curves show the best fit for the parametric method. The errors on the power spectrum are estimated using Eq.~\ref{eq:err}.} \label{fig:fit2} \end{figure} In this Section, we present the expected constraints on the dark energy equation of state using the power spectra measured from our simulations. We first show how our algorithm for extracting the equation of state parameter works in practice, for dark matter particles, haloes and galaxies, comparing the results obtained in real-space and redshift-space (\S 5.1). We then assess the need for an accurate model of the linear theory power spectrum and the relative merits of the general and parametric fitting procedures (\S 5.2). In \S 5.3, we present our main results, which are summarized in Fig.~\ref{fig:res} and Table~\ref{tab:gals}; the table lists the best-fitting value of $\alpha$ and the estimated error for different samples of galaxies at $z=1$, along with the corresponding fractional error in $w$. Finally, in \S 5.4, we use the results presented in \S 5.3 to make forecasts for the accuracy with which several forthcoming surveys will be able to measure the value of $w$. \subsection{The algorithm to extract the scale of the acoustic oscillations in action} We present a series of plots for samples at $z=0$, which illustrate the various stages in the fitting process. Fig.~\ref{fig:fit1} shows the power spectra measured for different tracers, both in real-space and redshift-space. The sample of dark matter haloes includes all objects with a mass in excess of $5.4 \times 10^{12}\,h^{-1}\,M_{\odot}$. The galaxy sample is magnitude-limited with a space density of $\bar{n}=5\times10^{-4}\,h^{3}\,{\rm Mpc}^{-3}$. For reference, the linear perturbation theory power spectrum for the mass at $z=0$ is shown by the blue line in each panel: this is the power spectrum of the dark matter measured in real-space at $z=15$, scaled by the ratio of growth factors in order to have the amplitude expected at $z=0$. It is important to bear in mind that the $y$-axis in this plot covers more than a factor of one thousand in amplitude. Fig.~\ref{fig:fit1} shows that there is considerable variation in the power spectra measured for different types of objects, and between the results in real-space and redshift-space, which reinforces the points made in Section 3 regarding deviations from the predictions of linear perturbation theory on large scales. The red curve in each panel shows the corresponding reference power spectrum, which is constructed from the measured power spectrum as explained in Section~4. In Fig.~\ref{fig:fit2}, the symbols show the ratio obtained by dividing the measured power spectrum by the appropriate reference spectrum for the same samples plotted in Fig.~\ref{fig:fit1}. The ratios look remarkably similar for the different tracers up to $k \approx 0.15 h {\rm Mpc}^{-1}$. Beyond this wavenumber, the appearance of the oscillations varies from panel to panel, but the ratio stays close to unity. This similarity illustrates how well the approach for producing the reference spectrum works.
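The reference-spectrum construction and the likelihood grid of Section 4 can be condensed into a short script. The sketch below assumes arrays of wavenumbers, measured power and errors, together with a linear theory spectrum tabulated on the same wavenumbers; the bin count and parameter ranges follow the text, but the implementation details (for example, the exact binning and spline) are illustrative rather than a description of the code actually used.

\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def reference_spectrum(k, pk, n_coarse=25):
    """Smooth reference: a cubic spline through the spectrum rebinned
    into n_coarse wavenumber bins (step 1 of Section 4)."""
    edges = np.linspace(k.min(), k.max(), n_coarse + 1)
    idx = np.clip(np.digitize(k, edges) - 1, 0, n_coarse - 1)
    counts = np.bincount(idx, minlength=n_coarse)
    k_c = np.bincount(idx, weights=k, minlength=n_coarse)
    p_c = np.bincount(idx, weights=pk, minlength=n_coarse)
    good = counts > 0
    spline = CubicSpline(k_c[good] / counts[good], p_c[good] / counts[good])
    return spline(k)

def fit_alpha_knl(k, pk, sigma, pk_lin, kmax=0.4):
    """Grid search over the stretch alpha and damping scale k_nl,
    steps 2-6 of Section 4, assuming Gaussian errors (Eq. lik)."""
    ratio = pk / reference_spectrum(k, pk)
    ratio_lin = CubicSpline(k, pk_lin / reference_spectrum(k, pk_lin))
    use = k < kmax
    alphas = np.linspace(0.9, 1.1, 200)
    knls = np.linspace(1.0e-3, 0.4, 200)   # start just above zero
    chi2 = np.empty((alphas.size, knls.size))
    for i, a in enumerate(alphas):
        shifted = ratio_lin(np.clip(a * k[use], k.min(), k.max()))
        for j, knl in enumerate(knls):
            damping = np.exp(-k[use]**2 / (2.0 * knl**2))
            model = (shifted - 1.0) * damping + 1.0
            chi2[i, j] = np.sum(((ratio[use] - model) * pk[use] / sigma[use])**2)
    i_best, j_best = np.unravel_index(np.argmin(chi2), chi2.shape)
    return alphas[i_best], knls[j_best]
\end{verbatim}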
The red curves in each panel show the best-fitting model produced in the general scheme whilst the blue curves show the fit obtained in the parametric approach. The best fits have somewhat different forms at wavenumbers below $ k \sim 0.05 h {\rm Mpc}^{-1}$. The constraints on the values of the parameters $k_{\rm nl}$ and $\alpha$ are presented in Fig.~\ref{fig:fit3}, where we show the 1, 2 and 3-$\sigma$ ranges in the case of two parameters, computed assuming Gaussian errors. There is a weak systematic trend for the best-fitting result for $\alpha$ to shift to lower values when galaxies are considered instead of the dark matter. The errors on the recovered parameters are larger in the case of galaxies than for the dark matter or for haloes, reflecting the lower signal-to-noise of the predicted galaxy power spectrum. \begin{figure} \includegraphics[width=8.5cm]{eps/f15.eps} \caption{ The constraints on the parameters $k_{\rm nl}$ and $\alpha$ for the power spectra plotted in Fig.~\ref{fig:fit1}. The contours show the 1, 2 and $3$-$\sigma$ confidence limits for two parameters. \label{fig:fit3}} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{eps/f16.eps} \caption{ The best fit value for the scaling parameter $\alpha$, recovered from the ensemble of low resolution simulations, using the dark matter power spectrum in real-space. The results are shown for two different redshifts: $z=3.8$ (top) and $z=0$ (bottom). The histograms marked {\tt CAMB} and BG03 show the results for the general and parametric fitting procedures, respectively. The blue histogram shows the results if the general method is followed with the {\tt CAMB} power spectrum replaced by the formula for the linear theory power spectrum presented by Eisenstein \& Hu (1998). \label{fig:hist}} \end{figure} \subsection{Two tests of the algorithm} Before presenting the main results of applying our algorithm to extract the acoustic oscillation scales for various samples drawn from the {\tt BASICC} run, we use the {\tt L-BASICC} ensemble to address two questions: 1) How accurately do we need to model the linear perturbation theory matter power spectrum to avoid introducing a systematic bias into the results for the oscillation scale? 2) How does the performance of the new method for constraining the oscillation scale introduced in this paper compare with earlier approaches? To help answer these questions, we use the power spectrum of the dark matter measured from the {\tt L-BASICC} runs in real-space at $z=0$ and $z=3.8$, the highest output redshift besides the initial conditions. The results of applying our standard algorithm for extracting the oscillation scale are shown by the red histogram labelled {\tt CAMB} in Fig.~\ref{fig:hist}, which gives the distribution of the best-fitting value of $\alpha$. The ensemble returns an unbiased mean value for the stretch parameter, $\alpha=1$. At $z=3.8$, the standard deviation on the best fit is 0.3\%; by $z=0$, this rises to $1\%$. To address the first issue above, regarding how well we need to model the linear theory power spectrum to get an unbiased result for the oscillation scale, we replace the {\tt CAMB} generated power spectrum in our algorithm by the approximation introduced by \cite{EH1998}. 
These authors proposed a physically motivated expression for the linear theory power spectrum, with parameters set to achieve a reasonable match to the results obtained from detailed calculations using Boltzmann codes over a much wider range of wavenumbers than are typically considered for baryonic acoustic oscillations. Eisenstein \& Hu's motivation was to provide physical insight into the form of the power spectrum in a cold dark matter universe and to produce a code which could rapidly calculate large numbers of power spectra for grids of cosmological parameters. Of course, the correct approach in our fitting procedure is to use the same code to compute the linear theory spectrum as was used to generate the initial conditions in the N-body simulation. In the case of real data, we do not have the luxury of knowing which Boltzmann code to use, so we should use the one which claims to be the most accurate representation of the model we are testing. Nevertheless, it is instructive to perform this test to see what error is introduced by using a less accurate calculation of the transfer function. The choice of Eisenstein \& Hu's code is particularly relevant for this purpose as Blake \& Glazebrook used this formalism to inspire their parametric expression to fit the acoustic oscillations. The use of Eisenstein \& Hu's formalism to model the linear theory power spectra generated with {\tt CAMB} introduces a small but measurable systematic shift in the mean value of $\alpha$. At $z=0$, the mean $\alpha$ indicated by the blue histogram in Fig.~\ref{fig:hist} is $0.98 \pm 0.01$. We answer the second question by adopting the fitting algorithm of Blake \& Glazebrook (2003), which assumes a parametric form for the ratio of the power spectrum with baryons to a smooth, cold dark matter only power spectrum. Changing the fitting method in this way also introduces a similar magnitude of shift in the best-fitting value of $\alpha$. The green histogram shows the results when we use this parametric approach; the mean value of $\alpha$ in this case is $1.01 \pm 0.01$. These shifts are small but one must bear in mind that the corresponding bias in the dark energy equation of state parameter is several times larger than the shift in $\alpha$. 
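For reference, all of the parameter constraints discussed in this section reduce to a grid evaluation of Eq.~\ref{eq:lik} followed by the $\Delta \chi^2$ thresholds described in Section 4. The following minimal Python sketch illustrates this final stage of the algorithm (it is not the code used to produce our results; the arrays \texttt{R}, \texttt{sigma} and \texttt{P} denote the measured ratio, the power spectrum error and the power spectrum in each bin, and \texttt{model\_ratio} stands for the damped linear-theory ratio):
\begin{verbatim}
import numpy as np

def fit_alpha_knl(k, R, sigma, P, model_ratio,
                  alphas=np.linspace(0.9, 1.1, 200),
                  knls=np.linspace(0.0, 0.4, 200)):
    """Brute-force chi^2 grid search over the (alpha, k_nl) plane."""
    chi2 = np.empty((alphas.size, knls.size))
    for i, a in enumerate(alphas):
        for j, knl in enumerate(knls):
            RL = model_ratio(k, a, knl)   # damped linear-theory ratio
            chi2[i, j] = np.sum(((R - RL) / (sigma / P)) ** 2)
    i0, j0 = np.unravel_index(np.argmin(chi2), chi2.shape)
    # For a Gaussian likelihood, Delta chi^2 <= 2.3 (6.0) encloses the
    # 68 (95) per cent region for two fitted parameters.
    conf68 = chi2 <= chi2[i0, j0] + 2.3
    conf95 = chi2 <= chi2[i0, j0] + 6.0
    return alphas[i0], knls[j0], conf68, conf95
\end{verbatim}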
\begin{table*} \begin{center} \begin{tabular}{ccc|cccccc|cccccc} \mc{15}{c}{$z=0$} \\ \hline \hline & Sel I & Sel II & \mc{6}{c}{Real-space} & \mc{6}{c}{Redshift-space} \\ id & $\bar{n}$ & & $b$ & $\bar{n}P$ & $k_{\rm nl } $ & $\alpha$ & $ \Delta \alpha$ & $\Delta \alpha $ & $b$ & $\bar{n}P$ & $k_{\rm nl } $ & $\alpha$ & $ \Delta \alpha$ & $\Delta \alpha $ \\ & $ h^{3}{\rm Mpc}^{-3}$ & & & & $h/{\rm Mpc}$ & & \% & \% & & & $h/{\rm Mpc}$ & & \% & \% \\ & & & & & & & & (SE07) & & & & & & (SE07) \\ \hline DM & $ $ & & 0.99 & 3567 & 0.120 & 0.993 & 0.91 & 1.02 & 1.15 & 3635 & 0.110 & 0.989 & 1.05 & 1.17\\ A & $5.0e-4$ & & 1.18 & 1.78 & 0.144 & 0.975 & 1.16 & 1.10 & 1.32 & 2.15 & 0.125 & 0.972 & 1.26 & 1.23\\ B & $2.5e-4$ & & 1.33 & 1.11 & 0.155 & 0.971 & 1.34 & 1.18 & 1.47 & 1.34 & 0.139 & 0.966 & 1.35 & 1.23\\ C & $2.5e-4$ & red & 1.32 & 1.15 & 0.152 & 0.978 & 1.35 & 1.21 & 1.46 & 1.36 & 0.127 & 0.975 & 1.49 & 1.37\\ D & $2.5e-4$ & strong & 1.06 & 0.67 & 0.155 & 0.956 & 1.75 & 1.41 & 1.20 & 0.86 & 0.138 & 0.956 & 1.67 & 1.42\\ E & $2.5e-4$ & blue & 1.03 & 0.66 & 0.141 & 0.964 & 1.92 & 1.56 & 1.17 & 0.83 & 0.130 & 0.962 & 1.79 & 1.53\\ F & $2.5e-4$ & weak & 1.30 & 1.16 & 0.132 & 0.980 & 1.55 & 1.40 & 1.44 & 1.34 & 0.115 & 0.972 & 1.66 & 1.54\\ haloes & $5.9e-5$ & & 1.56 & 0.81 & 0.197 & 0.980 & 1.32 & 1.07 & 1.71 & 1.04 & 0.148 & 0.975 & 1.43 & 1.25\\ \hline \end{tabular} \\ \begin{tabular}{ccc|cccccc|cccccc} \\ \mc{15}{c}{$z=1$} \\ \hline \hline & Sel I & Sel II & \mc{5}{c}{Real-space} & \mc{5}{c}{Redshift-space} \\ id & $\bar{n}$ & & $b$ & $\bar{n}P$ & $k_{\rm nl } $ & $\alpha$ & $ \Delta \alpha$ & $\Delta \alpha $ & $b$ & $\bar{n}P$ & $k_{\rm nl } $ & $\alpha$ & $ \Delta \alpha$ & $\Delta \alpha $ \\ & $ h^{3}{\rm Mpc}^{-3}$ & & & & $h/{\rm Mpc}$ & & \% & \% & & & $h/{\rm Mpc}$ & & \% & \% \\ & & & & & & & & (SE07) & & & & & & (SE07) \\ \hline DM & $ $ & & 0.99 & 1269 & 0.163 & 0.997 & 0.61 & 0.68 & 1.29 & 1710 & 0.133 & 0.991 & 0.77 & 0.88\\ A & $5.0e-4$ & & 1.34 & 0.87 & 0.188 & 0.980 & 1.30 & 1.10 & 1.60 & 1.19 & 0.164 & 0.976 & 1.21 & 1.07\\ B & $2.5e-4$ & & 1.31 & 0.43 & 0.212 & 0.975 & 2.02 & 1.47 & 1.57 & 0.59 & 0.174 & 0.970 & 1.72 & 1.38\\ C & $2.5e-4$ & red & 1.39 & 0.48 & 0.235 & 0.977 & 1.81 & 1.32 & 1.65 & 0.65 & 0.208 & 0.975 & 1.52 & 1.17\\ D & $2.5e-4$ & strong & 1.31 & 0.40 & 0.624 & 0.971 & 1.90 & 1.14 & 1.57 & 0.55 & 0.186 & 0.970 & 1.79 & 1.31\\ E & $2.5e-4$ & blue & 1.30 & 0.40 & 0.219 & 0.973 & 2.31 & 1.47 & 1.56 & 0.54 & 0.159 & 0.962 & 1.98 & 1.48\\ F & $2.5e-4$ & weak & 1.37 & 0.47 & 0.218 & 0.987 & 1.91 & 1.38 & 1.63 & 0.64 & 0.190 & 0.978 & 1.61 & 1.25\\ haloes & $5.9e-5$ & & 3.07 & 0.59 & 0.226 & 1.000 & 1.65 & 1.24 & 3.34 & 0.77 & 0.146 & 0.994 & 1.82 & 1.53\\ \hline \end{tabular} \caption{ The results of applying the general fitting procedure described in \S 4 to power spectra measured for different galaxy catalogues at $z=0$ (top) and $z=1$ (bottom). In each table, the first row gives the results for the dark matter and the final row lists results for a sample of dark matter haloes (all haloes with mass in excess of $2.7 \times 10^{13}\,h^{-1}\,M_{\odot}$). The first column gives the label of the sample, as defined in Section 2. The second column gives the space density of galaxies. The first two samples, A and B, are constructed by applying a magnitude limit. Samples C-F are derived from sample A by applying a second selection criterion, as listed in the third column. Samples C and E correspond to the red and blue halves of sample A respectively. 
Samples D and F comprise the $50\%$ of galaxies from sample A with the strongest and weakest (in terms of equivalent width) OII[3727] emission lines, respectively. Column 4 (10) gives the effective bias of the sample, computed from the square root of the ratio of the measured galaxy power spectrum in real (redshift) space to the real space power spectrum of the dark matter over the wavenumber interval $0.01 < (k/h {\rm Mpc}^{-1}) < 0.05$. Column 5 (11) gives the ratio of the clustering signal to the shot noise for the power spectrum measurement, averaged over the wavenumber range $0.19 < (k/h {\rm Mpc}^{-1}) < 0.21$. Columns 6 and 7 (12 and 13) give the best-fitting values of the scaling parameter $\alpha$ and the $1-\sigma$ error on the fit, in real (redshift) space. Column 9 (15) gives the error expected on the scale parameter from Seo \& Eisenstein (2007). The rms Lagrangian displacement was set equal to the best-fitting nonlinear scale, $1/k_{\rm nl}$, in each case. } \label{tab:gals} \end{center} \end{table*} \begin{figure} \includegraphics[width=8.5cm]{eps/f17.eps} \caption{ The best-fitting value of the scale factor $\alpha$ as a function of redshift, for different tracers of the density distribution, in real-space (top) and redshift-space (bottom). The symbols show results from the high resolution {\tt BASICC} simulation: dark matter (blue triangles), dark matter haloes with mass in excess of $5.4\times10^{12}\,h^{-1}\,M_{\odot}$ (green circles) and galaxies (red squares). The error bars show the 1-$\sigma$ range on $\alpha$, calculated from $\Delta \chi^2$. The hatched region shows the central $68\%$ range of the results obtained using the dark matter in the ensemble of low resolution simulations. Recall that $\alpha=1$ corresponds to an unbiased measurement of the equation of state parameter, $w$, and that $\delta w \approx 4 \delta \alpha$ at $z=1$. \label{fig:alpha} } \end{figure} \begin{figure} \includegraphics[width=8.5cm]{eps/f18.eps} \caption{ The best-fitting value of the damping scale $k_{\rm nl}$ as a function of redshift, for different tracers of the density distribution, in real-space (top) and redshift-space (bottom). The symbols show results from the high resolution {\tt BASICC} simulation: dark matter (blue triangles), dark matter haloes with mass in excess of $5.4\times10^{12}\,h^{-1}\,M_{\odot}$ (green circles) and galaxies (red squares). The error bars show the 1-$\sigma$ range on $k_{\rm nl}$. The hatched region shows the central $68\%$ range of the results obtained using the dark matter in the ensemble of low resolution simulations. \label{fig:knl}} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{eps/f19.eps} \caption{ The recovered value of the stretch parameter $\alpha$ for the galaxy samples listed in Table 2. Recall that $\alpha=1$ corresponds to the equation of state parameter $w=-1$. At $z=1$, a shift in $\alpha$ away from unity implies a shift in the recovered value of $w$ given by $\delta w \approx 4 \delta \alpha$.} \label{fig:res} \end{figure} \subsection{The main results} We now turn our attention back to the general results shown in Fig.~\ref{fig:alpha} and Fig.~\ref{fig:knl}, and discuss the conclusions for different tracers of the density field in turn. In these plots, the symbols refer to the constraints obtained from the high resolution {\tt BASICC} simulation and the shading shows results from the ensemble of low resolution simulations, {\tt L-BASICC}. 
The blue triangles in Fig.~\ref{fig:alpha} show the values obtained for $\alpha$ from the power spectrum of the dark matter. There is a trend for the best-fitting value to deviate away from unity with decreasing redshift, although the result at $z=0$ is still within 1-$\sigma$ of $\alpha=1$. The mean of the ensemble of low resolution runs does not, however, show any deviation away from $\alpha=1$ as a function of redshift, although the scatter on the recovered value of $\alpha$ increases towards the present day. If we examine the analogous results for individual simulations taken from the low resolution ensemble, we find a wide range of behaviour for the best-fitting value of $\alpha$ for the dark matter. Some low resolution runs give results which look like the high resolution one, whereas others show deviations away from $\alpha=1$, with values of $\alpha > 1$, as $z=0$ is approached. The trend seen for the dark matter in the high resolution run serves to illustrate the importance of sampling fluctuations, even in such large volumes. In redshift-space, the scatter in the recovered value of $\alpha$ is larger than in real-space (see also \citealt{Seo2005,Eisenstein2006}). To obtain the errors quoted in Table 2 on the parameters $\alpha$ and $k_{\rm nl}$, we assume Gaussian mode counting errors on the power spectra measured in the {\tt BASICC} simulation, as given by Eq. 3. In Fig. 3, we showed that this simple estimate of the errors on the power spectrum agreed fairly well with the scatter found in the measurements from the {\tt L-BASICC} ensemble, particularly for the case of the dark matter. We have extended this comparison to look at how the errors on $\alpha$ and $k_{\rm nl}$ quoted in Table 2 match the scatter in these parameters obtained from the {\tt L-BASICC} runs. We find the scatter estimated from the ensemble is somewhat larger than the error inferred using the mode counting argument. At $z=0$, the mode counting errors are $20\%$ smaller for $\alpha$ for the dark matter in real space. In redshift-space, the discrepancy increases to nearly $30\%$. The mismatch between the two estimates is smaller at $z=1$. The level of disagreement is not remarkable. It could be the case that the scatter from the ensemble has not converged, even with 50 realizations of the density field. A more likely explanation, particularly in view of the redshift dependence of the discrepancy, is mode coupling in the power spectrum measurements arising from nonlinearities and redshift space distortions, which could increase the variance in the power spectrum compared with the Gaussian estimate. Fig.~\ref{fig:knl} shows that there is a strong trend for the best-fitting value of the smoothing scale, $k_{\rm nl}$, to decrease with decreasing redshift. This results from the oscillations being erased and modified down to smaller wavenumbers as the nonlinearities in the density field grow. The variation of the smoothing scale $k_{\rm nl}$ on redshift is well described by a linear relation: $k_{\rm nl} = a + b z$. In real-space, $a = 0.108 \pm 0.0082$ and $b = 0.054 \pm 0.0110$. In redshift-space, $a = 0.096\pm 0.0074$ and $b = 0.036 \pm 0.0094$. The constraints on $\alpha$ and $k_{\rm nl}$ for dark matter haloes (with masses in excess of $5\times 10^{12}\,h^{-1}\,M_{\odot}$) are plotted with green circles in Figs.~\ref{fig:alpha} and ~\ref{fig:knl}. 
The parameter constraints obtained for this sample of haloes are very similar to those found for the dark matter, except for the value of $k_{\rm nl}$ at high redshift. Considering haloes in place of dark matter represents a step closer to the observations, so it is reassuring that the conclusions do not change significantly. Finally, in Figs.~\ref{fig:alpha} and ~\ref{fig:knl}, we show using red squares the results for magnitude-limited samples of galaxies. The magnitude limit is varied with redshift such that in each case the galaxy sample has a space density $\bar{n}=5\times10^{-4}\,h^{3}\,{\rm Mpc}^{-3}$. There is a weak systematic shift in the best-fitting values of $\alpha$ compared with the results obtained for the dark matter. At the same time, the signal-to-noise of the power spectrum measurement is lower for the galaxy samples than for the dark matter, so the errors on the best-fitting parameters are correspondingly larger for the galaxies. The galaxy samples are consistent with $\alpha=1$ at slightly over 1-$\sigma$. The size of this systematic shift is comparable to the random measurement errors, so we cannot reach a firm conclusion. It will be very interesting to repeat our calculation with a larger simulation volume to reduce the size of the random errors and to assess if such shifts could genuinely provide an ultimate limitation to the accuracy of this method. As a result of using a semi-analytic galaxy formation model which makes predictions for the observable properties of galaxies, we can vary the selection criteria used to construct samples and compare the constraints on the equation of state. The results of this exercise at $z=1$ are presented in Table~2 and in Fig.~\ref{fig:res}, where we consider a range of samples defined either by a simple magnitude limit, or by a magnitude limit applied in combination with a colour cut or a restriction on the strength of an emission line. The key result from comparing the constraints for different samples is that whilst there are no strong systematic differences between the results, the accuracy of the constraints varies significantly. For example, using a catalogue of red galaxies, we predict that one could measure the dark energy equation of state with an accuracy $\sim 40\%$ better than with the same number density of galaxies chosen by the strength of their emission lines. We compare the error on the acoustic scale extracted from our simulations with the results of the prescription set out by \cite{Seo2007}. The \cite{Seo2007} algorithm contains a parameter which is equivalent to $1/k_{\rm nl}$. If we use our best-fitting values of $k_{\rm nl}$, we find that the Seo \& Eisenstein prescription gives a similar estimate of the error on the acoustic scale to that we obtain by fitting directly to the simulation results. However, if we use the value of $k_{\rm nl}$ suggested by \cite{Seo2007}, which they extract from a dark matter simulation, we find that their prescription gives an optimistic estimate of the error on $\alpha$. The reason we recover a larger value of $k_{\rm nl}$ from our galaxy samples than we do for the dark matter is the increased discreteness shot noise in these samples, which results in noisier power spectra at high $k$. This causes an elongation in the confidence levels in the $k_{\rm nl}$ versus $\alpha$ plane. It is interesting to compare the results for the dark matter and for the galaxy samples with those for a set of massive haloes. 
Table 2 also gives the constraints on $\alpha$ and $k_{\rm nl}$ for a sample of massive haloes (see also Angulo et~al. 2005). There are 142\,000 haloes in the {\tt BASICC} output at z=1 with a mass in excess of $2.7 \times 10^{13}\,h^{-1}\,M_{\odot}$. Although the effective bias of this sample of massive haloes is greater than that of any of the galaxy samples listed in Table~2, the reduction in space density means that $\bar{n}P \approx 1$ and the estimated error on $w$ is comparable to that found for the galaxy samples. \subsection{Forecasts for future surveys} We can use the results presented in Table~2 to make a rough estimate of the accuracy with which future surveys are likely to be able to constrain the scale of the acoustic oscillations. This can be done using a simple calculation motivated by the expression for the fractional error in the power spectrum given by Eq. 3. We assume that the error in the distance scale, $\Delta \alpha$, scales with the volume of the survey, $V_{\rm survey}$, and the product of the space density of galaxies and the power spectrum, $\bar{n}P(k=0.2h{\rm Mpc}^{-1})$, as: \begin{equation} \Delta \alpha \propto \frac{1}{\sqrt{V_{\rm survey}}} \left( 1 + \frac{1}{\bar{n}P}\right). \end{equation} The constant of proportionality can be set for a particular galaxy sample using the results given in Table 2. The WiggleZ survey is currently underway and will measure redshifts for 400,000 blue galaxies over 1000 square degrees in the redshift interval $z=0.5-1.0$ (Glazebrook et~al. 2007). For the cosmological parameters adopted in this paper, this gives a comoving volume of $1.13 \,h^{-3}\,{\rm Gpc}^{3}$. Using the blue colour selected sample or the large equivalent width sample from Table 2, and assuming $\bar{n}P \sim 1$ for WiggleZ galaxies, somewhat higher than we find in our simulation, we estimate that this survey will measure the distance scale to an accuracy of $\Delta \alpha \sim 2\% $, which is similar to that claimed by Glazebrook et~al. using linear perturbation theory. The WFMOS survey has been proposed to motivate the construction of a new spectrograph for the Subaru telescope (Glazebrook et~al. 2005). This will target galaxies with a space density of $\bar{n} = 5 \times 10^{-4} \,h^{3}\,{\rm Mpc}^{-3}$ in the redshift interval $z=0.5-1.3$ over 2000 square degrees, covering a volume of $4.4 \,h^{-3}\, {\rm Gpc}^{3}$. (There is also a WFMOS survey which will target $z=3$ galaxies but over a much smaller solid angle.) Using sample A from Table~2, and adopting $\bar{n}P = 1$, we obtain an estimated error of $\Delta \alpha = 0.83\% $, again in good agreement with Glazebrook et~al. Photometric surveys can generally cover a larger solid angle than spectroscopic surveys down to a fainter magnitude limit. The fainter magnitude limit results in a higher median redshift and a broader redshift distribution for the survey galaxies, which means that a larger volume is covered. However, the limited accuracy of photometric redshift estimates means that in practice Fourier modes are lost and the effective volume of the survey is greatly reduced. Blake \& Bridle (2005) estimate that the factor by which the survey volume is reduced is $\approx 12 \left( \delta z /(1+z)/0.03 \right)$, where $\delta z / (1+z)$ is the error in the photometric redshifts. The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) survey will map $3 \pi$ steradians of the sky (http://pan-starrs.ifa.hawaii.edu/public/home.html). Cai et al. 
(2007, in preparation) show that the median redshift of the $3 \pi$ survey will be $z \approx 0.5$, with a tail extending to $z \approx 1.2$. The volume of the survey, assuming that $20,000$ square degrees cover low-extinction parts of the sky and give high quality clustering measurements, is around $41 \,h^{-3}\,{\rm Gpc}^{3}$. Taking sample A from Table 2, and setting $\bar{n}P \gg 1$, as appropriate for the relatively high space density of galaxies in a photometric sample, and allowing for the reduction in the effective volume caused by a photometric redshift error of $\delta z/(1+z) = 0.03$, gives a forecast error on the oscillation scale of $\Delta \alpha \sim 0.5\%$. In the more likely event that the photometric redshift errors are twice as large, $\delta z/(1+z) \sim 0.06$, this figure increases to $\Delta \alpha \sim 0.7\% $. Remembering the crude conversion $\Delta w \approx 4 \Delta \alpha$ from Section 4, this means that the next generation of galaxy surveys is unlikely to deliver 1\% errors on a constant equation of state from BAO measurements used in isolation from other cosmological data. A survey with almost an order of magnitude more effective volume than Pan-STARRS will be needed to achieve this target. This will require an all-sky, spectroscopic galaxy redshift survey, such as the {\tt SPACE} mission being proposed to ESA's Cosmic Vision call. {\tt SPACE} will measure redshifts for galaxies in the interval $0.5 < z < 2$, covering around $150 h^{-3}{\rm Gpc}^{3}$. Extrapolating from Sample A, we forecast that an error in the oscillation scale of $\Delta \alpha \sim 0.15 \%$ could be achieved with {\tt SPACE}. In the case of the pessimistic translation to an error on $w$ considered in Section 4, this corresponds to $\Delta w \sim 0.6\%$; in the optimistic scenario, we expect a constraint of $\Delta w \sim 0.23\%$. \section{Conclusions} In the next five to ten years, several proposed galaxy surveys will allow high precision measurements of the clustering of galaxies on the scale of the acoustic oscillations at intermediate and high redshifts. Both photometric and spectroscopic surveys are planned, which will cover volumes up to tens of cubic gigaparsecs and will contain hundreds of thousands to hundreds of millions of galaxies. There is a clear need to ensure that theoretical predictions develop apace with sufficient accuracy and realism to allow such datasets to be fully exploited and to uncover any possible systematic errors in this cosmological test of the nature of the dark energy. Early theoretical work in this area used linear perturbation theory (\citealt{BG2003,HuHaiman2003,GB2005}). Recently, more physical calculations have been carried out using N-body simulations with cubes of side $500-1100 \,h^{-1}\,$Mpc (\citealt{Seo2003, Seo2005, Schulz2006, Huff2006, Seo2007}). In this paper, we have improved upon previous modelling work in three ways. Firstly, we have used a simulation volume comparable to the largest of the currently proposed spectroscopic surveys. This allows us to accurately follow the growth of density fluctuations on ultra-large scales in excess of $100 \,h^{-1}\,$Mpc, the scales of interest for the acoustic oscillations, which can only be followed approximately in smaller computational volumes. In particular, a large volume is necessary to obtain accurate predictions for bulk flows, which are sensitive to the power spectrum at low wavenumbers. 
The only published work with a larger simulation volume used the Hubble Volume simulation (\citealt{Angulo2005, Koehler2006}). The Hubble Volume has a larger particle mass than the {\tt BASICC}, which restricted these studies to consider either cluster mass dark matter haloes (Angulo et~al. 2005) or a simple biasing scheme to add galaxies (\citealt{Koehler2006}). Secondly, through the use of a large number of particles, we are able to resolve the majority of the haloes which are likely to host the galaxies which will be observed in the forthcoming surveys. Thirdly, we use a semi-analytic galaxy formation model to populate the simulation with galaxies. Unlike other studies which use phenomenological biasing schemes or the halo occupation model to add galaxies, this allows us to predict the shape and amplitude of the galaxy power spectrum and the signal-to-noise of the clustering expected for different galaxy selections. We use our N-body simulation in combination with a galaxy formation model to make the connection between the linear perturbation theory prediction for the matter power spectrum and the power spectrum of galaxies. We do this in a series of steps, starting with power spectrum of the dark matter, looking at the impact of the nonlinear growth of fluctuations and peculiar motions or redshift-space distortions, before examining the power spectrum of dark matter haloes and, finally, galaxies. A number of conclusions are reached from this study: i) The nonlinear evolution of the dark matter power spectrum is apparent even on scales larger than the sound horizon scale. Although the deviation from linear theory is only a few percent, the coupled evolution of different Fourier modes means that these scales need to be followed accurately to get the correct behaviour at higher wavenumbers. ii) The form of the distortion of the power spectrum due to peculiar motions is extremely sensitive to the type of object under consideration, being quite different for the cases of dark matter, dark haloes and galaxies. Moreover, different galaxy selections give different redshift-space distortions. iii) Galaxy bias is scale dependent and sensitive to the selection applied for wavenumbers $k > 0.15 h {\rm Mpc}^{-1}$. Eisenstein et~al. (2006) discuss a technique which attempts to reconstruct the linear density field from an observed distribution of objects. The reconstruction can reduce the damping of the higher harmonic oscillations in the power spectrum, thereby increasing the statistical significance of the acoustic scale measurement and diminishing any systematic effects caused by departures from linearity. It will be interesting to apply this method to the galaxy samples presented in this paper, to see if this approach still works at the required level in the case of biased tracers of the linear density field. We also present a new method to extract the dark energy equation of state parameter, based upon an approach put forward by \cite{Percival2007}. The method involves dividing the measured power spectrum by a smooth reference spectrum and comparing the resulting ratio to the predictions of linear perturbation theory. The algorithm has three key advances over earlier work, which can be credited to \cite{Eisenstein2005} and \cite{Percival2007}: i) The reference spectrum is derived from the measured spectrum, which avoids the need to apply major corrections to a linear theory reference. 
ii) The measured ratio is compared to a prediction generated using {\tt CAMB}, which is more accurate than assuming a parametric form for the ratio based on a Taylor expansion. iii) The linear theory ratio is modified by allowing the higher-order oscillations to be damped, which improves the fit to the measured ratio. Changing the value of the equation of state parameter is approximately equivalent to rescaling the wavenumber in the predicted power spectrum ratio; at $z=1$, a 1\% shift in wavenumber is equivalent to a $4\%$ shift in the recovered value of $w$. We explore the constraints on the dark energy equation of state using different tracers of the density field. By applying our algorithm for extracting the oscillation scale to the {\tt L-BASICC} ensemble, we have provided the most stringent test to date of usefulness of baryonic acoustic oscillations for measuring the equation of state of the dark energy. For the case of the dark matter, there is no significant bias in the recovered oscillation scale, compared with the value expected from linear perturbation theory. Within a given simulation, we find that $1\%$ deviations from the underlying length scale are possible although these are only at the 1-$\sigma$ level. Such excursions are the result of sampling variance arising from the finite volume of the computational box, which are important even in a simulation of the volume of the {\tt BASICC}. The error on the scale factor recovered from galaxy samples is larger than that found for the dark matter, reflecting the lower signal-to-noise of the galaxy power spectrum measurements. Different galaxy selections lead to variations in the clustering strength and hence in the error expected in the scale factor. Currently, the best constraints on the equation of state parameter come either from combining datasets, such as the power spectrum of galaxy clustering and measurements of the microwave background radiation (e.g. Sanchez et~al. 2006) or from the Hubble diagram of Type Ia, with priors on the flatness of the Universe and the matter density (Riess et~al. 2004). For example, Wood-Vasey et~al. (2007) combine high redshift SNe Ia from the ESSENCE Supernova Survey with the measurement of the BAO made by Eisenstein et~al. (2005), and, assuming a flat universe, constrain a constant equation of state to have $w=-1.05^{+ 0.13}_{-0.12}$(stat.)$\pm 0.11$(sys.), consistent with a cosmological constant. Possible contributions to the systematic error include the degree of dust extinction in the SNe host galaxy, evolution in the properties of SNe with redshift and local calibration effects such as a ``Hubble bubble''. We have used our simulation results to forecast the accuracy with which future galaxy surveys will use the BAO in isolation to constrain the scale of the acoustic oscillations, and under certain assumptions, $w$. We anticipate that Pan-STARRS, with accurate photometric redshifts, will have an accuracy comparable to that expected for the next generation of spectroscopic survey (WFMOS) and could potentially reduce the statistical errors on the value of $w$ by a factor of 2 compared with the current constraints. However, the target of $1\%$ random errors on $w$ using BAO measurements is beyond the grasp of any of the surveys likely to be completed or even to start within the next decade. The predictions we have presented here are idealized in a number of respects. 
Our estimate of the accuracy with which the dark energy equation of state parameter can be measured assumes that the values of the other cosmological parameters are known with infinite accuracy. We have also neglected the impact of the survey window function on the power spectrum measurement; this will be particularly important in the case of surveys which rely on photometric redshifts. In future work, we plan a number of improvements: i) Use of an even larger simulation volume, to exceed that proposed in forthcoming surveys. One caveat on our quoted error on $w$ is that some of the planned surveys will be larger than the volume of the {\tt BASICC}, and will consequently have smaller sampling fluctuations. ii) The inclusion of the evolution of clustering along the line of sight. Although we have focused on $z=1$, proposed surveys will span a broad redshift interval centred on this value. iii) The inclusion of a survey window function, mimicking the angular and radial selections, and including the impact of errors on photometric redshifts. Such calculations represent huge challenges in computational cosmology, due to the volume coverage and mass resolution required in the N-body simulations used, and the post-processing needed to include galaxies. However, such calculations are essential if the BAO approach is to be used to its full potential. \section*{Acknowledgements} We are indebted to Volker Springel for providing us with a stripped-down version of his GADGET-2 code and to the Virgo Consortium for access to their supercomputing resources. Lydia Heck provided vital system management support which made this project feasible. We thank the referee, Daniel Eisenstein, for providing a thorough and helpful report. We acknowledge useful conversations and comments on the draft from Richard Bower, Shaun Cole, Martin Crocce, Ariel Sanchez, Liang Gao, Adrian Jenkins, Volker Springel, Enrique Gaztanaga, Francisco Castander, Pablo Fosalba and Will Sutherland. This work was supported in part by the Particle Physics and Astronomy Research Council, the European Commission through the ALFA-II programme's funding of the Latin-American European Network for Astrophysics and Cosmology (LENAC), and the Royal Society, through the award of a Joint Project grant and its support of CMB. \bibliographystyle{mn2e}
1,116,691,501,212
arxiv
\section{Introduction} \IEEEPARstart{I}{n} recent years, as a kind of optical remote sensing image with high spectral resolution, hyperspectral images (HSIs) have attracted much attention because of their unique properties and the massive information they contain \cite{Li2019}. HSIs have been widely used in mineral exploration \cite{Yokoya2016}, soil salinity estimation \cite{Yang2017}, and anomaly detection \cite{Xie2019}. At present, HSI classification is one of the most active research areas in the remote sensing field. Its goal is to classify each marked pixel in the image correctly, where a pixel can be regarded as a high-dimensional vector with each entry corresponding to the spectral reflectance in a specific wavelength \cite{Li2019Overview}. Early work on HSI classification mainly focused on exploring spectral feature information and ignoring spatial information \cite{Qian2001,Melgani2004,Li2010,Zhong2012,Peng2015}, which often leads to poor generalization performance. After this, some studies began to incorporate spatial contextual information into pixel-level classifiers. Representative work includes extended morphological profiles (EMP) \cite{Benediktsson2005,Li2013}, edge-preserving filtering \cite{Kang2014} and sparse representation models \cite{Chen2011,Fang2014,Fang2017,Peng2019,Peng2017}, etc. Due to the spatial homogeneous characteristics of HSI, spectral-spatial HSI classification methods usually show better results than spectral-based methods. Notwithstanding, most of spectral-spatial methods depend on hand-crafted or shallow features, which are usually designed for specific tasks and rely on expert knowledge. This largely limits the flexibility and applicability of these methods in difficult situations. Recently, deep learning (DL) methods have been applied in HSI classification successfully. The earlier DL-based HSI classification methods based on fully connected neural networks, such as stacked autoencoders (SAEs) \cite{Chen2014} and recursive autoencoders (RAEs) \cite{Zhang2017}, can only handle one-dimensional vectors. Thus, the spatial structure information of an HSI is destroyed. The emergence of convolutional neural networks (CNNs) overcomes this defect. Several 2D CNN based architectures were proposed, including R-VCANet \cite{Pan2017} and bayesian 2D CNN \cite{Cao2018}, etc. However, a HSI has too many channels, which often causes 2D convolution kernels to be too deep, and there is also a significant increase in the number of parameters. Therefore, 3D CNN based HSI classification methods were proposed. Lee et al. \cite{lee2016} described a contextual deep CNN by applying multiple 3D local convolutional filters with different sizes to jointly exploit the spatial and spectral features of a HSI. Chen et al. \cite{chen2016} established a 3D CNN-based feature extraction model with regularization to extract the effective spectral-spatial features of HSIs. Hamida et al. \cite{Benhamida2018} proposed a new 3D deep learning method that can simultaneously process spectral and spatial information while incurring lower computation cost (i.e. floating point operations, FLOPs). Although 3D CNN can reduce the number of parameters, it still incurs more calculational consumption because it needs to traverse in depth and lacks a global view of spectral information. HybridSN, a recent work proposed by Roy et al. 
in \cite{Roy2020}, combined 2D CNN and 3D CNN, where 3D CNN was used to mine spatial and spectral information and 2D CNN integrated the information to output the prediction result. It can effectively improve classification accuracy and has a global view to integrate spectral features. However, without an end-to-end architecture, PCA was still used to compress the redundant spectral information, which adds computational cost. Although the aforementioned DL models have shown excellent HSI classification performance, they usually need a large number of training samples and network parameters, and also have higher computational cost. For HSI classification, the available labeled pixels are usually very limited due to the difficulty in collecting and cost in labeling. Moreover, the distribution of categories in these labeled data is imbalanced. Constructing robust DL models with high performance and low computational cost based on imbalanced and small sample-sized data is a huge challenge in this field. Recently, a lightweight neural network, LiteDenseNet \cite{Li2020}, has been proposed for HSI classification. Different from traditional 3D CNN architectures, LiteDenseNet has a 3D two-way dense structure and uses group convolution, which greatly reduces the number of parameters and computational cost. However, the group convolution cuts off the connection of different channels and may lead to a loss of accuracy. To solve this problem and further reduce the parameters, we propose an extremely lightweight network, LiteDepthwiseNet, for HSI classification tasks. Its main advantages are summarized as follows. \begin{enumerate} \item A 3D depthwise convolution is used to replace the group convolution. The pointwise convolution in the 3D depthwise convolution can connect all hyperspectral channels, and the corresponding network architecture has a full-channel receptive field and is more suitable for HSI classification task. Based on this new network architecture, LiteDepthwiseNet only involves a very small number of parameters and FLOPs. \item Focal loss (FL) is introduced to replace mainstream cross entropy loss (CEL) as the loss function. It increases the attention on small sample categories and difficult-to-classify samples, which helps to improve the ultimate performance of the model. \item The middle activation layer and normalization layer in the original 3D depthwise convolution network are stripped, which can enhance the linearity of the model, so as to reduce the overfitting phenomenon when there are few training samples for HSIs. \end{enumerate} The rest of this paper is arranged as follows. In Section \ref{II}, we briefly introduce the related work. The structure of the proposed LiteDepthwiseNet is detailed in Section \ref{III}. The comparison experiment results of seven algorithms on three well-known HSI datasets are reported in Section \ref{IV}. Section \ref{V} summarizes this paper. \section{Related work}\label{II} This section describes some related concepts and gives a brief introduction to LiteDenseNet, which inspires the main design of the proposed network architecture in this paper. \subsection{2D depthwise separable convolution} Different from standard convolution, the depthwise separable convolution proposed in \cite{Howard2017MobileNets} includes a depthwise convolution, i.e. a spatial convolution performed independently over each channel of an input, followed by a pointwise convolution, i.e. 
a $ 1 \times 1 $ convolution, projecting the channels output by the depthwise convolution onto a new channel space. Figs. \ref{figure1} and \ref{figure2} show a simple schematic of a standard convolution and a depthwise separable convolution, respectively. By splitting the standard convolution into two processes of depthwise convolution and pointwise convolution, the computational cost and the number of parameters of the 2D convolution can be significantly reduced under certain conditions. \begin{figure}[H] \centering \includegraphics[width=1\linewidth,height=0.32\linewidth]{dd.jpg} \caption{A standard convolution.} \label{figure1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=1\linewidth,height=0.3\linewidth]{dsc.png} \caption{A depthwise separable convolution.} \label{figure2} \end{figure} \subsection{Cross entropy loss} The CEL function is a loss function often used in classification problems. It is an important concept in information theory, where it is mainly used to measure the difference between two probability distributions. The standard CEL for classification problems is \begin{equation}\label{formula1} \text{CEL}(p,y)= \begin{cases} - \text{log}(p) & \text{if} ~ y=1 \\ - \text{log}(1-p) & \text{otherwise}. \end{cases} \end{equation} where $ p $ represents the predicted probability of a sample in the category, and $ y $, an indicator variable equal to 0 or 1, represents whether the sample is correctly classified. It can be observed that when $ y $ is $ 1 $, the closer $ p $ is to $ 1 $, the smaller the loss; and when $ y $ is $ 0 $, the closer $ p $ is to $ 0 $, the smaller the loss, which is in line with the direction of optimization. To simplify the expression, we introduce a piecewise function $ F $ as: \begin{equation}\label{formula2} F_y(t)= \begin{cases} t & \text{if} ~ y=1 \\ 1-t & \text{otherwise}. \end{cases} \end{equation} and then the standard CEL can be reformulated as: \begin{equation}\label{formula3} \text{CEL}(p,y) = - \text{log}( F_y(p) ) \end{equation} In order to handle data with imbalanced categories, a coefficient $ \alpha\in [0,1] $ is added to the standard cross-entropy loss function to balance the weights of samples of different categories, which results in the following balanced CEL (BCEL): \begin{equation}\label{formula4} \text{BCEL}(p,\alpha, y)=-F_y(\alpha) \text{log}(F_y(p)) \end{equation} \subsection{LiteDenseNet} In 2020, Li et al. \cite{Li2020} proposed a lightweight network, LiteDenseNet. To obtain receptive fields at different scales, the network uses a 3D two-way dense layer which was proposed in PeleeNet \cite{Pelee2018} and DBDA \cite{Li2020Classification}. Its schematic diagram is shown in Fig. \ref{figure3}. \begin{figure}[H] \centering \includegraphics[scale=0.4]{DoubleDenseWay.jpg} \caption{The 3D two-way dense layer in \cite{Li2020}.} \label{figure3} \end{figure} This 3D two-way dense layer comprises a top layer and a bottom layer. After performing a $ 1 \times 1 \times 1 $ convolution transformation on the input data, the top layer uses two stacked $ 3 \times 3 \times 3 $ convolutions to capture global features and obtains an output with size $ (9 \times 9 \times 97, 12) $, and the bottom layer uses a convolution kernel with size $ 3 \times 3 \times 3 $ to capture local features and obtains an output with size $ (9 \times 9 \times 97, 12) $. The outputs of the two ways are then concatenated, together with the input, to generate a 3D-cube with size $ (9 \times 9 \times 97, 48) $. 
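To make this description concrete, the two-way dense layer can be sketched in PyTorch as follows (an illustrative reimplementation rather than the released LiteDenseNet code: the intermediate channel width is an assumption, normalization and activation layers are omitted for brevity, and the quoted output size of $ (9 \times 9 \times 97, 48) $ is obtained by concatenating the 24-channel input with the two 12-channel outputs):
\begin{verbatim}
import torch
import torch.nn as nn

class TwoWayDenseLayer(nn.Module):
    """Sketch of the 3D two-way dense layer of Fig. 3 (illustrative only)."""
    def __init__(self, in_ch=24, growth=12, mid=24):
        super().__init__()
        # top way: 1x1x1 transform followed by two stacked 3x3x3 convolutions
        self.top = nn.Sequential(
            nn.Conv3d(in_ch, mid, 1, bias=False),
            nn.Conv3d(mid, growth, 3, padding=1, bias=False),
            nn.Conv3d(growth, growth, 3, padding=1, bias=False))
        # bottom way: 1x1x1 transform followed by a single 3x3x3 convolution
        self.bottom = nn.Sequential(
            nn.Conv3d(in_ch, mid, 1, bias=False),
            nn.Conv3d(mid, growth, 3, padding=1, bias=False))

    def forward(self, x):
        # dense connectivity: the input is concatenated with both ways,
        # e.g. 24 + 12 + 12 = 48 channels for the sizes quoted above
        return torch.cat([x, self.top(x), self.bottom(x)], dim=1)
\end{verbatim}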
However, the 3D convolutional block in the original 3D two-way dense layer is computationally intensive. In order to reduce the number of parameters and the computational cost, LiteDenseNet uses group convolution instead of normal convolution. The group convolution given in \cite{Li2020} is shown in Fig. \ref{figure4}. \begin{figure}[H] \centering \includegraphics[scale=0.14]{GroupConvolution3d.jpg} \caption{Group Convolution.} \label{figure4} \end{figure} Given input data with size $ h \times w \times l$ and $ c_F $ channels, each kernel of a normal convolution layer spans all $ c_F $ input channels, whereas the group convolution divides the original convolution kernels and the input into 3 groups along the channel dimension. As a result, the input is divided into three groups of size $ h \times w \times l $ with $ c_F/3 $ channels each, and the three groups are fed separately into normal convolutions operating on $ c_F/3 $ channels. The size of the data handled by each convolution is thus reduced from $ h \times w \times l \times c_F $ to $ h \times w \times l \times (c_F/3)$. In this way, group convolution significantly reduces the number of parameters and the computational cost while maintaining the size of the input and the output. \section{The proposed LiteDepthwiseNet}\label{III} In this section, we describe the proposed LiteDepthwiseNet in detail. \subsection{Modified 3D Depthwise Convolution} LiteDenseNet reduces the parameters and computational cost by using group convolution to replace standard convolution. However, group convolution permanently cuts off some connections between channels, which may reduce the classification performance to some extent. Moreover, for HSI classification, there is still room to reduce the parameters and computational cost further. We therefore consider replacing group convolution with depthwise separable convolution, in which the pointwise convolution can connect all the channels. At the same time, HSIs have many channels, so a 2D depthwise separable convolution would still have too many parameters. A good choice is to use the 3D depthwise convolution originally proposed in \cite{Ye20193D} for RGB images. At this point, a new problem appears: the limited number of training samples for HSIs easily causes overfitting in a model with too strong a nonlinear fitting capacity. We therefore propose a modified 3D depthwise convolution for HSI classification. The major adjustment is that the middle activation layer and normalization layer of the original 3D depthwise convolution network are stripped, which enhances the linearity of the model and reduces the overfitting phenomenon. Moreover, the number of parameters and the computational cost are greatly reduced. Structural comparison diagrams of group convolution, 3D depthwise convolution and the modified 3D depthwise convolution are shown in Figs. \ref{figure5}, \ref{figure6}, and \ref{figure7}, respectively. 
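As an illustration, the modified block can be written in a few lines of PyTorch (an illustrative sketch rather than the exact implementation; the kernel size and channel numbers are placeholders, and, as described above, no normalization or activation is inserted between the depthwise and pointwise convolutions):
\begin{verbatim}
import torch.nn as nn

class Modified3DDepthwise(nn.Module):
    """Depthwise 3D convolution followed directly by a pointwise convolution;
    BN and ReLU are applied only after the pointwise step."""
    def __init__(self, in_ch, out_ch, kernel=(3, 3, 3)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)
        # depthwise: one 3D filter per input channel (groups = in_ch)
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, padding=pad,
                                   groups=in_ch, bias=False)
        # pointwise: 1x1x1 convolution that reconnects all channels
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.depthwise(x)  # no BN/ReLU here: the middle layers are stripped
        x = self.pointwise(x)
        return self.act(self.bn(x))
\end{verbatim}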
\begin{figure}[H] \centering \includegraphics[scale=0.4]{group.png} \caption{Group convolution with 3 groups.} \label{figure5} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{3D_depthwise_convolution.png} \caption{3D depthwise convolution.} \label{figure6} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{modified_3D_depthwise_convolution.png} \caption{Modified 3D depthwise convolution.} \label{figure7} \end{figure} \subsection{Focal Loss} Since most HSIs have imbalanced data categories \cite{Li2019Overview}, using the standard CEL function for training will lead to low classification accuracy for the small sample categories. The BCEL function is commonly used to deal with this situation. However, the BCEL function is also likely to focus on the easy-to-classify samples. In order to improve the classification performance, the model should focus on both small-category samples and difficult-to-classify samples. For this purpose, a focal loss is proposed in \cite{Lin2017Focal} and expressed as \begin{equation}\label{formula5} {\rm FL}(p, \alpha, \gamma, y) = - F_y(\alpha) (1 - F_y(p))^\gamma \log(F_y(p)), \end{equation} where $\alpha\in [0,1], \gamma\geq 0.$ From formula (\ref{formula5}), it is easy to know the following three points. Firstly, similar to the BCEL, parameter $\alpha $ is introduced to reflect the imbalanced problem of different categories of samples, which helps to improve the accuracy of the final model. Secondly, the modulating factor $ (1 - F_y(p))^\gamma $ is added to adjust the weights of easy-to-classify samples and difficult-to-classify samples. When a sample is misclassified and $ F_y(p) $ is small, then $ (1 - F_y(p))^\gamma $ is close to 1, and the loss is not affected. However, if $ F_y(p) \to 1 $, which indicates that the classification prediction of the sample is better, $ (1 - F_y(p))^\gamma $ is 0, and the loss is down-weighted. Lastly, the focusing parameter $ \gamma $ smoothly adjusts the down-weighted rate of easy-to-classify samples. When $\gamma = 0 $, the FL is equivalent to the BCEL and with the increase of $ \gamma $, the effect of the modulating factor also increases. Of these, $ \alpha $ can be set by category frequency, or it can be obtained as a hyperparameter for cross-validation. \subsection{The Framework of the LiteDepthwiseNet} The overall framework of the proposed LiteDepthwiseNet and the specific parameters in an example are shown in Fig. \ref{figure8} and Table \ref{table1}, respectively. \begin{figure}[p] \centering \includegraphics[scale=0.3]{network.png} \caption{The LiteDepthwiseNet architecture} \label{figure8} \end{figure} Specifically, in our network framework, batch normalization (BN) and ReLU activation (ReLU) are added after each point convolution and ordinary convolution. For the sake of brevity, it will not be specified in the following description. When an input, a 3D-cube of size $ (9 \times 9 \times 200,1) $, is fed into a 3D-CNN layer with size $ (1 \times 1 \times 7,24) $, we get an output with size of $ (9 \times 9 \times 97,24) $. Next we put the output into two branches respectively. The right branch (Branch 1) performs group convolution on the input and divides the input into three groups, where each group is a 3D-cube with size of $ (9 \times 9 \times 97,8) $. Then, we put them into a 3D-CNN layer with size of $ (1 \times 1 \times 1, 16) $. 
Up to this point, we have retained the original design of LiteDenseNet, because modifying the first and second layers to 3D depthwise convolution would increase the number of parameters. After this, the output results of the three 3D-CNNs are stacked along the channel dimension, and we obtain an output of size $ (9 \times 9 \times 97,48) $. The subsequent modules are all composed of 3D depthwise convolutions, which divide $ (9 \times 9 \times 97,48) $ into 48 independent 3D blocks along the channel dimension and use 48 3D-CNN filters with the size of $ (3 \times 3 \times 3,1) $ for a one-to-one convolution operation, and the size of the output is $ (9 \times 9 \times 97,48) $. We feed this into the 3D-CNN with the size of $ (1 \times 1 \times 1,12) $ for a point-by-point linear combination, which makes up for the defect of group convolution. Then, we feed the result into another 3D depthwise convolution, where the numbers of input and output channels are both 12. At this point, the design of the right branch is completed. The left branch (Branch 2) feeds the input obtained by the group convolution into a 3D depthwise convolution. Finally, we concatenate the outputs of the three paths, that is, the input that has not entered any branch with the size of $ (9 \times 9 \times 97,24) $, the output of the right branch with the size of $ (9 \times 9 \times 97,12) $, and the output of the left branch with the size of $ (9 \times 9 \times 97,12) $, to obtain the output with the size of $ (9 \times 9 \times 97,48) $, which is fed into the last 3D depthwise convolution, and the final result is obtained through a global average pooling layer and a fully connected layer. \begin{table}[!ht] \caption{THE IMPLEMENTATION DETAILS ON AN EXAMPLE OF LITEDEPTHWISENET} \scalebox{0.75}{ \begin{tabular}{ccccc} \toprule \toprule \multicolumn{2}{c}{Layer name} & Kernel Size & Group & Output Size \\ \hline \multicolumn{2}{c}{Input} & - & - & $(9\times9\times200,1)$ \\ \hline \multicolumn{2}{c}{3D-CNN+BN+ReLU} & $(1\times1\times7)$ & 1 & $(9\times9\times97,24)$ \\ \hline \multirow{3}*{Branch 1} & \tabincell{c}{3D-CNN+Concatenate\\+BN+ReLU} & $(1\times1\times1)$ & 3 & $(9\times9\times97,48)$ \\ \cline{2-5} & 3D-CNN+Concatenate & $(3\times3\times3)$ & 48 & $(9\times9\times97,48)$ \\ \cline{2-5} & 3D-CNN+BN+ReLU & $(1\times1\times1)$ & 1 & $(9\times9\times97,12)$ \\ \hline \multirow{5}*{Branch 2} & \tabincell{c}{3D-CNN+Concatenate\\+BN+ReLU} & $(1\times1\times1)$ & 3 & $(9\times9\times97,48)$ \\ \cline{2-5} & 3D-CNN+Concatenate & $(3\times3\times3)$ & 48 & $(9\times9\times97,48)$ \\ \cline{2-5} & 3D-CNN+BN+ReLU & $(1\times1\times1)$ & 1 & $(9\times9\times97,12)$ \\ \cline{2-5} & 3D-CNN+Concatenate & $(3\times3\times3)$ & 12 & $(9\times9\times97,12)$ \\ \cline{2-5} & 3D-CNN+BN+ReLU & $(1\times1\times1)$ & 1 & $(9\times9\times97,12)$ \\ \hline \multicolumn{2}{c}{Concatenate} & - & - & $(9\times9\times97,48)$ \\ \hline \multicolumn{2}{c}{3D-CNN+Concatenate} & $(3\times3\times97)$ & 48 & $(9\times9\times1,48)$ \\ \hline \multicolumn{2}{c}{3D-CNN+BN+ReLU} & $(1\times1\times1)$ & 1 & $(9\times9\times1,60)$ \\ \hline \multicolumn{2}{c}{Global Average Pooling} & - & - & $(1\times60)$ \\ \hline \multicolumn{2}{c}{Fully Connected} & - & - & $(1\times16)$ \\ \bottomrule \bottomrule \end{tabular}} \label{table1} \end{table} \subsection{Comparing Computational Cost and Number of Parameters} Under certain conditions, our modified 3D depthwise convolution requires fewer parameters and has a lower computational cost than group convolution. 
For example, taking $ (h \times w \times l) $ and $ c_F $ as the size and the number of channels of the input, $ (h_F \times w_F \times l_F) $ as the convolution kernel size, and $ c_G $ as the number of channels of the output, the number of parameters of the group convolution (with 3 groups) is given by \begin{equation*} (h_F \times w_F \times l_F \times (c_F/3)) \times (c_G/3) \times 3, \end{equation*} and the amount of computation of the group convolution is calculated as \begin{equation*} (h \times w \times l \times (c_F/3) \times h_F \times w_F \times l_F \times (c_G/3)) \times 3. \end{equation*} In contrast, the number of parameters of the 3D depthwise convolution is \begin{equation*} h_F \times w_F \times l_F \times c_F \times 1 + 1 \times 1 \times 1 \times c_F \times c_G, \end{equation*} and the amount of computation of the 3D depthwise convolution is \begin{equation*} h \cdot w \cdot l \cdot c_F \cdot h_F \cdot w_F \cdot l_F + h \cdot w \cdot l \cdot c_F \cdot c_G. \end{equation*} For some special cases, such as a $ 1\times1\times1 $ convolution kernel, the 3D depthwise convolution increases the number of parameters and the computation cost. Table \ref{table2}, Fig. \ref{figure9} and Fig. \ref{figure10} compare the number of parameters and the computational cost of our proposed LiteDepthwiseNet and five other deep learning algorithms when the input size is $ 25 \times 25 \times 200 $. Note that the HybridSN method requires PCA for preprocessing; for the convenience of comparison, this operation is not included in the computational overhead. \begin{table}[H] \centering \caption{THE COMPARISON OF SIX ALGORITHMS} \scalebox{0.9}{ \begin{tabular}{c|c|c|c} \toprule \toprule Network & Input size & FLOPs & Parameters \\ \hline ChenEtAlNet & $ 25 \times 25 \times 200 $ & 6.030G & 1.120M \\ HybridSN & $ 25 \times 25 \times 200 $ & 2.483G & 8.256M \\ HamidaEtAlNet & $ 25 \times 25 \times 200 $ & 1.148G & 6.452M \\ LeeEtAlNet & $ 25 \times 25 \times 200 $ & 37.819M & 5.532M \\ LiteDenseNet & $ 25 \times 25 \times 200 $ & 1.316G & 852.309k \\ LiteDepthwiseNet & $ 25 \times 25 \times 200 $ & {\bf 369.331M} & {\bf 51.616k} \\ \bottomrule \bottomrule \end{tabular}} \label{table2} \end{table} \begin{figure}[!t] \centering \includegraphics[scale=0.3]{Parameters.jpg} \caption{Parameters of each algorithm} \label{figure9} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=0.3]{FLOPs.jpg} \caption{FLOPs of each algorithm} \label{figure10} \end{figure} We can see that the number of parameters of the proposed LiteDepthwiseNet is much smaller than that of the other five algorithms. In particular, it is 93.94\% smaller than that of LiteDenseNet. Moreover, the computational cost of LiteDepthwiseNet is 71.91\% lower than that of LiteDenseNet. Note that LeeEtAlNet has the lowest computational cost, but the experimental results reported later show that its classification accuracy is far lower than ours. \section{EXPERIMENTAL RESULTS}\label{IV} In this section, we perform experiments on three widely used hyperspectral datasets to compare the performance of the proposed LiteDepthwiseNet with six other methods, namely, LiteDenseNet \cite{Li2020}, LeeEtAlNet \cite{lee2016}, HamidaEtAlNet \cite{Benhamida2018}, HybridSN \cite{Roy2020}, ChenEtAlNet \cite{chen2016} and the classic SVM. Except for LiteDenseNet, the other algorithms are tested using their publicly recognized open source codes or the codes provided by the authors. 
Since LiteDenseNet encapsulates its code, we use our own reproduced code with the balanced cross-entropy loss for the experiments. All the experiments are implemented on Ubuntu 18.04.3 LTS, CPU: i9 9900k, GPU: 2x2080ti, RAM: 64 GB, GPU memory: 22GB, Python: 3.6.1, Torch: 1.3.1. Three quantitative indicators, overall accuracy (OA), average accuracy (AA) and the $\kappa$ coefficient (Kappa), are used to evaluate the performance of each method. \subsection{Data Description} The following three open HSI datasets are used for the experiments in this paper. They can be obtained from \url{http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes}. \textbf{Indian Pines (IP):} This dataset was acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) sensor in north-western Indiana. The size of the scene is $ 145 \times 145 \times 200 $ with a spatial resolution of 20 m per pixel and spectral coverage ranging from 0.4 to 2.5$ \mu m $. The available ground truth (GT) is divided into 16 classes of vegetation. \textbf{University of Pavia (UP):} This dataset captures an urban area surrounding the University of Pavia, Italy, and was collected by the Reflective Optics Imaging Spectrometer (ROSIS-03) sensor in Northern Italy in 2001. The size of the scene is $ 610 \times 340 \times 103 $ with a spatial resolution of 1.3 m per pixel and spectral coverage ranging from 0.43 to 0.86$ \mu m $. The ground truth is divided into 9 urban land-cover classes. \textbf{Pavia Center (PC):} This dataset was also acquired by the Reflective Optics Imaging Spectrometer (ROSIS-03) sensor. The size of the scene is $ 1096 \times 715 \times 102 $ with a spatial resolution of 1.3 m per pixel and spectral coverage ranging from 0.43 to 0.86$ \mu m $. The available ground truth is divided into 9 classes. In the follow-up experiments, the \textbf{UP} and \textbf{PC} datasets are divided into training, validation and testing sets, and the ratio of training, validation and testing samples within each class is the same. Here, we choose 0.5\% of the \textbf{UP} data and 0.1\% of the \textbf{PC} data for training, and the numbers of samples in the training and validation sets of the \textbf{UP} and \textbf{PC} datasets are the same. For the \textbf{IP} data, however, such a low training ratio would lead to the absence of training samples in some categories. To ensure that there are at least 5 samples in each category, we use 5\% of the \textbf{IP} data for training and do not set aside a validation set. The detailed division of the three datasets is listed in Table \ref{table3}, Table \ref{table4} and Table \ref{table5}, respectively. \begin{table}[H] \centering \caption{THE NUMBER OF TRAINING, VALIDATION AND TESTING SAMPLES FOR EACH CATEGORY OF THE \textbf{IP} DATASET.} \scalebox{0.74}{ \begin{tabular}{c|c|c|c|c|c} \toprule \toprule No.
& Class & Total number & Training & Validation & Testing \\ \hline 1 & Alfalfa & 46 & 5 & - & 41 \\ 2 & Corn-notill & 1428 & 71 & - & 1357 \\ 3 & Corn-mintill & 830 & 41 & - & 789 \\ 4 & Corn & 237 & 12 & - & 225 \\ 5 & Grass-pasture & 483 & 24 & - & 459 \\ 6 & Grass-trees & 730 & 37 & - & 693 \\ 7 & Grass-pasture-mowed & 28 & 5 & - & 23 \\ 8 & Hay-windrowed & 478 & 24 & - & 454 \\ 9 & Oats & 20 & 5 & - & 15 \\ 10 & Soybean-notill & 972 & 49 & - & 923 \\ 11 & Soybean-mintill & 2455 & 109 & - & 2346 \\ 12 & Soybean-clean & 593 & 30 & - & 563 \\ 13 & Wheat & 205 & 12 & - & 193 \\ 14 & Woods & 1265 & 63 & - & 1202 \\ 15 & Buildings-Grass-Trees & 386 & 19 & - & 367 \\ 16 & Stone-Steel-Towers & 93 & 6 & - & 87 \\ \hline & Total & 10249 & 512 & - & 9737 \\ \bottomrule \bottomrule \end{tabular}} \label{table3} \end{table} \begin{table}[H] \centering \caption{THE NUMBER OF TRAINING, VALIDATION AND TESTING SAMPLES FOR EACH CATEGORY OF THE \textbf{UP} DATASET.} \scalebox{0.75}{ \begin{tabular}{c|c|c|c|c|c} \toprule \toprule No. & Class & Total number & Training & Validation & Testing \\ \hline 1 & Asphalt & 6631 & 33 & 33 & 6565 \\ 2 & Meadows & 18649 & 93 & 93 & 18463 \\ 3 & Gravel & 2099 & 10 & 11 & 2078 \\ 4 & Trees & 3064 & 15 & 16 & 3033 \\ 5 & Painted metal sheets & 1345 & 7 & 6 & 1332 \\ 6 & Bare Soil & 5029 & 25 & 25 & 4979 \\ 7 & Bitumen & 1330 & 7 & 6 & 1317 \\ 8 & Self-Blocking Bricks & 3682 & 19 & 19 & 3644 \\ 9 & Shadows & 947 & 5 & 5 & 937 \\ \hline & Total & 42776 & 214 & 214 & 42348 \\ \bottomrule \bottomrule \end{tabular}} \label{table4} \end{table} \begin{table}[H] \centering \caption{THE NUMBER OF TRAINING, VALIDATION AND TESTING SAMPLES FOR EACH CATEGORY OF THE \textbf{PC} DATASET.} \scalebox{0.75}{ \begin{tabular}{c|c|c|c|c|c} \toprule \toprule No. & Class & Total number & Training & Validation & Testing \\ \hline 1 & Water & 65971 & 65 & 65 & 65841 \\ 2 & Trees & 7598 & 7 & 7 & 7584 \\ 3 & Asphalt & 3090 & 3 & 3 & 3084 \\ 4 & Self-Blocking Bricks & 2685 & 3 & 3 & 2679 \\ 5 & Bitumen & 6584 & 6 & 6 & 6572 \\ 6 & Tiles & 9248 & 9 & 9 & 9230 \\ 7 & Shadows & 7287 & 7 & 7 & 7273 \\ 8 & Meadows & 42826 & 42 & 42 & 42742 \\ 9 & Bare Soil & 2863 & 3 & 3 & 2857 \\ \hline & Total & 148152 & 145 & 145 & 147862 \\ \bottomrule \bottomrule \end{tabular}} \label{table5} \end{table} \subsection{Classification Maps and Categorized Results} \subsubsection{Classification Maps and Categorized Results for the \textbf{IP} Dataset} The results of using different algorithms to classify the \textbf{IP} dataset are shown in Table \ref{table6}, and the classification maps of the different algorithms and the GT categories are shown in Fig. \ref{figure11}. The main characteristics of the IP data are the small amount of data and the imbalanced class distribution. In particular, the number of samples in categories 1, 7, 9 and 16 is less than 100, far less than that in the other categories. Using the training data in Table \ref{table3}, the proposed algorithm performs best in OA and Kappa compared with the other six algorithms. On most categories, LiteDepthwiseNet achieves results similar to LiteDenseNet with only 6.06\% of the parameters and 28.09\% of the FLOPs. In the small-sample categories, the two lightweight networks show good classification properties compared with the other four large networks. For the 9th category, our proposed network performs relatively poorly, while HybridSN and LiteDenseNet show $ 100 \% $ accuracy. It is worth noting that the overall performance of HybridSN is not satisfactory.
LeeEtAlNet, the only one that is less computationally expensive than our proposed method, is already unacceptable in terms of accuracy. \begin{table}[!t] \centering \caption{THE CATEGORIZED RESULTS FOR THE \textbf{IP} DATASET USING $ 5 \% $ TRAINING SAMPLES} \scalebox{0.45}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \toprule \toprule Class & Color & SVM & ChenEtAlNet & HybridSN & HamidaEtAlNet & LeeEtAlNet & LiteDenseNet & LiteDepthwiseNet \\ \hline 1 & \card{myred} & 9.09 & 31.82 & 34.09 & 0.00 & 0.00 & \textbf{100.00} & \textbf{100.00} \\ 2 & \card{mygreen} & 73.03 & 34.83 & 89.17 & 0.00 & 27.93 & 91.97 & \textbf{97.42} \\ 3 & \card{myblue} & 55.00 & 48.81 & 78.83 & 7.10 & 0.00 & \textbf{95.31} & 94.68 \\ 4 & \card{myyellow} & 32.00 & 45.24 & 61.78 & 0.00 & 36.89 & \textbf{98.67} & 98.22 \\ 5 & \card{mypurple} & 89.11 & 46.72 & 79.52 & 0.00 & 0.00 & \textbf{88.89} & \textbf{88.89} \\ 6 & \card{mycyan} & 95.96 & 92.58 & 98.56 & 96.54 & 91.92 & \textbf{99.28} & \textbf{99.28} \\ 7 & \card{mybrown} & 59.26 & 32.00 & 11.11 & 0.00 & 0.00 & \textbf{100.00} & \textbf{100.00} \\ 8 & \card{mygrassgreen} & 99.56 & 86.04 & 96.26 & 92.95 & 95.37 & \textbf{99.78} & \textbf{99.78} \\ 9 & \card{mybluepurple} & 15.79 & 17.65 & \textbf{100.00} & 0.00 & 0.00 & \textbf{100.00} & 86.67 \\ 10 & \card{mycrimson} & 62.19 & 72.46 & 79.52 & 0.00 & 0.00 & \textbf{92.20} & 90.68 \\ 11 & \card{mylightgreen} & 75.17 & 81.49 & 94.85 & 94.60 & 84.18 & 96.12 & \textbf{96.59} \\ 12 & \card{mydarkblue} & 49.02 & 29.54 & 82.77 & 0.00 & 0.00 & 95.74 & \textbf{95.74} \\ 13 & \card{mydarkbrown} & 93.33 & 84.10 & 79.49 & 1.54 & 0.00 & \textbf{96.89} & 95.34 \\ 14 & \card{myblackishgreen} & 93.18 & 78.39 & 98.17 & 99.42 & 96.42 & 96.34 & \textbf{99.58} \\ 15 & \card{mydeeppurple} & 52.59 & 51.39 & 68.94 & 0.27 & 1.91 & \textbf{97.55} & 95.37 \\ 16 & \card{mypink} & 94.32 & 6.90 & 50.00 & 0.00 & 0.00 & 98.85 & \textbf{100.00} \\ \hline OA $ \times 100 $ & & 74.22 & 66.00 & 87.68 & 46.75 & 47.87 & 95.35 & \textbf{96.29} \\ AA $ \times 100 $ & & 61.68 & 52.52 & 75.19 & 23.08 & 25.57 & \textbf{96.72} & 96.14 \\ Kappa $ \times 100 $ & & 70.50 & 59.39 & 85.89 & 35.60 & 37.60 & 94.70 & \textbf{95.77} \\ \hline FLOPs & & & 6.030G & 2.483G & 1.148G & \textbf{37.819M} & 1.316G & \textbf{369.331M} \\ Parameters & & & 1.120M & 8.256M & 6.452M & 5.532M & \textbf{852.309k} & \textbf{51.616k} \\ \bottomrule \bottomrule \end{tabular}} \label{table6} \end{table} \begin{figure}[!t] \centering \subfloat[False color image]{ \includegraphics[width=2.5cm]{IP_FC.jpg} } \hfil \subfloat[GT]{ \includegraphics[width=2.5cm]{IP_GT.jpg} } \hfil \subfloat[SVM]{ \includegraphics[width=2.5cm]{IP_SVM.jpg} } \hfil \subfloat[ChenEtAlNet]{ \includegraphics[width=2.5cm]{IP_ChenEtAl.jpg} } \hfil \subfloat[HybridSN]{ \includegraphics[width=2.5cm]{IP_HybridSN.jpg} } \subfloat[HamidaEtAlNet]{ \includegraphics[width=2.5cm]{IP_HamidaEtAl.jpg} } \hfil \subfloat[LeeEtAlNet]{ \includegraphics[width=2.5cm]{IP_LiEtAl.jpg} } \hfil \subfloat[LiteDenseNet]{ \includegraphics[width=2.5cm]{IP_LiteDenseNet.jpg} } \hfil \subfloat[LiteDepthwiseNet]{ \includegraphics[width=2.5cm]{IP_Proposed.jpg} } \centering \caption{Classification maps for the \textbf{IP} dataset using $ 5 \% $ of training samples. (a) False-color image. (b) GT. 
(c)-(i) Classification maps obtained by the different algorithms.} \label{figure11} \end{figure} \subsubsection{Classification Maps and Categorized Results for the \textbf{UP} Dataset} The results of using different algorithms to classify the \textbf{UP} dataset are shown in Table \ref{table7}, and the classification maps of the different algorithms and the ground truth are shown in Fig. \ref{figure12}. The main characteristics of this dataset are the large amount of data and the balanced distribution of categories. In the case of using only 0.5\% of the training samples, our proposed method achieves state-of-the-art performance on OA, AA, and Kappa. It can be clearly seen that some large networks that performed well in the past do not perform well at such a small training rate; for example, HybridSN only achieves an OA of $ 88.28 \% $. Our proposed LiteDepthwiseNet generates the best classification results with the fewest parameters and the least computation. \begin{figure}[!t] \centering \subfloat[False color image]{ \includegraphics[width=2.5cm]{UP_FC.jpg} } \hfil \subfloat[GT]{ \includegraphics[width=2.5cm]{UP_GT.jpg} } \hfil \subfloat[SVM]{ \includegraphics[width=2.5cm]{UP_SVM.jpg} } \hfil \subfloat[ChenEtAlNet]{ \includegraphics[width=2.5cm]{UP_ChenEtAl.jpg} } \hfil \subfloat[HybridSN]{ \includegraphics[width=2.5cm]{UP_HybridSN.jpg} } \hfil \subfloat[HamidaEtAlNet]{ \includegraphics[width=2.5cm]{UP_HamidaEtAl.jpg} } \hfil \subfloat[LeeEtAlNet]{ \includegraphics[width=2.5cm]{UP_LiEtAl.jpg} } \hfil \subfloat[LiteDenseNet]{ \includegraphics[width=2.5cm]{UP_LiteDenseNet.jpg} } \hfil \subfloat[LiteDepthwiseNet]{ \includegraphics[width=2.5cm]{UP_Proposed.jpg} } \centering \caption{Classification maps for the \textbf{UP} dataset using $ 0.5 \% $ of training samples. (a) False-color image. (b) Ground truth (GT).
(c)-(i) Classification maps obtained by the different algorithms.} \label{figure12} \end{figure} \begin{table}[!t] \centering \caption{THE CATEGORIZED RESULTS FOR THE \textbf{UP} DATASET USING $ 0.5 \% $ TRAINING SAMPLES} \scalebox{0.45}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \toprule \toprule Class & Color & SVM & ChenEtAlNet & HybridSN & HamidaEtAlNet & LeeEtAlNet & LiteDenseNet & LiteDepthwiseNet \\ \hline 1 & \card{myred} & 90.77 & 94.41 & 92.73 & 96.60 & 87.24 & 93.15 & \textbf{97.15} \\ 2 & \card{mygreen} & 98.23 & 68.13 & 98.21 & 90.25 & 98.42 & 99.21 & \textbf{99.77} \\ 3 & \card{myblue} & 51.97 & 0.00 & 78.36 & 2.65 & 7.51 & \textbf{84.84} & 78.10 \\ 4 & \card{myyellow} & 88.20 & \textbf{98.55} & 82.06 & 76.26 & 69.04 & 95.91 & 96.21 \\ 5 & \card{mypurple} & 98.72 & 99.93 & \textbf{100.00} & 96.85 & 99.92 & 99.25 & 99.40 \\ 6 & \card{mycyan} & 62.56 & 0.58 & 97.78 & 25.29 & 2.13 & \textbf{98.35} & 96.77 \\ 7 & \card{mybrown} & 61.12 & 0.00 & 41.50 & 0.23 & 0.00 & 88.84 & \textbf{96.20} \\ 8 & \card{mygrassgreen} & 89.19 & 90.51 & 44.24 & 76.60 & 0.03 & 95.86 & \textbf{97.97} \\ 9 & \card{mybluepurple} & \textbf{100.00} & 95.52 & 73.67 & \textbf{100.00} & 99.89 & 98.51 & 98.72 \\ \hline OA $ \times 100 $ & & 88.11 & 64.50 & 88.28 & 74.75 & 67.35 & 96.60 & \textbf{97.39} \\ AA $ \times 100 $ & & 73.95 & 54.76 & 78.73 & 56.47 & 46.42 & 94.88 & \textbf{95.59} \\ Kappa $ \times 100 $ & & 83.90 & 53.90 & 84.27 & 65.40 & 53.50 & 95.50 & \textbf{96.54} \\ \hline FLOPs & & & 1.953G & 1.208G & 591.067M & \textbf{17.88M} & 664.921M & \textbf{187.531M} \\ Parameters & & & 1.068M & 6.467M & 1.975M & 1.572M & \textbf{437.157k} & \textbf{30.453k} \\ \bottomrule \bottomrule \end{tabular}} \label{table7} \end{table} \subsubsection{Classification Maps and Categorized Results for the \textbf{PC} Dataset} Table \ref{table8} and Fig. \ref{figure13} show the classification results and maps of the different algorithms on the \textbf{PC} dataset, respectively. The overall sample size of the \textbf{PC} dataset is large and relatively balanced. The first and eighth categories have the largest numbers of samples, far exceeding the other categories. Therefore, it is easy to achieve a high OA on this dataset; the difficulty lies in classifying the other seven categories, which have small sample sizes. It can be seen from Table \ref{table8} that, except for ChenEtAlNet, the other six methods achieve good performance, and the proposed network achieves the highest OA, AA and Kappa with the smallest number of parameters and a computation cost much lower than that of LiteDenseNet. LeeEtAlNet still cannot achieve acceptable accuracy on four small-sample categories. \begin{figure}[!t] \centering \subfloat[False color image]{ \includegraphics[width=2.5cm]{PC_FC.jpg} } \hfil \subfloat[GT]{ \includegraphics[width=2.5cm]{PC_GT.jpg} } \hfil \subfloat[SVM]{ \includegraphics[width=2.5cm]{PC_SVM.jpg} } \hfil \subfloat[ChenEtAlNet]{ \includegraphics[width=2.5cm]{PC_ChenEtAl.jpg} } \hfil \subfloat[HybridSN]{ \includegraphics[width=2.5cm]{PC_HybridSN.jpg} } \hfil \subfloat[HamidaEtAlNet]{ \includegraphics[width=2.5cm]{PC_HamidaEtAl.jpg} } \hfil \subfloat[LeeEtAlNet]{ \includegraphics[width=2.5cm]{PC_LiEtAl.jpg} } \hfil \subfloat[LiteDenseNet]{ \includegraphics[width=2.5cm]{PC_LiteDenseNet.jpg} } \hfil \subfloat[LiteDepthwiseNet]{ \includegraphics[width=2.5cm]{PC_Proposed.jpg} } \caption{Classification maps for the \textbf{PC} dataset using $ 0.1 \% $ of training samples.
(a) False-color image. (b) Ground truth (GT). (c)-(i) Classification maps obtained by the different algorithms.} \label{figure13} \end{figure} \begin{table}[!t] \centering \caption{THE CATEGORIZED RESULTS FOR THE \textbf{PC} DATASET USING $ 0.1 \% $ TRAINING SAMPLES} \scalebox{0.45}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \toprule \toprule Class & Color & SVM & ChenEtAlNet & HybridSN & HamidaEtAlNet & LeeEtAlNet & LiteDenseNet & LiteDepthwiseNet \\ \hline 1 & \card{myred} & 99.96 & 94.73 & \textbf{99.98} & 99.10 & 99.76 & 99.83 & 99.80 \\ 2 & \card{mygreen} & 91.15 & 0.00 & 78.80 & 96.35 & \textbf{99.91} & 94.14 & 91.82 \\ 3 & \card{myblue} & \textbf{93.30} & 0.00 & 59.73 & 67.47 & 10.57 & 84.53 & 93.09 \\ 4 & \card{myyellow} & 59.51 & 0.00 & 31.77 & 39.26 & 32.90 & 71.75 & \textbf{97.50} \\ 5 & \card{mypurple} & 96.03 & 0.00 & 70.66 & 94.34 & 83.13 & 95.80 & \textbf{97.52} \\ 6 & \card{mycyan} & 96.92 & 0.00 & 75.13 & 97.91 & \textbf{99.03} & 97.27 & 96.61 \\ 7 & \card{mybrown} & 83.35 & 0.00 & 85.98 & 79.59 & 81.42 & \textbf{91.97} & 90.87 \\ 8 & \card{mygrassgreen} & 99.38 & 0.00 & 99.96 & 99.22 & 99.95 & 99.37 & \textbf{99.98} \\ 9 & \card{mybluepurple} & \textbf{99.93} & 98.35 & 6.54 & 97.79 & 96.28 & 86.24 & 85.33 \\ \hline OA $ \times 100 $ & & 97.28 & 44.47 & 91.46 & 95.98 & 94.99 & 97.59 & \textbf{98.24} \\ AA $ \times 100 $ & & 81.95 & 9.47 & 67.62 & 77.10 & 70.29 & 91.21 & \textbf{94.73} \\ Kappa $ \times 100 $ & & 96.20 & 0.00 & 87.70 & 94.30 & 92.90 & 96.58 & \textbf{97.51} \\ \hline FLOPs & & & 1.953G & 1.95G & 591.067M & \textbf{17.880M} & 651.353M & \textbf{183.743M} \\ Parameters & & & 1.068M & 6.448M & 1.975M & 1.572M & \textbf{428.517k} & \textbf{30.021k} \\ \bottomrule \bottomrule \end{tabular}} \label{table8} \end{table} \subsection{Investigation of the Effects of Different Loss Functions} Table \ref{table9} gives the experimental results of the proposed algorithm with different loss functions on the three datasets. It can be seen that the focal loss significantly improves the performance of hyperspectral classification. Specifically, for the \textbf{IP} dataset, due to the imbalance of the sample categories, the balanced cross-entropy loss increases AA by about $ 14 \% $ compared with the standard cross-entropy loss, which benefits from increasing the loss weight of small-class samples. On top of this, the focal loss increases the weight of samples that are difficult to classify, and further improves OA, AA, and Kappa by $ 2.63 \% $, $ 0.68 \% $, and $ 3 \% $, respectively. Compared with the gain of the balanced cross-entropy loss, the performance improvement of the focal loss clearly benefits from the difficult samples. For the \textbf{UP} and \textbf{PC} datasets, there is no sample imbalance problem, so the standard cross entropy and the balanced cross entropy show similar performance, while the focal loss still brings a significant improvement. On the \textbf{UP} dataset, compared with the standard and balanced cross-entropy losses, the OA, AA, and Kappa of the focal loss increase by about $ 2.77 \% , 10.22 \% $, and $ 3.72 \% $, respectively. The overall classification of the \textbf{PC} dataset is less difficult and the room for improvement is small, but there are still increases of $0.37\%, 2.7\%$, and $0.54\%$ on the three quantitative indicators, respectively. In summary, the mining of difficult samples by the focal loss improves the performance of HSI classification to a certain extent.
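For reference, the focal loss in its standard form can be written as $\mathrm{FL}(p_t) = -\alpha_t\,(1-p_t)^{\gamma}\log(p_t)$, where $p_t$ is the predicted probability of the true class and $\alpha_t$ plays the role of the class weight of the balanced cross-entropy loss; $\gamma=0$ recovers the balanced cross entropy. The snippet below is only an illustrative PyTorch-style sketch of this computation (the function and variable names are ours, not those of the code used in the experiments above):

\begin{verbatim}
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha, gamma):
    # alpha: per-class weights (tensor of shape [num_classes]), as in the
    #        balanced cross-entropy loss
    # gamma: focusing parameter; larger gamma emphasizes difficult
    #        (low-confidence) samples
    log_p = F.log_softmax(logits, dim=1)                      # (batch, classes)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1) # log p_t
    pt = log_pt.exp()                                         # p_t
    return (-alpha[targets] * (1.0 - pt) ** gamma * log_pt).mean()
\end{verbatim}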
\begin{table}[!t] \centering \caption{THE PERFORMANCE COMPARISON OF LITEDEPTHWISENET WITH DIFFERENT LOSS FUNCTIONS} \scalebox{0.65}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \toprule \toprule \multirow{2}*{Dataset} & \multicolumn{3}{c|}{Standard Cross Entropy} & \multicolumn{3}{c|}{Balanced Cross Entropy} & \multicolumn{3}{c}{Focal Loss} \\ \cline{2-10} & OA & AA & Kappa & OA & AA & Kappa & OA & AA & Kappa \\ \hline IP & 92.13 & 81.56 & 91.03 & 93.66 & 95.46 & 92.77 & 96.29 & 96.14 & 95.77 \\ UP & 94.63 & 85.38 & 92.83 & 94.63 & 85.40 & 92.83 & 97.39 & 95.59 & 96.54 \\ PC & 97.92 & 92.02 & 97.04 & 97.91 & 91.99 & 97.03 & 98.24 & 94.73 & 97.51 \\ \bottomrule \bottomrule \end{tabular}} \label{table9} \end{table} \subsection{Investigation of the Parameter $ \gamma $ of the Focal Loss} Fig. \ref{figure14}, Fig. \ref{figure15} and Fig. \ref{figure16} show the effect of different values of $ \gamma $ in the proposed LiteDepthwiseNet on the OA, AA, and Kappa for the three datasets. The optimal value of $ \gamma $ and its degree of influence differ across datasets. The value of $ \gamma $ has a particularly significant impact on AA, because AA is particularly sensitive to the accuracy of small-sample categories (often difficult-to-classify samples), and adjusting $ \gamma $ can significantly enhance the attention paid to difficult-to-classify samples. In our experiments, the optimal values of $ \gamma $ for the three datasets \textbf{IP}, \textbf{UP}, and \textbf{PC} are 46, 8, and 7, respectively. \begin{figure}[H] \centering \includegraphics[scale=0.5]{IP_gamma.png} \caption{The OA, AA, and Kappa of LiteDepthwiseNet with different values of $ \gamma $ on the \textbf{IP} dataset.} \label{figure14} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.5]{UP_gamma.jpg} \caption{The OA, AA, and Kappa of LiteDepthwiseNet with different values of $ \gamma $ on the \textbf{UP} dataset.} \label{figure15} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.5]{PC_gamma.jpg} \caption{The OA, AA, and Kappa of LiteDepthwiseNet with different values of $ \gamma $ on the \textbf{PC} dataset.} \label{figure16} \end{figure} \section{Conclusion}\label{V} In this paper, we propose an HSI classification network, LiteDepthwiseNet, which achieves state-of-the-art performance with minimal parameters and computational cost. The network is constructed from the modified 3D depthwise convolution, which greatly reduces the model parameters and computational overhead. At the same time, this modification enhances the linearity of the model, so our model avoids overfitting on datasets with very few samples. In addition, we find that, compared with the cross-entropy loss, the focal loss further improves the performance of the model. As a lightweight network, the proposed LiteDepthwiseNet can be deployed on devices with low computational power for HSI classification. We believe that further work can be devoted to improving the generalization ability of the model, so that it also achieves good performance when the distributions of the training and validation sets differ, and when unseen data belong to categories that have already been seen; such a model could then be deployed in industry to improve productivity. \section*{Acknowledgments} The authors would like to thank Prof. D. Landgrebe for providing the Indian Pines data set, and Prof. P. Gamba for providing the University of Pavia and Pavia Center data sets. \bibliographystyle{IEEEtran}
\section{Introduction and main results} \label{s1} Let $\alpha_1< \cdots <\alpha_p$ be any $p$ distinct real numbers, and let $\rho_1,\dots ,\rho_p$ be positive numbers. The \textit{generalized Lam\'e equation}\footnote{Usually the term generalized Lam\'e equation refers to such equations in which the $\alpha_j$'s are allowed to be complex. But here we only consider the real case.} is the second order ODE given by \begin{equation} \label{lame1} S''(x) + \sum_{j=1}^p \frac{\rho_j}{x-\alpha_j} \, S'(x) = \frac{V(x)}{A(x)} \, S(x) \end{equation} where $A(x)=\prod_{j=1}^p(x-\alpha_j)$ and $V(x)$ is a polynomial of degree $p-2$. A result of Stieltjes \cite{MR1554669}, known as the Heine-Stieltjes Theorem \cite{sz75}, says that there exist exactly $k+p-2\choose k$ polynomials $V(x)$ for which \eqref{lame1} has a polynomial solution $S$ of degree $k$. These polynomial solutions are often called \textit{Stieltjes} or \textit{Heine-Stieltjes polynomials}, and the corresponding polynomials $V(x)$ are known as \textit{Van Vleck polynomials}. In this paper we will consider the case when $p=3$, in which case \eqref{lame1} is a Heun equation\footnote{The Heun equation is \eqref{lame1} with $p=3$ and where the $\rho_j$'s are allowed to be negative but must satisfy some other conditions (cf. \cite{MR1928260}).}, and the Heine-Stieltjes Theorem says that there are $k+1$ values of $\nu$ for which \eqref{lame1} has a polynomial solution of degree $k$ with $V(x)=\mu(x-\nu)$. We refer to these values of $\nu$ as the Van Vleck zeros of order $k$. The equation \eqref{lame1} was studied by Lam\'e in the 1830's in the special case $\rho_j=1/2$, $\alpha_1+\alpha_2+\alpha_3 = 0$, in connection with the separation of variables in the Laplace equation using elliptical coordinates \cite[Ch. 23]{whwa96}. The equation has since found a strikingly wide variety of other applications, from electrostatics \cite{MR1662694, MR2345246, dias00, mcboag09} to completely quantum integrable systems such as the quantum C. Neumann oscillators, the asymmetric top and the geodesic flow on an ellipsoid \cite{agbo07, grve08, bomc09, hawi95}. In particular, the zeros of the Stieltjes polynomials may be nicely interpreted as the equilibrium positions of $k$ unit charges in a logarithmic potential in which at each position $\alpha_j$ in the complex plane is fixed a charge of magnitude $\rho_j$. Much is known about the properties of Stieltjes and Van Vleck polynomials for a fixed degree of the Stieltjes polynomial (see, e.g. \cite{vo04} for recent results), but there are few results relating Van Vleck and Stieltjes zeros of different degrees. A few important facts are that the zeros of any Stieltjes polynomial are simple, lie inside the interval $(\alpha_1,\alpha_3)$, and none of them can equal $\alpha_2$ or its corresponding Van Vleck zero. Similarly, for fixed $k$, the Van Vleck zeros are distinct and also lie within $(\alpha_1,\alpha_3)$. The proofs of these results can be found in \cite{vv98, sz75, MR0230954}. Recently we showed in \cite{bomcva08} that the Van Vleck zeros of successive orders interlace. That is, if the Van Vleck zeros of order $k$ are written in increasing order as $\nu_1^{(k)} < \nu_2^{(k)} < \cdots < \nu_{k+1}^{(k)}$, then \begin{equation} \alpha_1 < \nu_{1}^{(k+1)} < \nu_1^{(k)} < \nu_{2}^{(k+1)} < \nu_2^{(k)} < \cdots < \nu_{k+1}^{(k)} < \nu_{k+2}^{(k+1)} < \alpha_3.
\label{inequality} \end{equation} Much of the research in the past several years has focused on the asymptotic properties of the zeros of Stieltjes and Van Vleck polynomials as the degree of the corresponding Stieltjes polynomials tends toward infinity \cite{bosh08, bo08, MR1928260, mamaor05}. Interlacing theorems such as the one above are interesting not only for what they tell us about the zeros for a finite degree, but they also help us to understand such asymptotic limits. They are ``classical'' results in the sense of being statements about finite degree polynomials, and they are also a bridge to connect other classical results with asymptotic limits. Our results below are further steps in this direction. In this paper we present a theorem on the distribution of the Stieltjes zeros and two additional interlacing theorems. For each positive integer $k$ we label the $k+1$ Stieltjes polynomials of degree $k$ according to their Van Vleck zeros, as $S_j^{(k)}(x)$. That is, $S_j^{(k)}(x)$ is the polynomial of degree $k$ that satisfies \begin{equation} \label{lame2} \left[\frac{d^2}{dx^2} + \sum_{j=1}^3 \frac{\rho_j}{x-\alpha_j} \, \frac{d}{dx} - \frac{\mu\left(x-\nu_j^{(k)}\right)}{A(x)} \right] S_j^{(k)}(x) = 0 \end{equation} Our first result is a strengthening of a classical result of Stieltjes: \begin{theo} \label{lem1} There are exactly $j-1$ zeros of $S_j^{(k)}$ in the interval $(\alpha_1,\alpha_2)$, and $k-j+1$ zeros of $S_j^{(k)}$ in the interval $(\alpha_2,\alpha_3)$. Moreover, there are no zeros of $S_j^{(k)}$ between $\alpha_2$ and its corresponding Van Vleck zero $\nu_j^{(k)}$. \end{theo} Next we show that the zeros of successive Stieltjes polynomials of the same degree interlace. In the third theorem we establish that the zeros of the $j$th Stieltjes polynomials of successive degrees interlace. The interlacing properties are illustrated in Figure~\ref{figure1}. \begin{theo} \label{theo1} The zeros of $S_j^{(k)}$ and $S_{j+1}^{(k)}$ interlace; between any two consecutive zeros of $S_j^{(k)}$ there is exactly one zero of $S_{j+1}^{(k)}$. Moreover, the smallest zero of $S_{j+1}^{(k)}$ is less than the smallest zero of $S_j^{(k)}$. That is, let $x_{i,j}^{(k)}$ be the zeros of the Stieltjes polynomial $S_j^{(k)}$, arranged in increasing order. Then \begin{equation} \alpha_1 < x_{1,j+1}^{(k)} < x_{1,j}^{(k)} < x_{2,j+1}^{(k)} < x_{2,j}^{(k)} < \cdots < x_{k,j+1}^{(k)} < x_{k,j}^{(k)} < \alpha_3. \end{equation} \end{theo} \begin{theo} \label{theo2} The zeros of $S_j^{(k)}$ and $S_i^{(k+1)}$ interlace if and only if $i=j$ or $i=j+1$. That is, if $i=j$ or $i=j+1$ then \begin{equation} \alpha_1 < x_{1,i}^{(k+1)} < x_{1,j}^{(k)} < x_{2,i}^{(k+1)} < x_{2,j}^{(k)} < \cdots < x_{k,j}^{(k)} < x_{k+1,i}^{(k+1)} < \alpha_3. \end{equation} Otherwise the zeros of $S_j^{(k)}$ and $S_i^{(k+1)}$ do not interlace. \end{theo} \begin{figure}[h] \begin{center} \includegraphics*[scale=.75]{intstieltjes.pdf} \end{center} \caption{Zeros of Stieltjes and Van Vleck polynomials. On row $j$ are shown $*$: zeros of $S_j^{(k)}$; \ $+$: zeros of $S_j^{(k+1)}$; \ O: Van Vleck zero $\nu_j^{(k)}$; \ X: Van Vleck zero $\nu_j^{(k+1)}$, for $k=6$. Values of the parameters are $(\alpha_1, \alpha_2, \alpha_3) = (-1, 0, 2), \; (\rho_1, \rho_2, \rho_3) = (1, 2, 1/3)$.} \label{figure1} \end{figure} In the following section we prove these results. Following this, in Section \ref{s3}, we show how these results can be combined with known asymptotic properties of Stieltjes and Van Vleck polynomials to produce new results. 
In particular, we construct sequences of Van Vleck zeros to converge to any number in $[\alpha_1, \alpha_3]$, and calculate the asymptotic zero distribution of the Stieltjes polynomials associated with these Van Vleck zeros. Here we also take up the question of whether or not there exist any orthogonal sequences of Stieltjes polynomials. This is a natural question to ask, for several reasons. For one thing, we see from Theorem~\ref{theo2} that any sequence of Stieltjes polynomials $\{S_{j_k}^{(k)}\}$ with $j_{k+1}=j_k$ or $j_k+1$ shares at least one thing in common with orthogonal polynomials, namely the well known fact that the zeros of orthogonal polynomials of successive degrees interlace. Moreover, equation \eqref{lame1} with $p=2$ is the Jacobi differential equation, and the polynomial solutions are the Jacobi polynomials which are, of course, orthogonal \cite{to05}. We will also show that certain of the sequences $\{S_{j_k}^{(k)}\}$ have the same asymptotic zero distribution as sequences of orthogonal polynomials. Additionally, Ismail \cite{is05} has shown that under fairly general conditions, equilibrium positions of particles in a logarithmic potential are the zeros of orthogonal polynomials. In our setting, the sufficient condition is only violated by the fact that the term in front of $S'(x)$ in \eqref{lame1} is not differentiable at $\alpha_2$. Ismail's theorem gives only sufficient conditions for orthogonality, however, so the possibility of orthogonality of Stieltjes polynomials has remained an important open question. It is worth noting that, in spite of these intriguing similarities, in the over 150 year literature on the Lam\'e equation, we have found no definitive statement or conjecture concerning orthogonality of polynomial solutions of \eqref{lame1} when $p\geq 3$. We end Section~\ref{s3} by proving that there are no sequences of Stieltjes polynomials that are orthogonal with respect to a measure. We conclude in Section~\ref{s4} with a few remarks and comments on future directions. \section{Proofs of main results } \label{s2} For the proofs of the theorems we will need the following lemma, which is basically the Sturm separation theorem applied in the intervals $(\alpha_1, \alpha_2)$ and $(\alpha_2, \alpha_3)$. \begin{lemma} There is a zero of $S_{j+1}^{(k)}$ between every two zeros of $S_j^{(k)}$ in the interval $(\alpha_1, \alpha_2)$, and between every zero of $S_j^{(k)}$ in $(\alpha_1, \alpha_2)$ and either of the singular points $\alpha_1, \alpha_2$. Likewise, there is a zero of $S_{j}^{(k)}$ between every two zeros of $S_{j+1}^{(k)}$ in the interval $(\alpha_2, \alpha_3)$, and between every zero of $S_{j+1}^{(k)}$ in $(\alpha_2, \alpha_3)$ and either of the singular points $\alpha_2, \alpha_3$. \label{lem2} \end{lemma} \begin{proof} First we note that the constant $\mu$ in \eqref{lame1} is determined by the degree $k$ of the Stieltjes polynomials by substitution and identification of the powers, as \begin{equation} \mu = \mu_k = k\left( k-1 + \rho_1+\rho_2+\rho_3\right). \label{mu} \end{equation} For the remainder of this proof, since we are dealing with Stieltjes polynomials of fixed degree $k$, we will omit the superscripts and simply write $S_j = S_j^{(k)}$ and $S_{j+1}=S_{j+1}^{(k)}$. 
Now, the Stieltjes polynomials $S_j$ and $S_{j+1}$ satisfy \begin{eqnarray} \left[\frac{d^2}{dx^2} + \sum_{j=1}^3 \frac{\rho_j}{x-\alpha_j} \, \frac{d}{dx} - \frac{\mu_k\left(x-\nu_j^{(k)}\right)}{A(x)} \right] S_j(x) = 0, \label{Sj}\\ \left[\frac{d^2}{dx^2} + \sum_{j=1}^3 \frac{\rho_j}{x-\alpha_j} \, \frac{d}{dx} - \frac{\mu_k\left(x-\nu_{j+1}^{(k)}\right)}{A(x)} \right] S_{j+1}(x) = 0. \label{Sj+1} \end{eqnarray} Define the integrating factor \begin{equation} J(x) = \prod_{j=1}^3 \left| x - \alpha_j \right|^{\rho_j}. \end{equation} Then \begin{equation} J'(x) = J(x)\sum_{j=1}^3 \frac{\rho_j}{x-\alpha_j}. \end{equation} Upon multiplying \eqref{Sj} by $S_{j+1}$ and \eqref{Sj+1} by $S_j$, taking the difference of the result, and then multiplying by $J$, we obtain \begin{equation} \frac{d}{dx} \left[ J \left(S_{j+1}' S_j - S_{j+1} S_j ' \right)\right] = Q S_j S_{j+1}, \label{sturm} \end{equation} where \begin{equation} Q(x) = \mu_k\left(\nu_{j} -\nu_{j+1}\right) \frac{J(x)}{A(x)}. \end{equation} Note that $Q<0$ in $(\alpha_1, \alpha_2)$ and $Q>0$ in $(\alpha_2, \alpha_3)$. Now, consider two consecutive zeros of $S_j$, $x_1$ and $x_2$, in the interval $(\alpha_1, \alpha_2)$. Then $S_j'$ must alternate signs at $x_1$ and $x_2$. Thus, \begin{equation} \left.J\left(S_{j+1}'S_j-S_{j+1}S_j'\right)\right|_{x=x_1}^{x=x_2} = -\left.J\left(S_{j+1}S_j'\right)\right|_{x=x_1}^{x=x_2} \end{equation} is positive if $S_jS_{j+1}>0$ in $(x_1, x_2)$, and negative if $S_jS_{j+1}<0$ in $(x_1, x_2)$. But \eqref{sturm} implies that this expression is negative if $S_jS_{j+1} > 0$ in $(x_1, x_2)$ and positive if $S_jS_{j+1} < 0$ in $(x_1, x_2)$. Thus it must be that $S_{j+1}$ changes sign in $(x_1, x_2)$, and we have shown that between any two consecutive zeros of $S_j$ in $(\alpha_1, \alpha_2)$, there is a zero of $S_{j+1}$. Now let $x_1$ be the smallest zero of $S_j$ in $(\alpha_1, \alpha_2)$. Then, since $J(\alpha_1)=0$, \begin{equation} \left.J\left(S_{j+1}'S_j-S_{j+1}S_j'\right)\right|_{x=\alpha_1}^{x=x_1} = -J(x_1)S_{j+1}(x_1)S_j'(x_1) \end{equation} also has the same sign as $S_jS_{j+1}$ in $(\alpha_1, x_1)$ if $S_{j+1}$ does not change sign in this interval, again contradicting \eqref{sturm}. A similar argument can be applied to the largest zero of $S_j$ in $(\alpha_1, \alpha_2)$ and $\alpha_2$, which establishes that between every zero of $S_j$ in $(\alpha_1, \alpha_2)$ and either of the singular points $\alpha_1, \alpha_2$, there is a zero of $S_{j+1}$. This argument may be exactly repeated in the interval $(\alpha_2, \alpha_3)$, noting that $Q>0$ in this interval. It follows that between any two consecutive zeros of $S_{j+1}$ in $(\alpha_2,\alpha_3)$, or between $\alpha_2$ and the smallest zero of $S_{j+1}$ in $(\alpha_2,\alpha_3)$, or between the largest zero of $S_{j+1}$ in $(\alpha_2, \alpha_3)$ and $\alpha_3$, there is a zero of $S_j$. \end{proof} \begin{proof}[Proof of Theorem \ref{lem1}] According to a classical result of Stieltjes \cite{MR1554669, whwa96}, every possible distribution of the zeros of the Stieltjes polynomials in the intervals $(\alpha_1, \alpha_2)$ and $(\alpha_2,\alpha_3)$ occurs. That is, for each integer $k$ and any integer $0\leq m\leq k$, there is a Stieltjes polynomial of degree $k$ with $m$ zeros in $(\alpha_1, \alpha_2)$ and $k-m$ zeros in $(\alpha_2,\alpha_3)$.
Thus it suffices to show that there are at least as many zeros of $S_{j+1}^{(k)}$ as of $S_j^{(k)}$ in $(\alpha_1,\alpha_2)$. But this is an immediate consequence of Lemma \ref{lem2}, so the first part of the theorem is proved. For the second part of the theorem, according to Shah \cite[cf. Theorem 2]{MR0230954}, between any zero of $S(x)$ and $\nu$ there is either a zero of $S'(x)$ or $\alpha_2$. Suppose $\nu>\alpha_2$, and that there is a zero $x_1$ of $S(x)$ between $\alpha_2$ and $\nu$. Since there is a zero of $S'(x)$ between $x_1$ and $\nu$ there must be a zero $x_2$ of $S(x)$ greater than $\nu$. We may assume that $x_1$ and $x_2$ are consecutive. But since the zeros of $S(x)$ are simple there cannot be a zero of $S'(x)$ in both intervals $(x_1, \nu)$ and $(\nu, x_2)$, which is a contradiction. The case when $\nu<\alpha_2$ is similar. \end{proof} \begin{proof}[Proof of Theorem \ref{theo1}] Combining Lemma \ref{lem2} with Theorem \ref{lem1}, we see that between the $j-1$ zeros of $S_j$ in $(\alpha_1, \alpha_2)$ there are $j-2$ zeros of $S_{j+1}$. The other two zeros of $S_{j+1}$ in $(\alpha_1,\alpha_2)$ lie between $\alpha_1$ and the smallest zero of $S_j$ and between the largest zero of $S_j$ in $(\alpha_1, \alpha_2)$ and $\alpha_2$. Similarly, between the $k-j$ zeros of $S_{j+1}$ in $(\alpha_2,\alpha_3)$ there are $k-j-1$ zeros of $S_j$. And the other two zeros of $S_j$ in $(\alpha_2, \alpha_3)$ lie between $\alpha_2$ and the smallest zero of $S_{j+1}$ in $(\alpha_2, \alpha_3)$ and between the largest zero of $S_{j+1}$ in $(\alpha_2, \alpha_3)$ and $\alpha_3$. We have thus accounted for all of the zeros of $S_j$ and $S_{j+1}$, and the interlacing is proved. \end{proof} We have actually proved a stronger statement than Theorem \ref{theo1}. We note this in the following corollary. \begin{coro} Let $l>j$. Then between any two zeros of $S_j^{(k)}$ in $(\alpha_1, \alpha_2)$ there is a zero of $S_l^{(k)}$. Between any two zeros of $S_l^{(k)}$ in $(\alpha_2, \alpha_3)$ there is a zero of $S_j^{(k)}$. \end{coro} \begin{proof}[Proof of Theorem \ref{theo2}] First we prove the ``only if'' part. In order for the zeros of $S_i^{(k+1)}$ and $S_j^{(k)}$ to interlace, it must be the case that the smallest zero of $S_i^{(k+1)}$ is smaller than the smallest zero of $S_j^{(k)}$, which is impossible if $i<j$ since then there would be fewer zeros of $S_i^{(k+1)}$ than of $S_j^{(k)}$ in the interval $(\alpha_1,\alpha_2)$. Similarly, if $i>j+1$ then there are fewer zeros of $S_i^{(k+1)}$ than of $S_j^{(k)}$ in the interval $(\alpha_2, \alpha_3)$. For the ``if'' part, as in the proof of Lemma \ref{lem2}, we derive the following expression \begin{equation} \frac{d}{dx} \left[ J \left(\frac{dS_i^{(k+1)}}{dx} S_j^{(k)} - S_i^{(k+1)} \frac{dS_j^{(k)}}{dx} \right)\right] = QS_j^{(k)} S_i^{(k+1)}, \label{sturm2} \end{equation} where in this case \begin{eqnarray} Q(x) &=& \left(\mu_{k+1}-\mu_k\right)\frac{J(x)}{A(x)}\left( x - \hat{\nu}_i\right), \quad \mbox{and} \\ \hat{\nu}_i & = & \frac{\mu_{k+1}\nu_i^{(k+1)} - \mu_k\nu_j^{(k)}}{\mu_{k+1}-\mu_k}. \end{eqnarray} Note that \eqref{inequality} implies \begin{equation} \hat{\nu}_j < \nu_j^{(k+1)} < \alpha_3 \quad \mbox{and} \quad \alpha_1 < \nu_{j+1}^{(k+1)} < \hat{\nu}_{j+1}. \end{equation} Suppose, for the moment, that $\alpha_1 < \hat{\nu}_j < \alpha_2$.
Then, since $Q>0$ in $(\hat{\nu}_j, \alpha_2)$, between every two consecutive zeros of $S_j^{(k+1)}$ in $(\hat{\nu}_j,\alpha_2)$, and between the largest zero of $S_j^{(k+1)}$ in $(\hat{\nu}_j,\alpha_2)$ and $\alpha_2$, there is a zero of $S_j^{(k)}$. Since $Q<0$ in $(\alpha_1, \hat{\nu}_j)$, there is a zero of $S_j^{(k+1)}$ between every two zeros of $S_j^{(k)}$ in $(\alpha_1, \hat{\nu}_j)$, and between $\alpha_1$ and the smallest zero of $S_j^{(k)}$ in $(\alpha_1, \hat{\nu}_j)$. This accounts for all of the $j-1$ zeros of $S_j^{(k)}$ and of $S_j^{(k+1)}$ in $(\alpha_1, \alpha_2)$. Similarly, since $Q<0$ in $(\alpha_2, \alpha_3)$, between every two zeros of $S_j^{(k)}$ in $(\alpha_2, \alpha_3)$ and between every zero of $S_j^{(k)}$ in $(\alpha_2, \alpha_3)$ and either of the singular points $\alpha_2, \alpha_3$, there is a zero of $S_j^{(k+1)}$. This accounts for the $k$ zeros of $S_j^{(k)}$ and the $k+1$ zeros of $S_j^{(k+1)}$ and the interlacing is proved in this case. A nearly identical argument holds in the case when $\alpha_2<\hat{\nu}_{j+1}<\alpha_3$ to show that the zeros of $S_{j+1}^{(k+1)}$ and $S_j^{(k)}$ interlace. The proof is thus completed by the following lemma. \end{proof} \begin{lemma} $\alpha_1<\hat{\nu}_j<\alpha_2$ and $\alpha_2 < \hat{\nu}_{j+1} < \alpha_3$. \end{lemma} \begin{proof} Suppose that $\hat{\nu}_j\leq\alpha_1$. Then it would be the case that $Q>0$ in $(\alpha_1, \alpha_2)$ and by arguments similar to the above, between every two zeros of $S_j^{(k+1)}$ in $(\alpha_1, \alpha_2)$ and between every zero of $S_j^{(k+1)}$ in $(\alpha_1, \alpha_2)$ and either of the singular points $\alpha_1, \alpha_2$, there is a zero of $S_j^{(k)}$. But this would imply the existence of at least $j$ zeros of $S_j^{(k)}$ in $(\alpha_1, \alpha_2)$, which contradicts Theorem \ref{lem1}. Likewise, if $\hat{\nu}_j\geq\alpha_2$, a similar argument would imply the existence of at least $j$ zeros of $S_j^{(k+1)}$ in $(\alpha_1, \alpha_2)$, which is also a contradiction. This argument may be repeated to prove the second statement. \end{proof} \section{Asymptotic properties} \label{s3} We now investigate how the zeros of Stieltjes polynomials distribute over the interval $(\alpha_1, \alpha_3)$ in the limit as the degree of the Stieltjes polynomials tends to infinity. Recall that Theorem \ref{lem1} says that if $\nu<\alpha_2$, then there are $j-1$ zeros of $S_{j}^{(k)}$ in $(\alpha_1, \nu)$ and $k-j+1$ zeros in $(\alpha_2, \alpha_3)$, and similarly if $\nu\geq\alpha_2$, \textit{mutatis mutandis}. In \cite{bosh08} it is shown that the Van Vleck zeros have the asymptotic distribution given by a probability measure supported on $(\alpha_1, \alpha_3)$, with density $\rho_V(x)$ given by the following: \begin{equation} \rho_V(x) = \left\{\begin{array}{ll} \displaystyle \frac{1}{2\pi} \int_{\alpha_2}^{\alpha_3} \frac{ds}{\sqrt{A(s)(x-s)}} & \; \mbox{ \ if \ } \; \alpha_1 < x < \alpha_2, \\[.2in] \displaystyle \frac{1}{2\pi} \int_{\alpha_1}^{\alpha_2} \frac{ds}{\sqrt{A(s)(x-s)}} & \; \mbox{ \ if \ } \; \alpha_2 < x < \alpha_3. \end{array}\right. \label{vvasy} \end{equation} Thus, given any point $\nu\in(\alpha_1, \alpha_3)$ there is a sequence of Van Vleck zeros $\left\{\nu_{j_k}^{(k)}\right\}$ that converges to $\nu$ as $k\rightarrow\infty$. So we may ask: what is the distribution of the corresponding Stieltjes zeros in this same limit?
A partial answer to this question is given by Theorem \ref{lem1}, which implies that the limiting density is supported on a subset of $(\alpha_1, \alpha_3)\setminus I$, where $I$ is the interval bounded by $\alpha_2$ and $\nu$. The question can be resolved by applying the results of \cite{MR1928260}. Mart\'inez-Finkelshtein and Saff \cite{MR1928260} consider the more general case of \eqref{lame1} for an arbitrary number $p$ of $\alpha_i$'s. The authors fix relative proportions $\theta_1, \dots, \theta_{p-1}$ in each of the intervals $(\alpha_i, \alpha_{i+1}), \; i=1, \dots, p-1$, and extract a sequence of Stieltjes polynomials such that as the degree of each polynomial in the sequence tends to infinity, the fraction of its zeros in the interval $(\alpha_i, \alpha_{i+1})$ tends to $\theta_i$. Under this assumption the authors derive asymptotic results for the zeros of Stieltjes and Van Vleck polynomials. We specialize as before to the $p=3$ case and recast these calculations in the light of our results. Let $0\leq \theta\leq 1$. Theorem 1 of \cite{MR1928260} gives the asymptotic distribution of the zeros of a sequence of Stieltjes polynomials. We interpret it in our setting in the following way. Suppose the sequence $\left\{S_{j_k}^{(k)}\right\}$ is chosen such that the fraction of the zeros of $S_{j_k}^{(k)}$ in $(\alpha_1, \alpha_2)$ tends to $\theta$. Some simple but tedious calculations show that the limit $\lim_{k\rightarrow\infty}\nu_{j_k}^{(k)} = \nu$ of the corresponding Van Vleck zeros is determined by \begin{equation} \frac{1}{\pi}\int_{\alpha_1}^{\min(\alpha_2, \nu)} \sqrt{\left| \frac{\nu - x}{A(x)}\right|} dx = \theta. \label{nu} \end{equation} Moreover, let $I$ be the interval bounded by $\alpha_2$ and $\nu$. Then the asymptotic distribution of the Stieltjes polynomials $S_{j_k}^{(k)}$ is given by \begin{equation} \rho_S(x) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{\pi}\sqrt{ \frac{\nu-x}{A(x)}} & \; \mbox{ \ if \ } \; x \in (\alpha_1, \alpha_3)\setminus I \\[.1in] 0 & \; \mbox{ \ if \ } \; x \in I \end{array} \right. \label{Sden} \end{equation} We note, in particular, that if $\nu=\alpha_1, \alpha_2$ or $\alpha_3$ then $\rho_S$ is the so-called ``arcsin distribution'' supported on the intervals $(\alpha_2, \alpha_3)$, $(\alpha_1, \alpha_3)$ and $(\alpha_1,\alpha_2)$, respectively. Now, by Theorem \ref{lem1}, $j_k-1$ zeros of $S_{j_k}^{(k)}$ lie in $(\alpha_1, \alpha_2)$. So, if $j_k/k\rightarrow\theta$, then the fraction of zeros of $S_{j_k}^{(k)}$ in $(\alpha_1, \alpha_2)$ tends to $\theta$. (One such sequence, which also satisfies $j_{k+1}=j_k$ or $j_k+1$, is $j_k=\lceil k\theta+1\rceil $.) Combining our results with the asymptotic properties, we have the following. \begin{theo} (i) Let $\{j_k\}$ be a sequence of positive integers such that $j_k/k \rightarrow \theta$. Then we have the limit of sequences of Van Vleck zeros: \begin{equation} \lim_{k\rightarrow\infty}\nu_{j_k}^{(k)}=\nu, \end{equation} where $\nu$ is determined by \eqref{nu}. (ii) Given any $\nu\in[\alpha_1, \alpha_3]$, if $\theta$ is defined by \eqref{nu} and $j_k/k\rightarrow\theta$, then the sequence $\left\{\nu_{j_k}^{(k)}\right\}$ converges to $\nu$. (iii) If the sequence $\{j_k\}$ satisfies $j_{k+1} = j_k$ or $j_k+1$, then the sequence of Stieltjes polynomials $\left\{ S_{j_k}^{(k)}\right\}$ is an infinite sequence of polynomials with interlacing zeros and asymptotic zero distribution given by \eqref{Sden}.
\end{theo} We may also calculate conditions under which the limiting distribution of Stieltjes zeros is supported throughout $(\alpha_1, \alpha_3)$. A necessary condition given by Theorem \ref{lem1} is that $\nu =\alpha_2$, and \eqref{Sden} tells us that this is also a sufficient condition. In order for the limiting distribution to be supported on $(\alpha_1, \alpha_3)$, the fraction of the zeros of $S_{j_k}^{(k)}$ in $(\alpha_1, \alpha_2)$ must tend to $\theta_c$, where $\theta_c$ is calculated by setting $\nu=\alpha_2$ in \eqref{nu}. A simple calculation shows that this is (see also \cite[Prop. 1]{MR1928260}): \begin{equation} \theta_c = \frac{2}{\pi} \sin^{-1}\sqrt{\frac{\alpha_2 - \alpha_1}{\alpha_3 - \alpha_1}}. \end{equation} One remaining question is whether there exist sequences of orthogonal Stieltjes polynomials. Orthogonality is a rather strict condition on sequences of polynomials, but, as we mentioned in the Introduction, there are compelling reasons to suspect that some sequences of Stieltjes polynomials might be orthogonal. It is well known (see, e.g. \cite{sz75}) that if a sequence $\{p_n\}$ of monic polynomials of degree $n$ is orthogonal with respect to some measure, then the zeros of $p_n$ and $p_{n+1}$ interlace. Moreover, $\{p_n\}$ is orthogonal if and only if there exist sequences $\{a_n\}$ and $\{b_n\}$, with $a_n\in\mathbb{R}$ and $b_n>0$ such that $p_n= (x-a_n)p_{n-1} - b_np_{n-2}$. We have found no simple contradiction to emerge from assuming that this holds for sequences of Stieltjes polynomials. Additionally, if $a_n\rightarrow a\in\mathbb{R}$ and $b_n\rightarrow b\in (0,\infty)$, then according to Theorem 5.3 of \cite{ne79}, the polynomials $p_n$ have the asymptotic zero distribution $\omega_{[\alpha, \beta]}$ with density supported on the interval $[\alpha, \beta]$ given by \begin{equation} \frac{d\omega_{[\alpha, \beta]}(x)}{dx} = \frac{1}{\pi\sqrt{(\beta-x)(x-\alpha)}}, \label{opden} \end{equation} where $\alpha=a-2/b$ and $\beta=a+2/b$. Interestingly, the density \eqref{opden} is identical to \eqref{Sden} when $\nu$ in \eqref{Sden} is one of $\alpha_1, \alpha_2, \alpha_3$. Thus, if $\theta = 0, 1$ or $\theta_c$, the sequence $\left\{ S_{\lceil k\theta + 1\rceil}^{(k)}\right\}$ is an infinite sequence of polynomials with interlacing zeros and asymptotic zero distribution supported in an open interval and given by a density of the form \eqref{opden}. These sequences thus share many properties of orthogonal polynomial sequences. However, despite these commonalities, we have the following result. \begin{theo} If $p=3$, then given any measure $\omega$ supported on $(\alpha_1, \alpha_3)$, there is no sequence of Stieltjes polynomials orthogonal with respect to $\omega$. \end{theo} \begin{proof} Our argument is based on the uniqueness of the orthogonal measure for products of Stieltjes polynomials. Let $\{S^{(n)}\}_{n=1}^{\infty}$ be a sequence of Stieltjes polynomials. In \cite{vo99}, Volkmer shows that for $n \neq m$, the two-variable polynomials $\prod_{k=1}^2 S^{(n)}(x_k)$ and $\prod_{k=1}^2 S^{(m)}(x_k)$ are orthogonal with respect to the measure \begin{equation} \mu(x_1,x_2)= \prod_{j=1}^3 \prod_{k=1}^2 |x_k-\alpha_j|^{\rho_j-1} (x_2-x_1) \end{equation} on the open rectangle $(\alpha_1,\alpha_2) \times (\alpha_2,\alpha_3)$.
The measure can easily be extended to the bigger rectangle $R=(\alpha_1,\alpha_3) \times (\alpha_1,\alpha_3)$ by letting \begin{equation} \nu(x_1, x_2) = \chi_{[\alpha_1, \alpha_2]}(x_1) \chi_{[\alpha_2, \alpha_3]}(x_2) \mu(x_1, x_2), \end{equation} where $\chi_E(x)$ is the characteristic function of $E$. Since the condition \[ \int_{R} e^{x_1^2+x_2^2} \ d\nu(x_1,x_2) < \infty \] holds, Theorem 3.1.17 in \cite{duxu01} implies that $\nu$ is determinate, i.e. $\nu$ is uniquely determined by its moments. It then follows that $\nu$ is the unique measure on $R$ for which $\prod_{k=1}^2 S^{(n)}(x_k)$ and $\prod_{k=1}^2 S^{(m)}(x_k)$ are orthogonal. It is now easy to establish that Stieltjes polynomials are not orthogonal on $(\alpha_1,\alpha_3)$ with respect to any measure. Assume the contrary, i.e. assume there exists a measure $\omega$ with support in $(\alpha_1,\alpha_3)$ such that \begin{equation} \int_{\alpha_1}^{\alpha_3} S^{(n)}(x) S^{(m)}(x) \ d\omega(x) = 0 \ \text{ whenever } n \neq m. \end{equation} By Fubini's Theorem, the products $\prod_{k=1}^2 S^{(n)}(x_k)$ and $\prod_{k=1}^2 S^{(m)}(x_k)$ are then orthogonal on $R$ with respect to the product measure $\omega(x_1) \times \omega(x_2)$. But clearly, \begin{equation} \nu(x_1,x_2) \neq \omega(x_1) \times \omega(x_2) \end{equation} as $\nu$ cannot be written as the product of a measure in $x_1$ with a measure in $x_2$. This contradicts the uniqueness of $\nu$ and proves the theorem. \end{proof} \section{Concluding remarks and open questions} \label{s4} It is interesting that the asymptotic properties of Stieltjes and Van Vleck polynomials are independent of the fixed charges $\rho_j$, and depend only on the locations $\alpha_j$ of the fixed charges. Thus, for any configuration of the $\alpha_j$'s, there is a three-parameter $(\rho_1, \rho_2, \rho_3)$ family of Stieltjes polynomials with the same asymptotic properties. Of course, for finite $k$ the zeros of $S_j^{(k)}$ depend on the $\rho_j$'s, and so our results are a way to connect the classical results on Stieltjes and Van Vleck polynomials to the more recent work done on the asymptotic properties of these functions. This allows us to present a fairly complete description in the case we have considered, of real $\alpha_j$'s and $p=3$. An obvious way to generalize the results we have presented would be to consider the case of more general $p$. In this case the Van Vleck polynomials are of degree $p-2$. Hence one question that arises is how to order the Van Vleck and Stieltjes polynomials in a manner analogous to the ordering defined in \eqref{lame2}. It may be that there is no generally consistent ordering as there is for $p=3$. However, it still may be possible to generalize some of the results of this paper. We conclude by noting that an important conjecture in quantum chaos due to M. V. Berry and M. Tabor asserts that for generic quantum integrable systems, the mean level spacing should exhibit a random distribution in the semi-classical limit. It is also believed that a similar behavior holds for the zeros of the corresponding eigenfunctions. An interesting application of our interlacing results may consist of computing the mean level spacing of the quantum integrable systems mentioned in the introduction, and seeing if the Berry-Tabor conjecture holds in these cases.
\section{\label{sec:introduction}Introduction} Increasing the channel capacity of wireless systems is an essential demand in the implementation of new generations of telecommunication networks. An attractive way to meet this need without increasing the frequency bandwidth is to utilize the photon orbital angular momentum (OAM) in the microwave regime \cite{thide2007}. Electromagnetic waves with helical spatial phase profiles can support different orthogonal OAM modes. Theoretically, the number of such modes is infinite. Consequently, combining these new multiplexing dimensions with other existing schemes such as time, frequency and polarization multiplexing can significantly increase the channel capacity \cite{chen2020}. In recent years, various methods have been proposed to generate radio-frequency waves carrying OAM modes. Phased array antennas with circular arrangements \cite{gao2014}, spiral phase plates \cite{hui2015}, all-dielectric transformers \cite{yi2019, yi2_2019}, transmit-arrays \cite{jiang2018} and reflect-arrays \cite{karimipour2019} are among the most common means of synthesizing OAM vortex waves. In phased array antennas, the complexity of the feed network and the destructive effects of mutual coupling among adjacent elements are important limiting factors. Note that eliminating the mutual couplings among elements in phased arrays drastically increases the complexity of the realization process. To avoid these issues, all-dielectric transformers have been proposed in the literature. In addition, these transformers can reduce the divergence effects of vortex waves \cite{yi2_2019}. However, all-dielectric transformers and spiral phase plates are relatively bulky, which makes them unsuitable for use in low-profile integrated systems. Recently, the advent of metasurfaces \cite{yu2011}-\cite{yu2014} has paved the way for the realization of integrated optical devices. These structures can manipulate the electromagnetic wave front to obtain the desired properties by introducing abrupt phase shifts on the incident wave. Reflect/transmit arrays can be realized by metasurfaces. The use of metasurface-based reflect/transmit arrays has several advantages, including high gain, low divergence angles, low manufacturing cost and simple fabrication processes. However, in reflect/transmit arrays the feeding systems need to be mounted outside the structure, which makes them bulky. This issue conflicts with the integrability of metasurfaces. An alternative solution that achieves the advantages of reflect/transmit arrays without the need for bulky feed systems is to utilize leaky-wave meta-holograms, in which the feed can be integrated with the metasurface plane. The application of the holographic technique in antenna engineering was first proposed by Checcacci \cite{checcacci1970}, and a practical example of a leaky-wave hologram was presented in \cite{fong2010}. In the holographic technique, the information obtained from the interference of the surface wave (as the reference wave) and the desired space wave (as the object wave) is utilized for the synthesis of the metasurface. In \cite{oraizi2020} an isotropic hologram is used to generate radiation carrying OAM modes. In \cite{amini_2020}-\cite{bodehou_OAM_2019} anisotropic holograms are designed as polarized vortex beam generators. A serious obstacle in designing and optimizing leaky-wave holographic antennas is their large size.
Generally, the dimensions of such structures should be about $10\lambda$ to $20\lambda$ ($\lambda$ being the free-space wavelength) to achieve a high-gain radiation pattern. On the other hand, due to the small size of the unit cells in terms of wavelength ($\approx \lambda/6$), the number of mesh cells in full-wave simulations may increase significantly. This issue greatly increases the computational complexity and CPU time consumption. Various theoretical models have been reported for the analysis and synthesis of leaky-wave holograms. The aperture field estimation (AFE) method has been proposed as an accurate synthesis method for realizing leaky-wave holograms with circular \cite{casaletti_2017} and vertical polarizations \cite{amini_2020}. In \cite{ovejero2015, bodehou2019}, the Method of Moments (MoM) framework is developed for the analysis of anisotropic holographic antennas, which is accurate and fast for treating large-size structures. This method has been used to explain the radiation mechanism of holograms with circularly polarized shaped beams \cite{minatti_2015}, multiple beams \cite{ovejero2017, bodehou2020} and multi-band radiation \cite{faenzi2019}. Another analytical method that can be considered is the adiabatic Floquet-wave expansion (AFW) method, which is relatively simpler than MoM in terms of mathematical complexity. This method was first proposed by Minatti et al. \cite{minatti_2016, minatti_2_2016} for the shaping of far-zone patterns with different polarizations. In this paper, the combination of the aperture field estimation (AFE) technique (as a synthesis method) and the adiabatic Floquet-wave (AFW) expansion method (as an analysis approach) is utilized to gain deep insight into the radiation mechanism of OAM wave radiators enabled by leaky-wave holograms. Using the AFE technique, the aperture field (consisting of both phase and amplitude information) and then the impedance distribution required for the polarized vortex beam are estimated. Furthermore, the AFW method determines the leakage properties of the surface wave. This approach is much faster than full-wave methods, which makes it suitable for synthesizing and optimizing complex vortex beams. \section{Adiabatic Floquet-wave expansion} The adiabatic generalization of the Floquet-wave (FW) theorem is a well-known method for the analysis of leaky-wave holograms generating pencil beams in the microwave regime, initially proposed by Minatti et al. \cite{minatti_2016}. The advantage of using this method is the precise analysis of anisotropic holograms without the need for full-wave simulations, which significantly reduces the synthesis time. A conceptual structure of the anisotropic hologram is shown in Fig.~\ref{fig:Fig1a}, consisting of the modulated metasurface (hologram) and a vertical monopole as the surface-wave generator. The monopole excites a cylindrical magnetic surface wave, which can be represented by \cite{moeini_scirep_2019}: \begin{equation} \vec{H}_t|_{z=0^+}\approx \vec{J}_{sw}H_1^{(2)}(k^{(0)}\rho) \end{equation} where $H_1^{(2)}$ is the Hankel function of the second kind and first order. Also, $k^{(0)}$ is the complex surface wave number.
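Before proceeding, note that for observation points a few wavelengths away from the monopole the Hankel function can be replaced by its large-argument form; up to a constant phase factor, \begin{equation*} H_1^{(2)}(k^{(0)}\rho) \sim \sqrt{\frac{2j}{\pi k^{(0)}\rho}}\, e^{-jk^{(0)}\rho}, \qquad |k^{(0)}\rho|\gg 1, \end{equation*} i.e. a cylindrical wave whose amplitude decays as $\rho^{-1/2}$ and whose radial phase progression is governed by the complex wavenumber $k^{(0)}$. This is the asymptotic form used below in the Floquet-wave expansion of the surface current.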
\begin{figure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width = \textwidth]{Fig1.png} \caption{} \label{fig:Fig1a} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width = \textwidth]{Zs.png} \caption{} \label{fig:Fig1b} \end{subfigure} \caption{(a) Conceptual schematic of the anisotropic hologram with the monopole located at the origin. (b) Definition of the transparent anisotropic surface impedance in the presence of the dielectric host medium.} \label{fig:Fig1} \end{figure} The metasurface consists of pseudo-periodic patches printed on a grounded dielectric substrate (as shown in Fig.\ref{fig:Fig1b}). If the dimensions of the patches are small enough compared to the wavelength ($\approx\lambda/6$), the metasurface effectively acts as an impedance surface. By proper patterning of the surface impedance, the surface wave can be converted into a leaky mode with the desired direction, polarization and topological charge. The main purpose of the adiabatic Floquet-wave analysis is to provide a systematic approach for calculating the distribution of the leakage parameter ($\alpha$) across the aperture, which can then be used to synthesize radiation patterns with the desired specifications \cite{minatti_2_2016}. In this paper, this method is used for the analysis of anisotropic holograms generating vortex waves carrying OAM modes. \subsection{Surface impedance distribution} In the analysis of leaky-wave holograms, the characteristics of the metasurface can be described by an impedance boundary condition. For the metasurface placed at $z = 0$ (see Fig.\ref{fig:Fig1b}), the transparent impedance boundary condition can be expressed as \cite{tretyakov_2003}: \begin{equation} \vec{E}_t = j\bar{\bar{X}}(\vec{\rho}).[\hat{z}\times(\vec{H}_t|_{z=0^+} - \vec{H}_t|_{z=0^-})] = j\bar{\bar{X}}(\vec{\rho}).\vec{J} \label{eq:Et} \end{equation} where \begin{equation} \bar{\bar{X}}(\vec{\rho}) = \begin{pmatrix} X_{\rho\rho}(\vec{\rho}) & X_{\rho\phi}(\vec{\rho}) \\ X_{\rho\phi}(\vec{\rho}) & X_{\phi\phi}(\vec{\rho}) \end{pmatrix} \end{equation} Note that $\bar{\bar{X}}(\vec{\rho})$ denotes the tensorial reactance and depends on the observation vector $\vec{\rho}$ in cylindrical coordinates. $\vec{E}_t$ and $\vec{H}_t$ represent the tangential electric and magnetic fields, respectively. In the general case, the distribution of the surface reactance can be expressed as follows, which is obtained directly from the generalized holographic theory \cite{minatti_2016}: \begin{equation} X_{\rho\rho}(\vec{\rho})=X_\rho [1+m_\rho(\vec{\rho})\cos (Ks(\vec{\rho}) + \Phi_{\rho}(\vec{\rho}))] \label{eq:X_rho_rho} \end{equation} \begin{equation} X_{\rho\phi}(\vec{\rho})=X_\rho m_\phi(\vec{\rho})\cos(Ks(\vec{\rho}) + \Phi_\phi(\vec{\rho})) \label{eq:X_rho_phi} \end{equation} \begin{equation} X_{\phi\phi}(\vec{\rho})=X_\phi[1-m_\rho(\vec{\rho})\cos(Ks(\vec{\rho}) + \Phi_\rho(\vec{\rho}))] \label{eq:X_phi_phi} \end{equation} In (\ref{eq:X_rho_rho})-(\ref{eq:X_phi_phi}) the coefficients $X_\rho$ and $X_\phi$ denote the average surface reactances, which are independent of the position vector $\vec{\rho}$. Also, $m_\rho$ and $m_\phi$ are the modulation indices controlling the distribution of the leakage constant across the radiation aperture.
Note that $Ks(\vec{\rho})$ is the rapidly varying part and $\Phi_{\rho,\phi}(\vec{\rho})$ are the slowly varying parts of the modulation phase, such that: \begin{equation} |\nabla_{\vec{\rho}}Ks(\vec{\rho})|\gg|\nabla_{\vec{\rho}}\Phi_{\rho, \phi}(\vec{\rho})| \label{eq:nabla} \end{equation} where $\nabla_{\vec{\rho}}$ is the gradient operator with respect to $\vec{\rho}$. The slowly varying parts of the modulation phase ($\Phi_{\rho, \phi}(\vec{\rho})$) play an important role in controlling the polarization and the beam vorticity. If they are linear functions of the azimuth angle ($\phi$) such that: \begin{equation} \Phi_{\rho}(\vec{\rho}) = \Phi_{\phi}(\vec{\rho})=\pm l\phi \quad l = 0, 1, 2, ... \end{equation} then the radiated beam can support an OAM mode with an integer topological charge. Note that the local periodicity of the boundary condition in the $\rho$ direction is given by: \begin{equation} p(\vec{\rho}) =\frac{2\pi}{\nabla_{\vec{\rho}}(Ks(\vec{\rho})\pm l\phi).\hat{\rho}} = \frac{2\pi}{\nabla_{\vec{\rho}}Ks(\vec{\rho}).\hat{\rho}} \end{equation} In order to determine $Ks$, $\Phi_\rho$ and $\Phi_\phi$, we will use the AFE method \cite{casaletti_2017, amini_2020}. In this method, the aperture field is estimated from the inverse Fourier transformation of the desired radiation pattern. \subsection{Floquet-wave expansion of surface current} Owing to the periodicity of the surface impedance in (\ref{eq:X_rho_rho})-(\ref{eq:X_phi_phi}), the higher-order modes present in the surface current must be considered. This suggests that the induced surface current may be expanded in terms of the $n$th harmonics as follows \cite{oliner_1959, minatti_2016}: \begin{equation} \vec{J}\approx \sum _{n=-\infty}^{+\infty}(J_\rho^{(n)}\hat{\rho} + J_\phi^{(n)}\hat{\phi})e^{-jnKs(\vec{\rho})}H_1^{(2)}(k^{(0)}\rho) \label{eq:J} \end{equation} Using the asymptotic form of the Hankel function, equation (\ref{eq:J}) can be written as: \begin{equation} \vec{J}\approx \sum _{n=-\infty}^{+\infty} \sqrt{\frac{2j}{\pi k^{(0)} \rho}}(J_\rho^{(n)}\hat{\rho} + J_\phi^{(n)}\hat{\phi})e^{-j(nKs(\vec{\rho}) + k^{(0)}\rho)} \label{eq:J_mod} \end{equation} Note that the spatial derivative of the phase in (\ref{eq:J_mod}) gives the $n$-indexed complex wave vector: \begin{equation} \vec{k}^{(n)}(\vec{\rho}) = \vec{\beta}^{(n)}(\vec{\rho}) - j\delta\vec{\alpha}(\vec{\rho}) = \nabla_{\vec{\rho}}(k^{(0)}\rho + nKs(\vec{\rho})) \end{equation} where $\delta\vec{\alpha}$ is the leakage vector, representing the energy converted from the surface wave into the leaky wave, and $\vec{\beta}^{(n)}$ is the propagation vector corresponding to the $n$th Floquet harmonic. Given that $Ks(\vec{\rho})$ is assumed to be real, the leakage vector is independent of $n$, so we can write: \begin{equation} \begin{split} \vec{\beta}^{(n)}(\vec{\rho})= Re\{\vec{k}^{(n)}(\vec{\rho})\} = Re\{\nabla_{\vec{\rho}}(k^{(0)}\rho+nKs(\vec{\rho}))\}=\\ \vec{\beta}^{(0)}_{um} + \delta \vec{\beta} (\vec{\rho}) + n\nabla_{\vec{\rho}}Ks(\vec{\rho})\\ n = 0, \pm1, \pm2, ... \end{split} \label{eq:beta_n} \end{equation} \begin{equation} \delta \vec{\alpha}(\vec{\rho})=-Im\{\vec{k}^{(n)}(\vec{\rho})\} = -Im\{\nabla_{\vec{\rho}}(k^{(0)}\rho)\} \label{eq:alpha_n} \end{equation} In (\ref{eq:beta_n}), $\vec{\beta}_{um}^{(0)}$ is the propagation vector of the unmodulated impedance (i.e. $m_\rho(\vec{\rho})=m_\phi(\vec{\rho})=0$). In this case the wave is confined to the surface and the wave number is purely real.
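To make the structure of the modulated reactance more concrete, the short sketch below evaluates (\ref{eq:X_rho_rho})-(\ref{eq:X_phi_phi}) on a polar grid for a vortex phasing $\Phi_\rho=\Phi_\phi=l\phi$ and a purely radial modulation phase $Ks(\vec{\rho})=\beta_{sw}\rho$; all numerical values are illustrative placeholders, since the actual $Ks$, $\Phi_{\rho,\phi}$ and $m_{\rho,\phi}$ are derived from the aperture field later in the paper.
\begin{verbatim}
import numpy as np

# Placeholder modulation parameters (the real ones follow from the AFE synthesis)
eta0 = 376.73                                # free-space impedance [ohm]
X_rho, X_phi = 0.7 * eta0, 0.7 * eta0        # average reactances
m_rho, m_phi = 0.3, 0.2                      # modulation indices
l = 2                                        # vortex index, Phi_rho = Phi_phi = l*phi
beta_sw = 1.2 * (2 * np.pi * 18e9 / 3e8)     # assumed surface-wave phase constant [1/m]

rho = np.linspace(0.01, 0.12, 400)           # radial samples [m]
phi = np.linspace(0.0, 2 * np.pi, 360)       # azimuthal samples [rad]
R, P = np.meshgrid(rho, phi, indexing="ij")

Ks, Phi = beta_sw * R, l * P                 # fast and slow parts of the modulation phase
X_rr = X_rho * (1 + m_rho * np.cos(Ks + Phi))    # X_rho_rho
X_rp = X_rho * m_phi * np.cos(Ks + Phi)          # X_rho_phi
X_pp = X_phi * (1 - m_rho * np.cos(Ks + Phi))    # X_phi_phi

# Local radial period p = 2*pi / (dKs/drho); here Ks is linear in rho
print(f"local modulation period: {2 * np.pi / beta_sw * 1e3:.2f} mm")
print(f"X_rho_rho range: {X_rr.min():.1f} .. {X_rr.max():.1f} ohm")
\end{verbatim}
With the reactance pattern fixed in this form, the remaining task of the analysis is to quantify how strongly the modulation perturbs the unmodulated surface wave, that is, to evaluate $\delta\vec{\beta}$ and $\delta\vec{\alpha}$.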
The vectors $\delta \vec{\beta}$ and $\delta \vec{\alpha}$ are the small deviations with respect to $\vec{\beta}_{um}^{(0)}$ when the modulation is applied. If the parameters $m_\rho$ and $m_\phi$ have small values ($|m_{\rho,\phi}|\leq1$), we can neglect $\delta \vec{\beta}$ in the calculation of the propagation vector \cite{oliner_1959, patel_2011}. To gain comprehensive insight into the radiation mechanism, the parameters $\vec{\beta}_{um}^{(0)}$ and $\delta \vec{\alpha}$ must be accurately determined. For this purpose, the electric field in (\ref{eq:Et}) must be expanded in terms of Floquet waves and an eigenvalue problem must be solved. By considering a sufficiently large number of Floquet modes, the desired accuracy can be achieved. As an alternative, closed-form expressions for $\vec{\beta}_{um}^{(0)}$ and $\delta\vec{\alpha}$ are derived in \cite{minatti_2016}, which are sufficiently accurate for our purpose. In this paper, the latter approach is used to determine the complex wave vector. Applying the transverse resonance technique to TM-like modes yields \begin{equation} \vec{\beta}_{um}^{(0)}=\hat{\rho}k\sqrt{1+(\frac{X_s}{\eta_0})^2} \end{equation} where $X_s$ is obtained from the following nonlinear equation: \begin{equation} X_s = X_\rho[1-\frac{X_s \epsilon_r \cot(kh\sqrt{\epsilon_r - 1 - (X_s/\eta_0)^2})}{\eta_0 \sqrt{\epsilon_r - 1 - (X_s/\eta_0)^2}}] \end{equation} For $\delta \vec{\alpha}$ we have: \begin{equation} \delta \vec{\alpha}(\vec{\rho})\approx\hat{\rho}\frac{-Re\{(\hat{\rho}-\frac{\chi_{\phi\rho}^*}{\chi_{\phi\phi}^*}\hat{\phi}).\bar{\bar{z}}^{(-1)\dagger}.(\hat{\rho}-\frac{\chi_{\phi\rho}}{\chi_{\phi\phi}}\hat{\phi})\}}{\frac{X_\rho^2\beta_{um}^{(0)}k}{\eta_0}[\frac{2\epsilon_r}{h(\epsilon_r k^2 - (\beta_{um}^{(0)})^2)^2} + \frac{1}{((\beta_{um}^{(0)})^2 - k^2)^{3/2}}]} \label{eq:aalpha} \end{equation} where $\chi_{\phi\rho}$ and $\chi_{\phi\phi}$ are the components of the tensor $\bar{\bar{\chi}}$ (see Appendix A). Both $\bar{\bar{\chi}}$ and $\bar{\bar{z}}$ depend on the dyadic Green's function of the grounded slab. The process of calculating the quantities in (\ref{eq:aalpha}) is described in Appendix A. In Fig.\ref{fig:delta_alpha}, the color map of $\delta\alpha/k$ is plotted in terms of the modulation index (for the case $m_\rho = m_\phi$) and $X_s$. Rogers RO4003 with $\epsilon_r = 3.55$ and $h=1.524$ mm is chosen as the dielectric host medium. Observe that for $X_s < 0.8\eta_0$ ($\eta_0$ being the free-space impedance), the leakage constant increases with the modulation index, varying from 0 to about $0.015k$. For $X_s>0.8\eta_0$ the leakage constant peaks at a certain modulation index and decreases thereafter. \begin{figure} \includegraphics[width = 0.37\textwidth]{delta_alpha.png} \caption{Variation of $\delta\alpha$ versus modulation index and $X_s$.} \label{fig:delta_alpha} \end{figure} \subsection{Estimation of $Ks(\vec{\rho})$ and $\Phi_{\rho,\phi}(\vec{\rho})$} In this section, we develop the theoretical procedure for the synthesis of a vortex wave with arbitrary topological charge using leaky-wave holograms. A common method for designing leaky-wave holograms is to use the aperture field estimation technique \cite{casaletti_2017, minatti_2015}.
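The transcendental equation above for $X_s$ is straightforward to solve numerically. As a minimal sketch (with the substrate parameters quoted above and an illustrative choice of $X_\rho$; the bracketing interval is an assumption checked only by a sign change of the residual), $X_s$ and $\beta_{um}^{(0)}$ can be obtained as follows.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

eta0 = 376.73                        # free-space impedance [ohm]
k = 2 * np.pi * 18e9 / 299792458.0   # free-space wavenumber [1/m]
eps_r, h = 3.55, 1.524e-3            # Rogers RO4003 parameters from the text
X_rho = 0.7 * eta0                   # illustrative average reactance [ohm]

def residual(Xs):
    # Xs - X_rho * [1 - Xs*eps_r*cot(k h sqrt(...)) / (eta0*sqrt(...))]
    arg = np.sqrt(eps_r - 1.0 - (Xs / eta0) ** 2)
    cot = np.cos(k * h * arg) / np.sin(k * h * arg)
    return Xs - X_rho * (1.0 - Xs * eps_r * cot / (eta0 * arg))

# Bracket chosen so that the residual changes sign (assumption for this example)
Xs = brentq(residual, 1.0, 0.95 * eta0 * np.sqrt(eps_r - 1.0))
beta_um = k * np.sqrt(1.0 + (Xs / eta0) ** 2)
print(f"X_s         = {Xs:.1f} ohm ({Xs / eta0:.2f} eta0)")
print(f"beta_um^(0) = {beta_um:.1f} rad/m ({beta_um / k:.3f} k)")
\end{verbatim}
With $\beta_{um}^{(0)}$ and $\delta\vec{\alpha}$ available, the remaining unknowns are the modulation functions themselves, which are obtained with the aperture field estimation technique.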
In this method, the relationship between the surface reactance tensor and the aperture field vector ($\vec{E}_{ap}$) can be expressed as \cite{minatti_2015}: \begin{equation} \bar{\bar{X}}(\vec{\rho}).\hat{\rho} = X_0[\hat{\rho} + 2Im\{\frac{\vec{E_{ap}}}{-J_{sw}H_1^{(2)}(k^{(0)}\rho)}\}] \label{eq:Xs_Ea} \end{equation} where $J_{sw}H_1^{(2)}(k^{(0)}\rho)$ represents the surface wave excited by the monopole launcher. Using the asymptotic expansion of the Hankel function in (\ref{eq:Xs_Ea}) yields \begin{equation} \begin{split} \bar{\bar{X}}(\vec{\rho}).\hat{\rho}=X_0[\hat{\rho} + \frac{\sqrt{2\pi\rho\sqrt{(\beta_{um}^{(0)} + \delta\beta (\vec{\rho}))^2 + \delta\alpha^2(\vec{\rho})}}}{J_{sw}}\times\\ Im \{\vec{E}_{ap}je^{j(k^{(0)}\rho-\zeta_0)}\}] \end{split} \label{eq:Xs_asym} \end{equation} The aperture field vector can be expanded in terms of its x and y components: \begin{equation} \vec{E}_{ap}(\vec{\rho})=E_{ax}(\vec{\rho})\hat{x}+E_{ay}(\vec{\rho})\hat{y} \label{eq:Eap} \end{equation} To estimate the aperture field vector, three points must be considered: 1- the distribution of the field magnitude, which controls the shape of the radiated beam; 2- the phase of the field, which determines the direction and vorticity state of the beam; 3- the relationship between the x and y components of the aperture field, which determines the polarization state of the radiated wave. In order to have an object wave propagating in the ($\theta_0$, $\phi_0$) direction and carrying an OAM mode with arbitrary topological charge, we can define the components of the aperture field vector as \begin{equation} \begin{split} E_{ax, ay}(\vec{\rho})=M_{x, y}(\vec{\rho})\frac{J_{sw}}{\sqrt{2\pi\rho\sqrt{(\beta_{um}^{(0)} + \delta\beta (\vec{\rho}))^2 + \delta\alpha^2(\vec{\rho})}}}\times\\ e^{-\delta\alpha (\vec{\rho})\rho} e^{-j(k\rho \sin \theta_0 \cos (\phi - \phi_0)+ l\phi)} \end{split} \label{eq:Eaxay} \end{equation} Note that $M_x$ and $M_y$ may depend on $\vec{\rho}$. To obtain a helical wave front, the term $l\phi$ is added to the phase of the aperture field. In (\ref{eq:Eaxay}), for simplicity, the field amplitude is chosen so that only $M_x$ and $M_y$ appear in the impedance equation (\ref{eq:Xs_asym}).
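Before carrying out the substitution into (\ref{eq:Xs_asym}), it is instructive to check numerically that the phase prescribed in (\ref{eq:Eaxay}) winds by $2\pi l$ in magnitude around the aperture center, as required for a helical wave front. The minimal sketch below (with placeholder values of $l$, $\theta_0$ and $\phi_0$) accumulates the phase along a closed circle of radius $\rho_0$.
\begin{verbatim}
import numpy as np

k = 2 * np.pi * 18e9 / 299792458.0      # free-space wavenumber [1/m]
l, theta0, phi0 = 2, 0.0, 0.0           # placeholder design choices
rho0 = 0.05                             # radius of the test circle [m]

phi = np.linspace(0.0, 2.0 * np.pi, 2001)
# Phase of the aperture field defined above (slowly varying amplitude omitted)
phase = -(k * rho0 * np.sin(theta0) * np.cos(phi - phi0) + l * phi)

# Total phase change over one turn, divided by 2*pi, gives the winding number
winding = (phase[-1] - phase[0]) / (2.0 * np.pi)
print(f"phase winding around the center: {winding:+.3f} (expected -l = {-l})")
\end{verbatim}
For $\theta_0\neq 0$ the beam-tilt term is single valued over a full turn, so the winding is still set by $l$ alone.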
Substituting (\ref{eq:Eaxay}) in (\ref{eq:Xs_asym}) and comparing the resulting impedance function with (\ref{eq:X_rho_rho})-(\ref{eq:X_phi_phi}) yields \begin{equation} Ks(\vec{\rho})=(\beta^{(0)}_{um} + \delta \beta(\vec{\rho}))\rho - k \sin \theta_0 \cos (\phi-\phi_0)\rho \end{equation} \begin{equation} \Phi_\rho(\vec{\rho})=l\phi - arg\{M_x(\vec{\rho})\cos \phi + M_y(\vec{\rho})\sin \phi\} \label{eq:Phi_rho} \end{equation} \begin{equation} \Phi_\phi(\vec{\rho})=l\phi - arg\{-M_x(\vec{\rho})\sin \phi + M_y(\vec{\rho})\cos \phi\} \label{eq:Phi_phi} \end{equation} \begin{equation} m_\rho(\vec{\rho}) = |M_x(\vec{\rho})\cos \phi + M_y(\vec{\rho})\sin \phi| \label{eq:m_rho} \end{equation} \begin{equation} m_\phi(\vec{\rho})=|-M_x(\vec{\rho})\sin \phi + M_y(\vec{\rho})\cos \phi| \label{eq:m_phi} \end{equation} Using (\ref{eq:beta_n}), the propagation constant of the $n=-1$ (radiating) mode can be obtained as \begin{equation} \begin{split} \vec{\beta}^{(-1)}(\vec{\rho})=(\beta_{um}^{(0)}+\delta \beta(\vec{\rho}))\hat{\rho} - \nabla_{\vec{\rho}}Ks(\vec{\rho}) = \\ k\sin \theta_0 \cos (\phi - \phi_0)\hat{\rho} - k\sin \theta_0 \sin (\phi - \phi_0)\hat{\phi} \end{split} \end{equation} Therefore, $\beta^{(-1)}$ always satisfies the radiation condition, that is \begin{equation} |\vec{\beta}^{(-1)}(\vec{\rho})|=k\sin \theta_0<k \end{equation} \section{Calculation of far-zone field} In the analysis of radiating apertures, the Fourier transform and the stationary phase method can be applied to determine the far-zone fields. Thus, for the $\theta$ and $\phi$ components of the far-zone field we have \cite{balanis_2016}: \begin{equation} F_\theta(\theta, \phi) = \tilde{E}_{ax} \cos \phi + \tilde{E}_{ay} \sin \phi \label{eq:FF_theta} \end{equation} \begin{equation} F_\phi (\theta, \phi) = \cos \theta (-\tilde{E}_{ax} \sin \phi + \tilde{E}_{ay} \cos \phi) \label{eq:FF_phi} \end{equation} where $\tilde{E}_{ax}$ and $\tilde{E}_{ay}$ are the Fourier transforms of $E_{ax}$ and $E_{ay}$, respectively. Hence, in the cylindrical coordinate system we can write \begin{equation} \tilde{E}_{ax} = \iint_{ap}E_{ax}e^{jk\rho'\sin \theta \cos (\phi - \phi')}\rho'd\rho' d\phi' \end{equation} \begin{equation} \tilde{E}_{ay} = \iint_{ap}E_{ay}e^{jk\rho'\sin \theta \cos (\phi - \phi')}\rho'd\rho' d\phi' \end{equation} \section{Design and analysis of hologram} To validate the analytical approach of the previous sections, we present two examples of leaky-wave holograms generating OAM modes. In the first example, an isotropic unit cell is used to realize a hologram with a broadside beam. In the second, an anisotropic hologram with circular polarization is designed. Without loss of generality, the operating frequency of the antennas is selected to be 18 GHz. Rogers RO4003 with dielectric constant 3.55, loss tangent $\tan\delta = 0.0027$ and thickness 1.524 mm is used as the substrate. A square patch printed on the grounded dielectric is chosen for the realization of the isotropic (scalar) impedance. The unit-cell period is chosen as 2.8 mm, which is equal to $\lambda/6$ at the operating frequency. Fig.\ref{fig:square_patch} shows the proposed unit cell and the reactance curve versus the square width (namely $a$), obtained with the eigenmode solver of CST Microwave Studio \cite{cst}.
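Once such a reactance curve has been tabulated, mapping the required local reactance of each cell to a patch width reduces to a one-dimensional interpolation. The sketch below illustrates this step; the sample points of the curve are hypothetical placeholders standing in for the eigenmode-solver data of Fig.\ref{fig:square_patch}.
\begin{verbatim}
import numpy as np

# Hypothetical samples of the scalar reactance curve X(a):
# patch width a [mm] -> surface reactance [ohm] (placeholders for the CST sweep)
a_samples = np.array([1.6, 1.8, 2.0, 2.2, 2.4, 2.6])
X_samples = np.array([180.0, 205.0, 235.0, 270.0, 310.0, 355.0])

def patch_width_for_reactance(X_target):
    """Invert the tabulated curve by linear interpolation (monotonic samples assumed)."""
    return np.interp(X_target, X_samples, a_samples)

X_target = 300.0   # example target reactance of one modulated cell [ohm]
print(f"required reactance {X_target:.0f} ohm -> patch width "
      f"{patch_width_for_reactance(X_target):.2f} mm")
\end{verbatim}
Repeating this lookup cell by cell converts the continuous reactance pattern into a printable patch layout.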
\begin{figure} \includegraphics[width = 0.3\textwidth]{reactance_iso.png} \caption{Variation of the scalar surface reactance for the square patch.} \label{fig:square_patch} \end{figure} To implement the anisotropic (tensorial) impedances, we utilize asymmetric rectangular patches as the constituent pixels. Fig.\ref{fig:UC_aniso_a} shows the proposed anisotropic unit cell. The width of the patch (namely $b$) and its orientation angle (namely $\psi$) are used to tune the impedance. To extract the impedance tensor, the reactances $X_1$ and $X_2$ are first calculated for the surface wave propagating along the x and y directions, respectively. In both cases, the parameter $\psi$ is kept equal to zero and the parameter $b$ is varied from 0.2 to 2.4 mm. The reactance curves for $X_1$ and $X_2$ are plotted in Figs \ref{fig:UC_aniso_b} and \ref{fig:UC_aniso_c}, respectively. For the retrieval of the impedance tensor, the method proposed in \cite{werner_2014} is used, which can be expressed as \begin{equation} \bar{\bar{Z}}_s=j\bar{\bar{X}}_s=jR^T(\psi)\bar{\bar{X}}(b)R(\psi) \end{equation} where \begin{equation} \bar{\bar{X}}(b) = \begin{pmatrix} X_1(b) & 0 \\ 0 & X_2(b) \end{pmatrix} \end{equation} and $R$ is the rotation matrix: \begin{equation} R(\psi) = \begin{pmatrix} \cos \psi & -\sin \psi\\ \sin \psi & \cos \psi \end{pmatrix} \end{equation} Note that $T$ indicates the transpose operator. \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width =\textwidth]{UC_aniso.png} \caption{} \label{fig:UC_aniso_a} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \includegraphics[width =\textwidth]{X1.png} \caption{} \label{fig:UC_aniso_b} \end{subfigure} \begin{subfigure}[b]{0.235\textwidth} \includegraphics[width =\textwidth]{X2.png} \caption{} \label{fig:UC_aniso_c} \end{subfigure} \caption{(a) Asymmetric rectangular patch for the realization of the tensorial impedance. (b) Impedance curve for the wave propagating along the x axis ($X_1$). (c) Impedance curve for the wave propagating along the y axis ($X_2$).} \label{fig:UC_aniso} \end{figure} Fig.\ref{fig:psi_b} shows the impedance maps of the proposed anisotropic unit cell versus the width and orientation angle of the patch. \begin{figure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width =\textwidth]{X_rho_rho.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width =\textwidth]{X_rho_phi.png} \caption{} \end{subfigure} \caption{Design maps for the asymmetric rectangular patch in Fig.\ref{fig:UC_aniso_a} in terms of width and orientation angle. (a) $X_{\rho\rho}$. (b) $X_{\rho\phi}$.} \label{fig:psi_b} \end{figure} \subsection{Aperture field for isotropic holograms} For an isotropic surface reactance, the aperture field in (\ref{eq:Xs_Ea}) must have only a $\rho$ component. In this case, the x and y components of the aperture field cannot be defined independently and are related as follows: \begin{equation} E_{ax}(\vec{\rho})\sin \phi = E_{ay}(\vec{\rho})\cos \phi \end{equation} The isotropic condition thus constrains the definition of the field components; due to this limitation, polarization control of leaky modes on isotropic holograms is difficult.
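Returning briefly to the tensor retrieval described above, the rotation-based reconstruction $\bar{\bar{Z}}_s=jR^T(\psi)\bar{\bar{X}}(b)R(\psi)$ is straightforward to implement; a minimal sketch with placeholder values of $X_1$ and $X_2$ is given below.
\begin{verbatim}
import numpy as np

def impedance_tensor(X1, X2, psi):
    """Z_s = j R^T(psi) diag(X1, X2) R(psi): sheet impedance of the rotated patch."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return 1j * R.T @ np.diag([X1, X2]) @ R

# Placeholder reactances (ohm) of the rectangular patch for waves along x and y
Z = impedance_tensor(X1=250.0, X2=180.0, psi=np.deg2rad(30.0))
print("X_rho_rho =", round(Z[0, 0].imag, 1), "ohm")
print("X_rho_phi =", round(Z[0, 1].imag, 1), "ohm")
\end{verbatim}
Sweeping $b$ and $\psi$ in this way generates design maps of the kind shown in Fig.\ref{fig:psi_b}. With these retrieval tools in place, we return to the constraint that an isotropic reactance imposes on the aperture field.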
According to the above considerations, the x and y components of the field are selected as: \begin{equation} \begin{split} E_{ax}(\vec{\rho})=M\cos \phi\frac{J_{sw}}{\sqrt{2\pi\rho\sqrt{(\beta_{um}^{(0)} + \delta\beta (\vec{\rho}))^2 + \delta\alpha^2(\vec{\rho})}}}\times\\ e^{-\delta\alpha (\vec{\rho})\rho} e^{-j(k\rho \sin \theta_0 \cos (\phi - \phi_0)+ l\phi)} \end{split} \label{eq:Mx_iso} \end{equation} \begin{equation} \begin{split} E_{ay}(\vec{\rho})=M\sin \phi\frac{J_{sw}}{\sqrt{2\pi\rho\sqrt{(\beta_{um}^{(0)} + \delta\beta (\vec{\rho}))^2 + \delta\alpha^2(\vec{\rho})}}}\times\\ e^{-\delta\alpha (\vec{\rho})\rho} e^{-j(k\rho \sin \theta_0 \cos (\phi - \phi_0)+ l\phi)} \end{split} \label{eq:My_iso} \end{equation} In (\ref{eq:Mx_iso}) and (\ref{eq:My_iso}), the modulation indices $M_x$ and $M_y$ are selected as $M \cos \phi$ and $M \sin \phi$, respectively. Substituting (\ref{eq:Mx_iso}) and (\ref{eq:My_iso}) in (\ref{eq:m_rho}) and (\ref{eq:m_phi}) gives: \begin{equation} m_\rho (\vec{\rho}) = M \end{equation} \begin{equation} m_\phi (\vec{\rho}) = 0 \end{equation} which indicates that $X_{\rho\phi}$ is zero. Also, \begin{equation} \Phi_\rho (\vec{\rho}) = \Phi_\phi(\vec{\rho}) = l \phi \end{equation} Without loss of generality, if the hologram is designed to radiate the vortex wave at broadside (namely, $\theta_0 = 0^\circ$ and $\phi_0 = 0^\circ$), then substituting (\ref{eq:Mx_iso}) and (\ref{eq:My_iso}) in (\ref{eq:FF_theta}) and (\ref{eq:FF_phi}) gives the far-zone fields as: \begin{equation} \begin{split} F_\theta(\theta, \phi) = \iint_{ap}\frac{ M \cos (\phi-\phi') J_{sw}}{\sqrt{2\pi\rho \sqrt{(\beta_{um}^{(0)} + \delta \beta(\vec{\rho}))^2 + \delta \alpha^2 (\vec{\rho})}}}\times\\ e^{-\delta \alpha (\vec{\rho})} e^{jk\rho' \sin \theta \cos (\phi - \phi') } e^{-jl\phi'}\rho'd\rho'd\phi' \end{split} \label{eq:int_iso1} \end{equation} \begin{equation} \begin{split} F_\phi(\theta, \phi) = \iint_{ap}\frac{ -M \sin (\phi-\phi') \cos \theta J_{sw}}{\sqrt{2\pi\rho \sqrt{(\beta_{um}^{(0)} + \delta \beta(\vec{\rho}))^2 + \delta \alpha^2 (\vec{\rho})}}}\times\\ e^{-\delta \alpha (\vec{\rho})} e^{jk\rho' \sin \theta \cos (\phi - \phi') } e^{-jl\phi'}\rho'd\rho'd\phi' \end{split} \label{eq:int_iso2} \end{equation} Setting $u = \phi - \phi'$ and applying the substitution rule in (\ref{eq:int_iso1}) and (\ref{eq:int_iso2}), we can conclude that the phases of $F_\theta$ and $F_\phi$ have an angular dependence of the form $l\phi$. This means that the number of twists of the resulting wave front around the singularity point is $m = l$, which is the topological charge of the radiated wave \cite{vaity_2015}. To investigate the validity of the proposed method, an isotropic hologram generating a vortex beam with topological charge $m = 2$ has been designed. The object wave is directed at broadside ($\theta_0 = 0$). Fig.\ref{fig:imp_iso} shows the scalar surface impedance pattern ($X_{\rho\rho}$) and the realized model for $M=0.5$ and $X_{s}=0.7\eta_0$. The parameters $M$ and $X_{s}$ are selected such that the calculated impedances can be realized easily. The overall dimensions of the hologram ($14\lambda\times 14\lambda$) are large enough for the surface wave to be effectively converted into the leaky wave.
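For completeness, the scalar reactance map of this design can be generated directly from the synthesis relations above. The sketch below (neglecting $\delta\beta$, taking the average reactance equal to the quoted $X_s$ as an approximation for this illustration, and using $\beta_{um}^{(0)}=k\sqrt{1+(X_s/\eta_0)^2}$) produces the two-armed spiral modulation underlying Fig.\ref{fig:imp_iso}.
\begin{verbatim}
import numpy as np

eta0 = 376.73
lam = 299792458.0 / 18e9                    # free-space wavelength at 18 GHz [m]
k = 2 * np.pi / lam

Xs, M, l = 0.7 * eta0, 0.5, 2               # design values quoted in the text
beta = k * np.sqrt(1.0 + (Xs / eta0) ** 2)  # unmodulated phase constant (delta_beta neglected)

# Sample X_rho_rho over the 14*lambda x 14*lambda aperture
x = np.linspace(-7 * lam, 7 * lam, 701)
X, Y = np.meshgrid(x, x)
rho, phi = np.hypot(X, Y), np.arctan2(Y, X)

# Broadside design: Ks = beta*rho and Phi_rho = l*phi give a spiral reactance pattern
X_rr = Xs * (1.0 + M * np.cos(beta * rho + l * phi))

print(f"beta/k = {beta / k:.3f}, radial modulation period = {2 * np.pi / beta * 1e3:.2f} mm")
print(f"X_rho_rho range: {X_rr.min():.1f} .. {X_rr.max():.1f} ohm")
\end{verbatim}
Each sample of $X_{\rho\rho}$ is then mapped to a square-patch width through the tabulated reactance curve to obtain the printed layout.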
\begin{figure} \includegraphics[width = 0.44\textwidth]{imp_iso.png} \caption{Scalar impedance pattern for generating an OAM wave with topological charge $m = 2$.} \label{fig:imp_iso} \end{figure} \begin{figure*} \begin{subfigure}{0.24\textwidth} \includegraphics[width =\textwidth]{ISO_RHCP_PHI_0.png} \caption{} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width =\textwidth]{ISO_LHCP_PHI_0.png} \caption{} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width =\textwidth]{ISO_RHCP_PHI_90.png} \caption{} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width =\textwidth]{ISO_LHCP_PHI_90.png} \caption{} \end{subfigure} \caption{Comparison between the analytical and simulation results of the isotropic hologram. (a) RHCP component at $\phi=0^\circ$. (b) LHCP component at $\phi=0^\circ$. (c) RHCP component at $\phi=90^\circ$. (d) LHCP component at $\phi=90^\circ$. } \label{fig:iso_patterns} \end{figure*} In Fig.\ref{fig:iso_patterns} the radiation patterns obtained from the analytical model and the full-wave simulation are compared and found to be in good agreement. Note that although there is no degree of freedom to control the polarization of the aperture field, the main beam is circularly polarized, which is caused by the spiral variation of the scalar surface impedance (see Fig.\ref{fig:imp_iso}) \cite{minatti_2011}. The results in Fig.\ref{fig:iso_patterns} show that the LHCP component (cross-pol.) of the far-field pattern is approximately 8 dB lower than the RHCP component (co-pol.). Fig.\ref{fig:phase_iso} shows the phase distributions of $F_\theta$ and $F_\phi$. Observe that the number of twists of the wave front is 2 and the phase singularity occurs at broadside. Note that the phase of $F_\theta$ has the desired form only within a very small spatial angle. This issue is resolved by the anisotropic structure. \begin{figure} \begin{subfigure}[b]{0.237\textwidth} \includegraphics[width =\textwidth]{Phase_f_phi_iso.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.237\textwidth} \includegraphics[width =\textwidth]{Phase_f_theta_iso.png} \caption{} \end{subfigure} \caption{Phase distributions of the far-zone components (a) $F_\phi$ and (b) $F_\theta$ for the isotropic hologram in Fig.\ref{fig:imp_iso}. } \label{fig:phase_iso} \end{figure} \subsection{Aperture field for anisotropic holograms} Unlike isotropic structures, in anisotropic holograms the components of the field vector can be defined independently. This enables us to control the polarization of the radiated field. For example, to achieve circular polarization at ($\theta_0$, $\phi_0$) the horizontal and vertical components of the far-zone field must satisfy the following condition: \begin{equation} F_\phi(\theta_0, \phi_0) = e^{\pm j \frac{\pi}{2}} F_\theta(\theta_0, \phi_0) \label{eq:F_phi_circ} \end{equation} where the signs + and - correspond to left-hand and right-hand polarization, respectively. Equation (\ref{eq:F_phi_circ}) imposes the following condition on the aperture field components: \begin{equation} E_{ay}(\vec{\rho}) = E_{ax}(\vec{\rho})\frac{\cos \theta_0 \sin \phi_0 + e^{\pm j \pi/2} \cos \phi_0}{\cos \theta_0 \cos \phi_0 - e^{\pm j \pi/2} \sin \phi_0} \end{equation} If $E_{ax}$ is defined according to Eq.
(\ref{eq:Eaxay}), the modulation indices must satisfy the following relationship: \begin{equation} M_y(\vec{\rho})=M_x(\vec{\rho})\frac{\cos \theta_0 \sin \phi_0 + e^{\pm j \pi/2} \cos \phi_0}{\cos \theta_0 \cos \phi_0 - e^{\pm j \pi/2} \sin \phi_0} \end{equation} For example, to obtain right-hand circular polarization (RHCP) at broadside ($\theta_0=0^\circ$ and $\phi_0 = 0^\circ$), the following condition must be met: \begin{equation} M_y(\vec{\rho})=-jM_x(\vec{\rho})=-jM \label{eq:My_circ} \end{equation} where $M$ is assumed to be constant. Using (\ref{eq:FF_theta}) and (\ref{eq:FF_phi}) we can estimate the far-zone components as \begin{equation} \begin{split} F_\theta(\theta, \phi) = \iint_{ap}\frac{ M J_{sw} e^{-j\phi}}{\sqrt{2\pi\rho \sqrt{(\beta_{um}^{(0)} + \delta \beta(\vec{\rho}))^2 + \delta \alpha^2 (\vec{\rho})}}}\times\\ e^{-\delta \alpha (\vec{\rho})} e^{jk\rho' \sin \theta \cos (\phi - \phi') } e^{-jl\phi'}\rho'd\rho'd\phi' \end{split} \label{eq:int_aniso1} \end{equation} \begin{equation} \begin{split} F_\phi(\theta, \phi) = \iint_{ap}\frac{-jMJ_{sw} e^{-j\phi} \cos \theta}{\sqrt{2\pi\rho \sqrt{(\beta_{um}^{(0)} + \delta \beta(\vec{\rho}))^2 + \delta \alpha^2 (\vec{\rho})}}}\times\\ e^{-\delta \alpha (\vec{\rho})} e^{jk\rho' \sin \theta \cos (\phi - \phi')} e^{-jl\phi'}\rho'd\rho'd\phi' \end{split} \label{eq:int_aniso2} \end{equation} By substituting $u=\phi - \phi'$ in (\ref{eq:int_aniso1}) and (\ref{eq:int_aniso2}) and simplifying, it can be observed that the phases of $F_\theta$ and $F_\phi$ possess an angular dependence of the form $(l+1)\phi$, which represents the topological charge of the beam. Substituting (\ref{eq:My_circ}) in (\ref{eq:Phi_rho}) and (\ref{eq:Phi_phi}) results in \begin{equation} \Phi_\rho(\vec{\rho}) = (l+1)\phi \end{equation} \begin{equation} \Phi_\phi(\vec{\rho}) = (l+1)\phi + \frac{\pi}{2} \end{equation} Fig.\ref{fig:imp_aniso} shows the synthesized surface impedance distribution for generating an OAM beam with right-hand circular polarization and topological charge $m = 2$. The comparison between the RHCP components of the theoretical and simulated patterns is given in Figs \ref{fig:pattern_comp_aniso_phi0} and \ref{fig:pattern_comp_aniso_phi90}. The agreement between the results confirms the accuracy of the theoretical model. Fig.\ref{fig:F_aniso} shows the phases of $F_\theta$ and $F_\phi$. It can be observed that both $F_\theta$ and $F_\phi$ have the phase singularity at broadside with topological charge $m = 2$. \begin{figure*}[t] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width = \textwidth]{imp_aniso.png} \caption{} \label{fig:imp_aniso_a} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width =\textwidth]{Pattern_comp_phi0.png} \caption{} \label{fig:pattern_comp_aniso_phi0} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width =\textwidth]{Pattern_comp_phi90.png} \caption{} \label{fig:pattern_comp_aniso_phi90} \end{subfigure} \caption{(a) Anisotropic surface impedance distribution and realized model. (b) RHCP component of the pattern at $\phi = 0^\circ$. (c) RHCP component of the pattern at $\phi = 90^\circ$. } \label{fig:imp_aniso} \end{figure*} \begin{figure*} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width =\textwidth]{Phase_F_phi_aniso.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width =\textwidth]{Phase_F_theta_aniso.png} \caption{} \end{subfigure} \caption{Phase distributions of the far-zone components. (a) $F_\phi$.
(b) $F_\theta$, for the anisotropic hologram in Fig.\ref{fig:imp_aniso}. } \label{fig:F_aniso} \end{figure*} \begin{table*}[t!] \caption{Computational performance of the theoretical model and the full-wave simulation for the analysis of the proposed holograms in Figs \ref{fig:imp_iso} and \ref{fig:imp_aniso}.} \label{tab:table1} \begin{tabular}{ccccc} \hline\hline Solver & Analysis Method & Processor & Allocated RAM & Time per Process \\ \hline CST Microwave Studio & FIT & \begin{tabular}[c]{@{}c@{}}Core i7 6850K\\ (3.6 GHz per processor)\end{tabular} & 64 GB & \begin{tabular}[c]{@{}c@{}}Isotropic: $\approx$ 8 h\\ Anisotropic: $\approx$ 10 h\end{tabular} \\ Our Code (in Matlab) & AFW & \begin{tabular}[c]{@{}c@{}}Core i7 6500U\\ (2.5 GHz per processor)\end{tabular} & 12 GB & \begin{tabular}[c]{@{}c@{}}Isotropic: $\approx$ 150 sec\\ Anisotropic: $\approx$ 140 sec\end{tabular} \\ \hline \end{tabular} \end{table*} \section{Computational performance of proposed method} To clarify the advantage of the theoretical model over full-wave simulation, in this section the computational performances of the two methods are compared. Generally, leaky-wave holograms can be considered traveling-wave radiators. However, in order to obtain an appropriate radiation performance, they must be relatively large: the dimensions required to achieve high-gain beams are about $10\lambda$ to $20\lambda$. On the other hand, the dimensions of the unit cells realizing the hologram must be small enough ($<\lambda/5$) so that the surface can act as a homogenized impedance boundary condition. Therefore, the full-wave simulator applies dense meshing to resolve each inclusion, which greatly increases the simulation time. Table \ref{tab:table1} compares the CPU time of the full-wave and theoretical methods. The RAM allocated to the theoretical model is 12 GB, and the processor dedicated to it is a Core i7 6500U (Ultra Low Voltage) with two physical cores. The processor used in the full-wave simulations is a Core i7 6850K with 6 physical cores, and the dedicated RAM is 64 GB. The results in Table \ref{tab:table1} show that the analysis time of the proposed method is significantly shorter than that of the full-wave simulations. Observe also that the proposed method can easily be run on conventional computers, on which the full-wave simulation is impractical. \section{Conclusion} In this work, an analytical method based on the adiabatic Floquet-wave expansion and the aperture field estimation technique is proposed for the implementation of vortex beam radiators. To determine the aperture field, three points must be considered: 1- the amplitude of the field, which specifies the shape of the pattern; 2- the phase of the aperture field, which determines the direction and topological charge of the OAM wave; 3- the relationship between the x and y components of the aperture field, which controls the polarization of the radiated beam. According to the above considerations, two structures based on isotropic and anisotropic unit cells are proposed. For both structures, the aperture fields are estimated to achieve the desired patterns, polarizations and topological charges and are synthesized with the relevant unit cells. It has been shown that the anisotropic unit cell has the distinct advantage of polarization control over the isotropic one. The advantages of the analytical method are its high speed and low resource requirements.
Furthermore, it has very good accuracy, which makes it suitable for analyzing and designing holograms with very large dimensions. \section*{Appendix A} This appendix describes the calculation procedure of $\delta \alpha$ in detail. In (\ref{eq:aalpha}) the tensor $\bar{\bar{\chi}}$ is obtained from the following equation \cite{minatti_2016}: \begin{equation} \begin{split} \bar{\bar{\chi}}=\chi_{\rho\rho}\hat{\rho}\hat{\rho} + \chi_{\rho\phi}\hat{\rho}\hat{\phi} + \chi_{\phi\rho}\hat{\phi}\hat{\rho} + \chi_{\phi\phi}\hat{\phi}\hat{\phi}\\ = -(\bar{\bar{X}}^{(0)}+ j\bar{\bar{Z}}_{GF}^{(0)}) - j(\bar{\bar{z}}^{(+1)} + \bar{\bar{z}}^{(-1)}) \end{split} \label{eq:chi} \end{equation} Note that $\bar{\bar{Z}}_{GF}^{(0)}$ is the tensorial Green's function of the grounded dielectric slab evaluated at $\vec{k}=\beta_{um}^{(0)}\hat{\rho}$ and can be obtained as follows: \begin{equation} \bar{\bar{Z}}_{GF}^{(0)} = -j[\bar{\bar{X}}_0^{-1}(\beta_{um}^{(0)})+ \bar{\bar{X}}_G^{-1}(\beta_{um}^{(0)})]^{-1} \label{eq:ZZ_GF} \end{equation} where \begin{equation} \bar{\bar{X}}_0(\beta_{um}^{(0)}) = -\frac{\eta_0\sqrt{(\beta_{um}^{(0)})^2 - k^2}}{k}\hat{\rho}\hat{\rho} + \frac{\eta_0 k}{\sqrt{(\beta_{um}^{(0)})^2 - k^2}}\hat{\phi}\hat{\phi} \label{eq:XX_0} \end{equation} \begin{equation} \begin{split} \bar{\bar{X}}_G(\beta_{um}^{(0)}) = \\ \frac{\eta_0 \sqrt{\epsilon_r k^2 - (\beta_{um}^{(0)})^2}}{k \epsilon_r} \tan (h\sqrt{\epsilon_r k^2 - (\beta_{um}^{(0)})^2})\hat{\rho}\hat{\rho} \\ + \frac{\eta_0 k}{\sqrt{\epsilon_r k^2 - (\beta_{um}^{(0)})^2}}\tan (h\sqrt{\epsilon_r k^2 - (\beta_{um}^{(0)})^2})\hat{\phi}\hat{\phi} \end{split} \label{eq:XX_G} \end{equation} In (\ref{eq:chi}) the tensor $\bar{\bar{X}}^{(0)}$ denotes the average reactance in the case $m_\rho = m_\phi = 0$ and is defined as: \begin{equation} \bar{\bar{X}}^{(0)}=X_\rho \hat{\rho}\hat{\rho} + X_\phi \hat{\phi}\hat{\phi} \end{equation} Also, $\bar{\bar{z}}^{(+1)}$ and $\bar{\bar{z}}^{(-1)}$ have the following forms: \begin{equation} \bar{\bar{z}}^{(+1)} = \bar{\bar{X}}^{(-1)}.[\bar{\bar{Z}}_{GF}^{(-1)} - j\bar{\bar{X}}^{(0)}]. \bar{\bar{X}}^{(+1)} \label{eq:z_plus} \end{equation} \begin{equation} \bar{\bar{z}}^{(-1)} = \bar{\bar{X}}^{(+1)}.[\bar{\bar{Z}}_{GF}^{(+1)} - j\bar{\bar{X}}^{(0)}]. \bar{\bar{X}}^{(-1)} \label{eq:z_minus} \end{equation} where \begin{equation} \begin{split} \bar{\bar{X}}^{(+1)} = \frac{1}{2}e^{-jKs(\vec{\rho})}[m_\rho(\vec{\rho})e^{-j\Phi_\rho(\vec{\rho})}(X_\rho \hat{\rho}\hat{\rho} - X_\phi \hat{\phi}\hat{\phi}) + \\ m_\phi(\vec{\rho}) e^{-j\Phi_\phi(\vec{\rho})}X_\rho (\hat{\rho}\hat{\phi} + \hat{\phi}\hat{\rho})] \end{split} \end{equation} \begin{equation} \begin{split} \bar{\bar{X}}^{(-1)} = \frac{1}{2}e^{+jKs(\vec{\rho})}[m_\rho(\vec{\rho})e^{+j\Phi_\rho(\vec{\rho})}(X_\rho \hat{\rho}\hat{\rho} - X_\phi \hat{\phi}\hat{\phi}) + \\ m_\phi(\vec{\rho}) e^{+j\Phi_\phi(\vec{\rho})}X_{\rho} (\hat{\rho}\hat{\phi} + \hat{\phi}\hat{\rho})] \end{split} \end{equation} Note that in (\ref{eq:z_plus}) and (\ref{eq:z_minus}), the tensors $\bar{\bar{Z}}_{GF}^{(+1)}$ and $\bar{\bar{Z}}_{GF}^{(-1)}$ are the Green's functions of the dielectric slab evaluated at $\vec{k} = \beta^{(+1)}\hat{\rho}$ and $\vec{k} = \beta^{(-1)}\hat{\rho}$, respectively. Using (\ref{eq:ZZ_GF})-(\ref{eq:XX_G}) and replacing $\beta_{um}^{(0)}$ with $\beta^{(+1)}$ and $\beta^{(-1)}$, we can determine the tensors $\bar{\bar{Z}}_{GF}^{(+1)}$ and $\bar{\bar{Z}}_{GF}^{(-1)}$. \FloatBarrier \bibliographystyle{unsrt}